Please use this identifier to cite or link to this item: http://hdl.handle.net/10263/7346
Full metadata record
DC Field: Value (Language)
dc.contributor.author: Basak, Sancharee
dc.date.accessioned: 2022-09-20T09:01:00Z
dc.date.available: 2022-09-20T09:01:00Z
dc.date.issued: 2022-07
dc.identifier.citation: 327 p. (en_US)
dc.identifier.uri: http://hdl.handle.net/10263/7346
dc.description: Thesis is under the supervision of Prof. Ayanendranath Basu (en_US)
dc.description.abstract: The inference procedure based on the minimization of statistical distances has already proved to be a very useful tool in the field of robust inference. One such commonly used class of divergences is the Bregman divergence. Several important divergence families, e.g., the Likelihood Disparity (LD), the Density Power Divergence (DPD) family and the B-Exponential Divergence (BED) family, can be represented as subfamilies of the class of Bregman divergences. Yet there are several other important divergences, e.g., the Power Divergence family and the S-divergence family, which cannot be represented in the Bregman form. We expand the structure of the Bregman divergence so that the above-mentioned divergences can be accommodated within this expanded definition. We do this by using powers of densities as arguments, rather than the densities themselves; this leads to the generalized class of extended Bregman divergences, a step forward in the modification of the popular tools of the minimum distance approach used extensively in this literature. Using this extension, we then explore its advantages in estimation by constructing a new divergence family, namely the Generalized S-Bregman (GSB) family. Its contribution to hypothesis testing is also explored. In spite of such modifications, however, we are sometimes unable to obtain the ‘best’ results because of another pressing issue: the choice of the optimal tuning parameter(s). Inappropriate selection of this parameter can lead to seriously misleading conclusions. The current emphasis is on finding an ‘optimal’ data-based tuning parameter which generates an estimator representing the best compromise between robustness and efficiency for the data at hand. Selecting this tuning parameter “optimally” is a problem of substantial practical interest, which we also address in the present work. The DPD is used as the basic illustrative tool for this purpose. We refine the tuning parameter selection approaches of Warwick and Jones (2005) and Hong and Kim (2001), and propose a modified algorithm, namely the Iterated Warwick and Jones (IWJ) algorithm, which yields highly robust estimates with reasonable efficiency while largely removing the dependence on the pilot estimator. Several real-life data examples demonstrate the success of the proposed algorithm. The method can potentially be applied to any robust estimation method that depends on the choice of tuning parameter(s). (An illustrative sketch of these divergences and of the tuning parameter selection idea follows the metadata record below.) (en_US)
dc.language.iso: en (en_US)
dc.publisher: Indian Statistical Institute, Kolkata (en_US)
dc.relation.ispartofseries: ISI Ph.D. Thesis; TH559
dc.subject: Bregman Divergence (en_US)
dc.subject: Density Power Divergence (en_US)
dc.subject: Generalised S-Divergence (en_US)
dc.subject: Robust Inference (en_US)
dc.title: Robust Inference using the Extended Bregman Divergence and Optimal Tuning Parameter Selection (en_US)
dc.type: Thesis (en_US)
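
Illustrative note on the divergences named in the abstract. The first two displays are the standard definitions of the Bregman divergence between densities g and f and of the DPD as its subfamily; the third display is only one plausible reading of "powers of densities as arguments" and is an assumption on our part, not necessarily the thesis's exact parametrization.

\[
B_\phi(g,f) \;=\; \int \Big[\phi\big(g(x)\big) - \phi\big(f(x)\big) - \big(g(x)-f(x)\big)\,\phi'\big(f(x)\big)\Big]\,dx ,
\]
with \(\phi\) convex. Taking \(\phi(y) = y^{1+\alpha}/\alpha\) recovers the DPD,
\[
d_\alpha(g,f) \;=\; \int \Big[\, f^{1+\alpha} - \Big(1+\tfrac{1}{\alpha}\Big)\, g\, f^{\alpha} + \tfrac{1}{\alpha}\, g^{1+\alpha} \Big]\,dx , \qquad \alpha > 0 .
\]
A plausible "extended" form using powers of the densities as arguments would be
\[
B_\phi^{(c)}(g,f) \;=\; \int \Big[\phi\big(g(x)^{c}\big) - \phi\big(f(x)^{c}\big) - \big(g(x)^{c}-f(x)^{c}\big)\,\phi'\big(f(x)^{c}\big)\Big]\,dx , \qquad c > 0 ,
\]
which reduces to the ordinary Bregman divergence at c = 1.

The abstract also describes a Warwick and Jones (2005)-type data-based choice of the DPD tuning parameter, iterated so that the pilot estimate is refreshed at each round (the IWJ idea). The Python sketch below is a minimal illustration under stated assumptions: the model is normal, the asymptotic variance term of Warwick and Jones is replaced by a simple bootstrap variance, and all function names are hypothetical placeholders; the thesis's actual IWJ algorithm should be taken from the text itself.

# Illustrative sketch only: a minimal MDPDE for the normal model plus a
# Warwick-Jones-style tuning parameter search, iterated by re-using the
# selected estimate as the next pilot. The bootstrap variance stand-in and
# all function names are assumptions made for this sketch.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def dpd_objective(theta, x, alpha):
    """Empirical DPD objective for N(mu, sigma^2); alpha > 0."""
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)
    f = norm.pdf(x, mu, sigma)
    # Closed form of the integral term of the DPD for the normal density.
    integral = (2.0 * np.pi * sigma**2) ** (-alpha / 2.0) / np.sqrt(1.0 + alpha)
    return integral - (1.0 + 1.0 / alpha) * np.mean(f**alpha)

def mdpde(x, alpha, start):
    """Minimum DPD estimate of (mu, sigma) for a given tuning parameter alpha."""
    res = minimize(dpd_objective, x0=[start[0], np.log(start[1])],
                   args=(x, alpha), method="Nelder-Mead")
    return np.array([res.x[0], np.exp(res.x[1])])

def estimated_mse(x, alpha, pilot, n_boot=50, rng=None):
    """Squared distance from the pilot (bias proxy) + bootstrap variance proxy."""
    rng = np.random.default_rng(rng)
    est = mdpde(x, alpha, start=pilot)
    boots = np.array([mdpde(rng.choice(x, size=len(x), replace=True),
                            alpha, start=est) for _ in range(n_boot)])
    return np.sum((est - pilot) ** 2) + np.sum(boots.var(axis=0)), est

def iterated_wj(x, alphas=np.linspace(0.05, 1.0, 20), max_iter=10, tol=1e-4):
    """Iterate the Warwick-Jones-style selection, refreshing the pilot each round."""
    # Robust starting pilot: median and MAD-based scale.
    pilot = np.array([np.median(x), 1.4826 * np.median(np.abs(x - np.median(x)))])
    for _ in range(max_iter):
        scores = [estimated_mse(x, a, pilot) for a in alphas]
        best = int(np.argmin([s[0] for s in scores]))
        alpha_opt, est = alphas[best], scores[best][1]
        if np.max(np.abs(est - pilot)) < tol:
            break
        pilot = est
    return alpha_opt, est

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = np.concatenate([rng.normal(0, 1, 95), rng.normal(8, 1, 5)])  # 5% outliers
    alpha_opt, theta_hat = iterated_wj(data)
    print("selected alpha:", alpha_opt, "estimate (mu, sigma):", theta_hat)

The design intent mirrors the abstract's description: the selected estimate replaces the pilot at each round, so the final tuning parameter depends far less on the initial pilot choice than a single-pass selection would.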
Appears in Collections: Theses

Files in This Item:
File                                Description    Size       Format
Thesis-sancharee Basak-Sept33.pdf   Thesis         3.73 MB    Adobe PDF
Sanchari.JPG                                       365.82 kB  JPEG


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.