DSpace Repository

Robust Inference using the Extended Bregman Divergence and Optimal Tuning Parameter Selection


dc.contributor.author Basak, Sancharee
dc.date.accessioned 2022-09-20T09:01:00Z
dc.date.available 2022-09-20T09:01:00Z
dc.date.issued 2022-07
dc.identifier.citation 327p. en_US
dc.identifier.uri http://hdl.handle.net/10263/7346
dc.description Thesis under the supervision of Prof. Ayanendranath Basu en_US
dc.description.abstract Inference based on the minimization of statistical distances has proved to be a very useful tool in the field of robust inference. One commonly used class of such distances is the class of Bregman divergences. Several important divergence families, e.g., the Likelihood Disparity (LD), the Density Power Divergence (DPD) family and the B-Exponential Divergence (BED) family, can be represented as subfamilies of the class of Bregman divergences. Yet several other important divergences, e.g., the Power Divergence family and the S-Divergence family, cannot be represented in the Bregman form. We expand the structure of the Bregman divergence so that these divergences can also be accommodated within the Bregman form under the expanded definition. We do this by using powers of densities, rather than the densities themselves, as the arguments of the generating function; this leads to the generalized class of extended Bregman divergences, a direct modification of an existing and popular tool of the minimum distance approach used extensively in this literature. Using this extension, we explore its advantages in estimation by constructing a new divergence family, namely the Generalized S-Bregman (GSB) family, and we also explore its contribution to hypothesis testing. In spite of such modifications, however, we may still fail to obtain the ‘best’ results because of another pressing issue – the choice of the optimal tuning parameter(s). An inappropriate selection can lead to seriously misleading inference. The emphasis in present times is on finding an ‘optimal’ data-based tuning parameter which generates an estimator representing the best compromise between robustness and efficiency for the data at hand.
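The divergence structure described in the abstract can be sketched as follows. The first two displays (the Bregman divergence and the DPD as a subfamily) are standard; the third is only one natural reading of "powers of densities as arguments" – the exact form of the extension is as defined in the thesis.

```latex
\[
D_B(g,f) \;=\; \int \Bigl[\, B(g) - B(f) - (g - f)\,B'(f) \,\Bigr]\,dx ,
\]
where $B$ is convex, $g$ is the true density and $f$ the model density.
The DPD with tuning parameter $\alpha>0$ is recovered, up to the scale
$1/\alpha$, from the choice $B(t)=t^{1+\alpha}$:
\[
d_\alpha(g,f) \;=\; \int \Bigl[\, f^{1+\alpha}
  - \Bigl(1+\tfrac{1}{\alpha}\Bigr)\, g f^{\alpha}
  + \tfrac{1}{\alpha}\, g^{1+\alpha} \Bigr]\,dx .
\]
Replacing the arguments $g$ and $f$ by powers $g^{c}$ and $f^{c}$, $c>0$,
gives a divergence of the extended Bregman type:
\[
D_B^{(c)}(g,f) \;=\; \int \Bigl[\, B(g^{c}) - B(f^{c})
  - \bigl(g^{c} - f^{c}\bigr)\,B'(f^{c}) \,\Bigr]\,dx .
\]
```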
Selecting this tuning parameter “optimally” is a problem of substantial practical interest, which we also address in the present work, using the DPD as the basic illustrative tool. We refine the approaches to optimal tuning parameter selection of Warwick and Jones (2005) and of Hong and Kim (2001), and propose a modified algorithm, namely the Iterated Warwick and Jones (IWJ) algorithm, which yields highly robust estimates with reasonable efficiency while removing the dependence on the pilot estimator to a great extent. Several real-life data examples demonstrate the success of the proposed algorithm. The method can potentially be applied to any robust estimation method that depends on the choice of tuning parameter(s). en_US
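The iterated-pilot idea behind the IWJ algorithm can be illustrated with a toy sketch – not the thesis's exact procedure. Below, a grid-search minimum DPD estimator for a normal location model is combined with a Warwick–Jones-style estimated-MSE criterion (squared distance from the pilot plus a bootstrap stand-in for the asymptotic variance term), minimized over a grid of α values; the selected estimate is then fed back as the next pilot. All function names, the grids, and the bootstrap variance are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Contaminated sample: 90% N(0, 1) plus 10% gross outliers at 10.
x = np.concatenate([rng.normal(0.0, 1.0, 90), np.full(10, 10.0)])

THETA_GRID = np.linspace(-1.0, 12.0, 131)   # candidate location values
ALPHA_GRID = [0.1, 0.25, 0.5, 0.75, 1.0]    # candidate tuning parameters

def dpd_objective(theta, x, alpha, sigma=1.0):
    # Empirical DPD objective for the N(theta, sigma^2) location model,
    # alpha > 0:  ∫ f^{1+α} dx  -  (1 + 1/α) * (1/n) Σ f(x_i)^α
    f = np.exp(-0.5 * ((x - theta) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    # Gaussian closed form: ∫ f^{1+α} dx = (2π σ²)^{-α/2} / sqrt(1+α)
    integral = (2 * np.pi * sigma ** 2) ** (-alpha / 2) / np.sqrt(1 + alpha)
    return integral - (1 + 1 / alpha) * np.mean(f ** alpha)

def mdpde(x, alpha):
    # Minimum DPD estimate of the location, by grid search.
    vals = [dpd_objective(t, x, alpha) for t in THETA_GRID]
    return THETA_GRID[int(np.argmin(vals))]

def boot_var(x, alpha, B=30):
    # Crude bootstrap stand-in for the asymptotic variance term.
    ests = [mdpde(rng.choice(x, size=x.size, replace=True), alpha)
            for _ in range(B)]
    return float(np.var(ests))

def wj_select(x, pilot):
    # One Warwick-Jones pass: minimise estimated MSE(α) = bias² + variance,
    # with the squared bias measured against the current pilot estimate.
    mses = {a: (mdpde(x, a) - pilot) ** 2 + boot_var(x, a)
            for a in ALPHA_GRID}
    return min(mses, key=mses.get)

# Iterated pilot: feed the selected estimate back in as the next pilot.
pilot = float(np.median(x))   # robust starting pilot
est = pilot
for _ in range(5):
    alpha = wj_select(x, pilot)
    est = float(mdpde(x, alpha))
    if abs(est - pilot) < 1e-6:   # pilot has stabilised
        break
    pilot = est

print(f"selected alpha = {alpha}, location estimate = {est:.3f}")
```

With heavy, distant contamination the selected estimate stays near the true centre 0 while the sample mean is pulled toward 1; the iteration removes the sensitivity to the initial pilot, which is the point of the IWJ modification.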
dc.language.iso en en_US
dc.publisher Indian Statistical Institute, Kolkata en_US
dc.relation.ispartofseries ISI Ph. D Thesis;TH559
dc.subject Bregman Divergence en_US
dc.subject Density Power Divergence en_US
dc.subject Generalised S-Divergence en_US
dc.subject Robust Inference en_US
dc.title Robust Inference using the Extended Bregman Divergence and Optimal Tuning Parameter Selection en_US
dc.type Thesis en_US



This item appears in the following Collection(s)

  • Theses
    (ISI approved PhD theses)

