Please use this identifier to cite or link to this item: http://hdl.handle.net/10263/7358
Full metadata record
DC Field    Value    Language
dc.contributor.author    Chakraborty, Debasrita    -
dc.date.accessioned    2023-04-11T16:10:02Z    -
dc.date.available    2023-04-11T16:10:02Z    -
dc.date.issued    2022-12    -
dc.identifier.citation    244p.    en_US
dc.identifier.uri    http://hdl.handle.net/10263/7358    -
dc.description    Thesis is under the supervision of Prof. Ashish Ghosh    en_US
dc.description.abstract    Feature extraction is a technique through which existing features are transformed into a different (usually smaller) dimension. Conceptually, this means that the data is represented in a different aspect than the original one. This kind of data representation is among the key machine learning principles and often helps in finding interesting relationships in the data. Detecting the structure and automatically identifying patterns in datasets is a valuable benefit, as it facilitates understanding of the process described by the data. Hence, the effectiveness of machine learning algorithms vastly depends on the types of features they rely on, owing to the multi-dimensionality of the information that feeds the model. Depending on how the data is represented, different models may view the problem in different ways and try to solve it using their own techniques. In recent years, deep neural networks have improved results on various pattern recognition and classification challenges owing to their inherent capability of learning from raw data. Deep architectures have also demonstrated their efficiency in capturing latent characteristics of the data representation. Even though deep neural networks are well capable of handling complex data, the challenges posed by imbalanced datasets, high-dimensional datasets, highly chaotic time-varying datasets, or decentralised datasets remain difficult to handle. Therefore, the main focus is on four such situations involving complex datasets where standard deep neural networks fail. Using an autoencoder and combining it with other techniques has, however, proven to be beneficial under such conditions. In this thesis, the efficacy of autoencoders is argued in several interesting areas. The investigations show that autoencoders are particularly useful for datasets exhibiting imbalance, a lack or absence of labelled samples, or chaotic behaviour. The successive chapters thus look into application cases of the above scenarios and explore how autoencoder-supplemented methods deal with the challenges in those applications. For the first task, a straightforward outlier detection problem is handled. It is seen that autoencoders are well capable of enhancing the performance of outlier detection models. The problem is then extended to another use case where the data exhibits local imbalance, high complexity, and high dimensionality. This is observed for remotely sensed hyperspectral images, where the task is to detect changes between a pair of co-registered bi-temporal images. The tasks cover cases where the label information is partially absent and completely absent, respectively. It is observed that autoencoders are well suited to capture the changing neighborhood information surrounding the same pixel location in the two images. Autoencoders are also examined under conditions where the data is unpredictable, as seen for OHLC (open, high, low, close) stock prices. It is seen that transformations by autoencoders are much more informative than the original feature space, which is why an autoencoder-supplemented prediction model helps make better predictions of future OHLC stock prices. Since the above methods are essentially cases of a centralised data setting, it was also necessary to examine how autoencoders fare with decentralised imbalance. Thus, the efficacy of autoencoders is inspected under federated learning. It is seen that pre-training by autoencoders is particularly useful when the data is imbalanced and can thus be used in situations where the data distribution among the local nodes is non-i.i.d. Since autoencoders are unsupervised feature extractors, they do not incorporate any kind of class information during the training process. The study investigates whether such training of autoencoders leads to a competitive edge in performance.    en_US
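A minimal sketch of the reconstruction-error idea behind the autoencoder-based outlier detection described in the abstract; the layer sizes, training settings, and 95th-percentile threshold below are illustrative assumptions, not details taken from the thesis.

    # Sketch only: a fully connected autoencoder whose per-sample
    # reconstruction error is used as an outlier score.
    import torch
    import torch.nn as nn

    class Autoencoder(nn.Module):
        def __init__(self, n_features: int, n_latent: int = 8):
            super().__init__()
            # Assumed sizes; the thesis does not specify this architecture.
            self.encoder = nn.Sequential(
                nn.Linear(n_features, 32), nn.ReLU(),
                nn.Linear(32, n_latent),
            )
            self.decoder = nn.Sequential(
                nn.Linear(n_latent, 32), nn.ReLU(),
                nn.Linear(32, n_features),
            )

        def forward(self, x):
            return self.decoder(self.encoder(x))

    def outlier_scores(model, x):
        # Mean squared reconstruction error per sample: inliers are
        # reconstructed well, so a high score suggests an outlier.
        with torch.no_grad():
            recon = model(x)
        return ((x - recon) ** 2).mean(dim=1)

    # Usage sketch: train on (mostly inlier) data, then flag the highest scores.
    x = torch.randn(1000, 20)                    # placeholder data
    model = Autoencoder(n_features=20)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(50):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x), x)
        loss.backward()
        opt.step()
    scores = outlier_scores(model, x)
    threshold = scores.quantile(0.95)            # assumed percentile cut-off
    outliers = scores > threshold

Samples the trained autoencoder reconstructs poorly receive high scores; the same encoder can also serve as an unsupervised feature extractor, as the abstract describes.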
dc.language.iso    en    en_US
dc.publisher    Indian Statistical Institute, Kolkata    en_US
dc.relation.ispartofseries    ISI Ph. D Thesis;TH    -
dc.subject    Pattern Recognition    en_US
dc.subject    Autoencoders    en_US
dc.subject    Time-Series Prediction    en_US
dc.subject    Restricted Boltzmann Machine    en_US
dc.title    Feature Extraction using Autoencoders for Various Challenging Tasks of Pattern Recognition    en_US
dc.type    Thesis    en_US
Appears in Collections: Theses

Files in This Item:
File    Description    Size    Format
Thesis-Debashrita Chakraborty-16.1.23.pdf    Thesis    5.33 MB    Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.