Please use this identifier to cite or link to this item: http://hdl.handle.net/10263/7254
Full metadata record
DC Field: Value [Language]
dc.contributor.author: Dangi, Mohan Kumar
dc.date.accessioned: 2022-01-28T08:29:21Z
dc.date.available: 2022-01-28T08:29:21Z
dc.date.issued: 2019-07
dc.identifier.citation: 34p. [en_US]
dc.identifier.uri: http://hdl.handle.net/10263/7254
dc.description: Dissertation under the supervision of Prof. Nikhil R. Pal [en_US]
dc.description.abstract: Deep neural networks (DNNs), like shallow networks, are usually black boxes because of their nested non-linear structure: they provide no information about what exactly makes them arrive at their predictions or decisions. This lack of transparency can be a major drawback, particularly in critical applications such as medicine, the judiciary, and defense. Moreover, almost all DNNs make a decision even when the test input does not belong to any of the classes on which they were trained, or when it lies far from the training data used to design the system; in other words, such systems cannot say “don't know” when they should. In this work, we develop systems that can provide some explanation for their decisions and can also indicate when they should not make a decision. For this, we design DNNs for classification that can classify an object and provide some explanation. For instance, if the network classifies an image as a bird of the kind Albatross, it should also provide explanatory notes on why it classified the image as an instance of Albatross; the explanation could be pieces of information that are distinguishing characteristics of the Albatross. The system also detects situations in which the input is not from any of the trained classes. To realize all this, we use four networks in an integrated manner: a pre-trained convolutional neural network (used because we do not have adequate computing power to train one from scratch), two multilayer perceptron networks, and a self-organizing (feature) map. Each of these networks serves a distinct purpose. [en_US]
dc.language.iso: en [en_US]
dc.publisher: Indian Statistical Institute, Kolkata [en_US]
dc.relation.ispartofseries: Dissertation;2019:8
dc.subject: Explainable Artificial Intelligence (XAI) [en_US]
dc.subject: Multilayer Perceptron (MLP) [en_US]
dc.title: Can I Make My Deep Network Somewhat Explainable? [en_US]
dc.type: Other [en_US]
Appears in Collections:Dissertations - M Tech (CS)
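The abstract describes a system that abstains (“don't know”) when a test input lies far from the training data. As a minimal sketch of that rejection idea only, not the thesis's actual four-network architecture (pre-trained CNN, two MLPs, and a self-organizing map), a nearest-prototype classifier with a distance threshold could look as follows; the function names, the prototype representation, and the threshold value are all illustrative assumptions:

```python
import math

def train_prototypes(data):
    """Compute one prototype (mean feature vector) per class
    from labelled (vector, label) training pairs."""
    sums, counts = {}, {}
    for x, label in data:
        if label not in sums:
            sums[label] = [0.0] * len(x)
            counts[label] = 0
        sums[label] = [s + xi for s, xi in zip(sums[label], x)]
        counts[label] += 1
    return {label: [s / counts[label] for s in sums[label]]
            for label in sums}

def classify_with_rejection(x, prototypes, threshold):
    """Return the label of the nearest prototype, or "don't know"
    if x is farther than `threshold` from every prototype,
    i.e. far from all the training data the model has seen."""
    best_label, best_dist = None, float("inf")
    for label, proto in prototypes.items():
        d = math.dist(x, proto)  # Euclidean distance (Python 3.8+)
        if d < best_dist:
            best_label, best_dist = label, d
    if best_dist > threshold:
        return "don't know"
    return best_label
```

In the dissertation, the analogous role is played by a self-organizing map over learned features rather than per-class means, but the decision rule, reject when the input is far from everything seen during training, is the same in spirit.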

Files in This Item:
File: Mohan_Thesis_Copy3.pdf (5.92 MB, Adobe PDF)


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.