Can I Make My Deep Network Somewhat Explainable?


dc.contributor.author Dangi, Mohan Kumar
dc.date.accessioned 2022-01-28T08:29:21Z
dc.date.available 2022-01-28T08:29:21Z
dc.date.issued 2019-07
dc.identifier.citation 34p. en_US
dc.identifier.uri http://hdl.handle.net/10263/7254
dc.description Dissertation under the supervision of Prof. Nikhil R. Pal en_US
dc.description.abstract Deep neural networks (DNNs), as well as shallow networks, are usually black boxes due to their nested non-linear structure. In other words, they provide no information about what exactly makes them arrive at their predictions/decisions. This lack of transparency can be a major drawback, particularly in critical applications such as medicine, the judiciary, and defense. Apart from this, almost all DNNs make a decision even when the test input is not from one of the classes for which they were trained, or even when the test point is far from the training data used to design the system. In other words, such systems cannot say “don't know” when they should. In this work, we develop systems that can provide some explanations for their decisions and can also indicate when they should not make a decision. For this, we design DNNs for classification that can classify an object and provide us with some explanation. For instance, if the network classifies an image, say, as a bird of the kind Albatross, it should provide some explanatory notes on why it has classified the image as an instance of Albatross. The explanation could be pieces of information that are distinguishing characteristics of Albatross. The system also detects situations when the inputs are not from the trained classes. To realize all this, we use four networks in an integrated manner: a pre-trained convolutional neural network (we use a pre-trained one as we do not have adequate computing power to train from scratch), two multilayer perceptron networks, and a self-organizing (feature) map. Each of these networks serves a distinctive purpose. en_US
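The abstract describes a four-network pipeline: a frozen pre-trained CNN as a feature extractor, one MLP for class prediction, one MLP for explanatory attributes, and a self-organizing map whose prototypes flag inputs far from the training data. The sketch below is only an illustration of how such a pipeline might be wired, not the dissertation's actual design: the ResNet-18 backbone, all layer sizes, the class/attribute counts, the rejection threshold, and the placeholder SOM codebook are assumptions.

```python
# Illustrative sketch of the four-network idea from the abstract.
# All names, sizes, and thresholds here are assumptions, not the author's code.
import torch
import torch.nn as nn
from torchvision import models

# 1) Pre-trained CNN used as a frozen feature extractor (requires torchvision >= 0.13).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Identity()          # expose the 512-d penultimate features
backbone.eval()
for p in backbone.parameters():
    p.requires_grad = False

# 2) MLP #1: class prediction (10 classes assumed for illustration).
classifier = nn.Sequential(nn.Linear(512, 128), nn.ReLU(), nn.Linear(128, 10))

# 3) MLP #2: explanatory attributes, e.g. "long narrow wings", "hooked bill"
#    (20 binary attributes assumed).
explainer = nn.Sequential(nn.Linear(512, 128), nn.ReLU(),
                          nn.Linear(128, 20), nn.Sigmoid())

# 4) SOM stand-in: a real self-organizing map trained on the training features
#    would supply its codebook vectors; here a placeholder tensor stands in.
prototypes = torch.randn(100, 512)   # placeholder for trained SOM prototypes
THRESHOLD = 5.0                      # assumed distance threshold for rejection

def predict_with_explanation(image_batch):
    """Return (class, active_attributes) per image, or "don't know" if the
    feature vector is far from every SOM prototype."""
    with torch.no_grad():
        feats = backbone(image_batch)                        # (N, 512)
        dists = torch.cdist(feats, prototypes).min(dim=1).values
        results = []
        for i, d in enumerate(dists):
            if d > THRESHOLD:                # far from training data: abstain
                results.append(("don't know", None))
            else:
                cls = classifier(feats[i:i+1]).argmax(dim=1).item()
                attrs = (explainer(feats[i:i+1])[0] > 0.5).nonzero().flatten().tolist()
                results.append((cls, attrs))
        return results
```

Under this (assumed) design, the explanatory MLP's active attributes serve as the "explanatory notes" the abstract mentions, and the prototype-distance test supplies the "don't know" behavior for inputs outside the trained classes.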
dc.language.iso en en_US
dc.publisher Indian Statistical Institute, Kolkata en_US
dc.relation.ispartofseries Dissertation;2019:8
dc.subject Explainable Artificial Intelligence (XAI) en_US
dc.subject Multilayer Perceptron (MLP) en_US
dc.title Can I Make My Deep Network Somewhat Explainable? en_US
dc.type Other en_US

