Online Public Access Catalogue (OPAC)
Library, Documentation and Information Science Division

“A research journal serves that narrow borderland which separates the known from the unknown”
- P. C. Mahalanobis



Development of some Neural Network Models for Non-negative Matrix Factorization: Dimensionality Reduction / Prasun Dutta

By: Dutta, Prasun
Material type: Text
Publication details: Kolkata: Indian Statistical Institute, 2025
Description: xxiii, 204 pages, figs, tables
Subject(s):
DDC classification:
  • 23rd 006.32 D979
Online resources:
Contents:
Introduction -- Description of datasets, experimental procedure and evaluators -- Non-negative Matrix Factorization Neural Network (n2MFn2) -- Deep Neural Network for Non-negative Matrix Factorization (DN3MF) -- Multiple Deconstruction Single Reconstruction Deep Neural Network Model for Non-negative Matrix Factorization (MDSR-NMF) -- Input Guided Multiple Deconstruction Single Reconstruction neural network for Non-negative Matrix Factorization (IG-MDSR-NMF) -- Input Guided Multiple Deconstruction Single Reconstruction neural network for Relaxed Non-negative Matrix Factorization (IG-MDSR-RNMF) -- Conclusions and Scope of Further Research -- Tables of p-values for n2MFn2 vs. others depicting classification performances
Production credits:
  • Guided by Prof. Rajat K. De
Dissertation note: Thesis (Ph.D.). Indian Statistical Institute, 2025
Summary: Recent research has been driven by the abundance of data, leading to the development of systems that enhance understanding across various fields. Effective machine learning algorithms are crucial for managing high-dimensional data, with dimension reduction being a key strategy for improving algorithm efficiency and decision-making. Non-negative Matrix Factorization (NMF) stands out as a method that transforms large datasets into interpretable, lower-dimensional forms by decomposing a matrix with non-negative elements into a pair of non-negative factors. This approach addresses the curse of dimensionality by reducing the dimensionality of the data while preserving meaningful information.

Dimension reduction techniques rely on extracting high-quality features from large datasets. Machine learning algorithms offer a solution by learning and optimizing feature representations, which often outperform manually crafted ones. Artificial Neural Networks (ANNs) emulate human brain processing and excel at handling complex and nonlinear data relationships. Deep neural network models learn hierarchical patterns from data without explicit human intervention, making them ideal for large datasets.

The traditional NMF technique employs block coordinate descent to update the factors of the input matrix, whereas we aim for simultaneous updates. Our research attempts to combine the strengths of NMF and neural networks to develop novel architectures that optimize low-dimensional data representation. In this thesis, we introduce five novel neural network architectures for NMF, accompanied by tailored objective functions and learning strategies that enhance the low-rank approximation of the input matrices.

First, n2MFn2, a model based on a shallow neural network architecture, has been developed. An approximation of the input matrix is ensured by the formulation of an appropriate objective function and an adaptive learning scheme; activation functions and weight initialization strategies have also been adjusted to suit the setting. On top of this shallow model, two deep neural network models, named DN3MF and MDSR-NMF, have been designed. To achieve the robustness of the deep neural network framework, the models use a two-stage architecture, viz., pre-training and stacking. To find the closest realization of the conventional NMF technique as well as the closest approximation of the input, a novel neural network architecture has been proposed in MDSR-NMF. Finally, two deep learning models, named IG-MDSR-NMF and IG-MDSR-RNMF, have been developed to imitate a human-centric learning strategy while guaranteeing a distinct pair of factor matrices that yields a better approximation of the input matrix. In IG-MDSR-NMF and IG-MDSR-RNMF, each layer not only receives the hierarchically processed input from the previous layer but also refers to the original data whenever needed to keep the learning path on course. A novel kind of non-negative matrix factorization, known as Relaxed NMF, has been developed for IG-MDSR-RNMF, in which only one factor matrix meets the non-negativity requirement while the other does not. This relaxation allows the model to generate the best possible low-dimensional representation of the input matrix, freed from the constraint of maintaining a pair of non-negative factors.
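As background to the summary, the sketch below (a minimal NumPy illustration; the names X, W, H, r and the function factorize are ours, not the thesis's) contrasts the block-coordinate-descent baseline the thesis departs from, i.e., conventional NMF via Lee and Seung's multiplicative updates, with a relaxed variant in the spirit of the Relaxed NMF idea described above, in which only one factor is kept non-negative (here realized with a semi-NMF-style update). It illustrates the standard techniques only, not the neural network models developed in the thesis.

# Minimal sketch: conventional NMF vs. a relaxed variant where only W
# must stay non-negative. Illustrative baseline only, not the thesis's
# neural network models.
import numpy as np

def factorize(X, r, relaxed=False, n_iter=500, eps=1e-10):
    """Approximate a non-negative X (m x n) as W @ H with rank r.

    relaxed=False: W >= 0 and H >= 0 (conventional NMF).
    relaxed=True:  W >= 0 only; H is unconstrained (semi-NMF style).
    """
    rng = np.random.default_rng(0)
    m, n = X.shape
    W = rng.random((m, r))                 # non-negative initialization
    H = rng.random((r, n))
    pos = lambda A: (np.abs(A) + A) / 2    # elementwise positive part
    neg = lambda A: (np.abs(A) - A) / 2    # elementwise negative part
    for _ in range(n_iter):
        if relaxed:
            # Unconstrained least-squares update for H with W fixed,
            # then a sign-aware multiplicative update keeps W >= 0.
            H = np.linalg.lstsq(W, X, rcond=None)[0]
            A, B = X @ H.T, H @ H.T
            W *= np.sqrt((pos(A) + W @ neg(B)) / (neg(A) + W @ pos(B) + eps))
        else:
            # Classic block-coordinate multiplicative updates:
            # H with W fixed, then W with H fixed.
            H *= (W.T @ X) / (W.T @ W @ H + eps)
            W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Example: rank-5 approximation of a random 100 x 40 non-negative matrix.
X = np.abs(np.random.default_rng(1).standard_normal((100, 40)))
for flag in (False, True):
    W, H = factorize(X, r=5, relaxed=flag)
    print(flag, np.linalg.norm(X - W @ H) / np.linalg.norm(X))

Note that both branches still update one factor at a time; the models developed in the thesis aim instead at updating both factors simultaneously by combining NMF with neural network learning.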
Holdings
Item type: THESIS
Current library: ISI Library, Kolkata
Call number: 006.32 D979
Status: Available
Notes: E-Thesis. Guided by Prof. Rajat K. De
Barcode: TH628
Total holds: 0

Includes bibliography

Library, Documentation and Information Science Division, Indian Statistical Institute, 203 B T Road, Kolkata 700108, INDIA
Phone no. 91-33-2575 2100, Fax no. 91-33-2578 1412, ksatpathy@isical.ac.in