Abstract:
Multi-view data clustering exploits the consistency and complementarity of different views to uncover the natural groups present in a data set. While multiple views are expected to provide more information and thereby improve learning performance, they pose their own set of unique challenges. The most important problems in multi-view clustering are the high-dimensional, heterogeneous nature of the different views; the selection of relevant and complementary views while discarding noisy and redundant ones; the prevention of noise propagation from individual views during data integration; and the need to capture the lower-dimensional non-linear geometry of each view.
In this regard, the thesis addresses the problem of multi-view data clustering in the presence of high-dimensional, noisy, and redundant views. To select the appropriate views for data clustering, new quantitative measures are introduced to evaluate the quality of each view. While the relevance measures evaluate the compactness and separability of the clusters within each view, the redundancy measures quantify the amount of information shared between two views. These measures are used to select a set of relevant and non-redundant views during multi-view data integration.
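The sketch below illustrates the general idea of relevance- and redundancy-based view selection; it is not the thesis's actual measures. Relevance is scored here by the silhouette index of a within-view k-means partition, redundancy by the normalized mutual information between the partitions of two views, and the function names and redundancy threshold are hypothetical.

```python
# A minimal sketch, not the thesis's actual measures: relevance scored by the
# silhouette index of a within-view k-means partition, and redundancy by the
# normalized mutual information between the partitions of two views.
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score, normalized_mutual_info_score

def view_relevance(X, n_clusters):
    """Higher score: more compact and better separated clusters within this view."""
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)
    return silhouette_score(X, labels)

def view_redundancy(X1, X2, n_clusters):
    """Higher score: the two views carry largely the same cluster information."""
    labels1 = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X1)
    labels2 = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X2)
    return normalized_mutual_info_score(labels1, labels2)

def select_views(views, n_clusters, redundancy_threshold=0.8):
    """Greedily keep the most relevant views that are non-redundant with those already kept."""
    order = sorted(range(len(views)),
                   key=lambda i: view_relevance(views[i], n_clusters),
                   reverse=True)
    selected = []
    for i in order:
        if all(view_redundancy(views[i], views[j], n_clusters) < redundancy_threshold
               for j in selected):
            selected.append(i)
    return selected
```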
The “high-dimension, low-sample-size” nature of the different views makes the feature space geometrically sparse and the clustering computationally expensive. The thesis addresses these challenges by performing the clustering in low-rank joint subspaces, extracted by feature-space-, graph-, and manifold-based approaches. In the feature-space-based approach, the problem of incrementally updating the relevant eigenspaces is addressed for multi-view data sets. This formulation makes the extraction of the joint subspace computationally less expensive than principal component analysis. The graph-based approaches, on the other hand, inherently handle the heterogeneity of the different views by modelling each view with a separate similarity graph. To filter out the background noise embedded in each view, a novel concept of the approximate graph Laplacian is introduced, which captures the de-noised relevant information using the most informative eigenpairs of the graph Laplacian.
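A minimal sketch of an eigenpair-based Laplacian approximation is given below, assuming a Gaussian-kernel similarity graph per view and a rank-r reconstruction of the normalized Laplacian from its r smallest eigenpairs; the function names and the choice of r are illustrative and not the thesis's exact construction.

```python
# A minimal sketch, assuming a Gaussian-kernel similarity graph per view: the
# normalized graph Laplacian is reconstructed from only its r most informative
# (smallest-eigenvalue) eigenpairs, discarding the eigenpairs that mostly encode
# background noise. The function names and the choice of r are illustrative.
import numpy as np
from scipy.linalg import eigh
from scipy.spatial.distance import cdist

def normalized_laplacian(X, sigma=1.0):
    """Symmetric normalized Laplacian L = I - D^{-1/2} W D^{-1/2} of a single view X."""
    W = np.exp(-cdist(X, X, "sqeuclidean") / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)                                  # no self-similarity
    d_inv_sqrt = 1.0 / np.sqrt(W.sum(axis=1) + 1e-12)
    return np.eye(X.shape[0]) - (d_inv_sqrt[:, None] * W) * d_inv_sqrt[None, :]

def approximate_laplacian(L, r):
    """Rank-r approximation built from the r smallest eigenpairs (the cluster-revealing part)."""
    eigvals, eigvecs = eigh(L)                                # eigenvalues in ascending order
    return eigvecs[:, :r] @ np.diag(eigvals[:r]) @ eigvecs[:, :r].T
```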
To utilize the underlying non-linear geometry of the different views, the graph-based approach is judiciously integrated with manifold optimization techniques. Optimization over the Stiefel and k-means manifolds captures the non-linearity and orthogonality of the cluster indicator subspaces. Finally, the problem of simultaneously optimizing the graph connectivity and the clustering subspaces is addressed by exploiting the geometry- and structure-preserving properties of the Grassmannian and symmetric positive definite manifolds.
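As a rough illustration of optimization over the Stiefel manifold, the sketch below applies Riemannian gradient descent with a QR retraction to the spectral relaxation that minimizes trace(U^T L U) subject to U^T U = I; the objective, step size, and iteration count are illustrative assumptions, not the thesis's exact algorithm.

```python
# A minimal sketch, assuming Riemannian gradient descent on the Stiefel manifold
# St(n, k) = {U : U^T U = I} for the spectral relaxation min trace(U^T L U);
# the step size, iteration count, and QR retraction are illustrative choices.
import numpy as np

def stiefel_gradient_descent(L, k, step=0.1, n_iter=200, seed=0):
    n = L.shape[0]
    rng = np.random.default_rng(seed)
    U, _ = np.linalg.qr(rng.standard_normal((n, k)))   # random point on the manifold
    for _ in range(n_iter):
        G = 2.0 * L @ U                                 # Euclidean gradient of trace(U^T L U)
        rgrad = G - U @ (0.5 * (U.T @ G + G.T @ U))     # project onto the tangent space at U
        Q, R = np.linalg.qr(U - step * rgrad)           # QR retraction back onto the manifold
        signs = np.sign(np.diag(R))
        signs[signs == 0] = 1.0                         # resolve the sign ambiguity of QR
        U = Q * signs
    return U                                            # orthonormal cluster indicator subspace
```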