High-dimensional data samples tend to contain highly correlated features and are fragile to various types of noise and outliers in practical applications. For subspace clustering models, conventional norm-based distance measurements have proven inadequate for resisting feature contamination by undetermined types of noise. As a consequence, the learned low-dimensional representation is not always reliable and discriminative, which inevitably impedes clustering performance. To remedy these deficiencies, we propose a robust subspace clustering model via auto-weighted and dual-structural representation (AWDSR) learning. Specifically, a feature-weighted reconstruction term is first introduced into the self-representation framework to automatically reinforce important features by measuring the self-representational reconstruction loss. As such, noisy features in data samples can be adaptively assigned relatively small weights to reduce the residual elements of the reconstruction term. Moreover, an adaptive dual-structural constraint is simultaneously imposed to guarantee a discriminative block-diagonal representation. An efficient alternating optimization method with guaranteed convergence and relatively low complexity is then developed to solve the challenging objective function. Finally, we carry out extensive experiments comparing the AWDSR model with state-of-the-art methods on one synthetic dataset and six real-world databases. Experimental results demonstrate the effectiveness and superiority of the proposed approach in terms of accuracy and normalized mutual information.
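The exact AWDSR objective and update rules are given in the full article; as a rough illustration of the general scheme the abstract describes, the sketch below shows a generic auto-weighted self-expressive reconstruction (features with large reconstruction residuals receive smaller weights) followed by standard spectral post-processing. The function names, the ridge-regularized Z-step, and the inverse-residual weight update are assumptions for illustration only, not the authors' formulation.

```python
import numpy as np
from sklearn.cluster import SpectralClustering


def auto_weighted_self_representation(X, lam=0.1, n_iters=50):
    """Minimal sketch (not the AWDSR model): alternately update a
    self-representation matrix Z (X ~ X Z, ridge-regularized) and
    per-feature weights that down-weight noisy features."""
    d, n = X.shape                 # d features, n samples
    w = np.ones(d) / d             # initial uniform feature weights (assumed)
    for _ in range(n_iters):
        Xw = w[:, None] * X        # feature-weighted data
        # Z-step: closed-form solution of min_Z ||Xw - Xw Z||_F^2 + lam ||Z||_F^2
        G = Xw.T @ Xw
        Z = np.linalg.solve(G + lam * np.eye(n), G)
        # w-step (assumed inverse-residual rule): features with larger
        # self-representation residual get smaller normalized weights
        R = X - X @ Z
        res = np.sum(R ** 2, axis=1) + 1e-12
        w = (1.0 / res) / np.sum(1.0 / res)
    return Z, w


def cluster_from_representation(Z, n_clusters):
    """Standard post-processing: symmetric affinity |Z| + |Z|^T, then spectral clustering."""
    A = np.abs(Z) + np.abs(Z).T
    return SpectralClustering(n_clusters=n_clusters,
                              affinity="precomputed").fit_predict(A)
```

In this generic setting, the weight update and the Z-step each have closed forms, which is what makes an alternating scheme with low per-iteration cost plausible; the actual AWDSR solver and its convergence analysis are detailed in the paper.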
Keywords: machine learning, data modeling, databases, matrices, performance modeling, statistical modeling