Extracting effective scene features is important for remote sensing image classification. In general, multi-view features contain both consistent and complementary information, and integrating them efficiently helps to enhance classification performance. Although some recent methods achieve promising results, they lack analysis of the inherent relevance among multi-view features. We therefore present a multi-view fusion optimization method based on low-rank tensor decomposition. First, a Laplacian matrix is constructed using K-nearest neighbors, and its eigendecomposition yields a set of low-dimensional features. Second, a third-order tensor is built by stacking the multi-view Laplacian features and is factorized into a sum of rank-one components using the canonical polyadic (CP) decomposition. Finally, an alternating optimization model is constructed by exploiting the relationships among the fibers and slices of the tensor to generate optimal low-dimensional embedding features. Classification experiments are conducted on three remote sensing image data sets: AID, WHU, and UCMerced. The experimental results show that the proposed method outperforms the comparison methods.
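A minimal sketch of the pipeline described above (not the authors' code): a k-NN graph Laplacian is built per view, the per-view Laplacians are stacked into a third-order tensor, and the tensor is factorized into rank-one components with a simple alternating-least-squares CP decomposition. The helper names, the rank, and the choice of an unnormalized Laplacian are illustrative assumptions.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

def khatri_rao(U, V):
    """Column-wise Khatri-Rao product; row (i, j) of the result is U[i] * V[j]."""
    return np.einsum("ir,jr->ijr", U, V).reshape(-1, U.shape[1])

def knn_laplacian(X, k=10):
    """Unnormalized graph Laplacian L = D - W from a symmetrized k-NN graph."""
    W = kneighbors_graph(X, n_neighbors=k, mode="connectivity").toarray()
    W = np.maximum(W, W.T)                       # symmetrize the adjacency
    return np.diag(W.sum(axis=1)) - W

def cp_als(T, rank=10, n_iter=50):
    """Rank-`rank` CP decomposition of a 3rd-order tensor T by alternating least squares."""
    n1, n2, n3 = T.shape
    rng = np.random.default_rng(0)
    A, B, C = (rng.standard_normal((n, rank)) for n in (n1, n2, n3))
    for _ in range(n_iter):
        # each update solves a linear least-squares problem for one factor matrix
        A = T.reshape(n1, -1) @ np.linalg.pinv(khatri_rao(B, C).T)
        B = np.moveaxis(T, 1, 0).reshape(n2, -1) @ np.linalg.pinv(khatri_rao(A, C).T)
        C = np.moveaxis(T, 2, 0).reshape(n3, -1) @ np.linalg.pinv(khatri_rao(A, B).T)
    return A, B, C

# views: list of (n_samples, d_v) feature matrices, one per view (assumed input).
# Each view's Laplacian is stacked along the third mode to form an n x n x V tensor;
# the sample-mode factor then serves as a fused low-dimensional embedding.
# T = np.stack([knn_laplacian(Xv) for Xv in views], axis=2)
# A, B, C = cp_als(T, rank=10)
```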
With the continuous development of mobile positioning technology and smartphones, users can obtain and share their own location information, as well as information about surrounding points of interest (POIs), anytime and anywhere, together with their activity information, thus forming location-based social networks (LBSNs). A large amount of user data is generated in an LBSN. How to quantitatively analyze the effects of context from a manager's perspective, and how to extract hidden user features by analyzing these data, is of significant research value for the intelligent analysis of user characteristics. In this paper, we first propose a multi-dimensional user feature construction method that extracts user features from different influencing factors. Second, the fitness of a user to a feature is analyzed. Third, a unified model is used to characterize this fitness. The method promotes the transformation from user data to user features and helps address the problem of "explosive data but poor knowledge." Experimental verification shows that the method is feasible and can mine multi-dimensional user features.
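A hypothetical sketch of multi-dimensional user feature construction of the kind described above: check-in records are aggregated into temporal, POI-category, and spatial distributions per user and concatenated into one feature vector. The record fields, dimension choices, and normalization are illustrative assumptions, not the paper's definitions.

```python
from collections import defaultdict
import numpy as np

def user_features(checkins, n_categories, grid=10):
    """checkins: iterable of (user_id, hour_of_day, poi_category, lat_cell, lon_cell)."""
    dim = 24 + n_categories + grid * grid
    feats = defaultdict(lambda: np.zeros(dim))
    for user, hour, cat, lat_cell, lon_cell in checkins:
        v = feats[user]
        v[hour] += 1                                              # temporal dimension
        v[24 + cat] += 1                                          # POI-category dimension
        v[24 + n_categories + lat_cell * grid + lon_cell] += 1    # spatial dimension
    # normalize each dimension block so users with many check-ins stay comparable
    for v in feats.values():
        for s, e in [(0, 24), (24, 24 + n_categories), (24 + n_categories, None)]:
            block = v[s:e]
            if block.sum() > 0:
                block /= block.sum()
    return feats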
With the development of feature extraction techniques, an image object can be represented by multiple heterogeneous features from different views that lie in high-dimensional spaces. Multiple features reflect different characteristics of the same object; they contain compatible and complementary information, and integrating them in a specific image processing application can yield better performance. However, facing these multi-view features, most dimensionality reduction methods fail to fully achieve the desired effect. Therefore, how to construct a uniform low-dimensional embedding subspace that exploits useful information from multi-view features remains an important and urgent issue. We propose an innovative fusion dimension reduction method named tensor dispersion-based multi-view feature embedding (TDMvFE). TDMvFE reconstructs a feature subspace for each object from its k nearest neighbors, which preserves the underlying neighborhood structure of the original manifold in the lower-dimensional mapping space. The method fully exploits the channel correlations and spatial complementarities of the multi-view features through a tensor dispersion analysis model. Furthermore, it constructs an optimization model and derives an iterative procedure to generate the unified low-dimensional embedding. Evaluations on image classification and retrieval demonstrate the effectiveness of the proposed method for multi-view feature fusion dimension reduction.
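A minimal, hedged sketch of the neighborhood-preserving step described above: each sample is reconstructed from its k nearest neighbors with locally-linear-embedding-style weights, which is one common way to preserve the underlying neighborhood structure of the original manifold. The tensor dispersion analysis and the alternating optimization themselves are not shown; the function name and regularization constant are assumptions.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def reconstruction_weights(X, k=10, reg=1e-3):
    """Return an (n, n) weight matrix W whose rows sum to 1 over each sample's neighbors."""
    n = X.shape[0]
    nbrs = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nbrs.kneighbors(X)               # idx[:, 0] is the sample itself
    W = np.zeros((n, n))
    for i in range(n):
        neighbors = idx[i, 1:]
        Z = X[neighbors] - X[i]               # center the neighbors on x_i
        G = Z @ Z.T                           # local Gram matrix
        G += reg * np.trace(G) * np.eye(k)    # regularize for numerical stability
        w = np.linalg.solve(G, np.ones(k))
        W[i, neighbors] = w / w.sum()         # affine reconstruction weights
    return W
```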
Multiview learning is an important method that is widely used for feature fusion in image processing and big data analysis. Determining how to integrate compatible and complementary information from multiple views is a crucial and challenging task. We present a multiview feature fusion optimization method for image retrieval based on matrix correlation. The method first extracts four view features (Gist, color histogram, pyramid histogram of oriented gradients, and multitrend structure descriptor) from each image. Then these features are converted into separate graph Laplacian matrices through local embedding. Third, a multiview alternating optimization process based on matrix correlation statistics adaptively combines the different view feature maps into a unified, low-dimensional embedding. Finally, the fused feature is used for image retrieval experiments. Extensive experimental results show that the proposed algorithm is effective.
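A minimal sketch (illustrative, not the authors' implementation) of fusing several per-view graph Laplacians into one low-dimensional embedding by alternating between (a) solving for the embedding given the view weights and (b) updating the view weights given the embedding. The weight update below uses a common inverse-cost heuristic with an exponent r > 1; all names and defaults are assumptions.

```python
import numpy as np
from scipy.linalg import eigh

def fuse_views(laplacians, dim=32, r=2.0, n_iter=20):
    """laplacians: list of (n, n) per-view graph Laplacians. Returns embedding and weights."""
    V = len(laplacians)
    alpha = np.full(V, 1.0 / V)
    for _ in range(n_iter):
        L = sum(a ** r * Lv for a, Lv in zip(alpha, laplacians))
        # embedding: eigenvectors of the fused Laplacian with the smallest eigenvalues,
        # skipping the trivial constant eigenvector at index 0
        _, Y = eigh(L, subset_by_index=[1, dim])
        cost = np.array([np.trace(Y.T @ Lv @ Y) for Lv in laplacians])
        cost = np.maximum(cost, 1e-12)
        alpha = (1.0 / cost) ** (1.0 / (r - 1.0))
        alpha /= alpha.sum()
    return Y, alpha
```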
In many computer vision applications, an object can be described by multiple features from different views. For instance, to characterize an image well, a variety of visual features are exploited to represent its color, texture, and shape information, and each feature is encoded into a vector. Recently, we have witnessed a surge of interest in combining multiview features for image recognition and classification. However, these features usually lie in different high-dimensional spaces, which makes fusion challenging, and many conventional methods fail to integrate compatible and complementary information from multiple views. To address these issues, a multi-feature fusion framework is proposed that utilizes multiview spectral embedding and a unified distance metric to integrate features; the alternating optimization is constructed by learning the complementarity between different views. The method exploits the complementary properties of different views and obtains a low-dimensional embedding from the different-dimensional subspaces. Experiments on several benchmark datasets verify the excellent performance of the proposed method.
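A hedged sketch of a unified distance metric over multi-view features: per-view distances are rescaled and combined with non-negative view weights (for example, weights produced by an alternating optimization such as the one sketched earlier, or simply uniform weights). The function name, rescaling by the per-view maximum, and the Euclidean base metric are illustrative assumptions.

```python
import numpy as np
from scipy.spatial.distance import cdist

def unified_distance(query_views, db_views, alpha):
    """query_views/db_views: lists of per-view feature matrices; alpha: view weights."""
    D = np.zeros((query_views[0].shape[0], db_views[0].shape[0]))
    for a, Q, X in zip(alpha, query_views, db_views):
        Dv = cdist(Q, X, metric="euclidean")
        D += a * Dv / (Dv.max() + 1e-12)   # rescale each view before weighting
    return D
```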
Recently, both global and local features have demonstrated excellent performance in image retrieval. However, each has shortcomings: (1) local features describe local textures or patterns, but similar textures may confuse local feature extraction methods and yield irrelevant retrieval results; (2) global features delineate the overall feature distribution in an image, so the retrieved results often look alike yet may be irrelevant. To address these problems, we propose a fusion framework that combines local and global features and thus achieves higher retrieval precision for color image retrieval. Color local Haar binary patterns (CLHBP) and a bag-of-words (BoW) model of local features are exploited to capture the global and local information of images. The proposed framework combines the ranking results of BoW and CLHBP through a graph-based fusion method. The average retrieval precision of the proposed fusion framework is 83.6% on the Corel-1000 database, which is 9.9% and 6.4% higher than BoW and CLHBP, respectively. Extensive experiments on different databases validate the feasibility of the proposed framework.
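A hedged sketch of combining two ranked retrieval lists (e.g., the BoW and CLHBP results) into one. The paper uses a graph-based fusion; as a simple stand-in, the sketch below uses reciprocal-rank accumulation, where each list contributes a score to its candidates and candidates are re-ranked by the total. The smoothing constant `k` is an assumption.

```python
from collections import defaultdict

def fuse_rankings(rank_lists, k=60):
    """rank_lists: list of ranked candidate-id lists (best first). Returns the fused ranking."""
    score = defaultdict(float)
    for ranking in rank_lists:
        for pos, candidate in enumerate(ranking):
            score[candidate] += 1.0 / (k + pos + 1)   # reciprocal-rank contribution
        # candidates absent from a list simply receive no contribution from it
    return sorted(score, key=score.get, reverse=True)
```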