Change detection is a challenging task that has received much attention in the remote sensing field. Although numerous remote sensing change detection methods have been developed, their efficiency is insufficient to meet the requirements of real-world applications. Recently, deep learning methods have been widely used for remote sensing imagery change detection, but most of these approaches are limited by their training datasets. Moreover, adapting a convolutional neural network (CNN) pretrained on an image classification task to change detection is extremely challenging. An automatic land cover/use change detection approach based on fast and accurate frameworks for optical high-resolution remote sensing imagery is proposed. The fast framework is designed for applications that require immediate results with low complexity. The accurate framework is designed for applications that require high precision; it decomposes large images into small processing blocks and forwards them to the CNN. The proposed frameworks can transfer features learned on one task to another, avoiding both expensive and inaccurate handcrafted features and the need for a large training dataset. A number of experiments were carried out to validate the proposed approach on three real bitemporal images. The experimental results illustrate the superiority of the proposed approach over other state-of-the-art methods.
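The block decomposition used by the accurate framework can be sketched as follows. This is a minimal illustration, not the paper's implementation: the block size and the tiling function are assumptions, and the CNN inference step is omitted.

```python
# Minimal sketch of decomposing a large image into small processing blocks,
# as the accurate framework does before forwarding them to the CNN.
# Block size (64x64) is an illustrative assumption, not from the paper.
import numpy as np

def decompose(image, block=64):
    # Tile the image into non-overlapping block x block patches.
    h, w = image.shape[:2]
    blocks = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            blocks.append(image[i:i + block, j:j + block])
    return blocks

img = np.zeros((256, 256, 3))   # stand-in for one bitemporal image band stack
tiles = decompose(img)          # 4 x 4 = 16 blocks for a 256x256 image
```

Each tile would then be passed through the CNN independently, which keeps memory use bounded regardless of the size of the input scene.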
Satellite images with very high spatial resolution have recently been widely used for image classification, which has become a challenging task in the remote sensing field. Due to limitations such as feature redundancy and the high dimensionality of the data, various classification methods have been proposed for remote sensing images, particularly methods based on feature extraction techniques. This paper proposes a simple, efficient method that exploits the capability of extended multi-attribute profiles (EMAPs) combined with a sparse autoencoder (SAE) for remote sensing image classification. The proposed method classifies various remote sensing datasets, including hyperspectral and multispectral images, by extracting spatial and spectral features through the combination of EMAP and SAE and feeding them to a kernel support vector machine (SVM) for classification. Experiments on the hyperspectral "Houston" dataset and the multispectral "Washington DC" dataset show that this scheme achieves better feature learning than the primitive features, traditional classifiers, and an ordinary autoencoder, and has great potential to reach higher classification accuracy in a short running time.
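The EMAP-to-SAE-to-SVM flow described above can be outlined in a few lines. This is a rough sketch under stated assumptions: both the EMAP computation and the trained sparse autoencoder are stubbed (the encoder is a fixed random projection followed by a sigmoid), and all names, sizes, and labels are illustrative.

```python
# Sketch of the EMAP + SAE + kernel-SVM pipeline. The EMAP and the trained
# SAE encoder are placeholders; only the data flow matches the description.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def emap_features(n_pixels=200, n_profiles=40):
    # Stand-in for extended multi-attribute profiles stacked per pixel.
    return rng.normal(size=(n_pixels, n_profiles))

def sae_encode(X, hidden=20):
    # Stand-in for the encoder of a trained sparse autoencoder:
    # a fixed linear map followed by a sigmoid nonlinearity.
    W = rng.normal(scale=0.1, size=(X.shape[1], hidden))
    return 1.0 / (1.0 + np.exp(-X @ W))

X = emap_features()                   # spatial-spectral input features
y = rng.integers(0, 4, size=len(X))   # toy land-cover labels
Z = sae_encode(X)                     # compact SAE encoding
clf = SVC(kernel="rbf").fit(Z, y)     # kernel SVM on the encoded features
```

In the actual method, the SAE weights would be learned by minimizing a reconstruction loss with a sparsity penalty before the encoder is reused as a feature extractor.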
Feature extraction plays a key role in the classification performance of synthetic aperture radar automatic target recognition (SAR-ATR). Choosing appropriate features to train a classifier is a crucial prerequisite. Inspired by the great success of the Bag-of-Visual-Words (BoVW) model, we address the problem of feature extraction by proposing a novel feature extraction method for SAR target classification. First, Gabor-based features are extracted from the training SAR images. Second, a discriminative codebook is generated using the K-means clustering algorithm. Third, after feature encoding by assigning each descriptor to its closest codeword in Euclidean distance, the targets are represented by a new, robust bag of features. Finally, a support vector machine (SVM) is used as the baseline classifier for target classification. Experiments are conducted on the Moving and Stationary Target Acquisition and Recognition (MSTAR) public release dataset, and the classification accuracy and time complexity results demonstrate that the proposed method outperforms state-of-the-art methods.
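The four steps above can be sketched as a standard BoVW pipeline. This is an illustrative sketch, not the paper's code: the Gabor feature extraction is stubbed with random local descriptors, and the codebook size, descriptor dimension, and labels are assumptions.

```python
# Bag-of-Visual-Words sketch: (1) local descriptors per image (Gabor step
# stubbed), (2) K-means codebook, (3) nearest-codeword histogram encoding
# in Euclidean distance, (4) SVM classification.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def extract_descriptors(image_count, patches_per_image=50, dim=32):
    # Placeholder for Gabor-based local features: one (patches, dim)
    # descriptor matrix per SAR image.
    return [rng.normal(size=(patches_per_image, dim)) for _ in range(image_count)]

def encode(descriptors, codebook):
    # Assign each descriptor to its nearest codeword (Euclidean distance)
    # and build a normalized histogram over the codebook.
    dists = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
    words = dists.argmin(axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

train_desc = extract_descriptors(20)
labels = np.array([i % 2 for i in range(20)])  # two toy target classes

# Step 2: codebook from K-means over all training descriptors.
kmeans = KMeans(n_clusters=16, n_init=10, random_state=0).fit(np.vstack(train_desc))
codebook = kmeans.cluster_centers_

# Steps 3-4: histogram encoding, then a linear-kernel SVM baseline.
X = np.array([encode(d, codebook) for d in train_desc])
clf = SVC(kernel="linear").fit(X, labels)
```

The histogram representation is what makes the encoding robust: each target is summarized by codeword frequencies rather than raw local responses.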
Understanding a scene provided by Very High Resolution (VHR) satellite imagery has become an increasingly challenging problem. In this paper, we propose a new method for scene classification based on different pre-trained Deep Feature Learning Models (DFLMs). DFLMs are applied simultaneously to extract deep features from the VHR image scene, and different basic operators are then applied to combine the features extracted with different pre-trained Convolutional Neural Network (CNN) models. We conduct experiments on the public UC Merced benchmark dataset, which contains 21 aerial scene categories with sub-meter resolution. Experimental results demonstrate the effectiveness of the proposed method compared to several state-of-the-art methods.
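The feature-combination step can be illustrated as follows. This is a minimal sketch: the two feature matrices are random stand-ins for descriptors produced by real pre-trained CNNs, and the set of "basic operators" shown (concatenation, addition, element-wise maximum) is an assumption about what such operators typically are, not the paper's exact list.

```python
# Combining deep features from several pre-trained CNNs with basic
# operators. The feature vectors are random placeholders for real
# per-scene CNN descriptors.
import numpy as np

rng = np.random.default_rng(1)
# Suppose two pre-trained CNNs each yield a 512-d descriptor per scene.
f_model_a = rng.normal(size=(100, 512))  # features from CNN model A
f_model_b = rng.normal(size=(100, 512))  # features from CNN model B

# Basic combination operators, applied by concatenation or element-wise.
concat = np.concatenate([f_model_a, f_model_b], axis=1)  # (100, 1024)
added = f_model_a + f_model_b                            # (100, 512)
maxed = np.maximum(f_model_a, f_model_b)                 # (100, 512)
```

Any of the combined representations can then be fed to a standard classifier; concatenation preserves all information at the cost of dimensionality, while addition and maximum keep the original feature size.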