We address the problem of lossless compression of hyperspectral images and present two efficient algorithms inspired by the distributed source coding principle, which perform compression by means of blocked coset coding. To make full use of the intraband and interband correlation, a prediction error block scheme and a multiband prediction scheme are introduced in the proposed algorithms. The prediction error of each 16×16 pixel block is partitioned into prediction error blocks of size 4×4, and the bit rate of the pixels corresponding to each 4×4 prediction error block is determined by its maximum prediction error. This processing exploits local correlation to reduce the bit rate efficiently while adding only a negligible amount of side information. In addition, the proposed algorithms can be easily parallelized by compressing different 4×4 blocks at the same time. Their performance is evaluated on AVIRIS images and compared with several existing algorithms. The experimental results on hyperspectral images show that the proposed algorithms are competitive with existing distributed compression algorithms. Moreover, they offer low codec complexity and high parallelism, which makes them suitable for onboard compression.
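As a rough illustration of the per-block rate allocation, the following minimal sketch partitions a 16×16 prediction error block into 4×4 sub-blocks and assigns each sub-block a bit depth from its maximum absolute error. The rule used here (enough magnitude bits to cover the largest error, plus one sign bit) is a hypothetical stand-in and may differ from the rate rule actually used in the paper:

```python
import numpy as np

def coset_bits(pred_error_block, sub=4):
    """For each 4x4 sub-block of a 16x16 prediction-error block, return the
    number of coset bits chosen from its maximum absolute prediction error
    (hypothetical rule: enough bits to represent +/- e_max)."""
    h, w = pred_error_block.shape
    bits = np.zeros((h // sub, w // sub), dtype=np.int32)
    for i in range(0, h, sub):
        for j in range(0, w, sub):
            e_max = int(np.abs(pred_error_block[i:i + sub, j:j + sub]).max())
            # ceil(log2(e_max + 1)) magnitude bits plus one sign bit
            bits[i // sub, j // sub] = 0 if e_max == 0 else int(np.ceil(np.log2(e_max + 1))) + 1
    return bits

# example: a synthetic 16x16 block of prediction errors
print(coset_bits(np.random.randint(-30, 30, (16, 16))))
```

Because each 4×4 sub-block is handled independently, the same loop body can be dispatched to parallel workers, which is the source of the parallelism claimed above.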
Sparse unmixing (SU), a pixel-based technique, has been investigated to select a small number of endmembers from a large spectral library. In image-based collaborative sparse unmixing (CSU) techniques, all pixels are forced to select the same small set of endmembers. In reality, the same small set of endmembers may be responsible for the construction of pixels within a homogeneous area, but the endmember sets often differ across an entire image. In this paper, we therefore propose a region-based collaborative sparse unmixing (RCSU) algorithm, in which a region may include nonlocal areas as long as they belong to the same type of homogeneous segment. Experimental results show that the overall performance of the proposed RCSU algorithm is better than that of image-based CSU or pixel-based SU.
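The collaborative constraint applied within one region can be illustrated with a row-sparsity (L2,1) regression, for example scikit-learn's MultiTaskLasso on synthetic data as below. This is only a sketch of the shared-support idea: the region segmentation step, the abundance nonnegativity and sum-to-one constraints, and the actual RCSU solver are not reproduced, and all sizes and parameter values are placeholders:

```python
import numpy as np
from sklearn.linear_model import MultiTaskLasso

# Collaborative sparse unmixing on one region: every pixel in the region is
# forced to use the same few library signatures via the L2,1 (row-sparsity)
# penalty on the abundance matrix W.
rng = np.random.default_rng(0)
A = rng.random((200, 60))                        # library: 200 bands, 60 signatures
true_abund = np.zeros((60, 25))
true_abund[[3, 17, 42]] = rng.random((3, 25))    # only 3 signatures are active
Y = A @ true_abund + 0.01 * rng.standard_normal((200, 25))  # 25 region pixels

model = MultiTaskLasso(alpha=0.05, fit_intercept=False, max_iter=5000)
model.fit(A, Y)                  # min ||Y - A W||_F^2 + alpha * ||W||_{2,1}
W = model.coef_.T                # abundances: signatures x pixels
active = np.where(np.linalg.norm(W, axis=1) > 1e-3)[0]
print("signatures selected for the region:", active)
```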
A lossless compression algorithm for hyperspectral images based on distributed source coding is proposed to compress spaceborne hyperspectral data effectively. To make full use of the intra-frame and inter-frame correlation, a prediction error block scheme is introduced. Compared with the scalar coset-based distributed compression method (s-DSC) proposed by E. Magli et al., in which the bitrate of the whole block is determined by its maximum prediction error, and with the s-DSC-classify scheme proposed by Song Juan, which is based on classification and coset coding, the prediction error block scheme reduces the bitrate efficiently. Experimental results on hyperspectral images show that the proposed scheme offers both high compression performance and low encoder and decoder complexity, making it suitable for onboard compression of hyperspectral images.
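The coset coding principle underlying these schemes can be sketched as follows: the encoder transmits only the k least significant bits of a pixel value (its coset index), and the decoder recovers the pixel by picking the coset member closest to its own prediction (side information), which succeeds whenever the prediction error fits within the chosen bit depth. The pixel value, prediction, and bit depth below are illustrative only:

```python
import numpy as np

def coset_encode(x, n_bits):
    """Keep only the n_bits least-significant bits (the coset index)."""
    return x & ((1 << n_bits) - 1)

def coset_decode(coset, side_info, n_bits):
    """Pick the member of the coset {coset + k * 2**n_bits} closest to the
    decoder's prediction; correct while |x - side_info| < 2**(n_bits - 1)."""
    step = 1 << n_bits
    k = np.round((side_info - coset) / step)
    return int(coset + k * step)

x, prediction = 1234, 1229      # true pixel and decoder-side prediction
k_bits = 4                      # sufficient since |x - prediction| = 5 < 2**3
idx = coset_encode(x, k_bits)   # only 4 bits are transmitted
assert coset_decode(idx, prediction, k_bits) == x
```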
The sparse representation-based classifier (SRC) has attracted great interest recently for hyperspectral image classification. A testing pixel is assumed to be a linear combination of atoms of a dictionary, where the dictionary consists of all the training samples. The objective is to find a weight vector that yields a minimum L2 representation error under the constraint that the weight vector is sparse, i.e., has a minimum L1 norm. The pixel is assigned to the class whose training samples yield the minimum error. In addition, the collaborative representation-based classifier (CRC) has been proposed, where the weight vector has a minimum L2 norm. The CRC has a closed-form solution and, when using class-specific representation, can yield even better performance than the SRC. Compared to traditional classifiers such as the support vector machine (SVM), SRC and CRC do not follow the traditional training-testing paradigm of supervised learning, yet their performance is similar to or even better than that of SVM. In this paper, we investigate a generalized representation-based classifier that uses an Lq representation error, an Lp weight norm, and adaptive regularization. The classification performance of Lq and Lp combinations is evaluated with several real hyperspectral datasets, and based on these experiments, recommendations are provided for practical implementation.
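The closed-form nature of the class-specific CRC mentioned above can be sketched as below: each class dictionary is solved with a ridge-regularized least squares and the pixel is assigned to the class with the smallest residual. The regularization value is an arbitrary placeholder, and the generalized Lq/Lp formulation studied in the paper is not reproduced here:

```python
import numpy as np

def crc_classify(y, class_dicts, lam=1e-2):
    """Class-specific collaborative representation classifier.
    y: test spectrum (bands,); class_dicts: {label: matrix of that class's
    training spectra, shape (bands, samples)}; lam: ridge regularization."""
    residuals = {}
    for label, A in class_dicts.items():
        # closed-form weights: w = (A^T A + lam * I)^{-1} A^T y
        w = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)
        residuals[label] = np.linalg.norm(y - A @ w)
    return min(residuals, key=residuals.get)
```

Because each class requires only one small linear solve, the per-pixel cost is far below that of iterative L1 solvers used by the SRC.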
The extreme learning machine (ELM) and kernel ELM (KELM) can offer performance comparable to that of the standard powerful classifier, the support vector machine (SVM), but with much lower computational cost due to their extremely simple training step. However, their performance may be sensitive to several parameters, such as the number of hidden neurons. An empirical linear relationship between the number of training samples and the number of hidden neurons is proposed. Such a relationship can be easily estimated with two small training sets and extended to large training sets so as to greatly reduce computational cost. Other parameters, such as the steepness parameter in the sigmoidal activation function and the regularization parameter in the KELM, are also investigated. The experimental results show that classification performance is sensitive to these parameters; fortunately, simple selection strategies are sufficient to avoid suboptimal performance.
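The two parameters discussed, the number of hidden neurons and the steepness of the sigmoidal activation, appear explicitly in a basic ELM such as the sketch below. The paper's empirical rule would set n_hidden as a linear function of the training-set size, with coefficients estimated from two small training sets; the regularization value and random-weight ranges here are placeholders, not values from the paper:

```python
import numpy as np

def elm_train(X, T, n_hidden, steepness=1.0, reg=1e-3, seed=0):
    """Single-hidden-layer ELM: random input weights, sigmoidal hidden layer,
    and a regularized least-squares solve for the output weights.
    X: samples x features, T: samples x classes (one-hot targets)."""
    rng = np.random.default_rng(seed)
    W = rng.uniform(-1, 1, (X.shape[1], n_hidden))       # random, never trained
    b = rng.uniform(-1, 1, n_hidden)
    H = 1.0 / (1.0 + np.exp(-steepness * (X @ W + b)))   # hidden activations
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ T)
    return W, b, beta

def elm_predict(X, W, b, beta, steepness=1.0):
    H = 1.0 / (1.0 + np.exp(-steepness * (X @ W + b)))
    return np.argmax(H @ beta, axis=1)
```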
The extreme learning machine (ELM) is of great interest to the machine learning community due to its extremely simple training step. Its performance sensitivity to the number of hidden neurons is studied in the context of hyperspectral remote sensing image classification. An empirical linear relationship between the number of training samples and the number of hidden neurons is proposed. Such a relationship can be easily estimated with two small training sets and extended to large training sets to greatly reduce computational cost. The kernel version of ELM (KELM) is also implemented with the radial basis function kernel, and such a linear relationship is still suitable. The experimental results demonstrate that when the number of hidden neurons is appropriate, the performance of ELM may be slightly lower than that of the linear SVM, but the performance of KELM is comparable to that of the kernel version of SVM (KSVM). The computational cost of ELM and KELM is much lower than that of the linear SVM and KSVM, respectively.
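For reference, a minimal RBF-kernel KELM replaces the random hidden layer with a kernel matrix and again obtains the output weights in closed form, as in the sketch below; the kernel width gamma and regularization C are placeholders, not the values used in the experiments:

```python
import numpy as np
from scipy.spatial.distance import cdist

def kelm_train(X, T, gamma=1.0, C=100.0):
    """Kernel ELM with an RBF kernel: beta = (K + I/C)^{-1} T.
    X: training samples x features, T: samples x classes (one-hot)."""
    K = np.exp(-gamma * cdist(X, X, "sqeuclidean"))
    return np.linalg.solve(K + np.eye(len(X)) / C, T)

def kelm_predict(X_test, X_train, beta, gamma=1.0):
    K = np.exp(-gamma * cdist(X_test, X_train, "sqeuclidean"))
    return np.argmax(K @ beta, axis=1)
```

The single n x n solve in kelm_train is the source of the low training cost relative to KSVM noted above.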
In hyperspectral image classification, each hyperspectral pixel can be represented by a linear combination of a few training samples from a training dictionary. Assuming the training dictionary is available, the pixel can be recovered from a minimal number of training samples by solving a sparse representation problem; the weighted coefficients of the training samples are then obtained and the class of the pixel is determined. This process is known as classification based on sparse representation. However, traditional sparse classification algorithms do not fully utilize spatial information, and their classification accuracy is relatively low. In this paper, in order to improve classification accuracy, a new sparse classification algorithm based on a First-Order Neighborhood System Weighted (FONSW) constraint is proposed. Compared with other sparse classification algorithms, the experimental results show that the proposed algorithm produces a smoother classification map and higher classification accuracy.
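The baseline sparse-representation classification step described above can be sketched with orthogonal matching pursuit over the full training dictionary, as below. The FONSW spatial constraint that the paper adds on top of this step is not reproduced, and the sparsity level is a placeholder:

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def src_classify(y, D, labels, n_nonzero=10):
    """Plain SRC: recover a sparse code for pixel y over the training
    dictionary D (bands x samples), then assign the class whose samples
    give the smallest reconstruction residual."""
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero, fit_intercept=False)
    omp.fit(D, y)
    w = omp.coef_
    residuals = {}
    for c in np.unique(labels):
        mask = labels == c
        residuals[c] = np.linalg.norm(y - D[:, mask] @ w[mask])
    return min(residuals, key=residuals.get)
```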
Spectral unmixing is an important research topic for remote sensing hyperspectral image applications. The unmixing process comprises the extraction of spectrally pure signatures (also called endmembers) and the determination of the abundance fractions of these endmembers. Because pure spectral signatures are rarely observed and spatial resolution is often inadequate, sparse regression (SR) techniques have been adopted to solve the linear spectral unmixing problem. However, spatial information has not been fully utilized by state-of-the-art SR-based solutions. In this paper, we propose a new unmixing algorithm that incorporates more suitable spatial correlations into the sparse unmixing formulation for hyperspectral images. Our algorithm integrates spectral and spatial information using Adapting Markov Random Fields (AMRF), which are introduced to exploit spatial-contextual information. Compared with other SR-based linear unmixing methods, the experimental results show that the proposed method not only improves the characterization of mixed pixels but also achieves better accuracy in hyperspectral image unmixing.
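The pixel-wise sparse-regression baseline that the AMRF prior builds on can be sketched as a nonnegative L1-regularized fit of each pixel against the library, as below on synthetic data. The AMRF spatial term itself is not reproduced; it would couple these per-pixel solutions across neighboring pixels. All sizes and the regularization weight are placeholders:

```python
import numpy as np
from sklearn.linear_model import Lasso

# Pixel-wise sparse unmixing baseline (no spatial term): abundances over the
# library A are found by a non-negative l1-regularized least-squares fit.
rng = np.random.default_rng(1)
A = rng.random((188, 240))                     # library: 188 bands, 240 signatures
x_true = np.zeros(240)
x_true[[5, 60]] = [0.7, 0.3]                   # two active signatures
y = A @ x_true + 0.005 * rng.standard_normal(188)

lasso = Lasso(alpha=1e-3, positive=True, fit_intercept=False, max_iter=10000)
lasso.fit(A, y)                                # min ||y - A x||^2 + alpha*||x||_1, x >= 0
print("selected signatures:", np.where(lasso.coef_ > 1e-3)[0])
```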