In recent years, sensor technology for hyperspectral imaging has advanced continuously, resulting in a significant increase in availability, applications, and, consequently, data volume. Compression is required to facilitate the transmission and storage of hyperspectral data sets. The high spectral correlation between adjacent bands allows decorrelation approaches to compress the data with minimal loss of important information. Since it is not known a priori which features are essential, the compression of hyperspectral data is a challenging task. In this paper, we introduce an approach to compress hyperspectral data using a Deep Autoencoder. An Autoencoder is an artificial neural network that first learns the important features of the data and subsequently reconstructs the data from the reduced encoded representation. The evaluation is done by comparing the classification performance between the original and the reconstructed data. As a classifier, we use the Adaptive Coherence Estimator to compare the spectral signatures. Performance is assessed by comparing the mean classification accuracy for a fixed false alarm rate. Additionally, the Signal-to-Noise Ratio and the spectral angle are used as metrics for evaluating the reconstruction performance. Airborne hyperspectral data are used in combination with simulated data representing a linear mixture of target and background spectra at different ratios. Multiple target and background materials are tested to compare the performance. The selected data provide a representative set of target and background spectra for evaluating the compression method with respect to the detection limit. The compression rate is set to 4:1 and the reconstruction accuracy is investigated. Additionally, classification of noisy data is compared to the compression results to show the impact of information loss. If both results are similar, it can be deduced that the compression process is near-lossless.
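To make the pipeline outlined above more concrete, the following Python sketch shows a per-pixel Autoencoder with a bottleneck sized for roughly 4:1 compression, together with standard definitions of the spectral angle, the reconstruction Signal-to-Noise Ratio, and the Adaptive Coherence Estimator statistic. The layer widths, the 124-band input, and all function names are illustrative assumptions and do not reproduce the exact architecture or implementation used in the paper.

```python
import numpy as np
import torch
import torch.nn as nn

class SpectralAutoencoder(nn.Module):
    """Fully connected Autoencoder that compresses each pixel spectrum.

    With 124 input bands and a 31-dimensional code, the bottleneck yields
    roughly the 4:1 compression rate discussed in the paper (band count and
    layer widths are assumptions, not the authors' exact architecture).
    """
    def __init__(self, n_bands=124, code_dim=31):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_bands, 96), nn.ReLU(),
            nn.Linear(96, 64), nn.ReLU(),
            nn.Linear(64, code_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 64), nn.ReLU(),
            nn.Linear(64, 96), nn.ReLU(),
            nn.Linear(96, n_bands),
        )

    def forward(self, x):
        # Encode to the reduced representation, then reconstruct the spectrum.
        return self.decoder(self.encoder(x))


def spectral_angle(x, x_hat):
    """Spectral angle (radians) between original and reconstructed spectra."""
    cos = np.sum(x * x_hat, axis=-1) / (
        np.linalg.norm(x, axis=-1) * np.linalg.norm(x_hat, axis=-1) + 1e-12)
    return np.arccos(np.clip(cos, -1.0, 1.0))


def reconstruction_snr(x, x_hat):
    """Signal-to-Noise Ratio of the reconstruction in dB."""
    return 10.0 * np.log10(np.sum(x ** 2) / (np.sum((x - x_hat) ** 2) + 1e-12))


def ace_statistic(x, target, background):
    """Adaptive Coherence Estimator score for one pixel spectrum x.

    `target` is a reference target spectrum and `background` is an
    (n_pixels, n_bands) array used to estimate mean and covariance.
    """
    mu = background.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(background, rowvar=False))
    s = target - mu
    d = x - mu
    num = (s @ cov_inv @ d) ** 2
    den = (s @ cov_inv @ s) * (d @ cov_inv @ d)
    return num / den


if __name__ == "__main__":
    # Illustrative usage with random values standing in for real spectra.
    n_bands = 124
    model = SpectralAutoencoder(n_bands=n_bands)
    x = torch.rand(8, n_bands)          # batch of 8 pixel spectra
    x_hat = model(x)
    print(reconstruction_snr(x.numpy(), x_hat.detach().numpy()))
```

In this reading of the evaluation, the same ACE scores would be computed on the original, the reconstructed, and the artificially noisy data, and the resulting detection accuracies at a fixed false alarm rate compared to judge whether the compression is near-lossless.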