This work addresses the problem of hyperspectral data compression and the evaluation of reconstruction quality at different compression rates. Data compression enables the efficient transmission of the enormous amounts of data produced by hyperspectral sensors. The information loss caused by the compression process is evaluated by means of the complex task of spectral unmixing. We propose an improved 1D-Convolutional Autoencoder architecture with different compression rates for lossy hyperspectral data compression. Furthermore, we evaluate the reconstruction using metrics such as the signal-to-noise ratio (SNR) and the spectral angle (SA) and compare them to the spectral unmixing results.
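A minimal sketch of the idea, not the architecture from the paper: a 1D convolutional autoencoder that compresses per-pixel spectra into a latent vector whose size sets the compression rate, together with the SNR and SA metrics used for evaluation. Band count, layer widths, and latent size are assumptions.

```python
import torch
import torch.nn as nn

class Spectral1DCAE(nn.Module):
    """Illustrative 1D convolutional autoencoder for per-pixel spectra."""
    def __init__(self, n_bands=224, latent_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * (n_bands // 4), latent_dim),   # compressed representation
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32 * (n_bands // 4)), nn.ReLU(),
            nn.Unflatten(1, (32, n_bands // 4)),
            nn.ConvTranspose1d(32, 16, kernel_size=5, stride=2, padding=2, output_padding=1), nn.ReLU(),
            nn.ConvTranspose1d(16, 1, kernel_size=5, stride=2, padding=2, output_padding=1),
        )

    def forward(self, x):  # x: (batch, 1, n_bands)
        return self.decoder(self.encoder(x))

def snr_db(x, x_hat):
    """Signal-to-noise ratio of the reconstruction in dB."""
    return 10 * torch.log10(x.pow(2).sum() / (x - x_hat).pow(2).sum())

def spectral_angle(x, x_hat, eps=1e-8):
    """Spectral angle between original and reconstructed spectra (radians)."""
    cos = (x * x_hat).sum(-1) / (x.norm(dim=-1) * x_hat.norm(dim=-1) + eps)
    return torch.acos(cos.clamp(-1 + eps, 1 - eps))
```

Varying `latent_dim` relative to `n_bands` corresponds to varying the compression rate in this sketch.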
Spectral unmixing often relies on a mixing model that is only an approximation. Artificial neural networks have the advantage of not requiring model knowledge. Additional advantages in the domain of spectral unmixing are the easy handling of spectral variability and the possibility of enforcing the sum-to-one and non-negativity constraints. However, they require a large amount of representative training data to achieve good results. To overcome this problem, mainly in classification tasks, augmentation strategies are widely used to increase the size of training datasets synthetically. Spectral unmixing can be considered a regression problem, for which data augmentation is also feasible. One intuitive strategy is to generate spectra for abundances that do not occur in the training dataset while taking spectral variability into account. To implement this approach, we use a convolutional neural network (CNN) whose input variables are extended by random values, which allows spectral variability to be taken into account. The random inputs are re-sampled for each data point in every epoch. During training, the CNN learns the mixing model and the characteristic spectral variability of the training dataset. Additional spectra can afterwards be generated for any given abundances to extend the original training dataset. Because the generative CNN minimizes the error between generated spectra and the corresponding ground truth over the whole dataset during training, the variance of spectra generated for the same abundances is lower than in the training data. We have investigated two approaches to improve this. The first is to increase the variance of the random input variables when generating new spectra; the second incorporates the estimated covariance matrices into the objective function. The presented method is evaluated with real data captured in our image processing laboratory. We found that augmenting the training dataset with the presented strategy improves spectral unmixing on the test dataset.
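A minimal sketch of the augmentation idea under stated assumptions: a small generative network maps abundances plus re-sampled random inputs to a spectrum and is trained against measured spectra, so it can later synthesize samples for arbitrary abundance vectors. The network size, noise dimension, and names are placeholders; the abstract describes a CNN, whereas this sketch uses plain fully connected layers for brevity.

```python
import torch
import torch.nn as nn

class SpectrumGenerator(nn.Module):
    """Illustrative generator: (abundances, random inputs) -> spectrum."""
    def __init__(self, n_endmembers=3, noise_dim=4, n_bands=224):
        super().__init__()
        self.noise_dim = noise_dim
        self.net = nn.Sequential(
            nn.Linear(n_endmembers + noise_dim, 128), nn.ReLU(),
            nn.Linear(128, 256), nn.ReLU(),
            nn.Linear(256, n_bands),
        )

    def forward(self, abundances):
        # Random inputs are re-sampled for every data point in every pass,
        # letting the network model spectral variability.
        noise = torch.randn(abundances.shape[0], self.noise_dim, device=abundances.device)
        return self.net(torch.cat([abundances, noise], dim=1))

gen = SpectrumGenerator()
opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
# Training step (abundances_batch, spectra_batch come from the real dataset):
#   opt.zero_grad()
#   loss = loss_fn(gen(abundances_batch), spectra_batch)
#   loss.backward(); opt.step()
# Afterwards, spectra for unseen abundance vectors extend the training set, e.g.:
#   new_spectrum = gen(torch.tensor([[0.2, 0.5, 0.3]]))
```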
Spectral unmixing aims to determine the relative amounts (so-called abundances) of raw materials (so-called endmembers) in hyperspectral images (HSI). Libraries of endmember spectra are often given. Since the linear mixing model assigns one spectrum to each raw material, endmember variability is not considered. Computationally costly algorithms exist that nevertheless derive precise abundances. In the method proposed in this work, we use only the pseudoinverse of the matrix of endmember spectra to estimate the abundances. As can be shown, this approach circumvents the necessity of acquiring an HSI and is computationally less costly. To become robust against model deviations, we iteratively estimate the abundances by modifying the matrix of endmember spectra used to derive the pseudoinverse. The values used to modify each endmember spectrum are derived from the singular value decomposition and the degree to which the abundances violate the physical constraints. Unlike existing algorithms, we account for endmember variability and simultaneously enforce the physical constraints. Evaluations on samples of material mixtures, such as mixtures of color powders and quartz sands, show that more accurate abundance estimates result. A physical interpretation of these estimates is enabled in most cases.
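A minimal sketch of the baseline pseudoinverse step under the linear mixing model, together with a measure of how strongly the non-negativity and sum-to-one constraints are violated; the iterative, SVD-based modification of the endmember matrix itself is not shown, and the function names are assumptions.

```python
import numpy as np

def estimate_abundances(E, x):
    """E: (n_bands, n_endmembers) endmember spectra; x: (n_bands,) observed spectrum."""
    return np.linalg.pinv(E) @ x          # least-squares abundance estimate

def constraint_violation(a):
    """Degree of violation of the non-negativity and sum-to-one constraints."""
    neg = np.clip(-a, 0.0, None).sum()    # how far the estimates fall below zero
    sto = abs(a.sum() - 1.0)              # deviation from sum-to-one
    return neg + sto

# Example with a hypothetical 3-endmember library:
# E = np.random.rand(224, 3); x = E @ np.array([0.6, 0.3, 0.1])
# a = estimate_abundances(E, x); print(a, constraint_violation(a))
```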
Regardless of whether mosaics, material surfaces, or skin surfaces are inspected, their texture plays an important role. Texture is a property that is hard to describe in words but can easily be described in pictures. Furthermore, a huge amount of digital images containing visual descriptions of textures already exists. However, this information becomes useless if there are no appropriate methods to browse the data. In addition, depending on the given task, properties such as scale, rotation, or intensity invariance are desired. In this paper we propose to analyze texture images according to their characteristic pattern. First, a classification approach is proposed to separate regular from non-regular textures. The second stage focuses on regular textures and suggests a method to sort them according to their similarity. Different features are extracted from the texture to describe its scale, orientation, texel, and the texel's relative position. Depending on the desired invariance of the visual characteristics (such as the texture's scale or the texel's form), the comparisons of the features between images are weighted and combined to define the degree of similarity between them. Tuning the weighting parameters allows this search algorithm to be easily adapted to the requirements of the task at hand: not only can the invariance to selected parameters be adjusted, the weighting of the parameters may also be modified to match an application-specific type of similarity. This search method has been evaluated using different textures and similarity criteria, achieving very promising results.
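A minimal sketch of the weighted combination of per-feature comparisons described above; feature extraction is not shown, and the feature names and weighting scheme are assumptions, not taken from the paper.

```python
import numpy as np

def texture_similarity(feat_a, feat_b, weights):
    """feat_*: dicts of feature vectors per criterion; weights: non-negative floats.
    Setting a weight to zero makes the comparison invariant to that property."""
    total, wsum = 0.0, 0.0
    for name, w in weights.items():
        d = np.linalg.norm(np.asarray(feat_a[name]) - np.asarray(feat_b[name]))
        total += w * d
        wsum += w
    distance = total / max(wsum, 1e-12)
    return 1.0 / (1.0 + distance)          # map distance to a similarity in (0, 1]

# Example weighting emphasizing the texel's form over its scale and placement:
# weights = {"scale": 0.5, "orientation": 1.0, "texel": 2.0, "placement": 1.0}
```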