Hyperspectral image (HSI) joint super-resolution (SR) in both the spatial and spectral dimensions is an area of increasing interest in HSI processing. Although recent advances in deep learning (DL) frameworks have greatly improved the performance of joint SR reconstruction, existing methods learn discrete representations of HSIs and ignore the continuous nature of real-world signals. In this paper, we propose a joint SR method based on implicit neural representation (INR), which learns local continuous representations of high-spatial-resolution hyperspectral images from discrete inputs. Experiments on joint SR demonstrate that our method achieves superior performance compared with state-of-the-art methods.
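As an illustration of the core idea only, the sketch below shows an INR-style decoder: a small MLP that maps a local latent feature plus continuous spatial-spectral coordinates to an intensity value. The encoder, feature dimensions, and coordinate convention are assumptions not given in the abstract.

```python
# Minimal sketch of an INR query for joint spatial-spectral SR. Assumption:
# some encoder (not shown) yields a local latent feature per query point, and
# an MLP decodes continuous (x, y, wavelength) coordinates into intensity.
import torch
import torch.nn as nn

class INRDecoder(nn.Module):
    def __init__(self, feat_dim=64, hidden=256):
        super().__init__()
        # Input: local latent feature + 2 spatial coords + 1 spectral coord.
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # predicted intensity at the query point
        )

    def forward(self, feats, coords):
        # feats:  (N, feat_dim) local features sampled near each query point
        # coords: (N, 3) continuous (x, y, lambda) coordinates in [-1, 1]
        return self.mlp(torch.cat([feats, coords], dim=-1))

# Any (x, y, lambda) can be queried, which is what makes the learned
# representation continuous rather than tied to a fixed grid.
decoder = INRDecoder()
feats = torch.randn(1024, 64)          # placeholder local features
coords = torch.rand(1024, 3) * 2 - 1   # random continuous query points
values = decoder(feats, coords)        # (1024, 1) reconstructed intensities
```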
In the field of compressive sensing spectral imaging, adaptive coding based on a priori knowledge is a way to obtain high-precision scene information. Most existing adaptive coding methods use only spatial a priori information to generate the coding matrices. To address this shortcoming, we propose a method that first splits low-resolution spatial-spectral information into homogeneous regions and then generates the adaptive coding matrices. The method uses the coding device of a compressive spectral imaging system to obtain spectral a priori information at low spatial resolution. Based on this a priori information, an adaptive segmentation method with region merging produces segmented images with a certain degree of regional homogeneity. Adaptive coding theory is combined with this segmentation result to generate the adaptive coding matrix, from which the compressive observation of the scene and its complementary observation are obtained. Based on these observations, a reconstruction algorithm recovers the scene information at high spatial resolution. Simulation experiments show that the adaptive compressive coding method based on spectral image region segmentation outperforms traditional adaptive coding methods in terms of peak signal-to-noise ratio and structural similarity.
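The sketch below illustrates, under stated assumptions, how a segmentation label map could be turned into region-respecting coding matrices and their complements. The per-region binary assignment, the frame count, and the complement C' = 1 - C are illustrative choices, not the paper's exact construction.

```python
# Hypothetical sketch: build binary coding matrices whose patterns respect
# the homogeneous regions found by the segmentation, plus the complementary
# matrices used for the complementary observation.
import numpy as np

def adaptive_code_from_segments(labels, n_frames=4, seed=0):
    """labels: (H, W) integer label map from the region-merging segmentation.

    Each homogeneous region is opened (1) or blocked (0) as a whole in each
    coding frame; 1 - C gives the complementary coding matrices.
    """
    rng = np.random.default_rng(seed)
    regions = np.unique(labels)
    # One binary decision per (frame, region).
    region_bits = rng.integers(0, 2, size=(n_frames, regions.size))
    codes = region_bits[:, np.searchsorted(regions, labels)]
    return codes, 1 - codes

# Toy label map: four 8x8 homogeneous blocks.
labels = np.kron(np.arange(4).reshape(2, 2), np.ones((8, 8), dtype=int))
codes, comp = adaptive_code_from_segments(labels)
print(codes.shape, comp.shape)  # (4, 16, 16) (4, 16, 16)
```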
Liquid crystal modulator devices (LCMDs) have become an important technology in the field of hyperspectral imaging. However, the spectral resolution and accuracy of LCMD-based imaging spectrometers are limited by their working principle. To overcome this limitation and promote the application of LCMDs, we propose a spectral reconstruction method using model-based neural networks. The calibrated spectral transmittance of the LCMD and a carefully designed loss function are used to constrain the computation. Experiments on reconstructing both substance spectra and spectral image cubes validate the effectiveness and superiority of the proposed method.
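A hedged sketch of such a model-based loss follows, assuming a linear measurement model y = T s with the calibrated transmittance matrix T. The specific penalty terms and weights are illustrative assumptions rather than the paper's exact loss function.

```python
# Sketch of a model-based training loss for LCMD spectral reconstruction:
# the network prediction is penalized against the reference spectrum, for
# inconsistency with the calibrated physical model, and for roughness.
import torch

def model_based_loss(s_pred, s_ref, y_meas, T, beta=0.1, gamma=0.01):
    # s_pred, s_ref: (B, L) predicted / reference spectra
    # y_meas:        (B, M) measured intensities under M modulation states
    # T:             (M, L) calibrated spectral transmittance of the LCMD
    fidelity   = torch.mean((s_pred - s_ref) ** 2)
    model_term = torch.mean((s_pred @ T.T - y_meas) ** 2)  # physics consistency
    smoothness = torch.mean((s_pred[:, 1:] - s_pred[:, :-1]) ** 2)
    return fidelity + beta * model_term + gamma * smoothness
```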
This paper proposes a multi-channel spectral coding method for a coded aperture tunable filter spectral imager, in which the liquid crystal tunable filter is switched to combine several selected spectral channels into one snapshot. The spectral coding and spatial coding can be designed concurrently.
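As a minimal illustration (with assumed array shapes and naming, not the paper's exact design), one snapshot can be modeled as the sum of the spectrally selected channels, each masked by a per-channel spatial coded aperture:

```python
# Toy forward model: spectral code selects/weights channels combined into one
# snapshot; spatial code applies a coded-aperture pattern to each channel.
import numpy as np

def snapshot(cube, spectral_code, spatial_code):
    # cube:          (L, H, W) scene spectral cube
    # spectral_code: (L,) per-channel weights for one snapshot
    # spatial_code:  (L, H, W) coded-aperture pattern applied per channel
    return np.sum(spectral_code[:, None, None] * spatial_code * cube, axis=0)

cube = np.random.rand(31, 64, 64)
spec = (np.arange(31) % 3 == 0).astype(float)            # select every third channel
spat = (np.random.rand(31, 64, 64) > 0.5).astype(float)  # random binary apertures
y = snapshot(cube, spec, spat)                            # (64, 64) coded measurement
```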
Adaptive coding is a recently emerging strategy for designing coded apertures using a priori information about the target layout to improve the reconstruction accuracy of compressed information. Building on a compressive sensing imaging system, this paper proposes a method for designing adaptive coded apertures using low-resolution a priori information from every spectral band of the scene. The method also enables coded apertures containing both positive and negative coding values to be implemented in a compressive spectral imaging system that uses a DMD or similar device as the coding element. Compared with other designs that rely on various kinds of a priori information, this solution requires no additional equipment and operates in real time. Simulation results show that the proposed real-time adaptive coded aperture method attains better reconstruction performance than a random coding method at relatively high compression ratios.
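The positive/negative coding idea can be sketched as follows, assuming (as an illustration, not the paper's exact scheme) that the signed code is split into two non-negative DMD patterns whose measurements are subtracted:

```python
# A DMD only realizes non-negative (0/1) transmittance, so a signed code is
# split as C = C_plus - C_minus and the signed measurement is recovered by
# differencing the two complementary observations.
import numpy as np

def signed_measurement(scene, signed_code):
    # scene: (H, W) band image; signed_code: (H, W) with entries in {-1, +1}
    c_plus  = (signed_code > 0).astype(float)   # first DMD pattern
    c_minus = (signed_code < 0).astype(float)   # complementary DMD pattern
    y_plus  = np.sum(c_plus  * scene)           # observation with C_plus
    y_minus = np.sum(c_minus * scene)           # observation with C_minus
    return y_plus - y_minus                     # equals sum(signed_code * scene)

scene = np.random.rand(64, 64)
code = np.sign(np.random.randn(64, 64))
assert np.isclose(signed_measurement(scene, code), np.sum(code * scene))
```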