KEYWORDS: Cameras, Hyperspectral imaging, Sensors, Standards development, Data acquisition, Point spread functions, Crosstalk, Signal to noise ratio, Data storage, Spectral response, Optical resolution
Hyperspectral cameras are optical instruments designed to capture spatial information from a scene in such a way that each pixel contains the spectrum of the corresponding small scene area. One of the important factors when assessing camera performance is the amount of spatial and spectral information in the acquired hyperspectral data. Traditionally, this is communicated to users directly as spatial pixel count and spectral band count. However, depending on the width of the sampling point spread function (SPSF) and of the spectral response function (SRF), the amount of acquired information may differ significantly between two cameras, even if the specified pixel and band counts are the same. As a better indication of the amount of acquired information, the authors suggest adding two new entries to the camera specification sheet: equivalent pixel count (EPC) and equivalent band count (EBC). Both specifications are derived from an optical resolution criterion such as the full width at half maximum (FWHM) of the SPSF and SRF. With pixel count being a universally known and intuitive concept, and FWHM being a well-established resolution criterion, EPC and EBC would allow a quick and easy comparison between cameras with significantly different degrees of optical blur, pixel count, and band count. EPC and EBC are drafted for inclusion in the upcoming standard dedicated to hyperspectral imaging devices, which is currently being finalized by the P4001 working group, sponsored by the IEEE Geoscience and Remote Sensing Society standards committee.
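To illustrate the idea (not the exact definition from the draft standard), a minimal Python sketch follows, assuming that the equivalent count is simply the number of FWHM-wide resolution elements that fit across the sampled spatial field of view or spectral range; the camera parameters used below are hypothetical.

    # Hypothetical illustration of equivalent pixel/band count (EPC/EBC).
    # Assumption: the equivalent count is the number of FWHM-wide resolution
    # elements fitting across the sampled range; the P4001 draft definition
    # may differ in detail.

    def equivalent_count(sampled_range, fwhm):
        """Number of FWHM-wide resolution elements across a sampled range."""
        return sampled_range / fwhm

    # Hypothetical camera: 1024 spatial pixels over a 20 degree FOV,
    # but the sampling PSF has a FWHM of 2.5 pixels.
    pixel_pitch_deg = 20.0 / 1024           # angular sampling per pixel
    spsf_fwhm_deg = 2.5 * pixel_pitch_deg   # optical blur expressed in degrees
    epc = equivalent_count(20.0, spsf_fwhm_deg)    # ~410, far below 1024

    # Hypothetical spectral axis: 200 bands over 400-1000 nm, SRF FWHM of 6 nm.
    ebc = equivalent_count(1000.0 - 400.0, 6.0)    # ~100, half the band count

    print(f"EPC ~ {epc:.0f}, EBC ~ {ebc:.0f}")

In this example the two specification sheets would both state 1024 pixels and 200 bands, while the equivalent counts reveal how much less information the blurrier camera actually delivers.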
Hyperspectral cameras are capable of obtaining highly useful data for geology, agriculture, urban planning, and many other applications. Several satellite-based hyperspectral cameras are currently operational, providing hyperspectral data to various users. However, even large instruments usually have a relatively coarse ground sampling distance (GSD): 10 m or larger in the 400 to 1000 nm range and 30 m or larger in the 900 to 2500 nm range. The GSD is even coarser in hyperspectral cameras for microsatellites. Based on information from the PRISMA 2021 Workshop and feedback from our customers, the most requested improvement for satellite-based hyperspectral cameras is a significantly finer GSD. There is also strong demand for smaller, microsatellite-compatible hyperspectral cameras. Due to the lower mission cost, such cameras can provide hyperspectral data to more users. Additionally, microsatellite constellations could provide swath and revisit times that would be impossible for a single large satellite. Creating a hyperspectral camera with acceptable signal-to-noise ratio (SNR) and small GSD that is still compatible with a small platform is a major challenge. Our approach has been to create a hyperspectral camera that surpasses the current limitations of small satellite platforms and provides data that, for some specifications, exceed what is available for free from large instruments. Our focus has been on providing a significantly improved GSD and small spatial and spectral misregistration, while keeping acceptable spectral sampling and SNR. The instrument development has been funded by the Norwegian Space Agency. One of the proposed instruments has been selected by the Norwegian Space Agency as the primary payload on an upcoming Norwegian In-Orbit Demonstrator satellite.
The HySpex Mjolnir-1024 hyperspectral camera provides a unique combination of small form factor and low mass with high performance and scientific-grade data quality. The camera has a spatial resolution of 1024 pixels, a spectral resolution of 200 bands within the 400 nm to 1000 nm wavelength range, and F1.8 optics that ensure high light throughput. A rugged design with good thermal and mechanical stability makes Mjolnir-1024 an excellent option for a wide range of scientific applications in airborne UAV operations and field work. The optical architecture is based on the high-end ODIN-1024 system and features a total FOV of 20 degrees with approximately 0.1 pixel residual keystone and even smaller residual smile after resampling. With a total mass of less than 4 kg, including the hyperspectral camera, data acquisition unit, IMU, and GPS, the system is suitable for even relatively small UAVs. The system is generic and can be deployed on a wide range of UAVs with various downlink capabilities. The ground station software enables full control of the sensor settings and can show the location of the UAV in real time, plot its flight path, and display a georeferenced waterfall preview image to give instant feedback on spatial coverage. The system can be triggered automatically by the UAV's flight management system, but can also be controlled manually. The Mjolnir-1024 housing contains both the camera hardware and a high-performance onboard computer. The computer enables advanced processing capabilities such as real-time georeferencing based on the data streams from the camera and the INS. The system is also capable of performing real-time image analysis such as anomaly detection, NDVI, and spectral angle mapping (SAM). The data products can be overlaid on various background maps and images in real time. The real-time processing results can also be downlinked and displayed directly on the monitor of the ground station.
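As a rough illustration of the kind of per-pixel products mentioned above, here is a minimal NumPy sketch of NDVI and spectral angle mapper (SAM) computations on a hyperspectral cube; the band selection, wavelengths, and array names are hypothetical and are not taken from the Mjolnir onboard software.

    import numpy as np

    # cube: hypothetical radiance or reflectance cube of shape (lines, samples, bands)
    # wavelengths: corresponding band centers in nm

    def ndvi(cube, wavelengths, red_nm=670.0, nir_nm=800.0):
        """Normalized difference vegetation index from the nearest red/NIR bands."""
        red = cube[..., np.argmin(np.abs(wavelengths - red_nm))]
        nir = cube[..., np.argmin(np.abs(wavelengths - nir_nm))]
        return (nir - red) / (nir + red + 1e-12)

    def sam(cube, reference):
        """Spectral angle (radians) between each pixel spectrum and a reference spectrum."""
        dot = np.tensordot(cube, reference, axes=([-1], [0]))
        norms = np.linalg.norm(cube, axis=-1) * np.linalg.norm(reference)
        return np.arccos(np.clip(dot / (norms + 1e-12), -1.0, 1.0))

Both products reduce each pixel's spectrum to a single number, which is why they are well suited to real-time computation and waterfall display during a flight.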
We propose a method for measuring and quantifying image quality in push-broom hyperspectral cameras in terms of spatial misregistration caused by keystone and variations in the point spread function (PSF) across spectral channels, and image sharpness. The method is suitable for both traditional push-broom hyperspectral cameras where keystone is corrected in hardware and cameras where keystone is corrected in postprocessing, such as resampling and mixel cameras. We show how the measured camera performance can be presented graphically in an intuitive and easy to understand way, comprising both image sharpness and spatial misregistration in the same figure. For the misregistration, we suggest that both the mean standard deviation and the maximum value for each pixel are shown. We also suggest how the method could be expanded to quantify spectral misregistration caused by the smile effect and corresponding PSF variations. Finally, we have measured the performance of two HySpex SWIR 384 cameras using the suggested method. The method appears well suited for assessing camera quality and for comparing the performance of different hyperspectral imagers and could become the future standard for how to measure and quantify the image quality of push-broom hyperspectral cameras.
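To make the suggested summary concrete, here is a minimal sketch, assuming the spatial misregistration has already been measured for every spatial pixel and spectral band (for example as a shift, in fractions of a pixel, relative to a reference band); the array layout and variable names are hypothetical and the published method defines the measurement procedure itself.

    import numpy as np

    # misregistration: hypothetical array of shape (bands, pixels) holding the
    # measured spatial misregistration of each band relative to a reference band,
    # in fractions of a pixel.

    def misregistration_summary(misregistration):
        """Per-pixel mean, standard deviation, and maximum of |misregistration|."""
        abs_err = np.abs(misregistration)
        return {
            "mean": abs_err.mean(axis=0),
            "std": abs_err.std(axis=0),
            "max": abs_err.max(axis=0),
        }

Plotting these per-pixel curves alongside a sharpness measure gives the kind of single-figure performance summary the abstract describes.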
SYSIPHE is an airborne hyperspectral imaging system, the result of a cooperation between France (Onera and DGA) and Norway (NEO and FFI). It is unique in its spatial sampling (0.5 m with a 500 m swath at a height of 2000 m above ground) combined with its wide spectral coverage (from 0.4 µm to 11.5 µm in the atmospheric transmission bands).
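As a quick consistency check of these figures (an illustration only, not part of the original abstract), the stated sampling and swath imply roughly 1000 samples across track and a total field of view of about 14 degrees at 2000 m:

    import math

    swath_m = 500.0      # across-track swath
    sampling_m = 0.5     # ground sampling distance
    height_m = 2000.0    # height above ground

    samples_across_track = swath_m / sampling_m                      # 1000 samples
    fov_deg = 2 * math.degrees(math.atan(swath_m / 2 / height_m))    # ~14.3 degrees
    print(samples_across_track, round(fov_deg, 1))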
Its infrared component, named SIELETERS, consists of two high-étendue imaging static Fourier transform spectrometers, one for the midwave infrared and one for the longwave infrared. These two imaging spectrometers are closely similar in design, since both consist of a Michelson interferometer, a refractive imaging system, and a large IRFPA (1016 × 440 pixels). Moreover, both are cryogenically cooled and mounted on their own stabilization platform, which allows the line of sight to be controlled and recorded. These data are used to reconstruct and georeference the spectral image from the raw interferometric images.
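As a rough sketch of the principle behind such static Fourier transform spectrometers (not the actual SIELETERS processing chain), the spectrum of a pixel can be recovered from its sampled interferogram by apodization followed by a Fourier transform; the function and sampling parameters below are assumptions for illustration.

    import numpy as np

    def interferogram_to_spectrum(interferogram, opd_step_cm):
        """Recover a spectrum (vs. wavenumber, cm^-1) from a uniformly sampled interferogram.

        interferogram: 1-D array sampled at constant optical path difference steps.
        opd_step_cm:   optical path difference increment between samples, in cm.
        """
        centered = interferogram - interferogram.mean()        # remove the DC level
        apodized = centered * np.hanning(centered.size)        # reduce spectral ringing
        spectrum = np.abs(np.fft.rfft(apodized))               # magnitude spectrum
        wavenumbers = np.fft.rfftfreq(centered.size, d=opd_step_cm)  # cm^-1 axis
        return wavenumbers, spectrum

In the real instrument this reconstruction additionally uses the recorded line-of-sight data to assemble and georeference the interferograms collected as the aircraft moves over the scene.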
The visible and shortwave infrared component, named HySpex ODIN-1024, consists of two spectrographs, for VNIR and SWIR, based on transmissive gratings. These share a common fore-optics and a common slit to ensure perfect registration between the VNIR and SWIR images. The spectral resolution varies from 5 nm in the visible to 6 nm in the shortwave infrared.
In addition, STAD, the post-processing and archiving system, is being developed to provide spectral reflectance and temperature (SRT) products from the calibrated, georeferenced, and inter-band registered at-sensor spectral images acquired and pre-processed by the SIELETERS and HySpex ODIN-1024 systems.
The HySpex ODIN-1024 is an airborne VNIR-SWIR hyperspectral imaging system which advances the state of the art with respect to both performance and system functionality. HySpex ODIN-1024 is designed as a single instrument for both VNIR (0.4 to 1 µm wavelength) and SWIR (1 to 2.5 µm) rather than as a combination of two separate instruments. With the common fore-optics of the single instrument, a more accurate and stable co-registration is achieved across the full spectral range compared to having two individual instruments. For SWIR the across-track resolution is 1024 pixels, while for VNIR the user of the instrument can choose a resolution of either 1024 or 2048 pixels. In addition to high spatial resolution, the optical design provides low smile and keystone distortion and high sensitivity, obtained through low F-numbers of F1.64 for VNIR and F2.0 for SWIR. The camera utilizes state-of-the-art scientific CMOS (VNIR) and MCT (SWIR) sensors with low readout noise, high speed, and high spatial resolution. The system has an onboard calibration subsystem to monitor the stability of the instrument during variations in environmental conditions. It features integrated real-time processing functionality, enabling real-time detection, classification, and georeferencing. We present an overview of the performance of the instrument and results from airborne data acquisitions.
KEYWORDS: Short wave infrared radiation, Data acquisition, Calibration, Sensors, Cameras, Data modeling, Spatial resolution, Data processing, Georeferencing, RGB color model
HySpex ODIN-1024 is a next-generation, state-of-the-art airborne hyperspectral imaging system developed by Norsk Elektro Optikk AS. Near-perfect co-registration between VNIR and SWIR is achieved by employing a novel common fore-optics design and a thermally stabilized housing. Its unique design and the use of state-of-the-art MCT and sCMOS sensors provide a combination of high sensitivity and low noise, low spatial and spectral misregistration (smile and keystone), and very high resolution (1024 pixels in the merged data products). In addition to its excellent data quality, HySpex ODIN-1024 includes real-time data processing functionality such as real-time georeferencing of acquired images. It also features a built-in onboard calibration system to monitor the stability of the instrument. The paper presents data and results from laboratory tests and characterizations, as well as results from airborne measurements.
We propose a method for measuring and quantifying image quality in push-broom hyperspectral cameras in terms of spatial misregistration—such as keystone and variations in the point-spread-function across spectral channels—and image sharpness. The method is suitable for both traditional push-broom hyperspectral cameras where keystone is corrected in hardware and cameras where keystone is corrected in post-processing, such as resampling and mixel cameras. We show how the measured camera performance can be presented graphically in an intuitive and easy-to-understand way, comprising both image sharpness and spatial misregistration in the same figure. For the misregistration we suggest that both the mean standard deviation and the maximum value for each pixel are shown. We also suggest a possible additional parameter for quantifying camera performance: probability of misregistration being larger than a given threshold. Finally, we have quantified the performance of a HySpex SWIR 384 camera prototype using the suggested method. The method appears well suited for assessing camera quality and for comparing the performance of different hyperspectral imagers, and could become the future standard for how to measure and quantify the image quality of push-broom hyperspectral cameras.
Current high-resolution hyperspectral cameras attempt to correct misregistration errors in hardware. This severely limits other specifications of the hyperspectral camera, such as spatial resolution and light gathering capacity. If resampling is used to correct keystone in software instead of in hardware, these stringent requirements could be lifted. Preliminary designs show that a resampling camera should be able to resolve at least 3000–5000 pixels, while at the same time collecting up to four times more light than the majority of current high spatial resolution cameras. Virtual camera software, specifically developed for this purpose, was used to compare the performance of resampling and hardware-corrected cameras. Different criteria are suggested for quantifying camera performance. The simulations showed that the performance of a resampling camera is comparable to that of a hardware-corrected camera with 0.1 pixel residual keystone, and that the use of a more advanced resampling method than the commonly used linear interpolation, such as high-resolution cubic splines, is highly beneficial for the data quality of the resampled image. Our findings suggest that if high-resolution sensors are available, it would be better to use resampling instead of trying to correct keystone in hardware.
Current high-resolution hyperspectral cameras attempt to correct misregistration errors in hardware. Usually, it is required that aberrations in the optical system be controlled with a precision of 0.1 pixel or better. This severely limits other specifications of the hyperspectral camera, such as spatial resolution and light gathering capacity, and often requires very tight tolerances. If resampling is used to correct keystone in software instead of in hardware, these stringent requirements could be lifted. Preliminary designs show that a resampling camera should be able to resolve at least 3000-5000 pixels, while at the same time collecting up to four times more light than the majority of current high spatial resolution cameras that correct keystone in hardware (HW corrected cameras). Virtual Camera software, specifically developed for this purpose, was used to compare the performance of resampling cameras and HW corrected cameras. For the cameras where a large keystone is corrected by resampling, different resampling methods are investigated. Different criteria are suggested for quantifying performance, and the tested cameras are compared according to these criteria. The simulations showed that the performance of a resampling camera is comparable to that of a HW corrected camera with 0.1 pixel residual keystone, and that the use of a more advanced resampling method than the commonly used linear interpolation, such as high-resolution cubic splines, is highly beneficial for the data quality of the resampled image. Our findings suggest that if high-resolution sensors are available, it would be better to use resampling instead of trying to correct keystone in hardware.
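As a minimal sketch of keystone correction by resampling as discussed here (not the authors' Virtual Camera software), a detector row whose samples are displaced by a known, band-dependent keystone can be resampled back onto the nominal pixel grid with either linear interpolation or a cubic spline; SciPy is assumed to be available, the keystone model is hypothetical, and a standard cubic spline stands in for the high-resolution splines studied in the paper.

    import numpy as np
    from scipy.interpolate import CubicSpline

    def resample_row(row, keystone_shift, method="cubic"):
        """Resample one spectral band's detector row onto the nominal pixel grid.

        row:            measured signal for one band, one value per detector column.
        keystone_shift: known keystone of this band, in pixels (positive = shifted right).
        """
        nominal = np.arange(row.size)          # nominal (keystone-free) pixel centers
        actual = nominal + keystone_shift      # where this band was actually sampled
        if method == "cubic":
            return CubicSpline(actual, row)(nominal)
        return np.interp(nominal, actual, row) # linear interpolation for comparison

Running both methods on simulated scenes is essentially how the resampling approaches can be compared against a hardware-corrected camera with a given residual keystone.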
The SYSIPHE system is a state-of-the-art airborne hyperspectral imaging system developed through European cooperation. With its uniquely wide spectral range and fine spatial resolution, its aim is to validate and quantify the information potential of hyperspectral imaging in military, security, and environmental applications. The first section of the paper recalls the objectives of the project. The second describes the sensors, their implementation onboard the platform, and the data processing chain. The last section illustrates the work in progress.
The new "scientific CMOS" (sCMOS) sensor technology has been tested for use in hyperspectral imaging. The sCMOS
offers extremely low readout noise combined with high resolution and high speed, making it attractive for hyperspectral
imaging applications. A commercial HySpex hyperspectral camera has been modified to be used in low light conditions
integrating an sCMOS sensor array. Initial tests of fluorescence imaging in challenging light settings have been
performed. The imaged objects are layered phantoms labelled with controlled location and concentration of fluorophore.
The camera has been compared to a state of the art spectral imager based on CCD technology. The image quality of the
sCMOS-based camera suffers from artifacts due to a high density of pixels with excessive noise, attributed to the high
operating temperature of the array. Image processing results illustrate some of the benefits and challenges of the new
sCMOS technology.
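As an illustration of one common way to mitigate such pixel artifacts (not necessarily the processing used in this study), isolated pixels with excessive noise can be flagged from a stack of dark frames and replaced by the median of their neighbours; the threshold and array names below are assumptions.

    import numpy as np
    from scipy.ndimage import median_filter

    def replace_noisy_pixels(frame, dark_frames, n_sigma=5.0):
        """Replace pixels with anomalously high dark-frame noise by a local median.

        frame:       raw image to correct (2-D array).
        dark_frames: stack of dark frames, shape (n, rows, cols), used to estimate
                     the temporal noise of each pixel.
        """
        noise = dark_frames.std(axis=0)                      # per-pixel temporal noise
        bad = noise > noise.mean() + n_sigma * noise.std()   # unusually noisy pixels
        corrected = frame.copy()
        corrected[bad] = median_filter(frame, size=3)[bad]   # fill from 3x3 neighbourhood
        return corrected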
In this article we address the design and exploitation of a real-field laboratory demonstrator combining active polarimetric and multispectral modes in a single acquisition. Its building blocks, including a multi-wavelength pulsed optical parametric oscillator on the emission side and a hyperspectral imager with polarimetric capability on the reception side, are described. The results obtained with this demonstrator are illustrated with examples and discussed.
A compact laboratory demonstrator providing both active polarimetric and multispectral images has been designed. Its building blocks include, on the emission side, a multi-wavelength optical parametric oscillator and, on the reception side, a polarimetric hyperspectral imager. Some of the results obtained with this system are illustrated and discussed. In particular, we show that a multispectral polarimetric image brings additional information about the scene, especially when interpreted in conjunction with its counterpart intensity image, since these two images are complementary in most cases. Moreover, although hyperspectral imaging might be mandatory for recognition of small targets, we show that the number of channels can be limited to a few wavelengths as far as target detection is concerned.
Spectroscopic and polarimetric imaging have an increasing range of applications in remote sensing as well as inspection systems. It is shown how a limited polarimetric imaging capability can be added to a conventional hyperspectral camera based on a transmission grating imaging spectrometer. This is done by utilizing the undiffracted part of the light and separating its focus at the detector into two components using a simple walkoff plate composite. The resulting camera has full hyperspectral capability in the visible and near infrared spectral range, and in addition it forms broadband images for two orthogonal linear polarizations. Example imaging results are given and it is shown how polarimetric information can be used to detect manmade objects in a natural scene. A discussion of the limitations of the system is given.
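As a minimal sketch of how the two broadband polarization images could be combined for detection (an assumed post-processing step, not the published algorithm), the normalized difference of the two orthogonal linear polarization images highlights strongly polarizing, often man-made, surfaces against a largely unpolarized natural background.

    import numpy as np

    def linear_polarization_contrast(i_parallel, i_perpendicular):
        """Normalized difference of two orthogonal linear polarization images.

        Values near zero indicate unpolarized (typically natural) surfaces;
        large magnitudes indicate strongly polarizing, often man-made, surfaces.
        """
        total = i_parallel + i_perpendicular
        return (i_parallel - i_perpendicular) / np.where(total > 0, total, 1.0)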
Bruises can be important evidence in legal medicine, for example in cases of child abuse. Optical techniques can be used to discriminate and quantify the chromophores present in bruised skin, and thereby aid in dating an injury. However, spectroscopic techniques provide only average chromophore concentrations for the sampled volume and contain little information about the spatial chromophore distribution in the bruise. Hyperspectral imaging combines the power of imaging and spectroscopy, and can provide both spectroscopic and spatial information. In this study a hyperspectral imaging system developed by Norsk Elektro Optikk AS was used to measure the temporal development of bruised skin in a human volunteer. The bruises were inflicted by paintball bullets. The wavelength ranges used were 400 to 1000 nm (VNIR) and 900 to 1700 nm (SWIR), and the spectral sampling intervals were 3.7 nm and 5 nm, respectively. Preliminary results show good spatial discrimination of the bruised areas compared to normal skin. Development of a white spot can be seen in the central zone of the bruises. This central white zone was found to resemble the shape of the object hitting the skin, and is believed to develop in areas where the impact caused vessel damage. These results show that hyperspectral imaging is a promising technique for evaluating the temporal and spatial development of bruises on human skin.