Hyperspectral cameras are a key enabling technology in precision agriculture, biodiversity monitoring, and ecological research, and these applications are fueling a growing demand for devices suited to widespread deployment in such environments. Current hyperspectral cameras, however, require significant investment in post-processing and rarely allow for live assessment of captured data. Here, we introduce a novel hyperspectral camera that combines live spectral data with high-resolution imagery and is suitable for integration with robotics and automated monitoring systems. We explore the utility of this camera for applications including chlorophyll detection and the live display of spectral indices relating to plant health. We discuss the performance of this novel technology and the associated hyperspectral analysis methods in support of an ecological study of grassland habitats at Wytham Woods, UK.
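As an illustration of the kind of live spectral index mentioned in the abstract, the sketch below computes two common plant-health indices (NDVI and a red-edge chlorophyll index) from a hyperspectral cube with NumPy. The band wavelengths, cube layout, and function names are assumptions chosen for illustration, not the camera's actual processing pipeline.

```python
# Minimal sketch (not the authors' pipeline): plant-health indices from a
# hyperspectral cube. Assumes a (rows, cols, bands) cube and a matching
# wavelength array in nanometres.
import numpy as np

def band_at(cube, wavelengths, target_nm):
    """Return the image plane whose band centre is closest to target_nm."""
    idx = int(np.argmin(np.abs(wavelengths - target_nm)))
    return cube[:, :, idx].astype(float)

def ndvi(cube, wavelengths):
    """Normalised Difference Vegetation Index from ~670 nm and ~800 nm bands."""
    red = band_at(cube, wavelengths, 670.0)
    nir = band_at(cube, wavelengths, 800.0)
    return (nir - red) / (nir + red + 1e-9)

def chlorophyll_red_edge(cube, wavelengths):
    """Red-edge chlorophyll index, R(790 nm) / R(705 nm) - 1."""
    return band_at(cube, wavelengths, 790.0) / (band_at(cube, wavelengths, 705.0) + 1e-9) - 1.0

# Quick check on synthetic data: a 100 x 100 scene with 50 bands over 400-900 nm.
wavelengths = np.linspace(400.0, 900.0, 50)
cube = np.random.rand(100, 100, 50)
print(ndvi(cube, wavelengths).mean(), chlorophyll_red_edge(cube, wavelengths).mean())
```

For a live display, a map like the NDVI array above would simply be re-computed per frame and rendered as a false-colour overlay.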
The scattering properties of materials such as coated and painted surfaces are important in the design of low observable materials, and in the accurate modeling of targets in a scene against different background materials. The distribution of light scattered from a surface can be determined by measuring its Bidirectional Reflectance Distribution Function (BRDF) using devices such as scatterometers. Ideally, the BRDF should be measurable both in and out of the plane of incidence, so that both isotropic and anisotropic scatter can be characterized, with suitably high angular resolving power and signal-to-noise ratio at the wavelengths of interest. Both narrow-band light sources (e.g. lasers) and broad-band light sources combined with spectral band-pass filters may be used with appropriate detectors. Such instrumentation may involve complex mechanically moving parts and optics that require careful alignment to the sample surface under measurement. To understand the synergies and discrepancies between the outputs of different BRDF instruments measuring the same sample set, we have carried out a round-robin comparison between our research laboratories on an agreed set of sample surfaces, measurement geometries, and wavelengths. In this paper, the results from this study are presented and discussed.
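By way of background to the quantity being compared, a scatterometer's BRDF estimate at each detector position is commonly formed from the collected power divided by the incident power, the detector solid angle, and the cosine of the scatter angle. The snippet below is a minimal illustration of that relation together with a simple inter-laboratory discrepancy metric; the variable names, units, and example numbers are assumptions, not part of the round-robin protocol.

```python
# Illustrative sketch (not any specific instrument's software):
#   BRDF ~ P_s / (P_i * Omega * cos(theta_s))
# where P_s is the scattered power collected in solid angle Omega at scatter
# angle theta_s and P_i is the incident power.
import numpy as np

def brdf_estimate(p_scattered, p_incident, solid_angle_sr, theta_s_deg):
    """Estimate BRDF (1/sr) at each detector position."""
    theta = np.radians(np.asarray(theta_s_deg, dtype=float))
    return np.asarray(p_scattered, dtype=float) / (p_incident * solid_angle_sr * np.cos(theta))

def interlab_discrepancy(brdf_a, brdf_b):
    """Relative difference between two laboratories at matched geometries."""
    a, b = np.asarray(brdf_a, float), np.asarray(brdf_b, float)
    return 2.0 * np.abs(a - b) / (a + b)

# Example: an in-plane scan from 10 to 60 degrees with a 1 msr collection angle.
angles = np.arange(10, 61, 10)
lab_a = brdf_estimate([2.1e-4, 1.8e-4, 1.5e-4, 1.2e-4, 9e-5, 7e-5], 1.0, 1e-3, angles)
lab_b = lab_a * 1.05  # a second instrument reading ~5% high at every geometry
print(interlab_discrepancy(lab_a, lab_b))
```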
KEYWORDS: Hyperspectral imaging, Reflectivity, High dynamic range imaging, Signal to noise ratio, Data acquisition, Error analysis, Imaging systems, Spectrometers, Sensors, Atmospheric modeling
This paper demonstrates the use of high dynamic range processing applied to hyperspectral imaging with linescan spectrometers. The technique provides an improvement in signal-to-noise ratio for reflectance estimation, demonstrated using field imagery of rural scenes collected with a ground-based linescan spectrometer. Once fully developed, the specific application is expected to improve colour estimation approaches and consequently the accuracy of camouflage performance test and evaluation. Data are presented from both field and laboratory experiments used to evaluate the improvements granted by the adoption of high dynamic range data acquisition in hyperspectral imaging. High dynamic range imaging is well suited to the hyperspectral domain due to the large variation in solar irradiance across the visible and short-wave infra-red (SWIR) spectrum, coupled with the wavelength dependence of the nominal silicon detector response. Under field measurement conditions it is generally impractical to provide artificial illumination; consequently, an adaptation of the hyperspectral imaging and reflectance estimation process has been developed to accommodate the solar spectrum. This is shown to improve the signal-to-noise ratio of the reflectance estimation process for scene materials in the 400-500 nm and 700-900 nm regions.
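The following is a hedged sketch of the general high dynamic range idea described above, not the paper's exact algorithm: several exposures of each line are fused by averaging only unsaturated samples, weighted towards longer exposures, and reflectance is then estimated against a white reference. Array shapes, the saturation level, and the weighting scheme are assumptions.

```python
# Sketch of exposure fusion followed by reflectance estimation. Assumes
# frames of shape (n_exposures, n_pixels, n_bands) with counts normalised
# to [0, 1] and known exposure times.
import numpy as np

def hdr_fuse(frames, exposure_times, saturation=0.95):
    """Fuse frames taken at different exposure times into one rate estimate.

    Returns counts per unit exposure time, using only unsaturated samples and
    weighting longer (lower-noise) exposures more heavily.
    """
    t = np.asarray(exposure_times, float)[:, None, None]
    valid = frames < saturation          # mask out saturated samples
    rates = frames / t                   # counts per unit exposure time
    weights = np.where(valid, t, 0.0)    # trust longer exposures more
    return (rates * weights).sum(axis=0) / np.maximum(weights.sum(axis=0), 1e-12)

def reflectance(scene_rate, white_rate, dark_rate=0.0):
    """Reflectance estimate relative to a white reference panel."""
    return (scene_rate - dark_rate) / np.maximum(white_rate - dark_rate, 1e-12)

# Example: three exposures (1, 4 and 16 ms) of a 1000-pixel line with 200 bands.
rng = np.random.default_rng(0)
truth = rng.random((1000, 200)) * 0.1                       # per-unit-time signal
frames = np.clip(truth[None] * np.array([1.0, 4.0, 16.0])[:, None, None], 0.0, 1.0)
fused = hdr_fuse(frames, [1.0, 4.0, 16.0])
print(np.allclose(fused, truth))        # saturated long-exposure samples are excluded
```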
Military land platforms are often deployed around the world in very different climate zones. Procuring vehicles in a large range of camouflage patterns and colour schemes is expensive and may limit the environments in which they can be used effectively. As such, this paper reports a modelling approach for the optimisation and selection of a colour palette to support operations in diverse environments and terrains. Three different techniques were considered, based upon the differences between vehicle and background in L*a*b* colour space, to predict the optimum (initially single) colour to reduce the vehicle signature in the visible band. Calibrated digital imagery was used for the backgrounds, and a number of scenes were sampled. The three approaches used and reported here are: a) background averaging behind the vehicle; b) background averaging in the area surrounding the vehicle; and c) use of the spatial extension to CIE L*a*b*, S-CIELAB (Zhang and Wandell, Society for Information Display Symposium Technical Digest, vol. 27, pp. 731-734, 1996). Results are compared with natural scene colour statistics. The models used showed good agreement in their colour predictions for individual and multiple terrains or climate zones. A further development of the technique examines the effect of different patterns and colour combinations on the S-CIELAB spatial colour difference metric when scaled for appropriate viewing ranges.
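As a rough illustration of approaches (a) and (b), the sketch below averages background pixels in L*a*b* and scores candidate vehicle colours by their mean colour difference across sampled scenes, using the simple CIE76 Delta E rather than the paper's S-CIELAB spatial extension. Function names and the example colours are placeholders.

```python
# Sketch of background averaging plus colour-difference scoring in CIELAB.
# Not the paper's model; the CIE76 Euclidean Delta E stands in for S-CIELAB.
import numpy as np

def mean_lab(background_pixels_lab):
    """Mean L*a*b* of a background region (behind or surrounding the vehicle)."""
    return np.asarray(background_pixels_lab, float).reshape(-1, 3).mean(axis=0)

def delta_e_ab(lab1, lab2):
    """CIE76 colour difference between two L*a*b* colours."""
    return float(np.linalg.norm(np.asarray(lab1, float) - np.asarray(lab2, float)))

def best_single_colour(candidate_colours_lab, scene_background_means):
    """Pick the candidate with the lowest mean Delta E over all sampled scenes."""
    scores = [np.mean([delta_e_ab(c, bg) for bg in scene_background_means])
              for c in candidate_colours_lab]
    return int(np.argmin(scores)), scores

# Example: three candidate greens/browns against two sampled scene backgrounds.
candidates = [(45.0, -8.0, 20.0), (50.0, 2.0, 15.0), (40.0, -15.0, 30.0)]
backgrounds = [(48.0, -10.0, 22.0), (42.0, -5.0, 18.0)]
print(best_single_colour(candidates, backgrounds))
```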
Primarily focused on military and security environments where there is a need to identify humans covertly and remotely, this paper outlines how recovering human gait biometrics from a multi-spectral imaging system can overcome the failings of traditional biometrics in fulfilling those needs. With the intention of aiding single-camera human gait recognition, an algorithm was developed to accurately segment a walking human from multi-spectral imagery. Sixteen-band imagery from the image replicating imaging spectrometer (IRIS) camera system is used to overcome some of the common problems associated with standard change detection techniques. Fusing the concepts of scene segmentation by spectral characterisation and background subtraction by image differencing gives a uniquely robust approach. This paper presents the results of real trials with human subjects and a prototype IRIS camera system, and compares performance to typical broadband camera systems.
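The fusion described above might be sketched, very loosely, as combining an intensity-based background difference with a spectral-shape (spectral angle) test, so that a pixel is labelled foreground only when both cues agree. This is an assumption-laden illustration, not the IRIS segmentation algorithm; thresholds, band count, and array layout are placeholders.

```python
# Loose sketch of fusing background subtraction with spectral characterisation.
# Assumes a (rows, cols, 16) band cube and a per-pixel background model.
import numpy as np

def spectral_angle(cube, background):
    """Angle (radians) between each pixel spectrum and the background spectrum."""
    dot = (cube * background).sum(-1)
    norm = np.linalg.norm(cube, axis=-1) * np.linalg.norm(background, axis=-1)
    return np.arccos(np.clip(dot / np.maximum(norm, 1e-12), -1.0, 1.0))

def segment_walker(cube, background, diff_thresh=0.1, angle_thresh=0.15):
    """Binary foreground mask from fused intensity and spectral-shape cues."""
    intensity_diff = np.abs(cube - background).mean(-1) > diff_thresh
    shape_diff = spectral_angle(cube, background) > angle_thresh
    return intensity_diff & shape_diff

# Example on synthetic 16-band imagery with a synthetic "subject" region.
bg = np.random.rand(120, 160, 16)
frame = bg.copy()
frame[40:80, 60:80, :] += 0.5
print(segment_walker(frame, bg).sum())
```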
Airborne surveillance and targeting sensors are capable of generating large quantities of imagery, making it difficult for the user to find targets of interest. Automatic target identification (ATI) can assist this process by searching for target-like objects and classifying them, thus reducing operator workload. ATI algorithms developed in the laboratory by QinetiQ have been implemented in real time on ruggedised, flight-capable processing hardware. A series of airborne tests has been carried out to assess the performance of the ATI under real-world conditions, using a Wescam EO/IR turret as the source of imagery. The tests included examples of military vehicles in urban and rural scenarios, with varying degrees of hide and concealment, and were conducted in different weather conditions to assess the robustness of the sensor and ATI combination. This paper discusses the tests carried out and the ATI performance achieved as a function of the test parameters. Conclusions are drawn as to the current state of ATI and its applicability to military requirements.