Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery IX, (2003) https://doi.org/10.1117/12.488307
Multi-pass search and reconnaissance missions provide unique opportunities for hyperspectral target detection systems to operate at drastically reduced false alarm rates. In the simplest example, confirmed false alarms generated by an anomaly detector on an initial pass can be archived for future reference. A more sophisticated approach captures false alarm signatures to update background clutter models that inform detection algorithms. Alternatively, if detections are confirmed as targets on one pass, matched-filter maps based on their spectra can be compared over time to monitor changes. As a final example, when sub-pixel registration is feasible, multi-temporal spectral covariance relations can be estimated from the data and used to detect anomalous changes at low false alarm rates, using no target signature information. All but the simplest of such methods require that the spectral evolution of a terrestrial background -- its chromodynamics -- be modeled well enough that naturally occurring changes are not confused with unnatural ones. This paper describes several detection paradigms that rely on multi-pass missions. Optimal linear algorithms to predict scene and target evolution are discussed, as are more realistic methods with relaxed operational requirements. One of these, called Covariance Equalization, is shown to perform nearly as well as the minimum-error solution based on the matrix Wiener filter, which requires subpixel registration accuracy.
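Covariance Equalization can be sketched as a whiten-then-recolor map that carries pass-1 statistics onto pass-2 statistics. The following numpy sketch assumes the standard symmetric-square-root form of such a transform; the function names are ours, not the paper's, and the paper's exact formulation may differ:

```python
import numpy as np

def cov_equalize(x1, m1, C1, m2, C2):
    """Map pass-1 spectra so their first- and second-order statistics
    match pass 2: y = C2^{1/2} C1^{-1/2} (x - m1) + m2.
    Assumes C1 is full rank; square roots via eigendecomposition."""
    def sqrtm_sym(C):
        w, V = np.linalg.eigh(C)
        return (V * np.sqrt(np.clip(w, 0.0, None))) @ V.T

    def inv_sqrtm_sym(C):
        w, V = np.linalg.eigh(C)
        return (V / np.sqrt(np.clip(w, 1e-12, None))) @ V.T

    L = sqrtm_sym(C2) @ inv_sqrtm_sym(C1)
    return (x1 - m1) @ L.T + m2
```

The appeal of such a map is that the moments (m1, C1, m2, C2) can be estimated from whole regions, so no per-pixel registration between passes is needed.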
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery IX, (2003) https://doi.org/10.1117/12.488562
We use DIRSIG to evaluate algorithms for recognizing 3D objects defined by faces of different orientations and different materials. The experiments consider varying object pose as well as variable environmental conditions. Objects are represented using subspaces defined for the 0.4-2.5 micron spectral range. Spatial resolutions are considered that provide mixtures of multiple object surfaces and background. For recognizing 3D objects in cluttered backgrounds, the orthogonal projection ratio (OPR) is proposed to minimize the effects of noise and approximation error. Background clutter is represented using spectral subspaces that are estimated from the image data. The experiments consider the recognition of several 3D objects with various geometries and surface materials. Both desert and urban scenes are considered, as well as a range of ground spatial distances.
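Subspace-based recognition scores of this family compare a pixel's energy inside a spectral subspace to its residual energy outside it. The paper's OPR is not defined in the abstract; the sketch below is a generic orthogonal-projection ratio that such scores build on, with hypothetical function names:

```python
import numpy as np

def projection_ratio(x, U):
    """Ratio of a pixel vector's energy inside a subspace (columns of U,
    assumed orthonormal) to its residual energy outside it. A generic
    stand-in for subspace-match scores such as OPR, whose exact
    definition is not reproduced here."""
    p = U @ (U.T @ x)                 # orthogonal projection onto span(U)
    r = x - p                         # residual
    return float(p @ p) / max(float(r @ r), 1e-12)
```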
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery IX, (2003) https://doi.org/10.1117/12.487069
We propose a novel approach for identifying the "most unusual" samples in a data set, based on a resampling of data attributes. The resampling produces a "background class," and binary classification is then used to distinguish the original training set from the background. Those samples in the training set that are most like the background (i.e., most unlike the rest of the training set) are considered anomalous. Although anomalies by their nature do not permit a positive definition (if I knew what they were, I wouldn't call them anomalies), one can make "negative definitions" (I can say what does not qualify as an interesting anomaly). By choosing different resampling schemes, one can identify different kinds of anomalies. For multispectral images, anomalous pixels correspond to locations on the ground with unusual spectral signatures or, depending on how feature sets are constructed, unusual spatial textures.
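The resample-then-classify idea can be sketched as follows. Permuting each attribute column independently preserves the marginals but destroys inter-attribute structure, which is one natural resampling scheme; a k-nearest-neighbour vote stands in here for the paper's unspecified binary classifier:

```python
import numpy as np

def resampling_anomaly_scores(X, k=10, seed=0):
    """Score each row of X by how background-like it is.

    The 'background class' is built by permuting each column of X
    independently. Each original sample is scored by the fraction of
    its k nearest neighbours (among all other points) that belong to
    the background class; high scores mark samples that sit where the
    attribute-shuffled background lives, i.e. candidate anomalies."""
    rng = np.random.default_rng(seed)
    B = np.column_stack([rng.permutation(col) for col in X.T])
    Z = np.vstack([X, B])
    is_bg = np.r_[np.zeros(len(X), bool), np.ones(len(B), bool)]
    scores = np.empty(len(X))
    for i, x in enumerate(X):
        d = np.linalg.norm(Z - x, axis=1)
        d[i] = np.inf                     # exclude the point itself
        nn = np.argsort(d)[:k]
        scores[i] = is_bg[nn].mean()
    return scores
```

A point lying on the data's correlation structure has mostly original-class neighbours; a point that breaks the structure sits among the shuffled background and scores high.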
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery IX, (2003) https://doi.org/10.1117/12.487030
Anomaly detection in hyperspectral imagery is a potentially powerful approach for detecting objects of military interest because it does not require atmospheric compensation or target signature libraries. A number of methods have been proposed in the literature, most of which require a parametric model of the background probability distribution to be estimated from the data. There are two potential difficulties with this. First, a parametric model capable of describing the background statistics to an adequate approximation must be postulated; most work has made use of the multivariate normal distribution. Second, the parameters must be estimated sufficiently accurately, which can be problematic for the covariance matrix of high-dimensional hyperspectral data. In this paper we present an alternative view and investigate the capabilities of anomaly detection algorithms starting from a minimal set of assumptions. In particular, we only require the background pixels to be samples from an independent and identically distributed (iid) process, but do not require the construction of a model for this distribution. We investigate a number of simple measures of the 'strangeness' of a given pixel spectrum with respect to the observed background. An algorithm is proposed for detecting anomalies in a self-consistent way. The effectiveness of the algorithms is compared on real hyperspectral data sets with a well-known anomaly detection algorithm from the literature.
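One of the simplest model-free 'strangeness' measures in this spirit is the distance to a pixel's k-th nearest neighbour, which uses only the iid assumption and fits no parametric density. The abstract does not list the paper's specific measures, so the following is an illustrative sketch, not the paper's algorithm:

```python
import numpy as np

def knn_strangeness(X, k=5):
    """Distance of each row of X to its k-th nearest neighbour, used as
    a model-free anomaly score: no background density is fitted."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(D, np.inf)          # a point is not its own neighbour
    return np.sort(D, axis=1)[:, k - 1]
```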
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery IX, (2003) https://doi.org/10.1117/12.493055
As hyperspectral remote sensing technology migrates into operational systems, there is an urgent need to understand the phenomenology associated with the collection parameters and how they relate to the quality of the information extracted from the spectral data for different applications. If such relationships can be established, data collection requirements and tasking strategies can then be formulated for these applications. This paper describes a functional expression, or spectral quality equation, that has been established for object/anomaly detection in the reflective region (0.4 to 2.5 microns) of the spectrum. This spectral quality equation relates the collection parameters (i.e., spatial resolution, spectral resolution, signal-to-noise ratio, and scene complexity) to the probability of correct detection (Pd) for object/anomaly detection at a given probability of false alarm (Pfa). Follow-on work will be performed to establish a spectral quality equation for material identification.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery IX, (2003) https://doi.org/10.1117/12.487052
The problem of the automatic detection and identification of military vehicles in hyperspectral imagery has many possible solutions. The availability and utility of library spectra and the ability to atmospherically correct image data has great influence on the choice of approach. This paper concentrates on providing a robust solution in the event that library spectra are unavailable or unreliable due to differing atmospheric conditions between the data and reference. The development of a number of techniques for the detection and identification of unknown objects in a scene has continued apace over the past few years. A number of these techniques have been integrated into a "Full System Model" (FSM) to provide an automatic and robust system drawing upon the advantages of each. The FSM makes use of novel anomaly detectors and spatial processing to extract objects of interest in the scene which are then identified by a pre-trained classifier, typically a multi-class support vector machine. From this point onwards adaptive feedback is used to control the processing of the system. Stages of the processing chain may be augmented by spectral matching and linear unmixing algorithms in an effort to achieve optimum results depending upon the type of data. The Full System Model is described and the boost in performance over each individual stage is demonstrated and discussed.
Spectral Similarity Measures and Data Dimensionality Reduction I
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery IX, (2003) https://doi.org/10.1117/12.487180
In this paper we examine how the projection of hyperspectral data into smaller-dimensional subspaces can affect the propagation of error. In particular, we show that the nonorthogonality of endmembers in the linear mixing model can cause small changes in band space (as, for example, from the addition of noise) to lead to relatively large changes in the estimated abundance coefficients. We also show that increasing the number of endmembers can actually lead to an increase in the amount of possible error.
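The amplification mechanism can be made concrete for unconstrained least-squares unmixing, where the worst-case gain from a unit spectral perturbation to abundance error is the largest singular value of the pseudoinverse of the endmember matrix, i.e. the reciprocal of its smallest singular value. This is a standard illustration of the effect, not the paper's analysis:

```python
import numpy as np

def abundance_error_gain(E):
    """Worst-case amplification of a unit perturbation in band space
    into abundance error for least-squares unmixing with endmember
    matrix E (bands x endmembers): 1 / smallest singular value of E."""
    return 1.0 / np.linalg.svd(E, compute_uv=False)[-1]
```

Orthonormal endmembers give a gain of 1; nearly collinear endmembers drive the smallest singular value toward zero and the gain up. Adding an endmember column can only shrink (never grow) the smallest singular value, consistent with the abstract's last claim.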
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery IX, (2003) https://doi.org/10.1117/12.484876
For dimension reduction of hyperspectral imagery, we propose a modification to Principal Component Analysis (PCA), also known as the Karhunen-Loeve Transform, in which the basis vectors of the transformation are chosen to be not only orthonormal but also wavelets. Although the eigenvectors of the covariance matrix used by PCA minimize the mean square error over all other choices of orthonormal basis vectors, we show that the proposed set of wavelet basis vectors has several desirable properties. After reducing the dimensionality of the data, we perform a supervised classification of the original and reduced data sets, compare the results, and assess the merits of the transformation.
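The MSE-optimality claim for PCA can be checked directly: projecting onto the top-k covariance eigenvectors gives a reconstruction error no larger than projecting onto any other k orthonormal vectors. A minimal numpy sketch (our function names):

```python
import numpy as np

def pca_basis(X, k):
    """Top-k eigenvectors of the sample covariance (the KLT basis)."""
    Xc = X - X.mean(0)
    w, V = np.linalg.eigh(np.cov(Xc, rowvar=False))
    return V[:, np.argsort(w)[::-1][:k]]

def recon_mse(X, U):
    """Mean squared reconstruction error of X projected onto span(U),
    with U's columns assumed orthonormal."""
    Xc = X - X.mean(0)
    R = Xc - (Xc @ U) @ U.T
    return float((R ** 2).mean())
```

A wavelet basis as proposed in the paper gives up this strict optimality in exchange for its other properties (fixed, fast, data-independent transforms).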
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery IX, (2003) https://doi.org/10.1117/12.488081
This paper addresses the problem of estimating the dimension of a hyperspectral image. Spanning and intrinsic dimension concepts are studied as ways to determine the number of degrees of freedom needed to represent a hyperspectral image. Algorithms for the estimation of spanning and intrinsic dimension are reviewed and applied to hyperspectral images. Estimators are evaluated and compared using simulated and AVIRIS data. The final objective of this work is to develop an algorithm to determine the number of bands to select in a band subset selection algorithm.
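The simplest spanning-dimension estimator of the kind such reviews cover counts how many covariance eigenvalues are needed to capture a chosen fraction of total variance. This is a crude placeholder for the estimators the paper actually compares:

```python
import numpy as np

def spanning_dimension(X, energy=0.999):
    """Number of covariance eigenvalues (largest first) needed to
    capture `energy` of the total variance of X (samples x bands)."""
    w = np.linalg.eigvalsh(np.cov(X, rowvar=False))[::-1]
    frac = np.cumsum(w) / w.sum()
    return int(np.searchsorted(frac, energy) + 1)
```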
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery IX, (2003) https://doi.org/10.1117/12.497142
This presentation is an overview of past and current imaging spectrometer projects at ABB Bomem. Recent spectral imager hardware projects are presented in more detail.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery IX, (2003) https://doi.org/10.1117/12.488238
This paper describes a recent data collection exercise in which the WAR HORSE visible-near-infrared hyperspectral imaging sensor and the IRON HORSE short-wave-infrared hyperspectral imaging sensor were employed to collect wide-area hyperspectral data sets. A preliminary analysis of the data has been performed and results are discussed.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery IX, (2003) https://doi.org/10.1117/12.497540
PHIRST Light is a visible and near-infrared (VNIR) hyperspectral imaging sensor that has been assembled at the Naval Research Laboratory (NRL) using off-the-shelf components. It consists of a Dalsa 1M60 camera mated to a CRI VariSpec liquid crystal tunable filter (LCTF) and a conventional 75mm Pentax lens. This system can be thought of as the modern equivalent of a filter-wheel sensor. Historically, the problem with such sensors has been that images for different wavelengths are collected at different times. This causes spectral correlation problems when the camera is not perfectly still during the collection time for all bands (such as when it is deployed on an airborne platform). However, the PHIRST Light sensor is hard mounted in a Twin Otter aircraft, and is mated to a TrueTime event capture board, which records the precise GPS time of each image frame. Combining this information with the output of a CMIGITS INS/GPS unit permits precise coregistration of images from multiple wavelengths, and allows the formation of a conventional hyperspectral image cube. In this paper we present an overview of the sensor and its deployment, describe the processing steps required to produce coregistered hyperspectral cubes, and show detection results for targets viewed during the Aberdeen Collection Experiment (ACE).
Harvey C. Schau, Dennis J. Garrood, Garrett L. Hall, Ross E. Soulon
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery IX, (2003) https://doi.org/10.1117/12.484877
Tomographic imaging spectrometers have proven to be a cost-effective means to achieve moderate-resolution spectral imaging. Instruments constructed in the visible have demonstrated acceptable performance. Infrared instruments have been developed and tested as proofs of concept; however, optimization issues remain. In this paper we discuss the tradeoff between optical design, disperser design, and mathematical restoration. While the final design choice is often application dependent, we show that issues such as more effective use of the focal plane array, increasing signal-to-noise ratio, and removal of self-emission in the infrared all impact the restoration algorithm, with tradeoff issues associated with each. We introduce the Field Multiplexed Dispersive Imaging Spectrometer (FMDIS), an alternate tomographic imaging spectrometer design. Results of FMDIS designs and restorations will be presented.
Calibration for Remote Spectrometry and Radiometry
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery IX, (2003) https://doi.org/10.1117/12.487342
A procedure has been developed to measure the band-centers and bandwidths for imaging spectrometers using data acquired by the sensor in flight. This is done for each across-track pixel, thus allowing the measurement of the instrument's slit curvature or spectral 'smile'. The procedure uses spectral features present in the at-sensor radiance which are common to all pixels in the scene. These are principally atmospheric absorption lines. The band-center and bandwidth determinations are made by correlating the sensor measured radiance with a modelled radiance, the latter calculated using MODTRAN 4.2. Measurements have been made for a number of instruments including Airborne Visible and Infra-Red Imaging Spectrometer (AVIRIS), SWIR Full Spectrum Imager (SFSI), and Hyperion. The measurements on AVIRIS data were performed as a test of the procedure; since AVIRIS is a whisk-broom scanner it is expected to be free of spectral smile. SFSI is an airborne pushbroom instrument with considerable spectral smile. Hyperion is a satellite pushbroom sensor with a relatively small degree of smile. Measurements of Hyperion were made using three different data sets to check for temporal variations.
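The band-center determination described above amounts to sliding a modelled atmospheric-feature spectrum against the measured radiance and keeping the shift that maximizes their correlation. The sketch below illustrates that step only; in the paper the model radiance comes from MODTRAN 4.2, whereas here `model_fn` is any callable, and the function name is ours:

```python
import numpy as np

def band_center_shift(wl, measured, model_fn, shifts):
    """Estimate a band-centre offset by maximising the correlation
    between a measured spectrum sampled at wavelengths wl and a model
    radiance evaluated on shifted wavelengths."""
    def corr(a, b):
        a = a - a.mean()
        b = b - b.mean()
        return (a @ b) / np.sqrt((a @ a) * (b @ b))
    scores = [corr(measured, model_fn(wl + s)) for s in shifts]
    return shifts[int(np.argmax(scores))]
```

Repeating this per across-track pixel traces out the slit curvature ('smile').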
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery IX, (2003) https://doi.org/10.1117/12.487042
Several of the sensor technologies employed for producing hyperspectral images make use of dispersion across an array to generate the spectral content along a 'line' in the scene and then use scanning to build up the other spatial dimension of the image. Infrared staring arrays rarely achieve 100% fully functioning pixels. In single-band imaging applications 'dead' elements do not cause a problem because simple spatial averaging of neighboring pixels is possible (assuming that a pixel is similar in intensity to its neighbors is a reasonably good approximation). However, when the array is used as described above to produce a spectral image, dead elements result in missing spatial and spectral information. This paper investigates the use of several novel techniques to replace this missing information and assesses them against image data of different spatial and spectral resolutions with the aim of recommending the best technique to use based on the sensor specification. These techniques are also benchmarked against naive spatial averaging.
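The naive spatial-averaging baseline the paper benchmarks against can be sketched as a mask-aware 4-neighbour mean within each band (so that, e.g., along-track neighbours sharing the same dead detector element are skipped). Function names and array layout are our assumptions:

```python
import numpy as np

def fill_dead_spatial(cube, dead_mask):
    """Naive baseline: replace each dead element with the mean of its
    valid 4-neighbours in the same band. cube and dead_mask are both
    (rows, cols, bands); dead_mask is boolean."""
    out = cube.copy()
    rows, cols, _ = cube.shape
    for r, c, b in zip(*np.nonzero(dead_mask)):
        vals = [cube[rr, cc, b]
                for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                if 0 <= rr < rows and 0 <= cc < cols
                and not dead_mask[rr, cc, b]]
        if vals:
            out[r, c, b] = np.mean(vals)
    return out
```

The techniques the paper proposes additionally exploit the spectral dimension, which this purely spatial baseline ignores.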
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery IX, (2003) https://doi.org/10.1117/12.488364
Large size calibration panels are frequently required as reference points for in-scene calibration of remotely sensed spectral data. However, most commercially manufactured calibration panels are costly and sometimes present spectral crossover problems. The panels described in this report are made from readily available fabrics and can provide a lower cost alternative. Four 5.5 x 6.7 m fabric panels that have nominal reflectance of 85%, 38%, 18% or 3% were tested. These fabric panels cost approximately $700 to construct and the materials are available at most hardware stores, which facilitated field panel construction. When properly deployed on a uniform dark-toned surface such as asphalt, gravel, or soil, these alternative panels provide calibration for the entire 0.4 - 2.5 μm spectrum and do not exhibit spectral crossover problems. Remote sensing programs that do not have access to or resources to acquire commercial panels may find these fabric panels a suitable alternative for in-scene calibration.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery IX, (2003) https://doi.org/10.1117/12.487142
Spectroscopic measurements of infrared CO2 transitions in gas plumes are reported, and evaluated for their potential to yield a reliable remote sensing technique for determination of plume temperature. Measurements were made on two types of plumes: a sideways-directed plume from a vehicle exhaust, and a stack plume from a propane-burning portable plume generator. Modeling of CO2 emission near 4.25 μm from the portable plume generator does not yield a temperature diagnostic due to heavy and unpredictable atmospheric absorption. The 4.25 μm band is optically thick in the vehicle exhaust plume measurements. For the vehicle plume, the blackbody Planck equation is used to derive temperatures that agree with results of thermocouple measurements. The ratio of optically thin signals obtained in the vicinity of the 4.25 μm and 14.4 μm transitions is related to temperature in accordance with Boltzmann statistics. For these experimental conditions, the ratio calculated from the Boltzmann distribution has a similar temperature dependence to the ratio obtained from the blackbody Planck equation. Because the ratio of signals obtained at two optically thin wavelengths is independent of concentration, this technique has promise for field measurement of plume temperatures.
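The concentration-independence of a two-wavelength ratio can be illustrated with the blackbody case mentioned in the abstract: the ratio of Planck radiances at 4.25 μm and 14.4 μm depends only on temperature, so an observed ratio can be inverted for T. This is an illustrative sketch with approximate physical constants, not the paper's retrieval:

```python
import numpy as np

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23   # approximate SI constants

def planck(lam, T):
    """Blackbody spectral radiance at wavelength lam (m), temperature T (K)."""
    return (2.0 * H * C ** 2 / lam ** 5) / np.expm1(H * C / (lam * KB * T))

def ratio_temperature(r_obs, lam1=4.25e-6, lam2=14.4e-6, lo=200.0, hi=2000.0):
    """Invert the two-wavelength radiance ratio for temperature by
    bisection; the ratio increases monotonically with T for lam1 < lam2."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if planck(lam1, mid) / planck(lam2, mid) < r_obs:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```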
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery IX, (2003) https://doi.org/10.1117/12.490164
Hyperspectral images in the long wave-infrared can be used for quantification of analytes in stack plumes. One approach uses eigenvectors of the off-plume covariance to develop models of the background that are employed in quantification. In this paper, it is shown that end members can be used in a similar way with the added advantage that the end members provide a simple approach to employ non-negativity constraints. A novel approach to end member extraction is used to extract from 14 to 53 factors from synthetic hyperspectral images. It is shown that the eigenvector and end member methods yield similar quantification performance and, as was seen previously, quantification error depends on net analyte signal.
Mismatch between the temperature of the spectra used in the estimator and the actual plume temperature was also studied. A simple model used spectra from three different temperatures to interpolate to an “observed” spectrum at the plume temperature. Using synthetic images, it is shown that temperature mismatch generally results in increases in quantification error. However, in some cases it caused an offset of the model bias that resulted in apparent decreases in quantification error.
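The background-factor quantification scheme above, whether built from eigenvectors or end members, reduces in its unconstrained form to projecting out the background subspace and regressing the residual on the analyte signature's background-orthogonal part (its net analyte signal, which the abstract links to quantification error). A minimal sketch under those assumptions, without the non-negativity constraints the end-member variant enables:

```python
import numpy as np

def net_analyte_quantify(x, B, s):
    """Estimate analyte strength in spectrum x: project out the
    background subspace spanned by the columns of B, then regress on
    the background-orthogonal part of the analyte signature s."""
    Q, _ = np.linalg.qr(B)                  # orthonormal background basis
    P = np.eye(len(x)) - Q @ Q.T            # projector orthogonal to background
    s_net = P @ s                           # net analyte signal
    return float(s_net @ (P @ x)) / float(s_net @ s_net)
```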
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery IX, (2003) https://doi.org/10.1117/12.488186
Spectral infrared emissivity measurements have been made of a variety of materials both with and without surface water. The surface water was either natural, in the form of dew or residual rainwater, or artificially introduced by manual wetting. Materials naturally high in water content were also measured. Despite the rather diverse spectral population of the underlying materials, they exhibited very similar, featureless, water-like spectra: spectrally flat with a very high magnitude across the emissive infrared region. The implication for exploitation personnel who may use emissive infrared hyperspectral image data is that in areas where condensation is likely (e.g., high humidity) or in areas populated with high-water-content background materials (e.g., highly vegetated areas), discrimination of ambient-temperature targets may prove an intractable problem for hyperspectral infrared sensing. A target at a temperature either below or above ambient may be detectable, but not identifiable, and may be more economically pursued with a far simpler single-band midwave or longwave sensor.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery IX, (2003) https://doi.org/10.1117/12.488188
Using a Fourier transform infrared field spectrometer, spectral infrared radiance measurements were made of several generated gas plumes against both a uniform sky and terrestrial background. Background temperature, spectral complexity, and physical homogeneity each influenced the success of emissive infrared spectral sensing technology in detecting and identifying the presence of a gas plume and its component constituents. As expected, high temperature contrast and uniform backgrounds provided the best conditions for detectability and diagnostic identification. This report will summarize some of SITAC's findings concerning plume detectability, including the importance of plume cooling, plumes in emission and absorption, the effects of optical thickness, and the effects of condensing plumes on gas detection.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery IX, (2003) https://doi.org/10.1117/12.488931
Chemical detection using infrared hyperspectral imaging systems often is limited by the effects of variability of the scene background emissivity spectra and temperature. Additionally, the atmospheric up-welling and down-welling radiance and transmittance are difficult to estimate from the hyperspectral image data, and may vary across the image. In combination, these background variability effects are referred to as "clutter." A study has been undertaken at Pacific Northwest National Laboratory to determine the relative impact of atmospheric variability and background variability on the detection of trace chemical vapors. This study has analyzed Atmospheric Emitted Radiance Interferometer data to estimate fluctuations in atmospheric constituents. To allow separation of the effects of background and atmospheric variability, hyperspectral data was synthesized using large sets of simulated atmospheric spectra, measured background emissivity spectra, and measured high-resolution gas absorbance spectra. The atmosphere was simulated using FASCODE in which the constituent gas concentrations and temperatures were varied. These spectral sets were combined synthetically using a physics model to realize a statistical synthetic scene with a plume present in a portion of the image. Noise was added to the image with the level determined by a numerical model of the hyperspectral imaging instrument. The chemical detection performance was determined by applying a matched-filter estimator to both the on-plume and off-plume regions. The detected levels in the off-plume region were then used to determine the noise equivalent concentration path length (NECL), a measure of the chemical detection sensitivity. The NECL was estimated for numerous gases and for a variety of background and atmospheric conditions to determine the relative impact of instrument noise, background variability, and atmospheric variability.
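The matched-filter estimator applied to the on-plume and off-plume regions has a standard form: plume strength is estimated as s^T C^-1 (x - m), normalized by s^T C^-1 s, with the mean m and clutter covariance C taken from the off-plume region; the spread of the off-plume estimates then converts a detection threshold into an NECL-style sensitivity. A minimal sketch under those standard assumptions (function name ours):

```python
import numpy as np

def matched_filter_estimates(X_offplume, X_test, s):
    """Matched-filter plume-strength estimates for the rows of X_test:
    alpha = s^T C^-1 (x - m) / (s^T C^-1 s), with m and C estimated
    from the off-plume region. The std of off-plume alphas measures
    the noise-equivalent signal level (NECL-style sensitivity)."""
    m = X_offplume.mean(0)
    C = np.cov(X_offplume, rowvar=False)
    Ci_s = np.linalg.solve(C, s)
    denom = float(s @ Ci_s)
    return (X_test - m) @ Ci_s / denom
```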
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery IX, (2003) https://doi.org/10.1117/12.487091
This paper investigates the effect of spectral screening on processing hyperspectral data through Independent Component Analysis (ICA). ICA is a multivariate data analysis method that produces components that are statistically independent. In the context of the linear mixture model, the endmember abundances can be viewed as independent components, with the endmembers forming the columns of the mixing matrix; in essence, ICA processing can be seen as an alternative solution to endmember unmixing. In the context of feature extraction, each feature is represented by an independent component, leading to maximum separability among features. Spectral screening is defined as the reduction of the image cube to a subset of representative pixel vectors, with the goal of achieving a considerable speedup in further processing. Spectral screening rests on two elements: a measure used to assess the similarity between two pixel vectors, and a threshold value. Two pixel vectors are similar if the value yielded by the similarity measure is smaller than the threshold, and dissimilar otherwise. The screened subset must be formed such that any two vectors in the subset are dissimilar and, for any vector in the original image cube, there is a similar vector in the subset. The method we present uses the spectral angle as the distance measure. A necessary condition for the success of spectral screening is that the result of processing the reduced subset can be extended to the entire data set. Intuitively, a larger subset leads to increased accuracy; however, the overhead introduced by spectral screening is directly proportional to the subset size. Our investigation has focused on finding the "ideal" threshold value that maximizes both accuracy and speedup. The practical effectiveness of the method was tested on HYDICE data. The results indicate that a considerable speedup is obtained without a considerable loss of accuracy, allowing significantly faster processing of hyperspectral images.
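The greedy screening rule described in the abstract can be sketched in a few lines. This is an illustrative reconstruction under stated assumptions, not the authors' implementation; the threshold value and toy data below are arbitrary:

```python
import numpy as np

def spectral_angle(a, b):
    """Spectral angle (radians) between two pixel vectors."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def spectral_screen(pixels, threshold):
    """Greedy screening: keep a pixel only if it is dissimilar
    (angle >= threshold) to every vector already in the subset.
    By construction, any two kept vectors are mutually dissimilar,
    and every screened-out pixel has a similar vector in the subset."""
    subset = []
    for p in pixels:
        if all(spectral_angle(p, q) >= threshold for q in subset):
            subset.append(p)
    return np.array(subset)
```

The subset size, and hence the accuracy/speedup trade-off the paper studies, is controlled entirely by the threshold.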
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery IX, (2003) https://doi.org/10.1117/12.485933
Developing proper models for hyperspectral imaging (HSI) data allows for useful and reliable algorithms for data exploitation. These models provide the foundation for development and evaluation of detection, classification, clustering, and estimation algorithms. To date, real-world HSI data have typically been modeled as a single multivariate Gaussian; however, it is well known that real data often exhibit non-Gaussian behavior with multi-modal distributions. Instead of a single multivariate Gaussian distribution, HSI data can be modeled as a finite mixture model, where each of the mixture components need not be Gaussian. This paper focuses on techniques used to segment HSI data into homogeneous clusters. Once the data have been segmented, each individual cluster can be modeled, and the benefits provided by homogeneous clustering of the data versus non-clustering explored. One promising technique uses the Expectation-Maximization (EM) algorithm to cluster the data into Elliptically Contoured Distributions (ECDs). A larger family of distributions, the family of ECDs includes the multivariate Gaussian distribution and exhibits most of its properties. An ECD is uniquely defined by its multivariate mean, its covariance, and the distribution of its Mahalanobis (or quadratic) distance metric. This metric lets multivariate data be characterized using a univariate statistic and can be adjusted to more closely match the longer-tailed distributions of real data. This paper focuses on three issues. First, the multivariate Elliptically Contoured Distribution mixture model is defined. Second, various techniques are described that segment the mixed data into homogeneous clusters. Most of this work focuses on the EM algorithm and the multivariate t-distribution, which is a member of the family of ECDs and provides longer-tailed distributions than the Gaussian. Lastly, results using HSI data from the AVIRIS sensor are shown, and the benefits of clustered data are presented.
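The univariate statistic at the heart of the ECD characterization is the squared Mahalanobis distance. A minimal sketch (the function name is hypothetical; this only illustrates the statistic, not the EM clustering itself):

```python
import numpy as np

def mahalanobis_sq(X, mean, cov):
    """Squared Mahalanobis distances of the row vectors in X
    with respect to a given mean vector and covariance matrix."""
    diff = X - mean
    inv = np.linalg.inv(cov)
    # einsum computes diff_i @ inv @ diff_i for every row i
    return np.einsum('ij,jk,ik->i', diff, inv, diff)
```

For multivariate Gaussian data this statistic follows a chi-square distribution with dimension-many degrees of freedom; for the multivariate t it follows a (scaled) F distribution, which is one way the longer tails show up in a single univariate quantity.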
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery IX, (2003) https://doi.org/10.1117/12.487475
In signal processing, signals are often treated as Gaussian random variables in order to simplify processing when, in fact, they are not. Similarly, in multispectral image processing, grayscale images from individual spectral bands do not have simple, predictable distributions, nor are the bands independent from one another. Equalization and histogram-shaping techniques have been used for many years to map signals and images to more desirable probability mass functions such as uniform or Gaussian. Extending these techniques to multivariate random variables, i.e., jointly across multiple bands or channels, can be difficult due to an insufficient number of samples for constructing a multidimensional distribution. If successful, however, the resulting components can be made both uncorrelated and statistically independent. A method is presented here that achieves reasonably good equalization and Gaussian shaping in multispectral imagery. When combined with principal components analysis, the resulting components are not only uncorrelated, as would be expected, but are statistically independent as well.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery IX, (2003) https://doi.org/10.1117/12.488561
We examine the performance of illumination-invariant face recognition in hyperspectral images on a database of 200 subjects. The images are acquired over the near-infrared spectral range of 0.7-1.0 microns. Each subject is imaged over a range of facial orientations and expressions. Faces are represented by local spectral information for several tissue types. Illumination variation is modeled by low-dimensional linear subspaces of reflected radiance spectra. One hundred outdoor illumination spectra measured at Boulder, Colorado are used to synthesize the radiance spectra for the face tissue types. Weighted invariant subspace projection over multiple tissue types is used for recognition. Illumination-invariant face recognition is tested for various face rotations as well as different facial expressions.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery IX, (2003) https://doi.org/10.1117/12.485985
Accurate coregistration of images from the Multispectral Thermal Imager (MTI) is needed to properly align bands for spectral analysis and physical retrievals, such as water surface temperature, land-cover classification, or small target identification. After accounting for spacecraft motion, optical distortion, and geometrical perspective, the irregularly-spaced pixels in the images must be resampled to a common grid. What constitutes an optimal resampling depends, to some extent, on the needs of the user. A good resampling trades off radiometric fidelity, contrast preservation for small objects, and cartographic accuracy -- and achieves this compromise without unreasonable computational effort. The standard MTI coregistration product originally used a weighted-area approach to achieve this irregular resampling, which generally over-smoothes the imagery and reduces the contrast of small objects. Recently, other resampling methods have been implemented to improve the final coregistered image. These methods include nearest-neighbor resampling and a tunable, distance-weighted resampling. We will discuss the pros and cons of various resampling methods applied to MTI images, and show results of comparing the contrast of small objects before and after resampling.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery IX, (2003) https://doi.org/10.1117/12.485702
This paper describes an algorithm for the registration of imagery collected by the Multispectral Thermal Imager (MTI). The Automated Image Registration (AIR) algorithm is entirely image-based and is implemented in an automated fashion, which avoids any requirement for human interaction. The AIR method differs from the "direct georeferencing" method used to create our standard coregistered product since explicit information about the satellite's trajectory and the sensor geometry are not required. The AIR method makes use of a maximum cross-correlation (MCC) algorithm, which is applied locally about numerous points within any two images being compared. The MCC method is used to determine the row and column translations required to register the bands of imagery collected by a single SCA (band-to-band registration), and the row and column translations required to register the imagery collected by the three SCAs for any individual band (SCA-to-SCA registration). Of particular note is the use of reciprocity and a weighted least squares approach to obtaining the band-to-band registration shifts. Reciprocity is enforced by using the MCC method to determine the row and column translations between all pair-wise combinations of bands. This information is then used in a weighted least squares approach to determine the optimum shift values between an arbitrarily selected reference band and the other 15 bands. The individual steps of the AIR methodology, and the results of registering MTI imagery through use of this algorithm, are described.
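The core MCC step, finding the row and column translation that maximizes normalized cross-correlation between two image chips, might be sketched as a brute-force search over circular shifts. The window size and search radius are assumptions; the AIR implementation details are not given in the abstract:

```python
import numpy as np

def mcc_shift(ref, img, max_shift=3):
    """Estimate the (row, col) translation of `img` relative to `ref`
    by maximizing normalized cross-correlation over a small window
    of circular shifts."""
    best, best_cc = (0, 0), -np.inf
    a = ref - ref.mean()
    for dr in range(-max_shift, max_shift + 1):
        for dc in range(-max_shift, max_shift + 1):
            b = np.roll(np.roll(img, dr, axis=0), dc, axis=1)
            b = b - b.mean()
            cc = (a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum())
            if cc > best_cc:
                best, best_cc = (dr, dc), cc
    return best
```

Repeating this about many local points, then reconciling all pair-wise band shifts by weighted least squares, is what the abstract's reciprocity step adds on top.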
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery IX, (2003) https://doi.org/10.1117/12.488532
The Multispectral Thermal Imager (MTI) provides a highly informative source of remote sensing data. However, its analysis and exploitation can be very challenging. Effective utilization of this imagery by an image analyst typically requires a consistent and timely means of locating regions of interest. Many available image analysis and segmentation techniques are slow, not robust to spectral variability from view to view or within a spectrally similar region, and/or require a significant amount of user intervention to achieve a segmentation corresponding to self-similar regions within the data. This paper discusses a segmentation approach that exploits the gross spectral shape of MTI data. In particular, we propose a nonparametric approach to perform coarse-level segmentation that can stand alone or serve as a precursor to other image analysis tools. In comparison to previous techniques, the key characteristics of this approach are its simplicity, speed, and consistency. Most importantly, it requires relatively few user inputs and determines the number of clusters, their extent, and the data assignment directly from the data.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery IX, (2003) https://doi.org/10.1117/12.486960
The Savannah River Technology Center (SRTC) has conducted four reflectance vicarious calibrations at Ivanpah Playa, California, since July 2000 in support of the MTI satellite. The multi-year study shows temporal, spatial, and spectral variability at the playa. The temporal variability in the wavelength-dependent reflectance and emissivity across the playa suggests a dependency on precipitation during the winter and early spring seasons. Satellite imagery acquired in September and November 2000, May 2001, and March 2002, in conjunction with ground truth during the September, May, and March campaigns and precipitation records, was used to demonstrate the correlation observed at the playa.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery IX, (2003) https://doi.org/10.1117/12.485801
The Savannah River Technology Center (SRTC) has conducted four vicarious reflectance calibrations at Ivanpah Playa, California, since July 2000 in support of the MTI satellite. The potential of the playa as a thermal calibration site was also investigated during the campaigns with a mobile Fourier transform infrared spectrometer. The multi-year study shows temporal and spatial variability in the spectral emissivity. The ground-truth temperature and emissivity correlate quite well with the data from the MTI satellite imagery. This paper presents the time-dependent emissivities measured during our ground-truth campaigns and the corresponding satellite imagery.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery IX, (2003) https://doi.org/10.1117/12.487210
Typical existing fire detection algorithms for airborne and satellite-based imagers exploit Planckian radiation in the 3.5–5 μm and 8–14 μm spectral regions. These algorithms can have high false alarm rates, and the validation of subpixel detection remains a lingering problem. We present empirical testing of fire detection algorithms on controlled, uniform burning and hot targets of known area. Image data sets of the targets were captured at different altitudes with the Modular Imaging Spectrometer Instrument (MISI), which captures hyperspectral VNIR and multispectral SWIR/MWIR/LWIR imagery. The target areas range from larger than the MISI IFOV to less than 0.5% of the IFOV. In situ temperatures were monitored with thermocouples and pyrometers. Spectroradiometric data of targets and backgrounds were also collected during the experiment. The data were analyzed using existing algorithms as well as novel approaches, and the algorithms are compared by determining the minimum resolvable fire pixel fraction.
Richard K. Kiang, Stephanie M. Hulina, Penny M. Masuoka, David M. Claborn
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery IX, (2003) https://doi.org/10.1117/12.487016
Mosquito-borne infectious diseases are a serious public health concern, not only for the less developed countries, but also for developed countries like the U.S. Larviciding is an effective method for vector control, and adverse effects on non-target species are minimized when mosquito larval habitats are properly surveyed and treated. Remote sensing has proven to be a useful technique for large-area ground cover mapping and is hence an ideal tool for identifying potential larval habitats. Locating small larval habitats, however, requires data with very high spatial resolution. Textural and contextual characteristics become increasingly evident at higher spatial resolution, and per-pixel classification often leads to suboptimal results. In this study, we use pan-sharpened Ikonos data, with a spatial resolution approaching 1 meter, to classify potential mosquito larval habitats for a test site in South Korea. The test site is in a predominantly agricultural region. When spatial characteristics were used in conjunction with spectral data, reasonably good classification accuracy was obtained for the test site. In particular, irrigation and drainage ditches are important larval habitats, but their footprints are too small to be detected with the original spectral data at 4-meter resolution. We show that the ditches are detectable using automated classification on pan-sharpened data.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery IX, (2003) https://doi.org/10.1117/12.486376
To demonstrate the utility of EO-1 data, combined analysis of panchromatic, multispectral (ALI, Advanced Land Imager) and hyperspectral (Hyperion) data was conducted. In particular, the value added by HSI with additional spectral information will be illustrated. Data sets from Coleambally Irrigation Area, Australia on 7 March 2000 and San Francisco Bay area on 17 January 2000 are employed for the analysis. Analysis examples are shown for surface characterization, anomaly detection, spectral unmixing and image sharpening.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery IX, (2003) https://doi.org/10.1117/12.486023
The problem of remote estimation of chlorophyll content in vegetation is considered. A large number of reflectance spectra were recorded for winter wheat leaves with various chlorophyll contents. The first derivatives of the reflectance spectral curves were computed and analyzed with respect to their interrelation with pigment content. The ratio of two maxima in these derivative plots was found to correlate with chlorophyll content and was used for its estimation. To diminish the level of noise introduced into the first-derivative plots by the measuring system, Savitzky-Golay smoothing was applied, using a second-order polynomial fit over a 9-point convolution window. A genetic algorithm for locating the maxima in the first-derivative plots was tested with a positive result. Pairwise and multiple regression as well as a neural network approach were tested for estimation of chlorophyll content.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery IX, (2003) https://doi.org/10.1117/12.487392
Unmixing hyperspectral images inherently transfers error from the original hyperspectral image to the unmixed fraction-plane image. In essence, by reducing the entire information content of an image down to a handful of representative spectra, a significant amount of information is lost. In an image with low spectral diversity that obeys the linear mixture model (such as a simple geologic scene), this loss is negligible. However, inherent problems arise in unmixing a hyperspectral image where the actual number of spectrally distinct items in the image exceeds the resolving ability of an unmixing algorithm given sensor noise. This process is demonstrated here with a simple statistical analysis. Stepwise unmixing, where a subset of endmembers is used to unmix each pixel, provides a means of mitigating this error. The simplest case of stepwise unmixing, constrained unmixing, is statistically examined here. This approach provides a significant reduction in unmixed-image error with a corresponding increase in goodness of fit. Some suggestions for future algorithms are presented.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery IX, (2003) https://doi.org/10.1117/12.487471
Hyperspectral images can be conveniently and quickly interpreted by detecting spectral endmembers present in the image and unmixing the image in terms of those endmembers. However, spectral diversity common in hyperspectral images leads to high errors in the unmixing process by increasing the likelihood that spectral anomalies will be detected as endmembers. We have developed an algorithm to detect target-like spectral anomalies in the image which are likely to detrimentally interfere with the endmember detection process. The hyperspectral image is preprocessed by detecting target-like spectra and masking them from the subsequent endmember detection analysis. By partitioning target-like spectra from the scene, a set of spectral endmembers is detected which can be used to more accurately unmix the image. The vast majority of data in the original image can be interpreted in terms of these detected spectral endmembers. The few spectra which represent the bulk of the spectral diversity in the scene can then be interpreted individually.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery IX, (2003) https://doi.org/10.1117/12.487058
One of the challenges in remote sensing image processing is subpixel detection, where the target size is smaller than the ground sampling distance. In this case, targets of interest are smaller than the pixel resolution and are therefore embedded in a single pixel. Under such circumstances, these targets can only be detected spectrally at the subpixel level, not spatially as ordinarily done by classical image processing techniques. This paper investigates an issue more challenging than subpixel detection: subpixel target size estimation. More specifically, when a single-pixel-embedded target is detected, we would like to know the size of this particular target within the pixel. Our proposed approach is fully constrained linear spectral unmixing (FCLSU), which allows us to estimate the abundance fraction of the target present in the pixel, which in turn determines the size of the target. In order to evaluate the proposed FCLSU, two sets of experiments are conducted, computer simulations and real HYDICE data, where computer simulations are used to plant targets to validate our approach and real data are used to demonstrate the utility of FCLSU in practical applications.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery IX, (2003) https://doi.org/10.1117/12.487492
Hyperspectral image analysis is an important component of advanced hyperspectral image understanding. We present a new approach that identifies unique materials and the abundance of these materials in a hyperspectral image. This approach uses physical constraints on material abundances and reflectances, and avoids the presence of a dark material class by parameterizing pixel illumination. The results are optimally generated in both supervised and unsupervised modes. Applications of the image analysis approach are also presented.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery IX, (2003) https://doi.org/10.1117/12.497802
This paper addresses the use of multiplicative iterative algorithms to compute the abundances in unmixing of hyperspectral pixels. The advantage of iterative over direct methods is that they allow easy incorporation of the positivity and sum-to-one constraints on the abundances, while also allowing better regularization of the solution in the ill-conditioned case. Derivations of two iterative algorithms, based on minimization of the least-squares and Kullback-Leibler distances respectively, are presented. The resulting algorithms are the same as the ISRA and EMML algorithms, respectively, from the emission tomography literature. We show that the ISRA algorithm, and not the EMML algorithm, computes the maximum likelihood estimate of the abundances under Gaussian assumptions, while the EMML algorithm computes a minimum-distance solution based on the Kullback-Leibler generalized distance. In emission tomography, the EMML algorithm computes the maximum likelihood estimate of the reconstructed image. We also show that, since the unmixing problem is in general overconstrained and has no exact solution, acceleration techniques for the EMML algorithm, such as RBI-EMML, will not converge.
Spectral Similarity Measures and Data Dimensionality Reduction II
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery IX, (2003) https://doi.org/10.1117/12.487044
Spectral angle mapper (SAM) has been widely used as a spectral similarity measure for multispectral and hyperspectral image analysis; it has been shown to be equivalent to Euclidean distance when the spectral angle is relatively small. More recently, a stochastic measure called spectral information divergence (SID) has been introduced, which models the spectrum of a hyperspectral image pixel as a probability distribution so that spectral variations can be captured more effectively in a stochastic manner. This paper develops a new hyperspectral discriminant measure that is a mixture of SID and SAM. More specifically, let xi and xj denote two hyperspectral image pixel vectors with corresponding spectra si and sj. SAM(si,sj) is the spectral angle between si and sj, and SID(si,sj) measures the information divergence between them. The new measure, referred to as the (SID,SAM)-mixed measure, has two variations: SID(si,sj)×tan[SAM(si,sj)] and SID(si,sj)×sin[SAM(si,sj)], where tan[SAM(si,sj)] and sin[SAM(si,sj)] are the tangent and sine of the angle between si and sj. The (SID,SAM)-mixed measure combines the strengths of both SID and SAM in spectral discriminability. To demonstrate its utility, a comparative study is conducted among the new measure, SID, and SAM, in which the discriminatory power of the (SID,SAM)-mixed measure is shown to be significantly improved over both SID and SAM.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery IX, (2003) https://doi.org/10.1117/12.487534
In this article we present a method for hyperspectral band selection that yields superior classification results while only using a subset of the available bands. The approach originates from a comprehensive physical and mathematical understanding of the distance metrics used to compare hyperspectral signals, and it exploits an exact decomposition of a common metric, the Spectral Angle Mapper (SAM), to select bands which increase the angular contrast between target classes. Using real spectroradiometer and sensor data collected by the HYDICE sensor, the technique significantly improves the discrimination performance for two spectrally similar classes, while using only a fraction of the available bands. The approach is extended to a hierarchical architecture for material identification using spectral libraries that is shown to outperform the traditional angle-based classifier which employs all available bands. Consequently, better material identification performance can be achieved using significantly fewer bands, thus introducing dramatic benefits for the design and utilization of spectral libraries.
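The abstract does not reproduce the paper's exact SAM decomposition. As a rough illustration of angle-based band selection, one can greedily grow a band subset that maximizes the spectral angle between two class-mean spectra; the function and data below are hypothetical stand-ins for the paper's criterion:

```python
import numpy as np

def greedy_band_selection(mu_a, mu_b, n_bands):
    """Greedily pick bands that maximize the spectral angle between two
    class-mean spectra restricted to the selected subset.
    (Illustrative stand-in for the paper's SAM-decomposition criterion.)"""
    selected = []
    remaining = list(range(len(mu_a)))

    def angle(idx):
        # Spectral angle using only the bands listed in idx.
        a, b = mu_a[idx], mu_b[idx]
        cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        return np.arccos(np.clip(cos, -1.0, 1.0))

    for _ in range(n_bands):
        best = max(remaining, key=lambda k: angle(selected + [k]))
        selected.append(best)
        remaining.remove(best)
    return sorted(selected)
```

With two class means that differ strongly in only a few bands, the greedy pass gravitates toward exactly those bands.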
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery IX, (2003) https://doi.org/10.1117/12.497002
A comparison of the spectral bands recommended by different data-separation measures, and of the reliability and robustness of those measures, was performed on artificially generated target and background IR radiance data sets. The Mahalanobis distance, Signal-to-Clutter Ratio (SCR), Bhattacharyya distance, and Informational Difference criteria were employed to obtain the best single and paired spectral bands for separating two data classes, 'targets' and 'backgrounds', in day and night conditions. The results show that when there is a distinct temperature difference between the two data classes, all the criteria perform similarly, with only small differences in the recommended spectral bands and general performance. However, in daylight conditions with multiple types of backgrounds and targets, criteria based on the assumption of concentrated data classes (SCR, Mahalanobis) tend to provide contradictory results, while those based on general statistical principles (Bhattacharyya, Informational Difference) produce unequivocal results that are relatively unaffected by data-set complexity.
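Of the criteria named above, the Bhattacharyya distance between two Gaussian classes has a simple closed form; the sketch below shows the standard formula (for illustration, not the authors' implementation):

```python
import numpy as np

def bhattacharyya(mu1, cov1, mu2, cov2):
    """Bhattacharyya distance between two Gaussian classes:
    a mean-separation term plus a covariance-mismatch term."""
    cov = 0.5 * (cov1 + cov2)
    diff = mu1 - mu2
    term1 = 0.125 * diff @ np.linalg.solve(cov, diff)
    term2 = 0.5 * np.log(np.linalg.det(cov) /
                         np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
    return term1 + term2
```

Ranking single bands or band pairs by this distance is a typical way such criteria are applied to band selection.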
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery IX, (2003) https://doi.org/10.1117/12.485942
Feature extraction, implemented as a linear projection from a higher-dimensional space to a lower-dimensional subspace, is an important issue in hyperspectral data analysis. The reduction must be done in a manner that minimizes redundancy while maintaining information content. This paper proposes feature extraction and band-subset selection methods based on relative entropy criteria. Their main objective is to reduce the dimensionality of the data while maintaining the capability to discriminate objects of interest from the cluttered background. The methods accomplish this by maximizing the difference between the data distribution in the lower-dimensional subspace and the standard Gaussian distribution, measured using relative entropy, also known as information divergence. An unsupervised Projection Pursuit algorithm based on optimization of the relative entropy is presented, along with an unsupervised version for selecting bands in hyperspectral data. The relative entropy criterion measures the information divergence between the probability density function of the feature subset and the Gaussian probability density function, which enhances the separability of the unknown clusters in the lower-dimensional space. One advantage of these methods is that no labeled samples are used. The methods were tested using simulated data as well as remotely sensed data.
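A minimal projection-pursuit index of the kind described, measuring the divergence of a 1-D projection's histogram from a standard Gaussian, might look like the sketch below (the binning choices are illustrative assumptions; this is not the paper's optimization algorithm):

```python
import numpy as np

def relative_entropy_index(data, direction, bins=20):
    """Projection-pursuit index: KL divergence between the histogram of the
    standardized 1-D projection and a discretized standard Gaussian."""
    z = data @ direction
    z = (z - z.mean()) / z.std()              # standardize the projection
    hist, edges = np.histogram(z, bins=bins, range=(-4, 4), density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    gauss = np.exp(-0.5 * centers**2) / np.sqrt(2 * np.pi)
    width = edges[1] - edges[0]
    p = hist * width                          # empirical bin probabilities
    q = gauss * width                         # Gaussian bin probabilities
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))
```

Directions whose projections look Gaussian score near zero; strongly non-Gaussian (e.g., bimodal) projections score much higher, which is what projection pursuit maximizes.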
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery IX, (2003) https://doi.org/10.1117/12.497122
Several of the leading atmospheric compensation algorithms for LWIR hyperspectral data require the detection and exclusion of low-emissivity objects from the analysis. In this paper, nine different methods for detecting low-emissivity objects are presented. In testing, the proposed algorithms were found to suffer from temperature sensitivities. Further testing was performed without filtering to assess the performance of Scaled and Unscaled ISAC under a range of environmental and system parameters. Detection performance is quantified directly in terms of probability of detection versus probability of false alarm, and in terms of atmospheric state parameters.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery IX, (2003) https://doi.org/10.1117/12.487272
Spectral registration errors occur in hyperspectral (HS) data when the reported channel center wavelengths accompanying a data cube (commonly called the wavefile) are inaccurate. Poor spectral registration can lead to errors in water vapor retrievals and in the correction of other atmospheric gases. This, in turn, leads to erroneous overall atmospheric correction of HS data and reduced exploitation performance. We have developed a method to detect poor spectral registration using major atmospheric spectral features: the Fraunhofer "G" line at 430 nm, two O2 absorption lines at 762 nm and 1268 nm, three water vapor absorption bands at 817 nm, 935 nm, and 1135 nm, and a CO2 absorption line at 2055 nm. We check the alignment of the average, uncorrected background features against MODTRAN-modeled spectral radiance data. We present our approach to spectral registration and the wavefile correction method we developed, based on the accurate channel center wavelengths determined for the various atmospheric features, along with results from various sensor types.
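The alignment check can be illustrated with a toy version: slide a modeled absorption feature against the measured average spectrum and keep the wavelength shift giving the highest correlation. This is a sketch under simplifying assumptions; the paper's wavefile correction is more involved:

```python
import numpy as np

def estimate_shift(measured, modeled, wavelengths, max_shift_nm=5.0, step=0.1):
    """Estimate a spectral-registration offset (nm) by sliding a modeled
    atmospheric feature against the measured average spectrum and choosing
    the shift with the highest correlation."""
    best_shift, best_corr = 0.0, -np.inf
    for shift in np.arange(-max_shift_nm, max_shift_nm + step, step):
        # Resample the modeled feature as if its center were offset by `shift`.
        shifted = np.interp(wavelengths, wavelengths + shift, modeled)
        c = np.corrcoef(measured, shifted)[0, 1]
        if c > best_corr:
            best_shift, best_corr = shift, c
    return best_shift
```

Applied around, say, the 762 nm O2 feature, a recovered nonzero shift flags a wavefile that needs correction.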
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery IX, (2003) https://doi.org/10.1117/12.487621
The EO-1 satellite is part of NASA's New Millennium Program (NMP). It carries three imaging sensors: the multispectral Advanced Land Imager (ALI), Hyperion, and the Atmospheric Corrector. Hyperion is a high-resolution hyperspectral imager capable of resolving 220 spectral bands (0.4 to 2.5 micron) at 30 m resolution; it images a 7.5 km by 100 km land area per scene. Hyperion has been the only space-borne HSI data source since the launch of EO-1 in late 2000. The discussion begins with the unique capabilities of hyperspectral sensing for coastal characterization: (1) most ocean-feature algorithms are semi-empirical retrievals, and HSI has all the spectral bands needed to provide legacy with previous sensors and to explore new information; (2) coastal features are more complex than those of the deep ocean, and their coupled effects are best resolved with HSI; and (3) with contiguous spectral coverage, atmospheric compensation can be performed with more accuracy and confidence, especially since atmospheric aerosol effects are most pronounced in the visible region, where coastal features lie. EO-1 data over Chesapeake Bay from 19 February 2002 are analyzed. It is first illustrated that hyperspectral data inherently provide more information for feature extraction than multispectral data, even though Hyperion has a lower SNR than ALI. Chlorophyll retrievals are also shown, and the results compare favorably with data from other sources. The analysis illustrates the potential value of Hyperion (and HSI in general) for coastal characterization. Future measurement requirements (airborne and space-borne) are also discussed.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery IX, (2003) https://doi.org/10.1117/12.488438
Passive hyperspectral image data and bathymetric lidar data are complementary data types that can be used effectively in tandem. Hyperspectral data contain information related to water quality, depth, and bottom type, while bathymetric lidar data contain precise information about water depth and qualitative information about water quality and bottom reflectance. Together the two systems provide constraints on each other. For example, lidar-derived depths can be used to constrain spectral radiative transfer models for hyperspectral data, which allows the bottom reflectance to be estimated for each pixel. Similarly, lidar depths can be used to calibrate models that permit the estimation of depths from the hyperspectral data cube on the raster defined by the spectral imagery. We demonstrate these capabilities by fusing hyperspectral data from the LASH and AVIRIS spectrometers with depth data from the SHOALS bathymetric laser to achieve bottom classification and increase the density of depth measurements in Kaneohe Bay, Hawaii. These capabilities are envisioned as operating modes of the next-generation SHOALS system, CHARTS, which will deploy a bathymetric laser and a spectrometer on the same platform.
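The depth-constrained estimation of bottom reflectance can be illustrated with a simplified shallow-water model (an assumed form, not the paper's full radiative transfer model): per band, remote-sensing reflectance is a depth-weighted blend of a deep-water term and a bottom term, and with the lidar depth known it inverts in closed form. Here `rrs_deep` and the attenuation `K` are assumed known per band:

```python
import numpy as np

def bottom_reflectance(rrs, rrs_deep, K, depth):
    """Invert a simplified shallow-water reflectance model for bottom
    reflectance rho_b, given lidar depth d:
        rrs = rrs_deep*(1 - exp(-2*K*d)) + (rho_b/pi)*exp(-2*K*d)
    """
    atten = np.exp(-2.0 * K * depth)
    return np.pi * (rrs - rrs_deep * (1.0 - atten)) / atten
```

This is the sense in which lidar depths "constrain" the spectral model: with depth fixed, the only unknown left in each band is the bottom term.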
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery IX, (2003) https://doi.org/10.1117/12.488363
Spectra were taken that describe free water, ice, and snow against vegetation and inorganic backgrounds. The reflectance of water films on a Spectralon background, ranging from 0.008 to 5.35 mm in depth, varied with water depth and with the transmittance and absorptance properties of water. Water films > 3.5 mm quenched the short-wave infrared (SWIR) reflectance, even though moderate visible-near-infrared reflectance still occurred from the water-Spectralon surfaces. Ice and snow have a similar number of absorption bands to water, but their absorption maxima differ from those of water. River float-ice and glacial ice have diagnostic absorption features at 1.02 and 1.25 μm and negligible reflectance beyond 1.33 μm. New powder snow, new wet snow, and older deep snow packs have similarly shaped reflectance spectra, and thin snow accumulations readily masked the underlying surfaces. These snow-pack surfaces have a small asymmetric absorption feature at 0.90 μm and strong asymmetric absorption features at 1.02, 1.25, and 1.50 μm, and they retain measurable SWIR reflectance. An avalanche snow pack had low SWIR reflectance, similar to the ice spectra. Water, ice, and snow surfaces thus have spectrally distinct features that differentiate them from each other and from the background surfaces.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery IX, (2003) https://doi.org/10.1117/12.486311
Hyperspectral remote sensing has the potential to serve as an effective coral monitoring system from either spaceborne or airborne sensors. The problems to be addressed in hyperspectral imagery of coastal waters relate to the medium, which exhibits high scattering and absorption, and to the object to be detected. The object, in this case coral reefs or other types of ocean floor, has a weak signal as a consequence of its interaction with the medium. Retrieving information about these targets requires the development of mathematical models and processing tools in the areas of inversion, image reconstruction, and detection. This paper presents the development of algorithms that do not use labeled samples to detect coral reefs under shallow coastal waters. Synthetic data were generated to simulate data gathered by a high-resolution imaging spectrometer (hyperspectral) sensor. A semi-analytic model that simplifies the radiative transfer equation was used to quantify the interaction between the object of interest, the medium, and the sensor. The Tikhonov regularization method was used as a starting point to arrive at an inverse formulation that incorporates a priori information about the target. This expression is used in a pixel-by-pixel inversion process to estimate the ocean-floor signal. The a priori information takes the form of previously measured spectral signatures of objects of interest, such as sand, corals, and sea grass.
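A Tikhonov formulation with a spectral prior has the familiar closed form sketched below (a generic illustration of the starting point the abstract names, not the paper's final pixel-wise inversion; `A`, `x_prior`, and `lam` stand in for the forward model, the measured library signature, and the regularization weight):

```python
import numpy as np

def tikhonov_with_prior(A, b, x_prior, lam):
    """Regularized inversion biased toward a prior spectrum:
    minimize ||A x - b||^2 + lam * ||x - x_prior||^2, whose closed form is
    x = (A^T A + lam I)^{-1} (A^T b + lam x_prior)."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b + lam * x_prior)
```

As lam grows the estimate is pulled toward the prior signature; as lam shrinks it approaches the unregularized least-squares solution.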
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery IX, (2003) https://doi.org/10.1117/12.485935
A physically-constrained localized linear mixing model suitable for processing multi/hyperspectral imagery for Terrain Categorization (TERCAT) applications is investigated. Unlike the basic spectral linear mixing model, which typically includes all potential endmembers in a set simultaneously in the model for each site in an image, the proposed approach restricts the local model at each site to a subset of endmembers, using localized spectral/spatial constraints to narrow the selection process. This approach is used to reduce the observed instability of conventional linear mixture analysis in addressing TERCAT problems for scenes with a large number of endmembers. Experiments are conducted on an 18-channel GERIS scene, airborne-collected over Northern Virginia, that contains a diverse range of terrain features, showing the benefit of this method as compared to the basic linear mixture analysis approach for TERCAT applications.
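The localized restriction can be sketched as ordinary least-squares unmixing over a per-pixel endmember subset, with a softly enforced sum-to-one constraint (illustrative only; the paper's spectral/spatial selection of the subset is not shown, and the weight `w` is an assumed device):

```python
import numpy as np

def local_unmix(pixel, endmembers, active):
    """Solve the sum-to-one-constrained linear mixing model using only a
    locally selected endmember subset (columns listed in `active`).
    Sum-to-one is enforced softly by an appended, heavily weighted row."""
    E = endmembers[:, active]
    w = 1e4  # large weight makes the abundance sum ~1
    A = np.vstack([E, w * np.ones((1, E.shape[1]))])
    b = np.concatenate([pixel, [w]])
    frac, *_ = np.linalg.lstsq(A, b, rcond=None)
    out = np.zeros(endmembers.shape[1])
    out[active] = frac     # inactive endmembers get zero abundance
    return out
```

Restricting `active` per site is what stabilizes the solution when the full endmember set is large and nearly collinear.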
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery IX, (2003) https://doi.org/10.1117/12.488331
The normal compositional model (NCM) simultaneously models subpixel mixing and intra-class variation in multidimensional imagery. It may be used as the foundation for the derivation of supervised and unsupervised classification and detection algorithms. Results from applying the algorithm to AVIRIS SWIR data collected over Cuprite, Nevada are described. The NCM class means are compared with library spectra using the Tetracorder algorithm. Of the eighteen classes used to model the data, eleven are associated with minerals that are known to be in the scene and are distinguishable in the SWIR, five are identified with Fe-bearing minerals that are not further classifiable using SWIR data, and the remaining two are spatially diffuse mixtures. The NCM classes distinguish (1) high and moderate temperature alunites, (2) dickite and kaolinite, and (3) high and moderate aluminum concentration muscovite. Estimated abundance maps locate many of the known mineral features. Furthermore, the NCM class means are compared with corresponding endmembers estimated using a linear mixture model (LMM). Of the eleven identifiable (NCM class mean, LMM endmember) pairs, ten are consistently identified, while the NCM estimation procedure reveals a diagnostic feature of the eleventh that is more obscure in the corresponding endmember and results in conflicting identifications.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery IX, (2003) https://doi.org/10.1117/12.486382
Conventional remote sensing classification techniques model the data in each class with a multivariate Gaussian distribution. The inadequacy of such algorithms stems from the Gaussian assumption for the class-component densities, which is only an assumption rather than a demonstrable property of natural spectral classes. In this paper, we present an Independent Component Analysis (ICA) based approach for unsupervised classification of multi/hyperspectral imagery. ICA, employed within a mixture model, estimates the data density in each class and models class distributions with non-Gaussian structure (i.e., leptokurtic or platykurtic p.d.f.s), formulating the ICA mixture model (ICAMM). It finds the independent components and the mixing matrix for each class using the extended information-maximization learning algorithm, and computes the class membership probabilities for each pixel. We apply the ICAMM to unsupervised classification of images from a multispectral sensor, the Positive Systems Multi-Spectral Imager, and a hyperspectral sensor, the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS). Four feature extraction techniques (Principal Component Analysis, Segmented Principal Component Analysis, Orthogonal Subspace Projection, and Projection Pursuit) are considered as a preprocessing step to reduce the dimensionality of the hyperspectral data. The results demonstrate that the ICAMM significantly outperforms the K-means algorithm for land cover classification of remotely sensed images.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery IX, (2003) https://doi.org/10.1117/12.486731
Hyperspectral information (HSI) data are commonly categorized by a description of the dominant physical geographic background captured in the image cube. In other words, HSI categorization is commonly based on a cursory, visual assessment of whether the data are of desert, forest, urban, littoral, jungle, alpine, etc., terrains. Additionally, the design of HSI collection experiments is often based on acquiring data of the various backgrounds, or of objects of interest within the various terrain types. These data are used for assessing and quantifying algorithm performance as well as for algorithm development activities. Here, results of an investigation into the validity of this backgrounds-driven mode of characterizing the diversity of hyperspectral data are presented. HSI data are described quantitatively in the space where most algorithms operate: n-dimensional (n-D) hyperspace, where n is the number of bands in an HSI data cube. Nineteen metrics designed to probe hyperspace are applied to 14 HYDICE HSI data cubes that represent nine different backgrounds. Each of the 14 sets (one for each HYDICE cube) of 19 metric values was analyzed for clustering. With the present set of data and metrics, there is no clear, unambiguous break-out of metrics based on the nine different geographic backgrounds. The break-outs clump seemingly unrelated data types together, e.g., littoral and urban/residential. Most metrics are normally distributed and indicate no clustering; one metric is one outlier away from normal (i.e., two clusters); and five comprise two distributions (i.e., two clusters). Overall, there are three different break-outs that do not correspond to conventional background categories. Implications of these preliminary results are discussed, as are recommendations for future work.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery IX, (2003) https://doi.org/10.1117/12.485919
A segmentation algorithm for underwater multispectral images based on the Hough transform (HT) is presented. The algorithm consists of three stages. The first stage computes the HT of the original image and segments the desired object along its boundary; the HT has several known challenges, such as the end-point (infinite line) and connectivity problems, which lead to false contours. Most of these problems are resolved in the next two stages. The second stage clusters the original image: fuzzy C-means clustering segmentation is used to capture the local properties of the desired object. In the third stage, the edges of the clustering segmentation are extended to the closest HT-detected lines. The boundary information (HT) and the local properties (fuzzy C-means) of the desired object are fused, eliminating false contours. The performance of the segmentation algorithm is demonstrated on underwater multispectral images generated in the laboratory, containing known objects of varying size and shape.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery IX, (2003) https://doi.org/10.1117/12.497020
This paper presents two approaches to automatic target recognition (ATR) by trainable algorithms. The first approach assumes that the measurements coming from the objects remain unchanged during the time between the learning and recognition stages. For outdoor scenes this approach is viable only when both learning and recognition can be completed within minutes, which is difficult to achieve in practice. It is more realistic to acquire training image data shortly before surveying the scene of interest; computer-intensive or interactive learning algorithms can then be applied. We exemplify this approach qualitatively by detecting buildings and asphalt roads in a typical urban scene from AISA hyperspectral sensor data. The second, new approach we derive takes into account the joint changes of all targets and backgrounds under dynamic external factors. This requires multitemporal surveying of an area specially selected for training an ATR system; at the later recognition stage, the system can then exploit the learning results in real time. Experimental verification of the new approach was performed using a fixed FLIR-type camera that surveyed a site containing more than 50 thermally different objects, with learning and recognition spaced one week apart. The joint thermal prediction model proved to work and was applied to detecting and identifying a scene anomaly -- an intruder.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery IX, (2003) https://doi.org/10.1117/12.486952
NIMA has initiated a program to evaluate high resolution commercial sensors developed by Space Imaging, Digital Globe, Orbimage and others. This paper presents the results of accuracy evaluations of Ikonos orthophotos, stereo pairs and triangulation block adjustments. Ikonos image products were compared to a globally distributed set of Ground Control Points (GCPs) and the calculated geometric accuracy was found to be better than the published Ikonos specifications. This paper also describes some important matters relating to sensor geometry models and pays special attention to a replacement math model called the rational polynomial coefficient (RPC) model. These coefficients were used to model Ikonos stereo images, and a special adjustable form of the RPC rational function was used in multi-strip block adjustments. Comparisons to ground control and the rigorous Ikonos geometry model are provided.
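An RPC model maps normalized ground coordinates to image coordinates as a ratio of polynomials. The sketch below keeps that structure with a reduced, hypothetical term list (real RPC sets use 20 cubic terms per polynomial, with separate coefficient sets for line and sample):

```python
import numpy as np

def rpc_eval(coeffs_num, coeffs_den, lat, lon, h):
    """Evaluate one image coordinate of a rational polynomial coefficient
    (RPC) model: a ratio of two polynomials in normalized ground
    coordinates (latitude, longitude, height)."""
    terms = np.array([1.0, lat, lon, h, lat * lon, lat * h, lon * h,
                      lat**2, lon**2, h**2])
    return terms @ coeffs_num / (terms @ coeffs_den)
```

The "adjustable form" mentioned in the abstract typically adds a small bias/affine correction on top of this ratio, which is what gets solved for in a block adjustment.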
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery IX, (2003) https://doi.org/10.1117/12.492928
An approach to conflation/registration of images that does not depend on identifying common points is being developed. It uses the method of algebraic invariants to provide a common set of coordinates to images using continuous chains of line segments formally described as polylines. It is shown the invariant algebraic properties of the polylines provide sufficient information to automate conflation. When there are discrepancies between the image data sets, robust measures of the possibility and quality of match (measures of correctness) are necessary. Decision making and the usability of the resulting conflation depends on such quality control measures. These measures may also be used to mitigate the effects of sensor and observational artifacts. This paper describes the theory of algebraic invariants and presents a conflation/registration method and measures of correctness of feature matching.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery IX, (2003) https://doi.org/10.1117/12.486059
Research is being conducted into the usefulness of hyperspectral data for geologic mapping applications. Hyperspectral data provide a means of identifying surface mineralogy, which in turn indicates lithology. The data analyzed for this work were collected by the HYDICE (VIS-SWIR) and SEBASS (LWIR) airborne imaging spectrometers. Airborne spectrometers can deliver 1-meter spatial resolution, which allows detailed geologic maps to be created; however, the first operational satellite-based hyperspectral systems will not deliver this level of detail. Data sets at 5, 10, 20, and 30 meters were therefore simulated by degrading the 1-meter airborne data through block averaging of pixels. A comparison is presented of the effects of these lower resolutions on endmember identification and the resultant geologic mapping. The hyperspectral-derived maps are compared directly to the best available ground-based geologic maps as a means of understanding how spatial resolution translates into map scale. In general, the results indicate that while spatial detail is rapidly lost as resolution degrades, spectral detail tends to be retained, which allows accurate moderate-scale mapping.
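The block-averaging degradation used to simulate coarser ground sample distances is simple to reproduce (a generic sketch, assuming a rows × cols × bands cube layout):

```python
import numpy as np

def degrade(cube, factor):
    """Simulate a coarser ground sample distance by block-averaging
    factor x factor pixel blocks in each band of a
    (rows, cols, bands) cube; edge pixels that do not fill a block
    are trimmed."""
    r, c, b = cube.shape
    r2, c2 = r // factor, c // factor
    trimmed = cube[:r2 * factor, :c2 * factor, :]
    return trimmed.reshape(r2, factor, c2, factor, b).mean(axis=(1, 3))
```

Applying this with factors 5, 10, 20, and 30 to 1-meter data yields the simulated resolution series described above.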
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery IX, (2003) https://doi.org/10.1117/12.488936
This paper investigates whether and how oversampling techniques can be usefully applied to hyperspectral images. Oversampling occurs when a signal is sampled at a rate above the Nyquist frequency; when it does, the excess sampling rate can be traded for precision. Specifically, one bit of precision can be gained if the signal has been oversampled by a factor of four. This paper first investigates whether spectral oversampling actually occurs in hyperspectral images, then examines its usefulness in classification. Simulations were performed with both synthetic and real images. The results indicate that oversampling does occur for many real objects, so knowledge of what is being searched for is crucial in deciding whether oversampling techniques can be used. The classification results indicate that, for synthetic images, a relatively large amount of noise is needed before these techniques have a significant impact on classification. For real images, however, an improvement in classification was observed for both supervised and unsupervised algorithms in all simulations.
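The oversampling-for-precision trade can be illustrated with a toy numeric sketch; the signal, dither, and 8-bit depth below are hypothetical, not drawn from the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Slowly varying "spectrum" sampled well above Nyquist, digitized at 8 bits.
n = 4096
t = np.linspace(0.0, 1.0, n, endpoint=False)
signal = 0.5 + 0.4 * np.sin(2 * np.pi * 3 * t)
dither = rng.uniform(-0.5, 0.5, n) / 255       # decorrelates rounding errors
quantized = np.round((signal + dither) * 255) / 255

# Trade 4x oversampling for ~1 extra bit: average non-overlapping groups of 4.
decimated = quantized.reshape(-1, 4).mean(axis=1)
reference = signal.reshape(-1, 4).mean(axis=1)

err_raw = (quantized - signal).std()           # quantization-scale error
err_avg = (decimated - reference).std()        # roughly halved by averaging
```

Averaging four nearly independent quantization errors reduces their standard deviation by about a factor of two, which is exactly one bit of precision.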
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery IX, (2003) https://doi.org/10.1117/12.487893
The theory of asymptotic eigenvalue distributions of sample covariance matrices has been applied to array processing and model identification problems that require characterization of signal and noise modes in vector-valued observations. It naturally applies in cases where the dimensionality of the observation space is large compared with the signal model order. A similar situation holds for most hyperspectral image observations. Hyperspectral data is frequently described in terms of a "signal" component composed of linear combinations of endmember basis spectra, plus random additive "noise" from the sensor and environment. The number of resolvable signal modes is typically much smaller than the number of spectral bands, and most of the orthogonal spectral dimensions generated by a principal components analysis are dominated by noise.
Analytical characterization of the "noise eigenmodes" of a hyperspectral data cube supports the development of objective methods for estimating image noise statistics, signal-to-noise ratio, and the complexity and content of the underlying spectral scene. This paper reviews some fundamental results in eigenvalue distribution theory for high-dimensional data, and explores potential applications of the theory to hyperspectral data analysis. Specific applications developed and illustrated in the paper include scene-based estimation of noise-equivalent spectral radiance (NESR), and automated selection of signal-bearing and noise-limited subspaces for spectral analysis.
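As a rough illustration of this idea, the following NumPy sketch compares sample-covariance eigenvalues against the Marchenko-Pastur upper edge, a standard asymptotic bound for noise eigenvalues in high dimensions; the band count, pixel count, noise level, and planted signal are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(1)
p, n = 100, 2000              # spectral bands, pixels
sigma2 = 0.01                 # per-band noise variance

# Pure sensor noise plus a single strong "endmember" signal mode.
noise = rng.normal(0.0, np.sqrt(sigma2), size=(n, p))
endmember = rng.normal(size=p)
endmember /= np.linalg.norm(endmember)
abundance = 0.5 * rng.normal(size=n)           # signal well above the noise
data = noise + np.outer(abundance, endmember)

S = data.T @ data / n                          # sample covariance
eigvals = np.linalg.eigvalsh(S)                # ascending order

# Marchenko-Pastur upper support edge for aspect ratio q = p/n: noise
# eigenvalues concentrate below sigma2 * (1 + sqrt(q))^2.
q = p / n
mp_upper = sigma2 * (1 + np.sqrt(q)) ** 2

# Eigenvalues above the edge are declared signal-bearing modes.
n_signal = int(np.sum(eigvals > mp_upper))
```

Eigenvalues under the edge define the noise-limited subspace (usable for NESR-style noise estimation), while those above it define the signal-bearing subspace.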
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery IX, (2003) https://doi.org/10.1117/12.487060
The matched subspace filter (MSF) is a target detection algorithm implemented in Data Fusion Corporation's HYPER-TOOLS, a suite of hyperspectral image analysis tools. This generalized likelihood ratio test is designed to detect target signatures while suppressing known interference signatures in a hyperspectral image. The importance of interference suppression is illustrated in detection performance experiments in which spectral taggant panels are located in hyperspectral reflectance images. The MSF may also be successfully applied to data in which the spectra's continua have been removed in order to isolate spectral absorption features. A connection is established between the detection performance of the MSF and the subspace angle between the target and interference subspaces.
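A minimal sketch of a matched subspace detector in the classical GLRT (Scharf-Friedlander) form illustrates how interference suppression enters the statistic; this is not the HYPER-TOOLS implementation, and the target and interference subspaces here are random stand-ins:

```python
import numpy as np

def proj_perp(A):
    """Orthogonal projector onto the complement of span(A)."""
    Q, _ = np.linalg.qr(A)
    return np.eye(A.shape[0]) - Q @ Q.T

def msf_statistic(x, T, B):
    """Matched subspace detector: energy explained by the target
    subspace T after interference in span(B) is projected out,
    normalized by the residual (noise) energy."""
    P_b = proj_perp(B)                     # suppress interference only
    P_tb = proj_perp(np.hstack([T, B]))    # suppress target + interference
    num = x @ P_b @ x - x @ P_tb @ x
    den = x @ P_tb @ x
    return num / den

rng = np.random.default_rng(2)
p = 50
T = rng.normal(size=(p, 1))                # hypothetical target signature
B = rng.normal(size=(p, 3))                # hypothetical interference basis

background = B @ rng.normal(size=3) + 0.05 * rng.normal(size=p)
target_pix = background + 2.0 * T[:, 0]

score_bg = msf_statistic(background, T, B)
score_tg = msf_statistic(target_pix, T, B)
```

Because the interference lies in span(B), it is annihilated by both projectors and cancels from the numerator, which is what makes the test insensitive to known interference signatures.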
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery IX, (2003) https://doi.org/10.1117/12.488560
We develop new algorithms based on multiband correlation models for the recognition of hyperspectral textures in three dimensions. The dependence of the observed texture of a material sample on viewing and illumination angles can have varying degrees of complexity. The bidirectional texture function (BTF) describes the appearance of a textured surface as a function of the illumination and viewing directions. The lack of appropriate hyperspectral image sets has limited attempts to characterize the BTF for 3D hyperspectral textures. In this paper, we use the DIRSIG model to generate a set of hyperspectral images over ranges of illumination and viewing angles in the 0.4 to 2.5 μm spectral region. We evaluate the performance of our methods for recognizing three-dimensional hyperspectral textures under unknown illumination angle.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery IX, (2003) https://doi.org/10.1117/12.485962
The real-time implementation of several detection and classification techniques is discussed: Orthogonal Subspace Projection (OSP), the Filter Vector Algorithm (FVA), the Generalized Likelihood Ratio Test (GLRT), the RX algorithm, Constrained Energy Minimization (CEM), the Target Constrained Interference Minimization Filter (TCIMF), and Constrained Linear Discriminant Analysis (CLDA). Two data-dimensionality limitations are encountered in real-time processing. One is that the number of classes to be classified cannot exceed the data dimensionality, i.e., the number of spectral bands (for some techniques); the other is that the number of independent pixel vectors used for processing must be larger than the number of bands for the sample correlation or covariance matrix to have full rank (for some techniques). In this paper, we present methods to address these two limitations: the former is solved by generating artificial band images to expand the data dimensionality, while the latter is solved by using a positive definite correlation matrix as the initial matrix. Experiments using hyperspectral and multispectral data demonstrate the effectiveness of these methods.
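The second remedy, seeding the recursively accumulated correlation matrix with a positive definite initial matrix, can be sketched as follows; the band count and the loading factor delta are illustrative values, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(3)
bands, delta = 32, 1e-2

# Causal, pixel-by-pixel accumulation of the correlation matrix.
# Starting from delta * I keeps R invertible even while the number
# of processed pixels is still smaller than the number of bands.
R = delta * np.eye(bands)
count = 0
for _ in range(10):                 # only 10 pixels seen so far, 10 < 32 bands
    x = rng.normal(size=bands)      # stand-in for an incoming pixel vector
    R += np.outer(x, x)
    count += 1

R_hat = R / count                   # full rank despite count < bands
eigmin = np.linalg.eigvalsh(R_hat).min()
```

Filters such as CEM or TCIMF can then invert R_hat from the very first pixels of a line-by-line collection, instead of waiting until enough pixels have arrived for the unregularized matrix to become nonsingular.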
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery IX, (2003) https://doi.org/10.1117/12.486371
The main challenge in retrieving information from hyperspectral sensors is that their high dimensionality is rarely matched by a comparable amount of a priori data, so the parameters of the detection problem cannot be well estimated. This lack of sufficient a priori information yields a rank-deficient estimation problem, which in turn increases both the false alarm rate and the probability of a miss during classification. An approach based on a regularization technique applied to the data collected from the hyperspectral sensor is used to simultaneously minimize the probabilities of false alarm and miss. This procedure is implemented using algorithms that apply regularization by biasing the covariance matrix, which enables a simultaneous reduction of the false alarm probability and the miss probability, thus enhancing maximum likelihood parameter estimation.
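A common concrete form of such covariance biasing is diagonal loading, i.e., shrinkage toward a scaled identity; the sketch below, with invented dimensions, shows how it restores invertibility when training pixels are scarcer than bands:

```python
import numpy as np

rng = np.random.default_rng(4)
p, n = 60, 30                  # more bands than training pixels

X = rng.normal(size=(n, p))
S = np.cov(X, rowvar=False)    # sample covariance: rank <= n - 1 < p

# Bias the estimate toward a scaled identity (diagonal loading).
# alpha is an illustrative shrinkage weight, not a tuned value.
alpha = 0.1
S_reg = (1 - alpha) * S + alpha * (np.trace(S) / p) * np.eye(p)

rank_S = np.linalg.matrix_rank(S)
cond_ok = bool(np.all(np.linalg.eigvalsh(S_reg) > 0))
```

The unbiased estimate S is singular and cannot be inverted by a detector, while S_reg is strictly positive definite at the cost of a controlled bias.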
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery IX, (2003) https://doi.org/10.1117/12.487297
A cloud cover detection algorithm was developed for application to EO-1 Hyperion hyperspectral data. The algorithm uses only bands in the reflected solar spectral regions to discriminate clouds from surface features and was designed to be used on board the EO-1 satellite as part of the EO-1 Extended Mission Phase of the EO-1 Science Program. The cloud cover algorithm uses only 6 bands to discriminate clouds from other bright surface features such as snow, ice, and desert sand. The code was developed using 20 Hyperion scenes with varying cloud amount, cloud type, underlying surface characteristics, and seasonal conditions. Results from the application of the algorithm to these test scenes are given, with a discussion of the accuracy of the procedure used in the cloud cover discrimination. Compared to subjective estimates of scene cloud cover, the algorithm was typically within a few percent of the estimated total cloud cover.
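A toy band-threshold cloud screen in the same spirit might look like the following; the band indices, thresholds, and the SWIR/NIR ratio test are illustrative stand-ins, not the actual Hyperion algorithm's bands or values:

```python
import numpy as np

def cloud_mask(refl, nir_band=0, swir_band=1,
               bright_thresh=0.3, ratio_thresh=0.8):
    """Toy reflective-band cloud screen (illustrative only).
    refl: (rows, cols, bands) reflectance cube."""
    bright = refl[..., nir_band] > bright_thresh       # clouds are bright
    # Snow/ice is dark in the SWIR while cloud stays relatively bright,
    # so a SWIR/NIR ratio separates the two bright classes.
    ratio = refl[..., swir_band] / np.clip(refl[..., nir_band], 1e-6, None)
    not_snow = ratio > ratio_thresh
    return bright & not_snow

scene = np.zeros((4, 4, 3))
scene[0, 0] = [0.60, 0.55, 0.2]   # cloud-like: bright in NIR and SWIR
scene[1, 1] = [0.70, 0.10, 0.0]   # snow-like: bright NIR, dark SWIR
mask = cloud_mask(scene)
cloud_fraction = 100.0 * mask.mean()
```

A scene cloud percentage for comparison against subjective estimates then falls out of the mask directly, as above.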
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery IX, (2003) https://doi.org/10.1117/12.487495
Modern optical sensors can provide high-quality multi/hyperspectral measurements of apparent radiance with high spectral and spatial resolution. A driving consideration, if the full potential of this information stream is to be realized, is mitigation of the effects of the unavoidable uncertain elements that contribute to the radiance at aperture. These include atmospheric, geometric, and shadowing effects, as well as variations in the optical properties of the target(s). Many techniques have recently been developed to address various aspects of this problem, and much progress has been made. A general technique follows from calculation of a linear subspace for the target vector, whose rank and extent reflect the uncertainty in the target radiance. With global coverage of thin cirrus exceeding eighty percent, it is natural to seek improved target detection performance for operational systems by anticipating the deleterious effects of thin clouds. We describe preliminary results for target detection within original and simulated hypercubes, with and without intervening thin clouds, using TSSP (target sub-space detection). The advantages of this technique for retrieval through realistic clouds, which are non-plane-parallel with spatially varying optical properties, will also be discussed.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery IX, (2003) https://doi.org/10.1117/12.486713
The Littoral Airborne Sensor, Hyperspectral (LASH) system is a stabilized, hyperspectral pushbroom sensor capable of high-resolution imaging. We have implemented a sub-pixel detection algorithm based on stochastic mixing models and integrated it with the LASH hardware/software system for real-time operation and detection. Initial field tests have demonstrated reliable detection of high-contrast targets down to the 30% sub-pixel level with false alarm rates below ~10⁻⁷. The LASH sensor system thus provides a powerful means of detecting small targets over large search areas, making it valuable for search and rescue and a wide range of other applications.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery IX, (2003) https://doi.org/10.1117/12.487371
A multiframe selective information fusion technique derived from robust error estimation theory has versatile image processing applications, including multispectral information fusion and the construction of a wide-angle focused synthetic frame that integrates small-angle focused regions from distinct video frames. An example combining both applications is presented: the processing of a color video of a scene imaged through turbulent air. The three multispectral red-green-blue components of each color image are first transformed into gray-scale images, which are fused as independent video streams to achieve three focused wide-angle fields of view. The multispectral fusion of the resulting synthetic RGB frames, using the same multiframe information fusion technique, then combines all significant details from the three multispectral synthetic frames into a single synthetic output frame.
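A crude sketch of per-pixel selective fusion (keeping, at each pixel, the value from the frame with the strongest local detail) conveys the flavor of the technique; the Laplacian focus measure and the frames below are invented for illustration, not the paper's robust-estimation formulation:

```python
import numpy as np

def fuse_frames(frames):
    """Per-pixel selective fusion: at every pixel keep the value from the
    frame with the strongest local detail, measured by a 1-D Laplacian
    along the columns (a crude focus measure).
    frames: (n_frames, rows, cols) grayscale stack."""
    lap = np.abs(np.roll(frames, 1, axis=2) - 2 * frames
                 + np.roll(frames, -1, axis=2))
    best = lap.argmax(axis=0)                     # sharpest frame per pixel
    rows, cols = np.ogrid[:frames.shape[1], :frames.shape[2]]
    return frames[best, rows, cols]

# Two frames of the same scene, each in focus over a different half.
f0 = np.zeros((8, 8)); f0[:, :4] = np.tile([0., 1., 0., 1.], (8, 1))
f1 = np.zeros((8, 8)); f1[:, 4:] = np.tile([1., 0., 1., 0.], (8, 1))
fused = fuse_frames(np.stack([f0, f1]))
```

Running this independently on the R, G, and B gray-scale streams, then fusing the three synthetic frames the same way, mirrors the two-stage pipeline described above.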
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery IX, (2003) https://doi.org/10.1117/12.486810
We present an overview of the Naval Postgraduate School's new LINUS instrument. This is a spectral imager designed to observe atmospheric gas plumes by means of absorption spectroscopy, using background Rayleigh-scattered daylight as an illumination source. It is a pushbroom instrument, incorporating a UV-intensified digital camera, interchangeable gratings and filters, and a DC servo-controlled image scanning system. LINUS has been developed to operate across both the near-ultraviolet and the short visible wavelength portions of the spectrum in overlapping passbands. This paper provides an outline of LINUS's design, operation and capabilities, and it summarizes results from initial laboratory and field trials.
Calibration for Remote Spectrometry and Radiometry
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery IX, (2003) https://doi.org/10.1117/12.487098
Hyperspectral imaging (HSI) sensors collect spatially resolved data in hundreds of spectral channels. As the technology matures and finds broad application, data downlink from the collection platform and near real-time processing remain key challenges, especially for near-term spaceborne sensors. It is desirable to process the data on board for near real-time analysis and to downlink compressed data allowing near full spectral recovery for post-mission analysis. Principal component analysis (PCA) can be used to determine the reduced dimensionality and to separate noise components in the data. While PCA is useful for image feature analysis such as smoke/cloud discrimination (Griffin, et al., 2000), it can also be used as a data compression tool. With PCA, the majority of the information in an HSI data cube is effectively compressed into a small number of principal components. The data volume is significantly reduced while the feature contrast is enhanced, and spectral information can be recovered from the compressed data with minimal loss. In this paper, the reconstructed data are compared to the original "truth" data with difference analysis using sample AVIRIS imagery. This methodology also allows HSI data to be used adaptively for various multispectral band simulations without the constraint of data volume and processing burden. Based on AVIRIS data, emulation of MODIS sensor bands is carried out and compared with the PCA-reconstructed data. Two products are also derived and compared, the Normalized Difference Vegetation Index (NDVI) and the integrated column water vapor (CWV), using both the full set of AVIRIS data and the reconstructed spectral information.
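The PCA compress-then-reconstruct idea can be sketched as a truncated SVD; the cube below is synthetic (a low-rank endmember mixture plus noise), not AVIRIS data, and the component count is an illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(5)
pixels, bands, k = 500, 120, 8         # keep 8 principal components

# Synthetic low-rank "scene" (5 endmembers) plus weak sensor noise.
endmembers = rng.uniform(size=(5, bands))
abund = rng.dirichlet(np.ones(5), size=pixels)
cube = abund @ endmembers + 0.001 * rng.normal(size=(pixels, bands))

# On-board compression: project onto the top-k principal components.
mean = cube.mean(axis=0)
U, s, Vt = np.linalg.svd(cube - mean, full_matrices=False)
scores = U[:, :k] * s[:k]              # downlinked: pixels x k
basis = Vt[:k]                         # downlinked: k x bands

# Ground-side reconstruction for near full spectral recovery.
recon = scores @ basis + mean
rel_err = np.linalg.norm(recon - cube) / np.linalg.norm(cube)
compression = (pixels * bands) / (scores.size + basis.size + mean.size)
```

Because the scene variance lives in a few components, the downlinked volume shrinks by an order of magnitude while the reconstructed spectra remain close enough to support band simulation and products such as NDVI.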
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery IX, (2003) https://doi.org/10.1117/12.487461
We present a Gaussian-model-based approach for robust and automatic extraction of roads from very low resolution satellite imagery. First, the input image is filtered to suppress regions where the likelihood of a road pixel is low. Then, the road magnitude and orientation are computed by evaluating the responses of a quadruple orthogonal line filter set. A mapping from the line domain to the vector domain is used to determine the line strength and orientation at each point. A Gaussian model is fitted to each point, and matching models are updated recursively. The iterative process consists of finding the connected road points, fusing them with the previous image, passing them through the directional line filter set, and computing new magnitudes and orientations. The road segments are updated at each iteration, and the process continues until there are no further changes in the extracted roads. Experimental results demonstrate the success of the proposed algorithm.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery IX, (2003) https://doi.org/10.1117/12.487468
High-dimensional spectral imagery obtained from multispectral, hyperspectral, or even ultraspectral bands generally provides complementary characteristics and analyzable information. Synthesizing these data sets into a composite image containing such complementary attributes, in accurate registration and congruence, would provide truly connected information about land cover for the remote sensing community. In this paper, a novel feature selection algorithm applied to greedy modular eigenspaces (GME) is proposed to explore a multi-class classification technique using fused data gathered by the MODIS/ASTER airborne simulator (MASTER) and the Airborne Synthetic Aperture Radar (AIRSAR) during the PacRim II campaign. The proposed approach, based on a synergistic use of these fused data, represents an effective and flexible utility for land cover classification in earth remote sensing. An optimal positive Boolean function (PBF) based multi-classifier is built by using the labeled samples of these data as classifier parameters in a supervised training stage. It exploits the positive and negative sample learning ability of the minimum classification error criterion to improve classification accuracy. It is shown that the proposed method significantly improves the precision of image classification.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery IX, (2003) https://doi.org/10.1117/12.499604
With its combination of good spatial and spectral resolution, visible to near infrared spectral imaging from aircraft or spacecraft is a highly valuable technology for remote sensing of the earth's surface. Typically it is desirable to eliminate atmospheric effects on the imagery, a process known as atmospheric correction. In this paper we review the basic methodology of first-principles atmospheric correction and present results from the latest version of the FLAASH (Fast Line-of-Sight Atmospheric Analysis of Spectral Hypercubes) algorithm. We show some comparisons of ground truth spectra with FLAASH-processed AVIRIS data, including results obtained using different processing options, and with results from the ACORN algorithm that derive from an older MODTRAN4 spectral database.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery IX, (2003) https://doi.org/10.1117/12.487564
ATIS Company is developing a new optical remote sensor based on GALAAD, a multispectral imaging technology patented worldwide in 1998. The prototype has demonstrated its capabilities as an infrared imaging spectrometer, allowing real-time remote detection, monitoring, and localization of hazardous gas plumes. Based on a novel imaging spectrometer concept, the technology uses an uncooled IRFPA (ULIS 320 x 240 bolometer) to significantly reduce the production and running costs of the device. The core of the GALAAD technology is a dynamic continuous spectral filtering mechanism which, unlike standard spectrometers, builds an entire spectrum for each pixel with a spectral resolution down to 25 nm. The technology works for any gas having an absorption spectrum in the sensitivity band of the sensor (here the 7 to 14 μm region). Processing of the resulting multispectral images allows the detection and identification of the gases present in the atmosphere from their specific infrared signatures. The concentration is quantified and overlaid directly on the infrared image to monitor the site in a clear and intelligible way.