Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVII, 804801 (2011) https://doi.org/10.1117/12.900893
This PDF file contains the front matter associated with SPIE Proceedings Volume 8048, including the Title Page, Copyright information, Table of Contents, and Conference Committee listing.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVII, 804802 (2011) https://doi.org/10.1117/12.881447
The purpose of this paper is to introduce a general type of detection fusion that allows combining a set of basic detectors
into one, more versatile, detector. The fusion can be performed based on the spectral information contained in a pixel,
global characteristics of the background and target spaces, as well as local spatial information. The new approach shown
in this paper is especially promising in the context of recent geometric and topological approaches that produce complex
structures for the background and target spaces.
We show specific examples of generalized fusion and present some results on false alarm rates and probabilities of
detection of fused detectors. We show that continuum fusion is a special case of generalized fusion. Our new framework
allows better understanding of continuum fusion, as well as other useful types of fusion, such as discrete fusion proposed
in this paper. We also explain the relationship between the generalized likelihood-ratio detectors and various fusion
detectors.
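The idea of combining a set of basic detectors into one can be illustrated with a minimal toy sketch, assuming two hypothetical detector score maps and simple union (OR) and intersection (AND) fusion rules; the generalized fusion developed in the paper is far richer than this, and the function names here are purely illustrative.

```python
import numpy as np

def fuse_or(scores, thresholds):
    """Declare a detection if ANY basic detector fires (union/OR fusion)."""
    flags = [s >= t for s, t in zip(scores, thresholds)]
    return np.logical_or.reduce(flags)

def fuse_and(scores, thresholds):
    """Declare a detection only if ALL basic detectors fire (intersection/AND fusion)."""
    flags = [s >= t for s, t in zip(scores, thresholds)]
    return np.logical_and.reduce(flags)

# Two hypothetical per-pixel detector score maps
s1 = np.array([0.2, 0.9, 0.6, 0.1])
s2 = np.array([0.8, 0.7, 0.3, 0.2])
print(fuse_or([s1, s2], [0.5, 0.5]))
print(fuse_and([s1, s2], [0.5, 0.5]))
```

The fused detectors trade false alarm rate against detection probability differently: OR fusion fires more often than any single detector, AND fusion less often.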
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVII, 804803 (2011) https://doi.org/10.1117/12.883497
Continuum fusion methods define a new design approach for multivariate detection algorithms, hyperspectral applications
being only one example. However, the high dimensions in which such detectors operate can challenge human intuition.
We show how certain low-dimensional representations can be used to understand the performance of many existing
discrimination algorithms, with special emphasis on newer CF methods. We also give examples illustrating how the interplay
between analytical and geometrical interpretations can be used to inform the process of designing special purpose
detectors, such as for eliminating sensor artifacts.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVII, 804804 (2011) https://doi.org/10.1117/12.884337
The potential of a new class of detection algorithms is demonstrated on an object of practical interest. The continuum
fusion (CF) [1] methodology is applied to a linear subspace model. A new algorithm results from first invoking a fusion
interpretation of a conventional GLR test and then modifying it with CF methods. Detection performance is enhanced in
two ways. First the Gaussian clutter model is replaced by a Laplacian distribution, which is not only more realistic in its
tail behavior but, when used in a hypothesis test, also creates decision surfaces more selective than the hyperplanes
associated with linear matched filters. Second, a fusion flavor is devised that generalizes the adaptive coherence
estimator (ACE) [2, 3] algorithm but has more design flexibility. An IDL/ENVI user interface has been developed and
will be described.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVII, 804805 (2011) https://doi.org/10.1117/12.886411
Hyperspectral imaging is particularly useful in remote sensing to identify a small number of unknown man-made
objects in a large natural background. An algorithm for detecting such anomalies in hyperspectral imagery is
developed in this article. The pixel from a data cube is modeled as the sum of a linear combination of unknown
random variables from the clutter subspace and a residual. Maximum likelihood estimation is used to estimate
the coefficients of the linear combination and the covariance matrix of the residual. The Mahalanobis distance of
the residual is defined as the anomaly detector. Experimental results obtained using a hyperspectral data cube
with wavelengths in the visible and near-infrared range are presented.
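A minimal numpy sketch of the pipeline this abstract describes, with one loudly labeled assumption: the clutter subspace is stood in for by the leading principal components of the data, whereas the paper estimates the model coefficients and residual covariance by maximum likelihood.

```python
import numpy as np

def anomaly_scores(X, k):
    """X: (n_pixels, n_bands) spectra; k: assumed clutter-subspace dimension.
    Returns the per-pixel Mahalanobis distance of the residual left after
    removing the leading-k principal subspace (a surrogate clutter subspace)."""
    mu = X.mean(axis=0)
    Xc = X - mu
    # Leading k principal directions as a stand-in for the clutter subspace
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    B = Vt[:k].T                               # (bands, k) basis
    resid = Xc - Xc @ B @ B.T                  # component outside the subspace
    # Regularized residual covariance for a stable Mahalanobis distance
    C = np.cov(resid, rowvar=False) + 1e-6 * np.eye(X.shape[1])
    Ci = np.linalg.inv(C)
    return np.einsum('ij,jk,ik->i', resid, Ci, resid)
```

Pixels whose residual is large relative to the residual covariance score high and are flagged as anomalies.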
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVII, 804806 (2011) https://doi.org/10.1117/12.883265
A new method for hyperspectral change detection derived from a parametric radiative transfer model was recently
developed. The model-based approach explicitly accounts for local illumination variations, such as shadows,
which act as a constant source of false alarms in traditional change detection techniques. Here we formally
derive the model-based approach as a generalized likelihood ratio test (GLRT) developed from the data model.
Additionally, we discuss variations on implementation techniques for the algorithm and provide results using
tower-based data and HYDICE data.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVII, 804807 (2011) https://doi.org/10.1117/12.883326
The challenge of finding small targets in big images lies in the characterization of the background clutter. The
more homogeneous the background, the more distinguishable a typical target will be from its background. One
way to homogenize the background is to segment the image into distinct regions, each of which is individually
homogeneous, and then to treat each region separately. In this paper we will report on experiments in which the
target is unspecified (it is an anomaly), and various segmentation strategies are employed, including an adaptive
hierarchical tree-based scheme. We find that segmentations that employ overlap achieve better performance in
the low false alarm rate regime.
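As a concrete illustration of "treat each region separately," here is a sketch that runs a standard RX-style anomaly score independently inside each segment; the segmentation itself (including the adaptive tree-based and overlapping schemes studied in the paper) is assumed given as a label map.

```python
import numpy as np

def segmented_rx(X, labels):
    """RX-style anomaly scores computed separately within each segment.
    X: (n_pixels, n_bands) spectra; labels: (n_pixels,) segment id per pixel.
    Each segment supplies its own background mean and covariance."""
    scores = np.empty(len(X))
    for seg in np.unique(labels):
        idx = labels == seg
        Xs = X[idx]
        mu = Xs.mean(axis=0)
        C = np.cov(Xs, rowvar=False) + 1e-6 * np.eye(X.shape[1])
        d = Xs - mu
        scores[idx] = np.einsum('ij,jk,ik->i', d, np.linalg.inv(C), d)
    return scores
```

Because each region is individually homogeneous, a target-like pixel stands out against its own segment's statistics rather than against the pooled scene.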
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVII, 804808 (2011) https://doi.org/10.1117/12.884503
Change detection with application to wide-area search seeks to identify where interesting activity has occurred
between two images. Since there are many different classes of change, one metric may miss a particular type of
change. Therefore, it is potentially beneficial to select metrics with complementary properties. With this idea
in mind, a new change detection scheme was created using mean-shift and outlier-distance metrics. Using these
metrics in combination should identify and characterize change more completely than either individually. An
algorithm using both metrics was developed and tested using registered sets of multispectral imagery.
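The abstract names its two metrics but does not define them, so the following sketch uses one plausible reading: a mean-shift-style metric that measures each pixel's spectral difference relative to the scene-wide mean difference, and an outlier-distance metric given by the Mahalanobis distance of the difference. Both definitions are assumptions for illustration, not the authors' exact formulas.

```python
import numpy as np

def change_metrics(X1, X2):
    """Two complementary per-pixel change metrics for a registered image pair.
    X1, X2: (n_pixels, n_bands) arrays from the two collection dates."""
    D = X2 - X1
    Dc = D - D.mean(axis=0)
    # Mean-shift style metric: distance from the global mean difference
    # (discounts scene-wide illumination or calibration shifts)
    mean_shift = np.linalg.norm(Dc, axis=1)
    # Outlier-distance metric: Mahalanobis distance of the difference
    C = np.cov(D, rowvar=False) + 1e-6 * np.eye(D.shape[1])
    outlier = np.sqrt(np.einsum('ij,jk,ik->i', Dc, np.linalg.inv(C), Dc))
    return mean_shift, outlier
```

A pixel flagged by either (or both) metrics can then be reported, matching the idea that the combination characterizes change more completely than either metric alone.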
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVII, 804809 (2011) https://doi.org/10.1117/12.883574
Many spectral algorithms that are routinely applied to spectral imagery are based on the following models:
statistical, linear mixture, and linear subspace. As a result, assumptions are made about the underlying distribution
of the data such as multivariate normality or other geometric restrictions. Here we present a graph based
model for spectral data that avoids these restrictive assumptions and apply graph based metrics to quantify
certain aspects of the resulting graph. The construction of the spectral graph begins by connecting each pixel to
its k-nearest neighbors with an undirected weighted edge. The weight of each edge corresponds to the spectral
Euclidean distance between the adjacent pixels. The number of nearest neighbors, k, is chosen such that the
graph is connected, i.e., there is a path from each pixel to every other. This requirement ensures the existence
of inter-cluster connections which will prove vital for our application to change detection. Once the graph
is constructed, we calculate a metric called the Normalized Edge Volume (NEV) that describes the internal
structural volume based on the vertex connectivity and weighted edges of the graph. Finally, we demonstrate
a graph based change detection method that applies this metric.
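The graph construction described above can be sketched directly: connect each pixel to its k nearest neighbors with undirected edges weighted by spectral Euclidean distance, then check connectivity. This is a small dense-matrix sketch for illustration only (a real image would need an efficient neighbor search), and the NEV metric itself is not reproduced here since its formula is not given in the abstract.

```python
import numpy as np

def knn_graph(X, k):
    """Undirected k-nearest-neighbor graph over pixels X: (n, bands).
    Edge weights are spectral Euclidean distances; returns a dense
    symmetric weighted adjacency matrix."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    n = len(X)
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d[i])[1:k + 1]     # skip self (distance 0)
        W[i, nbrs] = d[i, nbrs]
    return np.maximum(W, W.T)                # symmetrize (undirected graph)

def is_connected(W):
    """Simple traversal to verify there is a path between every pixel pair."""
    n = len(W)
    seen = {0}
    frontier = [0]
    while frontier:
        i = frontier.pop()
        for j in np.nonzero(W[i])[0]:
            if j not in seen:
                seen.add(j)
                frontier.append(j)
    return len(seen) == n
```

In practice k would be increased until `is_connected` returns true, satisfying the connectivity requirement the abstract imposes.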
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVII, 80480A (2011) https://doi.org/10.1117/12.883197
A Telops Hyper-Cam Fourier-transform spectrometer (IFTS) was used to collect infrared hyper-spectral imagery of
the smokestack plume from a coal-burning power facility to assess the influence of turbulence on spectral retrieval of
temperature (T) and pollutant concentrations (Ci ). The mid-wave (1.5-5.5 μm) system features a 320x256 InSb focal-plane
array with a 326 μrad instantaneous field-of-view (IFOV). The line-of-sight distance to the 76 m tall smokestack exit
was 350 m (11.4 x 11.4 cm² IFOV). Approximately 5000 interferogram cubes were collected in 30 minutes on a 128x128
pixel window corresponding to a spectral resolution of 20 cm⁻¹. Radiance fluctuations due to plume turbulence were
observed on a time scale much shorter than the hyper-spectral image acquisition time, suggesting scene change artifacts
(SCA) would be present in the Fourier-transformed spectra. Time-averaging the spectra minimized SCA magnitudes, but
accurate T and Ci retrieval requires a priori knowledge of the statistical distribution of temperature and other stochastic
flow field parameters. A method of quantile sorting in interferogram space prior to Fourier-transformation is presented
and used to identify turbulence throughout the plume. Immediately above the stack exit, T and CO2 concentration
estimates from the median spectrum are 395 K and 6%, respectively, which compare well to in situ measurements.
Turbulence is small above the stack exit and introduces systematic errors in T and Ci on the order of 0.5 K and 0.01%,
respectively. In some plume locations, turbulent fluctuations introduced errors in T and Ci on the order of 8 K and
1%, respectively. While more complicated radiance fluctuations precluded straightforward retrieval of the temperature
probability distribution, the results demonstrate the utility of additional information content associated with multiple
interferogram quantiles and suggest IFTS may find use as a tool for non-intrusive flow field analysis.
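The quantile-sorting step can be sketched for a single pixel, assuming the repeated interferogram acquisitions are stacked along the first axis: sort across acquisitions at each optical-path-difference sample, take the chosen quantile interferogram, and Fourier-transform it. The real processing chain (calibration, full cubes, and T/Ci retrieval) is far more involved than this illustration.

```python
import numpy as np

def quantile_spectrum(cubes, q=0.5):
    """cubes: (n_cubes, n_opd) repeated interferograms for one pixel.
    Sorting across cubes at each OPD sample and transforming the chosen
    quantile (q=0.5 gives the median spectrum of the abstract)."""
    igram_q = np.quantile(cubes, q, axis=0)   # per-sample quantile
    return np.abs(np.fft.rfft(igram_q))       # magnitude spectrum
```

Comparing spectra from several quantiles (not just the median) is what exposes the turbulence-induced fluctuations the paper exploits.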
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVII, 80480B (2011) https://doi.org/10.1117/12.884167
In the task of automated anomaly detection, it is desirable to find regions within imagery that contain man-made structures
or objects. The task of separating these signatures from the scene background and other naturally occurring anomalies
can be challenging. This task is even more difficult when the spectral signatures of the man-made objects are designed to
closely match the surrounding background. As new sensors emerge that can image both spectrally and polarimetrically, it
is possible to utilize the polarimetric signature to discriminate between many types of man-made and natural anomalies.
One type of passive imaging system that allows for spectro-polarimetric data to be collected is the pairing of a liquid crystal
tunable filter (LCTF) with a CCD camera, thus creating a spectro-polarimetric imager (SPI). In this paper, an anomaly
detection scheme is implemented which makes use of the spectral Stokes imagery collected by this sensing system. The
ability for the anomaly detector to find man-made objects is assessed as a function of the number of spectral bands available
and it is shown that low false alarm rates can be achieved with relatively few spectral bands.
Frank M. Mindrup, Mark A. Friend, Kenneth W. Bauer
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVII, 80480C (2011) https://doi.org/10.1117/12.884120
There are numerous anomaly detection algorithms proposed for hyperspectral imagery. Robust parameter design
(RPD) techniques have been applied to some of these algorithms in an attempt to choose robust settings capable
of operating consistently across a large variety of image scenes. Typically, training and test sets of hyperspectral
images are chosen randomly. Previous research developed a framework for optimizing anomaly detection in HSI
by considering specific image characteristics as noise variables within the context of RPD; these characteristics
include the Fisher score, the ratio of target pixels, and the number of clusters. This paper describes a method for
selecting hyperspectral image training and test subsets yielding consistent RPD results based on these noise
features. These subsets are not necessarily orthogonal, but still provide improvements over random training and
test subset assignments by maximizing the volume and average distance between image noise characteristics.
Several different mathematical models representing the value of a training and test set based on such measures
as the D-optimal score and various distance norms are tested in a simulation experiment.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVII, 80480D (2011) https://doi.org/10.1117/12.885507
In this paper we present a new methodology for automated target detection and identification in hyperspectral
imagery. The standard paradigm for target detection in hyperspectral imagery is to run a detection algorithm,
typically statistical in nature, and visually inspect each high-scoring pixel to decide whether it is a true detection
or a false alarm. Detection filters have constant false alarm rates (CFARs) approaching 10⁻⁵, but these can
still result in a large number of false alarms given multiple images and a large number of target materials. Here
we introduce a new methodology for target detection and identification in hyperspectral imagery that shows
promise for hard targets. The result is a greatly reduced false alarm rate and a practical methodology for aiding
an analyst in quantitatively evaluating detected pixels. We demonstrate the utility of the method with results
on data from the HyMap sensor over Cooke City, MT.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVII, 80480E (2011) https://doi.org/10.1117/12.885645
Analyzing flow-like patterns in images for image understanding is an active research area, but much less attention has
been paid to enhancing those structures. The completion of interrupted lines or the enhancement
of flow-like structures is known as Coherence-Enhancement (CE). In this work, we are studying nonlinear anisotropic
diffusion filtering for coherence enhancement. Anisotropic diffusion is commonly used for edge enhancement by
inhibiting diffusion in the direction of highest spatial fluctuation. For CE, diffusion is promoted along the direction of
lowest spatial fluctuation in a neighborhood thereby taking into account how strongly the local gradient of the structures
in the image is biased towards that direction. Results of CE applied to multispectral and hyperspectral images are
presented.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVII, 80480F (2011) https://doi.org/10.1117/12.883435
The small island nation of Haiti was devastated in early 2010 by a massive magnitude-7.0 earthquake that brought about
widespread destruction of infrastructure, many deaths and large-scale displacement of the population in the nation's
major cities. The World Bank and ImageCat, Inc. tasked the Rochester Institute of Technology's (RIT) Wildfire Airborne
Sensor Platform (WASP) to gather a multi-spectral and multi-modal assessment of the disaster over a seven-day period
to be used for relief and reconstruction efforts.
Traditionally, private sector aerial remote sensing platforms work on processing and product delivery timelines
measured in days, a scenario that has the potential to reduce the value of the data in time-sensitive situations such as
those found in responding to a disaster. This paper will describe the methodologies and practices used by RIT to deliver
an open set of products typically within a twenty-four hour period from when they were initially collected.
Response to the Haiti disaster can be broken down into four major sections: 1) data collection and logistics, 2)
transmission of raw data from a remote location to a central processing and dissemination location, 3) rapid image
processing of a massive amount of raw data, and 4) dissemination of processed data to global organizations utilizing it to
provide the maximum benefit. Each section required its own major effort to ensure the success of the overall mission. A
discussion of each section will be provided along with an analysis of methods that could be implemented in future
exercises to increase efficiency and effectiveness.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVII, 80480G (2011) https://doi.org/10.1117/12.884054
The Information Products Laboratory for Emergency Response (IPLER) is a new initiative led by the Rochester Institute
of Technology (RIT) to develop and put into use new information products and tools derived from remote sensing data.
This effort involves technical development and outreach to the user community having the two-fold objective of
providing new information tools to enhance public safety and fostering economic development.
Specifically, this paper addresses the demonstration of the collection and delivery of geo-referenced overhead imagery to
local (county level) emergency managers in near real time. The demonstration proved valuable to county personnel in
showing what is possible, and to the researchers in highlighting the very real constraints on those operating in local
government.
The demonstration consisted of four major elements: 1) a multiband imaging system incorporating 4 cameras operating
simultaneously in the visible (color), shortwave infrared, midwave infrared and long wave infrared, 2) an on-board
inertial navigation and data processing system that renders the imagery into geo-referenced coordinates, 3) a microwave
digital downlink, and 4) a data dissemination service via FTP and WMS-based browser.
In this particular exercise, we successfully collected and downloaded over 700 images and delivered them to county
servers located in their Emergency Operations Center as well as to a remote GIS van.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVII, 80480H (2011) https://doi.org/10.1117/12.887055
On April 28, 2010, the Environmental Protection Agency's (EPA) Airborne Spectral Photometric
Environmental Collection Technology (ASPECT) aircraft was deployed to Gulfport, Mississippi to provide
airborne remotely sensed air monitoring and situational awareness data and products in response to the
Deepwater Horizon oil spill disaster. The ASPECT aircraft was released from service on August 9, 2010 after
having flown over 85 missions that included over 325 hours of flight operation. This paper describes several
advanced analysis capabilities specifically developed for the Deepwater Horizon mission to correctly locate,
identify, characterize, and quantify surface oil using ASPECT's multispectral infrared data. The data products
produced using these advanced analysis capabilities provided the Deepwater Horizon Incident Command with
a capability that significantly increased the effectiveness of skimmer vessel oil recovery efforts directed by
the U.S. Coast Guard, and were considered by the Incident Command as key situational awareness
information.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVII, 80480I (2011) https://doi.org/10.1117/12.884001
The conservation and efficient use of natural and especially strategic resources like oil and water have become global
issues, which increasingly initiate environmental and political activities for comprehensive recycling programs. To
effectively reutilize oil-based materials necessary in many industrial fields (e.g. chemical and pharmaceutical industry,
automotive, packaging), appropriate methods for a fast and highly reliable automated material identification are required.
One non-contacting, color- and shape-independent new technique that eliminates the shortcomings of existing methods is
to label materials like plastics with certain combinations of fluorescent markers ("optical codes", "optical fingerprints")
incorporated during manufacture. Since time-resolved measurements are complex (and expensive), fluorescent markers
must be designed that possess unique spectral signatures. The number of identifiable materials increases with the number
of fluorescent markers that can be reliably distinguished within the limited wavelength band available.
In this article we shall investigate the reliable detection and classification of fluorescent markers with specific
fluorescence emission spectra. These simulated spectra are modeled based on realistic fluorescence spectra acquired
from material samples using a modern VNIR spectral imaging system. In order to maximize the number of materials that
can be reliably identified, we evaluate the performance of 8 classification algorithms based on different spectral
similarity measures. The results help guide the design of appropriate fluorescent markers, optical sensors and the overall
measurement system.
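The abstract does not list its 8 similarity measures, but the spectral angle is one standard member of that family; the sketch below classifies a measured spectrum against a library of marker signatures by minimum spectral angle. The Gaussian-shaped library spectra in the usage example are purely hypothetical stand-ins for the modeled fluorescence emission spectra.

```python
import numpy as np

def spectral_angle(a, b):
    """Spectral angle (radians) between two emission spectra."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def classify(spectrum, library):
    """Assign a measured spectrum to the closest library marker signature
    by minimum spectral angle."""
    angles = [spectral_angle(spectrum, ref) for ref in library]
    return int(np.argmin(angles))
```

Because the spectral angle ignores overall intensity, it tolerates brightness variations between samples of the same marker, which is one reason measures of this family are common candidates in such evaluations.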
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVII, 80480J (2011) https://doi.org/10.1117/12.884627
This paper presents the Image Mapping Spectrometer (IMS), a new snapshot hyperspectral imaging platform for a variety of
applications. These applications span from remote sensing and surveillance use to live cell microscopy implementations
and medical diagnostics. The IMS replaces the camera in a digital imaging system, allowing one to add parallel spectrum
acquisition capability and to maximize the signal collection (> 80%). As such, the IMS obtains full spectral
information in the image scene instantaneously at real-time imaging rates. The presented implementation provides a
350x350x48 datacube (x,y,λ) with spectral sampling of 2 to 6 nm in the visible spectral range, but is easily expandable to larger cube
dimensions and other spectral ranges. The operation of the IMS is based on redirecting image zones through the use of a
custom-fabricated optical element known as an image mapper. The image mapper is a complex custom optical
component comprised of high quality, thin mirror facets with unique 2D tilts. These mirror facets reorganize the original
image onto a single large format CCD sensor to create optically "dark" regions between adjacent image lines. The full
spectrum from each image line is subsequently dispersed into the void regions on the CCD camera. This mapping
method provides a one-to-one correspondence between each voxel in the datacube and pixel on the CCD camera
requiring only a simple and fast remapping algorithm. This paper provides fundamentals of IMS operations and
describes an example design. Preliminary imaging results for gas detection acquired at 3 frames / second, for
350x350x48 data cubes are being presented. Real time unmixing of spectral signatures is also being discussed. Finally
paper draws perspective of future directions and system potential for infrared imaging.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVII, 80480K (2011) https://doi.org/10.1117/12.886534
We demonstrate a Fourier transform spectrometer (FTS) using a Fabry-Perot interferometer with the gap between its
partially reflecting layers varying orthogonal to the optical axis to produce a gradient in optical path difference at a
detector. The gradient produces a periodic fringe pattern that can be analyzed with standard FTS techniques. Experiments
in the visible and IR demonstrate the feasibility of this method for spectroscopy.
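The "standard FTS techniques" step amounts to Fourier-transforming the recorded fringe pattern along the OPD-gradient axis and reading the spectrum off the resulting wavenumber axis. A minimal sketch of that recovery, assuming a uniformly sampled fringe and ignoring apodization and phase correction:

```python
import numpy as np

def spectrum_from_fringes(fringe, opd_step):
    """Recover a magnitude spectrum from a spatial fringe pattern sampled
    along the OPD-gradient axis. fringe: intensity samples; opd_step: OPD
    increment per sample in cm. Returns (wavenumbers in cm^-1, spectrum)."""
    n = len(fringe)
    spec = np.abs(np.fft.rfft(fringe - fringe.mean()))  # remove DC, transform
    sigma = np.fft.rfftfreq(n, d=opd_step)              # wavenumber axis
    return sigma, spec
```

A monochromatic source produces a single cosine fringe whose transform peaks at the source wavenumber, which is the basic feasibility check the experiments perform.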
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVII, 80480L (2011) https://doi.org/10.1117/12.887283
The EMAS-HS or Enhanced MODIS Airborne Simulator is an upgrade to the solar reflected and thermal infrared
channels of NASA's MODIS Airborne Simulator (MAS). In the solar reflected bands, the MAS scanner functionality
will be augmented with the addition of this separate pushbroom hyperspectral instrument. As well as increasing the
spectral resolution of MAS beyond 10 nm, this spectrometer is designed to maintain a stable calibration that can be
transferred to the existing MAS sensor. The design emphasizes environmental control and on-board radiometric stability
monitoring. The system is designed for high-altitude missions on the ER-2 and the Global Hawk platforms. System
trades optimize performance in MODIS spectral bands that support land, cloud, aerosol, and atmospheric water studies.
The primary science mission driving the development is high altitude cloud imaging, with secondary missions possible
for ocean color.
The sensor uses two Offner spectrometers to cover the 380-2400 nm spectral range. It features an all-reflective telescope
with a 50° full field-of-view. A dichroic cold mirror will split the image from the telescope, with longer-wavelength radiation
transmitted to the SWIR spectrometer. The VNIR spectrometer uses a TE-cooled Si CCD detector that samples the
spectrum at 2.5 nm intervals, while the SWIR spectrometer uses a Stirling-cooled hybrid HgCdTe detector to sample the
spectrum at 10 nm per band. Both spectrometers will feature 1.05 mrad instantaneous fields-of-view registered to the
MAS scanner IFOVs.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVII, 80480M (2011) https://doi.org/10.1117/12.883822
A standing wave spectrometer is turned into a wavelength tunable band-pass filter by the addition of a reflective coating.
It results in the standing wave filter (SWF), a miniaturized Fabry-Perot band-pass filter with a semi-transparent detector
that can be constructed into a pixel-tunable focal plane array, suitable for hyperspectral imaging applications. The
asymmetric Fabry-Perot cavity is formed between the reflective coating and a tunable mirror, originally part of the
spectrometer. The predicted performance of the SWF is optimized through modeling based on the matrix formalism used
in thin film optics and with FDTD simulations. The SWF concept is taken from an ideal device to a focal plane array
design that was fabricated with 40 micron pixels using semiconductor processing technology. First-light spectra
measured from the 100 pixel Standing Wave Filter array agree with predictions and prove the concept.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVII, 80480N (2011) https://doi.org/10.1117/12.884322
Current development of optical sensors has led to their increased utility and potential. Applications for these imagers
encompass not just single regions of the electromagnetic spectrum but all parts of the thermal radiation spectrum,
ultraviolet through long-wave infrared, characteristic for instance of Earth's atmosphere. Accordingly, these multispectral
imagers mandate the development of entirely new test methods and test hardware to measure and calibrate their
performance benchmarks, such as SNR, uniformity, sensitivity, linearity, and dynamic range. The test hardware is
thus driven not only to provide high-resolution, uniform, and stable output but also to provide multispectral output,
both to minimize the amount of measurement equipment required and to demonstrate the imagers' full functionality.
Multispectral imagers require test hardware capable of producing output that spans high-daylight down through
low-light/starlight irradiance levels. This paper explores the characterization, testing, and the advantages and
drawbacks of various types of multispectral sources spanning the UV through the SWIR over a high dynamic range of output.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVII, 80480O (2011) https://doi.org/10.1117/12.884668
With the advent of the commercial 3D video card in the mid-1990s, we have seen an order-of-magnitude performance
increase with each generation of new video cards. While these cards were designed primarily for visualization and video
games, it soon became apparent that they could be used for scientific purposes. These graphics processing
units (GPUs) are rapidly being incorporated into data processing tasks usually reserved for general-purpose computers.
Many image processing problems have been found to scale well to modern GPU systems. We have implemented four
popular hyperspectral processing algorithms (N-FINDR, linear unmixing, principal components, and the RX anomaly
detection algorithm). These algorithms show an across-the-board speedup of at least a factor of 10, with some special
cases showing extreme speedups of a hundred times or more.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVII, 80480P (2011) https://doi.org/10.1117/12.890851
A significant topic in many image processing systems is the derivation of a threshold to
actuate the automated analysis of outputs from spectral and/or anomaly filters, i.e., the
detection of targets and/or classes of objects that differ from the local background
clutter. There are cases where the signals of interest have local contrast against their
immediate surroundings, but applying a global threshold over the entire image
produces poor results, with missed detections and numerous false alarms. In such cases an
adaptive or local threshold operator offers a more robust solution.
One local threshold function is the conditional dilation, which produces a reference image via
a series of dilations conditioned on not exceeding the signal levels in the original
image. In the limit this reference image becomes a threshold surface where only areas or
objects exhibiting local contrast remain after application of the threshold. Algorithms have
been introduced that enable the use of conditional dilation in real-time systems by reducing the
unbounded series of dilations to a small, fixed number of operations. In the present work we
present an adaptation of this algorithm both to single-CPU systems and to systems that
incorporate a GPGPU device, which enables a highly parallel version of the algorithm subject
to the unique architectural constraints of the GPGPU. Execution timings for comparison are
presented: the GPGPU offers somewhat better performance than the single-CPU system
despite the GPGPU architecture not being well suited to implementing a neighborhood
process.
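The reference surface described above can be sketched in a few lines. The following is a minimal NumPy/SciPy illustration of the unbounded series of conditioned dilations (morphological reconstruction by dilation), not the authors' fixed-operation real-time variant; the function name, marker construction, and 3x3 structuring element are our assumptions.

```python
import numpy as np
from scipy import ndimage

def conditional_dilation(image, marker, max_iters=None):
    """Repeatedly dilate the marker with a 3x3 structuring element,
    conditioning each result so it never exceeds the original image.
    The fixed point is a reference surface in which only locally
    contrasting objects survive a subsequent threshold. `max_iters`
    bounds the series, echoing the real-time variants in the text."""
    struct = np.ones((3, 3), dtype=bool)
    current = marker.astype(float).copy()
    iters = 0
    while True:
        dilated = ndimage.grey_dilation(current, footprint=struct)
        conditioned = np.minimum(dilated, image)  # condition on the original
        if np.array_equal(conditioned, current):
            break  # reached the fixed point
        current = conditioned
        iters += 1
        if max_iters is not None and iters >= max_iters:
            break
    return current
```

Subtracting the reconstruction from the original image leaves only features with contrast above the marker offset, which can then be thresholded globally.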
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVII, 80480Q (2011) https://doi.org/10.1117/12.885621
Hyperspectral sensors can collect hundreds of images taken at different narrow and contiguously spaced spectral
bands. This high-resolution spectral information can be used to identify materials and objects within the field
of view of the sensor by their spectral signature, but this process may be computationally intensive due to
the large data sizes generated by the hyperspectral sensors, typically hundreds of megabytes. This can be
an important limitation for some applications where the detection process must be performed in real time
(surveillance, explosive detection, etc.). In this work, we developed a parallel implementation of three state-of-the-art
target detection algorithms (the RX algorithm, the matched filter, and the adaptive matched subspace detector) using
a graphics processing unit (GPU) based on the NVIDIA® CUDA™ architecture. In addition, a multi-core CPU-based
implementation of each algorithm was developed to be used as a baseline for the speedup estimation. We
evaluated the performance of the GPU-based implementations using an NVIDIA ® Tesla® C1060 GPU card, and
the detection accuracy of the implemented algorithms was evaluated using a set of phantom images simulating
traces of different materials on clothing. We achieved a maximum speedup in the GPU implementations of
around 20x over a multicore CPU-based implementation, which suggests that applications for real-time detection
of targets in HSI can greatly benefit from the performance of GPUs as processing hardware.
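As an illustration of one of the three detectors, here is a minimal CPU-side sketch of the global RX anomaly detector (the Mahalanobis distance of each pixel spectrum from the scene background); the GPU versions parallelize exactly this per-pixel quadratic form. The function name and the use of a pseudo-inverse to guard against a singular covariance are our assumptions.

```python
import numpy as np

def rx_anomaly_scores(cube):
    """Global RX detector: score each pixel by the Mahalanobis distance
    of its spectrum from the scene mean under the background covariance.
    `cube` is (rows, cols, bands); returns a (rows, cols) score map."""
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(np.float64)
    mu = X.mean(axis=0)
    Xc = X - mu
    cov = (Xc.T @ Xc) / (X.shape[0] - 1)       # background covariance
    cov_inv = np.linalg.pinv(cov)              # pseudo-inverse for stability
    # per-pixel quadratic form x^T C^{-1} x, vectorized over all pixels
    scores = np.einsum('ij,jk,ik->i', Xc, cov_inv, Xc)
    return scores.reshape(rows, cols)
```

Each pixel's score is independent of the others once the covariance is formed, which is why the computation maps so naturally onto GPU threads.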
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVII, 80480R (2011) https://doi.org/10.1117/12.884767
Manifold learning, also called nonlinear dimensionality reduction, affords a way to understand and visualize the structure
of nonlinear hyperspectral datasets. These methods use graphs to represent the manifold topology and use metrics such as
geodesic distance, allowing higher-dimensional data to be embedded in a lower-dimensional space. However, the
complexity of some manifold learning algorithms is O(N³), making them very slow and computationally expensive. In this
paper we present a CUDA-based parallel implementation of three of the most popular manifold learning algorithms,
Isomap, locally linear embedding, and Laplacian eigenmaps, using the CUDA multi-thread model. The result of this
dimensionality reduction was employed in segmentation using active contours as an application of the reduced
hyperspectral images. The manifold learning algorithms were implemented on a 64-bit workstation equipped with a
quad-core Intel® Xeon®, 12 GB of RAM, and two NVIDIA Tesla C1060 GPU cards. The GPU implementations
significantly outperform the CPU versions, achieving speedups of up to 26x, and show good scalability when varying
the size of the dataset and the number of K nearest neighbors.
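To make the O(N³) cost concrete, here is a minimal Isomap sketch: a k-NN graph, graph shortest paths as geodesic distances (the cubic, runtime-dominating step that a CUDA implementation would parallelize), and classical MDS on the result. Parameter names are illustrative, and this is a didactic reduction of Isomap, not the paper's implementation.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.sparse.csgraph import shortest_path

def isomap(X, n_neighbors=5, n_components=2):
    """Minimal Isomap: k-NN graph -> shortest-path geodesic distances
    -> classical MDS embedding of the geodesic distance matrix."""
    n = X.shape[0]
    D = cdist(X, X)
    # keep only each point's k nearest neighbors (symmetrized graph)
    graph = np.full((n, n), np.inf)
    for i in range(n):
        idx = np.argsort(D[i])[1:n_neighbors + 1]
        graph[i, idx] = D[i, idx]
    graph = np.minimum(graph, graph.T)
    G = shortest_path(graph, method='D')  # geodesic distances (costly step)
    # classical MDS: double-center the squared distances, take top eigenpairs
    H = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * H @ (G ** 2) @ H
    w, V = np.linalg.eigh(B)
    order = np.argsort(w)[::-1][:n_components]
    return V[:, order] * np.sqrt(np.maximum(w[order], 0.0))
```

The shortest-path matrix requires all-pairs distances over the graph, which is where both the O(N³) complexity and the 26x GPU speedup opportunity live.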
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVII, 80480S (2011) https://doi.org/10.1117/12.885069
The paper describes the georeferencing part of an airborne hyperspectral imaging system based on pushbroom scanning.
Using ray-tracing methods from computer graphics and a highly efficient representation of the digital elevation model
(DEM), georeferencing of high resolution pushbroom images runs in real time by a large margin. By adapting the
georeferencing to match the DEM resolution, the camera field of view and the flight altitude, the method has potential to
provide real time georeferencing, even for HD video on a high resolution DEM when a graphics processing unit (GPU)
is used for processing.
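The core of ray-traced georeferencing can be illustrated with a hypothetical fixed-step ray march against a height grid: walk each view ray from the sensor until it drops below the DEM surface. All names and parameters below are illustrative assumptions; the paper's efficient DEM representation, resolution adaptation, and GPU mapping are not reproduced.

```python
import numpy as np

def ray_dem_intersect(origin, direction, dem, cell_size, step=1.0, max_dist=1e4):
    """March a view ray from `origin` along `direction` and return the
    first sample point at or below the DEM surface (the ground
    intersection), or None if the ray leaves the DEM footprint.
    `dem` is a 2D height grid; (x, y) maps to (col, row) * cell_size."""
    pos = np.asarray(origin, dtype=float)
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)
    travelled = 0.0
    while travelled < max_dist:
        pos = pos + d * step
        travelled += step
        col = int(pos[0] / cell_size)
        row = int(pos[1] / cell_size)
        if not (0 <= row < dem.shape[0] and 0 <= col < dem.shape[1]):
            return None  # ray left the DEM footprint
        if pos[2] <= dem[row, col]:
            return pos   # first sample below the terrain surface
    return None
```

A production system would replace the fixed step with a hierarchical traversal of the DEM so each ray skips terrain it cannot hit, which is what makes real-time rates achievable.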
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVII, 80480T (2011) https://doi.org/10.1117/12.882191
"ProSpecTIR" Imaging spectrometer (hyperspectral imagery or "HSI") data were collected for the city of Las Vegas,
Nevada, USA at 10:55 PM July 28, 2009 for the purposes of identification, characterization, and mapping of urban
lighting based on spectral emission lines unique to specific lighting types. The ProSpecTIR sensor measures the
spectrum in 360 spectral bands between 0.4 and 2.5 micrometers at approximately 5 nm spectral resolution and, for this
flight, at 1.2 m spatial resolution. Spectral features were extracted from the data and compared to a spectral library of
known lighting measurements. Specific lighting types identified based on spectral signatures using the ProSpecTIR data
included blue and red neon, high pressure sodium, and metal halide lights. A binary encoding method was used to map
the spatial distribution of lighting types based on simplified spectral signatures. Results were overlain on a Quickbird
panchromatic 0.6m spatial resolution image. The observed locations of specific light types were compared to a 3-D Las
Vegas building model, and airborne signatures validated against spectral library measurements. The ProSpecTIR data
successfully identified and mapped different lighting types and distributions, allowing determination of the nature and
spatial associations of specific lights. Results illustrate the potential for using imaging spectrometer data to characterize
urban development.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVII, 80480U (2011) https://doi.org/10.1117/12.885632
Understanding the capabilities of satellite sensors with spatial and spectral characteristics similar to those of MODIS is
important for Maritime Domain Awareness (MDA) because the upcoming NPOESS, with a revisit time of about 100 minutes,
will carry the MODIS-like VIIRS multispectral imaging sensor. This paper presents an experimental study of ship
detection using MODIS imagery. We study the use of ship signatures, such as contaminant plumes in clouds and the
spectral contrast between the ship and the sea background, for detection. Results show the potential and challenges of
such an approach for MDA.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVII, 80480V (2011) https://doi.org/10.1117/12.887076
This paper presents a new method for finding the direction of a dust storm in satellite images, including the 5-band
NOAA-AVHRR imagery that was used in our previous work. The previous methods for obtaining the prominent
direction of the dust storms involved the combination of edge detectors and local spectral-domain classification
techniques applied to subimages/blocks. These approaches produced promising results but have the limitation of not
providing consistent results among the subimages that overlap the dust storm region. In this paper, other algorithms
like wavelets and state-of-the-art directional filters, based on the contourlet transform, are used to help us determine
the direction with more precision and consistency among the relevant subimages.
Before applying the directional filtering to the candidate region of the multispectral image, a preprocessing step
passes the image through nonsubsampled pyramid selective amplification. This step is required to enhance the
image and strengthen its directional streaks, which in turn helps the directional filter produce better and more
consistent results. For AVHRR images, our methodology
involves applying directional filtering on bands 4 or 5 since these wavelengths highlight the absorption and
subsequent emission of thermal radiation by the silicate particles in the dust storms. Directional filtering is applied
on these image bands at different angles where energy measurements are computed to find the prominent direction
of the dust storm. The presence of a prominent direction in the texture of the candidate region of the dust storm can
be used as a verification of its presence in an automated detection system.
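The angle-sweep energy search described above can be sketched with oriented Gabor kernels standing in for the paper's contourlet-based directional filters; the kernel family, angle set, and function name are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def prominent_direction(band, angles_deg=(0, 15, 30, 45, 60, 75, 90, 105, 120, 135, 150, 165),
                        size=21, sigma=4.0, freq=0.1):
    """Filter a thermal band with oriented Gabor kernels and return the
    angle (degrees) whose filtered output has the highest energy -- the
    prominent direction of the streaked texture."""
    half = size // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    best_angle, best_energy = None, -np.inf
    for angle in angles_deg:
        theta = np.deg2rad(angle)
        xr = xs * np.cos(theta) + ys * np.sin(theta)  # rotated coordinate
        kernel = (np.exp(-(xs**2 + ys**2) / (2 * sigma**2))
                  * np.cos(2 * np.pi * freq * xr))    # oriented Gabor
        response = ndimage.convolve(band.astype(float), kernel, mode='reflect')
        energy = float(np.sum(response**2))           # directional energy
        if energy > best_energy:
            best_angle, best_energy = angle, energy
    return best_angle
```

A clear energy peak at one angle corresponds to the prominent direction; a flat energy profile across angles suggests no coherent streaking in the candidate region.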
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVII, 80480W (2011) https://doi.org/10.1117/12.883572
Worldview-2 imagery acquired over Duck, NC and Camp Pendleton, CA were analyzed to extract Bidirectional
Reflectance Distribution Functions (BRDF) for 8 visible/near-infrared spectral bands. Images were acquired
at 15 azimuth/elevation positions at ten-second intervals during the Duck, NC orbit pass. Ten images were
acquired over Camp Pendleton, CA. Orthoready images were coregistered using first-order polynomials for the
two image sequences. BRDF profiles have been created for various scene elements. MODTRAN simulations
are presented to illustrate atmospheric effects under varying collection geometries. Results from analysis of the
Camp Pendleton, CA data are presented here.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVII, 80480X (2011) https://doi.org/10.1117/12.883029
This paper describes a novel approach for the detection and classification of man-made objects using discriminating
features derived from higher-order spectra (HOS), defined in terms of higher-order moments of hyperspectral-signals.
Many existing hyperspectral analysis techniques are based on linearity assumptions. However, recent research suggests
that significant nonlinearity arises due to multipath scatter, as well as spatially varying atmospheric water vapor
concentrations. Higher-order spectra characterize subtle complex nonlinear dependencies in spectral phenomenology of
objects in hyperspectral data and are insensitive to additive Gaussian noise. By exploiting these HOS properties, we have
devised a robust method for classifying man-made objects from hyperspectral signatures despite the presence of strong
background noise, confusers with spectrally similar signatures, and variable signal-to-noise ratios. We tested
classification performance on hyperspectral imagery collected from several different sensor platforms and compared our
algorithm with conventional classifiers based on linear models. Our experimental results demonstrate that our HOS
algorithm produces significant reductions in false alarms. Furthermore, when HOS-based features are combined with
standard features derived from spectral properties, the overall classification accuracy is substantially improved.
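The noise-insensitivity property being exploited can be illustrated with the simplest higher-order statistics: standardized third- and fourth-order moments, which (like all cumulants above second order) vanish for Gaussian data. This is a toy illustration of the principle only; the paper's features come from higher-order spectra, not these scalar moments.

```python
import numpy as np

def hos_features(spectrum):
    """Return (skewness, excess kurtosis) of a spectrum. Both are zero
    in expectation for Gaussian data, so additive Gaussian noise leaves
    them unchanged on average -- the HOS noise-insensitivity property."""
    x = np.asarray(spectrum, dtype=float)
    xc = x - x.mean()
    m2 = np.mean(xc**2)
    skew = np.mean(xc**3) / m2**1.5        # third standardized moment
    kurt = np.mean(xc**4) / m2**2 - 3.0    # fourth, Gaussian-referenced
    return skew, kurt
```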
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVII, 80480Y (2011) https://doi.org/10.1117/12.883528
In this paper, we present a new approach to filtering high spatial resolution multispectral (MSI) or hyperspectral
imagery (HSI) for the purpose of classification and segmentation. Our approach is inspired by the bilateral
filtering method that smooths images while preserving important edges for gray-scale and color images. To
achieve a similar goal for MSI/HSI, we build a nonlinear tri-lateral filter that takes into account both spatial
and spectral similarities. Our approach works on a pixel-by-pixel basis; the spectrum of each pixel in the filtered
image is a combination of the spectra of its adjacent pixels in the original image, weighted by three factors:
geometric closeness, spectral Euclidean distance, and spectral angle separation. The approach reduces small
clutter across the image while keeping edges with strong contrast. The improvement in our method is that
we use spectral intensity differences together with spectral angle separation as the closeness metric, thus
preserving edges caused both by different materials and by similar materials with intensity differences. A k-means
classifier is applied to the filtered image and the results show our approach can produce a much less cluttered
class map. Results will be shown using imagery from the Digital Globe Worldview-2 multispectral sensor and
the HYDICE hyperspectral sensor. This approach could also be expanded to facilitate feature extraction from
MSI/HSI.
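The three-factor weighting described above can be sketched directly; this is a slow, readable reference implementation under assumed parameter names, not the authors' code, and the Gaussian form of each weight is our assumption.

```python
import numpy as np

def trilateral_filter(cube, radius=2, sigma_s=1.5, sigma_r=0.5, sigma_a=0.1):
    """Each output spectrum is a weighted average of neighboring spectra,
    weighted by (1) spatial distance, (2) spectral Euclidean distance,
    and (3) spectral angle to the center pixel. `cube` is (rows, cols, bands)."""
    rows, cols, bands = cube.shape
    out = np.zeros((rows, cols, bands))
    for i in range(rows):
        for j in range(cols):
            center = cube[i, j].astype(float)
            wsum, acc = 0.0, np.zeros(bands)
            for di in range(-radius, radius + 1):
                for dj in range(-radius, radius + 1):
                    ni, nj = i + di, j + dj
                    if not (0 <= ni < rows and 0 <= nj < cols):
                        continue
                    nb = cube[ni, nj].astype(float)
                    ws = np.exp(-(di * di + dj * dj) / (2 * sigma_s**2))
                    wr = np.exp(-np.sum((nb - center)**2) / (2 * sigma_r**2))
                    cosang = np.dot(nb, center) / (
                        np.linalg.norm(nb) * np.linalg.norm(center) + 1e-12)
                    angle = np.arccos(np.clip(cosang, -1.0, 1.0))
                    wa = np.exp(-angle**2 / (2 * sigma_a**2))
                    w = ws * wr * wa
                    wsum += w
                    acc += w * nb
            out[i, j] = acc / wsum
    return out
```

Because both the Euclidean and the angular terms collapse toward zero across a material boundary, a pixel near an edge averages almost exclusively with its own side of the edge, which is what preserves the edge while smoothing clutter.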
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVII, 80480Z (2011) https://doi.org/10.1117/12.884146
Automatic clustering of spectral image data is a common problem with a diverse set of desired and potential solutions.
While typical clustering techniques use first order statistics and Gaussian models, the method described in this paper
utilizes the spectral data structure to generate a graph representation of the image and then clusters the data by applying
the method of optimal modularity for finding communities within the graph. After defining and identifying pixel
adjacencies to represent an image as an adjacency matrix, a recursive splitting is performed to group spectrally similar
pixels using the method of modularity maximization. The careful selection of pixel adjacencies determines the success of
this spectral clustering technique. The modularity maximization process uses the eigenvector of the modularity matrix
with the largest positive eigenvalue to split groups of pixels with non-linear decision surfaces and uses the modularity
measure to help estimate the optimal number of clusters to best characterize the data. Using information from each
recursion, the end result is a variable-level-of-detail cluster map that is more visually useful than those of previous
methods. Additionally, this method outperforms many typical automatic clustering methods such as k-means, especially
in highly cluttered urban scenes. The optimal modularity technique hierarchically clusters spectral image data and produces results
that more reliably characterize the number of clusters in the data than common automatic spectral image clustering
techniques.
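One recursion of the splitting step described above can be sketched with Newman's leading-eigenvector method, which is what "the eigenvector of the modularity matrix with the largest positive eigenvalue" refers to; the pixel-adjacency construction that feeds this matrix is the paper's contribution and is assumed given here.

```python
import numpy as np

def modularity_split(A):
    """One split of the leading-eigenvector modularity method: divide
    the nodes of adjacency matrix A into two groups by the sign of the
    eigenvector of the modularity matrix with the largest eigenvalue.
    Returns (binary labels, modularity Q of the split)."""
    k = A.sum(axis=1)                     # node degrees
    m = k.sum() / 2.0                     # number of edges
    B = A - np.outer(k, k) / (2.0 * m)    # modularity matrix
    w, V = np.linalg.eigh(B)
    v = V[:, np.argmax(w)]                # leading eigenvector
    s = np.where(v >= 0, 1.0, -1.0)       # split by sign
    Q = (s @ B @ s) / (4.0 * m)           # modularity of this split
    return (s > 0).astype(int), Q
```

In the recursive scheme, each group is split again while the split yields a positive modularity gain, and the recursion depth gives the variable level of detail in the cluster map.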
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVII, 804810 (2011) https://doi.org/10.1117/12.883466
The current extent of publicly available space-based imagery and data products is unprecedented. Data from research
missions and operational environmental programs provide a wealth of information to global users, and in many cases,
the data are accessible in near real-time. The availability of such data provides a unique opportunity to investigate how
information can be cascaded through multiple spatial, spectral, radiometric, and temporal scales. A hierarchical image
classification approach is developed using multispectral data sources to rapidly produce large area landuse identification
and change detection products. The approach derives training pixels from a coarser resolution classification product to
autonomously develop a classification map at improved resolution. The methodology also accommodates parallel
processing to facilitate analysis of large amounts of data.
Previous work successfully demonstrated this approach using a global MODIS 500 m landuse product to construct a
30 m Landsat-based classification map. This effort extends the previous approach to high resolution U.S. commercial
satellite imagery. An initial validation study is performed to document the performance of the algorithm and identify
limitations in the process. Results indicate this approach is scalable and applies broadly to target and anomaly
detection. In addition, discussion focuses on how information is preserved throughout the processing
chain, as well as situations where the data integrity could break down. This work is part of a larger effort to deduce
practical, innovative, and alternative ways to leverage and exploit the extensive low-resolution global data archives to
address relevant civil, environmental, and defense objectives.
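The core cascade step, using a coarse classification product to label training pixels for a finer-resolution map, can be sketched as below. Every name here is an illustrative assumption, and the nearest-centroid rule is a stand-in for whatever classifier the actual pipeline uses.

```python
import numpy as np

def cascade_classify(fine_cube, coarse_labels, scale, n_train_per_class=50, seed=0):
    """Treat each coarse-map label as a (noisy) label for the fine pixels
    it covers, sample training pixels per class, and classify the fine
    imagery with a nearest-centroid rule. `fine_cube` is (rows, cols,
    bands); `coarse_labels` covers the same area at 1/scale resolution."""
    rows, cols, bands = fine_cube.shape
    # nearest-neighbor upsample of the coarse label map to fine resolution
    up = np.repeat(np.repeat(coarse_labels, scale, axis=0),
                   scale, axis=1)[:rows, :cols]
    rng = np.random.default_rng(seed)
    X = fine_cube.reshape(-1, bands).astype(float)
    y = up.reshape(-1)
    classes = sorted(np.unique(y))
    centroids = []
    for c in classes:
        idx = rng.choice(np.flatnonzero(y == c), size=n_train_per_class,
                         replace=True)           # sampled training pixels
        centroids.append(X[idx].mean(axis=0))
    C = np.stack(centroids)
    d = ((X[:, None, :] - C[None, :, :])**2).sum(axis=2)
    pred = np.array(classes)[np.argmin(d, axis=1)]
    return pred.reshape(rows, cols)
```

The same pattern repeats down the hierarchy: each product's output becomes training data for the next, finer product, which is why label-noise handling and validation matter at every stage.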
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVII, 804811 (2011) https://doi.org/10.1117/12.884230
The Multi-class Convex-FUMI (Multi-class C-FUMI) method is developed and described. The method is
capable of learning prototypes for multiple target classes from hyperspectral imagery. Multi-class C-FUMI is
a non-traditional supervised learning method based on the Functions of Multiple Instances (FUMI) concept.
The FUMI concept differs significantly from traditional supervised learning in its assumption that only functions of
target patterns are available. Moreover, these functions are likely to involve other non-target patterns. In
this paper, data points which are convex combinations of multiple target and several non-target prototypes
are considered. Multi-class C-FUMI learns the target and non-target patterns, the number of non-target
patterns, and the weights (or proportions) of all the prototypes for each data point. For hyperspectral image
analysis, the target and non-target prototypes estimated using Multi-class C-FUMI are the endmembers for
the target and non-target (background) materials. For this method, training data need only binary labels
indicating whether a data point contains or does not contain some proportion of a target endmember; the
specific target proportions for the training data are not needed. After learning the target prototype using the
binary-labeled training data, target detection is performed on test data. Results showing sub-pixel target
detection on highly mixed simulated hyperspectral data generated from the ASTER spectral library are
presented.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVII, 804812 (2011) https://doi.org/10.1117/12.885963
The Landsat Data Continuity Mission (LDCM), a partnership between the National Aeronautics and Space Administration
(NASA) and the Department of the Interior (DOI) / United States Geological Survey (USGS), is scheduled for launch in
December 2012. It will be the eighth mission in the Landsat series. The LDCM instrument payload will consist of the
Operational Land Imager (OLI), provided by Ball Aerospace and Technology Corporation (BATC) under contract to NASA
and the Thermal Infrared Sensor (TIRS), provided by NASA's Goddard Space Flight Center (GSFC). This paper outlines the
present development status of the two instruments.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVII, 804813 (2011) https://doi.org/10.1117/12.886473
The Landsat Data Continuity Mission consists of a two-sensor platform with the Operational Land Imager and Thermal
Infrared Sensor (TIRS). Much of the success of the Landsat program stems from the emphasis placed on knowledge of the
calibration of the sensors, relying on a combination of laboratory, onboard, and vicarious calibration methods. Rigorous
attention to NIST traceability of the radiometric calibration, knowledge of out-of-band spectral response, and
characterizing and minimizing stray light should provide sensors that meet the quality of the Landsat heritage. Described
here are the methods and facilities planned for the calibration of TIRS, a pushbroom sensor with two spectral
bands (10.8 and 12 micrometers), a spatial resolution of 100 m, and a 185-km swath width. Testing takes place in a
vacuum test chamber at NASA GSFC using a recently developed calibration system based on a 16-aperture blackbody
source to simulate spatial and radiometric sources. A two-axis steering mirror moves the source across the TIRS field
while filling the aperture. A flood source fills the full field without requiring movement of the beam, providing a means to
evaluate detector-to-detector response effects. The spectral response of the sensor will be determined using a monochromator
source coupled to the calibration system. Knowledge of the source output will come from NIST-traceable thermometers
integrated into the blackbody. The description of the calibration system, calibration methodology, and the error budget for
the calibration system shows that the required 2% radiometric accuracy for scene temperatures between 260 and 330 K
is well within the capabilities of the system.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVII, 804814 (2011) https://doi.org/10.1117/12.885540
The Landsat Data Continuity Mission (LDCM) focuses on a next-generation, global-coverage imaging system to
replace the aging Landsat 5 and Landsat 7 systems. The major difference in the new system is the migration
from the multi-spectral whiskbroom design employed by the previous generation of sensors to a modular focal-plane,
multi-spectral pushbroom architecture. Further complicating the design shift, the reflective and
thermal acquisition capability is split across two instruments spatially separated on the satellite bus. One of the
focuses of the science and engineering teams prior to launch is the ability to provide seamless data continuity
with the historic Landsat data archive; specifically, the challenges of registering and calibrating data from the
new system so that long-term science studies are minimally impacted by the change in the system design. In
order to provide the science and engineering teams with simulated pre-launch data, an effort was undertaken to
create a robust end-to-end model of the LDCM system. The modeling environment is intended to be flexible
and to incorporate measured data from the actual system components as they are completed and integrated.
The output of the modeling environment needs to include not only radiometrically robust imagery, but also
the metadata necessary to exercise the processing pipeline. This paper describes how the Digital Imaging
and Remote Sensing Image Generation (DIRSIG) model has been utilized to model space-based, multi-spectral
imaging (MSI) systems in support of systems engineering trade studies. A mechanism to incorporate measured
focal plane projections through the forward optics is described. A hierarchical description of the satellite system
is presented, including the details of how a multiple-instrument platform is described and modeled, as well as
the hierarchical management of temporally correlated jitter that allows engineers to explore the impacts of different
jitter sources on instrument-to-instrument and band-to-band registration. The capability of a new, non-imaging
instrument to simulate the measurement of platform ephemeris is also introduced. Finally, the geometric and
radiometric foundations for modeling clouds in the DIRSIG model are described and demonstrated as one of
the more significant challenges in registering multi-spectral pushbroom sensor data products.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVII, 804815 (2011) https://doi.org/10.1117/12.885561
The Operational Land Imager (OLI) and Thermal Infrared Sensor (TIRS) are two new sensors being developed
by the Landsat Data Continuity Mission (LDCM) that will extend over 35 years of archived Landsat data. In
a departure from the whiskbroom design used by all previous generations of Landsat, the LDCM system will
employ pushbroom technology. Although the newly adopted modular-array pushbroom architecture has several
advantages over the previous whiskbroom design, registration of the multi-spectral data products is a concern.
In this paper, the Digital Imaging and Remote Sensing Image Generation (DIRSIG) tool was used to simulate
an LDCM collection, which gives the team access to data that would not otherwise be available prior to launch.
The DIRSIG model was used to simulate the two-instrument LDCM payload in order to study the geometric
and radiometric impacts of the sensor design on the proposed processing chain. The Lake Tahoe area located
in eastern California was chosen for this work because of its dramatic change in elevation, which was ideal for
studying the geometric effects of the new Landsat sensor design. Multi-modal datasets were used to create the
Lake Tahoe site model for use in DIRSIG. National Elevation Dataset (NED) data were used to create the digital
elevation map (DEM) required by DIRSIG, QuickBird data were used to identify different material classes in the
scene, and ASTER and Hyperion spectral data were used to assign radiometric properties to those classes. In
order to model a realistic Landsat orbit in these simulations, orbital parameters were obtained from a Landsat 7
two-line element set and propagated with the SGP4 orbital position model. Line-of-sight vectors defining how
the individual detector elements of the OLI and TIRS instruments project through the optics were measured and
provided by NASA. Additionally, the relative spectral response functions for the 9 bands of OLI and the 2 bands
of TIRS were measured and provided by NASA. The instruments were offset on the virtual satellite, and data
recorders were used to generate ephemeris data for downstream processing. Finally, potential platform jitter spectra
were measured and provided by NASA and incorporated into the simulations. Simulated imagery generated by
the model was incrementally provided to the rest of the LDCM team in a spiral development cycle to constantly
refine the simulations.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVII, 804816 (2011) https://doi.org/10.1117/12.889265
The Thermal Infrared Sensor (TIRS) on board the Landsat Data Continuity Mission (LDCM) is a two-channel,
push-broom imager that will continue Landsat thermal band measurements of the Earth. The core of the instrument
consists of three Quantum Well Infrared Photodetector (QWIP) arrays whose data are combined to
effectively produce a linear array of 1850 pixels for each band with a spatial resolution of approximately 100
meters and a swath width of 185 kilometers. In this push-broom configuration, each pixel may have a slightly
different band shape. An on-board blackbody calibrator is used to correct each pixel. However, depending
on the scene being observed, striping and other artifacts may still be present in the final data product. The
science-focused mission of LDCM requires that these residual effects be understood.
The analysis presented here assisted in the selection of the three flight QWIP arrays. Each pixel was scrutinized
in terms of its compliance with TIRS spectral requirements. This investigation utilized laboratory spectral measurements
of the arrays and filters along with radiometric modeling of the TIRS instrument and environment.
These models included standard radiometry equations along with complex physics-based models such as the
MODerate spectral resolution TRANsmittance (MODTRAN) and Digital Imaging and Remote Sensing Image
Generation (DIRSIG) tools. The laboratory measurements and physics models were used to determine the extent
of striping and other spectral artifacts that might be present in the final TIRS data product. The results
demonstrate that artifacts caused by the residual pixel-to-pixel spectral non-uniformity are small enough that
the data can be expected to meet the TIRS radiometric and image quality requirements.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVII, 804817 (2011) https://doi.org/10.1117/12.881777
This work describes numerical methods for the joint reconstruction and segmentation of spectral images
taken by compressive sensing coded aperture snapshot spectral imagers (CASSI). In a snapshot, a CASSI
captures a two-dimensional (2D) array of measurements that is an encoded representation of both spectral
information and 2D spatial information of a scene, resulting in significant savings in acquisition time and data
storage. The double disperser coded aperture snapshot imager (DD-CASSI) is able to capture a hyperspectral
image from which a highly underdetermined inverse problem is solved for the original hyperspectral cube
with regularization terms such as total variation minimization. The reconstruction process decodes the
2D measurements to render a three-dimensional spatio-spectral estimate of the scene, and is therefore an
indispensable component of the spectral imager. In this study, we seek a particular form of the compressed
sensing solution that assumes spectrally homogeneous segments in the two spatial dimensions, and greatly
reduces the number of unknowns. The proposed method generalizes popular active contour segmentation
algorithms such as the Chan-Vese model and also enables one to jointly estimate both the segmentation
membership functions and the spectral signatures of each segment. The results are illustrated on a simulated
Hubble Space Satellite hyperspectral dataset, a real urban hyperspectral dataset, and a real DD-CASSI image
in microscopy.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVII, 804818 (2011) https://doi.org/10.1117/12.881648
While able to measure the red, green, and blue channels, color imagers are not true spectral imagers
capable of spectral measurements. In a previous paper, it was demonstrated that a low-resolution visible
spectrum of a naturally illuminated outdoor scene can be estimated from RGB values measured by a
conventional color imager. In this paper we present a refined algorithm and document the results of a
study to estimate visible source spectra from solar-illumination scenes using reflectance spectra generated
from the USGS database.
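The estimation step can be sketched as a small linear-algebra problem: if outdoor reflectance spectra lie near a low-dimensional basis, three RGB measurements pin down three basis weights. Everything below (the basis curves, the camera sensitivities, the weights) is an invented stand-in for the USGS-derived quantities, so this only illustrates the shape of the computation, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n_bands = 31                      # toy 400-700 nm grid at 10 nm steps

# Invented 3-spectrum basis standing in for a USGS-derived reflectance basis.
basis = np.cumsum(rng.normal(size=(3, n_bands)), axis=1)
basis = basis - basis.min(axis=1, keepdims=True) + 0.1

# Toy RGB sensitivities: three broad, overlapping bandpasses.
grid = np.linspace(0, 1, n_bands)
sens = np.stack([np.exp(-((grid - c) / 0.15) ** 2) for c in (0.8, 0.5, 0.2)])

true_w = np.array([0.5, 0.3, 0.2])
true_spectrum = true_w @ basis
rgb = sens @ true_spectrum              # what the color imager measures

# Three RGB equations in three basis weights: solve and reconstruct.
A = sens @ basis.T                      # 3x3 system matrix
w_hat = np.linalg.solve(A, rgb)
est_spectrum = w_hat @ basis

err = np.max(np.abs(est_spectrum - true_spectrum))
print(err)                              # essentially zero for this 3x3 system
```

With more basis spectra than channels the system becomes underdetermined, which is why a constrained or statistical solution is needed in practice.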
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVII, 804819 (2011) https://doi.org/10.1117/12.883439
Currently, the MODIS instrument on the Aqua satellite has a number of broken detectors resulting in unreliable
data for the 1.6 micron band (band 6). Damaged detectors, transmission errors, and electrical failures
are all vexing but seemingly unavoidable problems leading to line drop and data loss. Standard interpolation can
often provide an acceptable solution if the loss is sparse. Interpolation, however, introduces a priori assumptions
about the smoothness of the data. When the loss is significant, as it is on MODIS/Aqua, interpolation creates
statistically or physically implausible image values and visible artifacts.
We have previously developed an algorithm to recreate the missing band 6 data from reliable data in the
other 500m bands using a quantitative restoration. Our algorithm uses values in a spectral/spatial neighborhood
of the pixel to be estimated, and proposes a value based on training data from the uncorrupted pixels. In this
paper, we will present extensions of that algorithm that both improve the performance and robustness of the
algorithm. We compare with prior work that just restores band 6 from band 7, and present statistical evidence
that data from bands 3, 4, and 5 are also pertinent. We will demonstrate that the increased accuracy from our
multi-band statistical estimate has significant consequences at the product level. As an example we show that
the restored band 6 has potential benefit to the NASA snow mask for MODIS/Aqua when compared with using
band 7 as a replacement for the damaged band 6.
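The multi-band statistical estimate can be illustrated with a least-squares regression trained on uncorrupted pixels. The synthetic "MODIS" bands and mixing coefficients below are stand-ins, not real radiances; the point is only the train-on-good-pixels, predict-on-broken-lines structure.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
# Synthetic stand-in for MODIS 500 m radiances: band 6 is made a noisy
# linear mixture of bands 3, 4, 5 and 7 (real spectra are only roughly so).
latent = rng.normal(size=(n, 2))
bands_3457 = latent @ rng.normal(size=(2, 4)) + 0.05 * rng.normal(size=(n, 4))
band6 = bands_3457 @ np.array([0.4, -0.2, 0.3, 0.6]) + 0.01 * rng.normal(size=n)

good = np.arange(n) % 5 != 0            # pretend every 5th detector line is dead

# Train a multi-band least-squares estimator on uncorrupted pixels only,
# then restore band 6 where detectors are broken.
X = np.c_[bands_3457[good], np.ones(good.sum())]
coef, *_ = np.linalg.lstsq(X, band6[good], rcond=None)
pred = np.c_[bands_3457[~good], np.ones((~good).sum())] @ coef

rmse = np.sqrt(np.mean((pred - band6[~good]) ** 2))
print(rmse)                             # close to the injected noise level
```

The paper's estimator additionally uses a spatial neighborhood around each missing pixel, which a purely spectral regression like this one omits.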
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVII, 80481A (2011) https://doi.org/10.1117/12.884020
The Advanced Baseline Imager (ABI) on GOES-R will support NOAA's objective of engaging and educating the
public on environmental issues by providing near real-time imagery of the earth-atmosphere system. True color
satellite images are beneficial to the public, as well as to scientists, who use these images as an important
"decision aid" and visualization tool. Unfortunately, ABI only has two visible bands (cyan and red) and does
not directly produce the three bands (blue, green, and red) used to create true color imagery.
We have developed an algorithm that will produce quantitative true color imagery from ABI. Our algorithm
estimates the three tristimulus values of the international standard CIE 1931 XYZ colorspace for each pixel of the
ABI image, and thus is compatible with a wide range of software packages and hardware devices. Our algorithm
is based on a non-linear statistical regression framework that incorporates both classification and local multi-spectral
regression using training data. We have used training data from the hyperspectral imager Hyperion.
Our algorithm to produce true color images from the ABI is not specific to ABI and may be applicable to other
satellites which, like the ABI, do not have the ability to directly produce RGB imagery.
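The classification-plus-local-regression idea can be sketched with a per-class linear map from two visible bands to XYZ tristimulus values. The classes, band values, and mappings below are invented for illustration; they are not ABI or Hyperion quantities.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4000
# Toy training set standing in for Hyperion-derived samples: two ABI-like
# visible bands and CIE XYZ targets, generated from two invented surface
# classes with different linear band-to-XYZ maps.
cls = rng.integers(0, 2, n)
bands = rng.uniform(0.1, 1.0, size=(n, 2))
maps = [np.array([[0.6, 0.2], [0.3, 0.7], [0.1, 0.9]]),
        np.array([[0.8, 0.1], [0.5, 0.4], [0.2, 0.6]])]
xyz = np.stack([maps[c] @ b for c, b in zip(cls, bands)])

# "Classification + local regression": fit one linear map per class and
# predict each pixel with its class's map.
preds = np.empty_like(xyz)
for c in (0, 1):
    m = cls == c
    W, *_ = np.linalg.lstsq(bands[m], xyz[m], rcond=None)
    preds[m] = bands[m] @ W

err = np.max(np.abs(preds - xyz))
print(err)                              # ~0: the per-class model is exact here
```

A single global regression cannot fit both classes at once, which is the motivation for classifying before regressing.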
Ezz Eldin F. Abdelkawy, Tarek A. Mahmoud, Wesam M. Hussein
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVII, 80481B (2011) https://doi.org/10.1117/12.883769
Hyperspectral imaging has become an important technique that increases the valuable information enclosed within an
image. The spectral cube produced by this type of imaging introduces a material signature known as the "spectral
signature". This signature is unique for each material, as it depends on the molecular composition of the material surface.
To produce the spectral cube, a spectrometer is used in the imaging device to split the electromagnetic energy at
different wavelengths before its projection onto the imaging array. This spectrometer may be a dispersive element, such as
a prism or grating, or an electronically tunable filter. Some dispersive spectrometers, such as the Fourier transform
infrared interferometer (FTIR) and the image multi-spectral sensing (IMSS) system, are based on sliding the lenses, or mirrors, along the
optical axis, which may result in slightly out-of-focus blurring. Blind deconvolution techniques have been successfully
used to decrease this blurring, but at the expense of edge sharpening, which may be a problem in some applications such
as target detection and recognition.
In this paper, we introduce a new method to deblur hyperspectral images while keeping edges as sharp as possible.
This is done by first detecting the edge locations and then applying a class of morphological filters. Motivated by
the success of threshold decomposition, gradient-based operators are used to detect the locations of these edges, followed
by an adaptive morphological filter to sharpen them. Experimental results demonstrate that the
performance of the proposed deblurring filter is superior to that of blind deconvolution methods.
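A minimal sketch of the detect-edges-then-sharpen-morphologically idea, using the classic "toggle contrast" filter as a stand-in for the paper's adaptive filter (the actual filter and thresholds are not specified here):

```python
import numpy as np

def toggle_sharpen(img, grad_thresh=0.1):
    """Edge-preserving sharpening: where the gradient magnitude exceeds a
    threshold, replace the pixel by the closer of the local 3x3 min and max
    (the morphological 'toggle contrast' filter). A simplified stand-in
    for the adaptive morphological filter described in the abstract."""
    pad = np.pad(img, 1, mode='edge')
    # local 3x3 min / max via stacked shifts
    win = np.stack([pad[i:i + img.shape[0], j:j + img.shape[1]]
                    for i in range(3) for j in range(3)])
    lo, hi = win.min(axis=0), win.max(axis=0)
    gy, gx = np.gradient(img)
    edges = np.hypot(gx, gy) > grad_thresh     # gradient-based edge detection
    out = img.copy()
    closer_hi = (hi - img) < (img - lo)
    out[edges & closer_hi] = hi[edges & closer_hi]
    out[edges & ~closer_hi] = lo[edges & ~closer_hi]
    return out

# Blurred step edge: sharpening pushes the transition toward 0 or 1.
x = np.linspace(0, 1, 32)
img = np.tile(1 / (1 + np.exp(-20 * (x - 0.5))), (32, 1))
sharp = toggle_sharpen(img)
print(np.abs(sharp - 0.5).min() >= np.abs(img - 0.5).min())  # → True
```

Because only edge pixels are modified, smooth regions are left untouched, which is the property the abstract emphasizes over blind deconvolution.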
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVII, 80481C (2011) https://doi.org/10.1117/12.883383
In this paper, sparse kernel-based ensemble learning for hyperspectral anomaly detection is proposed. The
proposed technique aims to optimize an ensemble of kernel-based one-class classifiers, such as Support Vector
Data Description (SVDD) classifiers, by estimating optimal sparse weights. In this method, hyperspectral
signatures are first randomly sub-sampled into a large number of spectral feature subspaces. An enclosing
hypersphere that defines the support of spectral data, corresponding to the normalcy/background data, in the
Reproducing Kernel Hilbert Space (RKHS) of each respective feature subspace is then estimated using regular
SVDD. The enclosing hypersphere basically represents the spectral characteristics of the background data in the
respective feature subspace. The joint hypersphere is learned by optimally combining the hyperspheres from the
individual RKHS, while imposing the l1 constraint on the combining weights. The joint hypersphere, representing
the optimal compact support of the local hyperspectral data in the joint feature subspaces, is then used
to test each pixel in the hyperspectral image data to determine whether it belongs to the local background or not.
The outliers are considered to be targets. The performance comparison between the proposed technique and the
regular SVDD is provided using the HYDICE hyperspectral images.
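The ensemble-of-hyperspheres idea can be approximated in a few lines if the SVDD hypersphere is replaced by the simpler distance-to-kernel-mean support estimate (the two coincide when no training point is treated as an outlier), and the learned sparse weights by uniform l1-normalized weights. This is a hedged sketch of the structure, not the paper's optimizer.

```python
import numpy as np

def rbf(A, B, gamma=0.1):
    return np.exp(-gamma * ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1))

def rkhs_dist2(train, test, gamma=0.1):
    """Squared RKHS distance of test points to the kernel mean of the
    background set -- a simplified stand-in for the SVDD hypersphere."""
    return (np.diag(rbf(test, test, gamma))
            - 2 * rbf(test, train, gamma).mean(1)
            + rbf(train, train, gamma).mean())

rng = np.random.default_rng(3)
n_bands = 30
background = rng.normal(size=(200, n_bands))
test = np.vstack([rng.normal(size=(5, n_bands)),      # 5 background-like pixels
                  np.full((1, n_bands), 3.0)])        # 1 clear outlier

# Ensemble over random spectral feature subspaces. Uniform l1 weights are
# used here; the paper learns sparse optimal weights instead.
subspaces = [rng.choice(n_bands, size=8, replace=False) for _ in range(10)]
score = np.mean([rkhs_dist2(background[:, s], test[:, s]) for s in subspaces],
                axis=0)
print(score.argmax())   # → 5, the outlying pixel
```

Pixels far outside the background support get the largest combined distance and are declared targets.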
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVII, 80481D (2011) https://doi.org/10.1117/12.883371
Hyperspectral pixels are acquired in hundreds of narrow and continuous spectral bands, and the hyperspectral data cubes
typically contain hundreds of megabytes. Analysis and processing of the high-dimensional hyperspectral data are computationally
expensive and memory inefficient. However, there is a large amount of redundancy between neighboring spectral
bands and the hyperspectral pixels lie in a much lower dimensional subspace. Therefore, numerous techniques can be
applied to reduce the dimensionality while maintaining the structure of the data. This would lead to a significant reduction
of the complexity of the imaging system, as well as an improvement of the computational efficiency of the detection
algorithms. In this paper, we explore the use of several dimensionality reduction techniques that can be easily integrated
into the imaging sensors. We also investigate their effect on the performance of classical target detection techniques for
hyperspectral images, including spectral matched filters (SMF), matched subspace detectors (MSD), support vector machines
(SVM), and the RX anomaly detection algorithm. Specifically, each N-dimensional spectral pixel is embedded into an
M-dimensional measurement space with M « N by a linear transformation (e.g., random measurement matrices, uniform
downsampling, PCA). The SMF, MSD, SVM, and RX detectors are then applied to the M-dimensional measurement
vectors to detect the targets of interests and their detection performances are compared to those obtained from the entire
N-dimensional spectrum via receiver operating characteristic curves. Through extensive experiments on several HSI
datasets, we demonstrate that only 1/5 to 1/3 of the measurements (i.e., a compression ratio M/N of 1/5 to 1/3) are
necessary to achieve detection performance comparable to that obtained by exploiting the full N-dimensional pixels.
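The experiment can be miniaturized as follows: score pixels with a spectral matched filter in the full band space and again after a random linear measurement with M/N = 1/5. All data here are synthetic Gaussians with an invented implanted target; the point is only that the detection statistic survives the compression.

```python
import numpy as np

rng = np.random.default_rng(4)
N, M, n = 150, 30, 2000            # full bands, compressed dimension, pixels

target = rng.normal(size=N)
background = rng.normal(size=(n, N))
x_target = 1.5 * target + rng.normal(size=N)   # implanted target + clutter

def smf(X, s, mu, cov):
    """Spectral matched filter: score(x) = s' C^-1 (x - mu) / sqrt(s' C^-1 s)."""
    w = np.linalg.solve(cov, s)
    return (X - mu) @ w / np.sqrt(s @ w)

def run(X, xt, s):
    mu, cov = X.mean(0), np.cov(X.T) + 1e-3 * np.eye(X.shape[1])
    return smf(np.vstack([X, xt[None]]), s, mu, cov)

full = run(background, x_target, target)

# Linear compressive measurement: a random M x N matrix (M/N = 1/5).
P = rng.normal(size=(M, N)) / np.sqrt(M)
comp = run(background @ P.T, P @ x_target, P @ target)

# The target pixel should top the score list in both domains.
print(full[-1] > full[:-1].max(), comp[-1] > comp[:-1].max())
```

The compressed score loses some deflection (roughly the factor sqrt(M/N) for a random subspace) but the target still separates cleanly from the background here.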
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVII, 80481E (2011) https://doi.org/10.1117/12.883400
The detection of gaseous chemical plumes in long-wave infrared hyperspectral images is often accomplished
with algorithms derived from linear radiance models, such as the matched filter. While such algorithms can be
highly effective, deviations of the physical radiative transfer process from the idealized linear model can reduce
performance. In particular, the steering vector employed in the matched filter will never exactly match the
observed plume signature, the estimated background covariance matrix will often suffer some contamination
by the plume signature, and the plume and background will typically be spatially correlated to some extent.
In combination, these effects can be worse than they are individually. In this paper, we systematically vary
these factors to study their impact on detection using a data set of synthetic plumes embedded into measured
background data.
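The covariance-contamination effect mentioned above is easy to reproduce: estimating the background covariance from pixels that contain the plume signature inflates the variance along the signature direction and lowers the matched-filter deflection sqrt(s' C^-1 s). The numbers below are synthetic, not from the paper's data set.

```python
import numpy as np

rng = np.random.default_rng(5)
N, n = 60, 3000
s = rng.normal(size=N)
s /= np.linalg.norm(s)                   # plume signature (unit norm)
bg = rng.normal(size=(n, N))             # synthetic background radiances

def mf_deflection(cov):
    """Matched-filter deflection sqrt(s' C^-1 s) for a unit-strength plume."""
    return np.sqrt(s @ np.linalg.solve(cov, s))

clean = np.cov(bg.T)

# Contaminate 10% of the 'background' pixels with the plume signature before
# estimating the covariance, as happens when plume pixels cannot be excluded
# from the estimation window.
bg_contam = bg.copy()
bg_contam[: n // 10] += 2.0 * s
contam = np.cov(bg_contam.T)

print(mf_deflection(clean), mf_deflection(contam))  # contamination lowers it
```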
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVII, 80481F (2011) https://doi.org/10.1117/12.884360
The central parameter in the quantification of chemical vapor plumes via remote sensing is the mean concentration-path
length (CL) product, which can lead to estimates of the absolute gas quantity present. The goal of this
paper is to derive Cramer-Rao lower bounds on the variance of an unbiased estimator of CL in concert with other
parameters of a general non-linear radiance model. These bounds offer a guide to feasibility of CL estimation
that is not dependent on any given algorithm. In addition, the derivation of the bounds yields great insight into
the physical and phenomenological mechanisms that control plume quantification.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVII, 80481G (2011) https://doi.org/10.1117/12.884585
This paper shows how to use the public-domain raytracer POV-Ray (Persistence Of Vision Raytracer) to render multi-
and hyper-spectral scenes. The scripting environment allows automatic changing of the reflectance and transmittance
parameters. The radiosity rendering mode allows accurate simulation of multiple reflections between surfaces and also
allows semi-transparent surfaces such as plant leaves. We show that POV-Ray computes occlusion accurately using a
test scene with two blocks under a uniform sky. A complex scene representing a plant canopy is generated using a few
lines of script. With appropriate rendering settings, shadows cast by leaves are rendered in many bands. Comparing
single and multiple reflection renderings, the effect of multiple reflections is clearly visible and accounts for 25% of the
overall apparent canopy reflectance in the near infrared.
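The scripted per-band workflow might look like the following driver, which substitutes a band-specific reflectance into a scene template before each render. The band names, reflectance values, and the toy geometry are illustrative assumptions, not taken from the paper, and the POV-Ray snippet is only a minimal sketch of the scene description language.

```python
# Illustrative per-band reflectance for a 'leaf' surface (made-up values).
leaf_reflectance = {"red_660nm": 0.05, "nir_850nm": 0.45}

scene_template = """#version 3.7;
global_settings {{ assumed_gamma 1.0 radiosity {{ }} }}  // radiosity = multiple reflections
light_source {{ <0, 100, 0> color rgb 1 }}
plane {{ y, 0 pigment {{ color rgb 0.2 }} }}
// a 'leaf' whose reflectance is substituted per band:
sphere {{ <0, 1, 0>, 0.5 pigment {{ color rgb {refl} }} }}
"""

# One scene file per band; each would then be rendered, e.g. with
#   povray canopy_red_660nm.pov
scenes = {band: scene_template.format(refl=refl)
          for band, refl in leaf_reflectance.items()}
print(sorted(scenes))   # → ['nir_850nm', 'red_660nm']
```

Rendering each generated scene yields one grayscale image per band, which stacked together form the simulated spectral cube.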
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVII, 80481H (2011) https://doi.org/10.1117/12.885564
The utility of a hyperspectral image for target detection can be measured by synthetically implanting target
spectra in the image and applying detection algorithms.1 In this paper we apply this method, called the target
implant method, for the purpose of determining the top performing algorithms for a given image and given
target and for determining the relative difficulty for detection of targets in a given image with a given detector.
Our tests include variations on the matched filter, adaptive coherence/cosine estimator and constrained energy
minimization detection algorithms. This enables one to predict the fill fraction at which a given target can be
detected and the best detection algorithm in a given image under ideal circumstances. Comparison of predictions
from this method to detection performance on real target pixels shows that the target implant method does
provide accurate relative predictions in terms of both target difficulty and detector performance, but reliably
predicting the actual number of false alarms for a given target at a given fill fraction is difficult or impossible.
In our tests we used images from the Cooke City Collection2,3 and from the Forest Radiance Collection.4 The
Cooke City Collection was taken with the HyMap sensor on July 4, 2006. This imagery has 126 bands ranging
from 453.8 to 2496.3 nm at a ground sample distance of approximately 3 meters. Seven flightlines were collected,
six of which contain 4 fabric target panels and 3 vehicles with known spectra. The Forest Radiance imagery
had 210 spectral bands (145 good bands) ranging from 397.4 nm to 2496.5 nm, with a ground sample distance of
approximately 1.9 meters.
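The target implant itself is a one-line linear mixing model; a toy version of the fill-fraction experiment looks like the following. The background, target spectrum, and matched-filter-style scorer are synthetic stand-ins, not the Cooke City or Forest Radiance data.

```python
import numpy as np

rng = np.random.default_rng(6)
n, N = 1000, 50
bg = rng.normal(5.0, 1.0, size=(n, N))         # synthetic background pixels
target = rng.normal(8.0, 0.5, size=N)          # synthetic target spectrum

def implant(pixel, target, fill):
    """Target implant: linearly mix a fraction `fill` of target into a pixel."""
    return (1 - fill) * pixel + fill * target

def mf_scores(X, s):
    """Matched-filter-style scores along the (target - mean) direction."""
    mu = X.mean(0)
    w = np.linalg.solve(np.cov(X.T) + 1e-3 * np.eye(X.shape[1]), s - mu)
    return (X - mu) @ w

# Implant the target into one pixel at two fill fractions and record the
# fraction of background pixels scoring at or above it (an exceedance rate).
results = {}
for fill in (0.1, 0.5):
    scene = bg.copy()
    scene[0] = implant(bg[0], target, fill)
    sc = mf_scores(scene, target)
    results[fill] = float((sc[1:] >= sc[0]).mean())
print(results)   # larger fill fraction -> lower exceedance rate
```

Sweeping the fill fraction and recording where the exceedance rate drops to an acceptable level is exactly the prediction the target implant method provides.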
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVII, 80481I (2011) https://doi.org/10.1117/12.884649
Data dimensionality reduction (DR) is generally performed by first fixing the size of the reduced dimensionality at a
certain number, say p, and then finding a technique to reduce the original data space to a low-dimensional data space
with dimensionality specified by p. This paper introduces a new concept of dynamic dimensionality reduction (DDR),
which treats the parameter p as a variable, varying its value to make p adaptive, in contrast to the commonly used DR,
referred to as static dimensionality reduction (SDR), in which the parameter p is fixed at a constant value. In order to
materialize DDR, another new concept, referred to as progressive DR (PDR), is also developed so that the DR can be
performed progressively to adapt to the variable size of data dimensionality determined by varying the value of p. The
advantages of DDR over SDR are demonstrated through experiments conducted on hyperspectral image classification.
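The PDR idea of growing p and monitoring a criterion can be sketched with PCA, using reconstruction error as a stand-in criterion (the paper's criterion is classification performance, and its DR technique is not specified here):

```python
import numpy as np

rng = np.random.default_rng(7)
n, N = 500, 40
# Synthetic 'hyperspectral' pixels with intrinsic dimensionality ~5.
X = rng.normal(size=(n, 5)) @ rng.normal(size=(5, N)) \
    + 0.01 * rng.normal(size=(n, N))
Xc = X - X.mean(0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)

def rel_error(p):
    """Relative reconstruction error after keeping p principal components."""
    Z = Xc @ Vt[:p].T @ Vt[:p]
    return np.linalg.norm(Xc - Z) / np.linalg.norm(Xc)

# Progressive DR: grow p and monitor the criterion; a DDR-style rule would
# stop once the criterion stops improving.
errors = {p: rel_error(p) for p in (1, 2, 5, 10)}
print(errors)   # the error collapses once p reaches the intrinsic dimension
```

Because the projection for p components nests inside the projection for p+1, the progression reuses earlier work rather than recomputing the reduction from scratch at each p.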
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVII, 80481J (2011) https://doi.org/10.1117/12.881642
Historically, much of spectral image analysis has revolved around assumptions of multivariate normality. If the background
spectral distribution can be assumed to be multivariate normal, then algorithms for anomaly detection,
target detection, and classification can be developed around that assumption. However, as the current generation
of sensors typically has higher spatial and/or spectral resolution, the spectral distribution complexity of the collected data
is increasing, and these assumptions are no longer adequate, particularly image-wide. Nevertheless, large
portions of the imagery may be accurately described by a multivariate normal distribution. A new empirical
method for assessing the multivariate normality of a hyperspectral distribution is presented here. This method
assesses the multivariate normality of individual spectral image tiles and is applied to the large area search problem.
Additionally, the methodology is applied to a selection of full hyperspectral data sets for general content
evaluation. This information can be used to indicate the degree of multivariate normality (or complexity) of the
data or data regions and to determine the appropriate algorithm to use globally or locally for spatially adaptive
processing.
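The paper's specific empirical test is not given in the abstract; as a hedged illustration, one standard way to assess the multivariate normality of a tile of pixels is Mardia's kurtosis statistic, whose z-score is approximately standard normal when the tile is Gaussian:

```python
import numpy as np

def mardia_kurtosis_z(tile):
    """Mardia's multivariate kurtosis z-score for a tile of pixels
    (n x d); approximately N(0, 1) under multivariate normality."""
    n, d = tile.shape
    Xc = tile - tile.mean(axis=0)
    inv = np.linalg.inv(Xc.T @ Xc / n)
    m2 = np.einsum('ij,jk,ik->i', Xc, inv, Xc)   # squared Mahalanobis distances
    b2 = np.mean(m2 ** 2)                        # sample multivariate kurtosis
    return (b2 - d * (d + 2)) / np.sqrt(8.0 * d * (d + 2) / n)

rng = np.random.default_rng(1)
gauss_tile = rng.multivariate_normal(np.zeros(5), np.eye(5), size=4000)
heavy_tile = rng.standard_t(df=3, size=(4000, 5))   # heavy-tailed, non-Gaussian
z_gauss = abs(mardia_kurtosis_z(gauss_tile))        # small: tile looks normal
z_heavy = abs(mardia_kurtosis_z(heavy_tile))        # large: normality rejected
```

A per-tile score like this could flag which image regions are "simple" enough for Gaussian-based detectors and which need more complex models.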
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVII, 80481K (2011) https://doi.org/10.1117/12.886927
Visualization of the high-dimensional data set that makes up hyperspectral images necessitates a dimensionality
reduction approach to make that data useful to a human analyst. The expression of spectral data as color images,
individual pixel spectra plots, principal component images, and 2D/3D scatter plots of a subset of the data are
a few examples of common techniques. However, these approaches leave the user with little ability to intuit
knowledge of the full N-dimensional spectral data space or to directly or easily interact with that data. In this
work, we look at developing an interactive, intuitive visualization and analysis tool based on using a Poincaré
disk as a window into that high dimensional space. The Poincaré disk represents an infinite, two-dimensional
hyperbolic space such that distances and areas increase exponentially as you move farther from the center of the
disk. By projecting N-dimensional data into this space using a non-linear, yet relative distance metric preserving
projection (such as the Sammon projection), we can simultaneously view the entire data set while maintaining
natural clustering and spacing. The disk also provides a means to interact with the data; the user is presented
with a "fish-eye" view of the space which can be navigated and manipulated with a mouse to "zoom" into clusters
of data and to select spectral data points. By coupling this interaction with a synchronous view of the data as
a spatial RGB image and the ability to examine individual pixel spectra, the user has full control over the data
set for classification, analysis, and instructive use.
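The hyperbolic geometry underlying the disk can be made concrete. The functions below are an illustrative sketch, not the tool's actual projection pipeline: they map planar points into the unit disk with the origin-centred exponential map and compute hyperbolic distance. A point at Euclidean radius r lands at disk radius tanh(r/2) while keeping hyperbolic distance r from the centre, which is exactly the "exponential growth toward the rim" the abstract describes:

```python
import numpy as np

def to_poincare(points):
    """Map planar points into the open unit disk via the origin-centred
    exponential map: Euclidean radius r -> disk radius tanh(r / 2)."""
    r = np.linalg.norm(points, axis=1, keepdims=True)
    scale = np.where(r > 0, np.tanh(r / 2) / np.maximum(r, 1e-12), 1.0)
    return points * scale

def poincare_dist(u, v):
    """Hyperbolic distance between two points inside the unit disk."""
    num = 2 * np.sum((u - v) ** 2)
    den = (1 - np.sum(u ** 2)) * (1 - np.sum(v ** 2))
    return np.arccosh(1 + num / den)

pts = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 3.0]])
disk = to_poincare(pts)        # all points land strictly inside the disk
```

However far out a point lies, it maps inside the disk, yet its hyperbolic distance from the centre is preserved, which is what lets the whole data set stay in view at once.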
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVII, 80481L (2011) https://doi.org/10.1117/12.884178
Simulated imagery has been and will continue to be a great resource to the remote sensing community. It not only fills
in the gaps when real imagery is not available, but allows the user to know and control every aspect of the scene. Over
the last 20 years we have seen its value in algorithm development, systems level design trade studies and
phenomenology investigation. The realism of this data is often linked to its radiometric accuracy. The Rochester
Institute of Technology's Digital Imaging and Remote Sensing (DIRS) Laboratory has for years done extensive work on making
simulations more realistic while developing our in-house image generator, DIRSIG. In the past we have
invested hundreds of man-hours to painstakingly build large-scale scenes of real locations with manual methods.
Recently, new procedural tools and open source geometry repositories have allowed the creation of similar scenes with
improved scene clutter in significantly less time. It is now possible to assemble and build large city-scale scene
geometries with a more automated workflow over the course of a few hours. Even with these advances, an observer
viewing these high resolution, complex, spectrally and spatially textured simulated images is still visually aware that
they are nothing but simulations, albeit radiometrically and spectrally accurate. This paper will investigate the above
concern regarding simulated imagery by looking at the utility, evolution and future of image simulations.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVII, 80481M (2011) https://doi.org/10.1117/12.883932
Spectral pixels in a hyperspectral image are known to lie in a low-dimensional subspace. The Linear Mixture Model
states that every spectral vector is closely represented by a linear combination of a few signatures. When no prior
knowledge of the representing signatures is available, they must be extracted from the image data before the abundances of
each vector can be determined. The whole process is often referred to as unsupervised endmember extraction and
unmixing.
The Linear Mixture Model can be extended to the Sparse Mixture Model R = MS + N, where not only single pixels but the
whole hyperspectral image has a sparse representation over a dictionary M built from the data itself, and the abundance
vectors (columns of S) are sparse at the same locations. The endmember extraction and unmixing tasks can then be done
concurrently by solving for a row-sparse abundance matrix S. In this paper, we pose a convex optimization problem and
use simultaneous sparse recovery techniques to find S. This approach promises a globally optimal solution for the
process, rather than the suboptimal solutions of iterative methods that extract endmembers one at a time. We use the l1/l2 norm
of S to promote row-sparsity in simultaneous sparse recovery, and then impose additional hyperspectral constraints on the
abundance vectors (such as non-negativity and sum-to-one).
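The row-sparse recovery step can be sketched with a proximal-gradient (ISTA-style) solver using a row-wise group soft-threshold; this is a minimal illustration, not the paper's actual solver, and the non-negativity and sum-to-one constraints are omitted here:

```python
import numpy as np

def row_sparse_unmix(R, M, lam=0.05, iters=500):
    """Minimise 0.5 * ||R - M S||_F^2 + lam * sum_i ||row_i(S)||_2 by
    proximal gradient; the row-wise group soft-threshold drives whole
    rows of S (i.e. unused dictionary signatures) to zero."""
    L = np.linalg.norm(M, 2) ** 2                # Lipschitz constant of the gradient
    S = np.zeros((M.shape[1], R.shape[1]))
    for _ in range(iters):
        Z = S - M.T @ (M @ S - R) / L            # gradient step on the data term
        norms = np.linalg.norm(Z, axis=1, keepdims=True)
        S = Z * np.maximum(1 - (lam / L) / np.maximum(norms, 1e-12), 0)
    return S

rng = np.random.default_rng(2)
M = rng.standard_normal((30, 8))                 # dictionary: 8 candidate signatures
S_true = np.zeros((8, 20))
S_true[[1, 4], :] = rng.random((2, 20))          # only 2 rows (endmembers) active
R = M @ S_true                                   # 20 pixels, 30 bands
S_hat = row_sparse_unmix(R, M)
active = np.where(np.linalg.norm(S_hat, axis=1) > 0.1)[0]
```

The surviving rows of S identify the endmembers and carry their abundances, so extraction and unmixing happen in one solve, as the abstract describes.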
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVII, 80481N (2011) https://doi.org/10.1117/12.883374
In this paper, we propose a joint sparsity model for target detection in hyperspectral imagery. The key innovative idea here
is that hyperspectral pixels within a small neighborhood in the test image can be simultaneously represented by a linear
combination of a few common training samples, but weighted with a different set of coefficients for each pixel. The joint
sparsity model automatically incorporates the inter-pixel correlation within the hyperspectral imagery by assuming that
neighboring spectral pixels usually consist of similar materials. The sparse representations of the neighboring pixels are
obtained by simultaneously decomposing the pixels over a given dictionary consisting of training samples of both the target
and background classes. The recovered sparse coefficient vectors are then directly used for determining the label of the test
pixels. Simulation results on several real hyperspectral images show that the proposed algorithm based on the joint sparsity
model outperforms the classical hyperspectral target detection algorithms, such as the popular spectral matched filters,
matched subspace detectors, adaptive subspace detectors, as well as binary classifiers such as support vector machines.
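Simultaneous decomposition over a shared support can be sketched with Simultaneous Orthogonal Matching Pursuit (SOMP), one common greedy solver for this joint sparsity model; whether it matches the paper's exact recovery routine is an assumption:

```python
import numpy as np

def somp(Y, D, k):
    """Simultaneous OMP: represent all columns of Y with the same k
    dictionary atoms (a shared support) but per-column coefficients."""
    residual, support = Y.copy(), []
    for _ in range(k):
        corr = np.linalg.norm(D.T @ residual, axis=1)   # joint correlation per atom
        support.append(int(np.argmax(corr)))
        A = D[:, support]
        coeff, *_ = np.linalg.lstsq(A, Y, rcond=None)   # refit on current support
        residual = Y - A @ coeff
    return support, coeff

rng = np.random.default_rng(3)
D = rng.standard_normal((40, 15))
D /= np.linalg.norm(D, axis=0)                   # unit-norm training samples
coef = rng.standard_normal((2, 6))
Y = D[:, [2, 7]] @ coef                          # 6 neighbouring pixels, shared support
support, coeff = somp(Y, D, k=2)
```

In the detection setting, whether the recovered support (or residual energy) falls on target or background training samples would determine the label of the pixel neighbourhood.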
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVII, 80481O (2011) https://doi.org/10.1117/12.884190
Linear spectral unmixing and endmember selection are two of the many tasks that can be accomplished using hyperspectral
imagery. The quality of the unmixing results depends on an accurate estimate of the number of endmembers used in
the analysis. Too many estimated endmembers produce overfitting of the spectral unmixing results; too few estimated
endmembers produce spectral unmixing results with large residual errors. Several statistical and geometrical approaches
have been developed to estimate the number of endmembers, but many of these approaches rely on using the global
dataset. The global approach does not take into consideration local endmember variability, which is of particular interest
in high-spatial-resolution imagery. Here, the number of endmembers within local image tiles is estimated by using a novel,
spatially adaptive approach. Each pixel is unmixed using the locally identified endmembers and global abundance maps
are generated by clustering these locally derived endmembers. Comparisons are made between this new approach and
an established global method that uses PCA to estimate the number of endmembers and SMACC to identify the spectra.
Multiple images with varying spatial resolution are used in the comparison of methodologies and conclusions are drawn
based on per-pixel residual unmixing errors.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVII, 80481P (2011) https://doi.org/10.1117/12.884650
Linear Spectral Mixture Analysis (LSMA) is a theory developed to perform spectral unmixing, for which three major LSMA
techniques have been developed: Least Squares Orthogonal Subspace Projection (LSOSP), Non-negativity Constrained Least Squares (NCLS),
and Fully Constrained Least Squares (FCLS). These three techniques were later extended to
Fisher's LSMA (FLSMA), Weighted Abundance Constrained LSMA (WAC-LSMA), and kernel-based LSMA
(KLSMA). This paper combines the approaches of KLSMA and WAC-LSMA to derive the most general version of
LSMA, kernel-based WAC-LSMA (KWAC-LSMA), which includes all the above-mentioned LSMAs as special
cases. The utility of KWAC-LSMA is further demonstrated by multispectral and hyperspectral experiments for
performance analysis.
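Of the base techniques named above, FCLS is the most constrained. A common way to implement it follows the usual sum-to-one augmentation trick; the weight delta and the projected-gradient solver below are illustrative choices for this sketch, not the paper's:

```python
import numpy as np

def fcls(M, r, delta=10.0, iters=20000):
    """Fully Constrained Least Squares sketch: abundances a >= 0 with
    sum(a) = 1, via appending a weighted sum-to-one row to the linear
    system, then running projected gradient descent on a >= 0."""
    A = np.vstack([M, delta * np.ones((1, M.shape[1]))])
    b = np.append(r, delta)
    step = 1.0 / np.linalg.norm(A, 2) ** 2       # 1 / Lipschitz constant
    a = np.full(M.shape[1], 1.0 / M.shape[1])    # feasible uniform start
    for _ in range(iters):
        a = np.maximum(a - step * (A.T @ (A @ a - b)), 0.0)  # project onto a >= 0
    return a

rng = np.random.default_rng(4)
M = rng.random((20, 4))                          # 4 endmember signatures, 20 bands
a_true = np.array([0.5, 0.3, 0.2, 0.0])          # true abundances, sum to one
r = M @ a_true                                   # noiseless mixed pixel
a_hat = fcls(M, r)
```

The kernel and weighted variants the paper unifies modify this same least-squares core: a kernel replaces the inner products, and a weighting matrix reweights the residual.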
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVII, 80481Q (2011) https://doi.org/10.1117/12.884098
Hyperspectral sensors deliver a data cube consisting of hundreds of images gathered in adjacent frequency
bands. Processing such data requires solutions that handle the computational complexity and the information
redundancy. In principle, two different approaches can be deployed. Data compression merges this imagery into
a few images, preserving only the essential information; small variations are treated as disturbances
and hence removed. Band selection eliminates superfluous bands and leaves the others unmodified, so even minor
deviations are preserved.
In this paper, we present a novel band selection method developed especially for surveillance purposes, where
the capability to detect even small variations is an essential requirement, fulfilled only by the second approach.
The computational complexity and the performance of such an algorithm depend on the available information.
If complete knowledge about the targets and the background is available, contrast maximization establishes
a perfect band selection. Without any knowledge, the selection has to be performed by exploiting the band
attributes, often resulting in a poor choice. To avoid this, the developed algorithm incorporates the
information accessible from the monitored scene. In particular, features (e.g. anomalies) based on proximity
relations are extracted in each band. Subsequently, their suitability is assessed by means
of the value margins and the associated distributions. The final selection is then based on inspection
of the variations caused by illumination and other external effects. We demonstrate and evaluate the
appropriateness of this new method with a practical example.
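When target and background statistics are fully known, contrast maximization reduces to scoring bands by separability. The Fisher-style per-band score below is a generic illustration of that limiting case, not the paper's scoring function:

```python
import numpy as np

def select_bands(target, background, k):
    """Score each band by a Fisher-style contrast ratio between target
    and background samples, then keep the k highest-scoring bands."""
    mt, mb = target.mean(axis=0), background.mean(axis=0)
    vt, vb = target.var(axis=0), background.var(axis=0)
    score = (mt - mb) ** 2 / (vt + vb + 1e-12)   # per-band separability
    return np.argsort(score)[::-1][:k]

rng = np.random.default_rng(5)
bg = rng.standard_normal((200, 12))              # background pixels, 12 bands
tg = rng.standard_normal((200, 12))
tg[:, [3, 8]] += 5.0                             # targets differ only in bands 3 and 8
bands = select_bands(tg, bg, k=2)
```

The paper's contribution is precisely the harder middle ground, where such target statistics are unavailable and scene-derived features must stand in for them.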
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVII, 80481R (2011) https://doi.org/10.1117/12.884359
In this paper, we propose a denoising method for hyperspectral images using a joint bilateral filter. The joint bilateral
filter is applied to the noisy image bands, guided by a fused image of the hyperspectral bands. This fused image is a
single grayscale image obtained by a weighted summation of the hyperspectral image bands, and it retains the features
and details of each band. Therefore, the joint bilateral filter with the fused image is powerful in
reducing noise while preserving the characteristics of the individual spectral bands. We evaluated the performance of the
proposed noise reduction method on hyperspectral imaging systems, which we developed for visible and near-infrared
spectral regions. Experimental results show that the proposed method outperforms the conventional approaches, such as
the basic bilateral filter.
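The core operation can be sketched directly: a joint (cross) bilateral filter takes its range weights from a guide image, here a synthetic stand-in for the fused band image, rather than from the noisy band being filtered. The parameter values are illustrative:

```python
import numpy as np

def joint_bilateral(noisy, guide, radius=3, sigma_s=2.0, sigma_r=0.1):
    """Joint (cross) bilateral filter: smooth `noisy` while computing
    the range weights from the cleaner `guide` image, not from `noisy`."""
    H, W = noisy.shape
    n = np.pad(noisy, radius, mode='reflect')
    g = np.pad(guide, radius, mode='reflect')
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys ** 2 + xs ** 2) / (2 * sigma_s ** 2))
    out = np.zeros_like(noisy)
    for i in range(H):
        for j in range(W):
            pn = n[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            pg = g[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            w = spatial * np.exp(-(pg - g[i + radius, j + radius]) ** 2
                                 / (2 * sigma_r ** 2))
            out[i, j] = np.sum(w * pn) / np.sum(w)  # normalised weighted mean
    return out

rng = np.random.default_rng(6)
clean = np.zeros((32, 32)); clean[:, 16:] = 1.0  # band with a sharp step edge
guide = clean                                    # fused image stands in as the guide
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
den = joint_bilateral(noisy, guide)
```

Because the edge lives in the guide, range weights suppress averaging across it: noise drops sharply while the step survives, which is the behaviour the abstract claims for the fused-image guide.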
Cheng-Chun Chang, Nan-Ting Lin, Umpei Kurokawa, Byung Il Choi
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVII, 80481S (2011) https://doi.org/10.1117/12.886342
In recent years, miniature spectrometers have been found useful in many applications for resolving the spectral signature of
objects or materials. In this paper, algorithms for the filter-array spectrum sensors used to realize miniature spectrometers are
investigated. Conventionally, the filter-array spectrum sensor is modeled as an over-determined problem, and the
spectrum is reconstructed by solving a set of linear equations. In contrast, we model the spectrum
reconstruction process as an under-determined problem and introduce the concept of template selection by sparse
representation. An L1-minimization algorithm is tested to achieve high reconstruction resolution. Simulation results
show that superior spectrum reconstruction quality is possible with this under-determined approach.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVII, 80481T (2011) https://doi.org/10.1117/12.887426
Hyperspectral data, owing to the higher information content afforded by its higher spectral resolution, is
increasingly being used for various remote sensing applications, including information extraction at the
subpixel level. There is, however, usually a lack of matching fine spatial resolution data, particularly
for target detection applications. Thus, there always exists a tradeoff between spectral and spatial
resolution due to considerations of the type of application, its cost, and the associated analytical and
computational complexities. Typically, whenever an object or ground cover class, whether manmade or
natural (called a target, endmember, component, or class), is spectrally but not spatially resolved,
mixed pixels result in the image. Numerous manmade and/or natural disparate
substances may thus occur inside such mixed pixels, giving rise to mixed pixel classification or subpixel
target detection problems. Various spectral unmixing models, such as Linear Mixture Modeling
(LMM), are in vogue to recover the components of a mixed pixel. Spectral unmixing outputs both the
endmember spectra and their corresponding abundance fractions inside the pixel. It does not, however,
provide the spatial distribution of these abundance fractions within a pixel, which limits the
applicability of hyperspectral data for subpixel target detection. In this paper, a new inverse-
Euclidean-distance-based super-resolution mapping method is presented that achieves
subpixel target detection in hyperspectral images by adjusting the spatial distribution of abundance
fractions within a pixel. Results obtained at different resolutions indicate that super-resolution
mapping may effectively aid subpixel target detection.
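The abstract does not spell out the mapping rule, so the following is a hedged sketch of a generic inverse-Euclidean-distance (attraction) super-resolution mapping for one coarse pixel and two classes: each subpixel's attraction to a class is the neighbouring coarse pixels' abundance weighted by inverse distance, and subpixels are labelled in order of attraction subject to the centre pixel's own abundance budget:

```python
import numpy as np

def attraction_map(abund, scale):
    """Two-class super-resolution mapping sketch for the centre pixel of
    a 3x3 coarse grid. `abund` holds each coarse pixel's class-1 fraction;
    the centre pixel (spanning [0,1]^2) is split into scale x scale
    subpixels, labelled in order of inverse-distance attraction."""
    sub = np.linspace(0, 1, 2 * scale + 1)[1::2]      # subpixel centres
    cy, cx = np.mgrid[-0.5:1.6:1.0, -0.5:1.6:1.0]     # coarse-pixel centres
    attract = np.zeros((scale, scale))
    for i, sy in enumerate(sub):
        for j, sx in enumerate(sub):
            d = np.sqrt((cy - sy) ** 2 + (cx - sx) ** 2)
            attract[i, j] = np.sum(abund / d)         # inverse-distance weighting
    n1 = int(round(abund[1, 1] * scale * scale))      # class-1 subpixel budget
    order = np.argsort(attract, axis=None)[::-1]
    out = np.zeros(scale * scale, dtype=int)
    out[order[:n1]] = 1                               # most-attracted subpixels first
    return out.reshape(scale, scale)

# class 1 dominates the left neighbours; the centre pixel is half class 1
abund = np.array([[1.0, 0.5, 0.0],
                  [1.0, 0.5, 0.0],
                  [1.0, 0.5, 0.0]])
sr = attraction_map(abund, scale=2)    # class-1 subpixels gravitate left
```

The abundance fraction of the centre pixel is preserved exactly; only its spatial arrangement within the pixel is adjusted, which is the sense in which such mapping aids subpixel target detection.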