In the focal plane of a pushbroom imager, a linear array of pixels is scanned across the scene, building up the image one row at a time. For the Multispectral Thermal Imager (MTI), each of fifteen different spectral bands has its own linear array. These arrays are pushed across the scene together, but since each band's array is at a different position on the focal plane, a separate image is produced for each band. The standard MTI data products (LEVEL1B_R_COREG and LEVEL1B_R_GEO) resample these separate images to a common grid and produce coregistered multispectral image cubes. The coregistration software employs a direct "dead reckoning" approach. Every pixel in the calibrated image is mapped to an absolute position on the surface of the earth, and these are resampled to produce an undistorted coregistered image of the scene. Doing this requires extensive information regarding the satellite position and pointing as a function of time, the precise configuration of the focal plane, and the distortion due to the optics. These must be combined with knowledge about the position and altitude of the target on the rotating ellipsoidal earth. We will discuss the direct approach to MTI coregistration, as well as more recent attempts to improve the precision of the band-to-band registration using correlations in the imagery itself.
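As a sketch of the geometric core of this dead-reckoning step, the code below intersects a sensor line of sight with the WGS-84 ellipsoid. This is only the innermost calculation; the actual pipeline must also fold in terrain elevation, earth rotation, focal-plane geometry, and optical distortion, so the function name and simplifications here are illustrative assumptions, not the MTI flight code.

```python
import numpy as np

WGS84_A = 6378137.0        # semi-major axis [m]
WGS84_B = 6356752.314245   # semi-minor axis [m]

def los_ellipsoid_intersection(sat_ecef, look_ecef):
    """Intersect a line of sight with the WGS-84 ellipsoid.

    sat_ecef : satellite position in ECEF coordinates [m]
    look_ecef: unit look vector in ECEF coordinates
    Returns the ECEF coordinates of the near intersection point,
    or None if the ray misses the ellipsoid.
    """
    # Scale coordinates so the ellipsoid becomes the unit sphere.
    scale = np.array([1.0 / WGS84_A, 1.0 / WGS84_A, 1.0 / WGS84_B])
    p = sat_ecef * scale
    d = look_ecef * scale
    # Solve |p + t d|^2 = 1 for the smaller positive root t.
    a = d @ d
    b = 2.0 * (p @ d)
    c = p @ p - 1.0
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None            # line of sight misses the earth
    t = (-b - np.sqrt(disc)) / (2.0 * a)
    if t < 0.0:
        return None            # intersection is behind the sensor
    return sat_ecef + t * look_ecef
```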
Feature extraction from imagery is an important and long-standing problem in remote sensing. In this paper, we report on work using genetic programming to perform feature extraction simultaneously from multispectral and digital elevation model (DEM) data. We use the GENetic Imagery Exploitation (GENIE) software for this purpose, which produces image-processing software that inherently combines spatial and spectral processing. GENIE is particularly useful in exploratory studies of imagery, such as those that arise when combining data from multiple sources. The user trains the software by painting the feature of interest with a simple graphical user interface. GENIE then uses genetic programming techniques to produce an image-processing pipeline. Here, we demonstrate evolution of image processing algorithms that extract a range of land cover features including towns, wildfire burnscars, and forest. We use imagery from the DOE/NNSA Multispectral Thermal Imager (MTI) spacecraft, fused with USGS 1:24000 scale DEM data.
Los Alamos National Laboratory has developed and demonstrated a highly capable system, GENIE, for the two-class problem of detecting a single feature against a background of non-feature. In addition to the two-class case, however, a commonly encountered remote sensing task is the segmentation of multispectral image data into a larger number of distinct feature classes or land cover types. To this end we have extended our existing system to allow the simultaneous classification of multiple features/classes from multispectral data. The technique builds on previous work and its core continues to utilize a hybrid evolutionary-algorithm-based system capable of searching for image processing pipelines optimized for specific image feature extraction tasks. We describe the improvements made to the GENIE software to allow multiple-feature classification and describe the application of this system to the automatic simultaneous classification of multiple features from MTI image data. We show the application of the multiple-feature classification technique to the problem of classifying lava flows on Mauna Loa volcano, Hawaii, using MTI image data and compare the classification results with standard supervised multiple-feature classification techniques.
The Cerro Grande/Los Alamos forest fire devastated over 43,000 acres (17,500 ha) of forested land, and destroyed over 200 structures in the town of Los Alamos and the adjoining Los Alamos National Laboratory. The need to measure the continuing impact of the fire on the local environment has led to the application of a number of remote sensing technologies. During and after the fire, remote-sensing data was acquired from a variety of aircraft- and satellite-based sensors, including Landsat 7 Enhanced Thematic Mapper (ETM+). We now report on the application of a machine learning technique to the automated classification of land cover using multi-spectral and multi-temporal imagery. We apply a hybrid genetic programming/supervised classification technique to evolve automatic feature extraction algorithms. We use a software package we have developed at Los Alamos National Laboratory, called GENIE, to carry out this evolution. We use multispectral imagery from the Landsat 7 ETM+ instrument from before, during, and after the wildfire. Using an existing land cover classification based on a 1992 Landsat 5 TM scene for our training data, we evolve algorithms that distinguish a range of land cover categories, and an algorithm to mask out clouds and cloud shadows. We report preliminary results of combining individual classification results using a K-means clustering approach. The details of our evolved classification are compared to the manually produced land-cover classification.
Unsupervised clustering is a powerful technique for processing multispectral and hyperspectral images. Last year, we reported on an implementation of k-means clustering for multispectral images. Our implementation in reconfigurable hardware processed 10-channel multispectral images two orders of magnitude faster than a software implementation of the same algorithm. The advantage of using reconfigurable hardware to accelerate k-means clustering is clear; the disadvantage is that the hardware implementation worked only for one specific dataset. It is a non-trivial task to change this implementation to handle a dataset with a different number of spectral channels, bits per spectral channel, or number of pixels, or to change the number of clusters. Such changes require knowledge of the hardware design process and could take several days of a designer's time. Since multispectral data sets come in many shapes and sizes, being able to easily change the k-means implementation for these different data sets is important. For this reason, we have developed a parameterized implementation of the k-means algorithm. Our design is parameterized by the number of pixels in an image, the number of channels per pixel, and the number of bits per channel, as well as the number of clusters. These parameters can easily be changed in a few minutes by someone not familiar with the design process. The resulting implementation is very close in performance to the original hardware implementation, and it has the added advantage that the parameterized design compiles approximately three times faster than the original.
Both for offline searches through large data archives and for onboard computation at the sensor head, there is a growing need for ever-more rapid processing of remote sensing data. For many algorithms of use in remote sensing, the bulk of the processing takes place in an "inner loop" with a large number of simple operations. For these algorithms, dramatic speedups can often be obtained with specialized hardware. The difficulty and expense of digital design continues to limit applicability of this approach, but the development of new design tools is making this approach more feasible, and some notable successes have been reported. On the other hand, it is often the case that processing can also be accelerated by adopting a more sophisticated algorithm design. Unfortunately, a more sophisticated algorithm is much harder to implement in hardware, so these approaches are often at odds with each other. With careful planning, however, it is sometimes possible to combine software and hardware design in such a way that each complements the other, and the final implementation achieves speedup that would not have been possible with a hardware-only or a software-only solution. We will in particular discuss the co-design of software and hardware to achieve substantial speedup of algorithms for multispectral image segmentation and for endmember identification.
The mission of the Multispectral Thermal Imager (MTI) satellite is to demonstrate the efficacy of highly accurate multispectral imaging for passive characterization of urban and industrial areas, as well as sites of environmental interest. The satellite makes top-of-atmosphere radiance measurements that are subsequently processed into estimates of surface properties such as vegetation health, temperatures, material composition and others. The MTI satellite also provides simultaneous data for atmospheric characterization at high spatial resolution. To utilize these data the MTI science program has several coordinated components, including modeling, comprehensive ground-truth measurements, image acquisition planning, data processing and data interpretation and analysis. Algorithms have been developed to retrieve a multitude of physical quantities and these algorithms are integrated in a processing pipeline architecture that emphasizes automation, flexibility and programmability. In addition, the MTI science team has produced detailed site, system and atmospheric models to aid in system design and data analysis. This paper provides an overview of the MTI research objectives, data products and ground data processing.
Between May 6 and May 18, 2000, the Cerro Grande/Los Alamos wildfire burned approximately 43,000 acres (17,500 ha) and 235 residences in the town of Los Alamos, NM. Initial estimates of forest damage included 17,000 acres (6,900 ha) of 70-100% tree mortality. Restoration efforts following the fire were complicated by the large scale of the fire, and by the presence of extensive natural and man-made hazards. These conditions forced a reliance on remote sensing techniques for mapping and classifying the burn region. During and after the fire, remote-sensing data was acquired from a variety of aircraft-based and satellite-based sensors, including Landsat 7. We now report on the application of a machine learning technique, implemented in a software package called GENIE, to the classification of forest fire burn severity using Landsat 7 ETM+ multispectral imagery. The details of this automatic classification are compared to the manually produced burn classification, which was derived from field observations and manual interpretation of high-resolution aerial color/infrared photography.
We describe the implementation and performance of a parallel, hybrid evolutionary-algorithm-based system, which optimizes image processing tools for feature-finding tasks in multi-spectral imagery (MSI) data sets. Our system uses an integrated spatio-spectral approach and is capable of combining suitably-registered data from different sensors. We investigate the speed-up obtained by parallelization of the evolutionary process via multiple processors (a workstation cluster) and develop a model for predicting run-times for different numbers of processors. We demonstrate our system on Landsat Thematic Mapper MSI data covering the recent Cerro Grande fire at Los Alamos, NM, USA.
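The paper's run-time model is not reproduced here, but a plausible first-order form for a master-worker evolutionary algorithm, in which only fitness evaluation is distributed across P processors, is the following (the symbols and the linear decomposition are assumptions for illustration):

```latex
% T_s  : serial fraction (selection, crossover, mutation, bookkeeping)
% T_f  : total single-processor fitness-evaluation time
% c(P) : per-generation communication overhead on P processors
T(P) \approx T_s + \frac{T_f}{P} + c(P),
\qquad
S(P) = \frac{T(1)}{T(P)} = \frac{T_s + T_f}{\,T_s + T_f/P + c(P)\,}
```

Under such a model the speed-up S(P) saturates once T_f/P falls below the serial and communication terms, which is the qualitative behavior one expects from a workstation-cluster implementation.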
The “pixel purity index” (PPI) algorithm proposed by Boardman et al. [1] identifies potential endmember pixels in multispectral imagery. The algorithm generates a large number of “skewers” (unit vectors in random directions), and then computes the dot product of each skewer with each pixel. The PPI is incremented for those pixels associated with the extreme values of the dot products. A small number of pixels (a subset of those with the largest PPI values) are selected as “pure,” and the rest of the pixels in the image are expressed as linear mixtures of these pure endmembers. This provides a convenient and physically-motivated decomposition of the image in terms of relatively few components. We report on a variant of the PPI algorithm in which blocks of B skewers are considered at a time. From the computation of B dot products, one can produce a much larger set of “derived” dot products that are associated with skewers that are linear combinations of the original B skewers. Since the derived dot products involve only scalar operations, instead of full vector dot products, they can be very cheaply computed. We will also discuss a hardware implementation on a field programmable gate array (FPGA) processor both of the original PPI algorithm and of the block-skewer approach. We will furthermore discuss the use of fast PPI as a front-end to more sophisticated algorithms for selecting the actual endmembers.
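A minimal sketch of the block-skewer idea follows, assuming random Gaussian skewers and NumPy conventions; the parameter values are illustrative, not the paper's. The key point is that once the B expensive full-vector dot products are computed, each derived projection costs only B scalar multiply-adds rather than one multiply-add per spectral band.

```python
import numpy as np

def ppi_block_skewers(pixels, n_blocks=100, block_size=8, n_derived=64, seed=0):
    """Toy sketch of the block-skewer PPI variant.

    pixels: (n_pixels, n_bands) array of spectra.
    For each block of `block_size` random skewers, compute the full
    vector dot products once; then generate `n_derived` additional
    projections as cheap scalar combinations of those dot products
    (a derived skewer is a linear combination of the block's skewers,
    so its dot products are the same combination of the scalars).
    """
    rng = np.random.default_rng(seed)
    n_pixels, n_bands = pixels.shape
    ppi = np.zeros(n_pixels, dtype=int)
    for _ in range(n_blocks):
        skewers = rng.standard_normal((block_size, n_bands))
        skewers /= np.linalg.norm(skewers, axis=1, keepdims=True)
        dots = pixels @ skewers.T            # expensive: n_bands ops per entry
        coeffs = rng.standard_normal((n_derived, block_size))
        derived = dots @ coeffs.T            # cheap: block_size ops per entry
        proj = np.hstack([dots, derived])
        # Credit the extreme pixel for every projection direction;
        # np.add.at accumulates correctly when a pixel repeats.
        np.add.at(ppi, np.argmax(proj, axis=0), 1)
        np.add.at(ppi, np.argmin(proj, axis=0), 1)
    return ppi
```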
Compute performance and algorithm design are key problems of image processing and scientific computing in general. For example, imaging spectrometers are capable of producing data in hundreds of spectral bands with millions of pixels. These data sets show great promise for remote sensing applications, but require new and computationally intensive processing. The goal of the Deployable Adaptive Processing Systems (DAPS) project at Los Alamos National Laboratory is to develop advanced processing hardware and algorithms for high-bandwidth sensor applications. The project has produced electronics for processing multi- and hyper-spectral sensor data, as well as LIDAR data, while employing processing elements using a variety of technologies. The project team is currently working on reconfigurable computing technology and advanced feature extraction techniques, with an emphasis on their application to image and RF signal processing. This paper presents reconfigurable computing technology and advanced feature extraction algorithm work and their application to multi- and hyperspectral image processing. Related projects on genetic algorithms as applied to image processing will be introduced, as will the collaboration between the DAPS project and the DARPA Adaptive Computing Systems program. Further details are presented in other talks during this conference and in other conferences taking place during this symposium.
KEYWORDS: Image segmentation, Hyperspectral imaging, Field programmable gate arrays, Distance measurement, Data centers, Image compression, Data modeling, Image processing algorithms and systems, Optical spheres, Monte Carlo methods
Modern hyperspectral imagers can produce data cubes with hundreds of spectral channels and millions of pixels. One way to cope with this massive volume is to organize the data so that pixels with similar spectral content are clustered together in the same category. This provides both a compression of the data and a segmentation of the image that can be useful for other image processing tasks downstream. The classic approach for segmentation of multidimensional data is the k-means algorithm; this is an iterative method that produces successively better segmentations. It is a simple algorithm, but the computational expense can be considerable, particularly for clustering large hyperspectral images into many categories. The ASAPP (Accelerating Segmentation And Pixel Purity) project aims to relieve this processing bottleneck by putting the k-means algorithm into field-programmable gate array (FPGA) hardware. The standard software implementation of k-means uses floating-point arithmetic and Euclidean distances. By fixing the precision of the computation and by employing alternative distance metrics (we consider the “Manhattan” and the “Max” metrics as well as a linear combination of the two), we can fit more distance-computation nodes on the chip, obtain a higher degree of fine-grain parallelism, and therefore faster performance, but at the price of slightly less optimal clusters. We investigate the effects of different distance metrics from both a theoretical (using random simulated data) and an empirical viewpoint (using 224-channel AVIRIS images and 10-channel multispectral images that are derived from the AVIRIS data to simulate MTI data).
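The alternative metrics are easy to state in code. The sketch below is an illustration (with an arbitrary blend weight `alpha`, not a calibrated choice from the paper) of why the Manhattan and Max metrics are hardware-friendly: both need only absolute differences, adders, and comparators, with no multipliers.

```python
import numpy as np

def distances(pixels, centers, metric="euclidean", alpha=0.5):
    """Distance from each pixel to each cluster center.

    pixels: (n_pixels, n_channels); centers: (k, n_channels).
    """
    diff = np.abs(pixels[:, None, :] - centers[None, :, :])
    if metric == "euclidean":
        return np.sqrt((diff ** 2).sum(axis=2))
    if metric == "manhattan":   # L1: adders only, no multipliers
        return diff.sum(axis=2)
    if metric == "max":         # L-infinity: comparators only
        return diff.max(axis=2)
    if metric == "blend":       # linear combination of the two
        return alpha * diff.sum(axis=2) + (1 - alpha) * diff.max(axis=2)
    raise ValueError(metric)

def kmeans_step(pixels, centers, metric="manhattan"):
    """One assignment/update iteration of k-means under a chosen metric."""
    labels = distances(pixels, centers, metric).argmin(axis=1)
    new_centers = np.array([pixels[labels == k].mean(axis=0)
                            if np.any(labels == k) else centers[k]
                            for k in range(len(centers))])
    return labels, new_centers
```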
We consider the problem of pixel-by-pixel classification of a multispectral image using supervised learning. Conventional supervised classification techniques, such as maximum likelihood classification, and less conventional ones, such as neural networks, typically base such classifications solely on the spectral components of each pixel. It is easy to see why: the color of a pixel provides a nice, bounded, fixed-dimensional space in which these classifiers work well. It is often the case, however, that spectral information alone is not sufficient to correctly classify a pixel. Perhaps spatial neighborhood information is required as well, or perhaps the raw spectral components do not themselves make for easy classification, but some arithmetic combination of them would. In either of these cases we have the problem of selecting suitable spatial, spectral, or spatio-spectral features that allow the classifier to do its job well. The number of all possible such features is extremely large; how can we select a suitable subset? We have developed GENIE, a hybrid learning system that combines a genetic algorithm, which searches a space of image processing operations for a set that can produce suitable feature planes, with a more conventional classifier, which uses those feature planes to output a final classification. In this paper we show that the use of a hybrid GA provides significant advantages over using either a GA alone or more conventional classification methods alone. We present results using high-resolution IKONOS data, looking for regions of burned forest and for roads.
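A much-simplified caricature of this hybrid architecture is sketched below: a genome is a short chain of image operators producing feature planes, and a conventional linear back end turns those planes into a classification whose accuracy serves as the fitness. The operator set, the genome encoding, and the least-squares back end here are illustrative assumptions, not GENIE's actual configuration.

```python
import numpy as np
from scipy.ndimage import uniform_filter, grey_erosion, grey_dilation

# Illustrative primitive operators: some spatial, some arithmetic.
PRIMITIVES = {
    "smooth": lambda b: uniform_filter(b, size=3),
    "erode":  lambda b: grey_erosion(b, size=(3, 3)),
    "dilate": lambda b: grey_dilation(b, size=(3, 3)),
    "square": lambda b: b * b,
    "ident":  lambda b: b,
}

def feature_planes(genome, cube):
    """genome: list of (band_index, op_name) genes; cube: (bands, H, W)."""
    return np.stack([PRIMITIVES[op](cube[b]) for b, op in genome])

def fitness(genome, cube, truth):
    """Fitness = negative error rate of a linear classifier (ordinary
    least squares) trained on the genome's feature planes."""
    planes = feature_planes(genome, cube)
    X = planes.reshape(len(genome), -1).T            # pixels x features
    X = np.hstack([X, np.ones((X.shape[0], 1))])     # bias term
    y = truth.ravel().astype(float)                  # painted training mask
    w, *_ = np.linalg.lstsq(X, y, rcond=None)        # conventional back end
    return -np.mean(((X @ w) > 0.5) != (y > 0.5))

# A GA would rank genomes by this fitness and apply crossover and
# mutation to the (band, op) genes to search the feature space.
```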
We investigate the effect of truncating the precision of hyperspectral image data for the purpose of more efficiently segmenting the image using a variant of k-means clustering. We describe the implementation of the algorithm on field-programmable gate array (FPGA) hardware. Truncating the data to only a few bits per pixel in each spectral channel permits a more compact hardware design, enabling greater parallelism and ultimately more rapid execution. It also enables the storage of larger images in the onboard memory. In exchange for faster clustering, however, one trades off the quality of the produced segmentation. We find that the clustering algorithm can nevertheless tolerate considerable data truncation with little degradation in cluster quality. This robustness to truncated data can be extended by computing the cluster centers to a few more bits of precision than the data. Since there are so many more pixels than centers, the more aggressive data truncation leads to significant gains in the number of pixels that can be stored in memory and processed in hardware concurrently.
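A software sketch of the truncation experiment follows, assuming integer spectra and substituting SciPy's generic k-means (scipy.cluster.vq.kmeans2, available in recent SciPy versions) for the FPGA implementation; bit depths and cluster counts are illustrative.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def truncate(data, keep_bits, total_bits=12):
    """Keep only the `keep_bits` most significant bits of integer
    spectral data quantized to `total_bits` bits."""
    shift = total_bits - keep_bits
    return (data >> shift) << shift      # zero out the low-order bits

def truncation_experiment(cube, k=8, keep_bits=4, total_bits=12, seed=0):
    """cube: (n_pixels, n_channels) integer spectra. Cluster at full
    precision and at truncated precision, and return both label maps
    for comparison. Label indices are arbitrary, so a fair comparison
    should match clusters first (e.g. with the Hungarian algorithm)."""
    full = cube.astype(float)
    trunc = truncate(cube, keep_bits, total_bits).astype(float)
    _, labels_full = kmeans2(full, k, minit="++", seed=seed)
    _, labels_trunc = kmeans2(trunc, k, minit="++", seed=seed)
    return labels_full, labels_trunc
```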
The Pixel Purity Index (PPI) is an algorithm employed in remote sensing for analyzing hyperspectral images. Particularly for low-resolution imagery, a single pixel usually covers several different materials, and its observed spectrum is (to a good approximation) a linear combination of a few pure spectral shapes. The PPI algorithm tries to identify these pure spectra by assigning a pixel purity index to each pixel in the image; the spectra for those pixels with a high index value are candidates for basis elements in the image decomposition. The PPI algorithm is extremely time consuming but is a good candidate for parallel hardware implementation due to its high volume of independent dot-product calculations. This article presents two parallel architectures we have developed and implemented on the Wildforce board. The first one is based on bit-serial arithmetic operators and the second deals with standard operators. Speed-up factors of up to 80 have been measured for these hand-coded architectures. In addition, the second version has been synthesized with the Streams-C compiler. The compiler translates a high level algorithm expressed in a parallel C extension into synthesizable VHDL. This comparison provides an interesting way of estimating the tradeoff between a traditional approach which tailors the design to get optimal performance and a fully automatic approach which aims to generate a correct design in minimal time.
We describe the implementation and performance of a genetic algorithm (GA) which evolves and combines image processing tools for multispectral imagery (MSI) datasets. Existing algorithms for particular features can also be “re-tuned” and combined with the newly evolved image processing tools to rapidly produce customized feature extraction tools. First results from our software system were presented previously. We now report on work extending our system to look for a range of broad-area features in MSI datasets. These features demand an integrated spatio-spectral approach, which our system is designed to use. We describe our chromosomal representation of candidate image processing algorithms, and discuss our set of image operators. Our application has been geospatial feature extraction using publicly available MSI and hyperspectral imagery (HSI). We demonstrate our system on NASA/Jet Propulsion Laboratory’s Airborne Visible and Infrared Imaging Spectrometer (AVIRIS) HSI which has been processed to simulate MSI data from the Department of Energy’s Multispectral Thermal Imager (MTI) instrument. We exhibit some of our evolved algorithms, and discuss their operation and performance.
High-quality, multispectral thermal infrared sensors can, under certain conditions, be used to measure more than one surface temperature in a single pixel. Surface temperature retrieval in general is a difficult task, because even for a single unknown surface, the problem is under-determined. For the example of an N-band sensor, a pixel with two materials at two temperatures will, in principle, have 2(N+1) unknowns (N emissivities and one temperature for each of two materials). In addition, the upwelling path and reflected downwelling radiances must be considered. Split-window (two or more bands) and multi-look (two or more images of the same scene) techniques provide additional information that can be used to reduce the uncertainties in temperature retrieval. Further reduction in the uncertainties is made if the emissivities are known, either a priori (e.g., for water) or by ancillary measurements. Ultimately, if the number of unknowns is reduced sufficiently, the performance of the sensor will determine the achievable temperature sensitivity. This paper will explore the temperature sensitivity for a pixel with two temperatures that can be obtained under various assumptions of sensor performance, atmospheric conditions, number of bands, number of looks, surface emissivity knowledge, and surface composition. Results on synthetic data sets will be presented.
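For concreteness, the unknown-counting argument in the abstract can be written out as follows (ignoring the atmospheric path terms, which add further unknowns):

```latex
% A pixel containing two materials at two temperatures, viewed by an
% N-band sensor: each material contributes N emissivities and one
% temperature, as stated in the text.
\underbrace{2N}_{\text{emissivities}} \;+\; \underbrace{2}_{\text{temperatures}}
\;=\; 2(N+1)\ \text{unknowns},
\qquad \text{vs. } N \text{ measured radiances per look.}
```

Each additional look of the same scene adds N more measurements without adding unknowns, which is why multi-look techniques (and a priori emissivity knowledge) can close the gap.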
It is not uncommon for remote sensing systems to produce data in excess of 100 Mbytes/sec. Los Alamos National Laboratory designed a reconfigurable computer to tackle the signal and image processing challenges of high-bandwidth sensors. Reconfigurable computing, based on field programmable gate arrays, offers ten to one hundred times the performance of traditional microprocessors for certain algorithms. This paper discusses the architecture of the computer and the source of its performance gains, as well as an example application. The calculation of multiple matched filters applied to multispectral imagery, showing a forty-five-fold performance advantage over a 450 MHz Pentium II, is presented as an exemplar of algorithms appropriate for this technology.
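As an illustration of the kind of computation involved, the following is a standard form of the spectral matched filter (one plausible reading of "multiple matched filters"; the exact normalization used in the reported benchmark is not specified here):

```python
import numpy as np

def matched_filter_scores(cube, target, mu=None, cov=None):
    """Classical spectral matched filter.

    cube:   (n_pixels, n_bands) multispectral data
    target: (n_bands,) target signature
    Returns a per-pixel score normalized to unit response at the target.
    """
    X = cube.astype(float)
    mu = X.mean(axis=0) if mu is None else mu
    cov = np.cov(X - mu, rowvar=False) if cov is None else cov
    w = np.linalg.solve(cov, target - mu)
    w /= (target - mu) @ w               # unit response to the target
    return (X - mu) @ w

# Multiple filters are just multiple weight vectors applied to the same
# pixel stream: a regular, multiply-accumulate-heavy inner loop of the
# sort that maps well onto FPGA hardware.
```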
We describe the implementation and performance of a genetic algorithm which generates image feature extraction algorithms for remote sensing applications. We describe our basis set of primitive image operators and present our chromosomal representation of a complete algorithm. Our initial application has been geospatial feature extraction using publicly available multi-spectral aerial-photography data sets. We present the preliminary results of our analysis of the efficiency of the classic genetic operations of crossover and mutation for our application, and discuss our choice of evolutionary control parameters. We exhibit some of our evolved algorithms, and discuss possible avenues for future progress.
The Multispectral Thermal Imager (MTI) has a number of core science retrievals which will be described. We will concentrate on describing the major Level-2 algorithms which cover land, water and atmospheric products. The land products comprise atmospherically corrected surface reflectances, vegetation health status, material identification, land temperature and emissivities. The water related products are: water mask, water quality and water temperature. The atmospheric products are: cloud mask, cirrus mask and atmospheric water vapor. We will present several of these algorithms and present results from simulated MTI data derived from AVIRIS and MODIS Airborne Simulator (MAS). An interactive analysis tool has been created to visually program and test certain Level-2 retrievals.
KEYWORDS: Calibration, Databases, Data processing, Image processing, Data modeling, Data acquisition, Algorithm development, Vegetation, Atmospheric modeling, Data storage
The major science goal for the Multispectral Thermal Imager (MTI) project is to measure surface properties such as vegetation health, temperatures, material composition and others for characterization of industrial facilities and environmental applications. To support this goal, this program has several coordinated components, including modeling, comprehensive ground-truth measurements, image acquisition planning, data processing and data interpretation. Algorithms have been developed to retrieve a multitude of physical quantities and these algorithms are integrated in a processing pipeline architecture that emphasizes automation, flexibility and robust operation. In addition, the MTI science team has produced detailed site, system and atmospheric models to aid in system design and data analysis. This paper will provide an introduction to the data processing and science algorithms for the MTI project. Detailed discussions of the retrieval techniques will follow in papers from the balance of this session.
The retrieval of scene properties (surface temperature, material type, vegetation health, etc.) from remotely sensed data is the ultimate goal of many earth observing satellites. The algorithms that have been developed for these retrievals are informed by physical models of how the raw data were generated. This includes models of radiation as emitted and/or reflected by the scene, propagated through the atmosphere, collected by the optics, detected by the sensor, and digitized by the electronics. To some extent, the retrieval is the inverse of this 'forward' modeling problem. But in contrast to this forward modeling, the practical task of making inferences about the original scene usually requires some ad hoc assumptions, good physical intuition, and a healthy dose of trial and error. The standard MTI data processing pipeline will employ algorithms developed with this traditional approach. But we will discuss some preliminary research on the use of a genetic programming scheme to 'evolve' retrieval algorithms. Such a scheme cannot compete with the physical intuition of a remote sensing scientist, but it may be able to automate some of the trial and error. In this scenario, a training set is used, which consists of multispectral image data and the associated 'ground truth': that is, a registered map of the desired retrieval quantity. The genetic programming scheme attempts to combine a core set of image processing primitives to produce an IDL (Interactive Data Language) program which estimates this retrieval quantity from the raw data.
Deriving information about the Earth's surface requires atmospheric corrections of the measured top-of-the-atmosphere radiances. One possible path is to use atmospheric radiative transfer codes to predict how the radiance leaving the ground is affected by scattering and attenuation. In practice the atmosphere is usually not well known, and thus it is necessary to use more practical methods. We will describe how to find dark surfaces, estimate the atmospheric optical depth, estimate path radiance, and identify thick clouds using thresholds on reflectance, NDVI, and columnar water vapor. We also describe a simple method to correct a visible channel contaminated by thin cirrus clouds.
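A toy version of the threshold tests described above might look like the following; the specific threshold values are illustrative assumptions, not the operational ones.

```python
import numpy as np

def ndvi(red, nir):
    """Normalized difference vegetation index, guarded against
    division by zero."""
    return (nir - red) / np.maximum(nir + red, 1e-6)

def thick_cloud_mask(refl_red, refl_nir, refl_thresh=0.3, ndvi_thresh=0.2):
    """Flag bright pixels with low NDVI as thick cloud."""
    bright = refl_red > refl_thresh
    flat = np.abs(ndvi(refl_red, refl_nir)) < ndvi_thresh
    return bright & flat

def dark_surface_mask(refl_red, refl_nir, dark_thresh=0.04, veg_thresh=0.6):
    """Candidate dark targets for optical-depth estimation: very low
    red reflectance over dense vegetation (high NDVI)."""
    return (refl_red < dark_thresh) & (ndvi(refl_red, refl_nir) > veg_thresh)
```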
With the advent of multispectral thermal imagers such as EOS's ASTER, high spatial resolution thermal imagery of the Earth's surface will soon be a reality. Previous high resolution sensors such as Landsat 5 had only one spectral channel in the thermal infrared, and its accuracy in determining absolute sea surface temperatures was limited to 6-8 K for water warmer than 25 deg C. This inaccuracy resulted from insufficient knowledge of the atmospheric temperature and water vapor, inaccurate sensor calibration, and the cooling effects of thin high cirrus clouds. We will present two studies of algorithms and compare their performance. The first algorithm we call 'robust', since it retrieves sea surface temperatures accurately over a fairly wide range of atmospheric conditions using linear combinations of nadir and off-nadir brightness temperatures. The second we call 'physics-based', because it relies on physics-based models of the atmosphere; it attempts to find a unique sea surface temperature that fits one set of atmospheric parameters.
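The 'robust' family of retrievals has the generic form below, where the coefficients are fit by regression over an ensemble of simulated atmospheres; the symbols are schematic, since the actual coefficients and band set depend on the sensor and training ensemble and are not reproduced here.

```latex
% T_i : brightness temperature measured in band/view i
%       (nadir and off-nadir looks)
% a_i : regression coefficients fit over simulated atmospheres
T_\mathrm{sst} \;\approx\; a_0 + \sum_{i} a_i \, T_i
```

The fixed linear form is what makes the retrieval 'robust': it requires no per-scene atmospheric characterization, at the cost of some accuracy relative to a physics-based fit.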