Identification of vegetation species and type is important in many chemical, biological, radiological, nuclear, and explosive sensing applications. For instance, the emergence of non-climax species in an area may be indicative of anthropogenic activity and can complement prompt signatures for underground nuclear explosion detection and localization. To explore signatures of underground nuclear explosions, we collected high spatial resolution (10 cm) hyperspectral data from an unmanned aerial system at a legacy underground nuclear explosion test site and its surroundings. These data consist of 274 visible and near-infrared wavebands over 4.3 km² of high desert terrain, along with high spatial resolution (2.5 cm) RGB context imagery. Previous work has shown that a vegetation spectral derivative can be more indicative of species than the measured value of each band. However, applying a spectral derivative amplifies any noise in the spectrum and reduces the benefit of the derivative analysis. Fitting the spectra with a polynomial can provide the slope information (derivative) without amplifying noise. In this work, we simultaneously capture slope and curvature information and reduce the dimensionality of remotely sensed hyperspectral imaging data by fitting a second-order polynomial across spectral bands of interest. We then compare the classification accuracy of a support vector machine classifier trained on the polynomial-fit features with that of the same classifier trained on an equal number of components from principal component analysis.
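Below is a minimal sketch of the polynomial dimensionality-reduction idea: a support vector machine trained on second-order polynomial-fit coefficients is compared with one trained on the same number of principal components. The array shapes, the randomly generated spectra and labels, and the scikit-learn workflow are illustrative assumptions, not the authors' implementation.

```python
# Sketch: second-order polynomial fit as dimensionality reduction vs. PCA, then SVM.
# Data here are random placeholders; shapes and parameters are assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def poly_features(spectra, order=2):
    """Fit an `order`-degree polynomial to each pixel spectrum; return the coefficients."""
    n_pixels, n_bands = spectra.shape
    x = np.linspace(-1.0, 1.0, n_bands)            # normalized band index
    coeffs = np.polyfit(x, spectra.T, deg=order)   # (order+1, n_pixels): curvature, slope, offset
    return coeffs.T                                # (n_pixels, order+1)

rng = np.random.default_rng(0)
spectra = rng.random((500, 274))                   # 274 VNIR bands, as in the collection
labels = rng.integers(0, 4, size=500)              # hypothetical terrain/species labels

X_poly = poly_features(spectra, order=2)           # 3 features per pixel
X_pca = PCA(n_components=3).fit_transform(spectra) # same number of PCA components

for name, X in [("poly", X_poly), ("pca", X_pca)]:
    Xtr, Xte, ytr, yte = train_test_split(X, labels, test_size=0.3, random_state=0)
    acc = SVC(kernel="rbf").fit(Xtr, ytr).score(Xte, yte)
    print(f"{name}: accuracy = {acc:.3f}")
```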
There are several factors that should be considered for robust terrain classification. We address the issue of high pixel-wise variability within terrain classes that arises in remote sensing modalities when the spatial resolution is finer than one meter. Our proposed method segments an image into superpixels, classifies the pixels within each superpixel using the probabilistic feature fusion (PFF) classifier, and then makes a superpixel-level terrain classification decision by majority vote of the pixels within the superpixel. We show that this method leads to improved terrain classification decisions, and we demonstrate it on optical, hyperspectral, and polarimetric synthetic aperture radar data.
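A minimal sketch of the superpixel majority-vote step follows. A generic pre-trained per-pixel classifier stands in for the PFF classifier, and the use of SLIC for superpixel segmentation, along with the variable names and parameters, is an illustrative assumption.

```python
# Sketch: pixel-level classification followed by a superpixel-level majority vote.
import numpy as np
from skimage.segmentation import slic

def superpixel_majority_vote(image, pixel_classifier, n_segments=500):
    """Classify each pixel, then assign each superpixel the majority pixel label."""
    h, w, _ = image.shape
    segments = slic(image, n_segments=n_segments, start_label=0)  # superpixel map (h, w)
    # pixel_classifier is any callable mapping (n_pixels, n_features) -> integer labels
    pixel_labels = pixel_classifier(image.reshape(h * w, -1)).reshape(h, w)

    terrain_map = np.empty_like(pixel_labels)
    for seg_id in np.unique(segments):
        mask = segments == seg_id
        votes = np.bincount(pixel_labels[mask])
        terrain_map[mask] = votes.argmax()         # majority vote within the superpixel
    return terrain_map
```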
A fully polarimetric X-band (9.6 GHz center frequency) VideoSAR system with 0.125 m ground resolution collected data before, during, and after the fifth Source Physics Experiment (SPE-5) underground chemical explosion. We generate and exploit synthetic aperture radar (SAR) and VideoSAR products to characterize surface effects caused by the underground explosion. To our knowledge, this has not been done before. The exploited VideoSAR products are “movies” of coherence maps, phase-difference maps, and magnitude imagery; these movies show two-dimensional, time-varying surface movement. However, objects located on the SPE pad created unwanted, vibrating signatures during the event, which made registration and coherent processing more difficult. Nevertheless, there is evidence that dynamic changes are captured by VideoSAR during the event. VideoSAR provides a unique, coherent, time-varying measure of the surface expression of an underground chemical explosion.
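Coherence maps of the kind referred to above are commonly computed as a windowed, normalized complex correlation between co-registered complex frames. A minimal sketch of that computation is shown below; the 5 x 5 estimation window and the use of scipy's uniform_filter are assumptions, not details from the paper.

```python
# Sketch: windowed interferometric coherence between two co-registered complex SAR frames.
import numpy as np
from scipy.ndimage import uniform_filter

def coherence_map(frame_a, frame_b, window=5):
    """Sample coherence; values near 1 indicate unchanged scene, near 0 indicate change."""
    cross = frame_a * np.conj(frame_b)
    num = uniform_filter(cross.real, window) + 1j * uniform_filter(cross.imag, window)
    den = np.sqrt(uniform_filter(np.abs(frame_a) ** 2, window)
                  * uniform_filter(np.abs(frame_b) ** 2, window))
    return np.abs(num) / np.maximum(den, 1e-12)
```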
We have employed the Arecibo Observatory Planetary Radar (AO) transmitter and the Mini-RF radar onboard NASA's
Lunar Reconnaissance Orbiter (LRO) as a receiver to collect bistatic data of the lunar surface. In this paper, we
demonstrate the ability to form bistatic polarimetric imagery with spatial resolution on the order of 50 m, and to create
polarimetric maps that could potentially reveal the presence of ice in lunar permanently shadowed craters. We discuss
the details of the signal processing techniques that are required to allow these products to be formed.
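One common way such polarimetric maps are built from hybrid-polarimetric data, as collected by Mini-RF, is via the Stokes parameters and the circular polarization ratio (CPR). The sketch below illustrates that computation under assumed conventions; the multilook window and the sign convention relating S4 to the transmit handedness are assumptions, not details from the paper.

```python
# Sketch: multilooked Stokes parameters and CPR from received H/V complex channels.
import numpy as np
from scipy.ndimage import uniform_filter

def stokes_and_cpr(e_h, e_v, looks=4):
    """Stokes vector and circular polarization ratio from hybrid-pol complex data."""
    avg = lambda x: uniform_filter(x, looks)               # simple boxcar multilook
    s1 = avg(np.abs(e_h) ** 2 + np.abs(e_v) ** 2)
    s2 = avg(np.abs(e_h) ** 2 - np.abs(e_v) ** 2)
    cross = e_h * np.conj(e_v)
    s3 = avg(2.0 * np.real(cross))
    s4 = avg(-2.0 * np.imag(cross))                        # sign depends on transmit handedness
    cpr = (s1 - s4) / np.maximum(s1 + s4, 1e-12)           # CPR > 1 can indicate ice or blocky terrain
    return (s1, s2, s3, s4), cpr
```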
Beamforming is a methodology for collection-mode-independent SAR image formation. It is essentially equivalent to
backprojection. The authors have in previous papers developed this idea and discussed the advantages and disadvantages
of the approach to monostatic SAR image formation vis-à-vis the more standard and time-tested polar formatting
algorithm (PFA). In this paper we show that beamforming for bistatic SAR imaging leads again to a very simple image
formation algorithm that requires a minimal number of lines of code and that allows the image to be directly formed onto
a three-dimensional surface model, thus automatically creating an orthorectified image. The same disadvantage of
beamforming applied to monostatic SAR imaging applies to the bistatic case, however, in that the execution time for the
beamforming algorithm is quite long compared to that of PFA. Fast versions of beamforming do exist to help alleviate
this issue. Results of image reconstructions from phase history data are presented.
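For illustration, a minimal sketch of delay-and-sum (backprojection) image formation for the bistatic geometry is given below. The pulse-data layout, linear interpolation, and simple carrier-phase correction are simplifying assumptions rather than the authors' implementation.

```python
# Sketch: bistatic delay-and-sum backprojection onto points of a 3-D surface model.
import numpy as np

C = 299792458.0  # speed of light, m/s

def bistatic_backprojection(pulses, fast_time, tx_pos, rx_pos, surface_points, fc):
    """pulses: (n_pulses, n_samples) range-compressed complex data;
    fast_time: (n_samples,) seconds; tx_pos, rx_pos: (n_pulses, 3) meters;
    surface_points: (n_points, 3) meters; fc: carrier frequency, Hz."""
    image = np.zeros(len(surface_points), dtype=complex)
    for p in range(pulses.shape[0]):
        # bistatic delay: transmitter -> surface point -> receiver
        tau = (np.linalg.norm(surface_points - tx_pos[p], axis=1)
               + np.linalg.norm(surface_points - rx_pos[p], axis=1)) / C
        # interpolate the pulse at each delay (real and imaginary parts separately)
        sample = (np.interp(tau, fast_time, pulses[p].real)
                  + 1j * np.interp(tau, fast_time, pulses[p].imag))
        image += sample * np.exp(2j * np.pi * fc * tau)    # remove demodulation phase
    return image
```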
In this paper we describe an algorithm for fast spotlight-mode synthetic aperture radar (SAR) image formation that
employs backprojection as the core, but is implemented such that its compute time is comparable to the often-used Polar
Format Algorithm (PFA). (Standard backprojection is so much slower than PFA that it is impractical to use in many
operational scenarios.) We demonstrate the feasibility of the algorithm on real SAR phase history data sets and show
some advantages in the SAR image formed by this technique.
In this paper we show that the technique for spotlight-mode SAR image formation generally known as "backprojection"
or "time-domain" is most easily derived and described in terms of the well-known methods of phased-array
beamforming. By contrast, backprojection has been typically developed via analogy to tomographic imaging, which
restricts this technique to the case of planar wavefronts. We demonstrate how the very simple notion of delay-and-sum
beamforming leads directly to the backprojection algorithm for SAR, including the case for curved wavefronts. We
further explain why backprojection offers a certain elegant simplicity for SAR imaging, and allows direct one-step
computation of several useful SAR products, including an orthographically correct image free of any geometric or
defocus effects from wavefront curvature and also free of the effects of terrain-elevation-induced defocus. (This product
requires as an input a pre-existing digital elevation map (DEM) of the scene to be imaged.) In addition, we
demonstrate why beamforming yields a mode-independent SAR image formation algorithm, i.e., one that can just as
easily accommodate strip-map or spotlight-mode phase histories collected on an arbitrary flight path.
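A minimal sketch of the "one-step" orthorectified product described above, i.e. delay-and-sum backprojection evaluated directly on a DEM grid for the monostatic two-way case, is shown below. The grid layout and interpolation scheme are assumptions.

```python
# Sketch: monostatic delay-and-sum backprojection evaluated on a DEM grid,
# so the output is already in map (orthorectified) coordinates.
import numpy as np

C = 299792458.0

def backproject_onto_dem(pulses, fast_time, antenna_pos, east, north, dem, fc):
    """pulses: (n_pulses, n_samples); antenna_pos: (n_pulses, 3);
    east/north/dem: (ny, nx) map-coordinate grids and terrain heights."""
    points = np.column_stack([east.ravel(), north.ravel(), dem.ravel()])
    image = np.zeros(points.shape[0], dtype=complex)
    for p in range(pulses.shape[0]):
        tau = 2.0 * np.linalg.norm(points - antenna_pos[p], axis=1) / C   # two-way delay
        sample = (np.interp(tau, fast_time, pulses[p].real)
                  + 1j * np.interp(tau, fast_time, pulses[p].imag))
        image += sample * np.exp(2j * np.pi * fc * tau)
    return image.reshape(dem.shape)
```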
This paper compares three algorithms for potential use in a real-time, on-board implementation of spotlight-mode SAR image formation. These include: the polar formatting algorithm (PFA), the range migration algorithm (RMA), and the overlapped subapertures algorithm (OSA). We conclude that for any reasonable spotlight-mode imaging scenario, PFA is easily the algorithm of choice because its computational efficiency is significantly higher than that of either RMA or OSA. This comparison specifically includes cases in which wavefront curvature is sufficient to cause image defocus in conventional PFA, because a post-processing refocus step can be performed with PFA to yield excellent image quality for only a minimal increase in computation time. We demonstrate that real-time image formation for many imaging scenarios is achievable using PFA implemented on a single Pentium M processor. OSA is quite slow compared to PFA, especially for the case of moderate to high resolution (9 inches and better). RMA is not competitive with PFA for situations that do not require wavefront curvature correction.
For those cases in which PFA requires post-processing to correct for wavefront curvature, RMA comes closer in efficiency to PFA, but is still outperformed by the modified PFA.
Coherent stereo pairs from cross-track synthetic aperture radar (SAR) collects allow fully automated correlation matching using magnitude and phase data. Yet automated feature matching (correspondence) becomes more difficult when imaging rugged terrain with large stereo crossing angle geometries, because high-relief features can undergo significant spatial distortions. These distortions sometimes cause traditional, shift-only correlation matching to fail. This paper presents a possible solution to this difficulty: changing the complex correlation maximization search from shift-only to shift-and-scaling using the downhill simplex method results in higher correlation. This is shown on eight coherent spotlight-mode cross-track stereo pairs, with stereo crossing angles averaging 93.7°, collected over terrain with slopes greater than 20°. The resulting digital elevation maps (DEMs) are compared to ground truth. Using the shift-scaling correlation approach to calculate disparity, height errors decrease and the number of reliable DEM posts increases.
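A minimal sketch of the shift-and-scaling correlation search follows, using scipy's Nelder-Mead implementation of the downhill simplex method. The chip-warping model (shift plus a single scaling applied to one image axis), the bilinear resampling via map_coordinates, and all parameter names are illustrative assumptions.

```python
# Sketch: maximize complex correlation over (row shift, col shift, scale) with Nelder-Mead.
import numpy as np
from scipy.optimize import minimize
from scipy.ndimage import map_coordinates

def complex_correlation(a, b):
    """Magnitude of the normalized complex correlation between two chips."""
    num = np.abs(np.sum(a * np.conj(b)))
    den = np.sqrt(np.sum(np.abs(a) ** 2) * np.sum(np.abs(b) ** 2))
    return num / max(den, 1e-12)

def match_chip(reference, search, x0=(0.0, 0.0, 1.0)):
    """Find (row shift, col shift, scale) maximizing correlation with `reference`."""
    rows, cols = np.mgrid[0:reference.shape[0], 0:reference.shape[1]].astype(float)

    def negative_corr(params):
        dr, dc, scale = params
        coords = [rows + dr, cols * scale + dc]            # shift plus scaling along columns
        warped = (map_coordinates(search.real, coords, order=1)
                  + 1j * map_coordinates(search.imag, coords, order=1))
        return -complex_correlation(reference, warped)

    result = minimize(negative_corr, x0, method="Nelder-Mead")
    return result.x, -result.fun                           # best parameters, best correlation
```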
Automating the detection and identification of significant threats using multispectral (MS) imagery is a critical issue in remote sensing. Unlike previous multispectral target recognition approaches, we utilize a three-stage process that takes into account not only the spectral content but also the spatial information. The first stage applies a matched filter to the calibrated MS data. Here, the matched filter is tuned to the spectral components of a given target and produces an image intensity map of where the best matches occur. The second stage is a novel detection algorithm, known as the focus of attention (FOA) stage, which performs an initial screening of the data based on intensity and size checks on the matched filter output. Next, using the target's pure components, the third stage performs constrained linear unmixing on MS pixels within the FOA-detected regions. Knowledge sources derived from this process are combined using a sequential probability ratio test (SPRT), which can fuse contaminated, uncertain, and disparate information from multiple sources. We demonstrate our approach by identifying a specific target in actual data collected under ideal conditions, and we use approximately 35 square kilometers of urban clutter as false-alarm data.
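A minimal sketch of the first-stage spectral matched filter is given below; the background mean and covariance estimate and the normalization follow a common textbook form and are not necessarily the authors' exact formulation.

```python
# Sketch: spectral matched filter producing a per-pixel target-match intensity map.
import numpy as np

def spectral_matched_filter(cube, target):
    """cube: (n_pixels, n_bands) calibrated MS data; target: (n_bands,) target spectrum."""
    mu = cube.mean(axis=0)                         # background mean spectrum
    cov = np.cov(cube, rowvar=False)               # background covariance
    cov_inv = np.linalg.pinv(cov)
    d = target - mu
    w = cov_inv @ d / (d @ cov_inv @ d)            # matched-filter weights, normalized
    return (cube - mu) @ w                         # ~1 at pure target pixels, ~0 on background
```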
Useful products generated from interferometric synthetic aperture radar (IFSAR) complex data include height measurement, coherent change detection, and classification. The IFSAR coherence, a product of IFSAR signal processing, is a spatial measure of the complex correlation between two collects. A tacit assumption in such IFSAR signal processing is that the terrain height is constant across the averaging box used to correlate the two images. This paper presents simulations of IFSAR coherence when two targets with different heights exist in a given correlation cell, a condition produced in IFSAR collections by layover. It also includes airborne IFSAR data confirming the simulation results. The paper concludes by exploring the implications of the results for IFSAR height measurements and classification.
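The layover effect on coherence can be illustrated with a simple model: two point targets at different heights inside one correlation cell add with an interferometric phase difference set by their height separation. The sketch below uses an assumed height of ambiguity and an equal-power case purely for illustration.

```python
# Sketch: coherence of a single correlation cell containing two targets at different heights.
import numpy as np

def layover_coherence(delta_height, height_of_ambiguity, power_ratio=1.0):
    """Coherence magnitude of one cell with two targets separated in height."""
    phase_diff = 2.0 * np.pi * delta_height / height_of_ambiguity
    p1, p2 = 1.0, power_ratio
    # complex sum of the two returns, normalized by total power
    return np.abs(p1 + p2 * np.exp(1j * phase_diff)) / (p1 + p2)

# Two equal-power targets separated by half the height of ambiguity drive coherence to zero:
print(layover_coherence(delta_height=10.0, height_of_ambiguity=20.0))  # -> 0.0
```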
This paper describes a new method to determine the lower limit of patient exposure: By
placing several imaging plates of a computed radiography system (CR) into the same cassette,
several images of the same patient can be obtained at different exposure levels (determined
by the x-ray transmission of the various imaging plates). Initial experiments indicate that
an exposure reduction of between 50% and 75% might be acceptable. CR provides a powerful tool
to study the subject of exposure reduction.
Computed radiography uses a photostimulable phosphor-coated imaging plate that is
exposed to x-rays and then read out by a laser to form an image. After laser reading, a
considerable amount of energy remains on the imaging plate unused. This study
simulated a change in the laser readout process to utilize more of the energy on the image
plate, and potentially improve image quality without changing exposure factors. Images of a
contrast detail phantom were made before and after alteration of the readout process and
analyzed by both physical and psychophysical means. It was found that there is an increase
in the signal-to-noise ratio when measured with an aperture the size of a single pixel
(linear dimension about 0.15 mm). However, there is no change in the signal-to-noise ratio
when measured with apertures of 0.75 mm (5 x 5 pixels) and 1.5 mm (10 x 10
pixels). This agrees with the results of the contrast-detail study: after the alteration, the
observers did not detect smaller objects than they had before. It appears that the
imaging plate readout process is already fairly well optimized.
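A minimal sketch of the aperture-dependent SNR measurement is given below; the definition of the signal and background regions and the use of a uniform square averaging aperture are illustrative assumptions, not the study's exact procedure.

```python
# Sketch: signal-to-noise ratio of a detail after averaging over square apertures.
import numpy as np
from scipy.ndimage import uniform_filter

def aperture_snr(image, signal_mask, background_mask, aperture_pixels):
    """SNR of a detail against background after averaging over a square aperture."""
    smoothed = uniform_filter(image.astype(float), size=aperture_pixels)
    signal = smoothed[signal_mask].mean() - smoothed[background_mask].mean()
    noise = smoothed[background_mask].std()
    return signal / noise

# e.g., compare apertures of 1 pixel (~0.15 mm), 5 pixels (~0.75 mm), and 10 pixels (~1.5 mm):
# for size in (1, 5, 10): print(size, aperture_snr(cr_image, detail_mask, bg_mask, size))
```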