This PDF file contains the front matter associated with SPIE Proceedings Volume 8051, including the Title Page, Copyright information, Table of Contents, and the Conference Committee listing.
The 2-D Fractional Fourier Transform (FRFT) has been shown to be applicable to the Synthetic Aperture
Radar (SAR) imaging problem. Streamlined versions presented here make the 2-D FRFT comparable with and slightly faster than the Range Doppler (RD) and Extended Chirp Scaling (ECS) methods. The 2-D FRFT is streamlined by eliminating redundancy that arises because the same fractional angle is applied to every pulse in the SAR phase history's range dimension, while a single other fractional angle is applied across every range gate in the phase history's azimuth dimension. Eliminating the redundancy and approximating the 2-D Fractional Fourier
Transform operation in each dimension produces several streamlined 2-D FRFT methods as well as a very fast
approximate 2-D FRFT. The computational order of the fast approximate 2-D FRFT is less than that of other
corrective SAR imaging techniques. Examples of SAR imaging with these streamlined and approximate FRFTs
are given as well as a comparison of the computational speed and impulse response of the full, streamlined and
approximate 2-D FRFT, and the RD and ECS methods of SAR imaging.
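As a rough illustration of the separable structure described above, the sketch below builds one discrete fractional Fourier kernel per dimension by naive direct sampling of the continuous FRFT kernel and reuses it for every pulse and every range gate; the function names, the sampling grid, and the toy example are assumptions for illustration, not the paper's streamlined or approximate implementations.

    import numpy as np

    def frft_kernel(N, alpha):
        """Naive N x N fractional Fourier kernel for angle alpha (alpha not a multiple of pi).
        Direct sampling of the continuous kernel on the grid t_n = (n - N/2)*sqrt(2*pi/N);
        O(N^2) to apply, but it only has to be built once per dimension."""
        d = np.sqrt(2.0 * np.pi / N)
        t = (np.arange(N) - N / 2.0) * d
        u = t[:, None]                                  # output grid (same sampling)
        cot, csc = 1.0 / np.tan(alpha), 1.0 / np.sin(alpha)
        A = np.sqrt((1.0 - 1j * cot) / (2.0 * np.pi))
        K = A * np.exp(1j * (0.5 * (u**2 + t[None, :]**2) * cot - u * t[None, :] * csc))
        return K * d                                    # include the integration step

    def frft2_separable(phase_history, alpha_rg, alpha_az):
        """Apply one fractional angle along range (rows) and another along azimuth (columns).
        The same precomputed kernel is reused for every pulse / range gate, which is the
        redundancy-elimination idea described in the abstract."""
        n_az, n_rg = phase_history.shape
        K_rg = frft_kernel(n_rg, alpha_rg)              # built once, reused for all pulses
        K_az = frft_kernel(n_az, alpha_az)              # built once, reused for all range gates
        return K_az @ phase_history @ K_rg.T

    # toy example: 64 pulses x 64 range samples of random data
    ph = np.random.randn(64, 64) + 1j * np.random.randn(64, 64)
    img = frft2_separable(ph, alpha_rg=0.9 * np.pi / 2, alpha_az=0.8 * np.pi / 2)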
This paper explores the effect of squint angle on the phase errors introduced by the linear phase assumption in
the polar format algorithm for SAR imaging. The maximum scene radius for an allowable phase error is derived
as a function of squint angle and other parameters. Simulated phase histories for a variety of squint angles are
generated and imaged to demonstrate the bound and the effects encountered when it is exceeded.
The polar format algorithm (PFA) is computationally faster than back projection for producing spotlight mode synthetic
aperture radar (SAR). This is very important in applications such as video SAR for persistent surveillance, as images
may need to be produced in real time. PFA's speed is largely due to making a planar wavefront assumption and forming
the image onto a regular grid of pixels lying in a plane. Unfortunately, both assumptions cause loss of focus in airborne
persistent surveillance applications. The planar wavefront assumption causes a loss of focus in the scene for pixels that
are far from scene center. The planar grid of image pixels causes loss of the depth of focus for conic flight geometries.
In this paper, we present a method to compensate for the loss of depth of focus while warping the image onto a terrain
map to produce orthorectified imagery. This technique applies a spatially variant post-filter and resampling to correct
the defocus while dewarping the image. This work builds on spatially variant post-filtering techniques previously
developed at Sandia National Laboratories in that it incorporates corrections for terrain height and circular flight paths.
This approach produces high quality SAR images many times faster than back projection.
It is not currently known whether it is possible to accurately form a synthetic aperture radar image from N data points in provable near-linear complexity, where accuracy is defined as the ℓ2 error between the full O(N²) backprojection image and the approximate image. To bridge this gap, we present a backprojection algorithm
with complexity O(log(1/ε)N log N), with ε the tunable pixelwise accuracy. It is based on the butterfly scheme,
which works for vastly more general oscillatory integrals than the discrete Fourier transform. Unlike previous
methods this algorithm allows the user to directly choose the amount of acceptable image error based on a
well-defined metric. Additionally, the algorithm does not invoke the far-field approximation or place restrictions
on the antenna flight path, nor does it impose the frequency-independent beampattern approximation required
by time-domain backprojection techniques.
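For reference, the O(N²) baseline this algorithm accelerates is conventional time-domain backprojection. The sketch below is a minimal version of that baseline, assuming range-compressed pulses and a toy geometry; the variable names and interface are illustrative, not taken from the paper.

    import numpy as np

    def backprojection(range_profiles, range_axis, antenna_pos, pixels, fc, c=3e8):
        """Conventional O(pulses x pixels) time-domain backprojection.

        range_profiles : (n_pulses, n_bins) complex range-compressed data
        range_axis     : (n_bins,) one-way range of each bin [m]
        antenna_pos    : (n_pulses, 3) antenna phase-center positions [m]
        pixels         : (n_pix, 3) image grid coordinates [m]
        fc             : center frequency [Hz]
        """
        image = np.zeros(pixels.shape[0], dtype=complex)
        for rp, pos in zip(range_profiles, antenna_pos):
            r = np.linalg.norm(pixels - pos, axis=1)       # pixel-to-antenna range
            # interpolate the complex range profile at each pixel's range
            sample = np.interp(r, range_axis, rp.real) + 1j * np.interp(r, range_axis, rp.imag)
            image += sample * np.exp(1j * 4.0 * np.pi * fc / c * r)   # re-apply carrier phase
        return image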
This paper considers a time domain ultrasonic tomographic imaging method in a multi-static configuration using
the propagation and backpropagation (PBP) method. Under this imaging configuration, ultrasonic excitation
signals from the sources probe the object embedded in the surrounding medium. The scattering signals are
recorded by the receivers. Starting from the nonlinear ultrasonic wave propagation equation and using the
recorded time domain signals from all the receiver sensors, the object is to be reconstructed. The conventional
PBP method is a modified version of the Kaczmarz method that iteratively updates the estimates of the object
acoustical potential distribution within the image area. Each source in turn excites the acoustical field until all the sources have been used. The proposed multi-static image reconstruction method utilizes a significantly
reduced number of sources that are simultaneously excited. We consider two imaging scenarios with regard to
source positions. In the first scenario, sources are uniformly positioned on the perimeter of the imaging area.
In the second scenario, sources are randomly positioned. By numerical experiments we demonstrate that the
proposed multi-static tomographic imaging method using the multiple source excitation schemes results in fast
reconstruction and achieves high resolution imaging quality.
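Since the conventional PBP method is described as a modified Kaczmarz method, a minimal sketch of the plain Kaczmarz iteration on a generic linear system Ax = b may help fix ideas; the operators here are generic placeholders, not the paper's nonlinear wave-propagation model.

    import numpy as np

    def kaczmarz(A, b, n_sweeps=50):
        """Plain Kaczmarz iteration for A x = b: cycle through the rows and project the
        current estimate onto each row's hyperplane."""
        x = np.zeros(A.shape[1], dtype=complex)
        for _ in range(n_sweeps):
            for a_i, b_i in zip(A, b):
                x = x + (b_i - a_i @ x) / (np.linalg.norm(a_i) ** 2) * a_i.conj()
        return x

In the PBP setting each "row" corresponds to one source excitation, which is why simultaneously exciting a reduced set of sources, as proposed in the paper, lowers the per-sweep cost.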
We present a technique for aperture weighting for use in video synthetic aperture radar (SAR). In video SAR the
aperture required to achieve the desired cross-range resolution typically exceeds the frame period. As a result, there
can be a significant overlap in the collected phase history used to form consecutive images in the video. Video SAR
algorithms seek to exploit this overlap to avoid unnecessary duplication of processing. When no aperture weighting or
windowing is used, one can simply form oversampled SAR images from the non-overlapping sub-apertures using
coherent back projection (or other similar techniques). The resulting sub-aperture images may be coherently summed to
produce a full resolution image. A simple approach to windowing for sidelobe control is to weight the sub-apertures
during summation of the images. Our approach involves producing two or more weighted images for each sub-aperture
which can be linearly combined to approximate any desired aperture weighting. In this method we achieve nearly the
same sidelobe control as weighting the phase history data and forming a new image for each frame without losing the
computation savings of the sub-aperture image combining approach.
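A minimal sketch of the simple baseline described above, coherently summing sub-aperture images with one scalar weight per sub-aperture as a piecewise-constant approximation of a full-aperture window, is shown below; the paper's refinement of forming two or more weighted images per sub-aperture is not reproduced, and the names and splitting scheme are assumptions.

    import numpy as np

    def combine_subapertures(sub_images, window):
        """Coherent sum of sub-aperture images with a piecewise-constant window.

        sub_images : (n_sub, ny, nx) complex sub-aperture images (unweighted)
        window     : (n_pulses,) desired full-aperture weighting, e.g. np.hamming(n_pulses)
        Each sub-aperture gets the mean of the window over the pulses it spans.
        """
        n_sub = sub_images.shape[0]
        chunks = np.array_split(window, n_sub)             # window samples per sub-aperture
        weights = np.array([c.mean() for c in chunks])
        return np.tensordot(weights, sub_images, axes=1)   # weighted coherent sum

    # example: approximate a Hamming window over 8 sub-apertures
    # full_image = combine_subapertures(sub_images, np.hamming(n_pulses))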
In this paper, an inversion scheme for near-field inverse synthetic aperture radar (ISAR) data is derived for both
two and three dimensions from a scalar wave equation model. The proposed data inversion scheme motivates the
use of a filtered back projection (FBP) imaging algorithm. The paper provides a derivation of the general
imaging filter needed for FBP, which will be shown to reduce to a familiar result for near-field ISAR imaging.
We utilize eight circular passes of measured airborne X-Band radar data to form a novel reflectivity surface
estimate. Our previous work demonstrated reflectivity surface estimation from narrow aperture multi-pass data
and this work extends those results to wide apertures. Narrow aperture surface estimates are co-registered in the
three spatial dimensions and combined non-coherently to form a wide-aperture data product. For the purpose
of visually conveying the scene reflectivity content, a surface is formed from the wide angle data product. The subject of this work is not the optimality of the methods nor the global convexity of the cost functions.
Instead, these results give us one of the first glimpses at measured wide angle three dimensional SAR image
products and provide a qualitative benchmark against which to measure future wide angle three dimensional
synthetic aperture radar autofocus and imaging algorithms.
We consider a monostatic synthetic aperture radar system traversing an arbitrary trajectory on a non-flat
topography. We present a novel edge detection method applicable directly to the received SAR signal. Our method first filters the received data and then backprojects. The filter is designed to detect the edges of the scene in different directions at each reconstructed pixel. The method is computationally efficient and may be implemented with the computational complexity of fast backprojection algorithms. We present numerical experiments to
demonstrate the performance of our method.
This paper addresses the question of scattering center detection and estimation performance in synthetic aperture
radar. Specifically, we consider sparse 3D radar apertures, in which the radar collects both azimuth and elevation
diverse data of a scene, but collects only a sparse subset of the traditional filled aperture. We use a sparse
reconstruction algorithm to both detect and estimate scattering center locations and amplitudes in the scene.
We quantify both the detection and estimation performance for scattering centers over a high dynamic range of
magnitudes. Over this wide range of scattering center signal-to-noise values, detection performance is compared
to GLRT detection performance, and estimation performance is compared to the Cramer-Rao lower bound.
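The paper does not specify which sparse reconstruction algorithm is used; as a generic illustration of joint detection and amplitude estimation of a few scattering centers from sparse-aperture measurements y ≈ A x, a minimal orthogonal matching pursuit sketch is given below (dictionary A, measurement y, and the stopping rule are assumptions).

    import numpy as np

    def omp(A, y, n_scatterers):
        """Orthogonal matching pursuit: greedily pick dictionary columns (candidate
        scattering-center locations) and re-fit the amplitudes by least squares."""
        residual, support = y.copy(), []
        x = np.zeros(A.shape[1], dtype=complex)
        for _ in range(n_scatterers):
            corr = A.conj().T @ residual
            support.append(int(np.argmax(np.abs(corr))))
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ coef
        x[support] = coef
        return x      # nonzero entries give detected locations and estimated amplitudes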
For large-scale linear inverse problems, a direct matrix-vector multiplication may not be computationally feasible,
rendering many gradient-based iterative algorithms impractical. For applications where data collection can be
modeled by Fourier encoding, the resulting Gram matrix possesses a block Toeplitz structure. This special
structure can be exploited to replace matrix-vector multiplication with FFTs. In this paper, we identify some
of the important applications which can benefit from the block Toeplitz structure of the Gram matrix. Also,
for illustration, we have applied this idea to reconstruct 2D simulated images from undersampled non-Cartesian Fourier-encoded data using three popular optimization routines, namely FISTA, SpaRSA, and optimization
transfer.
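The key trick, replacing a Toeplitz matrix-vector product with FFTs via circulant embedding, can be sketched in a few lines for the 1-D case (the block Toeplitz case applies the same idea blockwise); the helper name and the toy check are illustrative only.

    import numpy as np
    from scipy.linalg import toeplitz

    def toeplitz_matvec(c, r, x):
        """Multiply the n x n Toeplitz matrix T (first column c, first row r, with
        c[0] == r[0]) by x using FFTs: O(n log n) instead of O(n^2)."""
        n = len(c)
        col = np.concatenate([c, r[:0:-1]])      # first column of the (2n-1)-point circulant
        y = np.fft.ifft(np.fft.fft(col) * np.fft.fft(np.concatenate([x, np.zeros(n - 1)])))
        return y[:n]

    # quick check against a dense Toeplitz multiply
    c = np.random.randn(6) + 1j * np.random.randn(6)
    r = np.concatenate([c[:1], np.random.randn(5) + 1j * np.random.randn(5)])
    x = np.random.randn(6) + 1j * np.random.randn(6)
    assert np.allclose(toeplitz(c, r) @ x, toeplitz_matvec(c, r, x))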
We consider a synthetic aperture radar system using ultra-narrowband continuous waveforms, which we refer to
as Doppler Synthetic Aperture Radar (DSAR). We present a novel image formation method for bi-static DSAR.
Our method first correlates the received signal with a scaled or frequency-shifted version of the transmitted signal
over a finite time window, and then uses microlocal analysis to reconstruct the scene by a filtered-backprojection
of the correlated signals. Our approach can be used under non-ideal imaging scenarios such as arbitrary flight
trajectories and non-flat topography. Furthermore, it is an analytic reconstruction technique which can be made
computationally efficient. We present numerical experiments to demonstrate the performance of the proposed
method.
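The first step described above, correlating the received signal with frequency-shifted copies of the transmitted waveform over a finite time window, can be sketched as a cross-ambiguity-style computation; the filtered-backprojection step is not reproduced, and the interface below is an assumption.

    import numpy as np

    def windowed_doppler_correlation(received, transmitted, fs, dopplers, window):
        """Correlate the received signal with frequency-shifted copies of the transmitted
        waveform over a finite time window.

        received, transmitted : complex baseband samples (same length as window)
        fs       : sample rate [Hz]
        dopplers : array of Doppler shifts to test [Hz]
        window   : taper defining the finite correlation window, e.g. np.hanning(len(received))
        Returns one correlation value per tested Doppler shift.
        """
        t = np.arange(len(received)) / fs
        out = np.empty(len(dopplers), dtype=complex)
        for k, fd in enumerate(dopplers):
            shifted = transmitted * np.exp(1j * 2.0 * np.pi * fd * t)
            out[k] = np.sum(window * received * shifted.conj())
        return out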
We propose a single-receive, multiple-transmit channel imaging radar system that limits received data rate while
also providing spatial processing for improved detection of moving targets. A multi-input, single-output (MISO)
system uses orthogonal waveforms to separate spatial channels at the single receiver. The use of orthogonal
waveforms necessitates several modifications to both synthetic aperture radar imaging and adaptive space-time beamforming. An orthogonal frequency-division transmit waveform scheme is proposed, and we derive the attendant extensions to the standard backprojection and space-time beamforming algorithms. We demonstrate
imaging and moving target detection results using data from an airborne X-band system. We conclude with a
discussion of the clutter covariance matrix of the resulting space-time beamformer and a suggested waveform
scheduling scheme to minimize the rank of the observed clutter subspace.
The paper presents results from a bistatic SAR experiment conducted using two airborne SAR systems operating in the
high VHF- and low UHF-band. The Swedish SAR system LORA operated together with the French SAR system SETHI
and collected data in different bistatic geometries in the frequency band 222-460 MHz and using HH-polarization. The
two SAR systems were synchronized using the 1 PPS GPS signal. Data were collected during four flight missions over the main test site with forested terrain and buildings as well as controlled target deployments. A fifth mission was included over a second test site with an extensive database of forest parameters but without target deployments. The bistatic radar data have been processed into SAR images and a first analysis has been completed. Results show significant suppression
of strong forest clutter and that the effect increases with bistatic elevation angle. The clutter reduction is observed in
areas with dominating double-bounce scattering. Data analysis shows that forest clutter can be suppressed by 10 dB for a
bistatic elevation angle of 10°.
Synthetic Aperture Radar (SAR) often suffers from interference signals from various radio sources. In general, notch filters or band elimination filters have been utilized to eliminate such interference signals; however, if the
bandwidth of the interference signal is relatively wide, the gap in the spectrum caused by the band elimination
filter could significantly degrade the original image. We propose an algorithm to suppress relatively wide
bandwidth interference while maintaining the image quality. In the algorithm, a spatially variant interference suppression filter is generated based on the signal in the interference-free band, and the filter is then applied to
the interference contaminated image. Unlike the band elimination filter, the spatially variant interference
suppression filter preserves the signal component within the interference contaminated band; therefore, the
image distortion caused by the spectrum gap can be largely eliminated. The algorithm has been tested with
a simulated interference-contaminated image generated from a real 10 cm resolution airborne Ku-band SAR image and a TerraSAR-X image. It has been shown that while a conventional band elimination filter would degrade the image quality, the image quality of the interference-suppressed image obtained by the proposed
algorithm is satisfactory.
We consider nonparametric adaptive spectral analysis of complex-valued data sequences with missing samples occurring
in arbitrary patterns. We first present two high-resolution missing-data spectral estimation algorithms:
the Iterative Adaptive Approach (IAA) and the Sparse Learning via Iterative Minimization (SLIM) method.
Both algorithms can significantly improve the spectral estimation performance, including enhanced resolution
and reduced sidelobe levels. Moreover, we consider fast implementations of these algorithms using the Conjugate
Gradient (CG) technique and the Gohberg-Semencul-type (GS) formula. Our proposed implementations fully
exploit the structure of the steering matrices and maximize the usage of the Fast Fourier Transform (FFT),
resulting in much lower computational complexities as well as much reduced memory requirements. The effectiveness
of the adaptive spectral estimation algorithms is demonstrated via several 2-D interrupted synthetic
aperture radar (SAR) imaging examples.
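For orientation, the single-snapshot IAA iteration that the CG and Gohberg-Semencul implementations accelerate can be sketched densely as below (no fast structure exploited; the steering matrix, iteration count, and diagonal loading are assumptions).

    import numpy as np

    def iaa(y, A, n_iter=15):
        """Iterative Adaptive Approach, single snapshot, dense reference implementation.

        y : (M,) observed (possibly gapped) data
        A : (M, K) steering/dictionary matrix, one column per frequency grid point
        Returns the K-point adaptive power spectral estimate.
        """
        norms = np.sum(np.abs(A) ** 2, axis=0)                 # a_k^H a_k
        p = np.abs(A.conj().T @ y) ** 2 / norms ** 2           # periodogram-like initialization
        for _ in range(n_iter):
            R = (A * p) @ A.conj().T                           # R = A diag(p) A^H
            R += 1e-9 * np.trace(R).real / R.shape[0] * np.eye(R.shape[0])  # small loading
            num = A.conj().T @ np.linalg.solve(R, y)           # a_k^H R^{-1} y
            den = np.sum(A.conj() * np.linalg.solve(R, A), axis=0)          # a_k^H R^{-1} a_k
            p = np.abs(num / den) ** 2
        return p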
Multichannel Autofocus (MCA) assumes that there exists a region of low return in the focused image and solves for the correction filter that minimizes the energy in the presumed low-return region. Provided that the low-return region is precisely known, the algorithm yields a superior restoration compared to other autofocus methods.
Fourier-domain MCA (FMCA) is a generalization of this algorithm that works for practical ranges of look angles.
However, both MCA and FMCA assume a planar wavefront, which makes them inapplicable to near-field imaging
scenarios where there is a significant amount of wavefront curvature.
We propose an autofocus algorithm that builds upon MCA, with a modification that takes into account
wavefront curvature. In this setting, the demodulated data can no longer be interpreted as 2-D Fourier samples
of the underlying image. Therefore, we make use of the linear relationship between the correction filter and the
reconstructed image via Convolution Backprojection (CBP) along curves.
Under the far-field assumption, our algorithm is equivalent to FMCA with a Jacobian-weighted 2-D periodic
sinc-kernel interpolator when the presumed low-return regions are the same. However, our algorithm has the distinct
advantage of being able to select the presumed low-return region within a continuous set of coordinates. We
present simulation results showing that our algorithm outperforms other algorithms for the case with wavefront
curvature.
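The core MCA-style step, finding the unit-norm correction filter that minimizes the energy mapped into the presumed low-return region, reduces to a small SVD problem; in the sketch below B is a placeholder for whatever linear map (here, the CBP relationship described in the paper) takes the candidate filter to the low-return pixels.

    import numpy as np

    def mca_correction(B):
        """Minimum-energy correction filter: the unit-norm vector h minimizing ||B h||^2,
        i.e. the right singular vector of B with the smallest singular value.  B maps the
        candidate correction filter to the pixels of the presumed low-return region."""
        _, _, Vh = np.linalg.svd(B, full_matrices=True)
        return Vh[-1].conj()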
A method for obtaining windowing functions for focused range-Doppler imaging of rotating objects is given. Focused range-Doppler imaging of rotating objects involves the evaluation of a non-standard transform rather than the discrete Fourier transform. As such, familiar windowing functions such as the Hamming window are inappropriate for sidelobe control.
Previous methods of evaluation involve resampling and interpolation to emulate the discrete Fourier transform. In this
paper, a correction factor applied to any standard window function is derived and results shown.
General purpose graphical processing units, or GPGPUs, have emerged in recent years as the workhorse behind many large-scale computing efforts. For example, the recently unveiled world's fastest supercomputer achieved this feat by utilizing low-cost, high-performance GPGPUs. Additionally, in the past year the synthetic aperture radar (SAR) community has started to utilize GPGPUs as well. The utilization of GPGPUs to date has been limited mainly to SAR image formation, and in this capacity tremendous performance improvements over the same CPU-based algorithms have been demonstrated. However, image formation is only one of many
necessary steps towards SAR image exploitation. Image registration, filtering, interpolation and
interferometric flattening are equally important steps in obtaining many of the desired output
products such as coherence change detection (CCD) products and terrain adjusted
interferograms. We will demonstrate that by transitioning the entire SAR image exploitation
processing chain from image formation through product generation onto a GPGPU, it is possible
to achieve more than an order of magnitude in performance improvements. In this paper we will
review results presented at last year's SPIE conference regarding SAR image formation and
present new results obtained for coherent exploitation of SAR data including CCD and
interferometric SAR processing. In addition to presenting these results, we will discuss challenges associated with migrating CPU-based exploitation algorithms to the GPGPU environment, as well as possible future improvements using these powerful new devices and associated software tools.
This paper describes several alternative techniques for detecting and localizing slowly-moving targets in cultural
clutter using synthetic aperture radar (SAR) data. Here, single-pass data is jointly processed from two or more
receive channels which are spatially offset in the along-track direction. We concentrate on two clutter cancelation
methods known as the displaced phase center antenna (DPCA) technique and along-track SAR interferometry
(AT-InSAR). Unlike the commonly-used space-time adaptive processing (STAP) techniques, both DPCA and
AT-InSAR tend to perform well in the presence of non-homogeneous urban or mountainous clutter. We show,
mathematically, the striking similarities between DPCA and AT-InSAR. Furthermore, we demonstrate using
experimental SAR data that these two techniques yield complementary information, which can be combined into
a "hybrid" technique that incorporates the advantages of each for significantly better performance. Results are
generated using the Gotcha challenge data, acquired using a three-channel X-band spotlight SAR system.
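In their simplest forms, the two clutter cancelation operations compared above act on a pair of co-registered along-track channels as sketched below; this is only the textbook version, not the paper's hybrid combination, and the thresholding step is omitted.

    import numpy as np

    def dpca(ch1, ch2):
        """Displaced phase center antenna: stationary clutter cancels in the difference of
        co-registered along-track channels; moving targets leave a residual."""
        return ch2 - ch1

    def at_insar_phase(ch1, ch2):
        """Along-track interferometric phase: stationary clutter has near-zero phase, while
        movers show a phase proportional to their radial velocity."""
        return np.angle(ch1 * ch2.conj())

    # movers can then be flagged, e.g., by thresholding |dpca(...)| or |at_insar_phase(...)|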
We consider moving target detection and velocity estimation for multi-channel synthetic aperture radar (SAR)
based ground moving target indication (GMTI). By forming velocity versus cross-range images, we show that small moving targets can be detected even in the presence of strong stationary ground clutter. Furthermore,
the velocities of the moving targets can be estimated, and the misplaced moving targets can be placed back
to their original locations based on the estimated velocities. An iterative adaptive approach (IAA), which is
robust and user parameter free, is used to form velocity versus cross-range images for each range bin of interest.
Moreover, we discuss calibration techniques to combat near-field coupling problems encountered in practical
systems. Furthermore, we present a sparse signal recovery approach for stationary clutter cancellation. We
conclude by demonstrating the effectiveness of our approaches by using the Air Force Research Laboratory
(AFRL) publicly-released Gotcha airborne SAR based GMTI data set.
This paper develops a hierarchical Bayes model for multiple-pass, multiple antenna synthetic aperture radar
(SAR) systems with the goal of adaptive change detection. The model is based on decomposing the observed data
into a low-rank component and a sparse component, similar to the Robust Principal Component Analysis approach previously developed by Ding, He, and Carin [1] for E/O systems. The developed model also accounts for SAR phenomenology,
including antenna and spatial dependencies, speckle and specular noise, and stationary clutter. Monte Carlo
methods are used to estimate the posterior distribution of the variables in the model. The performance of
the proposed method is analyzed using synthetic images, and it is shown that the performance is robust to a
large space of operating characteristics without extensive tuning of hyperparameters. Finally, the method is
applied to measured SAR data, providing competitive results compared to standard methods with the additional
benefits of uncertainty characterization through a posterior distribution, explicit estimates of both foreground
and background components, and flexibility in including other sources of information.
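For context, the deterministic low-rank-plus-sparse baseline that the Bayesian model generalizes can be sketched with the standard principal component pursuit iteration below; this is not the paper's hierarchical Bayes model or its Monte Carlo sampler, and it assumes real-valued input such as a stack of image magnitudes.

    import numpy as np

    def soft(X, tau):
        return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

    def rpca_pcp(Y, n_iter=200):
        """Principal component pursuit: min ||L||_* + lam*||S||_1  s.t.  L + S = Y,
        solved with the standard inexact augmented Lagrangian iteration."""
        m, n = Y.shape
        lam = 1.0 / np.sqrt(max(m, n))
        mu = 0.25 * m * n / np.abs(Y).sum()
        L, S, Z = np.zeros_like(Y), np.zeros_like(Y), np.zeros_like(Y)
        for _ in range(n_iter):
            U, sig, Vt = np.linalg.svd(Y - S + Z / mu, full_matrices=False)
            L = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt   # singular value thresholding
            S = soft(Y - L + Z / mu, lam / mu)               # sparse (change) component
            Z = Z + mu * (Y - L - S)                         # dual update
        return L, S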
This paper investigates the performance of single-channel SAR-GMTI systems in the focusing and detection
of translating ground targets moving in the presence of a clutter background. Specifically, focusing and detection
performance is investigated by applying the Moving Grid Processing (MGP) focusing technique to a scene
containing an accelerating target moving in the presence of both uniform and correlated K-distributed clutter
backgrounds. The increase in detection sensitivity resulting from the focusing operation is found to result from
two separable effects, target focusing and clutter defocusing. While the detection sensitivity gain due to target
focusing is common for both clutter types, the gain due to clutter defocusing is found to be significantly greater
for textured clutter than for uniform clutter, by approximately 5 to 6 dB in the simulated scenario under consideration.
This paper concludes with a discussion of the phenomenological causes for this difference and implications
of this finding for single channel SAR-GMTI systems operating in heterogeneous clutter environments.
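A simple way to generate the K-distributed clutter referred to above is the compound-Gaussian construction, gamma-distributed texture modulating complex Gaussian speckle; the sketch below is only an approximate illustration (smoothing the texture changes its marginal slightly), with parameter names and values chosen arbitrarily.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def k_clutter(shape, shape_param=1.0, corr_len=5, rng=np.random.default_rng(0)):
        """Approximately K-distributed clutter = sqrt(gamma texture) x complex Gaussian speckle.

        shape_param : gamma shape parameter (smaller -> spikier, more textured clutter)
        corr_len    : texture correlation length in pixels (simple spatial averaging)
        """
        texture = rng.gamma(shape_param, 1.0 / shape_param, size=shape)   # unit-mean gamma
        texture = uniform_filter(texture, size=corr_len)                  # spatially correlate
        speckle = (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)
        return np.sqrt(texture) * speckle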
This paper develops the theory for waveform-diverse moving-target synthetic-aperture radar. We assume that
the targets are moving linearly, but we allow an arbitrary flight path and (almost) arbitrary waveforms. We
consider the monostatic case, in which a single antenna phase center is used for both transmitting and receiving.
This work addresses the use of waveforms whose duration is sufficiently long that the targets and/or platform
move appreciably while the data is being collected.
We present a novel passive image formation method for moving targets using distributed apertures capable
of exploiting information about multiple-scattering in the environment. We assume that the environment is
illuminated by non-cooperative transmitters of opportunity with unknown location and unknown transmitted
waveforms. We develop a passive measurement model that relates the scattered field from moving targets at a
given receiver to the scattered field at other receivers. We formulate the passive imaging problem as a generalized
likelihood ratio test for a hypothetical target located at an unknown position, moving with an unknown velocity.
We design a linear discriminant functional by maximizing the Signal-to-Noise Ratio (SNR) of the test-statistic,
and use the resulting position- and velocity-resolved test-statistic to form the image. Our imaging method can
determine the two- or three-dimensional velocity vector as well as the two- or three-dimensional position vector
of a moving target without the knowledge of transmitter locations and transmitted waveforms. We present
numerical experiments to demonstrate the performance of our passive imaging method operating in multiple-scattering environments. The results show that the point spread function of the reconstructed images improves
when the information about multiple scattering is exploited.
Measurement times for synthetic aperture radar (SAR) image collection can take from the order of seconds to minutes
and consequently the technique is subject to imaging artefacts due to target motion. For example, imaged moving targets can be displaced and unfocussed, and similarly for vibrating targets. Current understanding of this phenomenon is somewhat esoteric; however, this paper puts forward and demonstrates a visual explanation, via the physics of modulated scatterers, of these artefacts in the Fourier domain of SAR images. This novel approach has led to an imagery analyst aid that associates a distinctive signature with modulated scatterer artefacts in SAR imagery, and to an associated filter.
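The Fourier-domain signature of a vibrating (phase-modulated) scatterer can be visualized in a few lines: sinusoidal phase modulation of the slow-time return produces paired sidebands at harmonics of the vibration frequency with Bessel-function amplitudes. The toy parameters below are assumptions for illustration, not values from the paper.

    import numpy as np

    prf, n_pulses = 1000.0, 4096       # slow-time sampling (illustrative values)
    f_vib, beta = 40.0, 0.8            # vibration frequency [Hz], phase-modulation depth [rad]
    t = np.arange(n_pulses) / prf

    # slow-time return of a single scatterer whose range oscillates sinusoidally
    s = np.exp(1j * beta * np.sin(2 * np.pi * f_vib * t))

    # the slow-time (azimuth) spectrum shows paired sidebands at +/- k*f_vib whose
    # amplitudes follow Bessel functions J_k(beta) -- the distinctive signature
    spectrum = np.fft.fftshift(np.abs(np.fft.fft(s * np.hanning(n_pulses))))
    freqs = np.fft.fftshift(np.fft.fftfreq(n_pulses, d=1.0 / prf))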
We present a low-complexity method for compression of raw Synthetic Aperture Radar (SAR) data. Raw SAR
data is typically acquired using a satellite or airborne platform without sufficient computational capabilities
to process the data and generate a SAR image on-board. Hence, the raw data needs to be compressed and
transmitted to the ground station, where SAR image formation can be carried out. To perform low-complexity
compression, our method uses 1-dimensional transforms, followed by quantization and entropy coding. In contrast
to previous approaches, which send uncompressed or Huffman-coded bits, we achieve more efficient entropy
coding using an arithmetic coder that responds to a continuously updated probability distribution. We present
experimental results on compression of raw Ku-SAR data. In these experiments we evaluate the effect of the length of
the transform on compression performance and demonstrate the advantages of the proposed framework over a
state-of-the-art low complexity scheme called Block Adaptive Quantization (BAQ).
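For reference, the BAQ baseline mentioned above can be sketched as per-block scaling followed by a coarse quantizer; this schematic uses a simplified uniform quantizer rather than the Lloyd-Max levels of operational BAQ, and it is not the paper's transform-plus-arithmetic-coding pipeline.

    import numpy as np

    def baq(raw, block_len=128, n_bits=3):
        """Schematic Block Adaptive Quantization of real-valued raw samples (apply to the
        I and Q channels separately): per-block scaling plus a uniform quantizer."""
        n_levels = 2 ** n_bits
        out = np.empty_like(raw, dtype=float)
        scales = []
        for start in range(0, len(raw), block_len):
            block = raw[start:start + block_len]
            sigma = block.std() + 1e-12                   # per-block scale, sent as side info
            q = np.clip(np.round(block / sigma + n_levels / 2), 0, n_levels - 1)
            out[start:start + block_len] = (q - n_levels / 2) * sigma   # dequantized samples
            scales.append(sigma)
        return out, scales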
Being able to recognize one object from another is vital research for our society because it can save lives, improve national security, and improve existing technologies such as object avoidance and tracking. In this research we classify Synthetic Aperture Radar (SAR) images of vehicles from one another regardless of whether the vehicle is rotated or occluded. The dataset used for this research is the Commercial Vehicle (CV) Data Domes obtained from Wright Patterson Air Force Base (WPAFB). To accomplish this task we used Local Feature Extraction (LFE) to extract the features, and then K-nearest neighbor (KNN) was used to classify the vehicles. Overall this method performed well: the algorithm was able to correctly identify the vehicles with 97.6% to 100% accuracy. Currently the algorithm cannot handle translation, so the next step of this research is to use the glint information to register the vehicles to a desired location and then apply our algorithm; we believe that registering the images would significantly improve the current results.
Phase fluctuation is one of the inherent characteristics of complex radar targets. The primary objective of this work is to
compare the phase fluctuation characteristics of ships on sea surface with that of the sea clutter based on complex high
resolution range profiles (HRRPs). The statistics of the HRRP phase gradient are studied using alpha-stable distribution.
Numerical simulation results show that the HRRP phase gradient of a ship on the sea surface behaves significantly differently from that of the sea clutter, suggesting that the statistics of the HRRP phase gradient provide useful information for ship
discrimination from sea clutter.
A method of Coherent Change Detection (CCD) performance prediction is described based on (1) a simple analysis of
the frequency support overlap between the two collects which encapsulates imaging geometry effects and (2) a method
of relating environmental effects to average scatterer disturbance which can be assessed empirically. The strength of this
approach is that, once the average disturbance for a particular environmental effect has been established from one
system, it can be extrapolated to all other systems since mean disturbance is system-independent. Validation and application of this approach using simulated examples are presented.
We present an image quality metric and prediction model for SAR imagery that addresses automated information
extraction and exploitation by imagery analysts. This effort draws on our team's direct experience with the development
of the Radar National Imagery Interpretability Ratings Scale (Radar NIIRS), the General Image Quality Equations
(GIQE) for other modalities, and extensive expertise in ATR characterization and performance modeling. In this study,
we produced two separate GIQEs: one to predict Radar NIIRS and one to predict Automated Target Detection (ATD)
performance. The Radar NIIRS GIQE is most significantly influenced by resolution, depression angle, and depression
angle squared. The inclusion of several image metrics was shown to improve performance. Our development of an ATD
GIQE showed that resolution and clutter characteristics (e.g., clear, forested, urban) are the dominant explanatory
variables. As was the case with the NIIRS GIQE, inclusion of image metrics again increased performance, but the
improvement was significantly more pronounced. Analysis also showed that a strong relationship exists between ATD
and Radar NIIRS, as indicated by a correlation coefficient of 0.69; however, this correlation is not strong enough that we
would recommend a single GIQE be used for both ATD and NIIRS prediction.
The target classification algorithm community is making a special effort to explicitly treat operating conditions
(OCs) in classifier assessments and performance modeling. This is necessary because humans do not intuitively
appreciate what makes classification difficult for computers; it just seems so easy to us. In analyzing OCs, some OCs are more direct or primitive, while others are more abstract or integrating. These more abstract or "Derived
OCs" provide an intermediate step between direct OCs and classifier performance. Similar to the target, sensor,
environment partition of OCs, the AFRL COMPASE Center introduces the "Mossing 3" partition of derived OCs
into "Clarity," "Uniqueness," and "Conformity." Clarity is primarily concerned with the relevant information
content available in the sensor data. Uniqueness is about the inherent separability between the types of objects
to be classified (i.e., the library) and between all those types and objects not known to the classifier. Conformity
is about the relationship between the OCs of the test instances and the OCs represented in the library types
or training data. Furthermore, by analyzing derived OCs from multiple perspectives, informative subpartitions
of the Mossing 3 are created. Clarity measures are well developed, particularly as image quality metrics. The
other partitions are less well developed, but relevant work exists and is brought into context. While derived OCs
and the Mossing 3 partition are not a complete solution to performance modeling, they help bring in powerful
existing technologies and should enrich and facilitate dialogue on classifier performance theory and modeling.
In this paper, we introduce a novel joint sparse representation based automatic target recognition (ATR) method
using multiple views, which not only handles multi-view ATR without knowledge of the pose but also has the advantage of exploiting the correlations among the multiple views for a single joint recognition decision. We cast
the problem as a multi-variate regression model and recover the sparse representations for the multiple views
simultaneously. The recognition is accomplished via classifying the target to the class which gives the minimum
total reconstruction error accumulated across all the views. Extensive experiments have been carried out on
Moving and Stationary Target Acquisition and Recognition (MSTAR) public database to evaluate the proposed
method compared with several state-of-the-art methods such as linear Support Vector Machine (SVM), kernel
SVM, as well as a sparse representation based classifier. Experimental results demonstrate the effectiveness and robustness of the proposed joint sparse representation ATR method.
Conventional target recognition approaches for SAR include template matching and feature-based classification.
However, unlike visual imagery, Synthetic Aperture Radar (SAR) presents a unique challenge in that many
attributes, such as scattering centers, are extremely pose dependent and wink in and out with even minor
viewing geometry changes. This work implements a highly efficient biologically-inspired 3D template-based
approach, the Map Seeking Circuit (MSC) algorithm, for target recognition in SAR. Instead of exhaustively
searching a high dimensional state space, the MSC algorithm efficiently searches a superposition hypersurface to
estimate target location and 3D pose. Results are shown from applying the algorithm to real SAR datasets.
Morphological operators are commonly used in image processing. We study their suitability for use in synthetic
aperture radar (SAR) image enhancement and target classification. Morphological operations are nonlinear
operators defined by set theory. The dilation and erosion operations grow or shrink image features that match a predefined structuring element. The opening and closing operations are combinations of successive dilation and
erosion. These morphological operations can visually emphasize scattering of interest in an image. We investigate
whether these operations can also improve target classification performance. The operators are nonlinear and
image dependent; thus we cannot predict performance without empirical testing. We test and evaluate the
morphological operators using simulated and measured SAR data. Results show the dilation operator is most
promising for increasing match score and separation between classes in the decision space.
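The four greyscale operations discussed can be applied to a SAR magnitude image in a few lines with scipy.ndimage; the structuring-element size below is an arbitrary illustrative choice, not the one used in the study.

    import numpy as np
    from scipy import ndimage

    def morphological_features(mag, size=3):
        """Greyscale dilation/erosion and their compositions (opening, closing) applied to a
        SAR magnitude image with a size x size structuring element."""
        return {
            "dilation": ndimage.grey_dilation(mag, size=(size, size)),  # grows bright scatterers
            "erosion": ndimage.grey_erosion(mag, size=(size, size)),    # shrinks bright scatterers
            "opening": ndimage.grey_opening(mag, size=(size, size)),    # erosion then dilation
            "closing": ndimage.grey_closing(mag, size=(size, size)),    # dilation then erosion
        }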
The automatic target recognition (ATR) performance of SAR with subsampled raw data is investigated in this
paper. Two schemes are investigated. In scheme A, SAR images are reconstructed from subsampled data by
applying compressed sensing (CS) techniques and then targets are classified using either the mean-squared error
(MSE) classifier or the point-feature-based classifier. Both classifiers recognize a target by using the magnitude
information of dominant scatterers in the image. They fit nicely with the CS framework considering that CS
approaches can efficiently recover the bright pixels in SAR images. In scheme B, the smashed-filter classifier
is employed without image formation. Instead it makes the classification decision by directly comparing the
observed subsampled data with data simulated from reference images. The impact of various subsampling
patterns on ATR is investigated since CS theory suggests that some patterns lead to better performance than
others. Simulation results show that compared with images formed by the conventional SAR imaging algorithm,
CS-reconstructed images always lead to much higher recognition rates for both classifiers in scheme A. The
MSE classifier works better than the point-feature-based classifier because the former takes into account both the
magnitudes and locations of bright pixels while the latter uses the locations only. The smashed-filter classifier
is computationally efficient and can accurately recognize a target even with strong subsampling if appropriate
reference images are available. Its application in practice is difficult, however, because it is sensitive to the phases of complex-valued SAR images, which vary greatly with observation angle.