This PDF file contains the front matter associated with SPIE
Proceedings Volume 6568, including the Title Page, Copyright
information, Table of Contents, and the Conference Committee listing.
We study circular synthetic aperture radar (CSAR) systems collecting radar backscatter measurements over a complete circular aperture of 360 degrees. This study is motivated by the GOTCHA CSAR data collection experiment conducted by the Air Force Research Laboratory (AFRL). Circular SAR provides wide-angle information about the anisotropic reflectivity of the scattering centers in the scene, and also provides three-dimensional information about the location of the scattering centers due to its nonplanar collection geometry. Three-dimensional imaging results with single-pass circular SAR data reveal that the 3D resolution of the system is poor due to the limited persistence of the reflectors in the scene. We present results on polarimetric processing of CSAR data and illustrate reasoning about three-dimensional shape from multi-view layover using prior information about target scattering mechanisms. Next, we discuss processing of multipass CSAR data and present volumetric imaging results with IFSAR and three-dimensional backprojection techniques on the GOTCHA data set. We observe that volumetric imaging with the GOTCHA data is degraded by aliasing and high sidelobes due to nonlinear flight paths and sparse, unequal sampling in elevation. We conclude with a model-based technique that resolves target features and enhances the volumetric imagery by extrapolating the phase history data using the estimated model.
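The backprojection step mentioned above can be sketched compactly. The following is a hedged toy, not the GOTCHA processing chain: every function name and parameter is invented for illustration. It simulates the far-field phase history of a single point scatterer over a full circular aperture and backprojects it onto a small 2-D ground-plane grid, chosen small enough that 720 aperture samples satisfy the angular sampling requirement at X-band.

```python
import numpy as np

C = 3e8  # propagation speed (m/s)

def simulate_phase_history(target, angles, freqs):
    """Far-field phase history of one unit-amplitude point scatterer.

    target: (x, y) in metres; angles: aperture angles (rad); freqs: Hz.
    Idealized isotropic-scatterer model, for illustration only.
    """
    proj = np.cos(angles) * target[0] + np.sin(angles) * target[1]  # projected range
    k = 2 * np.pi * freqs / C                                       # wavenumbers
    return np.exp(-2j * np.outer(proj, k))                          # (n_angles, n_freqs)

def backproject(data, angles, freqs, grid_x, grid_y):
    """Matched-filter backprojection onto a rectangular ground-plane grid."""
    X, Y = np.meshgrid(grid_x, grid_y)
    k = 2 * np.pi * freqs / C
    image = np.zeros(X.shape, dtype=complex)
    for a, ang in enumerate(angles):
        r = X * np.cos(ang) + Y * np.sin(ang)
        # Undo the two-way propagation phase at every pixel, sum over frequency.
        image += (data[a] * np.exp(2j * k * r[..., None])).sum(axis=-1)
    return np.abs(image)

angles = np.linspace(0, 2 * np.pi, 720, endpoint=False)  # full 360-degree aperture
freqs = np.linspace(9.0e9, 10.0e9, 32)                   # 1 GHz bandwidth at X-band
data = simulate_phase_history((0.3, -0.2), angles, freqs)
gx = gy = np.linspace(-0.5, 0.5, 21)                     # small scene, 5 cm spacing
img = backproject(data, angles, freqs, gx, gy)
iy, ix = np.unravel_index(np.argmax(img), img.shape)
print(gx[ix], gy[iy])                                    # peak at the scatterer
```

With the full 360-degree aperture the matched-filter sum is perfectly coherent only at the true scatterer position, so the image peak falls on the grid point (0.3, -0.2).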
For synthetic aperture radar (SAR) systems utilizing a circular aperture for target recognition, it is important to know how a target's point spread function (PSF) behaves as a function of various radar parameters and of target positional changes that may occur during data collection. The purpose of this research is to characterize the three-dimensional (3D) point spread function of a radially displaced point scatterer for circular synthetic aperture radar (CSAR). For automatic target recognition (ATR) systems requiring target identification with a high degree of confidence, CSAR processing represents a viable alternative, given that it can produce images with resolution finer than a wavelength. With very large CSAR apertures (90° or more), three-dimensional imaging is possible with a single phase center and a single pass. Using a backprojection image formation process, point-target PSF responses are generated at various target locations for a given radar bandwidth and depression angle over a full 360° CSAR aperture. Consistent with previous studies, the 3D PSF for a point target located at the image center is cone shaped and serves as the basis for comparing and characterizing the 3D PSFs of radially displaced scatterers. For a radially displaced point target, simulated results show that the 3D PSF response is asymmetric and tends toward an elliptical shape.
Radar resolution in three dimensions is considered for circular synthetic apertures at a constant elevation angle.
A closed-form expression is derived for the far-field 3-D point spread function for a circular aperture of 360 degrees
azimuth and is used to revisit the traditional measures of resolution along the x, y and z spatial axes. However,
the limited angular persistence of reflectors encountered in practice renders the traditional measures inadequate
for circular synthetic aperture radar imaging. Two alternative measures for 3-D resolution are presented: a
nonparametric measure based on level sets of a reflector's signature and a statistical measure using the Cramér-Rao lower bound on location estimation error. Both proposed measures provide a quantitative evaluation of
3-D resolution as a function of scattering persistence and radar system parameters. The analysis shows that
3-D localization of a reflector requires a combination of large radar cross section and large angular persistence.
In addition, multiple elevations or a priori target scattering models, if available, may be used to significantly
enhance 3-D resolution.
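The dependence of a Cramér-Rao bound on angular persistence can be illustrated with a toy model (our own simplification, not the paper's derivation): a reflector observed through noisy projected-range measurements r(θ) = x cos θ + y sin θ, available only over the azimuth extent for which the reflector persists.

```python
import numpy as np

def location_crlb(persistence_deg, n_obs=100, sigma=0.05):
    """CRLB on total (x, y) localization variance from noisy projected-range
    measurements r(theta) = x*cos(theta) + y*sin(theta), observed only over
    the azimuth extent for which the reflector persists.  Toy model: the
    observation count and noise level are held fixed as persistence varies.
    """
    th = np.deg2rad(np.linspace(0.0, persistence_deg, n_obs))
    H = np.stack([np.cos(th), np.sin(th)], axis=1)  # Jacobian wrt (x, y)
    fisher = H.T @ H / sigma**2                     # Gaussian-noise Fisher information
    return np.trace(np.linalg.inv(fisher))          # sum of the two variance bounds

for deg in (10, 45, 180):
    print(deg, location_crlb(deg))   # bound tightens as persistence grows
```

Over a narrow angular extent the cosine and sine columns of the Jacobian are nearly collinear, so the Fisher matrix is ill-conditioned and the location variance bound is large; widening the persistence restores conditioning and tightens the bound, consistent with the qualitative conclusion above.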
This paper outlines a concept for exploiting UAV (unmanned aerial vehicle) trajectories to detect slowly moving targets. All analysis and simulation results are reported under the assumption of a circular UAV trajectory with various degrees of localized perturbations in the neighborhood of a given circular trajectory. These trajectory perturbations are introduced and investigated in order to develop intelligent processing algorithms for detecting slowly moving targets. The concept is based on collecting sub-apertures of data over a given set of localized trajectories and intelligently parsing the collected data based on time-varying angle estimates between the localized UAV trajectory and subsets of a collection of moving point targets. The parsed data are intelligently combined over large SAR integration sub-intervals and intervals, yielding a novel approach to detecting moving targets with large variations in speed and trajectory. Simulation results are reported for three different trajectory perturbation functions.
A hitchhiker is a passive radar receiver that relies on sources of opportunity to perform radar tasks [1-4]. In this paper, we consider a synthetic-aperture radar (SAR) system with static non-cooperative transmitters and mobile receivers traversing arbitrary trajectories, and present an analytic image formation method. Due to its combined synthetic aperture and hitchhiking structure, we refer to the system under consideration as a synthetic aperture hitchhiker (SAH). Our approach is applicable to cooperative and/or non-cooperative and static and/or mobile sources of opportunity.

Conventional SAR processing involves correlation of the received signal with the transmitted waveform as a first step of image formation. For passive SAR, however, the transmitted waveform is not necessarily known. Instead, we use spatio-temporal correlation of received signals. Given a pair of receivers, the spatio-temporal correlation method compares the received signals to identify a target within the illuminated scene. We combine this with microlocal techniques to develop a filtered backprojection (FBP) type inversion method for passive SAR [5]. The combined correlation-FBP inversion method does not require knowledge of the transmitter locations. Furthermore, FBP inversion has the advantages of computational efficiency and image formation under non-ideal conditions, such as arbitrary flight trajectories and non-flat topography.
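The spatio-temporal correlation idea, comparing two received signals directly with no knowledge of the transmitted waveform, can be illustrated in one dimension. This is our own toy sketch, not the paper's FBP method: a noise-like waveform from a non-cooperative transmitter reaches two receivers with different path delays, and cross-correlating the two receptions recovers the differential delay that constrains the target location.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4096
# Unknown transmitter waveform: noise-like, never given to the processor.
waveform = rng.standard_normal(n)

# Received signals at two hitchhiking receivers: the same scattered waveform
# arrives with different target-to-receiver path delays, plus receiver noise.
d1, d2 = 40, 55                  # path delays in samples (toy values)
r1 = np.roll(waveform, d1) + 0.1 * rng.standard_normal(n)
r2 = np.roll(waveform, d2) + 0.1 * rng.standard_normal(n)

# Spatio-temporal correlation: correlate one reception against the other.
# The peak lag is the differential delay d2 - d1, obtained without ever
# knowing the transmitted waveform itself.
xc = np.correlate(r2, r1, mode="full")
lag = np.argmax(np.abs(xc)) - (n - 1)
print(lag)   # 15
```

The differential delay defines an iso-range surface through the scene; in the paper this per-receiver-pair information is backprojected, whereas the sketch stops at the correlation peak.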
Reconstruction algorithms for monostatic synthetic aperture radar (SAR) with poor antenna directivity traversing straight and arbitrary flight trajectories have been developed by various authors [1-5], while, to our knowledge, bistatic SAR studies for the case of poor antenna directivity are limited to isotropic antennas traversing certain flight trajectories (straight [6,7] or circular [8,9]) over flat topography.

In this paper, we present an approximate analytic inversion method for bistatic SAR (Bi-SAR) [10]. In particular, we present a new filtered-backprojection (FBP) type Bi-SAR inversion method for arbitrary, but known, flight trajectories over non-flat, but known, topography. These FBP-type reconstruction methods have the advantage that they produce images with the edges of the scene at the correct location, orientation, and strength. We demonstrate the performance of the new method via numerical simulations.
Single-channel synthetic aperture radar (SAR) can provide high-quality, focused images of moving targets by utilizing advanced SAR-GMTI techniques that focus all constant-velocity targets into a three-dimensional space indexed by range, cross-range, and cross-range velocity. However, an inherent geolocation ambiguity exists in that multiple, distinct moving targets may possess identical range-versus-time responses relative to a constant-velocity collection platform. Although these targets are uniquely located within a four-dimensional space (x-position, y-position, x-velocity, and y-velocity), their responses are focused and mapped to the same three-dimensional position in the SAR-GMTI image cube. Previous research has shown that a circular SAR (CSAR) collection geometry is one way to break this ambiguity and create a four-dimensional detection space. This research determines the target resolution available in the detection space as a function of different collection parameters. A metric is introduced to relate the resolvability of multiple target responses for various parametric combinations, i.e., changes in key collection parameters such as integration time, slant range, look angle, and carrier frequency.
A sparse-aperture imaging problem arises in synthetic aperture radar (SAR) when parts of the phase history data are corrupted or incomplete. The images reconstructed from a sparse-aperture SAR collection are degraded by elevated sidelobes. One effective method for enhancing these images has been nonquadratic regularization, which employs a cost function containing an image formation error term and a feature enhancement term. In the past, a quasi-Newton algorithm was applied to minimize the nonquadratic regularization cost function; two alternatives employ the stochastic gradient method instead. In this paper, these three algorithms based on the nonquadratic regularization cost function are applied to corrupted phase history data and evaluated based on output image quality and the time required for image generation and enhancement. The phase history data are taken from the Xpatch simulated backhoe data set.
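The shape of the cost function involved can be sketched on a toy problem (ours, not the Xpatch backhoe processing, and with plain gradient descent standing in for the quasi-Newton and stochastic-gradient solvers the paper compares): a least-squares data-fidelity term plus a smoothed, sparsity-enforcing lp penalty.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 60, 100
A = rng.standard_normal((m, n)) / np.sqrt(m)   # stand-in for the SAR projection operator
x_true = np.zeros(n)
x_true[[10, 47, 80]] = [2.0, -1.5, 1.0]        # a few strong scatterers
y = A @ x_true + 0.01 * rng.standard_normal(m) # sparse-aperture-style measurements

lam, p, eps = 0.05, 1.0, 1e-3                  # penalty weight, lp exponent, smoothing

def cost(x):
    """Data-fidelity term plus smoothed sparsity-enforcing lp penalty."""
    return np.sum((A @ x - y) ** 2) + lam * np.sum((x**2 + eps) ** (p / 2))

def grad(x):
    data_term = 2 * A.T @ (A @ x - y)
    reg_term = lam * p * x * (x**2 + eps) ** (p / 2 - 1)
    return data_term + reg_term

x = np.zeros(n)
step = 0.1
for _ in range(2000):                          # plain gradient descent
    x -= step * grad(x)

print(cost(x) < cost(np.zeros(n)))             # cost reduced from the zero image
```

The penalty suppresses the small sidelobe-like coefficients while the strong scatterers survive, which is the feature-enhancement behavior described above; the three algorithms the paper evaluates minimize this same kind of cost with different update rules.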
In this paper we consider the problem of joint enhancement of multichannel Synthetic Aperture Radar (SAR)
data. Previous work by Cetin and Karl introduced nonquadratic regularization methods for image enhancement
using sparsity enforcing penalty terms. For multichannel data, independent enhancement of each channel is
shown to degrade the relative phase information across channels that is useful for 3D reconstruction. We thus
propose a method for joint enhancement of multichannel SAR data with joint sparsity constraints. We develop
both a gradient-based and a Lagrange-Newton-based method for solving the joint reconstruction problem, and demonstrate the performance of the proposed methods on an IFSAR height extraction problem with multi-elevation data.
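Why joint (rather than per-channel) sparsity matters for the interferometric phase can be seen with a two-channel toy example; this is our own illustration, not the paper's algorithm. Group soft-thresholding scales all channels of a pixel by one common factor, so the cross-channel phase ratio that carries IFSAR height survives, whereas independent thresholding can zero the weaker channel and destroy that ratio.

```python
import numpy as np

def soft_independent(x, t):
    """Per-channel complex soft threshold: each channel treated separately."""
    mag = np.abs(x)
    return np.where(mag > t, x * (1 - t / np.maximum(mag, 1e-12)), 0)

def soft_joint(x, t):
    """Group soft threshold across channels (axis 0): a pixel survives or
    dies in all channels together, so cross-channel phase is preserved."""
    gmag = np.linalg.norm(x, axis=0, keepdims=True)
    return np.where(gmag > t, x * (1 - t / np.maximum(gmag, 1e-12)), 0)

# One pixel seen in two channels: weak in channel 1, strong in channel 2,
# with an interferometric phase difference of 1.3 - 0.8 = 0.5 rad.
x = np.array([[0.4 * np.exp(1j * 0.8)],
              [1.5 * np.exp(1j * 1.3)]])
ind = soft_independent(x, 0.5)
jnt = soft_joint(x, 0.5)
print(ind[0, 0])                          # 0: weak channel zeroed, phase lost
print(np.angle(jnt[1, 0] / jnt[0, 0]))    # 0.5 rad: relative phase intact
```

Thresholding is only a stand-in here for the gradient-based and Lagrange-Newton solvers above; the point is the structure of the penalty, not the solver.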
This paper describes a challenge problem whose scope is the 2D/3D imaging of stationary targets from a volumetric data set of X-band synthetic aperture radar (SAR) data collected in an urban environment. The data for this problem were collected over a scene consisting of numerous civilian vehicles and calibration targets. The radar operated in circular SAR mode and completed 8 circular flight paths around the scene at varying altitudes. The release consists of phase history data, auxiliary data, processing algorithms, processed images, and ground truth data. Interest is focused on mitigating the large sidelobes in the point spread function: due to the sparse nature of the elevation aperture, traditional imaging techniques introduce excessive artifacts in the processed images. Further interests include the formation of high-resolution 3D SAR images from single-pass data and feature extraction for 3D SAR automatic target recognition applications. The purpose of releasing the Gotcha Volumetric SAR Data Set is to provide the community with X-band SAR data that supports the development of new algorithms for high-resolution 2D/3D imaging.
Computational methods for electromagnetic scattering prediction have been an invaluable tool to the radar signal
exploitation community. Scattering prediction codes can provide simulated data of varied levels of fidelity at a
fraction of the cost of measured data. Software based on physical optics theory is presently the tool of choice for
generating high-frequency scattering data. Currently available codes have extensive capabilities but are usually restricted in their distribution or application due to government or proprietary concerns and platform-specific software designs. The Raider Tracer software, described in this paper, is a MATLAB-based scattering prediction code that was developed for open distribution to the broader research community.
Spatially resolved radar signatures are of high importance for target detection and classification, as well as for the corresponding countermeasures. To allow the implementation of different techniques for radar backscattering characterization, an indoor measurement range was realized. It uses a hall of size 35 × 20 × 8 m³ equipped with a 7-m-diameter target turntable with 70-ton capacity. A crane allows the antennas to be moved along horizontal paths with very high accuracy (0.1 mm). In this range, different measurement systems, related to different methods for target characterization, were realized. This contribution reviews the most important features of the employed concepts and provides a critical comparison as well as a discussion of the limitations of these approaches. All concepts aim at a decomposition of the radar backscattering into contributions assigned to substructures considerably smaller than the overall size of the target. For microwave frequencies (e.g., X-band), a 3-D ISAR approach provides resolution cells with linear dimensions of about 1-2 wavelengths and a corresponding deterministic target model. For the millimeter-wave regime (W-band), an alternative approach based on directive antennas and time-gating was implemented; it provides the parameters of a spatially resolved stochastic target model.
Inverse synthetic aperture radar (ISAR) imaging based on indoor near-field backscattering measurements has proven to be a powerful tool for diagnostic purposes in radar cross-section (RCS) reduction and for deriving RCS target models viable for radar systems operating at larger distances, e.g., under far-field conditions. This paper presents an advanced 3-D imaging approach in which, in addition to the turntable rotation, the antenna is moved along a linear path chosen in accordance with the geometry of the target and the aspect angle of interest. For reconstructing the reflectivity distribution, a configuration-specific grid of spatial sampling points is employed, which reduces the complexity of determining correct values for the scattering amplitudes. The reflectivity distribution reproduces the backscattering seen by an antenna moved along a finite surface (a synthetic 2-D aperture) in the scattering near field of the target, but is to be used to model backscattering for antennas at larger distances, e.g., in the far field. Therefore, the feasibility of this approach is discussed with respect to different applications, i.e., diagnostics for RCS reduction and deterministic or statistical RCS models. Results obtained for a car as an X-band radar target are presented to verify the features of the imaging system.
The purpose of this paper is to present an end-to-end simulator for spaceborne high-resolution SAR systems that is capable of simulating realistic raw data and focused images of extended three-dimensional scenes. The simulator is based on precise mathematical modeling of the overall SAR system chain and generates information on the quality of the image data and its suitability for interpreting target and background signatures. The principal components of the simulator are: the generation of an extended scene, including the fully polarimetric scattering behavior of the three-dimensional surfaces and man-made objects and typical SAR effects such as layover, speckle noise, and shadowing; an accurate SAR sensor simulation (antenna, transmit and receive path); the generation of the raw data for the desired SAR mode (stripmap or spotlight); and the image processing and evaluation. The flexible and modular structure allows for adjustment and extension to fulfill different tasks. The most important modules reflecting the basic physical models are described and simulation results are demonstrated.
In this paper we present an algorithm for target validation using 3-D scattering features. Building a high-fidelity 3-D CAD model is a key step in the target validation process. 3-D scattering features were introduced previously [1] to capture the spatial and angular scattering properties of a target. The 3-D scattering feature set for a target is obtained by using the 3-D scattering centers predicted by the shooting-and-bouncing-ray technique and establishing a correspondence between the scattering centers and their associated angular visibility. A 3-D scattering feature can be interpreted as a matched filter for a target, since radar data projected onto the feature are matched to the spatial and angular scattering behavior of the target. Furthermore, the 3-D scattering features can be tied back to the target geometry using the trace-back information computed during the extraction process. By projecting the measured radar data onto a set of 3-D scattering features and examining the associated correlations and trace-back information, the quality of the 3-D target CAD model used for synthetic signature modeling can be quantified. The correlation and trace-back information can point to regions of a target that differ from the 3-D CAD model. Results for the canonical Slicy target are presented.
The miniature SAR system MiSAR has been developed by EADS Germany for lightweight UAVs such as the LUNA system. MiSAR adds to these tactical UAV systems the all-weather reconnaissance capability that has been missing until now. Unlike other SAR sensors, which produce large strip maps at update rates of several seconds, MiSAR generates sequences of SAR images at approximately a 1 Hz frame rate.

Photo interpreters (PIs) of tactical drones, hitherto mainly experienced in visual interpretation, are not used to SAR images, and especially not to the characteristics of SAR image sequences. They should therefore be supported to improve their ability to carry out their tasks with a new, demanding sensor system. We have accordingly analyzed, in discussion with military PIs, for which tasks MiSAR can be used and how the PIs can be supported by special algorithms.

We developed image processing and exploitation algorithms for such SAR image sequences. A main component is the generation of image sequence mosaics to provide a better overview. This mosaicing has the advantage that non-straight flight paths and varying squint angles can also be processed. Another component is a screening component for man-made objects that marks regions of interest in the image sequences. We use a classification-based approach, which can be easily adapted to new sensors and scenes. These algorithms are integrated into an image exploitation system that gives the image interpreters a better overview and orientation and helps them detect relevant objects, especially on long-endurance reconnaissance missions.
The polar format algorithm (PFA) is a well-known method for forming imagery in both the radar and medical imaging communities. PFA is attractive because it has low computational cost and partially compensates for phase errors due to a target's motion through resolution cells (MTRC). Since the imaging scenarios for remote sensing and medical imaging are traditionally different, PFA implementations differ between the communities. This paper describes those differences, and the performance of two illustrative implementations is compared using synthetic radar and medical imagery.
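A bare-bones version of the resampling at the heart of PFA can be sketched as follows. This is our own illustration (real implementations in both communities differ in interpolation kernels and grid choices): phase history sampled on a polar grid in the spatial-frequency plane is interpolated, range then azimuth, onto a rectangular grid so that a plain 2-D inverse FFT can focus the image.

```python
import numpy as np

# Phase history of a point scatterer at (x0, y0), sampled on a polar grid
# in the spatial-frequency (k-space) plane: nk radial samples, na angles.
nk, na = 64, 64
x0, y0 = 1.0, -0.5
kr = np.linspace(180.0, 220.0, nk)               # radial wavenumbers (rad/m)
th = np.deg2rad(np.linspace(-3.0, 3.0, na))      # narrow-angle aperture
KR, TH = np.meshgrid(kr, th, indexing="ij")
KX, KY = KR * np.cos(TH), KR * np.sin(TH)
data = np.exp(-1j * (KX * x0 + KY * y0))         # ideal point-target return

def cinterp(xnew, x, y):
    """Linear interpolation of complex samples (np.interp is real-only)."""
    return np.interp(xnew, x, y.real) + 1j * np.interp(xnew, x, y.imag)

# Stage 1 (range interpolation): for each aperture angle, resample the
# radial line kx = kr*cos(theta) onto a common kx grid.
kx_grid = np.linspace(185.0, 215.0, nk)
stage1 = np.empty((nk, na), complex)
for j in range(na):
    stage1[:, j] = cinterp(kx_grid, kr * np.cos(th[j]), data[:, j])

# Stage 2 (azimuth interpolation): along each kx row the samples sit at
# ky = kx*tan(theta); resample onto a common ky grid.
ky_grid = np.linspace(-9.0, 9.0, na)
rect = np.empty((nk, na), complex)
for i in range(nk):
    rect[i, :] = cinterp(ky_grid, kx_grid[i] * np.tan(th), stage1[i, :])

# The resampled data now lie on a rectangular k-space grid and match the
# ideal plane-wave phase exp(-j(kx*x0 + ky*y0)), so a 2-D inverse FFT
# focuses the target without MTRC smearing.
KXg, KYg = np.meshgrid(kx_grid, ky_grid, indexing="ij")
expected = np.exp(-1j * (KXg * x0 + KYg * y0))
print(np.max(np.abs(rect - expected)))           # small interpolation error
image = np.abs(np.fft.fftshift(np.fft.ifft2(rect)))
```

The residual error here is purely the linear-interpolation error on a unit-modulus exponential, which is why production implementations use higher-order or sinc-like kernels.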
A fundamental issue in synthetic aperture radar (SAR) application development is data processing and exploitation in real time or near real time. The power of high-performance computing (HPC) clusters, FPGAs, and the IBM Cell processor presents new algorithm development possibilities that have not been fully leveraged. In this paper, we illustrate SAR data exploitation capabilities that were impractical over the last decade due to computing limitations. We envision that SAR imagery encompassing city-size coverage at extremely high levels of fidelity could be processed in near real time using the above technologies, giving the warfighter access to critical information for the war on terror, homeland defense, and urban warfare.
The processing of airborne multi-channel radar data to cancel the clutter near moving ground targets can be
accomplished through Doppler filtering, with displaced phase center antenna (DPCA) techniques, or by space-time
adaptive processing (STAP). Typical clutter suppression algorithms recently developed for moving ground targets were
designed to function with two-channel displaced phase center radar data. This paper reviews the implementation of a
two-channel clutter cancellation approach used in the past (baseline technique), discusses the development of an
improved two-channel clutter cancellation algorithm, and extends this technique to three-channel airborne radar data.
The enhanced performance of the improved dual-channel method is extended by exploiting the extra information gained from a third channel. A significant improvement in the separation between the moving-target signature level and the surrounding clutter level was obtained with the multi-channel signal subspace (MSS) algorithm when comparing dual-channel and three-channel clutter suppression results to the baseline two-channel technique.
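The baseline two-channel idea can be sketched with an idealized model (ours; the paper's algorithms handle the real-data effects this toy ignores): when the rear phase center re-occupies the front phase center's along-track position one pulse later, stationary clutter produces identical samples and subtracts away exactly, while a mover's Doppler phase has advanced in the intervening pulse interval and therefore survives.

```python
import numpy as np

rng = np.random.default_rng(2)
n_pulses, n_range, delta = 32, 128, 1   # delta = DPCA lag in pulses

# Stationary clutter depends only on the antenna position along track, not
# on time: one complex reflectivity profile per phase-center position.
clutter = 5.0 * (rng.standard_normal((n_pulses + delta, n_range))
                 + 1j * rng.standard_normal((n_pulses + delta, n_range)))

def target_echo(pulse):
    """Slow mover in range bin 40 with 0.3 rad/pulse Doppler phase."""
    echo = np.zeros(n_range, complex)
    echo[40] = np.exp(1j * 0.3 * pulse)
    return echo

# Front channel leads the rear channel by `delta` positions along track.
front = np.array([clutter[m + delta] + target_echo(m) for m in range(n_pulses)])
rear = np.array([clutter[m] + target_echo(m) for m in range(n_pulses)])

# DPCA: pair front at pulse m with rear at pulse m + delta (same position).
# Position-dependent clutter cancels exactly; the mover's phase has advanced.
residual = front[:-delta] - rear[delta:]

profile = np.abs(residual).sum(axis=0)
print(np.argmax(profile))   # 40: only the moving target survives
```

In this noise-free model the clutter residual is exactly zero everywhere except the mover's range bin; real systems face channel mismatch and timing errors, which is what the improved algorithms above address.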
The idea of preconditioning transmit waveforms for optimal clutter rejection in radar imaging is presented.
Waveform preconditioning involves determining a map on the space of transmit waveforms, and then applying this
map to the waveforms before transmission. The work applies to systems with an arbitrary number of transmit- and receive-antenna elements, and makes no assumptions about the elements being co-located. Waveform
preconditioning for clutter rejection achieves efficient use of power and computational resources by distributing
power properly over a frequency band and by eliminating clutter filtering in receive processing.
In this paper, we present a Multi-Frequency Space-Time Orthogonal (MF-STOP) adaptive filtering approach for detection and discrimination of targets based on a two-stage orthogonal projection, whereby target parameters can be extracted in the presence of heavy clutter and noise. The proposed technique detects targets tracked by a radar system within heavy clutter. After targets are detected, motion information is extracted that can be used to discriminate threats such as reentry vehicles from other targets. Target detection is performed in stage one by a combination of Windowed Short-Time Fast Fourier Transform (WSTFFT) processing and Principal Component Analysis (PCA). Target discrimination is performed in the second stage via Partial Least Squares (PLS), using a training filter constructed from the stage-one detections. Targets are discriminated explicitly by metric criteria such as size or precession; these discriminating features do not have to be known a priori.
Detecting regions of change in images of the same scene taken at different times is of widespread interest.
Important applications of change detection include video surveillance, remote sensing, medical diagnosis and
treatment. Change detection usually involves image registration, which is aimed at removing meaningless changes
caused by camera motion. Image registration is a hard problem due to the absence of knowledge about camera
motion and objects in the scene. To address this problem, this paper proposes a novel motion-segmentation-based approach to change detection, which represents a paradigm shift. Unlike existing methods, our approach requires no image registration: it separates global motion (camera motion) from local motion, where local motion corresponds to regions of change and regions exhibiting only global motion are classified as 'no change'. Our approach is therefore robust to camera motion.
Separating global motion from local motion is particularly challenging due to lack of prior knowledge about
camera motion and the objects in the scene. To tackle this, we introduce a motion-segmentation approach based
on minimization of the coding length. The key idea of our approach is as follows. We first estimate the motion
field by solving the optical flow equation; then we segment the motion field into regions with different motion,
based on the minimum coding length criterion; after motion segmentation, we estimate the global motion and
local motion; finally, our algorithm outputs regions of change, which correspond to local motion. Experimental
results demonstrate the effectiveness of our scheme.
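A much-simplified, translation-only illustration of separating global (camera) motion from local motion is sketched below. It substitutes phase correlation for the optical-flow estimation and minimum-coding-length segmentation described above, which are beyond a short sketch; all names and parameter values are assumptions:

```python
import numpy as np

def global_shift(ref, img):
    """Dominant integer shift s such that img ~ np.roll(ref, s, axis=(0, 1)),
    estimated by phase correlation (whitened cross-power spectrum)."""
    f = np.fft.fft2(ref) * np.conj(np.fft.fft2(img))
    corr = np.abs(np.fft.ifft2(f / (np.abs(f) + 1e-12)))
    py, px = np.unravel_index(int(np.argmax(corr)), corr.shape)
    h, w = ref.shape
    dy = -(py - h if py > h // 2 else py)       # wrap peak index to signed shift
    dx = -(px - w if px > w // 2 else px)
    return dy, dx

def change_mask(ref, img, thresh=0.5):
    """Compensate the dominant (global) motion, then flag pixels whose
    appearance is still unexplained; these correspond to local motion."""
    dy, dx = global_shift(ref, img)
    aligned = np.roll(img, (-dy, -dx), axis=(0, 1))
    return np.abs(ref - aligned) > thresh

rng = np.random.default_rng(0)
base = rng.random((64, 64))                      # shared scene content
frame_a = base.copy()
frame_a[40:45, 40:45] += 2.0                     # object at its first position
frame_b = np.roll(base, (3, 5), axis=(0, 1))     # camera motion: global shift
frame_b[47:52, 45:50] += 2.0                     # object moved independently

mask = change_mask(frame_a, frame_b)             # flags only the moved object
```

Pixels following the dominant shift are classified as 'no change', while the independently moving object survives the compensation and is reported as a region of change, mirroring the global/local decomposition above.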
Attributed scattering feature models have shown potential in aiding automatic target recognition and scene
visualization from radar scattering measurements. Attributed scattering features capture physical scattering
geometry, including the non-isotropic response of target scattering over wide angles, that is not discerned from
traditional point scatter models. In this paper, we study the identifiability of canonical scattering primitives
from complex phase history data collected over sparse nonlinear apertures that have both azimuth and elevation
diversity. We study six canonical shapes: a flat plate, dihedral, trihedral, cylinder, top-hat, and sphere, and
three flight path scenarios: a monostatic linear path, a monostatic nonlinear path, and a bistatic case with
a fixed transmitter and a nonlinear receiver flight path. We modify existing scattering models to account for
nonzero object radius and to scale peak scattering intensities to equate to radar cross section. Similarities in
some canonical scattering responses lead to confusion among multiple shapes when considering only model fit
errors. We present additional model discriminators including polarization consistency between the model and
the observed feature and consistency of estimated object size with radar cross section. We demonstrate that
flight path diversity and combinations of model discriminators increase the identifiability of canonical shapes.
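For orientation, the textbook geometrical-optics peak-RCS formulas for several of the canonical shapes can be coded directly. These are the standard unmodified expressions (peak response, dimensions and wavelength lam in meters), not the modified models with nonzero-radius corrections developed in the paper:

```python
import math

def rcs_sphere(r):
    """Sphere in the optical region: independent of wavelength."""
    return math.pi * r ** 2

def rcs_plate(a, b, lam):
    """Flat rectangular plate, normal incidence."""
    return 4 * math.pi * (a * b) ** 2 / lam ** 2

def rcs_dihedral(a, b, lam):
    """Dihedral corner of two a-by-b plates, peak response."""
    return 8 * math.pi * (a * b) ** 2 / lam ** 2

def rcs_trihedral(a, lam):
    """Triangular trihedral corner reflector of edge a, boresight."""
    return 4 * math.pi * a ** 4 / (3 * lam ** 2)

def rcs_cylinder(r, L, lam):
    """Circular cylinder of radius r and length L, broadside."""
    return 2 * math.pi * r * L ** 2 / lam
```

For example, at X-band (lam = 3 cm) a 10 cm square plate returns roughly 1.4 m^2 at normal incidence, while a sphere of 10 cm radius returns only about 0.03 m^2; scaling peak intensity to RCS in this way is what lets estimated object size serve as a model discriminator.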
High-resolution synthetic aperture radar images usually contain much redundant, noisy, and irrelevant
information. Eliminating this information, or extracting only the useful information, can enhance ATR
performance, reduce processing time, and increase the robustness of ATR systems. Most existing
feature extraction methods are either computationally expensive or can only provide ad hoc solutions
and have no guarantee of optimality. In this paper, we describe a new distance metric learning algorithm.
The algorithm is based on the local learning strategy and is formulated as a convex optimization
problem. The algorithm is not only capable of learning feature significance and feature correlations
in a high-dimensional space but is also easy to implement, with guaranteed global optimality.
Experimental results based on the MSTAR database are presented to demonstrate the effectiveness of
the new algorithm.
The most commonly used smoothing algorithms for complex data processing are blurring functions (e.g., Hanning,
Taylor weighting, and Gaussian windows). Unfortunately, the filters so designed blur the edges in a Synthetic Aperture Radar
(SAR) scene, reduce the accuracy of features, and blur the fringe lines in an interferogram. For the Digital Surface Map
(DSM) extraction, the blurring of these fringe lines causes inaccuracies in the height of the unwrapped terrain surface.
Our goal here is to perform spatially non-uniform smoothing to overcome the above-mentioned disadvantages. This is
achieved by using a Complex Anisotropic Non-Linear Diffuser (CANDI) filter that is spatially varying. In particular,
an appropriate choice of the convection function in the CANDI filter accomplishes the non-uniform smoothing.
This boundary sharpening intra-region smoothing filter acts on interferometric SAR (IFSAR) data with noise to produce
an interferogram with significantly reduced noise contents and desirable local smoothing. Results of CANDI filtering
will be discussed and compared with those obtained by using the standard filters on simulated data.
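The flavor of boundary-sharpening, intra-region smoothing can be conveyed with the classical real-valued Perona-Malik diffusion sketched below. This is a stand-in, not the complex-valued CANDI filter with its convection term, and all parameter values are assumptions:

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=0.3, dt=0.2):
    """Edge-preserving anisotropic diffusion: smooth within regions while
    the conduction coefficient shuts off flux across strong edges."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # forward differences toward the four neighbors (wrap-around borders)
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # conduction: near 1 for small (noise) gradients, near 0 at edges
        g = lambda d: np.exp(-(d / kappa) ** 2)
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

rng = np.random.default_rng(0)
clean = np.zeros((32, 32))
clean[:, 16:] = 1.0                               # a sharp step edge
noisy = clean + 0.1 * rng.standard_normal((32, 32))
smoothed = perona_malik(noisy)
```

On the noisy step image, noise is smoothed within each flat region while the edge contrast survives, which is the behavior (locally smooth interferogram, preserved fringe lines) that CANDI targets.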
Object tracking is an important component of many computer vision systems. It is widely used in video surveillance,
robotics, 3D image reconstruction, medical imaging, and human computer interface. In this paper, we
focus on unsupervised object tracking, i.e., without prior knowledge about the object to be tracked. To address
this problem, we take a feature-based approach, i.e., using feature points (or landmark points) to represent
objects. Feature-based object tracking consists of feature extraction and feature correspondence. Feature correspondence
is particularly challenging since a feature point in one image may have many similar points in another
image, resulting in ambiguity in feature correspondence. To resolve the ambiguity, algorithms that use exhaustive
search and correlation over a large neighborhood have been proposed. However, these algorithms incur
high computational complexity, which is not suitable for real-time tracking. In contrast, Tomasi and Kanade's
tracking algorithm only searches corresponding points in a small candidate set, which significantly reduces computational
complexity; but the algorithm may lose track of feature points in a long image sequence. To mitigate
the limitations of the aforementioned algorithms, this paper proposes an efficient and robust feature-based tracking
algorithm. The key idea of our algorithm is as follows. For a given target feature point in one frame, we first
find a corresponding point in the next frame that minimizes the sum-of-squared-difference (SSD) between the
two points; then we test whether the corresponding point yields a large value under the so-called Harris criterion.
If not, we further identify a candidate set of feature points in a small neighborhood of the target point; then find
a corresponding point from the candidate set, which minimizes the SSD between the two points. The algorithm
may output no corresponding point due to disappearance of the target point. Our algorithm is capable of tracking
feature points and detecting occlusions/uncovered regions. Experimental results demonstrate the superior
performance of the proposed algorithm over the existing methods.
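The first matching step described above (SSD minimization checked against the Harris criterion) can be sketched as follows; the window sizes, search radius, and function names are illustrative assumptions, and the fallback candidate-set stage is omitted:

```python
import numpy as np

def ssd(a, b):
    """Sum of squared differences between two equally sized patches."""
    return float(((a - b) ** 2).sum())

def harris_response(patch, k=0.04):
    """Harris corner measure det(M) - k * trace(M)^2 over a patch."""
    dy, dx = np.gradient(patch.astype(float))
    ixx, iyy, ixy = (dx * dx).sum(), (dy * dy).sum(), (dx * dy).sum()
    return ixx * iyy - ixy ** 2 - k * (ixx + iyy) ** 2

def track_point(f0, f1, y, x, half=4, search=6):
    """Find the point in f1 whose patch minimizes SSD against the patch
    around (y, x) in f0, scanning a small search window."""
    ref = f0[y - half:y + half + 1, x - half:x + half + 1]
    best, best_pos = np.inf, None
    for yy in range(y - search, y + search + 1):
        for xx in range(x - search, x + search + 1):
            cand = f1[yy - half:yy + half + 1, xx - half:xx + half + 1]
            s = ssd(ref, cand)
            if s < best:
                best, best_pos = s, (yy, xx)
    return best_pos, best

rng = np.random.default_rng(0)
frame0 = rng.random((32, 32))
frame1 = np.roll(frame0, (2, -1), axis=(0, 1))    # pure translation between frames
pos, err = track_point(frame0, frame1, 16, 16)    # expect the shifted location
```

A low Harris response at the matched location would indicate an unreliable (textureless) correspondence, triggering the candidate-set fallback described above.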
Airborne ground moving-target indication (GMTI) radar can track moving vehicles at large standoff distances.
Unfortunately, trajectories from multiple vehicles can become kinematically ambiguous, resulting in confusion
between a target vehicle of interest and other vehicles. We propose the use of high range resolution (HRR) radar
profiles and multinomial pattern matching (MPM) for target fingerprinting and track stitching to overcome
kinematic ambiguities.
Sandia's MPM algorithm is a robust template-based identification algorithm that has been applied successfully
to various target recognition problems. MPM utilizes a quantile transformation to map target intensity samples
to a small number of grayscale values, or quantiles. The algorithm relies on a statistical characterization of the
multinomial distribution of the sample-by-sample intensity values for target profiles. The quantile transformation
and statistical characterization procedures are extremely well suited to a robust representation of targets for HRR
profiles: they are invariant to sensor calibration, robust to target signature variations, and lend themselves to
efficient matching algorithms.
In typical HRR tracking applications, target fingerprints must be initiated on the fly from a limited number of
HRR profiles. Data may accumulate indefinitely as vehicles are tracked, and their templates must be continually
updated without becoming unbounded in size or complexity. To address this need, an incrementally updated
version of MPM has been developed. This implementation of MPM incorporates individual HRR profiles as they
become available, and fuses data from multiple aspect angles for a given target to aid in track stitching. This
paper provides a description of the incrementally updated version of MPM.
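The quantile transformation at the heart of MPM can be sketched in a few lines. This is a generic illustration (the number of levels and the quantile estimator are assumptions, not Sandia's implementation) that demonstrates the calibration invariance noted above:

```python
import numpy as np

def quantile_transform(profile, n_levels=4):
    """Map intensity samples to quantile indices 0..n_levels-1.
    Invariant to any monotone affine rescaling of the samples,
    e.g., a change in sensor calibration gain or offset."""
    probs = np.linspace(0.0, 1.0, n_levels + 1)[1:-1]   # interior quantiles
    edges = np.quantile(profile, probs)
    return np.searchsorted(edges, profile, side='right')

rng = np.random.default_rng(0)
profile = rng.standard_normal(200)               # stand-in for an HRR profile
q = quantile_transform(profile)                  # grayscale values in {0,1,2,3}
counts = np.bincount(q, minlength=4)             # multinomial bin occupancies
```

The per-sample quantile labels are what the multinomial statistical characterization is built on; because only the rank order of the intensities matters, rescaled data produce identical labels.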
Understanding and organizing data, in particular understanding the key modes of variation in the data, is a first
step toward exploiting and evaluating sensor phenomenology. Spectral theory and manifold learning methods
have recently been shown to offer several powerful tools for many parts of the exploitation problem. We will
describe the method of diffusion maps and give some examples with radar (backhoe data dome) data. The so-called
diffusion coordinates are produced by a kernel-based dimensionality-reduction technique that can, for example, organize
random data and yield explicit insight into the type and relative importance of the data variation. We will
provide sufficient background for others to adopt these tools and apply them to other aspects of exploitation and
evaluation.
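A minimal sketch of the diffusion-map construction (Gaussian affinity kernel, row-stochastic normalization, leading nontrivial eigenvectors) follows. The parameter choices are illustrative, and this is not the authors' processing of the backhoe data:

```python
import numpy as np

def diffusion_map(X, eps=1.0, n_coords=2):
    """Diffusion-map embedding: build a Gaussian affinity kernel, normalize
    it to a Markov transition matrix, and embed each point using the
    leading nontrivial eigenvectors (the diffusion coordinates)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    K = np.exp(-d2 / eps)                        # pairwise affinities
    P = K / K.sum(axis=1, keepdims=True)         # row-stochastic transitions
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    vals, vecs = vals.real[order], vecs.real[:, order]
    # skip the trivial eigenpair (eigenvalue 1, constant eigenvector)
    return vecs[:, 1:n_coords + 1] * vals[1:n_coords + 1]

rng = np.random.default_rng(0)
A = rng.normal(0.0, 0.3, (10, 2))                # cluster one
B = rng.normal(0.0, 0.3, (10, 2)) + [4.0, 0.0]   # cluster two, well separated
coords = diffusion_map(np.vstack([A, B]))
```

On two well-separated clusters, the first diffusion coordinate takes opposite signs on the two groups, i.e., the embedding organizes the data by its dominant mode of variation, which is the behavior exploited for sensor phenomenology above.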
An object-image metric is an extension of standard metrics in that it is constructed for matching and comparing configurations of object features to configurations of image features. For the generalized weak perspective camera, it is invariant to any affine transformation of the object or the image. Recent research in the exploitation of the object-image metric suggests new approaches to Automatic Target Recognition (ATR). This paper explores the object-image metric and its limitations. Through a series of experiments, we specifically seek to understand how the object-image metric could be applied to the image registration problem, an enabling technology for ATR.
Automatic target recognition (ATR) performance models are needed for online adaptation and for effective use (e.g., in
fusion) of ATR products. We present empirical models focused on synthetic aperture radar (SAR) ATR algorithms.
These models are not ATR algorithms in themselves; rather they are models of ATRs developed with the intention of
capturing the behavior, at least on a statistical basis, of a reference ATR algorithm. The model covariates (or inputs)
might include the ATR operating conditions (sensor, target, and environment), ATR training parameters, etc. The
model might produce performance metrics (Pid, Pd, Pfa, etc.) or individual ATR decisions. "Scores" are an
intermediate product of many ATRs, which then go through a relatively simple decision rule. Our model has a parallel
structure, first modeling the score production and then mapping scores to model outputs. From a regression perspective,
it is impossible to predict individual ATR outcomes for all possible values of this covariate space since samples are only
available for small subsets of the total space. Given this limitation, and absent a purely theoretical model meaningfully
matched to the true complexity of this problem, our approach is to examine the empirical behavior of scores across
various operating conditions, and identify trends and characteristics of the scores that are apparently predictable. Many
of the scores available for training are in so-called standard operating conditions (SOC), and a far smaller number are in
so-called extended operating conditions (EOCs). The influence of the EOCs on scores and ATR decisions is examined
in detail.
It is believed that the LMS algorithm can be used to form a two-dimensional image from radar return data. A version of
the LMS algorithm was written to form SAR images from radar phase history data. Images were formed from fifty sets
of twenty synthetically generated random points. The signal-to-noise ratio (SNR) was measured for each image and
averaged 30 dB. A more complex synthetic scene was also formed from a black and white JPEG image.
This image showed excellent qualitative results. Actual radar range data was collected from a set of vertical pins and
from a model car painted with conductive paint. The data was from a full 360-degree aperture. The resulting images
were of high quality; the vertical pins acted as pure point sources and indicated that the image had a resolution of 4.5
mm, which agrees with theory.
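For readers unfamiliar with it, the classical LMS update underlying such an approach is sketched below on a simple system-identification example. The imaging formulation itself (mapping phase history to a 2-D scene) is specific to the paper and not reproduced here; the signal model and parameters are assumptions:

```python
import numpy as np

def lms(x, d, n_taps=4, mu=0.02):
    """Least-mean-squares adaptive filter: adjust the weights by a
    stochastic-gradient step to minimize the instantaneous squared
    error between the desired signal d[n] and the filter output."""
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(x)):
        xn = x[n - n_taps + 1:n + 1][::-1]   # most recent sample first
        e = d[n] - w @ xn                    # instantaneous error
        w += mu * e * xn                     # LMS weight update
    return w

rng = np.random.default_rng(0)
h = np.array([0.5, -0.3, 0.2, 0.1])          # unknown system to identify
x = rng.standard_normal(5000)                # white excitation
d = np.convolve(x, h)[:len(x)]               # desired signal = filtered input
w = lms(x, d)                                # converges toward h
```

With white input and a small step size, the weights converge to the unknown system response; the paper's contribution is applying this iterative estimation to radar phase history rather than to a filter-identification problem.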