In this paper, we describe a new approach to non-coherent change detection for high resolution polarimetric synthetic aperture radar (polSAR) exploitation. In the high resolution setting, the reduced size of a resolution cell diminishes the applicability of central limit theorem arguments that lead to the traditional Gaussian backscatter models that underpin existing polSAR change detection algorithms. To mitigate this, we introduce a new model for polSAR data that combines generalized Gamma (GΓ) distributed marginals within a copula framework to capture the correlation dependency between multiple polSAR channels. Using the GΓ-copula model, a generalized likelihood ratio test (GLRT) is derived for detecting changes within high resolution polSAR imagery. Examples using measured data demonstrate the non-Gaussian nature of high resolution polSAR data and quantify a performance improvement when using the proposed GΓ-copula change detection framework.
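The modeling idea can be illustrated with a minimal numerical sketch (not the paper's estimator): generalized Gamma marginals tied together by a Gaussian copula, with the copula correlation recovered from rank-based normal scores. The shape parameters, sample size, and the moment-style correlation estimator below are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulate two correlated "channels" with generalized Gamma (GGamma) marginals
# by pushing correlated Gaussian variates through GGamma quantile functions.
rho = 0.6
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=5000)
u = stats.norm.cdf(z)                               # uniform marginals
x1 = stats.gengamma.ppf(u[:, 0], a=2.0, c=1.5)      # channel 1 amplitudes
x2 = stats.gengamma.ppf(u[:, 1], a=3.0, c=0.8)      # channel 2 amplitudes

# Recover the Gaussian-copula correlation from rank-based normal scores
# (a simple moment-style estimator, not a full joint MLE).
def normal_scores(x):
    ranks = stats.rankdata(x)
    return stats.norm.ppf(ranks / (len(x) + 1.0))

rho_hat = np.corrcoef(normal_scores(x1), normal_scores(x2))[0, 1]
print(f"true copula correlation {rho:.2f}, estimate {rho_hat:.2f}")
```

The construction separates the marginal behavior (non-Gaussian amplitude statistics) from the cross-channel dependence, which is the property the GΓ-copula model exploits.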
In this paper, we consider change detection in the longwave infrared (LWIR) domain. Because thermal emission is the dominant radiation source in this domain, differences in temperature may appear as material changes and introduce false alarms in change imagery. Existing methods, such as temperature-emissivity separation and alpha residuals, attempt to extract temperature-independent LWIR spectral information. However, both methods remain susceptible to residual temperature effects which degrade change detection performance. Here, we develop temperature-robust versions of these algorithms that project the spectra into approximately temperature-invariant subspaces. The complete error covariance matrix for each method is also derived so that Mahalanobis distance may be used to quantify spectral differences in the temperature-invariant domain. Examples using synthetic and measured data demonstrate substantial performance improvement relative to the baseline algorithms.
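The scoring step can be sketched as follows: Mahalanobis distance between two spectra under an error covariance. The small diagonal covariance and 4-band spectra here are stand-ins for the complete error covariance and real data derived in the paper.

```python
import numpy as np

def mahalanobis_change_score(s1, s2, cov):
    """Mahalanobis distance between two spectra under error covariance `cov`."""
    d = s1 - s2
    return float(np.sqrt(d @ np.linalg.solve(cov, d)))

# Toy 4-band example: an identical pair scores 0; a differing pair does not.
cov = np.diag([0.04, 0.02, 0.03, 0.05])       # stand-in error covariance
a = np.array([0.9, 0.8, 0.7, 0.6])
b = a + np.array([0.0, 0.1, 0.0, 0.0])
print(mahalanobis_change_score(a, a, cov))    # identical spectra -> 0.0
print(mahalanobis_change_score(a, b, cov))
```

Weighting the difference by the inverse covariance is what lets bands with larger residual temperature error contribute less to the change score.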
In this paper, we develop and evaluate change detection algorithms for longwave infrared (LWIR) hyperspectral imagery. Because measured radiance in the LWIR domain depends on unknown surface temperature, care must be taken to prevent false alarms resulting from in-scene temperature differences that appear as material changes. We consider two strategies to mitigate this effect. In the first, pre-processing via traditional temperature-emissivity separation (TES) yields approximately temperature-invariant emissivity vectors for use in change detection. In the second, we adopt a minimax approach that minimizes the maximal spectral deviation between measurements. While more computationally demanding, the second approach eliminates spectral density assumptions in traditional TES and provides superior change detection performance. Examples on synthetic and measured data quantify computational complexity and detection performance.
In this paper, we consider use of synthetic aperture radar (SAR) to provide absolute platform position information in scenarios where GPS signals may be degraded, jammed, or spoofed. Two algorithms are presented, and both leverage known 3D ground structure in an area of interest, e.g. provided by LIDAR data, to provide georeferenced position information to airborne SAR platforms. The first approach is based on the wide-aperture layover properties of elevated reflectors, while the second approach is based on correlating backprojected imagery with digital elevation imagery. Building on 3D backprojection, localization solutions result from non-convex optimization problems based on image sharpness or correlation measures. Results using measured GOTCHA data demonstrate localization errors of only a few meters with initial uncertainty regions as large as 16 km².
KEYWORDS: Long wavelength infrared, Infrared sensors, Infrared radiation, Signal processing, Temperature metrology, Monte Carlo methods, Databases, Sensors, Error analysis, Data modeling
Signal processing for long-wave infrared (LWIR) sensing is complicated by unknown surface temperatures in a scene, which impact measured radiance through temperature-dependent black-body radiation of in-scene objects. The unknown radiation levels give rise to the temperature-emissivity separation (TES) problem describing the intrinsic ambiguity between an object’s temperature and emissivity. In this paper we present a novel Bayesian TES algorithm that produces a probabilistic posterior estimate of a material’s unknown temperature and emissivity. The statistical uncertainty characterization provided by the algorithm is important for subsequent signal processing tasks such as classification and sensor fusion. The algorithm is based on Markov chain Monte Carlo (MCMC) methods and exploits conditional linearity to achieve efficient block-wise Gibbs sampling for rapid inference. In contrast to existing work, the algorithm optimally incorporates prior knowledge about in-scene materials via Bayesian priors which may optionally be learned using training data and a material database. Examples demonstrate up to an order of magnitude reduction in error compared to classical filter-based TES methods.
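The block-wise Gibbs idea can be illustrated on a toy conditionally linear model (not the TES radiance model): when each parameter block has an exact Gaussian conditional given the other, sampling alternates two closed-form draws. The line model, noise level, and flat priors below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy conditionally linear model: y = m*x + c + noise, noise ~ N(0, s2).
# Given c, the conditional posterior of m is Gaussian (and vice versa),
# so block-wise Gibbs sampling alternates two exact Gaussian draws.
x = np.linspace(0.0, 1.0, 50)
m_true, c_true, s2 = 2.0, 0.5, 0.01
y = m_true * x + c_true + rng.normal(0.0, np.sqrt(s2), x.size)

m, c = 0.0, 0.0
samples = []
for it in range(2000):
    # m | c, y (flat prior): Gaussian with LS mean and variance s2/(x.x)
    r = y - c
    m = rng.normal((x @ r) / (x @ x), np.sqrt(s2 / (x @ x)))
    # c | m, y (flat prior): Gaussian with residual mean and variance s2/n
    r = y - m * x
    c = rng.normal(r.mean(), np.sqrt(s2 / x.size))
    if it >= 500:                      # discard burn-in
        samples.append((m, c))
post = np.array(samples)
print("posterior means:", post.mean(axis=0))
```

The retained draws approximate the joint posterior, giving the kind of uncertainty characterization (not just a point estimate) that the abstract highlights for downstream fusion.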
In this paper we present a new generalized family of change detection algorithms for SAR imagery that includes traditional non-coherent and coherent processing as special cases. The parameterized family of algorithms, referred to as partially coherent change detection (PCCD), allows the user to select the level of coherence desired in the change detection algorithm. This and other settings of the algorithm enable one to specify the types of changes that are significant, thereby reducing the number of false alarms due to insignificant changes, such as foliage motion. Algorithm settings may also be applied spatially in order to support spatially varying levels of coherence based on scene content. Examples from synthetic and measured imagery demonstrate the efficacy of the new family of algorithms.
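One illustrative way to interpolate between the two regimes (not necessarily the paper's PCCD parameterization) is to blend a window-level sample-coherence statistic with a non-coherent intensity-ratio statistic via a user-set weight:

```python
import numpy as np

def change_statistics(f, g, eps=1e-12):
    """Coherent and non-coherent change statistics for complex SAR pixel
    windows f (reference) and g (mission), flattened to 1-D arrays."""
    # Sample coherence: near 1 for unchanged coherent scenes, drops with change.
    coh = np.abs(np.vdot(f, g)) / np.sqrt(np.vdot(f, f).real *
                                          np.vdot(g, g).real + eps)
    # Non-coherent statistic: ratio of window intensities.
    ratio = (np.vdot(g, g).real + eps) / (np.vdot(f, f).real + eps)
    return coh, ratio

def pccd_score(f, g, alpha):
    """Illustrative partially coherent score: alpha=1 is purely coherent
    (low coherence flags change), alpha=0 is purely non-coherent."""
    coh, ratio = change_statistics(f, g)
    return alpha * (1.0 - coh) + (1.0 - alpha) * np.abs(np.log(ratio))

rng = np.random.default_rng(2)
win = rng.normal(size=25) + 1j * rng.normal(size=25)
print(pccd_score(win, win, alpha=0.5))            # identical windows -> ~0
print(pccd_score(win, 2j * win, alpha=0.0))       # intensity change flagged
```

Sweeping `alpha`, possibly per region, mirrors the abstract's idea of spatially varying the level of coherence demanded of the detector.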
Modern radar systems equipped with agile-beam technology support multiple modes of operation, including, for
example, tracking, automated target recognition (ATR), and synthetic aperture radar imaging (SAR). In a multimode
operating environment, the services compete for radar resources and leave gaps in the coherent collection
aperture devoted to SAR imaging. Such gapped collections, referred to as interrupted SAR, typically result in
significant image distortion and can substantially degrade subsequent exploitation tasks, such as change detection.
In this work we present a new form of exploitation that jointly performs imaging and coherent change detection
in interrupted environments. We adopt a Bayesian approach that inherently accommodates different interrupt
patterns and compensates for missing data via exploitation of 1) a partially coherent model for reference-pass to
mission-pass pixel transitions, and 2) the a priori notion that changes between passes are generally sparse and
spatially clustered. We employ approximate message passing for computationally efficient Bayesian inference
and demonstrate performance on measured and synthetic SAR data. The results show near-optimal
(ungapped) performance with pulse loss rates up to ∼50% and highlight orders-of-magnitude reductions in false
alarm rates compared to traditional methods.
When acoustic signals are subject to measurement over large distances or extended periods of time, the environmental
conditions governing their propagation are unlikely to remain constant over the necessary spatial and
temporal extents. Relative to a static environment, such inhomogeneities may result in severe signal distortion,
such as non-linear warping, and can significantly degrade subsequent signal processing tasks such as classification
and time-delay estimation.
In this paper we 1) describe a set of experiments that were performed in order to collect space-time acoustic
propagation data for empirical modeling, paying particular attention to important experimental design issues
such as optimal sampling rates in the spatial domain, and 2) present a statistical two-dimensional model for inhomogeneous
environments that describes the space-time distribution of acoustic propagation velocity governing
low-frequency long-range propagation of aeroacoustic signals with long durations (several minutes). The model
includes a deterministic component to model structured changes (e.g., increasing temperature during morning
hours) and a stochastic component, specified by a two dimensional Gaussian random process, to capture correlated
random deviations. Cramér-Rao bounds are presented as a means of evaluating and optimizing sensor
geometries for learning model parameters.
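The geometry-evaluation step can be sketched for a simplified model: if the velocity field is linear in its parameters and the noise is Gaussian, the Cramér-Rao bound is the inverse of the Fisher information built from the design matrix. The three-parameter field, noise level, and two candidate geometries below are illustrative assumptions.

```python
import numpy as np

def crb_for_geometry(xs, ts, sigma=0.5):
    """CRB for [mean, spatial gradient, temporal drift] of a velocity field
    v(x, t) = th0 + th1*x + th2*t observed with i.i.d. Gaussian noise."""
    H = np.column_stack([np.ones_like(xs), xs, ts])   # design matrix
    fim = H.T @ H / sigma**2                          # Fisher information
    return np.diag(np.linalg.inv(fim))                # per-parameter CRB

ts = np.tile(np.arange(5.0), 4)                       # 5 epochs, 4 sensors
clustered = np.repeat([0.0, 0.1, 0.2, 0.3], 5)        # sensors bunched (km)
spread = np.repeat([0.0, 1.0, 2.0, 3.0], 5)           # sensors spread (km)
print("clustered CRB:", crb_for_geometry(clustered, ts))
print("spread    CRB:", crb_for_geometry(spread, ts))
```

Comparing the bound across candidate layouts shows, for instance, that spreading the sensors tightens the achievable accuracy on the spatial-gradient parameter, which is the sense in which the bounds guide geometry optimization.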
KEYWORDS: Sensors, Synthetic aperture radar, Fourier transforms, Detection and tracking algorithms, 3D acquisition, Reflectors, Algorithm development, Data modeling, Digital video discs
An airborne circular synthetic aperture radar system captured data for a 5 km diameter area over 31 orbits.
For this challenge problem, the phase history for 56 targets was extracted from the larger data set and placed
on a DVD for public release. The targets include 33 civilian vehicles of which many are repeated models,
facilitating training and classification experiments. The remaining targets include an open area and 22 reflectors
for scattering and calibration research. The circular synthetic aperture radar provides 360 degrees of azimuth
around each target. For increased elevation content, the collection contains two nine-orbit volumetric series,
where the sensor reduces altitude between each orbit. Researchers are challenged to further the art of focusing,
3D imaging, and target discrimination for circular synthetic aperture radar.
A new method for hyperspectral change detection derived from a parametric radiative transfer model was recently
developed. The model-based approach explicitly accounts for local illumination variations, such as shadows,
which act as a constant source of false alarms in traditional change detection techniques. Here we formally
derive the model-based approach as a generalized likelihood ratio test (GLRT) developed from the data model.
Additionally, we discuss variations on implementation techniques for the algorithm and provide results using
tower-based data and HYDICE data.
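The flavor of a GLRT that absorbs illumination variation can be sketched with a deliberately simplified per-pixel model (not the paper's radiative transfer model): under the no-change hypothesis the two spectra differ only by an unknown scalar gain, which is estimated by least squares, and the residual energy serves as the test statistic.

```python
import numpy as np

def glrt_residual(y1, y2):
    """GLRT-style change statistic with an unknown per-pixel illumination
    gain as a nuisance parameter: fit y2 ~= a*y1 under H0 (no change) and
    use the residual energy as the test statistic."""
    a_hat = float(y1 @ y2) / float(y1 @ y1)       # ML gain estimate under H0
    r = y2 - a_hat * y1
    return float(r @ r)

y1 = np.array([0.9, 0.8, 0.7, 0.6, 0.5])
print(glrt_residual(y1, 0.4 * y1))                # pure shadow/gain -> ~0
print(glrt_residual(y1, 0.4 * y1 + np.array([0.0, 0.2, 0.0, 0.0, 0.0])))
```

Because the gain is re-estimated per pixel, a uniform shadow scores near zero while a spectral-shape change does not, which is the mechanism that suppresses illumination-induced false alarms.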
KEYWORDS: Sensors, Statistical analysis, Radon, Statistical modeling, Data modeling, Monte Carlo methods, Algorithm development, 3D modeling, Cadmium, Sensor networks
In this work we consider the localization of a gunshot using a distributed sensor network measuring time differences
of arrival between a firearm's muzzle blast and the shockwave induced by a supersonic bullet. This
so-called MB-SW approach is desirable because time synchronization is not required between the sensors; however,
it suffers from increased computational complexity and requires knowledge of the bullet's velocity at all
points along its trajectory. While the actual velocity profile of a particular gunshot is unknown, one may use a
parameterized model for the velocity profile and simultaneously fit the model and localize the shooter. In this
paper we study efficient solutions for the localization problem and identify deceleration models that trade off
localization accuracy and computational complexity. We also develop a statistical analysis that includes bias
due to mismatch between the true and assumed deceleration models and covariance due to additive noise.
KEYWORDS: Scattering, Signal to noise ratio, 3D image processing, Synthetic aperture radar, Rayleigh scattering, Radar, Reconstruction algorithms, Data centers, Associative arrays, Detection and tracking algorithms
This paper addresses the question of scattering center detection and estimation performance in synthetic aperture
radar. Specifically, we consider sparse 3D radar apertures, in which the radar collects both azimuth and elevation
diverse data of a scene, but collects only a sparse subset of the traditional filled aperture. We use a sparse
reconstruction algorithm to both detect and estimate scattering center locations and amplitudes in the scene.
We quantify both the detection and estimation performance for scattering centers over a high dynamic range of
magnitudes. Over this wide range of scattering center signal-to-noise values, detection performance is compared
to GLRT detection performance, and estimation performance is compared to the Cramér-Rao lower bound.
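The detect-and-estimate step can be sketched with a generic sparse reconstruction algorithm. Orthogonal matching pursuit is used here purely as a stand-in, on a random dictionary rather than a radar projection operator, and the scene of two scattering centers is an illustrative assumption.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily pick the column most correlated
    with the residual, then re-fit all selected amplitudes by least squares."""
    support, r = [], y.copy()
    x_s = np.zeros(0)
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ r))))
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

rng = np.random.default_rng(3)
A = rng.normal(size=(60, 80))
A /= np.linalg.norm(A, axis=0)                 # unit-norm dictionary columns
x_true = np.zeros(80)
x_true[[7, 42]] = [3.0, -2.0]                  # two scattering centers
y = A @ x_true                                 # noiseless sparse measurements
x_hat = omp(A, y, k=2)
print("recovered support:", np.flatnonzero(np.abs(x_hat) > 1e-8))
```

The recovered support plays the role of detection and the re-fit amplitudes the role of estimation, which is the split the paper's performance analysis quantifies against GLRT and Cramér-Rao benchmarks.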
The majority of pixel-level hyperspectral change detection algorithms have arisen from probabilistic models
developed for the data. These algorithms typically operate in two stages. In the first stage, the illumination
differences and other changes due to atmospheric and environmental conditions between the two scenes are
removed. In the second stage, a hypothesis test is performed on the difference between these normalized pixels.
These particular change detection methods often suffer from local variability within the data. As an alternative
to these statistics-based change detection algorithms, this paper examines the use of a parametric physical model
for change detection. For a single hyperspectral data set, the number of unknown parameters in the model
is greater than the number of measurements. However, if a second data set exists and the underlying material
reflectance of each pixel is assumed to remain constant between the two, one can formulate a problem in which
the number of measurements exceeds the number of unknowns, allowing standard constrained optimization methods
to be applied for parameter estimation. Assuming the validity of the physical model, any residual error remaining
after obtaining the optimal parameter estimates must result from noise or a violation of the reflectance
assumption, i.e., a change in material reflectance from time-1 to time-2. Accordingly, the fit error for each
pixel is an indicator of reflectance change. Additionally, the proposed framework allows spatial information to
be incorporated at a later stage. This paper provides a preliminary look at the proposed change detection method
and associated challenges.
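The fit-error idea can be sketched with a toy diagonal-illumination model (a drastic simplification of any real radiative transfer model): one shared reflectance per band is fit jointly to both acquisitions by least squares, and the residual energy is the change score. The illumination spectra and reflectance values below are illustrative assumptions.

```python
import numpy as np

def fit_error_score(y1, y2, illum1, illum2):
    """Per-pixel change score: jointly fit one shared reflectance per band to
    both acquisitions (stacked least squares) and report the residual energy.
    Unchanged pixels fit well; a reflectance change leaves residual error."""
    r_hat = (illum1 * y1 + illum2 * y2) / (illum1**2 + illum2**2)
    res = np.concatenate([y1 - illum1 * r_hat, y2 - illum2 * r_hat])
    return float(res @ res)

illum1 = np.array([1.0, 0.9, 0.8])         # assumed illumination, time 1
illum2 = np.array([0.5, 0.6, 0.7])         # assumed illumination, time 2
refl = np.array([0.3, 0.5, 0.2])           # shared material reflectance
unchanged = fit_error_score(illum1 * refl, illum2 * refl, illum1, illum2)
changed = fit_error_score(illum1 * refl,
                          illum2 * (refl + np.array([0.3, 0.0, 0.0])),
                          illum1, illum2)
print(unchanged, changed)
```

With the shared-reflectance constraint, two measurements per band determine one unknown, so the overdetermined fit leaves residue exactly when the reflectance assumption is violated.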
Combining moving target indication (MTI) radar with synthetic aperture radar (SAR) is of great interest to
radar specialists, in terms of improving multiple-target tracking in large, urban scenes. A major obstacle to such
a merger is the ambiguity induced by target motion. Using statistical bounds, we quantify the improvement of moving
target localization with multi-channel SAR over single-channel SAR and the more traditional MTI technique of
displaced phase center array (DPCA) processing. We show that the potential for substantial improvements in
localization performance is borne out by practical estimators based on sparse reconstruction algorithms, whose
performance approaches the statistical bounds even in the presence of clutter. We also outline a parallelization scheme for the
nonquadratic regularized sparse reconstruction technique to utilize clusters for processing large datasets.
Time-delay estimation (TDE) is a common requirement of the ranging and localization systems often found in
unattended ground sensors. In this paper we consider novel approaches to the TDE problem in a time-warping
acoustic environment, such as that encountered when the propagation velocity is not constant due to wind
gusts, for example. An increasing propagation velocity induces a compression of the received signal, while a
diminishing velocity dilates the signal. These effects warp the shape of the received signal and can significantly
reduce the effectiveness of traditional TDE algorithms.
This paper presents algorithms and performance bounds for TDE in random velocity environments. We
model unknown signal warping as a low-pass random process, which serves as a form of non-additive noise in the
time-delay estimation problem. For warping environments, we propose computationally efficient non-parametric
algorithms for TDE that significantly outperform traditional time-delay estimators, such as the location of the
sample cross-correlation peak. The Cramér-Rao bound for TDE in a time-warping environment is also presented
and used to evaluate the proposed estimators. Simulations demonstrate the bounds and estimator performance
for acoustic signals.
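The traditional baseline the abstract compares against, locating the peak of the sample cross-correlation, can be sketched in a few lines. The signal, delay, and integer-sample resolution are illustrative; real implementations typically interpolate for sub-sample delays.

```python
import numpy as np

def xcorr_delay(x, y):
    """Estimate the integer-sample delay of y relative to x via the peak
    of the sample cross-correlation (the traditional TDE baseline)."""
    c = np.correlate(y, x, mode="full")
    return int(np.argmax(c)) - (len(x) - 1)

rng = np.random.default_rng(4)
s = rng.normal(size=256)
d = 17
y = np.concatenate([np.zeros(d), s])[:256]     # delayed copy of s
print(xcorr_delay(s, y))                       # recovers d = 17
```

Under time-warping, the delayed copy is also compressed or dilated, which smears this correlation peak; that degradation is what motivates the warping-robust estimators in the paper.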
We investigate a recursive procedure for synthetic aperture imaging. We consider a concept in which a SAR
system persistently interrogates a scene, for example as it flies along or around that scene. In traditional SAR
imaging, the radar measurements are processed in blocks, by partitioning the data into a set of non-overlapping
or overlapping azimuth angles, then processing each block. We consider a recursive update approach, in which
the SAR image is continually updated, as a linear combination of a small number of previous images and a
term containing the current radar measurement. We investigate the crossrange sidelobes realized by such an
imaging approach. We show that a first-order autoregression of the image gives crossrange sidelobes similar to
a rectangular azimuth window, while a third-order autoregression gives sidelobes comparable to those obtained
from widely-used windows in block-processing image formation. The computational and memory requirements
of the recursive imaging approach are modest: on the order of M·N², where M is the recursion order (typically
≤ 3) and N² is the image size. We compare images obtained from the recursive and block processing techniques,
both for a synthetic scene and for X-band SAR measurements from the Gotcha data set.
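The sidelobe behavior can be sketched in one dimension: a first-order recursion x_k = a·x_{k-1} + (new pulse) weights past pulses exponentially, so its effective azimuth window is a^(N-1-k), and the resulting sidelobes can be read off the window's spectrum. The forgetting factor, aperture length, and the simple peak-sidelobe search below are illustrative assumptions, not the paper's analysis.

```python
import numpy as np

def peak_sidelobe_db(w, nfft=4096):
    """Peak sidelobe level (dB relative to mainlobe) of an azimuth window."""
    W = np.abs(np.fft.fft(w, nfft))
    W /= W.max()
    # Walk down the mainlobe to the first local minimum, then take the max.
    i = 1
    while i < nfft // 2 and W[i] <= W[i - 1]:
        i += 1
    return 20.0 * np.log10(W[i:nfft // 2].max())

N = 128
rect = np.ones(N)                              # block processing, no taper
a = 0.98                                       # assumed forgetting factor
expw = a ** np.arange(N - 1, -1, -1.0)         # AR(1) effective window
psl_rect = peak_sidelobe_db(rect)
psl_ar1 = peak_sidelobe_db(expw)
print("rect  PSL (dB):", psl_rect)
print("AR(1) PSL (dB):", psl_ar1)
```

The comparison makes the abstract's point concrete: the recursion's sidelobe behavior is governed by the effective window its coefficients impose, so the recursion order and coefficients can be chosen to mimic standard block-processing windows.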