This work investigates the application of compressed sensing algorithms to the problem of novel view synthesis in synthetic aperture radar (SAR). We demonstrate the ability to generate new images of a SAR target from a sparse set of looks at that target, and we show that this can be used as a data augmentation technique for deep learning-based automatic target recognition (ATR). The newly synthesized views can be used both to enlarge the original sparse training set and, in transfer learning, as a source dataset for initial training of the network. The success of the approach is quantified by measuring ATR performance on the MSTAR dataset.
Attempts to use synthetic data to augment measured data for improved synthetic aperture radar (SAR) automatic target recognition (ATR) performance have been hampered by domain mismatch between datasets. Past work that leveraged synthetic data in a transfer learning framework has been successful but was primarily focused on transferring generic SAR features. Recently, SAMPLE, a paired synthetic and measured dataset, was introduced to the SAR community, enabling demonstration of good ATR performance using 100% synthetic data. In this work, we examine how to leverage synthetic data and measured data to boost ATR performance using transfer learning. The synthetic dataset corresponds to the MSTAR 15° dataset. We demonstrate that high quality synthetic data can enhance ATR performance even when substantial measured data is available, and that synthetic data can reduce measured data requirements by over 50% while maintaining classification accuracy.
KEYWORDS: Data modeling, Sensors, Error analysis, Data processing, Detection and tracking algorithms, Target detection, Electro optical modeling, Signal to noise ratio, Radar, Signal processing
This paper considers the ubiquitous problem of estimating the state (e.g., position) of an object based on a series of noisy measurements. The standard approach is to formulate this problem as one of measuring the state (or a function of the state) corrupted by additive Gaussian noise. This model assumes both (i) the sensor provides a measurement of the true target (or, alternatively, a separate signal processing step has eliminated false alarms), and (ii) the error source in the measurement is accurately described by a Gaussian model. In reality, however, sensor measurements are often formed on a grid of pixels – e.g., Ground Moving Target Indication (GMTI) measurements are formed for a discrete set of (angle, range, velocity) voxels, and EO imagery is made on (x, y) grids. When a target is present in a pixel, therefore, uncertainty is not Gaussian (instead it is a boxcar function) and unbiased estimation is not generally possible, as the location of the target within the pixel determines the bias of the estimator. It turns out that this small modification to the measurement model makes traditional bounding approaches inapplicable. This paper discusses pixelated sensing in more detail and derives the minimum mean squared error (MMSE) bound for estimation in the pixelated scenario. We then use this error calculation to investigate the utility of using non-thresholded measurements.
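The boxcar point above can be made concrete with a small Monte Carlo sketch (the pixel width and scene size are illustrative assumptions, not values from the paper): when only the pixel index is observed, the best estimate under a uniform in-pixel prior is the pixel center, and the resulting MSE is the variance of a boxcar, Δ²/12.

```python
import numpy as np

rng = np.random.default_rng(0)
delta = 1.0                              # pixel width (illustrative)
x_true = rng.uniform(0, 10, 100_000)     # targets anywhere on a 10-unit strip

# Pixelated measurement: only the pixel index is observed.
pixel = np.floor(x_true / delta)

# With a uniform prior inside the pixel, the MMSE estimate is the pixel center.
# Note the bias: a target 20% of the way into a pixel is always reported at
# the center, so the bias depends on the unknown in-pixel location.
x_hat = (pixel + 0.5) * delta

mse = np.mean((x_hat - x_true) ** 2)
print(mse)                               # ~ delta**2 / 12, the boxcar variance
```

This is why no unbiased estimator exists in the pixelated model: the conditional bias is set by where the target sits inside the pixel, which is exactly the quantity the sensor cannot resolve.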
Recent breakthroughs in computational capabilities and optimization algorithms have enabled a new class of signal processing approaches based on deep neural networks (DNNs). These algorithms have been extremely successful in the classification of natural images, audio, and text data. In particular, a special type of DNN, the convolutional neural network (CNN), has recently shown superior performance for object recognition in image processing applications. This paper discusses modern training approaches adopted from the image processing literature and shows how those approaches enable significantly improved performance for synthetic aperture radar (SAR) automatic target recognition (ATR). In particular, we show how a set of novel enhancements to the learning algorithm, based on new stochastic gradient descent approaches, generate significant classification improvement over previously published results on a standard dataset called MSTAR.
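The abstract does not spell out the specific learning enhancements, so as a generic, hedged illustration of the kind of stochastic gradient descent refinement involved, here is a minimal NumPy sketch of gradient descent with momentum on a toy ill-conditioned quadratic (all constants are illustrative):

```python
import numpy as np

def sgd_momentum(grad, w0, lr=0.1, beta=0.9, steps=200):
    """SGD with momentum -- one standard refinement of vanilla
    gradient descent used in modern CNN training."""
    w = np.asarray(w0, dtype=float)
    v = np.zeros_like(w)
    for _ in range(steps):
        v = beta * v - lr * grad(w)   # velocity accumulates past gradients
        w = w + v
    return w

# Toy objective: f(w) = 0.5 * w.T A w with an ill-conditioned A,
# a stand-in for the kind of loss surface momentum helps on.
A = np.diag([1.0, 25.0])
w_star = sgd_momentum(lambda w: A @ w, [5.0, 5.0])
print(np.linalg.norm(w_star))         # near 0: converged to the minimizer
```

On curved, ill-conditioned surfaces like this, momentum damps the oscillation along the stiff direction while accelerating the shallow one, which is the qualitative benefit such training enhancements provide.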
Occlusions can degrade object tracking performance in sensor imaging systems. This paper describes a robust approach
to object tracking that fuses video frames with RF data in a Bayes-optimal way to overcome occlusion. We fuse data
from these heterogeneous sensors, and show how our approach enables tracking when each modality cannot track
individually. We provide the mathematical framework for our approach, details about sensor operation, and a
description of a multisensor detection and tracking experiment that fuses real collected image data with radar data.
Finally, we illustrate two benefits of fusion: improved track hold during occlusion and diminished error.
Tracking prominent scatterers provides a mechanism for scene-derived motion compensation of Synthetic Aperture
Radar (SAR) data. Such a process is useful in environments where GPS is unavailable and a lack of precise sensor
position data makes standard motion compensation difficult. Our approach to sensor positioning estimates range
histories of multiple isolated scatterers with high accuracy, then performs a geometric inversion to locate the scatterers in
three dimensions and estimate the platform's motion.
For high-accuracy scatterer range tracking, we first detect prominent scatterers using an automatic CFAR-criterion
algorithm and then track them with a two-input Kalman Filter (KF) operation. These two steps provide accurate range
estimates of multiple scatterers over a sequence of SAR pulses. The KF state space is range and range-rate. We derive
data inputs to the KF algorithm from multiple SAR pulses, divided into Coherent Processing Intervals (CPIs). Within
each CPI, individual scatterer peak amplitudes and phases are available to the algorithm.
Our approach to scene-derived motion compensation combines the high accuracy range history estimates with a novel
three-dimensional geometric inversion. This geometric inversion uses the range histories to estimate both 3D scatterer
location and 3D relative motions of the radar. We illustrate our KF-based approach to high-accuracy tracking and
demonstrate its application to estimating scene scatterer locations on synthetic and real collected SAR data.
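A minimal sketch of the two-state (range, range-rate) KF recursion described above, with the CFAR detection and peak-phase front end replaced by a simulated noisy range input; the CPI spacing, noise covariances, and initial uncertainty are illustrative assumptions, not values from the paper:

```python
import numpy as np

dt = 0.01                                   # CPI spacing (s), assumed
F = np.array([[1.0, dt], [0.0, 1.0]])       # constant-velocity state model
H = np.array([[1.0, 0.0]])                  # we observe range only
Q = 1e-4 * np.eye(2)                        # process noise (illustrative)
R = np.array([[0.25]])                      # range measurement variance

rng = np.random.default_rng(1)
truth = np.array([100.0, -3.0])             # true range (m) and range-rate (m/s)
x = np.array([90.0, 0.0])                   # deliberately poor initial state
P = np.diag([100.0, 25.0])

for _ in range(500):
    truth = F @ truth                       # scatterer range evolves linearly
    z = H @ truth + rng.normal(0, 0.5, 1)   # noisy per-CPI range measurement
    # Predict
    x, P = F @ x, F @ P @ F.T + Q
    # Update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P

print(abs(x[0] - truth[0]))                 # small residual range error
```

The filtered range histories from several such trackers are what the geometric inversion step would then consume.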
This paper describes an experimental demonstration of a distributed, decentralized, low communication sensor
management algorithm. We first review the mathematics surrounding the method, which includes a novel combination
of particle filtering for predictive density estimation and information theory for maximizing information
flow. Earlier work has shown the utility of the method via Monte Carlo simulations. Here we present a laboratory
demonstration to illustrate that utility in hardware and to provide a stepping stone toward full-up implementation. To that end, we
describe an inverted Unmanned Aerial Vehicle (UAV) test-bed developed by the General Dynamics Advanced
Information Systems (GDAIS) Michigan Research and Development Center (MRDC) to facilitate and promote
the maturation of the research algorithm into an operational, field-able system. Using a modular design with
wheeled robots as surrogates for UAVs, we illustrate how the method is able to detect and track moving targets
over a large surveillance region by tasking a collection of limited field of view sensors.
This paper describes a decentralized low communication approach to multi-platform sensor management. The
method is based on a physicomimetic relaxation to a joint information theoretic optimization, which inherits the
benefits of information theoretic scheduling while maintaining tractability. The method uses only limited message
passing: only neighboring nodes communicate, and each node makes its own sensor management decisions.
We show by simulation that the method allows a network of sensor nodes to automatically self organize
and perform a global task. In the model problem, a group of unmanned aerial vehicles (UAVs) hover above a
ground surveillance region. An initially unknown number of moving ground targets inhabit the region. Each
UAV is capable of making noisy measurements of the patch of ground directly below, which provide evidence as
to the presence or absence of targets in that sub-region. The goal of the network is to determine the number of
targets and their individual states (positions and velocities) in the entire surveillance region through repeated
interrogation by the individual nodes. As the individual nodes can only see a small portion of the ground, they
must move in a manner that is both responsive to measurements and coordinated with other nodes.
Self localization is a term used to describe the ability of a network to automatically determine the location of its nodes, given little or no external information. Self localization is an enabling technology for many future capabilities, specifically those that rely on a large number of sensors that self organize to form a coherent system. Most prior work in this area focuses on centralized computation with stationary nodes and synchronized clocks. We report on preliminary results for a setting that is more general in three ways. First, nodes in the network are moving. This implies the pair-wise distances between nodes are not fixed and therefore an iterative tracking procedure is needed to estimate the time varying node positions. Second, we do not assume synchronization between clocks on different nodes. In fact, we allow the clocks to have both an unknown offset and to be running at differing rates (i.e., a drift). Third, our method is decentralized, so there is no need for a single entity with access to all measurements. In this setup, each node in the network is responsible for estimating its state.
The method is based on repeated pair-wise communication between nodes. We focus on two types of observables in this paper. First, we use the time between when a message was sent from one node and when it was received by another node. In the case of synchronized clocks and stationary nodes, this observable provides information about the distance between the nodes. In the more general case with non-synchronized clocks, this observable is coupled to the clock offsets and drifts as well as the distance between nodes. Second, we use the Doppler stretch observed by the receiving node. In the case of synchronized clocks, this observable provides information about the line of sight velocity between the nodes. In the case of non-synchronized clocks, this observable is coupled to the clock drift as well as the line of sight velocity. We develop a sophisticated mathematical representation that allows all of these effects to be accounted for simultaneously.
We approach the problem from a Bayesian viewpoint, where measurements are accumulated over time and used to form a probability density on the state, conditioned on the measurements. What results is a recursive filtering (or tracking) algorithm that optimally merges the measurements. We show by simulation and illustrative data collections that our method provides an efficient decentralized method for determining the location of a collection of moving nodes.
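As a sketch of how the clock terms couple into the pairwise observables, the following toy simulation models a receive clock with offset β and drift rate α, forms the classic two-way-exchange combinations, and shows that the per-exchange offset estimates drift linearly at rate α−1 while a small drift-induced bias remains in the recovered range. All constants are illustrative, and this simple deterministic combination is a stand-in for, not an implementation of, the paper's Bayesian filter:

```python
import numpy as np

c = 3e8                           # propagation speed (m/s)
alpha, beta = 1.0 + 5e-6, 0.37    # node B clock drift rate and offset (assumed)
d = 1500.0                        # fixed inter-node distance (m), assumed
tof = d / c

t1 = np.linspace(0.0, 10.0, 50)   # A's send times (A's clock = true time here)
T1 = alpha * (t1 + tof) + beta    # B's receive stamps (B's skewed clock)
T2 = T1 + 1e-3                    # B replies 1 ms later, in B's clock
t_reply = (T2 - beta) / alpha     # true time of B's reply
t2 = t_reply + tof                # A's receive stamps

# Classic two-way combinations: time of flight cancels out of the offset
# estimate, whose sequence then drifts linearly at rate (alpha - 1).
offset_est = ((T1 - t1) + (T2 - t2)) / 2
tof_est = ((t2 - t1) - (T2 - T1)) / 2

drift_est, offset0_est = np.polyfit(t1, offset_est, 1)
print(drift_est, offset0_est, tof_est.mean() * c)   # ~ alpha-1, ~beta, ~d
```

Note the recovered range is biased by roughly c·ε·τ/2 (here under a meter) because the reply turnaround time is measured on the drifting clock; this residual coupling between drift and range is exactly why the paper estimates all of these quantities jointly.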
KEYWORDS: Particles, Detection and tracking algorithms, Signal to noise ratio, Sensors, Algorithm development, Monte Carlo methods, Surveillance, Particle filters, Target detection, Solids
Factors affecting the performance of an algorithm for tracking multiple targets observed using a pixelized sensor are studied. A pixelized sensor divides the surveillance region into a grid of cells with targets generating returns on the grid according to some known probabilistic model. In previous work an efficient particle filtering algorithm was developed for multiple target tracking using such a sensor. This algorithm is the focus of the study. The performance of the algorithm is affected by several considerations. The pixelized sensor model can be used with either thresholded or non-thresholded measurements. While it is known that information is lost when measurements are thresholded, quantitative results have not been established. The development of a tractable algorithm requires that closely-spaced targets are processed jointly while targets which are far apart are processed separately. Selection of the clustering distance involves a trade-off between performance and computational expense. A final issue concerns the computation of the proposal density used in the particle filter. Variations in a certain parameter enable a trade-off between performance and computational expense. The various issues are studied using a mixture of theoretical results and Monte Carlo simulations.
This paper shows how information-directed diffusion can be used to manage the trajectories of hundreds of smart mobile sensors. This is an artificial physics method in which the sensors move stochastically in response to an information gradient and artificial inter-sensor forces that serve to coordinate their actions.
Measurements received by the sensors are centrally fused using a particle filter to estimate the Joint Multitarget Probability Density (JMPD) for the surveillance volume. The JMPD is used to construct an information surface which gives the expected gain for sensor dwells as a function of position. The updated sensor position is obtained by moving it in response to artificial forces derived from the information surface, which acts as a potential, and inter-sensor forces derived from a Lennard-Jones-like potential. The combination of information gradient and inter-sensor forces works to move the sensors to areas of high information gain while simultaneously ensuring sufficient spacing between the sensors. We evaluate the performance of this approach using a simulation study for an idealized Micro Air Vehicle with a simple EO detector and collected target trajectories. We find that this method provides a factor of 5 to 10 improvement in performance when compared to random uncoordinated search.
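One update cycle of this scheme can be sketched as follows: each sensor steps along the gradient of a (toy, Gaussian) information surface, plus a Lennard-Jones-like inter-sensor force and a small stochastic diffusion term. The surface shape, gains, and potential constants are illustrative assumptions, not the paper's values:

```python
import numpy as np

rng = np.random.default_rng(2)
pos = rng.uniform(-3, 3, size=(8, 2))        # 8 sensors in the plane
peak = np.array([10.0, 10.0])                # location of high expected gain

def info_gradient(p):
    # Gaussian information surface centered on `peak`; gradient points uphill.
    diff = peak - p
    return diff * np.exp(-np.sum(diff**2, axis=1, keepdims=True) / 200.0)

def lj_force(p, sigma=1.0, eps=0.05):
    f = np.zeros_like(p)
    for i in range(len(p)):
        d = p[i] - p                          # vectors from every sensor to i
        r = np.linalg.norm(d, axis=1)
        mask = r > 0
        # -dU/dr of the LJ potential: repulsive near, weakly attractive far.
        mag = 24 * eps * (2 * (sigma / r[mask])**12 - (sigma / r[mask])**6) / r[mask]
        f[i] = np.sum(d[mask] * (mag / r[mask])[:, None], axis=0)
    return f

for _ in range(400):
    step = (0.05 * info_gradient(pos) + 0.05 * lj_force(pos)
            + 0.01 * rng.normal(size=pos.shape))   # stochastic diffusion term
    pos += np.clip(step, -0.5, 0.5)               # cap any extreme LJ kick

print(np.linalg.norm(pos.mean(axis=0) - peak))    # swarm drifts to the peak
```

The internal LJ forces cancel in pairs, so they set the spacing without impeding the swarm's collective drift toward high information gain, which is the qualitative behavior the abstract describes.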
This paper addresses the problem of tracking multiple moving targets by estimating their joint multitarget probability density (JMPD). The JMPD technique is a Bayesian method for tracking multiple targets that allows nonlinear, non-Gaussian target motions and measurement-to-state coupling. JMPD simultaneously estimates both the target states and the number of targets. In this paper, we give a new grid-free implementation of JMPD based on particle filtering techniques and explore several particle proposal strategies, resampling techniques, and particle diversification methods. We report the effect of these techniques on tracker performance in terms of tracks lost, mean squared error, and computational burden.
We present in this paper an information based method for sensor management that is based on tasking a sensor to make the measurement that maximizes the expected gain in information. The method is applied to the problem of tracking multiple targets. The underlying tracking methodology is a multiple target tracking scheme based
on recursive estimation of a Joint Multitarget Probability Density (JMPD), which is implemented using particle filtering methods. This Bayesian method for tracking multiple targets allows nonlinear, non-Gaussian target motion and measurement-to-state coupling. The sensor management scheme is predicated on maximizing the expected Renyi Information Divergence between the current JMPD and the JMPD after a measurement has been made. The Renyi Information Divergence, a generalization of the Kullback-Leibler Distance, provides a way to measure the dissimilarity between two densities. We use the Renyi Information Divergence to evaluate the expected information gain for each of the possible measurement decisions, and select the measurement that maximizes the expected information gain.
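The divergence-based selection rule can be sketched as follows: for each candidate dwell, average the Renyi divergence between posterior and prior over the possible detection outcomes, then choose the dwell with the largest expected gain. The single-target, four-cell setting and the detection probabilities are toy assumptions standing in for the full JMPD computation:

```python
import numpy as np

def renyi_divergence(p, q, alpha=0.5):
    """Renyi divergence D_alpha(p || q) between discrete densities;
    it approaches the Kullback-Leibler divergence as alpha -> 1."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return np.log(np.sum(p**alpha * q**(1 - alpha))) / (alpha - 1)

# Toy sensor-management step: prior over 4 cells, candidate dwells on each.
prior = np.array([0.1, 0.2, 0.3, 0.4])

def expected_gain(cell, pd=0.9, pfa=0.1):
    # Expected divergence between posterior and prior when dwelling on `cell`,
    # averaged over the two possible detection outcomes.
    gain = 0.0
    for hit in (True, False):
        like = np.where(np.arange(4) == cell,
                        pd if hit else 1 - pd,
                        pfa if hit else 1 - pfa)
        post = like * prior
        p_outcome = post.sum()          # probability of this outcome
        post /= p_outcome               # Bayes update
        gain += p_outcome * renyi_divergence(post, prior)
    return gain

gains = [expected_gain(c) for c in range(4)]
print(gains, int(np.argmax(gains)))     # dwell chosen = largest expected gain
```

Every dwell yields a strictly positive expected gain here because each outcome moves the posterior away from the prior; the scheduler simply takes the measurement that moves it furthest on average.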
This paper describes the design and implementation of multiple model nonlinear filters (MMNLF) for ground target tracking using Ground Moving Target Indicator (GMTI) radar measurements. The MMNLF is based on a general theory of hybrid continuous-discrete dynamics. The motion model state is discrete and its stochastic dynamics are a continuous-time Markov chain. For each motion model, the continuum dynamics are a continuous-state Markov process described here by appropriate Fokker-Planck equations. This is illustrated here by a specific two-model MMNLF in which one motion model incorporates terrain, road, and vehicle motion constraints derived from battlefield observations. The second model is slow diffusion in speed and heading. The target state conditional probability density is discretized on a moving grid and recursively updated with sensor measurements via Bayes' formula. The conditional density is time updated between sensor measurements using Alternating Direction Implicit (ADI) finite difference methods. In simulation testing against low signal-to-clutter-plus-noise ratio (SCNR) targets, the MMNLF is able to maintain track in situations where single model filters based on either of the component models fail. Potential applications of this work include detection and tracking of foliage-obscured moving targets.
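A one-dimensional caricature of the hybrid recursion: a density over (grid cell, motion model) is time-updated by each model's transition kernel and a model-switching Markov chain, then measurement-updated by Bayes' formula. Simple Gaussian kernels stand in for the Fokker-Planck/ADI time update, and all rates and noise levels are toy assumptions:

```python
import numpy as np

n = 200
grid = np.arange(n)
# p[m, i]: joint probability of motion model m and grid cell i
p = np.full((2, n), 1.0 / (2 * n))
switch = np.array([[0.95, 0.05], [0.05, 0.95]])   # model-switch Markov chain

def kernel(shift, width):
    # Gaussian transition kernel on the grid, mean `shift` cells per step.
    k = np.exp(-0.5 * ((np.arange(-10, 11) - shift) / width) ** 2)
    return k / k.sum()

kernels = [kernel(2.0, 1.0),    # model 0: steady motion, 2 cells/step
           kernel(0.0, 2.0)]    # model 1: slow diffusion in place

rng = np.random.default_rng(3)
x_true = 50.0
for _ in range(40):
    x_true += 2.0                               # target follows model 0
    p = switch.T @ p                            # model-switch time update
    p = np.stack([np.convolve(p[m], kernels[m], mode="same") for m in range(2)])
    z = x_true + rng.normal(0, 3.0)             # noisy position measurement
    like = np.exp(-0.5 * ((grid - z) / 3.0) ** 2)
    p *= like                                   # Bayes measurement update
    p /= p.sum()

x_est = (p.sum(axis=0) * grid).sum()            # posterior-mean position
print(abs(x_est - x_true), p.sum(axis=1))       # small error; model 0 favored
```

Because the moving-target model predicts the density where the measurements actually fall, its posterior model probability grows, which is the mechanism that lets the multiple model filter hold track where either single-model filter would fail.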
This paper describes a nonlinear filter for ground target tracking. Hospitability for maneuver derived from terrain, road and vehicle dynamics constraints is incorporated directly into the filter's motion model. The conditional probability density for the target state is maintained and updated with sensor measurements as soon as they become available. The conditional density is time updated between sensor measurements using finite difference methods. In simulations using square-law detected measurements, the filter is able to track maneuvering ground targets when the Signal to Interference + Noise Ratio (SINR) is between 6 and 9 dB.
Experiments with the LOIS (Likelihood Of Image Shape) lane detector have demonstrated that the use of a deformable template approach allows robust detection of lane boundaries in visual images. The same algorithm has been applied to detect pavement edges in millimeter wave radar images. In addition to ground vehicle applications involving lane sensing, the algorithm is applicable to airplane applications for tracking runways in either visual or radar data. Previous work on LOIS has focused on the problem of detecting lane edges in individual frames. This paper describes extensions to the LOIS algorithm which allow it to smoothly track lane edges through maneuvers such as lane changes.