Adaptive techniques for detecting small or difficult targets in the midst of high noise and/or clutter have a rich history in the radar array processing community. However, the utility of these schemes is only beginning to be realized for multichannel electro-optical techniques, specifically hyperspectral imaging (HSI). The data products generated by hyperspectral sensors differ greatly from those of radar and sonar arrays, yet recent studies using HSI data have offered promising results for modified versions of common adaptive detectors. In this paper, we compare a series of popular adaptive detection schemes applied to HSI data for the task of land mine detection. Experiments using real hyperspectral image cubes, not simulations, are performed with data from both the visible-SWIR and LWIR regions. Results are presented for different mine types in a variety of scenes.
We present a 2-D direction-of-arrival (DOA) estimation algorithm whose resolution is superior to that of the subspace class of algorithms and whose sidelobes are reduced compared to most algorithms. The algorithm is based on the 2-D AR Power Spectral Density (2-D ARPSD) applied to a uniformly spaced space-time data set, which transforms the space-time data to spatial frequency (wavenumber, a function of the direction of arrival) and temporal frequency in a high-resolution context. This is done by modeling the sensor array data with a 2-D AR model. The 2-D AR parameters are then used in a specialized form of a 2-D FFT to create an enhanced wavenumber-frequency image. A wavenumber vector for a specific narrowband temporal frequency is extracted and compared to other high-resolution algorithms such as MUSIC. Our results exhibit superior performance in low-SNR and short-sample scenarios, and when mismatch occurs in the subspace techniques. Our technique also exhibits reduced sidelobes compared with traditional methods.
This article describes our extended and generalized approach to detection of periodic signals in image sequences. These signals appear in a small number of pixels of an image sequence as periodic fluctuations in the temporal domain. Neither the shape of a signal nor its fundamental frequency is assumed to be known, but the fundamental frequency is assumed to lie in some narrow range. The frame sequences cover only a few periods of each signal under discussion. We consider groups of these signals relative to our sampling operator, which is defined by its sampling frequency and integration (exposure) time. For each group an appropriate coherent basis is used: the Fourier basis or periodized Gaussians. Commonly, under the sampling operator the signals and basis functions lose periodicity, and the bases lose orthogonality. The problems that arise are treated by a version of matching pursuit. Our approach to signal accumulation from adjacent pixels by a spectrum-specific version of principal components is generalized by projection onto a more general class of subspaces. Normally, the computationally expensive processing sketched above is performed for less than 1% of pixels only; the remaining 99% are rejected by simple and fast procedures. The algorithm was tested on simulated image sequences, as well as several real ones.
The observation of closely-spaced objects using limited-resolution infrared (IR) sensor systems can result in merged object measurements on the focal plane. These Unresolved Closely-Spaced Objects (UCSOs) can significantly hamper the performance of surveillance systems. Algorithms are desired which robustly resolve UCSO signals such that (1) the number of targets, (2) the target locations on the focal plane, (3) the uncertainty in the location estimates, and (4) the target intensity signals are correctly preserved in the resolution process. This paper presents a framework for obtaining UCSO resolution while meeting tracker real-time computing requirements by applying processing algorithms in a hierarchical fashion. Image restoration techniques, which are often computationally cheap, are applied first to help reduce noise and improve resolution of UCSOs on the focal plane. The CLEAN algorithm, developed to restore images of point targets, is used for illustration. Then, when processor constraints allow, more intensive algorithms are applied to further resolve UCSOs. A novel pixel-cluster decomposition algorithm that uses a particle distribution representative of the pixel-cluster intensities to feed the Expectation-Maximization (EM) algorithm is used in this work. We present simulation studies that illustrate the capability of this framework to improve the correct object count on the focal plane while meeting the four goals listed above. In the presence of processing time constraints, the hierarchical framework provides an interruptible mechanism which can satisfy real-time run-time constraints while improving tracking performance.
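A minimal sketch of the CLEAN step described above, assuming a known, centred PSF and a toy single-source scene (the scene, PSF, and all parameter values are illustrative, not from the paper):

```python
import numpy as np

def clean(image, psf, gain=0.5, n_iter=100, threshold=1e-3):
    """Minimal CLEAN deconvolution sketch for point targets: repeatedly
    find the brightest residual pixel, record a fractional point source
    there, and subtract the correspondingly shifted, scaled PSF."""
    psf = psf / psf.max()                                 # unit-peak PSF
    pc = np.array(np.unravel_index(np.argmax(psf), psf.shape))
    residual = image.astype(float).copy()
    components = np.zeros_like(residual)
    for _ in range(n_iter):
        peak = np.array(np.unravel_index(np.argmax(residual), residual.shape))
        amp = gain * residual[tuple(peak)]
        if amp < threshold:
            break
        components[tuple(peak)] += amp
        residual -= amp * np.roll(psf, peak - pc, axis=(0, 1))
    return components, residual

# Toy focal plane: one point source of amplitude 3 blurred by a Gaussian PSF
y, x = np.mgrid[0:32, 0:32]
psf = np.exp(-((x - 16) ** 2 + (y - 16) ** 2) / 8.0)
image = 3.0 * np.roll(psf, (4, -2), axis=(0, 1))          # true source at (20, 14)
comps, resid = clean(image, psf)
```

CLEAN recovers the source location and nearly all of its amplitude; resolving genuinely overlapping sources is where the more expensive EM-based decomposition takes over.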
With high resolution radars, realistic objects should be considered as extended rather than point targets. If several closely-spaced targets fall within the same radar beam and between adjacent matched filter samples in range, the full monopulse information from all of these samples can and should be used for estimation, both of angle and of range (with the range to sub-bin accuracy). To detect and localize multiple unresolved extended targets, we establish a model for monopulse radar returns from extended objects, and use a maximum likelihood estimator to localize the targets. Rissanen's minimum description length (MDL) will be used to decide the number of existing extended objects. We compare the new extended target monopulse processing scheme with previously developed point target monopulse techniques to show the improvement in terms of the estimation of target locations, the detection of the number of existing targets, and the tracking performance with a multiple hypothesis tracker (MHT) using the output from the proposed extended target monopulse processor.
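Rissanen's MDL criterion used above to decide the number of extended objects can be illustrated generically; the sketch below applies it to a polynomial-order choice rather than the paper's monopulse target model (the data and model family are illustrative only):

```python
import numpy as np

def mdl(rss, n, k):
    """Rissanen's MDL for a model with k free parameters and Gaussian
    residuals with residual sum of squares rss over n samples:
        (n/2) log(rss/n) + (k/2) log n.
    The candidate model minimizing MDL is selected."""
    return 0.5 * n * np.log(rss / n) + 0.5 * k * np.log(n)

# The data are linear plus a small high-frequency term the polynomials
# cannot capture; MDL should pick the linear model, not overfit.
x = np.linspace(0.0, 1.0, 200)
y = 1.0 + 2.0 * x + 0.01 * np.sin(37.0 * x)
scores = []
for deg in range(6):
    coeffs = np.polyfit(x, y, deg)
    rss = np.sum((np.polyval(coeffs, x) - y) ** 2)
    scores.append(mdl(rss, len(x), deg + 1))
best = int(np.argmin(scores))
```

The parameter-count penalty stops the fit at the simplest adequate model, which is exactly the role MDL plays in choosing the number of extended objects.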
Track features help reduce the number of false associations during tracking. The spectral signature vector from a target is expected to be consistent over short periods of time, which makes its direction a potential track feature. The target's spectral signature and covariance are measured from the data at the output of the spatial-spectral matched filter. The spectral vector feature is not independent of the local signal-to-clutter-plus-noise ratio (S(C+N)R) at the output of the anomaly detector, which is often also used as a track feature; a correction term is introduced in this paper to account for the correlation between these two features. Results from field data collections using a multi-spectral Infrared Search and Track (IRST) system are summarized with ROC curves showing the performance improvements achieved by using the unit spectral vector as a feature for both moving and stationary targets.
We describe a flexible Doppler radar hardware system designed to verify various baseband array signal processing algorithms. In this work we design a Doppler radar system simulator for baseband signal processing at the laboratory level. Based on this baseband signal processor, a PN-code pulse Doppler radar simulator is developed. More specifically, the simulator consists of an echo signal generation part and a signal processing part. The echo signal generation part uses an active array structure with 4 elements and adopts a Barker-coded PCM signal in transmission and reception for digital pulse compression. In the signal processing part, we first convert the RF radar pulse to baseband via IF sampling, since the processing algorithms operate on baseband data. Various digital beamforming algorithms can be adopted as the baseband algorithm in our simulator; we mainly use the Multiple Sidelobe Canceller (MSC), with main array antenna elements and auxiliary antenna elements, as the beamforming and sidelobe cancellation algorithm. For Doppler filtering we use the FFT. A control unit manages the overall system and the timing schedule for its operation.
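Digital pulse compression with a Barker code reduces to a matched-filter correlation. A minimal sketch (the length-13 code shown is the standard Barker-13; the abstract does not state which code length the simulator uses):

```python
import numpy as np

# Length-13 Barker code used for digital pulse compression
barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], dtype=float)

def pulse_compress(rx, code):
    """Matched-filter pulse compression: correlate the received
    baseband samples with the transmitted code."""
    return np.correlate(rx, code, mode="full")

out = pulse_compress(barker13, barker13)
# Barker codes give a mainlobe of N (=13) with all autocorrelation
# sidelobes of magnitude <= 1 -- the reason they are used here
```

The low, bounded sidelobes are what make Barker codes attractive for range compression before beamforming and Doppler filtering.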
Out-of-sequence measurement (OOSM) filtering algorithms have drawn a great deal of attention during the last few years. A number of multiple-lag OOSM filtering algorithms exist in the research literature. Only one of these algorithms is optimal; the remaining algorithms are suboptimal even for linear dynamics and linear measurement models with additive Gaussian noises. A general feature of each OOSM filtering algorithm is that it calculates, optimally or sub-optimally, the smoothed or retrodicted state estimate, the associated covariance, and the cross-covariance between the state and the measurement at the OOSM time. The existing optimal OOSM algorithm calculates these three quantities using a forward recursive algorithm. In this paper, we show that the OOSM filtering problem can be solved optimally using a generalized smoothing or retrodiction framework for linear dynamics and linear measurement models with additive Gaussian noises. We develop a new optimal smoothing-based OOSM filtering algorithm which uses the Rauch-Tung-Striebel (RTS) fixed-interval optimal backward smoother. We present numerical results using simulated data, which include two-dimensional position and velocity measurements, and analyze the performance of the algorithm using Monte Carlo simulations.
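A scalar sketch of the RTS fixed-interval smoother the proposed OOSM algorithm builds on (the model parameters and measurement record below are illustrative, not from the paper):

```python
import numpy as np

def kalman_forward(zs, F, Q, H, R, x0, P0):
    """Forward Kalman filter (scalar model), storing the filtered and
    predicted moments needed by the backward smoother."""
    xf, Pf, xp, Pp = [], [], [], []
    x, P = x0, P0
    for z in zs:
        x_pr, P_pr = F * x, F * P * F + Q        # predict
        K = P_pr * H / (H * P_pr * H + R)        # Kalman gain
        x = x_pr + K * (z - H * x_pr)            # update
        P = (1.0 - K * H) * P_pr
        xf.append(x); Pf.append(P); xp.append(x_pr); Pp.append(P_pr)
    return [np.array(v) for v in (xf, Pf, xp, Pp)]

def rts_smooth(xf, Pf, xp, Pp, F):
    """Rauch-Tung-Striebel fixed-interval backward smoother: each
    smoothed estimate uses the whole measurement record."""
    xs, Ps = xf.copy(), Pf.copy()
    for k in range(len(xf) - 2, -1, -1):
        C = Pf[k] * F / Pp[k + 1]                # smoother gain
        xs[k] = xf[k] + C * (xs[k + 1] - xp[k + 1])
        Ps[k] = Pf[k] + C * (Ps[k + 1] - Pp[k + 1]) * C
    return xs, Ps

zs = [1.0, 1.2, 0.9, 1.1, 1.05]
xf, Pf, xp, Pp = kalman_forward(zs, F=1.0, Q=0.01, H=1.0, R=0.1, x0=0.0, P0=1.0)
xs, Ps = rts_smooth(xf, Pf, xp, Pp, F=1.0)
```

The smoothed covariances never exceed the filtered ones, which is precisely what makes a smoothing-based retrodiction to the OOSM time attractive.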
Particle filter based estimation is becoming more popular because it can effectively solve nonlinear and non-Gaussian estimation problems. However, the particle filter has high computational requirements, and the problem becomes even more challenging in the case of multitarget tracking. In order to perform data association and estimation jointly, an augmented state vector of target dynamics is typically used; as the number of targets increases, the computation required for each particle increases exponentially. Parallelization is thus a possibility for achieving real-time feasibility in large-scale multitarget tracking applications. In this paper, we present a real-time feasible scheduling algorithm that minimizes the total computation time for a bus-connected heterogeneous primary-secondary architecture. This scheduler is capable of selecting the optimal number of processors from a large pool of secondary processors and mapping the particles among the selected processors. Furthermore, we propose a less communication-intensive parallel implementation of the particle filter that does not sacrifice tracking accuracy, using an efficient load balancing technique in which optimal particle migration is ensured. We present the mathematical formulations for scheduling the particles as well as for particle migration via load balancing. Simulation results show the tracking performance of our parallel particle filter and the speedup achieved through parallelization.
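The particle-mapping step can be illustrated with a simple proportional load balancer. This is only a sketch of the general idea, not the paper's scheduler, which also selects the processor subset and accounts for bus communication costs:

```python
import math

def balance_particles(n_particles, speeds):
    """Give each processor a particle count proportional to its speed,
    so that all processors finish at roughly the same time."""
    total = sum(speeds)
    raw = [n_particles * s / total for s in speeds]
    counts = [math.floor(r) for r in raw]
    # hand the leftover particles to the largest fractional remainders
    leftover = n_particles - sum(counts)
    order = sorted(range(len(raw)), key=lambda i: raw[i] - counts[i],
                   reverse=True)
    for i in order[:leftover]:
        counts[i] += 1
    return counts

# A processor twice as fast receives twice the particles
counts = balance_particles(1000, [1, 1, 2])
```

The largest-remainder rounding keeps the counts integral while preserving the exact particle total.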
This paper proposes a multiple-model (MM) hypothesis testing approach for detection of unknown target maneuvers that may have several possible prior distributions. An MM maneuver detector based on sequential hypothesis testing is developed. Simulation results that compare the performance of the proposed MM detector to that of traditional maneuver detectors are presented. They demonstrate that the new sequential MM detector outperforms traditional multiple-hypothesis-testing-based detectors when the prior acceleration distributions are unknown.
We apply a new industrial strength numerical approximation, called the "mesh-free adjoint method", to solve the nonlinear filtering problem. This algorithm exploits the smoothness of the problem, unlike particle filters, and hence we expect that mesh-free adjoints are superior to particle filters for many practical applications. The nonlinear filter problem is equivalent to solving the Fokker-Planck equation in real time. The key idea is to use a good adaptive non-uniform quantization of state space to approximate the solution of the Fokker-Planck equation. In particular, the adjoint method computes the location of the nodes in state space to minimize errors in the final answer. This use of an adjoint is analogous to optimal control algorithms, but it is more interesting. The adjoint method is also analogous to importance sampling in particle filters, but it is better for four reasons: (1) it exploits the smoothness of the problem; (2) it explicitly minimizes the errors in the relevant functional; (3) it explicitly models the dynamics in state space; and (4) it can be used to compute a corrected value for the desired functional using the residuals. We will attempt to make this paper accessible to normal engineers who do not have PDEs for breakfast.
The problem of tracking multiple maneuvering targets is considered. The usual multiple model approach is adopted in which maneuvering target motion is modeled by assuming that the target motion at each point in time can be described by one of a finite set of dynamic models. Transitions between each mode of target motion are assumed to be Markovian. Target positions are measured in polar coordinates leading to a nonlinear measurement equation. A particle filter is proposed as a solution to the problem. The proposed algorithm seeks to improve upon the performance of a previously proposed particle filter by using measurement-directed proposals and exploiting the structure of the measurement likelihood. The performance analysis focuses on targets which perform coordinated turn maneuvers. An improved model for target motion in this regime is suggested. The performance analysis, using Monte Carlo simulations, demonstrates the improved performance of the proposed algorithm compared to the previously proposed particle filter and the standard Gaussian approximation, the IMM-JPDAF.
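Coordinated-turn maneuvers are usually propagated with the standard turn-rate-augmented state model; the sketch below is that standard model, not necessarily the improved variant the paper proposes:

```python
import numpy as np

def ct_step(state, T):
    """One step of the coordinated-turn motion model with state
    [x, vx, y, vy, omega]; omega is the turn rate in rad/s."""
    x, vx, y, vy, w = state
    if abs(w) < 1e-9:                              # straight-line limit
        return np.array([x + vx * T, vx, y + vy * T, vy, w])
    s, c = np.sin(w * T), np.cos(w * T)
    return np.array([
        x + vx * s / w - vy * (1.0 - c) / w,       # position advances
        vx * c - vy * s,                           # velocity rotates by wT
        y + vx * (1.0 - c) / w + vy * s / w,
        vy * c + vx * s,
        w,                                         # turn rate persists
    ])

# Quarter turn: speed 1, turn rate pi/2 rad/s, one-second step
out = ct_step(np.array([0.0, 1.0, 0.0, 0.0, np.pi / 2]), 1.0)
```

Four such steps complete a full circle and return the target to its initial state, a quick sanity check on the transition.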
Advanced tracking algorithms such as multiple frame assignment (MFA) and multiple hypothesis tracking (MHT) require the formation of a "frame of data" to input measurements into the tracking system. A "frame" is a collection of measurements in which a target should appear at most once. For some sensor types, the frame definition is straightforward: all measurements in "one scan" of the antenna across the surveillance area compose a frame of data. However, for electronically scanned array (ESA) radar, the beam pointing is agile and the radar may point the beam in a sequence of overlapping positions. If the data from the sequence of dwells are merged into one frame, duplicate measurements may result from targets in the overlap regions. But restricting each frame to be one dwell has negative consequences because it causes an incomplete representation of closely-spaced targets within each frame. This paper presents a new algorithm for the formation of frames of data for ESA radar systems. The algorithm uses a series of gating tests to determine which radar dwells may be merged together. For overlapping beams, a selection technique is developed that minimizes the number of redundant measurements that appear in any given frame. A summary of tracking performance results attained when using the algorithm is provided.
Tracking ground moving vehicles has received less attention than its aerial counterpart. The smooth velocity transitions common to aircraft are replaced with abrupt turns and speed changes. Though the kinematic evolution of a ground vehicle is more complex, its path is more restricted. For example, if target motion is constrained by a terrain map, the topography should be integrated into the tracking algorithm. This paper shows by means of an example that the Gaussian wavelet estimator is particularly suited to map-enhanced estimation.
Target tracking algorithms have to operate in an environment of uncertain measurement origin, due to the presence of randomly detected target measurements as well as clutter measurements from unwanted random scatterers. A majority of Bayesian multi-target tracking algorithms suffer from computational complexity that is exponential in the number of tracks and the number of shared measurements. The Linear Multi-target (LM) tracking procedure is a Bayesian multi-target tracking approximation with complexity that is linear in the number of tracks and the number of shared measurements. It also has a much simpler structure than "optimal" Bayesian multi-target tracking, with apparently negligible decrease in performance. A vast majority of target tracking algorithms have been developed under the assumption of infinite sensor resolution, where a measurement can have only one source. This assumption is not valid for real sensors, such as radars. This paper presents a multi-target tracking algorithm which removes this restriction. The procedure utilizes the simple structure of the LM tracking procedure to obtain an LM Finite Resolution (LMfr) tracking procedure which is much simpler than previously published efforts. Instead of calculating the probability of measurement merging for each combination of potentially merging targets, we evaluate only one merging hypothesis for each measurement and each track. A simulation study is presented which compares LMfr-IPDA with LM-IPDA and IPDA target tracking in a cluttered environment utilizing a finite resolution sensor with five crossing targets. The study concentrates on the false track discrimination performance and the track retention capabilities.
Factors affecting the performance of an algorithm for tracking multiple targets observed using a pixelized sensor are studied. A pixelized sensor divides the surveillance region into a grid of cells with targets generating returns on the grid according to some known probabilistic model. In previous work an efficient particle filtering algorithm was developed for multiple target tracking using such a sensor. This algorithm is the focus of the study. The performance of the algorithm is affected by several considerations. The pixelized sensor model can be used with either thresholded or non-thresholded measurements. While it is known that information is lost when measurements are thresholded, quantitative results have not been established. The development of a tractable algorithm requires that closely-spaced targets are processed jointly while targets which are far apart are processed separately. Selection of the clustering distance involves a trade-off between performance and computational expense. A final issue concerns the computation of the proposal density used in the particle filter. Variations in a certain parameter enable a trade-off between performance and computational expense. The various issues are studied using a mixture of theoretical results and Monte Carlo simulations.
In an effort to improve the probability of correctly associating tracks and observations, features, which are physical properties of the target other than kinematics, are included in the association process. Unlike their kinematic counterparts, the probability distributions of the features are typically not known for all of the objects involved. This precludes the use of the parametric hypothesis tests typically applied to the kinematic data to perform association. One possible solution is to assume a probability distribution for each of the features and use it in the conventional parametric association test. The risk is that the wrong probability distributions will be assumed and the association error probability will increase. An alternative approach is to use the feature data in a non-parametric test, a type of test that requires little or no knowledge of the probability distribution of the data. The result of the non-parametric test of the feature data is then combined with the result of the conventional parametric test of the kinematic data. As the title suggests, this paper compares the performance of these two approaches for several sets of conditions. First, since the parametric test of the kinematic data assumes the data to be Gaussian distributed, the features are drawn initially from Gaussian populations. These Gaussian distributed features are used to test both approaches, and their performance curves are compared and analyzed. This process is then repeated for three non-Gaussian feature distributions. Two of these distributions belong to the same exponential family as the Gaussian distribution; however, both have heavier tails and one is one-sided. The third feature distribution has finite support, and the experiment is designed so that perfect performance is possible. Finally, recognizing that the association error probability is not zero, the foregoing evaluations are repeated with misassociations present in the data.
This contribution addresses the problem of tracking multiple moving objects simultaneously over time, given measurements with false alarms and missing detections. Such a task becomes particularly intricate if the initial target number and target states are unknown and if the individual targets are not separable. We employ a sequential Bayesian framework based on the Finite Set Statistics approach in order to estimate the number of targets and the target states simultaneously. The iterative filtering equations are solved numerically using particle filter techniques. We describe a method for sequential track extraction of multiple targets without relying on external information and present results for small groups of ground moving targets.
In this paper, some recent results on the modified Riccati equation are studied. This modified Riccati equation has already been associated with tracking a target under measurement uncertainty. We consider the special case of tracking a target without clutter, but with a probability of detection less than one. This special case has received considerable attention recently, especially in relation to Cramér-Rao bounds, or equivalently expected performance. Furthermore, in some other recent works, new theoretical results on the modified Riccati equation have been derived. We compare these results and point out their importance for performance assessment and prediction in a target tracking context.
Data association is one of the main components of target tracking. While, in its simplest form, data association links a list of tracks to a list of measurements or links two lists of measurements (2-D association), the more complex problem involves assignment across multiple such lists (S-D association, where S ≥ 3). In target tracking, the presence of false detections (false alarms) and the absence of detections from some targets (missed detections) further complicate the data association problem. In this work, we explore the possibility of applying track ordering in priority queues to solve the association problem more efficiently. The basic component of our algorithm is to form priority queues by permutations of the tracks. Each queue is served on a first-come-first-served basis, i.e., each track is assigned to the best measurement available at its turn in the queue. It can be shown that the best solution to the 2-D problem can be obtained from one of these queues. However, this is computationally expensive even for a moderate number of targets. In this paper we show that, due to redundancy, only a small fraction of the total number of permutations needs to be evaluated to obtain the best solution.
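A brute-force sketch of the priority-queue idea (illustrative only; the paper's contribution is precisely avoiding the full enumeration shown here):

```python
import itertools
import numpy as np

def best_greedy_assignment(cost):
    """For each track ordering (priority queue), serve tracks
    first-come-first-served, giving each its cheapest remaining
    measurement; keep the ordering with the lowest total cost."""
    n_tracks, n_meas = cost.shape
    best_cost, best_assign = np.inf, None
    for order in itertools.permutations(range(n_tracks)):
        taken, total, assign = set(), 0.0, {}
        for t in order:
            m = min((j for j in range(n_meas) if j not in taken),
                    key=lambda j: cost[t, j])
            taken.add(m)
            total += cost[t, m]
            assign[t] = m
        if total < best_cost:
            best_cost, best_assign = total, assign
    return best_cost, best_assign

cost = np.array([[4.0, 1.0, 3.0],
                 [2.0, 0.0, 5.0],
                 [3.0, 2.0, 2.0]])
best_cost, assign = best_greedy_assignment(cost)
```

For this 3x3 example the best queue matches the brute-force optimal assignment, consistent with the abstract's claim that the optimum lies among the greedy solutions.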
It can be shown that the sufficient statistics maintained by a multiple-hypothesis algorithm for a multiple-target tracking problem can be expressed as a set of a posteriori Janossy measures. This fact suggests a new class of multiple target tracking algorithms that do not generate and evaluate data-to-data association hypotheses. Under a certain set of assumptions, including target-wise independent detection and measurement errors, a Janossy measure representation can be considered the symmetrization, or scrambling, of the a posteriori joint target state distribution under a specific data association hypothesis. This paper explores the possibility of using Janossy measures directly to represent a posteriori joint target state distributions, instead of the product expression of those distributions. By doing so, it becomes possible to combine any two data association hypotheses that assume the same number of detected targets, thereby providing an effective method for combining data association hypotheses. When using Janossy measures as the method of representation, we are not forced to maintain the product structure, and hence, at least theoretically, we can combine two data association hypotheses without any approximation. Since we do not require track-wise independence under any hypothesis, this method allows us to treat split or merged measurements in a unified way.
It is common practice to represent a target group (or an extended target) as a set of point sources and attempt to formulate a tracking filter by constructing possible assignments between measurements and the sources. We suggest an alternative approach that produces a measurement model (likelihood) in terms of the spatial density of measurements over the sensor observation region. In particular, the measurements are modelled as a Poisson process with a spatially dependent rate parameter. This representation allows us to model extended targets as an intensity distribution rather than a set of points and, for a target formation, it gives the option of modelling part of the group as a spatial distribution of target density. Furthermore, as a direct consequence of the Poisson model, the measurement likelihood may be evaluated without constructing explicit association hypotheses. This considerably simplifies the filter and gives a substantial computational saving in a particle filter implementation. The Poisson target-measurement model will be described and its relationship to other filters will be discussed. Illustrative simulation examples will be presented.
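A minimal 1-D sketch of the Poisson measurement model, assuming Gaussian target intensities and a uniform clutter rate (all values illustrative): the likelihood of a measurement set is evaluated directly from the spatial intensity, with no association hypotheses.

```python
# Poisson point-process likelihood of a measurement set in 1-D.
import math

def gaussian(z, mu, sigma):
    return math.exp(-0.5 * ((z - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def log_likelihood(measurements, target_states, clutter_rate, mean_dets, sigma,
                   region=(-10.0, 10.0)):
    """log p(measurements) for intensity = clutter + sum of target intensities."""
    # Expected number of points = clutter over the region + detections per target.
    expected = clutter_rate * (region[1] - region[0]) + mean_dets * len(target_states)
    logp = -expected
    for z in measurements:
        rate = clutter_rate + mean_dets * sum(gaussian(z, x, sigma) for x in target_states)
        logp += math.log(rate)
    return logp

# The likelihood is higher when the hypothesised states explain the measurements.
meas = [0.1, 2.9, 5.0]
print(log_likelihood(meas, [0.0, 3.0], clutter_rate=0.05, mean_dets=1.0, sigma=0.5))
print(log_likelihood(meas, [7.0, -7.0], clutter_rate=0.05, mean_dets=1.0, sigma=0.5))
```

Note that no measurement-to-target assignment is ever enumerated; each measurement simply contributes the log of the total intensity at its location.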
In this paper we examine new data structures and algorithms for efficient and accurate gating and identification of potential track/observation associations. Specifically, we focus on the problem of continuous timed data, where observations arrive over a range of time and each observation may have a unique time stamp. For example, the data may be a continuous stream of observations or consist of many small observed subregions. This contrasts with previous work in accelerating this task, which largely assumes that observations can be treated as arriving in batches at discrete time steps. We show that it is possible to adapt established techniques to this modified task and introduce a novel data structure for tractably dealing with very large sets of tracks. Empirically we show that these data structures provide a significant benefit in both decreased computational cost and increased accuracy when contrasted with treating the observations as if they occurred at discrete time steps.
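The per-observation treatment of time can be illustrated with a simple gate: each track is propagated to the exact time stamp of the observation before the chi-square test, instead of to a common batch time. The constant-velocity model and the numbers are illustrative, not the paper's data structures.

```python
# Chi-square gating of a time-stamped observation against a 1-D track.

def gate(track_pos, track_vel, track_var, obs_pos, obs_time, track_time,
         meas_var, threshold=9.0):
    """Propagate the track to the observation's own time stamp, then gate."""
    dt = obs_time - track_time
    pred = track_pos + track_vel * dt   # constant-velocity prediction
    var = track_var + meas_var          # (process noise omitted for brevity)
    d2 = (obs_pos - pred) ** 2 / var    # squared Mahalanobis distance
    return d2 <= threshold

# A track at x = 0 moving at 1 unit/s, last updated at t = 0.
print(gate(0.0, 1.0, 0.25, obs_pos=2.1, obs_time=2.0, track_time=0.0, meas_var=0.25))
print(gate(0.0, 1.0, 0.25, obs_pos=9.0, obs_time=2.0, track_time=0.0, meas_var=0.25))
```

The paper's contribution is in data structures that make this test cheap over very large track sets; the gate itself is standard.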
Many factors make the ground target tracking problem decidedly nonlinear and non-Gaussian. Because these factors can lead to a multimodal posterior density, a Bayesian filtering solution is appropriate. In the last decade, the particle filter has emerged as a Bayesian inference technique that is both powerful and simple to implement. In this work, we demonstrate the necessity of using multiple-target particle filters when two or more tracks are linked through measurement contention. We also develop an efficient way to implement these filters by adaptively managing the type of particle filters, the number of particles, and the enumeration of hypotheses during data association. Using simulated data, we compare the run-time of our adaptive particle filter algorithm to the run-times of two baseline particle filters, to demonstrate that our design mitigates the increase in computation required when performing joint multitarget tracking.
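As a point of reference, a minimal bootstrap particle filter for one 1-D target — the building block that a multiple-target filter would manage adaptively. The models and noise levels here are illustrative, not those of the paper.

```python
# Bootstrap particle filter: predict, weight, resample.
import random
import math

random.seed(0)

def step(particles, z, q=0.5, r=0.5):
    """One predict/update/resample cycle with a random-walk motion model."""
    # Predict: diffuse particles with process noise q.
    particles = [x + random.gauss(0.0, q) for x in particles]
    # Update: weight by the Gaussian measurement likelihood with std r.
    w = [math.exp(-0.5 * ((z - x) / r) ** 2) for x in particles]
    s = sum(w)
    w = [wi / s for wi in w]
    # Resample (multinomial) back to equal weights.
    return random.choices(particles, weights=w, k=len(particles))

particles = [random.uniform(-5.0, 5.0) for _ in range(500)]
for z in [0.2, 0.1, -0.1, 0.0]:      # measurements of a target near x = 0
    particles = step(particles, z)
estimate = sum(particles) / len(particles)
print(round(estimate, 2))
```

The adaptive scheme in the paper decides at run time how many such particles each filter gets and when filters must be coupled.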
The Multiple Hypotheses Tracking (MHT) algorithm has been shown to have the best tracking performance among existing multi-target tracking algorithms using real world sensors with probability of detection less than unity and in the presence of false alarms. The improved performance of the Multiple Hypotheses Tracking comes at the cost of significantly higher computational complexity. Most Multiple Hypotheses Tracking implementations only form the best global hypothesis. This paper compares the Linear Multitarget Integrated Track Splitting (LMITS) tracking algorithm with the Multiple Hypotheses Tracking algorithm. LMITS has a simpler structure than Multiple Hypotheses Tracking as it decouples local hypotheses and avoids the measurement to multi-track allocation entirely. The number of LMITS hypotheses equals the sum of the number of local hypotheses added to the number of initiation hypotheses. Thus LMITS can retain a deeper hypothesis subtree, which can result in better performance. We compare tracking performances of LMITS and MHT algorithms using simulated data for multiple maneuvering targets in heavy and non-uniform clutter.
Resource Management and Multiple Sensor Processing
We present a market-like method for distributed communications resource management in the context of networked tracking and surveillance systems. This method divides communication resources according to the expected utility provided by information of particular types. By formulating the problem as an optimization of the joint utility of information flow rates, the dual of the problem can be understood to provide a price for particular routes. Distributed rate control can be accomplished using primal-dual iteration in combination with communication of these route prices. We extend the previous work on the subject in a few important ways. First, we consider utility functions that are jointly-dependent on flow rates, to properly account for geometric synergy that can occur in sensor fusion problems. Second, we do not require that the rate-update algorithms have explicit knowledge of utility functions. Instead, our update algorithms involve transmitting marginal utility values. We present simulation results to demonstrate the effectiveness of the technique.
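The primal-dual iteration can be sketched for a single shared link. Separable log utilities are used here for brevity, whereas the paper treats jointly-dependent utilities; all values are illustrative.

```python
# Market-style rate control: flows pick rates from the route price,
# and the price (dual variable) ascends toward the capacity constraint.
weights = [1.0, 3.0]     # marginal-utility weights of two information flows
capacity = 4.0           # shared communication capacity
price = 0.5              # initial route price
for _ in range(200):
    rates = [w / price for w in weights]   # primal: maximise w*log(x) - price*x
    price = max(1e-6, price + 0.05 * (sum(rates) - capacity))  # dual ascent
print([round(x, 2) for x in rates])
```

At equilibrium the price settles where demand meets capacity, so each flow's rate is proportional to its marginal-utility weight; in the paper, the same pricing signal is computed distributedly by exchanging marginal utility values rather than utility functions.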
Sonar tracking using measurements from multistatic sensors has shown promise: there are benefits in terms of robustness, complementarity (covariance-ellipse intersection) and of course simply due to the increased probability of detection that naturally accrues from a well-designed data fusion system. However, it is not always clear what placement of the sources and receivers gives the best fused measurement covariance for any target, or at least for any target that is of interest. In this paper, we investigate the problem as one of global optimization, in which the objective is to maximize the information provided to the tracker. We assume that the number of sensors is known, so that the optimization is done in a continuous space. We consider different scenarios and numbers of sensors. The strong variability of target strength as a function of aspect is integral to the cost function we optimize. Numerical results are given, suggesting that certain sensor geometries should be preferred. We also offer a number of intuitive suggestions for sensor layout that do not involve optimization.
In this paper, we consider the general problem of dynamic assignment of sensors to local fusion centers (LFCs) in a distributed tracking framework. As a result of recent technological advances, a large number of sensors can be deployed and used for tracking purposes. However, only a certain number of sensors can be used by each local fusion center due to physical limitations. In addition, the number of available frequency channels is also limited. We can expect that the transmission power of future sensors will be software controllable within certain lower and upper limits, improving frequency reusability and sensor reachability. The problem, then, is to select the sensor subsets that should be used by each LFC and to find their transmission frequencies and powers, in order to maximize the tracking accuracies as well as to minimize the total power consumption. This is an NP-hard multi-objective mixed-integer optimization problem. In the literature, sensors are clustered based on target or geographic location, and then sensor subsets are selected from those clusters. However, if the total number of LFCs is fixed and the total number of targets varies, or a sensor can detect multiple targets, target based clustering is not desirable. Similarly, if targets occupy a small part of the surveillance region, location based clustering is also not optimal. In addition, the frequency channel limitation and the advantage of variable transmitting power are not well covered in the literature. In this paper, we give the mathematical formulation of the above problem. Then, we present an algorithm to find a near optimal solution to the above problem in real time. Simulation results illustrating the performance of the sensor array manager are also presented.
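A greedy heuristic in the spirit of such near-optimal selection (not the paper's algorithm): each LFC takes its highest-utility reachable sensors up to a channel limit, with each sensor used at most once. The utilities and reachability sets below are made up, and frequency and power assignment are omitted.

```python
# Greedy sensor-to-LFC selection under per-LFC channel limits.
def assign(utility, reachable, per_lfc_limit):
    """utility[(lfc, sensor)] -> tracking-accuracy gain; each sensor used once."""
    pairs = sorted(utility, key=utility.get, reverse=True)
    used_sensors, load, plan = set(), {}, {}
    for lfc, s in pairs:
        if s in used_sensors or s not in reachable[lfc]:
            continue
        if load.get(lfc, 0) >= per_lfc_limit:
            continue
        plan.setdefault(lfc, []).append(s)
        used_sensors.add(s)
        load[lfc] = load.get(lfc, 0) + 1
    return plan

utility = {("A", 1): 5.0, ("A", 2): 4.0, ("B", 2): 3.0, ("B", 3): 2.0, ("A", 3): 1.0}
reachable = {"A": {1, 2, 3}, "B": {2, 3}}
print(assign(utility, reachable, per_lfc_limit=2))
```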
The rapid growth and increasing sophistication of airborne surveillance technology have spurred intense research efforts in the development and implementation of tracking algorithms capable of processing a large number of targets using multi-sensor data. In this paper, a novel tracking algorithm, the NOGA tracker, is presented and compared with the more conventional Time-Weighted Backvalues Least Squares (TWBLS) estimator for accuracy (numerical and phenomenological), ease of implementation, and time performance. The NOGA tracker combines model predictions and sensor measurements to produce best estimates for quantities of interest. The state estimator model used for NOGA is a simple second order auto-regression which is combined with an uncertainty reduction scheme involving nonlinear Lagrange optimization process in which the inverse of a global covariance matrix is used as the natural metric for the Bayesian inference that underlies the combining process. The NOGA tracker explicitly incorporates sensor and model uncertainties in the estimation process, and uses model sensitivities to propagate the associated covariance matrices accurately and in a systematic way.
One technique that has been applied to the tracking of maneuvering targets is the neural extended Kalman filter (NEKF). The technique adapts the motion model used by the Kalman filter tracker. This adaptation of the model is performed by a neural network that trains on-line using the same residuals as the track states. The behavior of this technique with multiple sensors providing different measurement types and different update rates has not previously been discussed; previous works have always employed a single sensor, usually providing a position measurement or a range-bearing measurement. In actual applications, multiple sensors are typically employed. These sensors often provide measurements at different rates or with different accuracies. Such issues can have a detrimental effect on the performance of the neural network. We analyze the application of multiple-sensor data with varying update rates and measurement accuracies to an NEKF estimation system. The analysis is based upon the case of two non-collocated sensors providing range-bearing measurements at varying rates, applied to the tracking of an actual aircraft flight trajectory.
In this paper, multisensor-multitarget tracking performance with bias estimation and compensation is investigated when only moving targets of opportunity are available. First, we discuss the tracking performance improvement with bias estimation and compensation for synchronous biased sensors, and then a novel bias estimation method is proposed for asynchronous sensors with time-varying biases. The performance analysis and simulations show that asynchronous sensors have a slightly degraded performance compared to the "equivalent" synchronous ones. The bias estimates, as well as the corresponding Cramer-Rao Lower Bound (CRLB) on the covariance of the bias estimates, i.e., the quantification of the available information on the sensor biases in any scenario, are also given. Tracking performance evaluations with different sources of biases (offset biases, scale biases, and sensor location uncertainties) are also presented, and we show that tracking performance is significantly improved with bias estimation and compensation compared with tracking using the original biased measurements. The performance is also close to the lower bound obtained in the absence of biases.
In this paper, the problem of estimating sensor biases (e.g., range and bearing biases) from measurements of targets with deterministic dynamics but uncertain initial conditions is considered. The known dynamics are exploited by a single sensor to self-calibrate, i.e., to determine unknown sensor biases. The concept of bias state tracklet fusion from tracks of multiple trajectories is discussed. The effectiveness of this concept is demonstrated, and the performance sensitivity to geometry variations and the number of available targets is examined. The bias state tracklet estimator is also compared with a nonlinear least squares (NLS) estimator.
Sensor allocation and threat analysis are difficult fusion problems that can sometimes be approximately solved using simulations of the future movement of adversary units. In addition to requiring detailed motion models, such simulation also requires large amounts of computational resources, since a large number of possibilities must be examined. In this paper, we extend our previously introduced framework for doing such simulations more efficiently. The framework is based on defining equivalence classes of future paths of a set of units. In the simplest case, two paths are considered equivalent if they give rise to the same set of observations. For sensor management, each considered sensor plan thus entails an equivalence relation on the set of future paths. This can be used to significantly reduce the number of "alternative futures" that need to be considered for the simulation. For threat analysis, the equivalence relation can instead be based on the perceived threat against own units. We describe how the equivalence classes induced by such relations could be used to improve the visualization of threat analysis systems. User interaction can also be used to refine the equivalence classes; we argue that such interaction will be essential for international operations, where it is difficult to define actors and targets.
In this paper, we assess the capability of underwater hydrophone (UH) arrays to locate and track manoeuvring targets. A UH array is a horizontal line array of omnidirectional pressure sensors that is deployed on the seabed. The measurements at each UH array are then affected by two idiosyncrasies, termed "source direction ambiguity" and "coning error". In this paper, the posterior Cramer-Rao lower bound (PCRLB) is used as the measure of system performance, providing a bound on the optimal achievable accuracy of target state estimation. We demonstrate the impact of the measurement idiosyncrasies on the PCRLB, with the bound shown to be greater (poorer performance) than when using standard bearings-only sensors. We also include clutter (i.e. we allow each measurement to be either target generated or a false positive), as well as both state-dependent measurement errors and a state-dependent probability of detection. Building on previous work, we show that the measurement origin uncertainty can again be expressed as an information reduction factor (IRF), with this IRF now shown to be a function of both the target range and orientation in relation to each UH array. We consider simulated scenarios that contain features characteristic of recent sea trials conducted by QinetiQ Ltd. The two key features of the trial scenarios are that we have very sparse prior knowledge, and each target has the potential to perform a series of manoeuvres. We use a recent PCRLB formulation for tracking manoeuvring targets that approximates the potentially multi-modal target distribution using a best-fitting Gaussian distribution. We present simulation results for multi-sensor scenarios, demonstrating that this is indeed a difficult tracking problem. Tracking is particularly difficult when the target crosses the line of the UH arrays, making triangulation difficult, and when the target is in the "end-fire" of at least one UH array. It is also difficult to detect and triangulate distant targets. Future work will investigate the tightness of the PCRLB when compared with the performance of state-of-the-art tracking algorithms.
This paper formulates a benchmark data association problem in a missile defense surveillance problem. The specific problem considered deals with a set of sources that provide "event" (track) estimates via a number of communication networks to a Fusion Center (FC), which has to perform data association prior to fusion. A particular feature of the network model is that the information to distinguish among reports from the same source transmitted through different networks is not available at the FC: the track identity (ID) assigned by the source is not passed on; only a track ID assigned by the network and the source ID accompany the track. This makes it necessary to detect and eliminate track duplications at the FC among the messages with the same source ID but different network ID. The resulting data, organized into sensor lists, is associated using a likelihood based cost function with one of the several existing multidimensional assignment (MDA) methods. A comparison of the following two association criteria is carried out: Mahalanobis distance ("chi-square") and likelihood ratio (LR). It is shown that the LR yields significantly superior results. The tracks obtained after association are fused using a Maximum Likelihood approach. An additional complication is that false reports can also be transmitted by the sources. Examples with several launches, sources and networks are presented to illustrate the proposed solution and compare the performances of two assignment algorithms - the Lagrangian relaxation based S-D and the sequential m-best 2-D - on this realistic problem.
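The difference between the two criteria can be seen on a scalar example with hypothetical values: a track with an inflated innovation covariance wins under the pure Mahalanobis distance, while the likelihood-ratio cost, through its ln|S| term, does not reward the inflated covariance.

```python
# Mahalanobis ("chi-square") cost vs likelihood-ratio (LR) cost, scalar case.
import math

def chi2(z, x, s):
    """Squared Mahalanobis distance of report z from estimate x, variance s."""
    return (z - x) ** 2 / s

def neg_log_lik(z, x, s):
    """-2 ln N(z; x, s): the LR-style cost, which adds the ln|S| penalty."""
    return (z - x) ** 2 / s + math.log(2 * math.pi * s)

tight = (0.0, 1.0)      # (estimate, innovation variance)
vague = (0.5, 100.0)    # track with a badly inflated covariance
z = 1.8
print(min([tight, vague], key=lambda t: chi2(z, *t)))         # chi-square picks the vague track
print(min([tight, vague], key=lambda t: neg_log_lik(z, *t)))  # LR picks the tight track
```

This toy case is consistent with the paper's finding that the LR criterion yields superior association results.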
This work presents a new network centric architecture for multiple frame assignment (MFA) tracking. The architecture improves on earlier network tracking schemes by allowing trackers to broadcast decisions about their local soft-level associations, via Soft Associated Measurement Reports (SAMRs). A SAMR may be followed by an "Oops" message if the soft association was incorrect and must be revoked. We show, however, that such revocations are very rare in most scenarios. This paper discusses the implementation of the new algorithm and presents simulation results. Considerable improvements in the consistency of the air picture are demonstrated, owing to the reduced latency in transmission of measurement-to-track associations. The earlier network architectures, namely, the Centralized MFA, the Replicated Centralized MFA, and the Network MFA on Local and All Data, are also discussed in this work, as they form the foundation for the "Oops" algorithm.
Surveillance systems typically perform target identification by fusing target ID declarations supplied by individual sensors with a prior knowledge-base. Target ID declarations are usually uncertain in the sense that: (1) their associated confidence factor is less than unity; (2) they are non-specific (the true hypothesis belongs to a subset A of the universe Θ). Prior knowledge is typically represented by a set of possibly uncertain implication rules. An example of such a rule is: if the target is a Boeing 737, then it is neutral or friendly with probability 0.8. The uncertainty again manifests itself here in two ways: the rule holds only with a certain probability (typically less than 1.0) and the rule is non-specific (neutral or friendly). The paper describes how the fusion of ID declarations and the implication rules can be handled elegantly within the framework of belief function theory as understood by the transferable belief model (TBM). Two illustrative examples are worked out in detail in order to clarify the theory.
An obvious use for feature and attribute data is in classification and combat identification. The term classification is used broadly here to include discrimination, detection, target typing, identification, and pattern recognition. An additional use is in the data (or track) association process to reduce misassociations, often called feature aided tracking. Previous papers discussed the integration of features and attributes into target track processing in addition to their use in multiple target classification. The distinction is made between feature and attribute data because they are processed differently. The term features applies to random variables from a continuous sample space, and attributes applies to random variable data from a discrete sample space.
The primary concern of this paper is to address processing of attributes with data from legacy sensors. For example, in fusion of sensor data from distributed sensors, a sensor might distribute the likelihood of the most likely target class (or attribute property) and no information on the other possibilities. In another example, a sensor might distribute the likelihood that the target is any one of a number of target classes, a subset of all the target classes, rather than indicating one specific target class. Both attribute aided tracking and (post tracker) target classification are addressed for these examples of incomplete data from legacy sensors. The purpose of this paper is to show the feasibility of simple approaches to dealing with data from some legacy sensors.
The problem of track-to-track association - a prerequisite for fusion of tracks - has been considered in the literature only for tracks described by kinematic states. The association of tracks from a common target can also be solved using additional feature or attribute variables which are associated with those tracks. We extend the existing results to the situation where track association is done using feature variables, which are continuous valued, as well as target classification information or attributes, which are discrete valued. The sufficient statistic for the optimal association test (in the Neyman-Pearson sense) based on discrete-valued target classification information observables (attributes) is derived and its relationship with the class probability vector is discussed. Based on this, "attribute gates" are presented, which play a similar role to the kinematic gates in track-to-track association.
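A hedged sketch of an attribute association statistic of this flavour, assuming each track carries a class probability vector and a common class prior; the exact Neyman-Pearson statistic derived in the paper may differ, and the numbers are made up.

```python
# Likelihood ratio of "same target" vs "different targets" from the
# class probability vectors of two tracks, under a common class prior.
def attribute_lr(p1, p2, prior):
    """LR that two tracks belong to a common target, from attributes only."""
    return sum(a * b / pi for a, b, pi in zip(p1, p2, prior))

prior = [0.25, 0.25, 0.5]
agree = attribute_lr([0.8, 0.1, 0.1], [0.7, 0.2, 0.1], prior)   # consistent classes
clash = attribute_lr([0.8, 0.1, 0.1], [0.05, 0.05, 0.9], prior)  # conflicting classes
print(round(agree, 2), round(clash, 2))
# An "attribute gate" keeps the pair only if the LR exceeds a threshold.
```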
The proper fusion processing of track data becomes complicated if cross-correlation exists between the estimation errors of a local track and a fusion track for the same target. If so, then designing a filter that combines track data should take that cross-correlation into account. A number of suitable tracklet methods exist. This paper compares trackers using a tracklets-from-tracks approach to trackers using a tracklets-from-measurements approach, and each of these approaches is compared to three other global trackers, by root-mean-square error and normalized Mahalanobis distance metrics. Simulation results are presented showing that fusion trackers using non-synchronized tracklets appear to perform better than fusion trackers using synchronized tracklets.
In 1995, Streit and Luginbuhl introduced a new tracking algorithm, the Probabilistic Multi-Hypothesis Tracker (PMHT), which offered a balance between the single-frame approach of the Probabilistic Data Association Filter (PDAF) and the multiple frame approach of the Multiple Hypothesis Tracker (MHT). With single-frame tracking algorithms, only information that has been received to date is used to determine the association between tracks and measurements. These decisions are made based on available data and are not changed even when future data may indicate that a decision was incorrect. On the other hand, in multi-frame algorithms, hard decisions are delayed until some time in the future, thus allowing the possibility that incorrect association decisions may be corrected with more data. This paper presents the initial results of new research using the PMHT algorithm as a composite tracker on distributed platforms. In addition, the methods necessary to implement the PMHT in a realistic simulation are discussed. It further describes the techniques that have been tried to ensure a single integrated air picture (SIAP) across the platforms. The PMHT uses both past and present data without enumerating most of the possible measurement-to-track assignments. Instead, the PMHT uses probabilistic weightings via Gaussian mixtures to define the relationship between measurements and tracks.
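The probabilistic weighting step can be sketched for scalar measurements: each measurement receives posterior Gaussian-mixture weights over the tracks rather than a hard assignment. The values below are illustrative.

```python
# PMHT-style soft association: posterior mixture weights of one measurement.
import math

def pmht_weights(z, track_means, track_vars, priors):
    """Posterior probability that measurement z belongs to each track."""
    lik = [pi * math.exp(-0.5 * (z - m) ** 2 / v) / math.sqrt(2 * math.pi * v)
           for m, v, pi in zip(track_means, track_vars, priors)]
    s = sum(lik)
    return [l / s for l in lik]

w = pmht_weights(z=1.0, track_means=[0.0, 3.0], track_vars=[1.0, 1.0],
                 priors=[0.5, 0.5])
print([round(x, 3) for x in w])
```

No assignment hypothesis is ever enumerated; the weights feed directly into the state update of each track.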
The theory and practice of single-sensor, single-target tracking is well understood when measurement uncertainties are due entirely to randomness. In this case the Bayes filter and its special cases and approximations, such as the Kalman filter, constitute the foundations of tracking. However, the measurement uncertainties in many observation types (features, natural-language reports, etc.) can arise from ignorance as well as randomness. Approaches such as the Dempster-Shafer (DS) theory claim to address such information but continue to be controversial, especially in tracking. In other papers this year we have shown that this can be attributed to a lack of formal physical modeling techniques. If familiar measurement models are extended in a natural way, measurement fusion using DS combination can be subsumed within the Bayesian theory. In this paper we apply these results to introduce the "evidential filter," a special case of the Bayes filter applicable whenever measurement uncertainty can be modeled in DS form. We derive closed-form formulas for the evidential filter and show that data-update using Dempster's combination is a special case of Bayes' rule. We also briefly show how to incorporate the evidential filter into multitarget tracking techniques.
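Dempster's rule of combination, which the abstract relates to Bayes' rule, can be sketched for discrete mass functions as follows; the two-element frame and the mass values are hypothetical, chosen only to illustrate the conflict renormalization.

```python
def dempster_combine(m1, m2):
    """Combine two Dempster-Shafer mass functions, given as dicts
    mapping frozenset focal elements to masses, via Dempster's rule:
    multiply masses of intersecting focal elements, then renormalize
    by one minus the mass assigned to the empty set (the conflict)."""
    combined = {}
    conflict = 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb        # product lands on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    k = 1.0 / (1.0 - conflict)             # renormalization factor
    return {s: k * w for s, w in combined.items()}

# Toy example on the frame {T, F} (hypothetical masses):
m1 = {frozenset({"T"}): 0.6, frozenset({"T", "F"}): 0.4}
m2 = {frozenset({"T"}): 0.5, frozenset({"F"}): 0.3, frozenset({"T", "F"}): 0.2}
fused = dempster_combine(m1, m2)
```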
Dimensional interpolation has been used successfully by physicists and chemists to solve the Schroedinger equation for atoms and complex molecules. The same basic idea can be used to solve the Fokker-Planck equation for nonlinear filters. In particular, it is well known (by physicists) that two Schroedinger equations are equivalent to two Fokker-Planck equations. Moreover, we can avoid the Schroedinger equation altogether and use dimensional interpolation directly on the Fokker-Planck equation. Dimensional interpolation sounds like a crazy idea, but it works. We will attempt to make this paper accessible to normal engineers who do not have quantum mechanics for breakfast.
The performance advantage of multiple frame data association methods over single frame methods follows from the ability to hold difficult decisions in abeyance until more information is available, and from the opportunity to change past decisions to improve current decisions. In dense tracking environments the performance improvements of multiple frame methods over single frame methods are very significant, making them the preferred solution for difficult tracking problems. The price one pays for this performance gain is the computational complexity (NP-hard) of the resulting data association problem. The number of strings of data (arcs) that one can form from N frames of data, each containing M reports, is M^N if missed reports are omitted. This number grows exponentially with the length of the window (i.e., N). Thus, much preprocessing is required to manage memory usage and to achieve good runtime performance. The control of the computations makes use of a variety of techniques including a) bin gating, b) coarse pair and triple point dynamic gating, c) multi-frame gating, d) medium gating based on filter prediction gates, e) fine gating based on likelihood ratios, f) track hypothesis pruning, g) problem partitioning, and h) cluster tracking. While this work comments on many of these methods, the goal is to derive the bin, coarse pair, and multi-frame gating methods in as simple a setting as possible.
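The exponential growth and the role of gating can be illustrated with a toy sketch; the scalar chi-square gate and its threshold below are illustrative, not the paper's actual gating functions.

```python
def hypothesis_count(m_reports, n_frames):
    """Number of length-N measurement strings when each of N frames
    contains M reports, ignoring missed reports: M**N, which is
    exponential in the window length N."""
    return m_reports ** n_frames

def in_gate(innovation, s_var, threshold=9.21):
    """Scalar chi-square gate: keep a candidate pairing only if the
    normalized innovation squared falls below the threshold (9.21 is
    the 99% point for 2 degrees of freedom; purely illustrative)."""
    return innovation ** 2 / s_var <= threshold

# M = 10 reports per frame: the string count explodes as N grows,
# which is why aggressive gating must prune candidates early.
counts = [hypothesis_count(10, n) for n in (1, 2, 3, 4)]
```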
It is widely accepted that feature data will be necessary to aid data association for ground target tracking systems that are required to maintain continuous tracks on important targets. It is also recognized that target behavior as determined by a tracking system can aid target type identification. Thus, noting that the tracking and ID functions are complementary, a joint tracking and identification architecture has been developed. These methods are being tested using simulated dynamic ground targets and radar High Range Resolution (HRR) data provided by the Moving and Stationary Target Acquisition and Recognition (MSTAR) project. The paper begins by giving an overview of the IMM/MHT tracker that has been designed to handle the unique characteristics (such as on-off road behavior) of the ground target tracking problem. Then, a joint tracking and identification methodology is described. Implementing this approach, target behavior (such as being part of a group, speed, and on/off road motion) can be used both in the data association and for target type information. A Dempster-Shafer method is used for combining all classification-related data. The track score, incorporated in all MHT data association decisions, is augmented with a feature-related term derived from the conflict term computed from an application of Dempster's Rule. Finally, the paper illustrates the proposed methods using results from a detailed simulation of target convoys that perform on and off road maneuvers. Results using MSTAR HRR data are presented for both Signature-Aided Tracking (SAT) and Classification-Aided Tracking (CAT) approaches to feature-aided tracking.
This paper discusses several theoretical issues related to the score function for the measurement-to-track association/assignment decision in the track oriented version of the Multiple Hypothesis Tracker (MHT). This score function is the likelihood ratio: the ratio of the pdf of a measurement having originated from a track, to the pdf of this measurement having a different origin. The likelihood ratio score is derived rigorously starting from the fully Bayesian (hypothesis oriented) MHT, which is shown to be amenable under some (reasonable) assumptions to the track oriented MHT. The latter can be implemented efficiently using multidimensional assignment. The main feature of a likelihood ratio is the fact that it is a (physically) dimensionless quantity and, consequently, can be used for the association of different numbers of measurements and/or measurements of different dimension. The explicit forms of the likelihood ratio are discussed both for the commonly used Kalman tracking filter, as well as for the Interacting Multiple Model estimator. The issues of measurements of different dimension and different coordinate systems are also discussed.
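A minimal sketch of the dimensionless likelihood-ratio score for a Kalman-type filter, assuming a Gaussian innovation pdf in the numerator and a uniform spatial clutter density in the denominator; all numerical values are hypothetical.

```python
import numpy as np

def lr_score(z, z_pred, S, clutter_density):
    """Dimensionless likelihood-ratio score: the Gaussian pdf of the
    innovation (measurement originating from the track) divided by a
    uniform spatial clutter density (extraneous origin). S is the
    innovation covariance from the tracking filter."""
    nu = np.asarray(z, float) - np.asarray(z_pred, float)   # innovation
    S = np.asarray(S, float)
    d2 = float(nu @ np.linalg.solve(S, nu))                 # Mahalanobis distance^2
    dim = len(nu)
    gauss = np.exp(-0.5 * d2) / np.sqrt((2 * np.pi) ** dim * np.linalg.det(S))
    return gauss / clutter_density    # > 1 favors the track-origin hypothesis

# A measurement near the prediction scores high; a distant one scores low:
near = lr_score([1.0, 0.5], [0.8, 0.4], [[1.0, 0.0], [0.0, 1.0]], 1e-3)
far = lr_score([5.0, 5.0], [0.0, 0.0], [[1.0, 0.0], [0.0, 1.0]], 1e-3)
```

Because both numerator and denominator carry the same physical units, the ratio is dimensionless, which is what allows scores over different numbers or dimensions of measurements to be compared.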
In this paper an efficient adaptive parameter control scheme for a Multi Function Radar (MFR) is used. The scheme was introduced in Ref. 5 and is designed to meet constraints on specific quantities that are relevant for target tracking while minimizing the energy spent. It is shown here that this optimal scheme leads to a considerable variation of the realized detection probability, even within a single scenario. We also show that constraining or fixing the probability of detection to a predefined value leads to a considerable increase in the energy spent on the target. This holds even when one optimizes the fixed probability of detection. The bottom-line message is that the detection probability is not a design parameter by itself, but merely the product of an optimal schedule.
Most approaches to data association in target tracking use a likelihood-ratio based score for measurement-to-track and track-to-track matching. The classical approach uses a likelihood ratio based on kinematic data. Feature-aided tracking uses non-kinematic data to produce an "auxiliary score" that augments the kinematic score. This paper develops a non-kinematic likelihood ratio score based on statistical models for the signal-to-noise ratio (SNR) and radar cross section (RCS) for use in narrowband radar tracking. The formulation requires an estimate of the target mean RCS, and a key challenge is tracking the mean RCS through significant "jumps" due to aspect dependencies. A novel multiple model approach is used to track through the RCS jumps. Three solutions are developed: one based on an α-filter, a second based on a median filter, and a third based on an IMM filter with a median pre-filter. Simulation results are presented that show the effectiveness of the multiple model approach for tracking through RCS transitions due to aspect-angle changes.
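The median pre-filter feeding a recursive update can be sketched as follows; here a simple α-filter stands in for the paper's filters, and the window length, gain, and RCS sequence are hypothetical.

```python
from statistics import median

def rcs_mean_track(rcs_samples, alpha=0.1, window=5):
    """Track a slowly varying mean RCS with an alpha filter fed by a
    sliding-window median pre-filter: the median suppresses isolated
    fluctuation spikes, while the recursive update follows sustained
    level changes (aspect-dependent jumps)."""
    est = rcs_samples[0]
    history, out = [], []
    for r in rcs_samples:
        history.append(r)
        med = median(history[-window:])    # robust pre-filtered sample
        est = est + alpha * (med - est)    # alpha-filter update
        out.append(est)
    return out

# A jump in mean RCS from ~1 to ~10 m^2 (hypothetical data):
track = rcs_mean_track([1.0, 1.2, 0.9, 1.1, 10.0, 9.8, 10.2, 10.1, 9.9, 10.0])
```

The median delays the response by roughly half a window, but once the jump persists, the estimate ramps toward the new level rather than chasing single-sample spikes.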
Recently, hyperspectral image analysis has proved successful for target detection problems encountered in remote sensing as well as near sensing utilizing in situ instrumentation. Conventional global bi-level thresholding for target detection, such as the clustering-based Otsu's method, has been inadequate for the detection of biologically harmful material on foods that has a large degree of variability in size, location, color, shape, texture, and occurrence time. This paper presents multistep-like thresholding based on kernel density estimation for the real-time detection of harmful contaminants on a food product presented in multispectral images. We are particularly concerned with the detection of fecal contaminants on poultry carcasses in real time. In the past, we identified two optimal wavelength bands and developed a real-time multispectral imaging system using a common aperture camera and a globally optimized thresholding method applied to a ratio of the optimal bands. This work extends our previous study by introducing a new decision rule to detect fecal contaminants at the single-bird level. The underlying idea is to search for statistical separability along the two directions defined by the global optimal threshold vector and its orthogonal vector. Experimental results with real birds and fecal samples in different amounts are provided.
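The global bi-level Otsu step that this work builds on can be sketched in a few lines; the bin count and band-ratio values below are illustrative, not the paper's data.

```python
def otsu_threshold(values, bins=64):
    """Global bi-level Otsu threshold on a 1-D sample (e.g. a ratio of
    two spectral bands): histogram the data, then pick the cut that
    maximizes the between-class variance. Pure-Python sketch."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1.0
    hist = [0] * bins
    for v in values:
        hist[min(int((v - lo) / width), bins - 1)] += 1
    total = len(values)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best, best_t, w0, sum0 = 0.0, 0, 0, 0.0
    for t in range(bins):
        w0 += hist[t]                      # class-0 count up to bin t
        if w0 == 0 or w0 == total:
            continue
        sum0 += t * hist[t]
        m0 = sum0 / w0                     # class-0 mean (in bin units)
        m1 = (total_sum - sum0) / (total - w0)
        var_between = w0 * (total - w0) * (m0 - m1) ** 2
        if var_between > best:
            best, best_t = var_between, t
    return lo + (best_t + 0.5) * width

# Two well-separated clusters (hypothetical band-ratio values):
t = otsu_threshold([0.1, 0.12, 0.11, 0.13, 0.9, 0.92, 0.88, 0.91])
```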
In this paper a novel approach for detecting unknown target maneuvers using range rate information is proposed, based on the generalized Page's test with the estimated target acceleration magnitude. Due to the high nonlinearity between the range rate measurement and the target state, a measurement conversion technique is used to treat range rate as a linear measurement in Cartesian coordinates so that a standard Kalman filter can be applied. The detection performance of the proposed algorithm is compared with that of existing maneuver detectors over various target maneuver motions. In addition, a model switching tracker based on the proposed maneuver detector is compared with the state-of-the-art IMM estimator. The results indicate the effectiveness of the maneuver detection scheme, which simplifies the tracker design. The tracking performance is also evaluated using a steady state analysis.
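The Page (CUSUM) test underlying such a detector can be sketched in scalar form; the drift and threshold values, and the residual sequence, are hypothetical design choices rather than the paper's.

```python
def page_test(residuals, drift=0.5, threshold=5.0):
    """One-sided Page (CUSUM) test: accumulate evidence that the
    residual magnitude exceeds its no-maneuver level, resetting at
    zero, and declare a maneuver when the statistic crosses the
    threshold. Returns the declaration index, or None."""
    g = 0.0
    for k, r in enumerate(residuals):
        g = max(0.0, g + abs(r) - drift)   # reset at 0 keeps the test one-sided
        if g > threshold:
            return k                        # maneuver declared at sample k
    return None

# Quiescent residuals followed by a maneuver-induced bias (hypothetical):
alarm = page_test([0.2, 0.1, 0.3, 0.2, 2.0, 2.1, 1.9, 2.2])
```

The drift sets how much persistent bias is tolerated before evidence accumulates, and the threshold trades detection delay against false-alarm rate.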
Track initiation in dense clutter can result in severe algorithm runtime performance degradation, particularly when using advanced tracking algorithms such as the Multiple-Frame Assignment (MFA) tracker. This is due to the exponential growth in the number of initiation hypotheses to be considered as the initiation window length increases. However, longer track initiation windows produce significantly improved track association. In balancing the need for robust track initiation with real-world runtime constraints, several possible approaches might be considered. This paper discusses basic single and multiple-sensor infrared clutter rejection techniques, and then goes on to discuss integration of those techniques with a full measurement preprocessing stage suitable for use with pixel cluster decomposition and group tracking frameworks. Clutter rejection processing inherently overlaps the track initiation function; in both cases, candidate measurement sequences (arcs) are developed that then undergo some form of batch estimation. In considering clutter rejection at the same time as pixel processing, we note that uncertainty exists in the validity of the measurement (whether or not the measurement is of a clutter point or a true target), in the measurement state (position and intensity), and in the degree of resolution (whether a measurement represents one underlying object, or multiple). An integrated clutter rejection and pixel processing subsystem must take into account all of these processes in generating an accurate sequence of measurement frames, while minimizing the amount of unrejected clutter. We present a mechanism for combining clutter rejection with focal plane processing, and provide simulation results showing the impact of clutter processing on the runtime and tracking performance of a typical space-based infrared tracking system.
In this paper, we present a recursive track-before-detect (TBD) algorithm based on the Probability Hypothesis Density (PHD) filter for multitarget tracking. TBD algorithms are better suited than standard target tracking methods for tracking dim targets in heavy clutter and noise. Classical target tracking, where the measurements are pre-processed at each time step before passing them to the tracking filter, results in information loss, which is very damaging if the target signal-to-noise ratio is low. In TBD, however, the tracking filter operates directly on the raw measurements, at the expense of added computational burden. The development of a recursive TBD algorithm reduces the computational burden over conventional TBD methods such as the Hough transform and dynamic programming. TBD is a hard nonlinear non-Gaussian problem even for single target scenarios. Recent advances in Sequential Monte Carlo (SMC) based nonlinear filtering make multitarget TBD feasible. However, current implementations use a modeling setup to accommodate the varying number of targets, where a multiple model SMC based TBD approach solves the problem conditioned on the model, i.e., the number of targets. The PHD filter, which propagates only the first-order statistical moment (or the PHD) of the full target posterior, has been shown to be a computationally efficient solution to multitarget tracking problems with a varying number of targets. We propose a PHD filter based TBD so that no assumption needs to be made on the number of targets. Simulation results are presented to show the effectiveness of the proposed filter in tracking multiple weak targets.
Detection and estimation of multiple unresolved targets with a monopulse radar is a challenging problem. For ideal single bin processing, it was shown in the literature that at most two unresolved targets can be extracted from the complex matched filter output signal. In this paper, a new algorithm is developed to jointly detect and track more than two targets from a single detected bin. This method involves the use of tracking data in detection. For this purpose, target states are transformed into the detection parameter space, which involves high nonlinearity. In order to handle this, the sequential Monte Carlo (SMC) method, which has proved effective for nonlinear non-Gaussian estimation problems, is used as the basis of the closed loop system for tracking multiple unresolved targets. In addition to the standard SMC steps, the detection parameters corresponding to the predicted particles are evaluated using the nonlinear monopulse radar beam model. This in turn enables the evaluation of the likelihood of the monopulse signal given tracking data. That is, we evaluate the likelihoods of different hypotheses of possible combinations of targets being in different detected bins. Hypothesis testing is used to find the correct detection event. The particles are updated and resampled according to the hypothesis that has the highest likelihood (score). A simulated amplitude comparison monopulse radar is used to generate the data with more than two unresolved targets. Simulation results confirm the possible extraction and tracking of more than two targets jointly.
In underwater tracking, such as with an unmanned undersea vehicle (UUV) or torpedo, it is advantageous to track a target covertly. The Maximum Likelihood-Probabilistic Data Association (ML-PDA) tracking algorithm, which has been demonstrated to establish and maintain track in low SNR/high clutter environments, is used to develop track information in a covert tracking application. By combining intermittent sensor data (active sonar) from the UUV with that of the launch platform (passive sonar) in the ML-PDA track algorithm, fewer active transmissions are required to establish and maintain a given track accuracy, thereby reducing the chance of target alertment. We show that this is a viable operating model and demonstrate how sensor placement affects track accuracy, including determination of best sensor placement and requirements on active transmissions to maintain minimum tracking accuracy.
With the recent advent of moderate-cost unmanned (or uninhabited) aerial vehicles (UAVs) and their success in surveillance, it is natural to consider the cooperative management of groups of UAVs. The problem considered in this paper is the optimization of the information obtained by a group of UAVs carrying out surveillance -- search and tracking -- over a large region which includes a number of targets. The goal is to track detected targets as well as search for the undetected ones. The UAVs are assumed to be equipped with Ground Moving Target Indicator (GMTI) radars, which measure the locations of moving ground targets as well as their radial velocities (Doppler). In this paper, a decentralized cooperative control algorithm is proposed, according to which the UAVs exchange current scan and detection information and each UAV decides its path separately based on an information-based objective function that incorporates target state information as well as target detection probability and survival probability for sensors, the latter corresponding to hostile fire by targets and collision with other UAVs. The proposed algorithm requires limited communication and modest computation, and it can handle failures in communication and loss of UAVs.
This work deals with the following question: using passive (line-of-sight angle) observations of a missile from an aircraft, how can one infer whether or not the missile is aimed at the aircraft? The observations are assumed to be made only on the initial portion (about 1/4) of the missile's trajectory. The approach is to model the trajectory of the missile with a number of kinematic and guidance parameters, estimate them, and use statistical tools to infer whether the missile is guided toward the aircraft. A mathematical model is constructed for a missile under pure proportional navigation with a changing velocity (direction change and speed change), intercepting a nonmaneuvering aircraft. A maximum likelihood estimator is presented for estimating the missile's motion parameters, and a goodness-of-fit test is formulated to test whether or not the aircraft is the missile's aim. Results using measurement data from a realistic missile aimed at an aircraft show that the proposed method can solve this problem successfully. The estimation/decision algorithm presented here can also be used by an aircraft to decide whether appropriate countermeasures are necessary.
An image processing architecture has been developed that executes a concatenated algorithm to determine the presence of multiple fiducial marks on an image plane, locate the estimated positions of the fiducial marks in the image, refine the fiducial mark locations to subpixel accuracy, and translate the x and y coordinates of the fiducial marks to absolute distance and phase relationships between fiducial marks. The fiducial mark is an object with an outer circular boundary and two inner lines that intersect to provide an object with symmetry. This symmetry is crucial for the requirements of rotation and scaling invariance, specifically for the process of identifying the presence of the fiducial mark in the image plane. Video is used for imagery of the fiducial marks.
In this paper, we propose a multiview calibration system for effective 3D display. The system obtains four-view images from a multiview camera system; it rectifies lens and camera distortion, corrects brightness and color errors, and calibrates geometric distortion. We propose signal processing techniques to calibrate the camera distortions that can arise in the acquired multiview images. The discordance of brightness and color is calibrated by a color transform after extracting feature and correspondence points. In addition, the difference in brightness is calibrated using a differential map of brightness from each camera image. Spherical lens distortion is corrected by extracting the pattern from the multiview camera images. Finally, the camera error and size differences among the multiview cameras are calibrated by removing the distortion. Accordingly, the proposed rectification and calibration system enables effective 3D display and the acquisition of natural multiview 3D images.
The ultimate goal of this paper is to track two closely spaced and unresolved targets using monopulse radar measurements, the quality of such tracking being a determinant of successful detection of target spawn. It explores statistical estimation techniques based on the maximum likelihood criterion and Gibbs sampling, and addresses concerns about the accuracy of the measurements delivered thereby. In particular, the Gibbs approach can deliver joint measurements (and the associated covariances) from both targets, and it is therefore natural to consider a joint filter. The ideas are compared, and amongst the various strategies discussed, a particle filter that operates directly on the monopulse measurements is especially promising.
A new nonlinear diffusion filtering scheme based on a nonlinear diffusion equation with a variable scale parameter is developed to preserve faint point sources while smoothing images for segmentation purposes. Application of the proposed approach to simulated, as well as to real images obtained by the Spitzer Space Telescope and by the Chandra X-ray Observatory reduced the Gaussian and Poisson noise successfully, while preserving both point sources and diffuse structures.
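The edge-stopping behavior of nonlinear diffusion can be sketched with a classical Perona-Malik update; note this sketch uses a fixed scale parameter, whereas the scheme above varies it, and all parameter values here are illustrative.

```python
import numpy as np

def nonlinear_diffusion(img, n_iter=20, kappa=0.1, step=0.2):
    """Perona-Malik-style nonlinear diffusion sketch: the conductivity
    g = exp(-(|grad|/kappa)^2) suppresses smoothing where gradients are
    large, so edges and bright point sources diffuse less than flat
    noisy regions. Boundaries are treated as periodic via np.roll."""
    u = np.asarray(img, dtype=float).copy()
    for _ in range(n_iter):
        # finite differences toward the four neighbours
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # edge-stopping weighted update (step < 0.25 for stability)
        u = u + step * sum(np.exp(-(d / kappa) ** 2) * d
                           for d in (dn, ds, de, dw))
    return u

# Pure-noise image (hypothetical): diffusion should reduce its spread.
noisy = np.random.default_rng(0).normal(0.0, 0.05, (16, 16))
smoothed = nonlinear_diffusion(noisy)
```

A variable scale parameter, as in the scheme above, would adapt kappa across the image or across iterations; the fixed-kappa form shown here is only the classical baseline.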