The first part shows that closely spaced small objects can be resolved by a non-traditional Super-Scanning Locator (SSL) with resolution better than the Rayleigh criterion, and that real images (tomograms) of extended objects can be created. A short description is given of theoretical and experimental research to develop an SSL with antennas that perform beam scanning during both the emission and reception of pulses. The reflected signals are received within discrete visibility layers (VLs) formed by the beam scanning during pulse radiation and reception. The SSL has a number of advantages over a conventional locator: the known distribution of VLs in space makes it possible to create adaptive systems that acquire the required information with minimum energy expenditure, and to attain super-resolution of objects (at separations smaller than required by the Rayleigh criterion). An ultrasonic version of the SSL, operating in air in the frequency range used by bats and dolphins, is described. The experimental results fully confirm the theoretical predictions; for instance, three closely spaced small objects, separated by distances three times smaller than the Rayleigh criterion, were resolved, and a real contour image of a model airplane was obtained. An ultrasonic version of the SSL operating in the ocean could be used to build autonomous high-resolution underwater vehicles or a diver's information system. The second part presents the results of theoretical and experimental research on methods and means for detecting, observing, and forming 3D images of static and dynamic objects located undersea or inside or behind optically opaque solid media. The methods for undersea detection and measurement are based on optical refraction, diffraction, and Talbot effects on a water surface disturbed by ultrasound waves reflected or passed by the object. The possibility of measuring wind direction and velocity near the ocean surface using the Talbot effect is also mentioned. The methods for measuring objects located inside or behind optically non-transparent media are based on a microwave holographic still or movie camera.
In this paper we present an automatic algorithm for the removal of echoes caused by anomalous propagation (AP) at the lower radar elevations. The algorithm uses textural information as well as intensity characteristics of reflectivity maps obtained from the two lower radar elevations. The texture of the reflectivity maps is analyzed with the help of multifractals. We present examples that illustrate the efficiency of our algorithm, and we compare it with a manual algorithm developed by NASA/TRMM for AP removal, in terms of total rain accumulation and the number of pixels removed.
Effective missile warning and countermeasures are an unfulfilled goal for the Air Force and DOD community. To make these expectations a reality, sensors exhibiting the required sensitivity, field of regard, and spatial resolution are needed. The largest concern is the first stage of a missile warning system, detection, in which all targets need to be detected with high confidence and with very few false alarms. Typical sensors are limited in their detection capability by the presence of heavy background clutter, sun glints, and inherent sensor noise. Many threat environments include false alarm generators such as burning fuels, flares, exploding ordnance, and industrial sources. Multicolor discrimination is one of the most effective ways of improving the performance of infrared missile warning sensors, particularly in heavy-clutter situations. Its utility has been demonstrated in fielded scanning sensors. Utilization of the background and clutter spectral content, coupled with additional spatial and temporal filtering techniques, has resulted in a robust real-time algorithm that increases signal-to-clutter ratios against point targets. Algorithm results against tactical data are summarized and compared in terms of computational cost as implemented on a real-time 1024 SIMD machine.
Target detection in remotely sensed images can be conducted spatially, spectrally, or both. The difficulty of detecting targets with spatial image analysis arises from the fact that the ground sampling distance is generally larger than the size of the targets of interest, in which case targets are embedded in a single pixel and cannot be detected spatially. Under this circumstance, target detection must be carried out at the subpixel level, and spectral analysis offers a valuable alternative. This paper compares two constrained approaches for subpixel detection of targets in remote sensing images. One is a target abundance-constrained approach, referred to as the nonnegatively constrained least squares (NCLS) method. It is a constrained least squares linear spectral mixture analysis method which imposes a nonnegativity constraint on the abundance fractions of the targets of interest. A common drawback of linear spectral mixture analysis based methods is the requirement for prior knowledge of the endmembers present in an image scene. In order to mitigate this drawback, the NCLS method is extended to an unsupervised approach, referred to as the unsupervised nonnegatively constrained least squares (UNCLS) method, which does not require prior knowledge of the endmembers present in the image scene. The second approach is a target signature-constrained method, called the constrained energy minimization (CEM) method. It constrains the desired target signature with a specific gain while minimizing effects caused by other unknown signatures. Data from the HYperspectral Digital Imagery Collection Experiment (HYDICE) sensor are used to compare the performance of these methods.
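Since the CEM filter has a simple closed form, a minimal sketch may help fix ideas. It assumes a hyperspectral cube of shape (rows, cols, bands) and a known target signature d (both illustrative synthetic inputs, not HYDICE data); the filter weight vector is R^-1 d normalized so that the response to d is unity.

import numpy as np

def cem_detector(cube, d, eps=1e-6):
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands)                  # pixels as rows
    R = (X.T @ X) / X.shape[0]                   # sample correlation matrix
    R += eps * np.eye(bands)                     # regularize before inversion
    Rinv_d = np.linalg.solve(R, d)
    w = Rinv_d / (d @ Rinv_d)                    # CEM filter: unit gain on d
    return (X @ w).reshape(rows, cols)           # detector output per pixel

# Synthetic example: one subpixel target planted in noise.
rng = np.random.default_rng(0)
cube = rng.normal(size=(32, 32, 10))
d = rng.normal(size=10)
cube[5, 7] += 3.0 * d                            # plant the target
scores = cem_detector(cube, d)
print(np.unravel_index(scores.argmax(), scores.shape))  # peak at the planted pixel (5, 7)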
In this paper, we use an adaptive AR (autoregressive) model to optimally filter background texture from images. The filter maps background texture into minimum-variance, locally white noise. An additive point target signal, however, is unaffected by the filter, thereby effectively maximizing the signal to noise/clutter ratio. Thus, thresholding the filter output can detect anomalous pixels in the images. Additionally, the paper introduces a false alarm rejection scheme based on the intersection of the detections from a four-quadrant AR (Quad-AR) filter. The paper addresses the implicit background assumptions in this approach and in the median filter approach to small target detection. An example application of the filter to infrared images of missiles immersed in intense sea glint is presented, and the AR filter performance is compared to that of a median filter. It is shown that for the infrared sub-pixel missile-over-sea problem, the Quad-AR approach is substantially better than previous approaches.
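As an illustration of the whitening idea (not the authors' exact Quad-AR formulation), the following sketch fits a single causal 2-D AR predictor to a frame by least squares and thresholds the positive prediction residual; the neighborhood, the 5-sigma threshold, and the synthetic data are assumptions for the example.

import numpy as np

def ar_whiten(img):
    # Causal neighbors: left, up, up-left (a single-quadrant support).
    y = img[1:, 1:].ravel()
    X = np.stack([img[1:, :-1].ravel(),    # left
                  img[:-1, 1:].ravel(),    # up
                  img[:-1, :-1].ravel()],  # up-left
                 axis=1)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)   # adaptive AR coefficients
    resid = y - X @ coef                           # residual: near-white background
    return resid.reshape(img.shape[0] - 1, img.shape[1] - 1)

rng = np.random.default_rng(1)
# Spatially correlated background (smoothed noise) plus one point target.
n = rng.normal(size=(66, 66))
bg = (n[:-2, :-2] + n[1:-1, :-2] + n[:-2, 1:-1] + n[1:-1, 1:-1]) / 4
img = bg.copy()
img[20, 30] += 6.0
resid = ar_whiten(img)
print(np.argwhere(resid > 5 * resid.std()))   # the planted target maps to (19, 29)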
There are several methods reported in the literature for detecting dim targets against slowly moving clutter, and each has its own advantages and disadvantages. The challenge lies in reducing the false alarm rate to an acceptable level, and choosing a threshold that achieves a constant false alarm rate is always a tricky problem. Too low a threshold ensures detection of target pixels but produces too many false targets, which limits the ability of the post-processor to trace out target paths. Too high a threshold results in fewer false alarms, but targets may also be missed, creating a problem in establishing target tracks. These conflicting requirements demand a middle-ground solution that improves the overall CFAR concept for the detection of dim point targets in the presence of evolving clouds and heavy background clutter. The adaptive threshold is based on the random and correlated noise of the incoming image sequence. The incoming frames are processed with the adaptive threshold and accumulated recursively, and a post-processor with built-in flexibility checks the validity of target paths. This paper presents an improvement over our paper presented at SPIE, Denver, in July 1999. The algorithm has been tested with the available database and the results are very promising.
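A minimal sketch of an adaptive threshold of the general kind discussed above: each pixel is compared against the local mean plus a multiple of the local standard deviation. The window size, the scale factor k, and the synthetic frame are illustrative choices, not the paper's design.

import numpy as np

def adaptive_threshold(frame, half=4, k=4.0):
    """Flag pixels exceeding local mean + k * local std (CFAR-style)."""
    rows, cols = frame.shape
    hits = np.zeros_like(frame, dtype=bool)
    for i in range(rows):
        for j in range(cols):
            r0, r1 = max(0, i - half), min(rows, i + half + 1)
            c0, c1 = max(0, j - half), min(cols, j + half + 1)
            win = frame[r0:r1, c0:c1]
            mu, sigma = win.mean(), win.std() + 1e-9
            hits[i, j] = frame[i, j] > mu + k * sigma
    return hits

rng = np.random.default_rng(2)
frame = rng.normal(size=(48, 48))
frame[10, 12] += 8.0                            # dim point target on noise
print(np.argwhere(adaptive_threshold(frame)))   # should flag [10, 12]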
Clutter rejection is often an essential task in applications involving the detection and identification of small targets, making the choice of a clutter rejection algorithm extremely important if such a system is to perform as desired. Many different clutter rejection algorithms have been developed by various groups seeking to address this problem; however, as the performance of the algorithms is often very scenario dependent, selecting an appropriate algorithm for a given application usually requires thorough testing and performance analysis. This paper describes the methodology and results of a study of clutter rejection algorithms for a system involving a staring IR camera mounted on an airborne platform. The purpose of this system is to detect the use of ordnance on a battlefield and then determine what type of ordnance was used. The clutter rejection algorithm needed to run in real time and be implementable in hardware. The algorithms chosen for testing included 17 spatial filters and 4 temporal filters, along with two different types of thresholding (spatially fixed and spatially adaptive). Appropriate datasets for testing were created using a combination of real ordnance data taken by the IR camera and clutter backgrounds from the MODIS Airborne Simulator. Several metrics were chosen to assist in algorithm performance evaluation. The final algorithm selection was based both on computational complexity and on algorithm performance.
In IRST applications, cluttered backgrounds are typically much more intense than both the equivalent sensor noise and the intensity of the targets to be detected. This necessitates the development of efficient clutter rejection technology for track initialization and reliable target detection. Experimental study shows that the best existing spatial filtering techniques allow for clutter suppression of up to 10 dB, while the desired level (for reliable detection/tracking) is 25-30 dB or higher. This level of clutter suppression can be achieved only by implementing spatial-temporal rather than purely spatial filtering. In addition, the clutter rejection algorithm should be supplemented by a jitter compensation technique; otherwise, due to the blurring effect, temporal filtering cannot be applied effectively. This paper discusses a novel adaptive spatial-temporal technique for clutter rejection. The algorithm is developed on the basis of robust and adaptive methods that are invariant to prior uncertainty with respect to the statistical properties of the clutter and adaptive with respect to its variability. The developed clutter rejection technique is based on a multi-parametric approximation of the clutter which, after estimation of the parameters, leads to an adaptive spatial-temporal filter. The filter coefficients are calculated adaptively to minimize the empirical mean-square value of the filtering residual at every time instant. The adaptive spatial-temporal filtering allows one to suppress any background, regardless of its spatial variation. Simultaneously, the algorithm estimates line-of-sight (LOS) drift and allows for jitter compensation. Simulation results show that the algorithm gives a tremendous gain compared to the best existing spatial techniques.
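The following sketch illustrates the general idea of an adaptive spatial-temporal filter whose weights are re-estimated by least squares on every frame so that the residual of the current frame has minimum empirical mean-square value. The 3x3 spatio-temporal support and the synthetic drifting clutter are assumptions for the example, not the paper's multi-parametric clutter model.

import numpy as np

def st_residual(prev, curr):
    """Residual of the current frame after spatio-temporal prediction."""
    rows, cols = curr.shape
    preds = []
    for di in (-1, 0, 1):                       # 3x3 neighborhood of the previous frame
        for dj in (-1, 0, 1):
            preds.append(prev[1 + di:rows - 1 + di, 1 + dj:cols - 1 + dj].ravel())
    X = np.stack(preds, axis=1)
    y = curr[1:-1, 1:-1].ravel()
    w, *_ = np.linalg.lstsq(X, y, rcond=None)   # adaptive filter weights
    return (y - X @ w).reshape(rows - 2, cols - 2)

rng = np.random.default_rng(3)
clutter = rng.normal(size=(64, 64))
prev = clutter + 0.05 * rng.normal(size=clutter.shape)
curr = np.roll(clutter, 1, axis=1) + 0.05 * rng.normal(size=clutter.shape)  # drifted scene
curr[30, 40] += 1.0                             # dim moving target
resid = st_residual(prev, curr)
print(resid.std(), resid[29, 39])               # clutter suppressed, target preserved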
The spectral signature of a target is typically unknown a priori because of its dependence upon environmental conditions (e.g., sun angle, atmospheric attenuation and scattering), factors affecting the reflectivity and emissivity of the target's surface (dirt, dust, water, paint, etc.), and recent operating history (hot or cold engine, exhaust parts, wheels or tracks, etc.). Because of the high variability of the spectral signature of a target, multispectral detection typically detects spectral anomalies. For example, the canopy of a helicopter hovering in front of tree clutter may glint in the midwave infrared band while the reststrahlen spectral feature of the fuselage paint occurs in the longwave infrared band; both of these are spectral anomalies relative to the tree clutter. If the target is slightly extended so that it subtends more than one pixel, the spectral anomalies by which the target may be detected will not be spatially collocated. This effectively lowers the ROC (receiver operating characteristic) curve of the detection process. This paper derives the ROC curves for several alternative solutions to this problem. One solution considers all possible spectral n-tuples within a small region; one of these n-tuples would likely contain all of the spectral anomalies of the target. Another solution is to apply a spatial maximum operator to each spectral band prior to the anomaly detector, which also combines all the spectral anomalies from the target into a single n-tuple. These methods have the potential to increase the probability of detection (PD), but an increase in the probability of false alarm (PFA) will also occur. The ROC curves of these solutions to the problem of detecting slightly extended targets are derived and compared to establish relative levels of performance.
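The spatial-maximum idea can be illustrated with a simple Mahalanobis (RX-style) anomaly score; the 3x3 maximum window and the synthetic two-band scene below are illustrative assumptions, not the detector analyzed in the paper.

import numpy as np

def spatial_max(band, half=1):
    rows, cols = band.shape
    out = np.empty_like(band)
    for i in range(rows):
        for j in range(cols):
            out[i, j] = band[max(0, i - half):i + half + 1,
                             max(0, j - half):j + half + 1].max()
    return out

def rx_score(cube):
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands)
    mu = X.mean(axis=0)
    C = np.cov(X, rowvar=False) + 1e-6 * np.eye(bands)
    d = X - mu
    m = np.einsum('ij,jk,ik->i', d, np.linalg.inv(C), d)   # squared Mahalanobis distance
    return m.reshape(rows, cols)

rng = np.random.default_rng(4)
cube = rng.normal(size=(32, 32, 2))
cube[10, 10, 0] += 5.0        # anomaly in band 0
cube[10, 11, 1] += 5.0        # adjacent, non-collocated anomaly in band 1
maxed = np.stack([spatial_max(cube[:, :, b]) for b in range(2)], axis=-1)
print(rx_score(cube)[10, 10], rx_score(maxed)[10, 10])   # combined score is larger after the max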
At a previous conference, we described a powerful class of temporal filters with excellent signal-to-clutter gains in evolving cloud scenes of consecutive IR sequences. The generic temporal filter is a zero-mean damped sinusoid, implemented recursively. The full algorithm, a triple temporal filter (TTF), consists of a sequence of two zero-mean damped sinusoids followed by an exponential averaging filter. The outputs of the first two filters are weakened at strong local edges. Analysis of a real-world database led to two optimized filters: one dedicated to noise-dominated scenes, the other to cloud clutter-dominated scenes; a dual-channel fusion of the two filters has also been implemented in hardware. This paper describes the post-processing and thresholding of the outputs of the filter algorithms. Post-processing on each output frame is implemented by a simple spatial algorithm which searches for maximum linear or pseudo-linear streaks made up of three linked pixels. The output histogram after post-processing is more robust to histogram-based thresholding and in some cases has improved signal-to-clutter ratio. The threshold is based on a simple level-occupancy (binary) histogram in which the first gap of 4 empty levels is determined and a threshold is established based on this gap value and the number of occupied levels in the histogram above the gap. The post-processing and thresholding of the filter outputs are now operating in real-time hardware. Preliminary flight tests of the algorithms operating in real time on a small aircraft demonstrate the viability of the approach on a moving platform. Specific examples and a video of the real-time performance on fixed and moving platforms will be presented at the conference.
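For orientation only, the sketch below implements a generic recursive zero-mean damped-sinusoid temporal filter of the kind described: a second-order recursion (a complex pole pair) driven by a first difference of the input, so the DC gain is zero and static clutter is rejected. The pole radius r and frequency w are illustrative and are not the authors' optimized TTF parameters.

import numpy as np

def damped_sinusoid_filter(frames, r=0.9, w=0.35):
    """frames: array (T, rows, cols); returns the filtered sequence, same shape."""
    a1, a2 = 2 * r * np.cos(w), -r ** 2            # recursive (pole) coefficients
    y = np.zeros_like(frames)
    x_prev = np.zeros_like(frames[0])
    for t in range(frames.shape[0]):
        x = frames[t]
        y[t] = a1 * y[t - 1] + a2 * y[t - 2] + x - x_prev   # zero-mean damped sinusoid
        x_prev = x
    return y

rng = np.random.default_rng(5)
T, N = 60, 32
frames = np.tile(rng.normal(size=(1, N, N)), (T, 1, 1))     # static clutter
frames += 0.05 * rng.normal(size=(T, N, N))                 # sensor noise
frames[40:, 8, 9] += 1.0                                    # target appears at frame 40
out = damped_sinusoid_filter(frames)
# The filtered frame a few scans after onset peaks at the target pixel (8, 9).
print(np.unravel_index(np.abs(out[45]).argmax(), (N, N)), np.abs(out[45]).max())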
Missile warning is one of the most significant problems facing aircraft flying into regions of unrest around the world. Recent advances in technology provide new avenues for detecting these threats and have permitted the use of imaging detectors and multi-color systems. Detecting threats while maintaining a low false alarm rate is the most demanding challenge facing these systems. Using data from AFRL's Spectral Infrared Detection System (SIRDS) test bed, the efficacy of alternative spectral threat detection algorithms developed around these technologies is evaluated and compared. The data used to evaluate the algorithms cover a range of clutter conditions including urban, industrial, maritime, and rural. Background image data were corrected for non-uniformity and filtered to enhance the threat-to-clutter response. The corrected data were further processed and analyzed statistically to determine probability-of-detection thresholds and the corresponding probability of false alarm. The results are summarized for three algorithms: simple threshold detection, background-normalized analysis, and an inter-band correlation detection algorithm.
There exist a number of powerful methods for detecting small low-observable targets with stationary dynamics in image sequences provided by IR and other imaging sensors (see, e.g., Refs. 1, 2). However, these methods need to be extended to handle maneuvering targets. In this paper, we demonstrate that banks of interacting Bayesian filters (BIBF) can be utilized for this purpose. We consider target dynamics modeled by jump-linear systems. In contrast to previous studies, we do not assume that the mode jump process is a Markov chain; in particular, we allow the probabilities of jumps to be conditioned on the state variable. We then present a computationally efficient (real-time) algorithm for detection and tracking of low-observable agile targets. A comparison of the BIBF and IMM approaches is carried out in a simple example.
This paper concerns itself with the target-free behavior of data association in clutter. There are two main results. In the first, a new technique is developed for the detection of track loss. The basis is Page's (CUSUM) test, but a modification is given for the situation in which the track-absent parameter (e.g., the mean of the squared innovations) is not known. The result is a Page-style test with variable bias and threshold. The second matter of interest is the target-free distribution of the maximum-likelihood probabilistic data association (MLPDA) statistic. It is confirmed via importance sampling that a Gaussian distribution is not inappropriate, but the mean and variance of this distribution are different from those of the un-maximized log-likelihood surface. Guidance is given on calculating these.
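A minimal sketch of the underlying Page (CUSUM) recursion, assuming the normalized innovation squared (NIS) is monitored and that the bias and threshold are known design constants; the paper's contribution is precisely the case where the track-absent parameter is unknown, which this sketch does not address.

import numpy as np

def page_test(nis, bias, threshold):
    """Return the first scan index at which the one-sided CUSUM crosses the threshold."""
    g = 0.0
    for k, z in enumerate(nis):
        g = max(0.0, g + z - bias)      # Page recursion: accumulate positive drift only
        if g > threshold:
            return k
    return None

rng = np.random.default_rng(6)
dim = 2                                              # measurement dimension
in_track = rng.chisquare(dim, size=50)               # NIS while the track holds
lost = rng.chisquare(dim, size=50) * 4.0             # inflated NIS after track loss
alarm = page_test(np.concatenate([in_track, lost]), bias=dim + 1.0, threshold=20.0)
print(alarm)    # typically a few scans after index 50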
The Northrop Grumman Track Grouper provides an effective solution to the track grouping problem because it successfully forms track aggregates that decrease display clutter and enhance the battle management picture. The Track Grouper is a formation group tracking algorithm that makes track-to-track comparisons to create and update group membership. The benefits of this approach are conservation of computer resources and preservation of individual target positions. The Track Grouper reduces the adverse effects of measurement mis-correlations and false alarms because it uses estimated tracks instead of noisy sensor measurements. The Track Grouper uses a series of kinematic gates and three primary sub-functions (assign, split, and merge) to determine group membership and maintain group ID, history, and ancestry. The Track Grouper successfully overcomes the challenges of group tracking and provides a key battle management tool that enhances Moving Target Exploitation.
Passive sonar signal processing generally includes tracking of narrowband and/or broadband signature components observed on a Lofargram or on a Bearing-Time-Record (BTR) display. Fielded line tracking approaches to date have been recursive, single-hypothesis-oriented Kalman or alpha-beta filters, with no mechanism for considering tracking alternatives beyond the most recent scan of measurements. While adaptivity is often built into the filter to handle changing track dynamics, these approaches are still extensions of single-target tracking solutions to a multiple-target tracking environment. This paper describes an application of multiple-hypothesis, multiple-target tracking technology to the sonar line tracking problem. A Multiple Hypothesis Line Tracker (MHLT) is developed which retains the recursive minimum-mean-square-error tracking behavior of a Kalman filter in a maximum-a-posteriori, delayed-decision, multiple hypothesis context. Multiple line track filter states are developed and maintained using the interacting multiple model (IMM) state representation. Further, the data association and assignment problem is enhanced by considering line attribute information (line bandwidth and SNR) in addition to beam/bearing and frequency fit. MHLT results on real sonar data are presented to demonstrate the benefits of the multiple hypothesis approach. The utility of the system in cluttered environments, and particularly in crossing-line situations, is shown.
This paper is a continuation of two previous papers presented at the Signal and Data Processing of Small Targets conferences held in 1997 and 1999. The 1997 paper, titled Combined Kalman Filter and JVC Algorithms for AEW Target Tracking Applications, described AEW advanced tracking algorithm development and provided performance results for straight-line targets. In the 1999 paper, Maneuver Tracking Algorithms for AEW Target Tracking Applications, modifications to the tracking and association algorithms necessary to track the remaining 100 maneuvering targets of the 120-target scenario were presented. The maneuvering targets include zig-zag, wave, oval, and racetrack trajectories.
An autonomous field of sensor nodes needs to acquire and track targets of interest traversing the field. Small detection ranges limit the detectability of the field. As detections occur in the field, they are transmitted acoustically to a master node. Both detection processing and acoustic communication drain a node's battery. In order to maximize field life, an approach must be developed to control detector thresholds and acoustic communication routing. To address these problems, an adaptive threshold control scheme has been developed; this technique minimizes power consumption while still maintaining the field-level probability of detection. Acoustic communication routing for the field is also performed to minimize power consumption and therefore extend the life of the field. The control law developed is based on an evolutionary programming approach. Evolutionary programming is a stochastic optimization algorithm used to solve NP-hard problems. Results are provided which demonstrate the ability to maintain a constant field-level probability of detection while extending the life of the sensor field.
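The mutate-and-select structure of such an evolutionary-programming controller can be sketched as follows, with toy stand-ins for the power and detection models (Gaussian detection statistics, power proportional to false-report traffic); these models, the SNR, and the required field-level Pd are assumptions for illustration only.

import numpy as np
from math import erfc, sqrt

def q(x):                                     # Gaussian tail probability
    return 0.5 * erfc(x / sqrt(2))

def fitness(thresholds, snr=3.0, pd_req=0.9):
    pfa = np.array([q(t) for t in thresholds])
    pd = np.array([q(t - snr) for t in thresholds])
    field_pd = 1.0 - np.prod(1.0 - pd)        # field detects if any node detects
    power = pfa.sum()                         # toy model: false reports drain batteries
    penalty = 100.0 * max(0.0, pd_req - field_pd)
    return power + penalty

rng = np.random.default_rng(7)
n_nodes, pop_size = 5, 40
pop = rng.uniform(1.0, 5.0, size=(pop_size, n_nodes))       # candidate threshold sets
for gen in range(200):
    children = pop + 0.2 * rng.normal(size=pop.shape)        # EP: mutate every parent
    merged = np.vstack([pop, children])
    scores = np.array([fitness(ind) for ind in merged])
    pop = merged[np.argsort(scores)[:pop_size]]              # keep the best half
best = pop[0]
print(np.round(best, 2), fitness(best))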
This is the first part of a series of papers that provide a comprehensive and up-to-date survey of the problems and techniques of tracking maneuvering targets in the absence of the so-called measurement-origin uncertainty. It surveys the various mathematical models of target dynamics proposed for maneuvering target tracking, including 2D and 3D maneuver models as well as coordinate-uncoupled generic models for target dynamics. The survey emphasizes the underlying ideas and assumptions of the models. Interrelationships among the models surveyed and insight into their pros and cons are provided. Some material presented here has not appeared elsewhere.
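As one concrete example of the coordinate-uncoupled models such a survey covers, the sketch below writes out the discrete-time nearly-constant-velocity (white-noise acceleration) model for a single Cartesian coordinate; the sampling interval T and noise intensity q are illustrative.

import numpy as np

def cv_model(T, q):
    """State [position, velocity]; q is the acceleration noise intensity."""
    F = np.array([[1.0, T],
                  [0.0, 1.0]])
    Q = q * np.array([[T**3 / 3, T**2 / 2],
                      [T**2 / 2, T]])
    return F, Q

F, Q = cv_model(T=1.0, q=0.1)
x = np.array([0.0, 10.0])                    # 10 m/s along one axis
rng = np.random.default_rng(8)
for _ in range(5):                           # propagate with process noise
    x = F @ x + rng.multivariate_normal(np.zeros(2), Q)
print(x)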
In this paper we present the design of a Variable Structure Interacting Multiple Model (VS-IMM) estimator for tracking evasive ground targets using Moving Target Indicator (MTI) reports obtained from an airborne sensor. In order to avoid detection by the MTI sensor, the targets use a move-stop-move strategy, where a target deliberately stops or moves at a very low speed for some time before accelerating again. In this case, when the target's radial velocity (along the line of sight from the sensor) falls below a certain Minimum Detectable Velocity, the target is not detected by the sensor. Under these conditions, an estimator that does not explicitly account for this move-stop-move motion can produce broken tracks. The tracker proposed in this paper handles the evasive move-stop-move motion via the VS-IMM estimator, where the tracker mode set is augmented with a stopped-target model when the estimated speed of the target falls below a certain threshold. Using this additional stopped-target model, the target state is kept alive even in the absence of a measurement. A simulated scenario is used to illustrate the selection of design parameters and the operation of the tracker. Performance measures are presented to contrast the benefits of the VS-IMM estimator, which uses the stopped-target model, over a standard IMM estimator.
A Ground Moving Target Indicator (GMTI) tracker is developed using a Variable Structure Interacting Multiple Model (VS-IMM) filter. Current trackers use road network database information either to condition GMTI measurements, thereby altering the detection report, or to constrain tracks formed on measurements near roads to the road network, thus making a hard decision about the location of the target. The VS-IMM allows for soft decisions about which road a target is possibly located on. The VS-IMM filter developed here adaptively adds and deletes road models based upon the history of measurement data or extrapolated tracks. As measurements are associated with existing tracks, a search of possible road segment models is performed to add or delete road segment models as required. As targets on roads approach junctions, additional potential road segment models are added; as the targets pass the intersection, only the most likely model is retained. When a target begins to move into a road segment that is obscured, a model is added that modifies the filter estimate and likelihood according to a hidden-target model. The VS-IMM filter includes two models for characterizing temporal vehicle movement: a model with low process noise is used for targets traveling straight, and a model with high process noise is used for highly maneuvering targets. The state estimates from the possible road models are combined with the state estimates of the on-road and off-road multiple temporal models to produce the composite state estimate for the VS-IMM filter.
An extended Kalman filter was developed by Maybeck and Mercier [1] to track unresolved or point targets whose spatial signature is the point spread function of the sensor. An extension of this filter which takes into account the target shape and its variation with aspect angle will be developed and should offer improvement in several areas: performance against structured clutter (Maybeck considered only a white-noise background), and elimination of the errors and computations associated with segmentation.
This paper describes a nonlinear filter for ground target tracking. Hospitability for maneuver, derived from terrain, road, and vehicle dynamics constraints, is incorporated directly into the filter's motion model. The conditional probability density for the target state is maintained and updated with sensor measurements as soon as they become available. The conditional density is time-updated between sensor measurements using finite difference methods. In simulations using square-law detected measurements, the filter is able to track maneuvering ground targets when the Signal to Interference + Noise Ratio (SINR) is between 6 and 9 dB.
Particle approximations are used to track a maneuvering signal given only a noisy, corrupted sequence of observations, as are encountered in target tracking and surveillance. The signal exhibits nonlinearities that preclude the optimal use of a Kalman filter. It obeys a stochastic differential equation (SDE) in a seven-dimensional state space, one dimension of which is a discrete maneuver type. The maneuver type switches as a Markov chain and each maneuver identifies a unique SDE for the propagation of the remaining six state parameters. Observations are constructed at discrete time intervals by projecting a polygon corresponding to the target state onto two dimensions and incorporating the noise. A new branching particle filter is introduced and compared with two existing particle filters. The filters simulate a large number of independent particles, each of which moves with the stochastic law of the target. Particles are weighted, redistributed, or branched, depending on the method of filtering, based on their accordance with the current observation from the sequence. Each filter provides an approximated probability distribution of the target state given all past observations. All three particle filters converge to the exact conditional distribution as the number of particles goes to infinity, but differ in how well they perform with a finite number of particles. Using the exactly known ground truth, the root-mean-squared (RMS) errors in target position of the estimated distributions from the three filters are compared. The relative tracking power of the filters is quantified for this target at varying sizes, particle counts, and levels of observation noise.
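For reference, a weight-and-resample (SIR) particle filter of the kind the branching filter is compared against can be sketched on a toy scalar problem; the random-walk dynamics and Gaussian likelihood below are illustrative and far simpler than the paper's seven-dimensional jump-diffusion model.

import numpy as np

rng = np.random.default_rng(9)
n_particles, n_steps = 500, 50
truth, meas = 0.0, []
for _ in range(n_steps):                           # simulate truth and observations
    truth += rng.normal(scale=0.5)
    meas.append(truth + rng.normal(scale=1.0))

particles = rng.normal(scale=2.0, size=n_particles)           # initial particle cloud
estimates = []
for z in meas:
    particles += rng.normal(scale=0.5, size=n_particles)      # propagate with the target's law
    w = np.exp(-0.5 * (z - particles) ** 2)                   # weight by the likelihood
    w /= w.sum()
    estimates.append(np.dot(w, particles))                    # approximate conditional mean
    idx = rng.choice(n_particles, size=n_particles, p=w)      # resample
    particles = particles[idx]

print(estimates[-1], truth)       # the estimate should track the final truth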
The tracking performance of the Particle Filter is compared with that of the Range-Parameterised EKF (RPEKF) and Modified Polar coordinate EKF (MPEKF) for a single-sensor angle-only tracking problem with ownship maneuver. The Particle Filter is based on representing the required density of the state vector as a set of random samples with associated weights. This filter is implemented for recursive estimation, and works by propagating the set of samples, and then updating the associated weights according to the new received measurement. The RPEKF, which is essentially a weighted sum of multiple EKF outputs, and the MPEKF are known for their robust angle-only tracking performance. This comparative study shows that the Particle Filter performance is the best, although the RPEKF is only marginally worse. The superior performance of the Particle Filter is particularly evident for high noise conditions where the EKF type trackers generally diverge. Also, the Particle Filter and the RPEKF are found to be robust to the level of a priori knowledge of initial target range. On the contrary, the MPEKF exhibits degraded performance for poor initialisation.
In this paper we present a new method for multiple model filtering. This method is a combination of the IMM filter approach and the hybrid particle filter approach. The merging part of this IMM hybrid particle algorithm is able to deal with the model-switching behavior, whereas the filtering part can deal with nonlinearities in the dynamics and measurements and possibly non-Gaussian noise within a given mode.
In the book Mathematics of Data Fusion as well as in earlier papers and the recent monograph An Introduction to Multisource-Multitarget Statistics and Its Applications, I have proposed a unified Bayesian approach to multisource-multitarget fusion, detection, tracking, and identification based on a multitarget generalization of the Bayesian recursive nonlinear filtering equations. The foundation for this approach is finite-set statistics, a systematic generalization of the standard engineer's statistical calculus to the multisensor-multitarget realm. I showed, in particular, how true multitarget likelihood functions can be constructed using this calculus. In this paper I show that true multitarget Markov motion models can be constructed in much the same manner. If such an approach is to ever become practical, however, computationally tractable approximate filters will have to be devised. This paper elaborates an approach suggested in Mathematics of Data Fusion: para-Gaussian densities, i.e., multitarget analogs of Gaussian distributions that result in closed-form integrations when inserted into the multitarget Bayesian recursive nonlinear equations. I propose an approximate multitarget filter based on the para-Gaussian concept that shows some promise of leading to computational tractability.
The ability to provide an accurate view of a region of interest is standard among existing tracking systems. The extension of this capability to include accurate projections of the targets to future times increases the complexity of the problem. However, the ability to predict future locations is important because of the inherent time latency from sensor to shooter. Because of this, a major requirement of the tracking system is that it must be able to use available information to accurately predict future target locations. The focus of this paper is to describe an improved state estimation technique that incorporates road information by using a Variable Structure IMM. Furthermore, this approach identifies and accounts for stopped or stationary targets. Finally, some preliminary performance results are presented.
Multiple sensors improve tracking performance. The Kinematic Automatic Tracker (KAT) may drop a track when the Joint STARS aircraft is turning or the track is screened; in situations like this, multiple sensors can improve tracking performance. Two algorithms for tracking moving targets using multiple sensors are developed, both based on modifying the existing KAT architecture. The first approach is based on the use of the best sensor's Moving Target Indicators (MTIs) when the sensors have overlapping dwells. The second and more general approach uses a report-to-track fusion algorithm. The first approach is frame-based (with multiple dwells), whereas the second approach is dwell-based. Tracking results are presented for the first algorithm for a two-sensor Korea scenario of increasing complexity, and the advantages of using multiple sensors for tracking are shown for this simulated scenario. For the report-to-track fusion algorithm, we present preliminary tracking results for a simulated scenario using two UAV sensors.
The batch Maximum Likelihood estimator combined with Probabilistic Data Association (ML-PDA) has been shown to be effective in acquiring low observable (LO), i.e., low SNR, non-maneuvering targets in the presence of heavy clutter. This paper uses the ML-PDA estimator with signal strength or amplitude information (AI) in a sliding-window fashion to detect high-speed targets in heavy clutter using electro-optical (EO) sensors. The initial time and the length of the sliding window are adjusted adaptively according to the information content of the received measurements. A track validation scheme via hypothesis testing is developed to confirm the estimated track, that is, the presence of a target, in each window. The sliding-window ML-PDA approach, together with track validation, enables early detection by rejecting noninformative scans, target reacquisition in case of temporary target disappearance, and the handling of targets whose speed evolves over time. The proposed algorithm is shown to detect a target hidden in as many as 600 false alarms per scan 10 frames earlier than the Multiple Hypothesis Tracking (MHT) algorithm.
The topic studied here involves target tracking of aircraft using naval radar, and can also be used in civilian applications such as airport traffic management. The aim is to initiate or promote a 3-D track from lower-dimensional 2-D radar contact data. Because the method is meant to be used at long ranges, the main assumption is that the target is performing rectilinear motion at a given altitude. The solution to this problem will facilitate track management, as all tracks will eventually be three-dimensional. The two cascaded algorithms presented here consist of first determining speed and altitude independently of the actual trajectory, then determining the actual trajectory given the best-fit value for the altitude. The algorithms are shown to work perfectly for noiseless data and adequately for typical naval radar parameters. The added sensory components will help resolve the association problem in multiple-target scenarios by providing altitude information hidden from the sensors and revealed only through this mathematical modeling and its related algorithmic processing. In addition, the fact that one can deduce speed and altitude at the early stages of tracking permits the elimination of many platform identifications at the outset of the Multi-Sensor Data Fusion process.
Fusion tracking using data from multiple, distributed sensors will only be successful if the bias associated with each platform can be established and removed before data fusion is attempted. In many cases, the sensors cannot be reliably calibrated in advance and it is necessary to rely upon targets of opportunity. Bias estimation must then become part of an integrated data fusion system. This paper demonstrates the feasibility of such a system, using TOTS to provide the multi-model, multi-sensor tracking capability, with additional functionality to support bias estimation and correction based on whatever common targets are observed. Such a prototype system is shown to be effective, and is valuable in highlighting the main issues that must be addressed before a full system can be fielded.
The paper presents a method for the estimation of the bias errors of active and passive sensors used in connection with multisensor multitarget tracking. It is based on comparing measurements of targets of opportunity and does not require reference targets or reference sensors. The method handles bias errors that vary with time and is suitable for on-line processing. The most essential ingredients of the method are: inclusion of a priori values and uncertainties; minimization of an appropriate cost function; linearization around nominal points; introduction of process noise; and quasi-recursive processing. Bias estimation is often difficult because of the limited observability of sensor biases, in the sense that there may not be a unique set of biases that explains the relative errors between measurements. This problem is discussed, and simulations illustrate how the presented method avoids these difficulties.
Nonlinearities in coordinate transformation equations introduce bias that, unless corrected, can affect the statistical fidelity of parameter estimates. Several correction methods have been studied, differing both in their algebraic form (additive versus multiplicative) and in their underlying statistical basis (fixed-truth versus fixed-measurement scenario assumptions). This paper, extending previous work of the authors and others, compares alternative approaches for mitigating the bias induced in the transformation of the target position measurements from sensor range-azimuth-elevation angle coordinates to Cartesian x-y-z coordinates. Comparisons are made initially for a static tracking environment involving coordinate transformations at a single measurement time. The comparisons are then extended to a time-sustained tracking period in which sequential measurements are passed through a Kalman filter to produce a track estimate.
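A minimal sketch of the multiplicative debiasing idea for the 2-D range-azimuth case: the raw conversion r*cos(az) is shrunk on average by exp(-sigma_az**2 / 2), so multiplying by exp(sigma_az**2 / 2) removes the mean bias. The geometry and noise levels below are illustrative, and the full 3-D range-azimuth-elevation treatment in the paper is richer.

import numpy as np

rng = np.random.default_rng(10)
r_true, az_true = 100_000.0, np.deg2rad(45.0)        # 100 km at 45 degrees
sigma_r, sigma_az = 50.0, np.deg2rad(2.0)            # coarse azimuth -> visible bias

r_m = r_true + sigma_r * rng.normal(size=200_000)
az_m = az_true + sigma_az * rng.normal(size=200_000)

x_raw = r_m * np.cos(az_m)                           # standard conversion
x_corr = x_raw * np.exp(sigma_az ** 2 / 2)           # multiplicative bias correction

x_true = r_true * np.cos(az_true)
print(x_raw.mean() - x_true, x_corr.mean() - x_true) # residual bias of the corrected mean is much smaller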
In this paper we compare the performance of centralized and distributed tracking architectures using a set of fighter aircraft scenarios. The tracking accuracy at the platform (local) and global levels is evaluated for track segments with uniform motion and for different maneuvering scenarios. We evaluate the effects of target acceleration level, target separation, measurement accuracy, sensor revisit intervals, and false alarm rates on tracking performance at both the local and global levels. Kalman filter (KF) and Interacting Multiple Model (IMM) estimators with different target kinematic models are compared in terms of root mean square (RMS) position error, RMS velocity error, and track purity. The computational requirements of the different estimators are also compared. The centralized solution with perfect data association is used as a performance benchmark for comparison. Scenarios considered include target maneuvers up to 3.5 g and use measurements from up to 4 sensors on different platforms. Based on simulation results, appropriate estimator/data association options are recommended for different scenario configurations. An important conclusion is that, with the advent of the IMM estimator, the KF is obsolete for problems of this type. Also, the distributed estimator performs 10% worse than the centralized one.
Decentralized systems merit a detailed analysis in view of the potential advantages that they offer, which include significant improvements in fault tolerance, modularity, and scalability. Such attributes are required by a number of systems that are currently being planned within the defence and civil aerospace sectors. A recognized difficulty with the decentralized network architecture is the potential it creates for redundant data to proliferate as a result of cyclic information flows. This can lead to estimation biases and divergence. Solutions which require the network information sources to be tagged in some way are not generally possible without relaxing some of the constraints on which the decentralized paradigm is founded. This paper consequently investigates a different approach. Specifically, it examines the application of the Covariance Intersection (CI) data fusion technique. CI is relevant to the redundant data problem because it guarantees consistent estimates without requiring correlations to be maintained. The estimation performance of CI is compared here, with respect to a restricted Kalman approach, for a dynamic multi-platform network example. It is concluded that a hybrid CI/Kalman approach offers the best solution, since it exploits known independent information and unknown correlated information without having to relax the decentralized constraints.
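For orientation, the CI fusion rule itself is compact: the fused information matrix is a convex combination of the input information matrices, which is consistent for any unknown cross-correlation. The sketch below picks the mixing weight by trace minimization over a grid, one common convention; the example covariances are illustrative.

import numpy as np

def covariance_intersection(x1, P1, x2, P2, n_grid=101):
    best = None
    for w in np.linspace(0.0, 1.0, n_grid):          # search the mixing weight omega
        info = w * np.linalg.inv(P1) + (1 - w) * np.linalg.inv(P2)
        P = np.linalg.inv(info)
        if best is None or np.trace(P) < best[0]:
            x = P @ (w * np.linalg.inv(P1) @ x1 + (1 - w) * np.linalg.inv(P2) @ x2)
            best = (np.trace(P), x, P)
    return best[1], best[2]

P1 = np.array([[4.0, 0.0], [0.0, 1.0]])
P2 = np.array([[1.0, 0.0], [0.0, 4.0]])
x, P = covariance_intersection(np.array([1.0, 2.0]), P1, np.array([1.4, 1.8]), P2)
print(x, np.diag(P))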
In air combat, information advantage over the opponent is vital for the success of the operation. For that reason, modern fighter aircraft have extensive sensor suites to track other objects. In order to form a unified picture of the vicinity, all sensor information is fused. Since system modularity and high computational performance are key issues in the application, a decentralized tracking approach, where the information from the decentralized trackers is fused in a central node, is preferable. Furthermore, in order to improve the sensor tracking performance, it is often desired to feed back information to the sensors from the central node. In this paper, track-to-track association in such a decentralized tracking system with feedback is addressed. The central fusion node has to associate the sensor tracks to each other to be able to fuse them. In a system without feedback, the track-to-track association algorithm bases its conclusions on the assumption that the estimation errors of the tracks from different local trackers are not correlated. However, when information is fed back to the local trackers, this assumption is not valid, since the sensor tracks then contain common information. System configurations that deal with this problem are proposed and tested in a fighter aircraft application. One approach is to extract the uncorrelated information from the sensor data and use that in the association process. Another approach is to keep parallel trackers in the sensors that contain only the local sensor information. Both approaches produce sensor tracks that contain the same information as the sensor tracks in a system without feedback. Also, a track-to-track association algorithm that recursively uses information from multiple time steps is proposed. The use of multiple-time-step data separates it from conventional track-to-track association algorithms, which mostly use only current information. The result is an algorithm that improves performance and gives a more stable solution.
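The conventional test that the paper revisits can be sketched as a chi-square gate on the difference of two track estimates, under the simplifying assumption of uncorrelated estimation errors, which is exactly the assumption that fails when information is fed back; the gate value and track states below are illustrative.

import numpy as np

def same_target(x1, P1, x2, P2, gate=5.99):   # 5.99 ~ chi-square(2 dof, 95%)
    d = x1 - x2
    S = P1 + P2                                # assumes uncorrelated track errors
    return float(d @ np.linalg.solve(S, d)) <= gate

x1, P1 = np.array([100.0, 50.0]), np.diag([4.0, 4.0])
x2, P2 = np.array([101.5, 48.9]), np.diag([9.0, 9.0])
x3 = np.array([140.0, 70.0])
print(same_target(x1, P1, x2, P2))   # True: the tracks likely share a target
print(same_target(x1, P1, x3, P2))   # False: too far apart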
A new approach is taken to address the various aspects of the multiple-target tracking (MTT) problem in dense and noisy environments. Instead of fixing the trackers on the potential targets, as conventional tracking algorithms do, this new approach is fundamentally different in that an array of parallel, distributed trackers is laid over the search space. The difficult data-track association problem that has challenged conventional trackers becomes a non-issue with this approach. By partitioning the search space into cells, this new approach, called PMAP (probabilistic mapping), dynamically calculates the spatial probability distribution of targets in the search space via Bayesian updates. The distribution is spread at each time step, following a fairly general Markov-chain target motion model, to become the prior probabilities of the next scan. This framework can effectively handle data from multiple sensors and incorporate contextual information, such as terrain and weather, by performing a form of evidential reasoning. Used as a pre-filtering device, PMAP is shown to remove noise-like false alarms effectively, while keeping the target dropout rate very low. This gives the downstream track linker a much easier job to perform. A related benefit is that with PMAP it is now possible to lower the detection threshold and enjoy high probability of detection and low probability of false alarm at the same time, thereby improving overall tracking performance. The feasibility of using PMAP to track specific targets in an end-game scenario is also discussed. Both real and simulated data are used to illustrate the PMAP performance. Some related applications based on the PMAP approach, including a spatial-temporal sensor data fusion application, are mentioned.
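The sketch below illustrates, under assumed kernels and grid sizes, the predict/update cycle the abstract describes: cell probabilities are spread by a Markov motion kernel and then reweighted by the per-cell measurement likelihood. It is not the PMAP implementation itself.

```python
import numpy as np
from scipy.ndimage import convolve

def pmap_step(prior, likelihood, motion_kernel):
    """One predict/update cycle of a grid-based probabilistic map.

    prior         -- 2-D array of cell probabilities from the last scan
    likelihood    -- per-cell P(measurements in this scan | target in cell)
    motion_kernel -- Markov transition kernel (e.g. a small blur) that
                     spreads probability to neighbouring cells
    """
    # Prediction: diffuse the map according to the target motion model.
    predicted = convolve(prior, motion_kernel, mode="constant")
    # Update: Bayes rule per cell, then renormalise over the search space.
    posterior = predicted * likelihood
    return posterior / posterior.sum()

# Illustrative use: uniform prior, 3x3 diffusion kernel, synthetic likelihood.
grid = np.full((64, 64), 1.0 / (64 * 64))
kernel = np.ones((3, 3)) / 9.0
like = np.ones((64, 64)); like[30:33, 40:43] = 5.0   # a detection cluster
grid = pmap_step(grid, like, kernel)
```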
In a previous paper, the authors proposed a new general and systematic electronic counter-countermeasure (ECCM) technique called the Decomposition and Fusion (D&F) approach. This ECCM is implemented within the multiple target-tracking framework for protection against range-gate-pull-off (RGPO) and range false target ECM techniques. The original formulation left open the specific multiple target tracking framework. In this paper, we develop a specific implementation of the D&F technique and evaluate it within the Benchmark 2 Problem environment. Simulation results are presented showing the track-loss rejection capabilities and the track accuracy performance of the D&F technique.
The quality of multiple model estimators can be improved with multisensor fusion. This paper contrasts the performance of three multiple model algorithms. It is shown that the simplest algorithm is adequate in high signal-to-noise environments, while the more sophisticated algorithms warrant attention when the observations are ambiguous.
In this paper we address the issue of measurement-to-track association within the framework of multiple hypothesis tracking (MHT). Specifically, we generate a maximum a posteriori (MAP) cost as a function of the number of tracks K. This cost is generated, for each K, as a marginalization over the set of hypothesized track-sets. The proposed algorithm is developed based on a trellis diagram representation of MHT and a generalized list-Viterbi algorithm for pruning and merging hypotheses. Compared to methods of pruning hypotheses for either MHT or Bayesian multitarget tracking, the resulting Viterbi MHT algorithm is less likely to incorrectly drop tracks in high-clutter and high-missed-detection scenarios. The proposed number-of-tracks estimation algorithm provides a time-recursive estimate of the number of tracks. It also provides track estimates, allows for the deletion and addition of tracks, and accounts for false alarms and missed detections.
Because there may be misassociations, soft-decision association, or extra tracks due to false detections or other causes, performance evaluation of multiple target tracking is more complex than evaluating filter performance. This paper presents a methodology and performance metrics to evaluate tracking in the presence of these and other complications. The goal is to evaluate the various aspects of tracking performance that are of concern to operational users. The emphasis of this paper is performance evaluation of trackers, both single-sensor trackers and distributed trackers that combine data from multiple, distributed sensors. Included are the equations for computing metrics to evaluate completeness, timeliness, track continuity, ambiguity, accuracy, and cross-platform commonality. Cross-platform commonality includes metrics to evaluate how consistent the target tracks are across global trackers at different locations.
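The paper's own metric equations are not reproduced here; purely as an illustration of the kind of quantities involved, the sketch below computes two of the simpler ones (completeness and positional accuracy) under assumed definitions.

```python
import numpy as np

def completeness(n_truth_tracked, n_truth_total):
    """Fraction of truth objects that have at least one associated track
    (an assumed definition, for illustration only)."""
    return n_truth_tracked / n_truth_total

def position_rmse(track_positions, truth_positions):
    """RMS position error over track/truth pairs that have already been
    associated (rows aligned one-to-one)."""
    err = np.asarray(track_positions) - np.asarray(truth_positions)
    return float(np.sqrt(np.mean(np.sum(err**2, axis=1))))
```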
In target tracking systems measurements are typically collected in scans or frames and then transmitted to a processing center. In multisensor tracking systems that operate in a centralized manner there are usually different time delays in transmitting the scans or frames from the various sensors to the center. This can lead to situations where measurements from the same target arrive out of sequence. Such out-of-sequence measurement (OOSM) arrivals can occur even in the absence of scan/frame communication time delays. The resulting negative-time measurement update problem, which is quite common in real multisensor systems, was solved approximately in [2] by neglecting the process noise in the backward prediction, or retrodiction. In the standard case, the (forward) state prediction can be easily carried out, since the process noise, because of its whiteness, is independent of the current state. However, in retrodiction this independence no longer holds. The standard smoothing algorithms cannot be used because the time stamp of the measurement is, in general, arbitrary. The results of [4,3] accounted only partially for the process noise. In view of this, the exact state update equation for this problem is presented. The three algorithms are compared on a number of realistic examples, including a GMTI (ground moving target indicator) radar case.
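For orientation, the sketch below implements the approximate update that neglects process noise in the retrodiction, which is the baseline the exact solution improves upon; the exact algorithm additionally retrodicts the process noise and its correlation with the current state.

```python
import numpy as np

def oosm_update_no_q(x, P, z, R, H, F_fwd):
    """Update the current estimate (x, P) at time t_k with a delayed
    measurement z taken at time tau < t_k, neglecting the process noise
    between tau and t_k.  F_fwd is the state transition from tau to t_k."""
    F_back = np.linalg.inv(F_fwd)          # retrodiction transition
    x_tau = F_back @ x                      # state retrodicted to time tau
    P_tau = F_back @ P @ F_back.T
    S = H @ P_tau @ H.T + R                 # innovation covariance at tau
    nu = z - H @ x_tau                      # delayed-measurement innovation
    Pxz = P @ F_back.T @ H.T                # cross-cov of x(t_k) and z(tau)
    W = Pxz @ np.linalg.inv(S)              # gain applied at the current time
    return x + W @ nu, P - W @ S @ W.T
```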
The paper derives a deferred logic data association algorithm based on the mixture reduction (MR) approach originally due to Salmond [SPIE vol. 1305, 1990]. The novelty of the proposed algorithm is that it provides recursive formulae for both data association and target existence (confidence) estimation, thus allowing automatic track initiation and termination. The track initiation performance of the proposed filter is investigated by computer simulations. It is observed that at moderately high levels of clutter density the proposed filter initiates tracks more reliably than the corresponding PDA filter. An extension of the proposed filter to the multi-target case is also presented. In addition, the paper compares the track maintenance performance of the MR algorithm with an MHT implementation.
A new method of track management for a phased array radar is proposed to simplify the data association and improve the allocation of radar resources. The tracks are organized into Association Groups and Dwell Groups. Association Groups are used for coarse gating when associating new measurement data with existing tracks, while Dwell Groups are used to efficiently schedule the next dwell on closely spaced targets. By considering only the tracks in the same Association Group, a form of coarse gating is inherently performed to cull candidate tracks that are unlikely to associate with a measurement. A Dwell Group contains tracks that are spaced closely enough for a single dwell to illuminate all of its members. Dwell Groups lay the foundation for more systematic approaches to optimal allocation of the radar resources. Simulation results are presented to illustrate the new technique.
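One plausible (hypothetical) way to form such groups is to cluster tracks whose gates overlap, for example by taking connected components of a proximity graph, as sketched below; the paper's actual grouping procedure may differ.

```python
import numpy as np
from scipy.sparse.csgraph import connected_components
from scipy.spatial.distance import cdist

def association_groups(track_positions, radius):
    """Group tracks into clusters whose members lie within `radius` of at
    least one other member (connected components of the proximity graph).
    Illustrative sketch only; `radius` is an assumed gating distance."""
    d = cdist(track_positions, track_positions)
    adjacency = (d <= radius).astype(int)
    n_groups, labels = connected_components(adjacency, directed=False)
    return [np.where(labels == g)[0] for g in range(n_groups)]
```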
In this paper a novel signal processor combined with a tracker/radar resource allocator based on the Interacting Multiple Model Probabilistic Data Association (IMMPDA) estimator is presented for tracking highly maneuvering, closely spaced targets. An advanced monopulse processing technique is developed which uses the Maximum Likelihood (ML) approach and yields separate angle measurements for two targets in the same radar beam and the same range cell, i.e., targets that are unresolved. This processing results in a significant improvement, in terms of tracking performance, over techniques using the monopulse ratio for the same problem. The standard monopulse-ratio technique of extracting angles yields a single merged measurement when the targets are unresolved, resulting in track coalescence. The signal processor and tracker were coupled with a radar resource allocator to minimize the radar resources required to track the targets while maintaining a low track loss rate and ensuring high estimation accuracy.
Typically, a tracker receives the position coordinates of the threshold exceedances from the detection process. The threshold nonlinearity serves to prevent superfluous data from entering the tracker; it also prevents other information about a detection from being used by the tracker. Track features were developed to provide a shunt for useful information around the detection threshold. Track features such as the measured C(C+N)R of a detection or local measures of clutter severity have been shown to significantly reduce track confirmation times and the probability of confirming a false track. This paper considers the development of track features for multispectral data. The multispectral track features are used in conjunction with available spatial and temporal track features. Ideally, multispectral track features would provide the tracker with information about how target-like the spectral signature of a detection is. Unfortunately, the spectral signature of the target is unknown a priori because of its dependence upon unmeasured environmental variables, uncertainties in factors affecting the emissivity and reflectivity of the target's surface, and unknown operating history. This prevents the general development of multispectral track features that measure target-likeness. The alternative, which is developed in this paper, is to use the consistency of the spectral signatures of the detections that form a track as a track feature. This multispectral track feature helps suppress the formation of tracks from random detections. It also inhibits a true track from branching to a false detection. Finally, it reduces the true track confirmation time.
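As a hedged illustration of a consistency-based feature, the sketch below scores a track by the average pairwise cosine similarity of the spectral signatures of its detections; the paper's actual feature definition may differ.

```python
import numpy as np

def spectral_consistency(signatures):
    """Mean pairwise cosine similarity of spectral signatures (one row per
    detection in the track).  Values near 1 indicate a spectrally
    consistent track; tracks built from random detections score lower."""
    S = np.asarray(signatures, dtype=float)
    S = S / np.linalg.norm(S, axis=1, keepdims=True)
    sim = S @ S.T
    iu = np.triu_indices(len(S), k=1)     # upper triangle, excluding diagonal
    return float(sim[iu].mean())
```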
Accurate tracking of a ballistic missile from launch to impact requires the use of multiple models to fully exploit the characteristic dynamic differences between the various flight phases. This paper shows that the loosely-coupled, autonomous multiple model strategy used by ASA's TOTS (Target Oriented Tracking System) is very successful in this context, giving accurate tracking and responsiveness to changes in dynamics. For illustration, examples of real ballistic missiles are used, with data provided by several distributed ground-based radars, covering the entire flight trajectory.
In this paper a performance comparison between a Kalman filter and the Interacting Multiple Model (IMM) estimator is carried out for single-target tracking. In a number of target tracking problems of various sizes, ranging from single-target tracking to tracking of about a thousand aircraft for Air Traffic Control, it has been shown that the IMM estimator performs significantly better than a Kalman filter. In spite of these studies and many others, the conditions under which an IMM estimator is preferable to a single-model Kalman filter have not been quantified. In this paper the limits of a single-model Kalman filter versus an IMM estimator are quantified in terms of the target maneuvering index, which is a function of the target motion uncertainty, the measurement uncertainty and the sensor revisit interval. Naturally, the higher the maneuverability of the target (the higher the maneuvering index), the greater the need for a versatile estimator like the IMM. Using simulation studies, it is shown that above a certain maneuvering index an IMM estimator is preferred over a Kalman filter to track the target motion. The performances of the two estimators are compared in terms of estimation errors and track continuity over the practical range of maneuvering indices. These limits should serve as a guideline in choosing the more versatile, but costlier, IMM estimator over a simpler Kalman filter.
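The sketch below uses the common definition of the maneuvering index, lambda = sigma_v * T^2 / sigma_w; the decision threshold is a placeholder, not the value established by the paper's simulation study.

```python
def maneuvering_index(sigma_v, T, sigma_w):
    """Target maneuvering index lambda = sigma_v * T**2 / sigma_w, where
    sigma_v is the process (acceleration) noise std, T the sensor revisit
    interval and sigma_w the measurement noise std."""
    return sigma_v * T**2 / sigma_w

def pick_estimator(lmbda, threshold=0.5):
    """Placeholder decision rule: above some threshold (to be taken from the
    paper's results, not the value assumed here) the IMM estimator is
    preferred over a single-model Kalman filter."""
    return "IMM" if lmbda > threshold else "Kalman"
```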
In target tracking systems that utilize Doppler radar (or sonar), the measurements consist of range, range rate and one or two angles. When multiple sensors are used in the proper geometry, the final accuracy is determined primarily by each sensor's range estimation accuracy. In this case it is particularly important to extract the most accurate state estimate in the range direction by processing optimally the range and range rate measurements. The usual assumption in the design of tracking filters is that the measurement errors in range and range rate are uncorrelated. However, in the case of the commonly used upsweep chirp (linear FM) waveforms, there is a significant negative correlation between the range and range rate measurement noises. The purpose of this note is to quantify the performance improvement one can obtain when this correlation is present and accounted for.
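The correlation enters the filter simply through the off-diagonal terms of the measurement covariance R used in the Kalman update, as sketched below; the correlation coefficient shown is an assumed illustrative value.

```python
import numpy as np

def range_rangerate_covariance(sigma_r, sigma_rdot, rho=-0.9):
    """Measurement noise covariance for a (range, range-rate) pair with
    correlation coefficient rho (negative for an upsweep chirp)."""
    c = rho * sigma_r * sigma_rdot
    return np.array([[sigma_r**2, c],
                     [c,          sigma_rdot**2]])

def kalman_update(x, P, z, H, R):
    """Standard Kalman update; exploiting the correlation only requires
    passing the full R above instead of a diagonal one."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```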
We consider the problem of tracking a group of point targets via a sensor with limited resolution and a finite field of view. Measurement association uncertainty and measurement process non-linearity are major difficulties with such cases. It is shown that a Bayesian estimator can be directly implemented using the particle filter technique.
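A minimal bootstrap particle filter of the kind referred to is sketched below for a generic state-space model; the limited-resolution, finite-field-of-view measurement model specific to group tracking would be encoded in the user-supplied likelihood function.

```python
import numpy as np

def particle_filter_step(particles, weights, z, propagate, likelihood, rng):
    """One predict/update/resample cycle of a bootstrap particle filter.

    particles  -- (N, d) array of state samples
    weights    -- (N,) normalized weights
    propagate  -- draws x_k ~ p(x_k | x_{k-1}) for each particle
    likelihood -- returns p(z | x_k) per particle (this is where sensor
                  resolution and field-of-view effects would enter)
    """
    particles = propagate(particles, rng)            # predict
    weights = weights * likelihood(z, particles)     # update
    weights = weights / weights.sum()
    # Multinomial resampling when the effective sample size is low.
    if 1.0 / np.sum(weights**2) < 0.5 * len(weights):
        idx = rng.choice(len(weights), size=len(weights), p=weights)
        particles = particles[idx]
        weights = np.full(len(weights), 1.0 / len(weights))
    return particles, weights
```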
In recent years the problem of multiple target tracking (MTT) has been studied more and more extensively. However, the problems of gating and track initiation, which are among the foremost difficulties in MTT, have been somewhat neglected. Rectangular and ellipsoidal gates are currently used in most MTT systems, but this paper shows that they are not very successful in many cases. A new track gate, the quasi-drip-shaped (QDS) gate, is therefore proposed to eliminate unlikely observation-to-track pairings more effectively and to perform a pre-correlation when more than one return falls within the track gate. Sensor errors and the influence of maneuvers are considered jointly, and the QDS gate is formed on the basis of a more accurate analysis of the error distribution. Simulation results are presented for five to ten targets, including several heavily interfering ones; these illustrate the improvements obtained by using the QDS gate.
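For reference, the conventional ellipsoidal gate that the QDS gate is compared against admits a measurement when its normalized innovation squared falls below a chi-squared threshold, as in the sketch below.

```python
import numpy as np
from scipy.stats import chi2

def in_ellipsoidal_gate(z, z_pred, S, prob=0.99):
    """Conventional ellipsoidal gate: accept measurement z if the
    normalized innovation squared d2 = nu' S^-1 nu is below the
    chi-squared threshold for gate probability `prob`."""
    nu = z - z_pred
    d2 = float(nu @ np.linalg.inv(S) @ nu)
    return d2 <= chi2.ppf(prob, df=len(z))
```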
Elements from data fusion, optimisation and particle filtering are brought together to form the Multi-Sensor Fusion Management (MSFM) algorithm. The algorithm provides a framework for combining the information from multiple sensors and producing good solutions to the problem of how best to deploy/use these and/or other sensors to optimise some criteria in the future. A problem from Anti-Submarine Warfare (ASW) is taken as an example of the potential use of the algorithm. The algorithm is shown to make efficient use of a limited supply of passive sonobuoys in order to locate a submarine to the required accuracy. The results show that in the simulation the traditional strategies for sonobuoy deployment required approximately four times as many sonobuoys as the MSFM algorithm to achieve the required localisation.
The processing time requirements of several algorithms for solving the 2-D (also called single-frame) linear assignment problem are compared, along with their accuracy given either random or biased measurement errors. The specific problem considered is that of assigning measurements to truth objects using costs that are the chi-squared distances between them. Performance comparisons are provided for the algorithms implemented both in a compiled language (C or FORTRAN) and in the interpretive MatLab language. Accuracy considerations show that an optimal assignment algorithm is preferred if biased measurement errors are present. The Jonker-Volgenant-Castanon (JVC) algorithm is the preferred approach considering both average and maximum solution time. The Auction algorithm finds favor for being both efficient and easy to understand, but it is never faster and often much slower than the JVC algorithm. Both algorithms are dramatically faster than the Munkres algorithm. The greedy nearest-neighbor algorithm is an ad hoc solution that provides a sub-optimal but unique solution more cheaply than the optimal assignment algorithms. However, the JVC algorithm is as fast as the greedy algorithm for simple problems, only marginally slower on hard problems, and vastly more accurate in the presence of measurement biases.
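A modern equivalent of the benchmarked experiment can be sketched as follows: build the chi-squared cost matrix and solve the 2-D assignment with scipy's linear_sum_assignment, a Jonker-Volgenant-style solver standing in here for the JVC implementation timed in the paper.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_measurements(meas, truth, cov):
    """Optimal 2-D assignment of measurements to truth objects using
    chi-squared (Mahalanobis-squared) distances as costs.

    meas  -- (M, d) measurement array
    truth -- (T, d) truth-object array
    cov   -- (d, d) measurement error covariance (assumed common here)
    """
    cov_inv = np.linalg.inv(cov)
    diff = meas[:, None, :] - truth[None, :, :]             # (M, T, d)
    cost = np.einsum("mtd,de,mte->mt", diff, cov_inv, diff)  # chi-squared costs
    rows, cols = linear_sum_assignment(cost)                 # JVC-style solver
    return list(zip(rows, cols)), cost[rows, cols].sum()
```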
In this paper, a method for detecting small moving objects in video sequences is described. In the first step, the camera motion is eliminated using motion compensation. An adaptive subband decomposition structure is then used to analyze the motion-compensated image. In the low-high and high-low subimages, small moving objects appear as outliers, and they are detected using a statistical Gaussianity detection test based on higher order statistics. It turns out that, in general, the distribution of the residual error image pixels is almost Gaussian. On the other hand, the distribution of the pixels in the residual image deviates from Gaussianity in the presence of outliers. Simulation examples are presented.
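As a hedged illustration of a higher-order-statistics Gaussianity check, the sketch below flags subband blocks whose sample excess kurtosis deviates from the Gaussian value of zero; the paper's actual test statistic may differ.

```python
import numpy as np
from scipy.stats import kurtosis

def outlier_windows(subband, win=16, thresh=1.0):
    """Flag win x win blocks of a (low-high or high-low) subband image whose
    excess kurtosis exceeds `thresh`, i.e. blocks whose pixel distribution
    deviates from Gaussianity and may therefore contain a small object."""
    h, w = subband.shape
    flagged = []
    for r in range(0, h - win + 1, win):
        for c in range(0, w - win + 1, win):
            block = subband[r:r + win, c:c + win].ravel()
            if kurtosis(block, fisher=True) > thresh:   # 0 for a Gaussian
                flagged.append((r, c))
    return flagged
```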
Regions of interest that contain small targets often cover a small number of pixels, e.g., 100 or fewer. For such regions vision-based super-resolution techniques are feasible that would be infeasible for regions that cover a large number of pixels. One such technique centers basis functions (such as Gaussians) of the same width on all pixels and adjusts their amplitudes so that the sum of the basis functions integrated over each pixel is its gray value. This technique implements super-resolution in that the sum of basis functions determines the gray values of sub-pixels of any size. The resulting super-resolved visualizations, each characterized by a different basis function width, may enable the recognition of small targets that would otherwise remain unrecognized.
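A one-dimensional sketch of the construction is given below: one Gaussian of fixed width per pixel, amplitudes solved so that the basis sum averaged over each pixel reproduces its gray value, and the continuous sum then sampled on a finer grid. Approximating the per-pixel integral by sub-pixel sampling is a simplification for brevity.

```python
import numpy as np

def super_resolve_1d(gray, sigma=0.7, upsample=4):
    """1-D illustration: center a Gaussian of width `sigma` on each pixel,
    solve for amplitudes so the basis sum, averaged over each unit pixel,
    matches that pixel's gray value, then sample the sum on a finer grid."""
    n = len(gray)
    centers = np.arange(n, dtype=float)
    # Approximate the integral over each unit pixel by averaging 8 samples.
    sub = np.linspace(-0.5, 0.5, 8, endpoint=False) + 1.0 / 16
    A = np.zeros((n, n))
    for i in range(n):                      # pixel index
        pts = centers[i] + sub              # sample points inside pixel i
        A[i] = np.exp(-(pts[:, None] - centers[None, :])**2 /
                      (2 * sigma**2)).mean(axis=0)
    amps = np.linalg.solve(A, np.asarray(gray, dtype=float))
    fine = np.linspace(-0.5, n - 0.5, n * upsample, endpoint=False)
    recon = np.exp(-(fine[:, None] - centers[None, :])**2 /
                   (2 * sigma**2)) @ amps
    return fine, recon
```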
An obvious use for feature and attribute data is for target typing (discrimination, classification, identification, or recognition) and in combat identification. Another use is in the data (or track) association process. The data association function is often decomposed into two steps. The first step is a preliminary threshold process to eliminate unlikely measurement-track pairs. This is followed by the second step, the process of selecting measurement-track pairs or assigning weights to measurement-track pairs so that the tracks can be updated by a filter. The primary concern of this paper is the use of feature and attribute data in the data association process for tracking small targets with data from one or more sensors.
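A common (here assumed, not the paper's) way for feature data to enter the second step is to add a feature mismatch term to the kinematic chi-squared distance, as sketched below with a simple Gaussian feature model.

```python
import numpy as np

def association_cost(nu, S, feat_meas, feat_track, feat_var):
    """Combined kinematic + feature association cost for one
    measurement-track pair.

    nu, S                  -- kinematic innovation and its covariance
    feat_meas, feat_track  -- feature/attribute vectors of the measurement
                              and the track (assumed Gaussian here)
    feat_var               -- assumed feature noise variances per component
    """
    kinematic = float(nu @ np.linalg.inv(S) @ nu)
    df = np.asarray(feat_meas) - np.asarray(feat_track)
    feature = float(np.sum(df**2 / feat_var))   # -2 log feature likelihood (up to a constant)
    return kinematic + feature
```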