Modeling radar signal reflection from a wavy sea surface uses a realistic characterization of the large surface features and parameterizes the effect of the small roughness elements. Representing the reflection coefficient at each point of the sea surface as a function of the specular deviation angle is, to our knowledge, a novel approach. The objective is to achieve enough simplification, while retaining enough fidelity, to obtain a practical multipath model. The specular deviation angle as used in this investigation is defined and explained. Being a function of the sea elevations, which are stochastic in nature, this quantity is also random and has a probability density function. This density function depends on the relative geometry of the antenna and target positions and, together with the beam-broadening effect of the small surface ripples, determines the reflectivity of the sea surface at each point. The probability density function of the specular deviation angle is derived, and the distribution of the specular deviation angle as a function of position on the mean sea surface is described.
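The definition suggests a simple geometric computation: at a surface point, the specular deviation angle is the angle between the local surface normal and the bisector direction that would produce a perfect specular bounce from antenna to target. The sketch below is our own illustration of that geometry, not the paper's derivation; the function and argument names are ours.

```python
import math

def specular_deviation_angle(p, n, antenna, target):
    """Angle (radians) between the local surface normal n at point p and
    the bisector direction that would give a perfect specular reflection
    from antenna to target via p.  Zero means p is a specular point."""
    def sub(a, b):
        return tuple(x - y for x, y in zip(a, b))
    def unit(v):
        m = math.sqrt(sum(x * x for x in v))
        return tuple(x / m for x in v)
    to_ant = unit(sub(antenna, p))
    to_tgt = unit(sub(target, p))
    bisector = unit(tuple(a + t for a, t in zip(to_ant, to_tgt)))
    nn = unit(n)
    cosang = max(-1.0, min(1.0, sum(a * b for a, b in zip(nn, bisector))))
    return math.acos(cosang)
```

For a flat surface point midway between antenna and target at equal heights, the deviation is zero; moving the target off the symmetric geometry makes it positive.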
Resolution limitations have a significant impact on the accuracy of small-target parameters estimated from imagery. This paper describes a model-based method for small-target parameter estimation. The method requires an a priori model of the target and, under many conditions, will provide better results than deconvolution relying on pixel data alone. A description of the algorithm is given, along with examples illustrating performance in a variety of situations, including a demonstration of the ability to handle some degree of saturation in the imaging process.
A method for estimating the correlation functions of the processes at the outputs of the optimal and generalized detectors is proposed, with the input stochastic process observed over a bounded time interval. The proposed method allows the detection characteristics of the optimal and generalized detectors to be determined more precisely. Using this correlation function estimate, the probability of detection can be predicted with high accuracy for both the optimal and the generalized detector, taking the signal base into consideration. In line with the proposed method, the probability density of the background noise at the output of the generalized detector is determined more exactly. There is a correlation between the parameters of this probability density and the signal base, and this correlation has a substantial effect on the detection characteristics, which is very important in practice.
An innovative approach is being used to implement and simulate the IR and laser radar signal processing algorithms for the Advanced Sensor Technology Program and the Discrimination Interceptor Technology Program. Although the algorithms will run on four different computer architectures, they will use the same source code for all implementations. The initial development and testing will occur in Mathcad on a Windows 95/NT personal computer, then move to simulation on a Silicon Graphics workstation, then to scaled real-time simulation on a parallel high performance computer (HPC), and finally to the actual flight processor, the miniaturized parallel Wafer Scale Signal Processor (WSSP) with a MIMD architecture. This flexibility is accomplished with code wrappers that implement interchangeable interface layers for the code modules: one wrapper for Mathcad matrices, one for C++ objects on the workstation, one for message passing with static routing on the HPC, and one for dynamically routed message passing on the WSSP. With this approach, developers can move modules back and forth between the workstation simulation environment and the implementation hardware, eliminating the need to maintain different versions of the same algorithm. The signal processing algorithms will be modified to work in a massively parallel architecture with a message passing interface, which is simulated on the Silicon Graphics workstation, emulated on the HPC, and implemented on the WSSP. This approach will allow for pipeline processing as well as multiple, concurrently running instances of modules. In addition, innovative algorithms will fuse active laser radar detections and passive multicolor IR sensor measurements to improve target state estimation.
This paper describes an analytic model which generates a synthetic list of detection observations from an IRST. The observation list contains both false detects and target detections. The false detects are generated from a statistical model of the clutter and noise. The user is able to select from a menu of clutter types; this selection determines the values of the statistical parameters. The target type and trajectory are user specified. The target type is selected from a menu and determines the signature of the target. Both the target signature and clutter are propagated through the atmosphere and the sensor. The sensor is modeled as a cascade of transfer functions and includes optics, detectors, electronics and noise sources. The signal processing portion of the sensor model assumes a matched filter is used to increase the S(C + N)R prior to detection. The detection threshold is set to provide the user-specified probability of false alarm. Each entry in the observation list includes the observation time, the angular position of the observation, the estimated S(C + N)R of the observation, and the number of degrees of freedom, which is a measure of clutter severity in the region of the observation. The model is intended to be used as part of a larger simulation, for example in a sensor fusion study, or to provide tracker test sequences for performance comparison and evaluation.
The detection of small, weak targets in electro-optical imagery is a challenging problem, particularly in the presence of nonstationary backgrounds. In this paper, we propose a theoretical justification for the loss in performance for slowly moving targets in regions of benign clutter. In particular, a K-means segmentation technique is developed using a fixed number of classes and a variety of local scene features. This class map is used by a 3D matched filter to estimate a covariance matrix for each region; the filter then whitens each region using the appropriate class map. The algorithm is applied to actual sensor data containing heterogeneous scenes taken from the Airborne IR Measurement Systems sensor. Performance is assessed through measured SNRs and receiver operating characteristic (ROC) curves based on a suite of injected targets.
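The segmentation-plus-whitening idea can be sketched on a synthetic two-class scene: cluster pixels into a fixed number of classes, estimate a covariance per class, and whiten each region with its own transform. This is only an illustration of the concept, not the paper's 3D matched filter or the AIRMS processing chain; the data, features, and minimal k-means are our own stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-region "scene": each region has its own clutter
# covariance (stand-ins for heterogeneous scene regions).
region_a = rng.multivariate_normal([0, 0], [[4.0, 1.5], [1.5, 1.0]], 500)
region_b = rng.multivariate_normal([10, 10], [[1.0, -0.5], [-0.5, 2.0]], 500)
pixels = np.vstack([region_a, region_b])

def kmeans(x, k, iters=20):
    """Minimal k-means with a fixed number of classes."""
    centers = x[np.linspace(0, len(x) - 1, k).astype(int)]
    for _ in range(iters):
        labels = ((x[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
        centers = np.array([x[labels == j].mean(0) for j in range(k)])
    return labels

labels = kmeans(pixels, 2)

# Per-class covariance estimate, then whiten each region with its own
# transform: cov = L L^T, so W = inv(L) gives W cov W^T = I.
whitened = np.empty_like(pixels)
for j in range(2):
    sel = labels == j
    w = np.linalg.inv(np.linalg.cholesky(np.cov(pixels[sel].T)))
    whitened[sel] = (pixels[sel] - pixels[sel].mean(0)) @ w.T
```

After whitening, each region's sample covariance is (numerically) the identity, which is what lets a subsequent matched filter treat the residual clutter as white.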
There are many different tracking methods and algorithms in development and use, and it would be useful to compare them to determine which tracking algorithms are most appropriate for different situations. To make such comparisons possible, a common database of input data has been set up. This database includes raw image sequences, filtered image sequences and detection lists; the data may be used by TBD and TAD trackers. The raw image sequences have been reprocessed to correct bad channels and remove pointing and stabilization errors, and are available to the tracking community. Researchers are invited to make use of this data and present their results. This paper describes the database, how to access it, and the results of using the Multiple Hypothesis Tracking (MHT) algorithm on the image sequences contained within. A description of the signal processing used on the images and of the MHT algorithm is also included.
The use of track-before-detect (TBD) has been shown to be effective for enhanced target detection in certain cases, but performance has been elusive in others. The TBD concept enhances detection by integrating multiple frames of data collected over a period of several seconds. Since the integration period is so long, the target's potential motion must be compensated for; TBD hypothesizes different target trajectories to account for target motion. The problem in analyzing or quantifying TBD behavior is the statistical correlation or dependence among the hypotheses. In this paper we present a theoretical framework for analyzing the TBD concept by utilizing the theory of simultaneous statistical inference. We address the problem by adapting the method of simultaneous confidence intervals developed by Henry Scheffe. This method uses the fact that the confidence ellipsoid of a Gaussian random vector may be constructed as a hyperplane in the detection space; we declare a detection whenever the observation falls on one side of the hyperplane. Then, using the Scheffe construction, we can approximate the distribution of the TBD detection statistic. Finally, we establish performance bounds and quantify the relationship between the number of hypotheses and detection performance.
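A toy Monte Carlo illustrates the Scheffe-style simultaneous bound that motivates this framework (our illustration, not the paper's construction): for x ~ N(0, I_d), the supremum over all unit-vector hypotheses a of the correlated statistics aᵀx is simply ‖x‖, so the false-alarm rate of "declare if any hypothesis exceeds c" is a chi-distribution tail, not d independent Gaussian tails.

```python
import math
import random

random.seed(1)
d, trials = 3, 20000
c = 2.5  # detection threshold on each hypothesis statistic

# Under H0, x ~ N(0, I_d).  sup over unit vectors a of a^T x is ||x||,
# so the simultaneous false-alarm rate equals P(chi_d > c)
# (about 0.10 for d = 3, c = 2.5).
exceed = 0
for _ in range(trials):
    x = [random.gauss(0.0, 1.0) for _ in range(d)]
    if math.sqrt(sum(v * v for v in x)) > c:
        exceed += 1
mc_rate = exceed / trials
```

The point of the construction is that the dependence among hypotheses is handled exactly by one norm statistic, rather than by a union bound over many correlated tests.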
In an earlier conference, we introduced a powerful class of temporal filters which have outstanding signal-to-clutter gains in evolving cloud scenes. The basic temporal filter is a zero-mean damped sinusoid, implemented recursively. Our final algorithm, a triple temporal filter, consists of a sequence of two zero-mean damped sinusoids followed by an exponential averaging filter, along with an edge suppression factor. The algorithm was designed, optimized and tested using a real-world database. We applied the Simplex algorithm to a representative subset of our database to find an improved set of filter parameters. Analysis led to two improved filters: one dedicated to benign clutter conditions and the other to cloud clutter-dominated scenes. In this paper, we demonstrate how a fused version of the two optimized filters further improves performance in severe cloud clutter scenes. The performance characteristics of the filters are detailed with specific examples and plots. Real-time operation has been demonstrated on laboratory IR cameras.
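One stage of a zero-mean damped-sinusoid filter can be realized as a second-order recursion; a sketch under our own assumptions follows (the pole radius r and frequency omega are illustrative, and the actual triple filter cascades two such stages with an exponential averager and an edge suppression factor, none of which is reproduced here). The (1 − z⁻¹) numerator forces zero DC gain, which is what makes the impulse response zero-mean and rejects static background.

```python
import math

def damped_sinusoid_filter(r, omega):
    """Recursive 2nd-order IIR stage whose impulse response is a damped
    sinusoid.  The (1 - z^-1) numerator gives zero DC gain, so static
    clutter is rejected while temporal transients pass."""
    a1, a2 = 2.0 * r * math.cos(omega), -r * r
    y1 = y2 = x1 = 0.0
    def step(x):
        nonlocal y1, y2, x1
        y = a1 * y1 + a2 * y2 + x - x1
        y2, y1, x1 = y1, y, x
        return y
    return step

# A constant (clutter-like) input decays toward zero output...
f = damped_sinusoid_filter(r=0.9, omega=0.3)
out_const = [f(5.0) for _ in range(300)]

# ...while a transient (impulse) produces a strong, zero-mean response.
g = damped_sinusoid_filter(r=0.9, omega=0.3)
out_impulse = [g(1.0)] + [g(0.0) for _ in range(299)]
```

The recursive form is what makes per-pixel real-time operation cheap: two multiplies and a few adds per frame per stage.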
The detection of small targets, especially within a single frame of data, is an important problem in image processing, and much work has been devoted to it. Among the proposed approaches, nonlinear morphology-based methods have shown advantages over traditional linear methods for target detection. Many of these methods are based on local background estimation and threshold computation, so a thresholding procedure is required. However, none of them shows how to obtain an appropriate threshold, or what the relation is between the threshold and the detection performance of the detector. In this paper, we use the contrast between a target and its local background as the characteristic measurement of targets and background. Here, this contrast is the ratio of the target residual, obtained by subtracting the local background estimate from the original image, to the local background estimate. By analyzing the difference between the target-to-local-background contrast and the background-to-local-background contrast, we can determine an appropriate threshold that achieves a high probability of detection while producing a very small probability of false alarm. Experiments on a large set of real sea-surface ship images demonstrate the effectiveness of the method presented in this paper.
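A minimal 1-D sketch of the residual-over-background contrast measure: estimate the local background with a gray-scale morphological opening, take the residual, and threshold the contrast ratio. The opening-based estimator, window size, and the 0.3 threshold are illustrative choices of ours, not the paper's.

```python
def gray_open(signal, size):
    """Gray-scale opening (erosion then dilation) with a flat 1-D
    structuring element: a local background estimate that removes
    bright structures narrower than `size`."""
    half = size // 2
    def erode(s):
        return [min(s[max(0, i - half):i + half + 1]) for i in range(len(s))]
    def dilate(s):
        return [max(s[max(0, i - half):i + half + 1]) for i in range(len(s))]
    return dilate(erode(signal))

# Flat background of level 10 with a narrow bright "target" at index 20.
scene = [10.0] * 40
scene[20] = 16.0

background = gray_open(scene, size=7)
residual = [s - b for s, b in zip(scene, background)]
# Contrast = target residual over the local background estimate.
contrast = [r / b for r, b in zip(residual, background)]

threshold = 0.3   # declare a detection where contrast exceeds this
detections = [i for i, c in enumerate(contrast) if c > threshold]
```

Because the contrast is a ratio rather than an absolute residual, the same threshold behaves consistently across bright and dark sea backgrounds, which is what makes analyzing its relation to detection performance tractable.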
This paper presents the methodology and assessment of the passive and active sensor feature selection algorithms of interest to the Discriminating Interceptor Technology Program (DITP), as applied to aimpoint selection. The analysis identifies the performance achieved by utilizing individual sensor features and multi-sensor feature fusion. Traditional methods of determining the proper aimpoint have depended upon identifying target characteristics such as the geometric centroid, radiometric centroid, leading edge, and trailing edge. Once these target points have been defined, the final aimpoint is selected by adding a bias to the target characteristic. However, these and similar algorithms have shown sensitivity to a priori knowledge of the threat, such as aspect angle, target length, and thermal and dynamic characteristics. This paper assesses the utility of disparate multi-sensor data for decreasing performance sensitivity to a priori knowledge of the threat. These algorithms are among the feature selection algorithms in use in the DITP for passive and active fused-sensor discrimination. The analysis utilizes simulated IR and ladar sensor data of a conical body that is initially at sufficient range to be realized as a slightly extended source on the focal plane, and is then advanced through the later phase of the engagement to the point where the target is relatively close and highly resolved. The measures of performance consist of evaluating the deviations of the estimated aimpoint versus range to target, orientation angle, and aspect angle for the feature selection algorithms considered.
Glint is a major error source for radar tracking at short ranges. It occurs because a target is not a single reflector but a collection of reflectors; the returns from these multiple sources may interfere, leading to apparent shifts in the target position. Glint is non-Gaussian and correlated in nature, and may even have infinite variance. Previous work on glint includes the use of the score method to reduce the effect of large errors, and the use of filters which account for the correlation. Only limited work has been done using multiple model techniques, and the importance of glint in missile applications justifies further work in the area. The work presented here investigated several multiple hypothesis techniques which seemed particularly suited to this problem. One approach was to approximate the error distribution with a Gaussian mixture. This generates a multiple hypothesis filter, where each hypothesis uses a simple extended Kalman filter; this was extended to take account of the correlation structure of the glint. These methods were tested against a fairly realistic representation of glint which provided a stressing test for the filters.
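The Gaussian-mixture idea can be sketched as follows: approximate the heavy-tailed glint density with a narrow "nominal" component plus a low-probability wide "spike" component, and weight the per-component (Kalman) updates by the posterior probability of each component given the measurement residual. The weights and sigmas below are illustrative, not fitted to real glint.

```python
import math

# Two-component zero-mean Gaussian mixture approximating glint noise:
# a dominant narrow component plus a rare wide component for the
# heavy "glint spike" tail (illustrative parameters).
WEIGHTS = (0.9, 0.1)
SIGMAS = (1.0, 10.0)

def gauss_pdf(x, sigma):
    return math.exp(-0.5 * (x / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def hypothesis_weights(residual):
    """Posterior probability of each mixture component given a
    measurement residual: the weights a multiple-hypothesis filter
    would use to combine its per-component Kalman updates."""
    likes = [w * gauss_pdf(residual, s) for w, s in zip(WEIGHTS, SIGMAS)]
    total = sum(likes)
    return [l / total for l in likes]

# A small residual is attributed to the narrow component, so the update
# behaves like an ordinary Kalman filter; a large glint spike flips the
# weights to the wide component, automatically de-weighting the outlier.
w_small = hypothesis_weights(0.5)
w_spike = hypothesis_weights(25.0)
```

This soft re-weighting is what gives the mixture filter its robustness to spikes without discarding measurements outright.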
Advanced track-after-detect (TAD) trackers are able to operate with detection thresholds as low as 9.5 dB with the use of track features. At lower thresholds, the increased number of false alarms inhibits track confirmation. In order to track weaker targets, the target SNR must be increased prior to detection. Assuming that the SNR has been increased as much as possible through signal processing, a further increase can be obtained by preceding the detection threshold with a track-before-detect (TBD) algorithm. This paper analyzes the performance of the cascade of a TBD and a TAD tracking algorithm.
Selection of an appropriate dynamical model for approximating the target motion during a maneuver is critical to the design of a state estimator that reliably tracks a target executing complex maneuvers. Due to the diversity of the possible maneuvers, a number of different models may need to be included in the design of a satisfactory tracking algorithm, with a corresponding increase in implementation complexity. A novel target motion model, termed the Equivalent Velocity Tracking Model (EVTM), is proposed in this paper which is capable of providing good approximations to target motions during different types of maneuvers. The design of a target tracking architecture that utilizes the EVTM and employs a neural network-assisted Kalman filter is outlined. Quantitative results from several tracking experiments are provided to illustrate the performance benefits resulting from the use of the EVTM in the design, and are compared with the performance of other algorithms based on traditional models and multiple model approaches.
A new design for a Bayesian field track-before-detect processor was studied in a joint effort by Metron, Inc. and Raytheon Systems Company. The principal design distinction is that bins in state space are not invoked as in many other field approaches; instead, sampling and interpolation of the log-odds density field is employed. The plant model, a Markov process, and the measurement log-likelihoods are thereby both more accurately represented, avoiding the losses from diffusion and misrepresentation of likelihoods common to other approaches. A companion performance model, based upon the statistics of the log-odds density at the true target position and the statistics of the ambient field elsewhere, provided predictions for expected log-odds density growth, time to declare a target at a given false-alarm rate, and other critical dependencies. Experiments were run using synthetic data based upon Gaussian target and noise models, with parameters chosen to approximate an IRST cruise missile detection problem. Comparisons between the actual Bayesian tracking results and the analytical predictions showed good agreement. In addition, single-target comparative runs using identical synthetic data were made between the Raytheon multiple-hypothesis tracker (MHT) and the present single-target Bayesian field processor, verifying both that the processing gain of the Bayesian processor was close to optimal and that the resulting gain in single-target sensitivity relative to the unaided MHT baseline was approximately 6 dB. It is believed that such a sensitivity difference is characteristic of any track-file based method compared against the present Bayesian field approach. It does not, however, address the added multiple-target, interacting multiple model (IMM), and other functions which MHT provides.
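The log-odds mechanics can be sketched at a single state-space sample (a scalar toy under Gaussian target and noise models, as in the experiments; the amplitude, noise, and threshold values are illustrative, and the actual processor maintains and interpolates a whole field with a Markov plant model). Each frame adds a log-likelihood ratio, so the log-odds grows linearly in expectation at the target and drifts downward elsewhere; the expected growth rate, amp²/(2σ²) per frame, directly predicts the time to declare at a given threshold.

```python
import random

random.seed(2)

amp, sigma = 1.0, 1.0   # per-frame target amplitude and noise sigma

def llr(z):
    """Log-likelihood ratio of one Gaussian measurement sample."""
    return (amp * z - 0.5 * amp ** 2) / sigma ** 2

def frames_to_declare(target_present, threshold=20.0, max_frames=1000):
    """Accumulate per-frame LLRs (the log-odds update at one sample of
    the state space) until the declare threshold is crossed."""
    log_odds = 0.0
    for t in range(1, max_frames + 1):
        z = random.gauss(amp if target_present else 0.0, sigma)
        log_odds += llr(z)
        if log_odds > threshold:
            return t
    return None

# Expected growth at the target is amp^2/(2 sigma^2) = 0.5 per frame,
# so declaration should take roughly threshold / 0.5 = 40 frames;
# off-target, the log-odds drifts down and never declares.
t_hit = frames_to_declare(True)
t_miss = frames_to_declare(False)
```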
Recognizing this, exploratory efforts were undertaken to employ the track-before-detect processor as a front end to the Raytheon MHT. These achieved target detection with a single false alarm in a realistically sized surveillance space, suggesting that this hybrid architecture may provide a good design option, one that complements the sensitivity of the Bayesian field approach with the robustness, efficiency, and added multiple-target capabilities of the MHT.
Keywords: Bayesian, tracking, detection, MHT, track-before-detect
In this paper, radar/sonar waveform selection is investigated from a system engineering viewpoint using the hybrid conditional averaging (HYCA) method. HYCA is a recently developed technique for evaluating tracking performance, and is meeting with increased acceptance due to its ability to work with the dynamic interaction between tracking and imperfect detection. In a previous paper, HYCA was used as an analysis tool to select among various apportionments of constant-frequency and linear frequency-sweep waveforms, and the effect of missed detections was incorporated; the tracking mechanism was taken to be a Kalman filter. In this paper our analysis is refined to deal with both false alarms and an improved detection model. With respect to the former, the Kalman estimator has been replaced by a probabilistic data association filter, in which Poisson-distributed false alarms may be included in a natural way. With respect to the latter, the resolution cells have been redefined according to a tessellating grid in range and range-rate, with the effect of detections in neighboring cells caused by ambiguity-function 'sidelobes' and over/undersampling accounted for.
A seeker is one of the most significant components in a missile guidance system. Because an advanced missile is required to operate in highly dynamic scenarios, the ability to handle uncertainties is important for the filter of a missile seeker. There are various sources of uncertainty; for instance, when a nonlinear system is linearized, the unmodeled high-frequency dynamics can corrupt the filtering process as a large persistent estimation error accumulates. Facing these uncertainties, we propose a robust filter for a seeker in which the uncertainties are precompensated in the design. First, the uncertainty in a linear observation equation, which is the approximation of a nonlinear equation, is replaced by an equivalent noise process. The equivalent noise process is added to the nominal system and an auxiliary system is formed. A Kalman filter is designed for the auxiliary system so that an unbiased estimate of the system can be reached. Because the uncertainty in the original system is compensated, the filter for the auxiliary system is robust for the original system in the sense that it provides an error bound for the nominal system with admissible uncertainties.
The significant challenge in long-range, low-SNR IR target acquisition is performing reliable target detection and classification while maintaining a reasonable false alert rate. Existing IR systems experience excessive amounts of clutter and false alerts. A target motion discriminator (TMD) technique, based on the interacting multiple model (IMM) estimator using probabilistic data association with amplitude information, is presented to exploit the consistency of position and motion information observed over multiple scans to form a robust classification decision. Several motion class models reside within the TMD to provide accurate target state estimates and target likelihood functions for both maneuvering and non-maneuvering contacts. Classification performance is dramatically enhanced by using a 'no target' model to reject contacts which exhibit erratic motion, i.e., motion inconsistent with a target motion model, while promoting contacts with consistent motion. The accurate estimation of target dynamics obtained by the IMM approach provides the capability to reject clutter and reliably detect a dim threat. A robust sequential likelihood ratio test which minimizes the decision time and improves target declaration performance is developed and demonstrated using real data collected under various environmental conditions.
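The declaration logic at the end can be sketched as a Wald sequential probability ratio test, which is the classical sequential likelihood ratio test minimizing expected decision time (a generic sketch, not the paper's version: the Gaussian hypotheses, error rates, and noise-free test streams below are our own illustrative choices).

```python
import math

def sprt(samples, mu0=0.0, mu1=1.0, sigma=1.0, alpha=0.01, beta=0.01):
    """Wald sequential probability ratio test between N(mu0, sigma^2)
    (clutter) and N(mu1, sigma^2) (target).  The thresholds follow from
    the allowed false-alarm rate alpha and miss rate beta."""
    upper = math.log((1 - beta) / alpha)   # declare target above this
    lower = math.log(beta / (1 - alpha))   # declare clutter below this
    llr = 0.0
    for n, z in enumerate(samples, 1):
        llr += ((mu1 - mu0) * z - 0.5 * (mu1 ** 2 - mu0 ** 2)) / sigma ** 2
        if llr >= upper:
            return "target", n
        if llr <= lower:
            return "clutter", n
    return "undecided", len(samples)

# Noise-free streams give a deterministic check; real contact data
# (e.g. the IMM likelihoods) would be noisy.
decision_t, n_t = sprt([1.0] * 50)
decision_c, n_c = sprt([0.0] * 50)
```

Unlike a fixed-sample test, the SPRT stops as soon as the evidence is decisive, which is why it minimizes the expected decision time at the specified error rates.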
In this paper we present the development of a variable structure interacting multiple model estimator for tracking groups of ground targets on constrained paths using moving target indicator reports obtained from an airborne sensor. The targets are moving along a highway, with varying inter-visibility due to changing terrain conditions. In addition, the roads can branch, merge or cross, and some of the targets may also move in an open field. This constrained motion estimation problem is handled using an IMM estimator with varying mode sets depending on the topography. The number of models in the IMM estimator, their types and their parameters are modified adaptively, in real time, based on the estimated position of the target and the corresponding road/visibility conditions. This topography-based variable structure mechanism eliminates the need for carrying all possible models throughout the entire tracking period, as in the standard IMM estimator, improving performance and reducing computational load.
A general multiple-model estimator with a variable structure, called the likely-model set (LMS) algorithm, is presented. It uses a set of models that are not unlikely to match the system mode in effect at the given time. Different versions of the algorithm are discussed. In the simplest version, the model set is made adaptive by deleting all unlikely models and activating all models to which a principal model may jump, so as to anticipate possible system mode transitions. The generality, simplicity and ease of design and implementation of the LMS estimator are illustrated via an example of tracking a maneuvering target and an example of fault detection and identification. A comparison of its cost-effectiveness with other fixed-structure and variable-structure multiple-model estimators is given.
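The simplest adaptation rule described above (delete unlikely models, activate jump-reachable models of principal ones) can be sketched as set manipulation; the model names, probability thresholds, and mode-jump graph here are hypothetical, chosen only to make the rule concrete.

```python
def adapt_model_set(active, probs, transitions, unlikely=0.05, principal=0.3):
    """Likely-model-set style adaptation: drop models whose posterior
    probability is below `unlikely`, and activate every model reachable
    in one mode jump from a principal model (prob >= `principal`)."""
    kept = {m for m in active if probs.get(m, 0.0) >= unlikely}
    for m in active:
        if probs.get(m, 0.0) >= principal:
            kept |= set(transitions.get(m, ()))
    return kept

# Hypothetical mode graph: constant velocity (CV) can jump to either
# turn model; the turn models can jump back to CV.
transitions = {"CV": ["turn_left", "turn_right"],
               "turn_left": ["CV"], "turn_right": ["CV"]}
active = {"CV", "turn_left"}
probs = {"CV": 0.93, "turn_left": 0.02}   # turn_left has become unlikely
new_set = adapt_model_set(active, probs, transitions)
```

Here "turn_left" is deleted as unlikely, but both turn models are re-activated because the principal "CV" model may jump to them, so the next cycle is ready for either maneuver onset.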
Tracking maneuvering targets is a complex problem which has generated a great deal of effort over the past several years. It is now well established that, in terms of tracking accuracy, the Interacting Multiple Model (IMM) algorithm, in which state estimates are mixed, performs significantly better for maneuvering targets than other types of filters. However, the complexity of the IMM algorithm can prohibit its use in applications where simpler algorithms cannot provide the necessary accuracy but which cannot afford the computational load of the IMM algorithm. This paper presents an evaluation of the tracking accuracy of a multiple model track filter using three different constant-velocity models running in parallel and a maneuver detector. The output estimate is defined by selecting the model whose likelihood function is below a target maneuver threshold.
In this paper we investigate an adaptive interacting multiple model (AIMM) tracker using the extended Kalman filter. This adaptive algorithm is based on the interacting multiple model (IMM) tracking technique with the addition of an adaptive acceleration model to track behavior that falls in between the fixed model dynamics. In previous research, we found that the adaptive model matches the true system dynamics more closely when the target kinematics lie in between the fixed models, thus improving the overall performance of the tracking system. We also showed that the AIMM outperforms other existing adaptive approaches while reducing computational complexity. In this paper, we further investigate these superior qualities of the AIMM by considering a more realistic radar-tracking scenario where monopulse radar range, azimuth, and elevation measurements are processed using extended Kalman filters in the AIMM. Here a more complex 3D simulation is implemented instead of the simplified 2D problem considered in our previous research. Again, the results show that the AIMM outperforms the classical IMM when the target is maneuvering.
The dynamics of many physical systems are nonlinear and non-symmetric. The motion of a missile, for example, is strongly determined by aerodynamic drag, whose magnitude is a function of the square of the speed. Nonlinearity can also arise from the coordinate system used, such as spherical coordinates for position. If a filter is applied to these types of systems, the distribution of its state estimate will be non-symmetric. The most widely used filtering algorithm, the Kalman filter, utilizes only the mean and covariance and does not maintain or exploit the symmetry properties of the distribution. Although the Kalman filter has been successfully applied to many highly nonlinear and non-symmetric systems, this has been achieved at the cost of neglecting a potentially rich source of information. In this paper we explore methods for maintaining and utilizing information over and above that provided by means and covariances. Specifically, we extend the Kalman filter paradigm to include the skew and examine the utility of maintaining this information. We develop a tractable, convenient algorithm which can be used to predict the first three moments of a distribution. This is achieved by extending the sigma point selection scheme of the unscented transformation to capture the mean, covariance and skew. The utility of maintaining the skew and using nonlinear update rules is assessed by examining the performance of the new filter against a conventional Kalman filter in a realistic tracking scenario.
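The sigma-point idea that the extension builds on can be illustrated in one dimension: a small set of deterministically chosen points and weights that exactly reproduce a given mean and variance (the paper's scheme adds further constraints so the third moment is matched as well). The kappa parameter and the numbers below are illustrative assumptions, not values from the paper.

```python
import math

# 1-D sketch of unscented-transformation sigma points that match a given
# mean and variance. kappa is a tuning parameter chosen for illustration.

def sigma_points(mean, var, kappa=2.0):
    n = 1  # state dimension
    spread = math.sqrt((n + kappa) * var)
    pts = [mean, mean + spread, mean - spread]
    wts = [kappa / (n + kappa), 0.5 / (n + kappa), 0.5 / (n + kappa)]
    return pts, wts

def moments(pts, wts):
    m = sum(w * p for w, p in zip(wts, pts))
    v = sum(w * (p - m) ** 2 for w, p in zip(wts, pts))
    return m, v

pts, wts = sigma_points(mean=3.0, var=4.0)
print(moments(pts, wts))  # recovers the prescribed mean and variance
```

Because the points are symmetric about the mean, this basic set has zero skew; the skew-capturing scheme described in the paper must break that symmetry.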
This paper describes a theoretically rigorous, systematic, and unified approach to the problem of estimating the numbers, identities, and kinematics of multiple targets observed by multiple sensors. The approach, which is based on a new statistical approach called finite-set statistics (FISST), is motivated in part by the following observation: When the number of targets is unknown, we encounter fundamental conceptual and practical difficulties when trying to do Bayes-optimal multitarget filtering and estimation in accordance with standard statistical reasoning. We describe these difficulties, outline the FISST approach to resolving them, and illustrate FISST by applying it to a model problem due to Oliver Drummond.
The integration of multiple sensors for target tracking has been intensely investigated in recent years. The techniques for integrating multiple sensors are complex but have the potential to provide very accurate state estimates. For most systems, each sensor provides its information to a central location where the integration is performed. This approach is typically employed for a single platform, and the resulting composite track can be very accurate when compared to the individual sensor tracks. Additional platforms can also contribute their information to improve this composite track. This composite track has the potential to enhance system decisions and to provide targeting information not otherwise available. This paper presents algorithms for the composite tracking of maneuvering targets through the use of effective multisensor-multiplatform integration, along with simulation results illustrating the benefits of the proposed approach.
The benchmark problem addresses the efficient allocation of an agile beam radar in the presence of highly maneuverable targets and radar ECM. The multisensor benchmark tracking solution is aided by the presence of a scanning IRST. This paper presents methods for applying an IMM/MHT tracker to this multiple sensor tracking and resource allocation problem. The paper discusses the manner in which IMM/MHT tracking and data association methods lead to efficient agile beam radar allocation and presents results showing that this approach is significantly more efficient than previously proposed methods when only radar data are used. It presents a hybrid multisensor tracking architecture in which an IMM/MHT tracker operating on IRST data provides the global IMM/MHT tracker with selected observations. Simulation results quantify the potential improvement from the use of advanced tracking methods and IRST data to enhance agile beam radar tracker capability.
This paper provides upper and lower bounds on the tracking accuracy achievable through a distributed network of sensors providing nonlinear measurements of a moving target to an extended Kalman filter. Those bounds follow from analytical relations derived for the discrete Kalman filter covariance for a fixed target. The estimation of achievable accuracy leads to an observability and performance analysis as a function of the sensor locations. The relationships developed are applied to the case of a distributed network of 2D search sensors measuring range and azimuth to a target.
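The scalar case gives a feel for why a fixed target admits closed-form covariance relations: with no motion and no process noise, the Kalman covariance follows a simple information-form recursion. The sketch below uses illustrative numbers, not the paper's sensor model.

```python
# Information-form covariance recursion for a fixed scalar target:
# P_k^{-1} = P_{k-1}^{-1} + 1/r, so k measurements give P_k = 1/(1/p0 + k/r).
# p0, r, and k below are illustrative values.

def static_target_variance(p0, r, k):
    """Variance after k independent scalar measurements (noise variance r)
    of a fixed target with prior variance p0."""
    info = 1.0 / p0          # prior information
    for _ in range(k):
        info += 1.0 / r      # each measurement contributes 1/r of information
    return 1.0 / info

print(static_target_variance(p0=1.0, r=1.0, k=2))  # 1/(1 + 2) = 1/3
```

With a diffuse prior the variance approaches r/k, the familiar sample-mean rate, which is the kind of closed-form behavior the paper's bounds build on.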
The uncertainty in the time of ballistic missile transition from boost to coast phase poses significant problems to tracking sensors. The Interacting Multiple-Model (IMM) filter monitors this transition through the application of two Kalman filters, whose inputs and outputs are weighted by computed probabilities that the missile is in boost or coast phase. The ability of the IMM to produce accurate tracks early in the midcourse phase is compared with that for simpler models that use a 'nearest neighbor' approach to determine the best fit to sensor measurements. Track outputs -- state vectors and error covariance matrices -- are handed off in the post-boost phase to midcourse sensors, which propagate tracks using either centralized measurement fusion or tracklets; that is, tracks computed so that their errors are not cross-correlated with those from other tracks of the same target.
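The probability weighting described above can be sketched for the two-model boost/coast case: the mode probabilities are propagated through a Markov transition matrix and then updated with each Kalman filter's measurement likelihood. The transition matrix and likelihood values below are illustrative assumptions, not parameters from the paper.

```python
# Hedged sketch of the IMM mode-probability bookkeeping for a
# boost/coast model pair. All numbers are illustrative.

def imm_mode_probs(mu, trans, likelihoods):
    """One cycle of mode-probability prediction and update.

    mu          -- [P(boost), P(coast)] from the previous step
    trans       -- trans[i][j] = P(mode j at k | mode i at k-1)
    likelihoods -- measurement likelihood under each mode's Kalman filter
    """
    # Predicted mode probabilities (Chapman-Kolmogorov step).
    pred = [sum(mu[i] * trans[i][j] for i in range(2)) for j in range(2)]
    # Bayes update with each filter's measurement likelihood.
    post = [pred[j] * likelihoods[j] for j in range(2)]
    s = sum(post)
    return [p / s for p in post]

# Coast is absorbing (no return to boost); a coast-like measurement
# shifts probability mass toward the coast model.
mu = imm_mode_probs([0.9, 0.1], [[0.95, 0.05], [0.0, 1.0]], [0.2, 1.5])
print(mu)
```

These mode probabilities are what weight the two filters' inputs and outputs; the full IMM cycle also mixes the filters' state estimates, which is omitted here.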
Many tracking and guidance problems may be formulated as a terminating stochastic game in which the distribution of outcomes is affected by the intermediate actions. Traditional techniques ignore this interaction. In this paper we develop an information gathering strategy which maximizes the expected gain of the outcome. For example, the objective could be a function of the terminal miss distance and target identity, with penalties for missing a valid target or attacking a friendly one. Several trade-offs are addressed: the increased information available from taking more measurements; the fact that an increased number of measurements may adversely affect the chance of success; and the fact that later measurements may be more informative but may also be of little use, since there may not be enough time available to react to this extra information. The problem is formulated so that we are required to choose, under uncertainty, an alternative from a set of possible decisions. This set has a discrete uncertainty as to the number of measurements to be taken and a continuous uncertainty as to where and when the measurements should be taken. Preferences over consequences are modeled with a utility function. We propose to choose as optimal the alternative which maximizes expected utility. A simulation-based approximation to the solution of this stochastic optimization problem is outlined. This relies on recent developments in dimension-swapping Markov chain Monte Carlo (MCMC) techniques. The use of MCMC methodology allows us to explore the expected utility surface and thus select a measurement strategy. The resulting algorithm is demonstrated on a simple guidance problem.
This paper describes an algorithm using a discrimination-based sensor effectiveness metric for sensor assignment in multisensor multitarget tracking applications. The algorithm uses interacting multiple model Kalman filters to track airborne targets with measurements obtained from two or more agile-beam radar systems. Each radar has capacity constraints on the number of targets it can observe on each scan. For each scan the expected discrimination gain is computed for the sensor-target pairings. The constrained globally optimum assignment of sensors to targets is then computed and applied. This is compared to a fixed assignment schedule in simulation testing. We find that discrimination-based assignment improves track accuracy as measured by both the root-mean-square position error and a measure of the total covariance.
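The globally optimum assignment step can be sketched as follows: choose the sensor-to-target pairing that maximizes total expected discrimination gain. The gain matrix below is made up for illustration, and brute-force enumeration stands in for the polynomial-time assignment solvers (e.g. auction or Jonker-Volgenant methods) a fielded system would use.

```python
from itertools import permutations

# Toy sketch of globally optimal sensor-to-target assignment.
# gain[s][t] = expected discrimination gain if sensor s observes target t
# (illustrative numbers, one target per sensor for simplicity).

def best_assignment(gain):
    n = len(gain)
    best_total, best_perm = float("-inf"), None
    for perm in permutations(range(n)):       # O(n!) -- fine only for tiny n
        total = sum(gain[s][perm[s]] for s in range(n))
        if total > best_total:
            best_total, best_perm = total, perm
    return best_perm, best_total

gain = [[4.0, 1.0, 0.5],
        [2.0, 3.0, 1.0],
        [1.0, 2.0, 2.5]]
print(best_assignment(gain))  # pairing with the largest summed gain
```

The capacity constraints mentioned in the abstract would enter as restrictions on which permutations (more generally, which assignment matrices) are feasible.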
The design of a tracker has a significant effect on its performance. Design is, however, not a trivial task and requires much experience. This paper presents theoretically solid designs of tracking filters based on a newly-introduced concept of tracking probability. Specifically, analytic results for the selection of the transition probability, the initial tracking probability, and the thresholds for track confirmation and termination are presented. Supporting simulation results are also given for the design of the recently-developed intelligent probabilistic data association filter.
The use of a multiframe moving window for the solution of the data association problem in multisensor-multitarget tracking requires the repeated solution of a multidimensional assignment problem. This problem differs from its predecessor only by the addition of the new scan of measurements. In addition, the multidimensional assignment problem is an NP-hard problem which is large scale and sparse yet has 'real-time' solution requirements. The use of relaxation techniques to solve the multidimensional assignment problem has proven to be an effective scheme within the context of a multiframe moving window. This work demonstrates the improved efficiency that is obtained by the use of hot starts in conjunction with a relaxation method in the data association problem. The idea is to use solution information from the previous frame in conjunction with new information from the current problem to hot start the data association problem. Computational results for various tracking scenarios have shown that hot starts can significantly reduce the amount of time needed to solve the data association problem without affecting solution quality.
Multiple tracks for a single target arise in skywave over-the-horizon radar (OTHR) tracking, due to multiple ionospheric propagation paths between the target and the radar sites. A multihypothesis track fusion algorithm was developed and reported in an earlier paper, where all possible path-dependent track-to-target association hypotheses were constructed, and the probability of each hypothesis evaluated [D. J. Percival and K. A. B. White, Proc. SPIE 3163, pp. 363-374, 1997]. The implementation of recursive hypothesis evaluation and fused track state estimation is the subject of this paper. Sources of multipath track dependence are identified, and their treatment discussed. The time evolution of the track-to-target hypothesis probabilities and target state estimates is illustrated for example multitarget OTHR tracking scenarios using a simple stochastic ionospheric model which admits multipath propagation. Keywords: multipath track fusion, multihypothesis tracking, over-the-horizon radar
Monopulse radar tracking of target elevation for objects flying close to a reflecting surface is difficult due to interference between the direct echo and surface-reflected target echoes. Ideally, target height could be estimated directly from the probability density for monopulse measurements given target range and height. This direct approach is usually infeasible because the density generally has many false peaks, so there are multiple solutions for target height. This paper describes a nonlinear filter that exploits this behavior to estimate target height. The filter recursively computes the probability density for height and vertical velocity conditioned on the monopulse measurement sequence. The time evolution of this density between measurements is determined by a Fokker-Planck partial differential equation. This is solved in real-time using a finite difference scheme. The monopulse measurement probability density is computed from a physical model and used to update the conditional target state density using Bayes' rule. In simulation testing for a generic C-band shipboard radar the filter is able to reliably acquire and track transonic targets through mild maneuvers with about 12 m root-mean-square height accuracy.
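The predict/update cycle described above can be sketched in its simplest form: an explicit finite-difference step of a 1-D Fokker-Planck equation (reduced here to pure diffusion) for the between-measurement evolution, followed by a pointwise Bayes update. The grid, diffusion coefficient, and likelihood below are illustrative assumptions, not the paper's monopulse measurement model.

```python
# 1-D sketch of grid-based nonlinear filtering:
# predict with dp/dt = (q/2) d2p/dx2, then update with Bayes' rule.
# Illustrative grid and coefficients only.

def fokker_planck_step(p, dx, dt, q):
    """One explicit finite-difference step; stability needs q*dt/(2*dx**2) <= 0.5."""
    c = 0.5 * q * dt / dx**2
    out = p[:]
    for i in range(1, len(p) - 1):
        out[i] = p[i] + c * (p[i + 1] - 2.0 * p[i] + p[i - 1])
    return out

def bayes_update(p, likelihood, dx):
    """Multiply the predicted density by the measurement likelihood, renormalize."""
    post = [pi * li for pi, li in zip(p, likelihood)]
    s = sum(post) * dx
    return [pi / s for pi in post]

# Diffuse a point mass one step, then apply a measurement that
# rules out the left half of the grid.
p = fokker_planck_step([0.0, 0.0, 1.0, 0.0, 0.0], dx=1.0, dt=1.0, q=1.0)
p = bayes_update(p, [0.0, 0.0, 1.0, 1.0, 1.0], dx=1.0)
print(p)
```

The multimodal, false-peak behavior the abstract mentions is exactly what this density-grid representation can carry and a Gaussian filter cannot.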
The sampling-based bootstrap filter is applied to the problem of maintaining track on a target in the presence of intermittent spurious objects. This problem is formulated in a multiple hypothesis framework and the bootstrap filter is applied to generate the posterior distribution of the state vector of the required target, i.e. to generate the target track. The bootstrap technique facilitates the integration of the available information in a near-optimal fashion without the need to explicitly store and manage hypotheses from previous time steps.
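A minimal sketch of one bootstrap (sampling-importance-resampling) cycle may help fix ideas: particles are propagated through the dynamics, weighted by the measurement likelihood, and resampled. The random-walk dynamics, noise levels, and measurement values are illustrative assumptions, not the paper's model.

```python
import math
import random

# Minimal 1-D bootstrap filter sketch with illustrative parameters:
# random-walk dynamics (std q) and Gaussian measurement noise (std r).

def bootstrap_step(particles, z, q=0.5, r=1.0, rng=random):
    # Propagate each particle through the dynamics (prediction).
    pred = [x + rng.gauss(0.0, q) for x in particles]
    # Weight by the measurement likelihood (unnormalized Gaussian).
    w = [math.exp(-0.5 * ((z - x) / r) ** 2) for x in pred]
    s = sum(w)
    w = [wi / s for wi in w]
    # Resample: the weighted set becomes an equally weighted posterior sample.
    return rng.choices(pred, weights=w, k=len(pred))

rng = random.Random(0)
particles = [rng.gauss(0.0, 5.0) for _ in range(2000)]
for z in [1.0, 1.2, 0.9, 1.1]:
    particles = bootstrap_step(particles, z, rng=rng)
est = sum(particles) / len(particles)
print(round(est, 2))  # posterior mean settles near the measurements
```

The hypothesis-management advantage the abstract cites comes from the fact that the particle cloud itself summarizes all past association possibilities, so no explicit hypothesis tree is stored.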
In this paper we describe a novel data association algorithm, termed m-best SD, that determines in O(mSkn^3) time the m best solutions to an SD assignment problem. This algorithm is applied to the following problem. Given line of sight measurements from S sensors, sets of complete position measurements are extracted; namely, the 1st, 2nd, ..., m-th best sets of composite measurements are determined by solving a static SD assignment problem. Utilizing the joint likelihood functions used to determine the m-best SD assignment solutions, the composite measurements are then quantified with a probability of being correct using a JPDA-like technique. Lists of composite measurements from successive scans, along with their corresponding probabilities, are then used in turn with a state estimator in a dynamic 2D assignment algorithm to estimate the states of the moving targets over time. The dynamic assignment cost coefficients are based on a likelihood function that incorporates the 'true' composite measurement probabilities obtained from the (static) m-best SD assignment solutions. We demonstrate this algorithm on a multitarget passive sensor track formation and maintenance problem, consisting of multiple time samples of line of sight measurements originating from multiple synchronized high frequency direction finding sensors. A further significance of this work is that the m-best SD assignment algorithm provides for an efficient implementation of a multiple hypothesis tracking algorithm by obviating the need for a brute force enumeration of an exponential number of joint hypotheses.
Target detection in water is achieved by optical registration and measurement of the shape of the surface disturbance of a liquid caused by the radiation pressure of ultrasound waves passing by or reflected from the object. In the first case, the effects of refraction and diffraction of a laser beam by the 'hump' are used. This method is very sensitive for detection and observation of dynamic underwater objects. In the second case the Talbot effect is used. Both methods allow detection and observation of dynamic objects inside a liquid. This report describes: the schemes and equipment used for measuring the underwater objects; the results of the 'hump's' 3D shape measurement and its use in creating a 'humped' ultrasound hologram; and the image of a small target produced by computer reconstruction of the 'humped' ultrasound hologram. This report also demonstrates the possibility of using the Talbot effect for measurement of the shape and thickness of oil films on liquid and solid surfaces and for wettability angle measurements.
Redundant arrays of independent disks (RAID) technology is an efficient way to solve the bottleneck problem between CPU processing ability and the I/O subsystem. From the system point of view, the most important metric of on-line performance is CPU utilization. This paper first calculates the CPU utilization of a system connected to a RAID level 5 subsystem using a statistical averaging method. From the simulation results of the CPU utilization of a system connected to a RAID level 5 subsystem, we can see that using multiple disks as an array to access data in parallel is an efficient way to enhance the on-line performance of a disk storage system. Using high-end disk drives to compose the disk array is the key to enhancing the on-line performance of the system.
A static multiple-model (SMM) estimation and decision algorithm has two functions: estimate the state of the system and decide which model is the best representation of the system. This paper concentrates on static multiple-model systems, that is, there is only one mathematical model applicable to a sequence of measurements, that model is one of a number of known possible mathematical models, but which one of these models is applicable is not known. In this paper, the characteristics of both estimation and decision errors of three SMM optimal algorithms are evaluated with a variety of performance measures using a Monte Carlo simulation.
There has been growing interest in tracking maneuvering point targets over the last decade. Up to now, however, no algorithm has been proposed for tracking maneuvering and bending extended targets in clutter. We develop in this paper a new approach, based on the Interacting Multiple Model combined with a new fast point-pattern matching algorithm, to solve this new tracking problem. Simulation results for a bending target having four dynamic models and possibly two point-patterns are given to illustrate the ability of this new approach to track a maneuvering and bending extended target in 2D space.
The Probabilistic Multi-Hypothesis Tracker (PMHT) of Streit and Luginbuhl uses the EM algorithm and a slight modification of the usual target-tracking assumptions to combine data association and filtering. The performance of the PMHT to date has been comparable to that of existing tracking algorithms; however, part of its appeal is a consistent and extensible statistical foundation, and it is the extension to the tracking of maneuvering targets which we explore in this paper. The basis, as with many algorithms designed for maneuvering targets, is an underlying and hidden 'model-switch' process controlled by a Markov probability structure. Performance of the modified PMHT is investigated both for maneuvering and non-maneuvering targets. The improved performance observed in the latter case is somewhat surprising.
Since the SCUD launches in the Gulf War, theater ballistic missile (TBM) systems have become a growing concern for the US military. Detection, tracking and engagement during boost phase or shortly after booster cutoff are goals that grow in importance with the proliferation of weapons of mass destruction. This paper addresses the performance of tracking algorithms for TBMs during boost phase and across the transition to ballistic flight. Three families of tracking algorithms are examined: alpha-beta-gamma trackers, Kalman-based trackers, and the interacting multiple model (IMM) tracker. In addition, a variation on the IMM that includes prior knowledge of a booster cutoff parameter is examined. Simulated data is used to compare algorithms. Also, the IMM tracker is run on an actual ballistic missile trajectory. Results indicate that IMM trackers show a significant advantage in tracking through the model transition represented by booster cutoff.
The detection of dim targets in heavy clutter requires large gains in the SCR. Gains of the required magnitude have been obtained with space-temporal processing. However, in many cases these gains are either difficult or expensive to realize. If the range to the clutter is small relative to the clutter velocity, the temporal processing will need to include scene registration and optical flow correction. Scene registration is computationally expensive, especially for large search volumes. The correction of optical flow is both expensive and typically less than satisfactory. The spectral dimension provides an alternative to the temporal dimension. Since the data in each of the spectral bands are collected simultaneously or nearly so, the problems of registration and optical flow are eliminated. This paper considers the performance of the multi-spectral IR bands. Dual-band performance results comparing space-spectral processing with space-temporal processing will be shown. An analytic model of the probability of false alarm as a function of the number of spectral bands is presented. A comparison of this model to experimental results using multi-spectral IRST data is given.
We describe a recursive temporal filter based on a running estimate of the temporal variance followed by removal of the baseline variance of each pixel. The algorithm is designed for detection/tracking of 'point' targets moving at sub-pixel/frame velocities, 0.02 to 0.50 p/f, in noise-dominated scenarios on staring IR camera data. The technique responds to targets of either polarity. A preprocessing technique, morphological in origin but implemented by median filters, further improves the S/N sensitivity of the algorithm while restricting the result to positive contrast targets. The computationally simple algorithm has been implemented in hardware and real-time operation is under evaluation. The performance is characterized by some specific examples as well as plots over our extensive database of real data. Detection down to S/N approximately 3 or less, and sensitivity to the appropriate range of velocities, is demonstrated.
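The core of such a detector can be sketched per pixel: an exponential running estimate of the temporal variance, with the pixel's baseline (noise) variance subtracted so that only excess temporal activity survives. The smoothing constant, baseline value, and frame data below are illustrative assumptions, not the paper's parameters.

```python
# Sketch of a recursive temporal-variance detector for one pixel.
# alpha, baseline, and the frame sequences are illustrative only.

def variance_filter(frames, baseline, alpha=0.2):
    """frames: per-frame values of one pixel; baseline: its noise variance."""
    mean, var, out = frames[0], 0.0, []
    for x in frames:
        d = x - mean
        mean += alpha * d                            # running temporal mean
        var = (1.0 - alpha) * (var + alpha * d * d)  # running temporal variance
        out.append(max(var - baseline, 0.0))         # excess over the baseline
    return out

# A static pixel stays at zero; a pixel crossed by a slowly moving target
# shows excess variance, regardless of target polarity (d is squared).
quiet = variance_filter([10.0] * 8, baseline=0.01)
moving = variance_filter([10, 10, 11, 12, 13, 14, 15, 16], baseline=0.01)
print(quiet[-1], moving[-1])
```

Squaring the deviation is what makes the response polarity-independent, matching the abstract's claim that targets of either contrast sign are detected.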
This paper describes the goniometric calibration of the Spatial Infrared Imaging Telescope (SPIRIT) III. The SPIRIT III radiometer is the primary instrument aboard the Midcourse Space Experiment (MSX) satellite, which was launched on 24 April 1996. The sensor consists of an off-axis reimaging telescope with a six-band scanning radiometer that covers the spectrum from midwave IR to longwave IR. The radiometer has five arsenic-doped silicon focal plane detector arrays which operate at temperatures between 11 and 13 K. These arrays consist of 8 by 192 pixels, with an angular separation between adjacent pixels of approximately 90 (mu)rad. A single-axis scan mirror can either remain fixed or operate at a constant 0.46 deg/sec scan rate to give programmable fields of regard of 1 by 0.75, 1 by 1.5, and 1 by 3 degrees. The calibration, which is based on a physical model of the sensor, uses ground and on-orbit observations to determine and separate the effects of scan-mirror encoder nonlinearity, scan-mirror readout timing and angular velocity, detector readout timing, array coalignment, and optical distortion. This paper describes the calibration methodology and gives results using observations of stellar sources acquired during on-orbit operations.
The most important, natural, and practical approach to variable-structure multiple-model estimation is the recursive adaptive model-set (RAMS) approach. It consists of two functional components: model-set adaptation and model-set-sequence conditioned estimation. This paper contributes to the second component. Specifically, a general, optimal, single-step, and highly efficient recursion for model-set-sequence conditioned estimation based on an arbitrary time-varying model-set sequence is obtained by an extension of the well-known interacting multiple-model (IMM) algorithm. This recursion provides a natural and systematic algorithm, optimal within the RAMS approach, for assigning probabilities to newly activated models and initializing the filters based on these models. In addition, an optimal and highly efficient fusion method is presented for obtaining the overall estimate from estimates based on two arbitrary, not necessarily disjoint, model sets. The optimal recursion and fusion provide a solution to the problem of model-set-sequence conditioned estimation that is fairly satisfactory for most practical situations. The results presented here have been employed in the recent development of two variable-structure MM estimators, the likely-model set and model-group switching algorithms, which are generally applicable, easily implementable, and significantly superior to the best fixed-structure MM estimators available.
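For reference, the fixed-set IMM mixing step that the recursion extends can be sketched as follows. This is the generic IMM computation, not the paper's variable-set recursion; all names and the example dimensions are illustrative.

```python
import numpy as np

def imm_mix(mu, Pi, means, covs):
    """One IMM mixing step: given model probabilities `mu`, a model
    transition matrix `Pi`, and per-model estimates (means, covs),
    compute the predicted model probabilities and the mixed initial
    conditions that re-initialize each model's filter."""
    c = Pi.T @ mu                        # predicted model probabilities
    w = (Pi * mu[:, None]) / c[None, :]  # mixing weights w[i, j]
    n = len(mu)
    mixed_means, mixed_covs = [], []
    for j in range(n):
        m = sum(w[i, j] * means[i] for i in range(n))
        # covariance includes spread-of-the-means term
        P = sum(w[i, j] * (covs[i] + np.outer(means[i] - m, means[i] - m))
                for i in range(n))
        mixed_means.append(m)
        mixed_covs.append(P)
    return c, mixed_means, mixed_covs
```

In a variable-structure setting, the paper's contribution is how this mixing generalizes when the model set itself changes between steps.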
The spatial density of false measurements is known as the clutter density in target signal and data processing. It is unknown in reality, and knowledge of it has a significant impact on the effective processing of targets. This paper presents a number of theoretically sound estimators for the clutter density based on conditional-mean, maximum-likelihood, least-squares, and method-of-moments estimation. They are computationally highly efficient and require no knowledge of the probability distribution of the clutter density. They can be readily incorporated into a variety of tracking filters for performance improvement.
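As one concrete instance of such an estimator: for spatially uniform (Poisson) clutter, the maximum-likelihood estimate is simply the total number of validated false measurements divided by the total volume searched. This is a minimal sketch under that uniformity assumption; the paper's estimators are more general.

```python
def clutter_density_ml(counts, volumes):
    """ML estimate of a homogeneous (Poisson) clutter density from the
    numbers of false measurements observed in gates of known volume:
    lambda_hat = (total count) / (total volume).
    Sketch only; assumes spatially uniform clutter."""
    return sum(counts) / sum(volumes)
```

Such an estimate can be plugged directly into association weights of, e.g., a probabilistic data association filter in place of an assumed clutter density.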
This paper presents the progress of a collaborative effort between Canada and The Netherlands in analyzing multi-sensor data fusion systems, e.g. for potential application to their respective frigates. In view of their overlapping interest in studying and comparing the applicability and performance of advanced state-of-the-art Multi-Sensor Data Fusion (MSDF) techniques, the two research establishments involved have decided to join their efforts in the development of MSDF testbeds. This resulted in the so-called Joint-FACET, a highly modular and flexible series of applications capable of processing both real and synthetic input data. Joint-FACET allows the user to create and edit test scenarios with multiple ships, sensors, and targets, generate realistic sensor outputs, and process these outputs with a variety of MSDF algorithms. These MSDF algorithms can also be tested using typical experimental data collected during live military exercises.
We describe an application of multiple-target tracking (MTT) to microneurography, with the purpose of estimating conduction-velocity changes and recovery constants of human nerve C-fibers. In this paper, the focus is on the detection and tracking of the nerve action potentials (APs); the subsequent parameter estimation is described only briefly. Results from an application of the tracking system to real data recorded in human subjects are presented. Action potentials from C-fibers were recorded with a thin needle electrode inserted into the peroneal nerve of awake human subjects. The APs were detected by a matched filter constituting a maximum-likelihood constant-false-alarm-rate detector. Using the multiple hypothesis tracking method, the detected APs in each trace were associated with individual nerve fibers by their typical conduction latencies in response to electrical stimulation of the skin. The measurements were 1D, and the APs were spaced in time with intersecting, piecewise-continuous trajectories. The amplitude of the APs varied slowly over time for each C-fiber and was in general different for different fibers; it was therefore incorporated into the tracking algorithm to improve its performance.
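The matched-filter detection stage can be sketched roughly as follows. This is an illustrative reimplementation, not the authors' detector: the template, the fixed threshold (a CFAR detector would set it adaptively), and the normalization are all assumptions.

```python
import numpy as np

def matched_filter_detect(trace, template, threshold):
    """Detect action-potential candidates by correlating a recorded
    trace with an AP template (matched filter) and thresholding the
    output. The unit-norm, zero-mean template makes the output scale
    comparable across templates. Sketch only; threshold selection for
    CFAR behavior is not shown."""
    t = template - template.mean()
    t /= np.linalg.norm(t)
    out = np.correlate(trace - trace.mean(), t, mode='same')
    return np.flatnonzero(out > threshold), out
```

The detected AP times from successive stimulation traces would then feed the multiple-hypothesis tracker, with AP amplitude as an additional feature.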
This paper discusses a tracking algorithm for post-mission processing of radiometer data from the Midcourse Space Experiment (MSX) Spatial Infrared Imaging Telescope. In order to produce tracked target signatures from on-orbit MSX satellite data, track-processing algorithms are implemented using a deferred-resolution approach for measurement-to-track correlation. This method defers the final association of point-source object sighting messages (OSMs) to established tracks until further information is received from subsequent scans. This is accomplished through the use of a track-splitting algorithm to decrease the likelihood of miscorrelations, at the expense, however, of more extensive logic and longer processing times. Performance is evaluated by comparing results to those of an immediate-resolution algorithm using MSX on-orbit measured data as a test case. This effort is funded under the MSX Theater and Midcourse Cooperative Target Experiments task on the Systems Engineering and Technical Assistance Contract with the US Army Space and Missile Defense Command.
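The track-splitting idea behind deferred resolution can be sketched in a few lines. This toy version uses scalar positions and simple gating, and omits the branch scoring and pruning that a real system needs; it is not the paper's implementation.

```python
def split_tracks(tracks, measurements, gate):
    """Deferred-resolution (track-splitting) update: rather than forcing
    an immediate measurement-to-track assignment, every measurement
    falling inside a track's gate spawns a new branch, and the final
    association is resolved on later scans once unlikely branches can
    be pruned. Tracks are lists of scalar positions; `gate` is a
    half-width. Minimal sketch only."""
    new_tracks = []
    for track in tracks:
        matched = [m for m in measurements if abs(m - track[-1]) <= gate]
        if not matched:
            new_tracks.append(track)        # no candidate: coast the track
        for m in matched:
            new_tracks.append(track + [m])  # one branch per candidate
    return new_tracks
```

The cost noted in the abstract is visible here: the number of branches can grow multiplicatively with each scan until pruning logic cuts them back.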
A simple experiment is used to show empirically that adding measurement dimensions to the process of determining the correct correlation can reduce performance under some conditions. The author noted this effect during the checkout of a complicated model and used an experiment to model the effect more simply. The experiment consists of two targets located at the same elevation but at differing azimuths. A track file consisting of an azimuth estimate and an elevation estimate is realized for each target. An azimuth and elevation measurement is then realized for each target. There are two possible ways to uniquely pair the measurements with the tracks. The standard square residual (SSR) scores of the two unique solutions are compared to determine the measurement-to-track assignment. This process is performed many times to estimate a probability of correct correlation (Pcc) value. Two methods are used for the SSR calculation: in the first, only azimuth values are used; in the second, elevation values are included in the SSR calculation. Including elevation reduces the Pcc by 2-3 percent. Additionally, Pcc values vary with the elevation measurement variance. These results were surprising to the author and can be interpreted to mean that adding information to a decision process may in fact degrade the theoretical correctness of the decision. Some excursions to the basic experiment are presented, including a case where there is track separation in the elevation direction. Two forms of the underlying mathematics are provided, although they are currently intractable to the author. The checkout that was performed to verify this effect is discussed.
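The experiment is straightforward to reproduce in simulation. The sketch below is a hypothetical reconstruction, not the author's code: the separation and variance values are assumptions, but under them the same qualitative effect appears, since the shared elevation carries no discriminating information yet adds noise to the SSR comparison.

```python
import numpy as np

def pcc_experiment(trials=20000, az_sep=2.0, meas_var=(1.0, 1.0),
                   track_var=(1.0, 1.0), use_elevation=True, seed=0):
    """Monte Carlo estimate of the probability of correct correlation
    (Pcc) for two targets at the same elevation, separated in azimuth.
    Each trial draws noisy track estimates and measurements, then picks
    the measurement-to-track pairing with the smaller sum of squared
    residuals (SSR). All parameter values are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    truth = np.array([[0.0, 0.0], [az_sep, 0.0]])  # (az, el) per target
    sm, st = np.sqrt(meas_var), np.sqrt(track_var)
    d = 2 if use_elevation else 1                  # dimensions in the SSR
    correct = 0
    for _ in range(trials):
        tracks = truth + rng.normal(0, st, (2, 2))
        meas = truth + rng.normal(0, sm, (2, 2))
        ssr_ok = ((meas[:, :d] - tracks[:, :d]) ** 2).sum()
        ssr_swap = ((meas[::-1, :d] - tracks[:, :d]) ** 2).sum()
        correct += ssr_ok < ssr_swap
    return correct / trials
```

Comparing the two SSR variants on identical random draws isolates the effect: including the non-discriminating elevation dimension lowers the estimated Pcc.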