Finite-set statistics (FISST) is a direct generalization of single-sensor, single-target Bayes statistics to the multisensor-multitarget realm, based on random set theory. Various aspects of FISST are being investigated by several research teams around the world. In recent years, however, a few partisans have claimed that a "plain-vanilla Bayesian approach" suffices as a down-to-earth, "straightforward," and general "first principles" foundation for multitarget problems, and that FISST is therefore mere mathematical "obfuscation." In this and a companion paper I demonstrate the speciousness of these claims. In this paper I summarize general Bayes statistics, what is required to use it in multisensor-multitarget problems, and why FISST is necessary to make it practical. I then demonstrate that the "plain-vanilla Bayesian approach" is so heedlessly formulated that it is erroneous (indeed, not even Bayesian), that it denigrates FISST concepts while unwittingly assuming them, and that it has resulted in a succession of algorithms afflicted by inherent, but less than candidly acknowledged, computational "logjams."
The particle filter is an effective technique for target tracking in the presence of a nonlinear system model, a nonlinear measurement model, or non-Gaussian noise in the system and/or measurement processes. However, current particle filtering algorithms for multitarget tracking suffer from high computational requirements. In this paper, we present a new implementation of the particle filter, called the tagged particle filtering (TPF) algorithm, to handle multitarget tracking problems in an efficient manner. The TPF uses a separate set of particles for each track. Each particle is associated with the closest (in terms of likelihood) measurement. The particles for a particular track may form separate groups according to the measurements associated with them, and these groups evolve independently until two or more of them are separated by a distance large enough to be called separate tracks, at which point a decision is made as to which of the groups is to be retained. Since this algorithm keeps a separate set of particles for each track, state estimation for individual tracks does not require any additional computation. Also, the algorithm is association free, and target class information can be added to the state for feature-aided tracking. Simulation results are obtained by applying this tracking filter to a spawning target scenario.
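The per-measurement grouping step described above can be sketched in a few lines. This is a hypothetical, scalar illustration (the paper publishes no pseudocode; the Gaussian likelihood, `sigma`, and the numbers are invented for the demo): each particle of a track is assigned to the measurement with the highest likelihood, and the resulting groups are the ones the TPF lets evolve independently.

```python
import math

def gauss_lik(z, x, sigma=1.0):
    # likelihood of 1-D measurement z given particle state x
    return math.exp(-0.5 * ((z - x) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def associate(particles, measurements, sigma=1.0):
    # assign every particle to its highest-likelihood measurement;
    # the resulting groups are the ones that evolve independently
    groups = {}
    for x in particles:
        j = max(range(len(measurements)),
                key=lambda k: gauss_lik(measurements[k], x, sigma))
        groups.setdefault(j, []).append(x)
    return groups

# one track's particles straddling two measurements 5 units apart
groups = associate([0.1, -0.2, 0.3, 4.9, 5.2, 5.05], [0.0, 5.0])
```

With the two measurements well separated, the particles split cleanly into two groups of three, which is the situation in which the TPF would eventually declare separate tracks and decide which group to retain.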
The goal of the DARPA Video Verification of Identity (VIVID) program is to develop an automated video-based ground targeting system for unmanned aerial vehicles that significantly improves operator combat efficiency and effectiveness while minimizing collateral damage. One of the key components of VIVID is the Multiple Target Tracker (MTT), whose main function is to track many ground targets simultaneously by slewing the video sensor from target to target and zooming in and out as necessary. The MTT comprises three modules: (i) a video processor that performs moving object detection, feature extraction, and site modeling; (ii) a multiple hypothesis tracker that processes extracted video reports (e.g. positions, velocities, features) to generate tracks of currently and previously moving targets and confusers; and (iii) a sensor resource manager that schedules camera pan, tilt, and zoom to support kinematic tracking, multiple target track association, scene context modeling, confirmatory identification, and collateral damage avoidance. When complete, VIVID MTT will enable precision tracking of the maximum number of targets permitted by sensor capabilities and by target behavior. This paper describes many of the challenges faced by the developers of the VIVID MTT component, and the solutions that are currently being implemented.
The problem of multisensor-multitarget tracking depends mainly on data association. In this paper, a fuzzy logic-based single-target tracker is extended to the multitarget case. A multitarget scenario incorporating four targets, both maneuvering and non-maneuvering, in the same surveillance volume is analyzed. The proposed multitarget tracker, called the Multitarget Tracking - Fuzzy Data Association (MTT-FDA) tracker, employs fuzzy variables capable of resolving the problem of multiple crossing targets. These variables are the rates of change of the target states over a sliding window. It has been observed through simulations that a window size of five time scans is sufficient to yield acceptable results. Moreover, the proposed tracker was exercised against a realistic multitarget data set. The results reveal that the proposed fuzzy tracker yields superior performance compared to other existing tracking schemes.
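The fuzzy variables described here can be illustrated with a small sketch. This is not the MTT-FDA implementation; the membership function shape, the "steady" label, and the numbers are assumptions made purely for illustration of a rate-of-change variable computed over a five-scan sliding window.

```python
def sliding_rate(states, window=5):
    # average rate of change of a track state over the last `window` scans
    w = states[-window:]
    return (w[-1] - w[0]) / (len(w) - 1)

def tri_membership(x, center, width):
    # triangular fuzzy membership function centred at `center`
    return max(0.0, 1.0 - abs(x - center) / width)

# state history of a track over six scans (illustrative values)
rate = sliding_rate([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
mu_steady = tri_membership(rate, 0.0, 2.0)  # degree of "steady" behaviour
```

A fuzzy rule base would then combine such membership degrees across tracks to resolve crossing-target ambiguities.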
Tracking within dense clutter environments severely stresses modern tracking capabilities. When this is compounded by large sensor uncertainties, different platform geometries, and poor sensor quality against targets that operate at variable speeds (often below the thresholds detectable by GMTI sensors) and under high maneuvers, most tracking approaches fail. Two competing approaches have gained in popularity in recent years: Multiple Hypothesis Tracking and the Interacting Multiple Model. Both rely on the principles of hybrid state estimation using Gaussian mixtures. Traditionally, the chi-squared approach has been used to assess tracking performance, whether a single track model or multiple models are used within the Gaussian mixture framework. This paper examines the use of Kullback-Leibler metrics as a viable means of measuring the impact of data selection on model parameter estimation and compares performance with respect to the Mahalanobis distance metric. Specifically, we show that the Mahalanobis distance is actually a special case of the Kullback-Leibler metric when evaluating Gaussian mixture model systems.
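The special-case relationship claimed in the abstract can be checked numerically for scalar Gaussians: when the two densities share a covariance, the Kullback-Leibler divergence reduces to half the squared Mahalanobis distance between the means. The sketch below uses the standard closed form for the KL divergence between scalar Gaussians; the numbers are arbitrary.

```python
import math

def kl_gauss(m0, s0, m1, s1):
    # KL divergence between scalar Gaussians N(m0, s0^2) and N(m1, s1^2)
    return math.log(s1 / s0) + (s0 ** 2 + (m0 - m1) ** 2) / (2.0 * s1 ** 2) - 0.5

def mahalanobis2(m0, m1, s):
    # squared Mahalanobis distance between the means under common std s
    return ((m0 - m1) / s) ** 2

kl = kl_gauss(2.0, 1.5, 5.0, 1.5)   # equal covariances
d2 = mahalanobis2(2.0, 5.0, 1.5)
# with equal covariances: KL = 0.5 * d2
```

With unequal covariances, the KL divergence picks up the additional log-ratio and variance terms, which is why it is the more general data-selection metric.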
We have developed and implemented an approach to performing feature-aided tracking (FAT) of ground vehicles using ground moving target indicator (GMTI) radar measurements. The feature information comes in the form of high-range-resolution (HRR) profiles when the GMTI radar is operating in HRR mode. We use a Bayesian approach in which we compute a feature association likelihood that is combined with a kinematic association likelihood. The kinematic association likelihood is found using an IMM filter that has on-road, off-road, and stopped motion models. The feature association likelihood is computed by comparing new measurements to a database of measurements that are collected and stored for each object in track. The database consists of features that were collected prior to the initiation of the track as well as new measurements that were used to update the track. We have implemented and tested our algorithm using the SLAMEM simulation.
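The combination of kinematic and feature likelihoods can be sketched as follows. Under an independence assumption (which the abstract does not state explicitly, so treat it as an assumption of this sketch), the two likelihoods multiply, and normalizing over candidate tracks yields association probabilities; all numbers are invented.

```python
def combined_likelihood(kinematic, feature):
    # independent kinematic and feature evidence multiply
    return kinematic * feature

def association_probs(liks):
    # normalize joint likelihoods over the candidate tracks
    total = sum(liks)
    return [l / total for l in liks]

# one HRR measurement scored against two candidate tracks (made-up numbers)
liks = [combined_likelihood(0.8, 0.9), combined_likelihood(0.4, 0.2)]
probs = association_probs(liks)
```

Here the feature evidence sharpens the association: the first track, already kinematically favored, becomes strongly preferred once its HRR profile also matches the stored database.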
Multisensor Fusion, Multitarget Tracking, and Resource Management II
In geolocation by time difference of arrival (TDOA), an array of sensors at known locations receives the signal from an emitter whose location is to be estimated. Signals received at two sensors are used to obtain a TDOA measurement. A number of algorithms are available to solve the set of nonlinear TDOA equations whose solution is the emitter location. An implicit assumption in these algorithms is that all the measurements obtained are from a single emitter. In practice, however, one has to deal with measurement origin uncertainty, which results either from multiple emitters being present in the region of interest or from clutter returns. In this paper, a method to determine the locations of multiple emitters in a cluttered environment is presented. Several unmanned aerial vehicles (UAVs) are assumed to act as receivers of the electromagnetic emissions from the emitters. Emissions received by different UAVs are used to obtain the TDOAs. Using a constrained optimization procedure, measurement-to-emitter associations are determined. Then, the resulting nonlinear equations are solved to find the emitter locations. An Interacting Multiple Model (IMM) estimator is used to track the located sources and to obtain their motion parameters.
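The nonlinear TDOA equations can be illustrated with a minimal sketch. This is not the paper's solver: instead of a closed-form or iterative method, it does a brute-force least-squares grid search for a single emitter in 2-D, with four hypothetical receiver positions and a known true emitter used only to generate the measurements.

```python
import math

C = 3.0e8  # propagation speed (m/s); speed of light for RF emissions

def tdoa(p, si, sj, c=C):
    # time difference of arrival of the emission at sensors si and sj
    return (math.dist(p, si) - math.dist(p, sj)) / c

def locate(sensors, pairs, tdoas, c=C, lo=0.0, hi=100.0, step=1.0):
    # brute-force least-squares search over a grid; a crude stand-in
    # for the nonlinear TDOA solvers the abstract refers to
    best, best_err = None, float("inf")
    n = int(round((hi - lo) / step)) + 1
    for i in range(n):
        for j in range(n):
            p = (lo + i * step, lo + j * step)
            err = sum((tdoa(p, sensors[a], sensors[b], c) - t) ** 2
                      for (a, b), t in zip(pairs, tdoas))
            if err < best_err:
                best, best_err = p, err
    return best

sensors = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0), (100.0, 100.0)]
pairs = [(0, 1), (0, 2), (0, 3)]
emitter = (30.0, 40.0)                      # ground truth, for the demo only
meas = [tdoa(emitter, sensors[a], sensors[b]) for a, b in pairs]
est = locate(sensors, pairs, meas)
```

The multi-emitter, cluttered case treated in the paper adds the association step on top of this: each TDOA must first be attributed to an emitter (or to clutter) before the equations can be solved.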
A fuzzy logic expert system has been developed that automatically allocates electronic attack (EA) resources distributed over different platforms in real time. Genetic-algorithm-based optimization is conducted to determine the form of the membership functions for the fuzzy root concepts. The resource manager (RM) is made up of five parts: the isolated platform model, the multi-platform decision tree, the fuzzy EA model, the fuzzy parameter selection tree, and the fuzzy strategy tree. The platforms are each controlled by their own copy of the RM, allowing them to work together automatically, i.e., they self-organize through communication without the need for a central commander. A group of platforms working together will automatically show desirable forms of emergent behavior, i.e., they will exhibit desirable behavior that was never explicitly programmed into them. This is important since it is impossible to have a rule covering every possible situation that might be encountered. An example of desirable emergent behavior is discussed, as well as a method using a co-evolutionary war game for eliminating undesirable forms of behavior. The RM's ability to adapt to changing situations is enhanced by the fuzzy parameter selection tree. The tree structure is discussed, along with various related examples.
The concept of goal lattices for the evaluation of potential sensor actions can be used to cause a multiplicity of heterogeneous sensor systems to collaborate. Previously, goal lattices have been used to compute the value to a sensor system of taking a particular action in terms of how well that action contributes to the accomplishment of the topmost goals. This assumes that each sensor system is autonomous and responsible only to itself. If two additional goals, namely "collaboration" and "altruism", are adjoined to the topmost goals of each sensor system's goal lattice, then the value system is extended to include servicing requests from other systems. Two aircraft on a common mission can each benefit from measurements taken by the other aircraft, whether to confirm their own measurements, to create a pseudo-sensor, or to extend the area of coverage. The altruism goal indicates how much weight a sensor management system (SMS) will give to responding to a measurement request from any other system. The collaboration goal indicates how much weight will be given to responding to a measurement request from specific systems that are defined as being part of a collaborating group. By varying the values of the altruism and collaboration goals of each system, either locally or globally, various levels of implicit cooperation among sensor systems can be made to emerge.
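The weighting scheme described above can be sketched in a few lines. The function, weights, and numbers below are hypothetical (the paper's lattice values would come from its goal hierarchy); the sketch only shows how altruism and collaboration weights change the value of servicing an external measurement request.

```python
def action_value(own_contribs, request_value=0.0, requester=None,
                 collaborators=(), altruism=0.1, collaboration=0.5):
    # value of a candidate sensor action: its contribution to the system's
    # own topmost goals, plus a weighted term for servicing a request
    # from another system (collaborators get the higher weight)
    value = sum(own_contribs)
    if requester is not None:
        weight = collaboration if requester in collaborators else altruism
        value += weight * request_value
    return value
```

Raising the altruism weight globally makes every system more responsive to outside requests; raising collaboration weights selectively couples only the defined group, which is the mechanism by which different levels of implicit cooperation emerge.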
With the recent advent of moderate-cost unmanned (or uninhabited) aerial vehicles (UAVs) and their success in surveillance, it is natural to consider the cooperative management of groups of UAVs. The problem considered in this paper is the optimization of the information obtained by a group of UAVs carrying out surveillance of several ground targets distributed over a large area. The UAVs are assumed to be equipped with Ground Moving Target Indicator (GMTI) radars, which measure the locations of moving ground targets as well as their radial velocities (Doppler). In this research the Fisher information, obtained from the information form of the Riccati equation, is used in the objective function. Sensor survival probability and target detection probability for each target-sensor pair are also included in the objective function, where the detection probability is a function of both range and range rate. The optimal sensor placement problem is solved by a genetic-algorithm-based optimizer. Simulation results on two different scenarios are presented for four different types of prior information.
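The structure of such an objective can be sketched for the scalar case. The discounting scheme below is one plausible way to fold the detection and survival probabilities into an expected-information objective; it is an assumption of this sketch, not the paper's exact formulation, and all numbers are illustrative.

```python
def info_update(Y, H, R):
    # information-form measurement update, scalar case: Y' = Y + H^2 / R
    return Y + H * H / R

def expected_info(Y, H, R, p_detect, p_survive):
    # expected posterior information for one target-sensor pair, discounted
    # by the target detection and sensor survival probabilities
    return p_survive * (p_detect * info_update(Y, H, R) + (1.0 - p_detect) * Y)
```

A placement optimizer (a genetic algorithm in the paper) would then search over UAV positions, which enter through the range- and range-rate-dependent detection probability.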
This paper provides the exact solution for the bias estimation problem in multiple asynchronous sensors using common targets of opportunity. The target data reported by the sensors are usually not time-coincident or synchronous due to the different sampling times. We consider here the case when the sensors obtain the measurements at the same rate but with a phase difference. Since bias estimation requires time-coincident target data from different sensors, a novel scheme is used to transform the measurements from the different times of the sensors into pseudomeasurements of the sensor biases, with additive noises that are zero-mean, white, and have easily calculated covariances. These results allow bias estimation as well as the evaluation of the Cramer-Rao Lower Bound (CRLB) on the covariance of the bias estimate, i.e., the quantification of the available information about the biases in any scenario. Monte Carlo simulation results show that the new method is statistically efficient, i.e., it meets the CRLB. The use of this technique for scale biases in addition to the usual additive (offset) biases is also presented.
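The pseudomeasurement idea can be illustrated in its simplest form. The paper's contribution is the asynchronous (phase-offset) case; the sketch below shows only the synchronous scalar special case, with invented bias and noise values: differencing two sensors' simultaneous measurements of a common target cancels the unknown target state and leaves a noisy observation of the bias difference.

```python
import random

def bias_pseudomeasurements(truth, b1, b2, sigma, seed=0):
    # z1 - z2 cancels the common target state, leaving zero-mean-noise
    # observations of the offset-bias difference b1 - b2
    rng = random.Random(seed)
    out = []
    for x in truth:
        z1 = x + b1 + rng.gauss(0.0, sigma)  # sensor 1 report
        z2 = x + b2 + rng.gauss(0.0, sigma)  # sensor 2 report
        out.append(z1 - z2)
    return out

truth = [float(k) for k in range(1000)]       # arbitrary target trajectory
pm = bias_pseudomeasurements(truth, b1=3.0, b2=1.0, sigma=1.0)
est = sum(pm) / len(pm)                        # estimate of b1 - b2
```

The pseudomeasurement noise here has variance 2*sigma^2, so the sample-mean estimator's variance, 2*sigma^2/N, matches the CRLB for this toy case; the paper generalizes the construction to phase-offset sampling and to scale biases.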
Major advances in the base technologies of computer processors and low-cost communications have paved the way for a resurgence of interest in unattended ground sensors. Networks of sensors offer the potential of a low-cost persistent surveillance capability in any area in which the sensor network can be placed. Key to this is the choice of sensor on each node. If the system is to be randomly deployed, then non-line-of-sight sensors become a necessity. Acoustic sensors potentially offer the greatest level of capability and are considered here. In addition, there is a trade-off between sensor density and tracking technique that will impact cost. As acoustic arrays are passive sensors, only time-of-arrival or bearing information can be obtained, so targets must be tracked in this domain. This paper explores the critical step between array processing and implementation of the tracking algorithm. Specifically, unlike previous implementations of such a system, the bearings from each frequency interval of interest are not averaged but are used as individual data points within a Kalman filter; the data are not averaged and then filtered, but all data are put into the tracking filter.
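The filter-everything-instead-of-averaging idea can be sketched with a scalar Kalman filter. This is an illustrative reduction (a static bearing, a near-uninformative prior, and invented per-bin bearing values), not the paper's implementation: each frequency bin's bearing is applied as its own measurement update.

```python
def kalman_update(x, P, z, R):
    # scalar Kalman measurement update
    K = P / (P + R)
    return x + K * (z - x), (1.0 - K) * P

def fuse_bearings(x0, P0, bearings, R):
    # feed the bearing from every frequency bin to the filter,
    # rather than averaging the bins first
    x, P = x0, P0
    for z in bearings:
        x, P = kalman_update(x, P, z, R)
    return x, P

# three frequency bins reporting bearings (degrees) for one target
x, P = fuse_bearings(0.0, 1.0e6, [10.2, 9.8, 10.1], R=1.0)
```

For a static state with equal measurement variances this reduces to the sample mean, but the filtered formulation also propagates the bearing uncertainty, and it extends naturally to a moving target and to bins with different variances, which plain averaging discards.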
Multisensor Fusion, Multitarget Tracking, and Resource Management III
Networked data fusion applications require adaptive strategies to maximise their performance subject to fluctuating resource constraints. If the application is simply picture compilation (i.e., target tracking and identification), then Fisher/Shannon metrics provide a normative basis for approaching this problem. In this paper we demonstrate how information gain can be used to manage a constrained communication bandwidth in a decentralised tracking system that has to adapt to asymmetric communication bandwidth and data delays. When the sensor nodes are active participants in the information acquisition process, the relevance of information must also be considered: specifically, what is the balance between the cost of information and the expected pay-off resulting from its application in a decision-making process? We describe how issues such as this fit into the formal framework of decentralised partially observed Markov decision process (DEC-POMDP) theory.
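The information-gain-per-bandwidth trade can be sketched as follows. The greedy selection rule and all numbers are assumptions of this sketch (the paper's decentralised mechanism is richer); it only shows the Shannon information gain of a scalar Gaussian track update being used to ration a bit budget.

```python
import math

def info_gain(P_prior, P_post):
    # Shannon information gain (nats) from a scalar Gaussian track update
    return 0.5 * math.log(P_prior / P_post)

def select_updates(gains, costs, budget):
    # greedily spend the bandwidth budget on the highest gain-per-bit updates
    order = sorted(range(len(gains)), key=lambda i: gains[i] / costs[i],
                   reverse=True)
    chosen, spent = [], 0
    for i in order:
        if spent + costs[i] <= budget:
            chosen.append(i)
            spent += costs[i]
    return chosen

g = info_gain(4.0, 1.0)  # variance reduced 4x -> ln 2 nats
picked = select_updates([2.0, 0.6, 1.0], costs=[4, 1, 1], budget=2)
```

Under a tight budget the cheap, high-yield updates are transmitted and the expensive one is deferred; the DEC-POMDP framing replaces this myopic rule with an expected-payoff criterion.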
Tracking a ground moving target is a challenging problem given the environment complexity, target maneuvers, and the false alarm rate. Using road network information in the tracking process is an asset, particularly when the target's movement is limited to the road. In this paper, we consider different approaches to incorporating road information into the tracking process. Based on the assumption that the target is following the road network, and using a classical estimation technique, the idea is to keep the state estimate on the road by using different projection approaches. The first approach is deterministic, based either on minimizing the distance between the estimate and its projection on the road or on minimizing the distance between the measurement and its projection on the road; in the latter case, the state estimate is updated using the projected measurement. The second approach is probabilistic. Given the probability distributions of the measurement error and the state estimate, we use this information to maximize the a posteriori measurement probability and the a posteriori estimate probability under the road constraints. This maximization is equivalent to minimizing the Mahalanobis distance under the same constraints. To differentiate this approach from the deterministic one, we call this projection a pseudo-projection onto the road segment. We present a comparative study of the performance of these projection approaches for a simple tracking case. We then extend the study to the case of road intersections, for which we present a sequential ratio test to select the best road segment.
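The pseudo-projection has a closed form for a straight road segment, which the sketch below works out for a 2-D position estimate (the function names, covariances, and segment are invented for illustration). Minimizing the Mahalanobis distance along the segment is a scalar quadratic in the segment parameter, clamped to the segment ends; with an identity covariance it reduces to the deterministic Euclidean projection.

```python
def inv2(S):
    # inverse of a 2x2 covariance matrix
    (a, b), (c, d) = S
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def pseudo_project(xhat, S, p0, p1):
    # point on road segment p0-p1 minimizing the Mahalanobis distance
    # (x - xhat)^T S^-1 (x - xhat); with S = I this is Euclidean projection
    Si = inv2(S)
    u = (p1[0] - p0[0], p1[1] - p0[1])
    r = (xhat[0] - p0[0], xhat[1] - p0[1])

    def quad(v, w):
        # v^T S^-1 w
        return (v[0] * (Si[0][0] * w[0] + Si[0][1] * w[1])
                + v[1] * (Si[1][0] * w[0] + Si[1][1] * w[1]))

    t = quad(u, r) / quad(u, u)
    t = max(0.0, min(1.0, t))  # clamp to stay on the segment
    return (p0[0] + t * u[0], p0[1] + t * u[1])

p_euclid = pseudo_project((3.0, 4.0), [[1.0, 0.0], [0.0, 1.0]],
                          (0.0, 0.0), (10.0, 0.0))
p_weighted = pseudo_project((6.0, 2.0), [[1.0, 0.0], [0.0, 0.01]],
                            (0.0, 0.0), (10.0, 10.0))
```

In the second call, the small y-variance pulls the projected point toward the well-determined y-coordinate, which is exactly how the probabilistic approach differs from the deterministic one.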
In a practical multi-sensor tracking network, the sensor data processors communicate only a subset of the data available from each sensor, usually in the form of tracks, due to constraints on the communications bandwidth. This paper investigates source coding methods by which track information may be communicated at varying levels of available bandwidth, with the aim of minimizing the number of bits required by the fusion center for a given estimation error. In particular, we formulate the distributed sensor fusion problem subject to network constraints as a minimization problem in terms of the Kullback-Leibler distance (KLD). We show that for the multi-sensor track fusion problem, the global KLD is minimized when the individual, local KLDs are minimized. The solutions to some special cases are derived, and the losses in the accuracy of the target state estimates that result from the process of source coding and subsequent interpretation are also determined. Simulation results demonstrate the consistency between the theoretical and practical results.
We describe the recently introduced extremal optimization algorithm and apply it to target detection and association problems arising in pre-processing for multi-target tracking. Extremal optimization is based on the concept of self-organized criticality and has been used successfully for a wide variety of hard combinatorial optimization problems. It is an approximate local search algorithm that achieves its success by utilizing avalanches of local changes that allow it to explore a large part of the search space. It is somewhat similar to genetic algorithms, but works by selecting and changing bad chromosomes of a bit-representation of a candidate solution. The algorithm is based on processes of self-organization found in nature. The simplest version of it has no free parameters, while the most widely used and most efficient version has one parameter; for extreme values of this parameter, the method reduces to hill-climbing and random-walk searches, respectively. Here we consider the problem of pre-processing for multiple target tracking when the number of sensor reports received is very large and arrives in large bursts. In this case, it is sometimes necessary to pre-process reports before sending them to tracking modules in the fusion system. The pre-processing step associates reports with known tracks (or initializes new tracks for reports on objects that have not been seen before). It could also be used as a pre-processing step before clustering, e.g., to test how many clusters to use. The pre-processing is done by solving an approximate version of the original problem in which not all pair-wise conflicts are calculated. The approximation relies on knowing how many such pair-wise conflicts are necessary to compute; to determine this, we use results on phase transitions occurring when coloring (or clustering) large random instances of a particular graph ensemble.
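The one-parameter (tau) variant of extremal optimization can be sketched on the coloring problem the abstract mentions. This is a toy: the graph, parameter values, and step budget are invented, and conflicts stand in for the report-association conflicts of the pre-processing step. Nodes are ranked worst-first by local conflict count, and the node to re-colour is drawn from a power law over that ranking, producing the characteristic avalanches of local changes.

```python
import random

def eo_coloring(n, edges, colors=2, steps=2000, tau=1.4, seed=1):
    # tau-EO sketch: repeatedly re-colour a poorly-fit node chosen
    # via a power law over the worst-first fitness ranking
    rng = random.Random(seed)
    state = [rng.randrange(colors) for _ in range(n)]

    def conflicts(i):
        # local fitness: number of conflicting edges at node i
        return sum(1 for a, b in edges
                   if (a == i or b == i) and state[a] == state[b])

    def cost():
        return sum(state[a] == state[b] for a, b in edges)

    best, best_cost = list(state), cost()
    for _ in range(steps):
        ranked = sorted(range(n), key=conflicts, reverse=True)
        u = 1.0 - rng.random()                       # in (0, 1]
        k = min(int(u ** (-1.0 / (tau - 1.0))), n) - 1
        state[ranked[k]] = rng.randrange(colors)     # local change
        if cost() < best_cost:
            best, best_cost = list(state), cost()
    return best, best_cost

edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]  # 6-cycle, 2-colorable
coloring, remaining = eo_coloring(6, edges)
```

As tau grows the power law concentrates on the single worst node (hill climbing); as tau approaches 1 the choice becomes uniform (random walk), matching the limits described in the abstract.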
This paper evaluates one promising method for solving one of the main problems in electronic warfare: the identification of radar signals in a tactical environment. The identification process requires two steps: clustering of the collected radar pulse descriptor words, and classification of the clustered results. The method described here, Fuzzy Adaptive Resonance Theory Map (Fuzzy ARTMAP), is a self-organizing neural network algorithm. The benefits of this algorithm are that the training process is very stable and fast, that it requires only a small number of initial parameters, and that it performs very well at novelty detection, i.e., the classification of unknown radar emitters. This paper discusses the theory behind Fuzzy ARTMAP, as well as the results of processing two real radar pulse data sets. The first evaluated data set consists of 5242 radar pulse descriptor words from 32 different emitters. The second data set consists of 107850 pulse descriptor words from 112 different emitters. The pulse descriptor words used by the algorithm for both data sets were radio frequency (RF) and pulse width (PW). The results for both of these data sets were better than 90% correct correlation with the actual ID, which exceeds the results of processing these data sets with other algorithms such as K-means and other self-organizing neural networks.
A novel method for multi-resolution automatic target recognition is described that employs a hybrid evolutionary algorithm and an image transform in the form of an image local response. The recognition task is re-formulated as a nonlinear global optimization problem, i.e., the search for a proper transformation A(V) that provides the best match between the images of the target and the scene. Given the images of the scene and the targeted object located in the scene, the proposed method repeatedly applies response analysis at different resolution levels (zooming in) in order to find the region of interest (ROI) containing the target in the large-scale image of the scene. At every resolution level, the response matrices are computed for the images of the ROI and the target. Cross-correlation of the response matrices built for the ROI and the target outlines the potential locations of the latter. Once the locations are successfully identified, the algorithm zooms in on the found locations. The hybrid evolutionary algorithm is applied to the response matrices MR; it attempts to minimize the least-squared difference of the pixel values of the response matrices corresponding to the images, thus searching for the correct parameter vector V for the targeted object with reference to the ROI.
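The matching step over response matrices can be illustrated with a minimal sketch. This is not the paper's method: the response transform and the evolutionary search are omitted, a sum of squared differences stands in for the cross-correlation score, and the matrices are invented; the sketch only shows how sliding a small target matrix over a larger scene matrix outlines the candidate location.

```python
def best_match(scene, templ):
    # exhaustive match of a small response matrix against a larger one;
    # least-squared pixel difference stands in for the correlation score
    H, W = len(scene), len(scene[0])
    h, w = len(templ), len(templ[0])
    best, best_err = None, float("inf")
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            err = sum((scene[r + i][c + j] - templ[i][j]) ** 2
                      for i in range(h) for j in range(w))
            if err < best_err:
                best, best_err = (r, c), err
    return best

scene = [[0] * 5 for _ in range(5)]
templ = [[1, 2], [3, 4]]
for i in range(2):                 # plant the target response at (2, 1)
    for j in range(2):
        scene[2 + i][1 + j] = templ[i][j]
loc = best_match(scene, templ)
```

In the multi-resolution scheme, this search would run on a coarse level first to pick the ROI, and the evolutionary algorithm would then refine the transformation parameters V at finer levels.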
Vision is only a part of a larger system that converts visual information into knowledge structures. These structures drive the vision process, resolving ambiguity and uncertainty via feedback, and provide image understanding, which is an interpretation of visual information in terms of such knowledge models. This mechanism provides reliable recognition even if the target is occluded or cannot otherwise be recognized. It is hard to split the entire system apart, and reliable solutions to target recognition problems are possible only within the solution of a more generic image understanding problem. The brain reduces informational and computational complexity using implicit symbolic coding of features, hierarchical compression, and selective processing of visual information. A biologically inspired Network-Symbolic representation, in which both systematic structural/logical methods and neural/statistical methods are parts of a single mechanism, converts visual information into relational Network-Symbolic structures, avoiding artificial precise computations of 3-dimensional models. The logic of visual scenes can be captured in Network-Symbolic models and used for disambiguation of visual information. Network-Symbolic transformations derive abstract structures, which allow for invariant recognition of an object as an exemplar of a class. Active vision helps build consistent, unambiguous models. Such image/video understanding systems will be able to reliably recognize targets in real-world conditions.
This paper reviews a research program aimed at extracting and utilizing object localization information from sequences of visible-band and infrared imagery. The techniques are entirely passive and are based on the relative positions of objects and features taken from a pre-prepared scene database. The techniques used in this project are based on existing techniques for navigation by Scene Matching and Area Correlation (SMAC) and have been adapted for the object localization task. The paper also considers the use of a Multiple Hypothesis Tracking (MHT) system for the automatic tracking of known ground features.
Multisensor Fusion Methodologies and Applications I
Sensor management algorithms must be capable of directing sensing resources preferentially to known or potential Targets of Interest (ToIs) having high tactical importance. In principle one could simply wait until accumulated information strongly suggests that particular targets are probable ToIs and then bias the allocation of sensor resources to those targets. However, such ad hoc techniques have inherent limitations. To avoid these limitations, target preference must be incorporated into the fundamental statistical description of multisensor-multitarget problems. In this paper we show that finite-set statistics (FISST) has built-in mathematical tools for doing this, thereby allowing target preference to be incorporated into sensor management objective functions.
Multisensor-multitarget sensor management is viewed as a problem in nonlinear control theory. This paper applies newly developed theories for sensor management based on a Bayesian control-theoretic foundation. Finite-Set-Statistics (FISST) and the Bayes recursive filter for the entire multisensor-multitarget system are used
with information-theoretic objective functions in the development of the sensor management algorithms. The theoretical analysis indicates that some of these objective functions lead to potentially tractable sensor management algorithms when used in conjunction with
MHC (multi-hypothesis correlator)-like algorithms. We show examples of such algorithms, and present an evaluation of their performance against multisensor-multitarget scenarios. This sensor management formulation also allows for the incorporation of target preference, and experiments demonstrating the performance of sensor management with target preference will be presented.
This paper continues the investigation of a foundational and yet potentially practical basis for control-theoretic sensor management, using a comprehensive, intuitive, system-level Bayesian paradigm based on finite-set statistics (FISST). In this paper we report our most recent progress, focusing on multistep look-ahead -- i.e., allocation of sensor resources throughout an entire future time-window. We determine future sensor states in the time-window using a "probabilistically natural" sensor management objective function, the posterior expected number of targets (PENT). This objective function is constructed using a new "maxi-PIMS" optimization strategy that hedges against unknowable future observation-collections. PENT is used in conjunction with approximate multitarget filters: the probability hypothesis density (PHD) filter or the multi-hypothesis correlator (MHC) filter.
In this paper, we discuss multi-target tracking for a submarine model based on incomplete observations. The submarine model is a weakly interacting stochastic dynamic system with several submarines in the underlying region. Observations are obtained at discrete times from a number of sonobuoys equipped with hydrophones and consist of a nonlinear function of the current locations of the submarines corrupted by additive noise. We use filtering methods to find the best estimate of the locations of the submarines. Our signal is a measure-valued process, resulting in filtering equations that cannot be readily implemented. We develop a Markov chain approximation approach to solve the filtering equation for our model. Our Markov chains are constructed by dividing the multi-target state space into cells, evolving particles in these cells, and employing a random time change approach. These approximations converge to the unnormalized conditional distribution of the signal process given the past observations. Finally, we present some simulation results using the refining stochastic grid (REST) filter (developed from our Markov chain approximation method).
In this note, we consider the problem of detecting network portscans through the use of anomaly detection. First, we introduce some static tests for analyzing traffic rates. Then, we make use of two dynamic chi-square tests to detect anomalous packets. Further, we model network traffic as a marked point process and introduce a general portscan model. Simulation results for correct detections and false alarms are presented using this portscan model and the statistical tests.
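As a sketch of a static chi-square rate test of the kind mentioned above (the baseline rates, interval counts, and decision threshold below are invented for illustration, not taken from the paper):

```python
# Minimal static chi-square test on per-interval packet counts.
def chi_square_stat(observed, expected):
    """Pearson chi-square statistic comparing observed counts
    to an expected (baseline) traffic profile."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

def is_anomalous(observed, expected, threshold):
    """Flag the window as anomalous when the statistic exceeds
    a chosen critical value."""
    return chi_square_stat(observed, expected) > threshold

baseline = [100, 100, 100, 100]     # hypothetical normal rates
normal   = [98, 103, 99, 101]
scan     = [100, 100, 400, 100]     # burst typical of a portscan
# 7.81 ~ chi-square critical value for 3 dof at alpha = 0.05
print(is_anomalous(normal, baseline, 7.81))   # False
print(is_anomalous(scan, baseline, 7.81))     # True
```

A dynamic version would recompute the expected profile from a sliding window rather than a fixed baseline.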
Intelligent and adaptive control systems will significantly challenge current verification and validation (V&V) processes, tools, and methods for flight certification. Although traditional certification practices have produced safe and reliable flight systems, they will not be cost effective for next-generation autonomous unmanned air vehicles (UAVs) due to inherent size and complexity increases from added functionality. Affordable V&V of intelligent control systems is by far the most important challenge in the development of UAVs faced by both the commercial and military aerospace industry in the United States. This paper presents a formal modeling framework for a class of adaptive control systems and an associated computational scheme. The class of systems considered includes neural network-based flight control systems and vehicle health management systems. This class of systems, and indeed all adaptive systems, are hybrid systems whose continuum dynamics are nonlinear. Our computational procedure is iterative, and each iteration has two sequential steps. The first step is to derive an approximating finite-state automaton whose behaviors contain the behaviors of the hybrid system. The second step is to check whether the language accepted by the approximating automaton is empty (emptiness checking). The iterations are terminated if the language accepted is empty; otherwise, the approximation is refined and the iteration is continued. This procedure will never produce an "error-free" certificate when the actual system contains errors, which is an important requirement in V&V of safety-critical systems.
Ground vehicles can be effectively tracked using a moving target indicator (MTI) radar. However, vehicles whose velocity along the line-of-sight to the radar falls below the minimum detectable velocity (MDV) are not detected. One way targets avoid detection, therefore, is to execute a series of move-stop-move motion cycles. While a target can be reacquired after it begins to move again, it may not be recognized as a target previously in track. Particularly for the case of high-value targets, it is imperative that a vehicle be continuously tracked. We present an algorithm for determining the probability that a target has stopped and an estimate of its stopped state (which could be passed to a tasker to schedule a spot synthetic aperture radar (SAR) measurement). We treat a non-detection event as evidence that can be used to update the target state probability density function (PDF). Updating the target state PDF using a non-detection event pushes the probability mass into regions of the state space in which the vehicle is either stopped or traveling at a speed such that the range-rate falls below the MDV. The target state PDF updated with the non-detection events is then used to derive an estimate of the stopped target's location. Updating the target state PDF using a non-detection event is, in general, non-trivial, and approximations are required to evaluate the updated PDF. When implemented with a particle filter, however, the updating formula is simple to evaluate and still captures the subtleties of the problem.
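The non-detection update described above can be sketched with a toy particle filter. The detection model, MDV value, and particle states below are invented for illustration; the point is only that a missed detection down-weights particles whose line-of-sight speed exceeds the MDV, shifting mass toward stopped or slow hypotheses:

```python
import math

def range_rate(pos, vel, radar_pos):
    """Velocity component along the line of sight to the radar."""
    dx = [p - r for p, r in zip(pos, radar_pos)]
    rng = math.hypot(*dx)
    return sum(v * d for v, d in zip(vel, dx)) / rng

def nondetection_update(particles, weights, radar_pos, mdv, pd=0.9):
    """Re-weight particles by the likelihood of a missed detection:
    particles moving faster than the MDV along the line of sight
    should have been seen, so a non-detection down-weights them."""
    new_w = []
    for (pos, vel), w in zip(particles, weights):
        p_detect = pd if abs(range_rate(pos, vel, radar_pos)) >= mdv else 0.0
        new_w.append(w * (1.0 - p_detect))
    total = sum(new_w)
    return [w / total for w in new_w]

# Two hypotheses: a stopped target and one closing at 10 m/s.
particles = [((1000.0, 0.0), (0.0, 0.0)), ((1000.0, 0.0), (-10.0, 0.0))]
weights = [0.5, 0.5]
post = nondetection_update(particles, weights, radar_pos=(0.0, 0.0), mdv=2.0)
print(post)  # probability mass shifts toward the stopped hypothesis
```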
The probability hypothesis density (PHD) filter is a practical alternative to the optimal Bayesian multi-target filter based on finite set statistics. It propagates only the first-order moment instead of the full multi-target posterior. Recently, a sequential Monte Carlo (SMC) implementation of the PHD filter has been used in multi-target filtering with promising results. In this paper, we compare the performance of the PHD filter with that of multiple hypothesis tracking (MHT), which has been widely used in multi-target filtering over the past decades. The Wasserstein distance is used as a measure of the multi-target miss distance in these comparisons. Furthermore, since the PHD filter does not produce target tracks, for comparison purposes we investigated ways of integrating the data-association functionality into the PHD filter. This has led us to devise methods for integrating the PHD filter and the MHT filter for target tracking that exploit the advantages of both approaches.
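As a rough sketch of the Wasserstein miss distance used in such comparisons (first-order, equal-cardinality case only, with invented point sets; the full construction also handles differing target counts):

```python
import itertools
import math

def wasserstein_miss(X, Y):
    """First-order Wasserstein distance between two equal-size
    target sets: the minimum average pairwise distance over all
    one-to-one assignments (brute force; fine for small sets)."""
    assert len(X) == len(Y)
    best = math.inf
    for perm in itertools.permutations(range(len(Y))):
        cost = sum(math.dist(X[i], Y[j]) for i, j in enumerate(perm))
        best = min(best, cost / len(X))
    return best

truth = [(0.0, 0.0), (10.0, 0.0)]
est   = [(9.0, 0.0), (0.0, 1.0)]
# Best assignment pairs (0,0)->(0,1) and (10,0)->(9,0), each 1 away.
print(wasserstein_miss(truth, est))  # 1.0
```

Production implementations replace the brute-force loop with an optimal assignment solver such as the Hungarian algorithm.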
Ronald Mahler's Probability Hypothesis Density (PHD) provides a promising framework for the passive coherent location of targets observed via multiple bistatic radar measurements. We consider tracking targets using only range measurements from a simple non-directional receiver that exploits non-cooperative FM radio transmitters as its "illuminators of opportunity." A target cannot be located at a single point by a particular transmitter-receiver pair, but rather it is located along a bistatic range ellipse determined by the position of the target relative to the receiver and transmitter. Target location is resolved by using multiple transmitter-receiver pairs and locating the target at the intersection of the resulting bistatic ellipses. Determining the intersection of these bistatic range ellipses and resolving the resultant ghost targets is generally a complex task. However, the PHD provides a convenient and simple means of fusing together the multiple range measurements to locate targets. We incorporate signal-to-noise ratios, probabilities of detection and false alarm, and bistatic range variances into our simulation.
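The bistatic geometry above is easy to sketch: the measured bistatic range fixes an ellipse with the transmitter and receiver as foci, and a candidate location survives only if it is consistent with every transmitter-receiver pair. The positions and tolerance below are invented for illustration:

```python
import math

def bistatic_range(target, tx, rx):
    """Sum of transmitter-to-target and target-to-receiver distances;
    a fixed value traces an ellipse with tx and rx as foci."""
    return math.dist(tx, target) + math.dist(target, rx)

def consistent(candidate, measurements, tol=1.0):
    """Keep a candidate location only if it lies (within tol) on the
    bistatic ellipse of every transmitter-receiver pair; points on
    only some ellipses are 'ghost' intersections."""
    return all(abs(bistatic_range(candidate, tx, rx) - r) < tol
               for tx, rx, r in measurements)

rx = (0.0, 0.0)
txs = [(50.0, 0.0), (0.0, 50.0), (-40.0, -30.0)]
target = (20.0, 10.0)
meas = [(tx, rx, bistatic_range(target, tx, rx)) for tx in txs]
print(consistent(target, meas))        # True
print(consistent((10.0, 25.0), meas))  # a ghost candidate fails some pair
```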
Consider the problem of tracking a set of moving targets. Apart from the tracking result, it is often important to know where the tracking fails, either to steer sensors to that part of the state-space or to inform a human operator about the status and quality of the obtained information. An intuitive quality measure is the correlation between two tracking results based on uncorrelated observations. In the case of Bayesian trackers, such a correlation measure could be the Kullback-Leibler difference. We focus on a scenario with a large number of military units moving in some terrain. The units are observed by several types of sensors and "meta-sensors" with force aggregation capabilities. The sensors register units of different size. Two separate multi-target probability hypothesis density (PHD) particle filters are used to track units of one type (e.g., companies) and their sub-units (e.g., platoons), respectively, based on observations of units of those sizes. Each observation is used in one filter only. Although the state-space may well be the same in both filters, the posterior PHD distributions are not directly comparable -- one unit might correspond to three or four spatially distributed sub-units. Therefore, we introduce a mapping function between distributions for different unit sizes, based on doctrine knowledge of unit configuration. The mapped distributions can now be compared -- locally or globally -- using some measure, which gives the correlation between two PHD distributions in a bounded volume of the state-space. To locate areas where the tracking fails, a discretized quality map of the state-space can be generated by applying the measure locally to different parts of the space.
This algorithm provides a method for non-linear multiple target tracking that does not require association of targets. This is done by recursive Bayesian estimation of the density corresponding to the expected number of targets in each measurable set -- the Probability Hypothesis Density (PHD). Efficient Monte Carlo estimation is achieved by giving this density the role of the single target state probability density in the conventional particle filter. The problem setup for our algorithm includes (1) a bounded region of interest containing a changing number of targets, (2) independent observations, each accompanied by estimates of the false alarm probability and the probability that the observation represents something new, and (3) an estimate of the Poisson rate at which targets leave the region of interest. The prototype application of this filter is to aid in short-range acoustic contact detection and alertment for submarine systems. The filter uses as input passive acoustic detections from a fully automated process, which generates a large number of valid and false detections. The filter does not require specific target classification. Although the mathematical theory of Probability Hypothesis Density estimation has been developed in the context of modern Random Set Theory, our development relies on elementary methods instead. The principal tools are conditioning on the expected number of targets and identification of the PHD with the density for the proposition that at least one target is present.
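A minimal 1-D sketch of the measurement-update step of a particle (sequential Monte Carlo) PHD filter, the flavor of recursion described above. The detection probability, clutter level, and particle values are invented; the key property illustrated is that the particle weights sum to the expected number of targets rather than to one:

```python
import math

def gauss(z, x, sigma=1.0):
    """1-D Gaussian measurement likelihood g(z | x)."""
    return math.exp(-0.5 * ((z - x) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def phd_update(particles, weights, measurements, pd=0.9, clutter=0.01):
    """Simplified SMC-PHD measurement update: each particle keeps a
    missed-detection term plus a share of each measurement's
    likelihood, normalized against clutter plus all particles."""
    new_w = []
    for x, w in zip(particles, weights):
        w_new = (1.0 - pd) * w
        for z in measurements:
            denom = clutter + sum(pd * gauss(z, xj) * wj
                                  for xj, wj in zip(particles, weights))
            w_new += pd * gauss(z, x) * w / denom
        new_w.append(w_new)
    return new_w

# Particles spread around two true targets at 0 and 10.
particles = [-0.5, 0.0, 0.5, 9.5, 10.0, 10.5]
weights = [2.0 / 6.0] * 6          # prior expected number of targets: 2
post = phd_update(particles, weights, measurements=[0.1, 9.9])
print(sum(post))                   # roughly 2: both targets detected
```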
Multisensor Fusion Methodologies and Applications II
A system is described for predicting the location and movement of ground vehicles over road networks using a combination of vehicle motion models, context, and network flow analysis. Preliminary results obtained over simulated ground vehicle movement scenarios demonstrate the ability to accurately predict candidate TCT locations under move-stop-move and other typical vehicle behaviors. Limitations of current models are discussed and extensions proposed.
Multisensor fusion allows us to combine information from sensors with different physical characteristics to enhance the understanding of our surroundings and provide the basis for planning and decision-making. Much effort has been made toward the development of different types of fusion methodologies and architectures. However, it would be desirable if we could estimate the performance of fusion systems before we implement them. This paper presents a performance model to evaluate multisensor tracking systems in which both kinematic and classification components are considered. Specifically, we focus our effort on classification performance prediction by defining the local confusion matrix and the global confusion matrix, and we develop an analytical method to estimate the probability of correct classification over time. Simulation results that support the analytic approaches are also included.
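As a toy illustration of predicting the probability of correct classification from a confusion matrix (the matrix and priors below are invented; the paper's local/global confusion-matrix machinery is more elaborate):

```python
import itertools

def pcc_fused(conf, prior, n_looks=2):
    """Probability of correct classification after fusing n independent
    looks through the same confusion matrix conf[true][reported],
    with a MAP decision over the product of per-look likelihoods."""
    classes = range(len(conf))
    pcc = 0.0
    for reports in itertools.product(classes, repeat=n_looks):
        # MAP decision for this tuple of sensor reports
        post = list(prior)
        for c in classes:
            for r in reports:
                post[c] *= conf[c][r]
        decided = max(classes, key=lambda c: post[c])
        # accumulate the probability that this tuple occurs AND is correct
        for t in classes:
            p = prior[t]
            for r in reports:
                p *= conf[t][r]
            if decided == t:
                pcc += p
    return pcc

conf = [[0.8, 0.2], [0.3, 0.7]]   # hypothetical sensor-level confusion matrix
prior = [0.5, 0.5]
print(round(pcc_fused(conf, prior, 1), 3))  # 0.75: single look
print(round(pcc_fused(conf, prior, 2), 3))  # 0.775: two fused looks, higher
```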
Dynamic collection/sensor management (DSM) systems require the ability to plan in advance the deployment and use of platforms/sensors to optimally locate and identify time critical and time sensitive ground targets (TST) at some future anticipated time. In order to provide long-term planning, track fusion based initial target kinematic and classification state estimates can be used to initialize long-term location prediction modeling (LPM) algorithms of target movement along lines-of-communications (LOC) networks given knowledge of target and LOC characteristics. In order to optimize the selection of the planned sensor mix for the anticipated target location, a fusion performance model (FPM) can be used to predict the best combination of available platforms/sensors. Given the outcome of long-term prediction and the best kinematic and classification state estimates from the fusion performance model, a ranked set of Figures-of-Merit (FOMs) is developed for the DSM system focusing on optimizing target position and classification accuracies. The methodology, development, implementation and open/closed loop simulation concepts for FOM evaluation are discussed.
In bistatic and multistatic systems, a variety of observables may be used to obtain a final target location. The total target location error will depend upon which observables are used to obtain the location, the individual measurement errors, and the viewing geometry. In planning for a bistatic mission, it is advantageous to have a tool for a priori estimation of target location error as a function of all potential sensor and target positions. This paper presents one such tool, based upon a modification of the dilution of precision (DOP) methodology long utilized in GPS navigation and developed herein into a system of equations specifically for use in a bistatic sensor system. This tool is a system of equations that allows easy determination of a figure of merit, the bistatic position dilution of precision (BPDOP). The BPDOP gives a direct, quantitative measure of the target location effectiveness of a given bistatic sensor geometry for any given potential target location. The paper derives the equations for the BPDOP for six observables: the monostatic and bistatic ranges; the monostatic and bistatic azimuth angles; and the monostatic and bistatic depression angles. Several example scenarios are simulated.
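The DOP idea underlying BPDOP can be sketched in 2-D for range-only observables (a simplification; the BPDOP equations themselves cover six observables and are not reproduced here, and the sensor positions below are invented):

```python
import math

def dop_2d(target, sensors):
    """2-D dilution of precision for range-only measurements: rows of
    the geometry matrix G are unit line-of-sight vectors from the
    target to each sensor; DOP = sqrt(trace((G^T G)^{-1})).
    Small DOP means favorable geometry."""
    G = []
    for s in sensors:
        d = math.dist(target, s)
        G.append(((s[0] - target[0]) / d, (s[1] - target[1]) / d))
    # form the 2x2 normal matrix A = G^T G and invert it directly
    a = sum(g[0] * g[0] for g in G)
    b = sum(g[0] * g[1] for g in G)
    c = sum(g[1] * g[1] for g in G)
    det = a * c - b * b
    return math.sqrt((a + c) / det)   # trace of A^{-1} = (a + c) / det

target = (0.0, 0.0)
good = [(100.0, 0.0), (0.0, 100.0)]      # orthogonal lines of sight
bad  = [(100.0, 0.0), (100.0, 10.0)]     # nearly collinear
print(round(dop_2d(target, good), 2))    # 1.41: sqrt(2), ideal geometry
print(round(dop_2d(target, bad), 2))     # much larger: poor geometry
```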
The U.S. Air Force is researching the fusion of multiple sensors and classifiers. Given a finite collection of classifiers to be fused, one seeks a new classifier with improved performance. An established performance quantifier is the Receiver Operating Characteristic (ROC) curve. This curve allows one to view the probability of detection versus the probability of false alarm in one graph. In reality, only finite data is available, so only an approximate ROC curve can be constructed. Previous research shows that one does not have to perform an experiment for this new fused classifier to determine its ROC curve: if the ROC curve for each individual classifier has been determined, then formulas for the ROC curve of the fused classifier exist for certain fusion rules. This is an enormous saving in time and money, since the performance of many fused classifiers can be determined without having to perform tests on each one. But, again, these will be approximate ROC curves, since they are based on finite data. We show that if the individual approximate ROC curves are consistent, then the approximate ROC curve for the fused classifier is also consistent under certain circumstances. We give the details for these circumstances, as well as some examples related to sensor fusion.
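For instance, under an independence assumption, a fused operating point can be computed in closed form from the individual classifiers' operating points for simple fusion rules (a sketch; the operating points below are invented, and the following abstract addresses what happens when independence fails):

```python
def fuse_or(pd1, pf1, pd2, pf2):
    """OR rule (declare if either classifier fires), assuming
    independent classifiers: both Pd and Pfa increase."""
    return 1 - (1 - pd1) * (1 - pd2), 1 - (1 - pf1) * (1 - pf2)

def fuse_and(pd1, pf1, pd2, pf2):
    """AND rule (declare only if both fire), assuming independent
    classifiers: both Pd and Pfa decrease."""
    return pd1 * pd2, pf1 * pf2

# One (Pd, Pfa) operating point from each classifier's ROC curve.
pd_or, pf_or = fuse_or(0.8, 0.10, 0.7, 0.05)
pd_and, pf_and = fuse_and(0.8, 0.10, 0.7, 0.05)
print(round(pd_or, 3), round(pf_or, 3))    # 0.94 0.145
print(round(pd_and, 3), round(pf_and, 3))  # 0.56 0.005
```

Sweeping both classifiers' thresholds traces out the full fused ROC curve point by point.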
Typically, when considering multiple classifiers, researchers assume that they are independent. Under this assumption, estimates for the performance of the fused classifiers are easier to obtain and quantify mathematically. But if the classifiers are in fact correlated, the performance of the fused classifiers will be over-estimated. This paper addresses the issue of dependence between the classifiers to be fused. Specifically, we assume a level of dependence between two classifiers for a given fusion rule and produce a formula to quantify the performance of the newly fused classifier. The performance of the fused classifiers is then evaluated via the Receiver Operating Characteristic (ROC) curve. A classifier typically relies on parameters that may vary over a given range. Thus, the probabilities of true and false positives can be computed over this range of values, and the graph of these probabilities over the range produces the ROC curve. The probabilities of true positives and false positives from the fused classifiers are developed according to various decision rules. Examples of dependent fused classifiers are given for various levels of dependency and multiple decision rules.
In recent decades, the Bayesian Network (BN) has shown its power to solve probabilistic inference problems because of its expressive representation of dependence relationships among random variables and the dramatic development of inference algorithms. BNs have been applied for decision-making under uncertainty in many areas, such as data fusion, target recognition, and medical diagnosis. In general, the problem of probabilistic inference for a dynamic BN is to compute the posterior probability distribution of a specific variable of interest given a set of observations accumulated over time. The accuracy of the resulting posterior probability distribution is essential, since the correct decision in any partially observable environment depends on this distribution. However, there is no general evaluation methodology available to predict the inference performance for a BN other than extensive Monte Carlo simulation methods. In this paper, we first present a method to model the inference performance for a static BN. This approximate method is designed to predict the inference performance analytically without extensive simulation. We then propose a sequential simulation method based on the particle filter concept to evaluate the inference performance for a dynamic BN. The specific model we deal with is a hybrid partial dynamic BN consisting of discrete and continuous variables with arbitrary relationships. Since no exact inference algorithm is available for such a model, we use the likelihood weighting (LW) method on an unrolled DBN to estimate its true performance bound for comparison with the predicted performance. Comparison and analysis of the experimental results show the potential capability of the sequential simulation method for evaluating the performance of dynamic Bayesian networks.
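A toy sketch of the likelihood weighting idea on a two-node discrete BN (the network structure and probabilities are invented; the paper applies LW to a far richer hybrid model):

```python
import random

def likelihood_weighting(n_samples, seed=0):
    """Likelihood weighting on a toy BN A -> B, estimating
    P(A=1 | B=1): sample A from its prior, weight each sample by
    the likelihood of the evidence, P(B=1 | A)."""
    random.seed(seed)
    p_a = 0.3                        # P(A=1)
    p_b_given_a = {1: 0.9, 0: 0.2}   # P(B=1 | A)
    num = den = 0.0
    for _ in range(n_samples):
        a = 1 if random.random() < p_a else 0
        w = p_b_given_a[a]           # evidence weight
        num += w * a
        den += w
    return num / den

est = likelihood_weighting(20000)
# exact answer: 0.3*0.9 / (0.3*0.9 + 0.7*0.2) = 0.27/0.41, about 0.659
print(round(est, 2))
```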
In this paper, the conjunctive and disjunctive combination rules of evidence, namely, the Dempster-Shafer (D-S) combination rule, the Yager combination rule, the Dubois and Prade (D-P) combination rule, the DSm combination rule, and the disjunctive combination rule, are studied for two independent sources of information. The properties of each combination rule of evidence are discussed in detail, such as the role of the evidence of each source of information in the combination judgment, the comparison of the combination judgment belief and ignorance of each combination rule, the treatment of conflicting judgments given by the two sources of information, and the applications of the combination rules. Zadeh's example is included in the paper to evaluate the performance as well as the efficiency of each combination rule of evidence for the conflicting judgments given by the two sources of information.
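Zadeh's example can be reproduced with a minimal implementation of Dempster's rule restricted to singleton focal elements (full D-S combination operates over all subsets of the frame of discernment):

```python
def dempster(m1, m2, frame):
    """Dempster's conjunctive rule on singleton-focused mass functions:
    multiply masses on agreeing singletons and renormalize by 1 - K,
    where K is the mass assigned to conflicting combinations."""
    combined = {h: m1.get(h, 0.0) * m2.get(h, 0.0) for h in frame}
    k = 1.0 - sum(combined.values())          # conflicting mass
    return {h: v / (1.0 - k) for h, v in combined.items()}, k

# Zadeh's example: two experts almost fully disagree.
m1 = {'A': 0.99, 'B': 0.01, 'C': 0.00}
m2 = {'A': 0.00, 'B': 0.01, 'C': 0.99}
fused, conflict = dempster(m1, m2, ['A', 'B', 'C'])
print(round(conflict, 4))   # 0.9999: nearly total conflict
print(fused)                # all mass lands on B, which both barely supported
```

This counterintuitive outcome under high conflict is exactly what motivates the alternative rules (Yager, Dubois-Prade, DSm) studied in the paper.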
Bayesian networks for the static as well as the dynamic case have been the subject of a great deal of theoretical analysis and practical inference approximation in the research communities of artificial intelligence, machine learning, and pattern recognition. After reviewing the well-known theory of discrete and continuous Bayesian networks, we introduce an almost-instant reasoning scheme for hybrid Bayesian networks. In addition to illustrating the similarities between dynamic Bayesian networks (DBNs) and the Kalman filter, we present a computationally efficient approach for the inference problem of hybrid dynamic Bayesian networks (HDBNs). The proposed method is based on the separation of the dynamic and static nodes, followed by hypercubic partitions via the decision tree (DT) algorithm. Experiments show that, with high statistical confidence, the novel algorithm used in the HDBN performs favorably in the tradeoff between computational complexity and accuracy when compared to the junction tree and Gaussian mixture models on classification tasks.
Multisensor Fusion Methodologies and Applications III
High-accuracy, low-ambiguity emitter classification based on ESM signals is critical to the safety and effectiveness of military platforms. Many previous ESM classification techniques involved comparison of either the average observed value or the observed limits of ESM parameters with the expected limits contained in an emitter library. Signal parameters considered typically include radio frequency (RF), pulse repetition interval (PRI), and pulse width (PW). These simple library comparison techniques generally yield ambiguous results because of the high density of emitters in key regions of the parameter space (X-band). This problem is likely to be exacerbated as military platforms are more frequently called upon to conduct operations in littoral waters, where high densities of airborne, seaborne, and land-based emitters greatly increase signal clutter. A key deficiency of the simple techniques is that by focusing only on parameter averages or limits, they fail to take advantage of much information contained in the observed signals. In this paper we describe a Dempster-Shafer technique that exploits a set of hierarchical parameter trees to provide a detailed description of signal behavior. This technique provides a significant reduction in ambiguity, particularly for agile emitters whose signals provide much information for the algorithm to utilize.
In this paper we discuss an algorithm for classification and identification of multiple targets using acoustic signatures. We use a Multi-Variate Gaussian (MVG) classifier for classifying individual targets based on the relative amplitudes of the extracted harmonic set of frequencies. The classifier is trained on high signal-to-noise ratio data for individual targets. In order to classify and further identify each target in a multi-target environment (e.g., a convoy), we first perform bearing tracking and data association. Once the bearings of the targets present are established, we next beamform in the direction of each individual target to spatially isolate it from the other targets (or interferers). Then, we further process and extract a harmonic feature set from each beamformed output. Finally, we apply the MVG classifier on each harmonic feature set for vehicle classification and identification. We present classification/identification results for convoys of three to five ground vehicles.
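The MVG classification step described above can be sketched as follows: fit a mean and covariance per class from training feature vectors, then assign a beamformed harmonic feature set to the class with the highest Gaussian log-likelihood. The class profiles, feature dimensions, and vehicle labels below are synthetic stand-ins, not the paper's trained models.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_mvg(X):
    """Estimate class mean and covariance from training feature vectors."""
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])  # regularize
    return mu, cov

def log_likelihood(x, mu, cov):
    """Gaussian log-density of feature vector x under one class model."""
    d = x - mu
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (d @ np.linalg.solve(cov, d) + logdet + len(x) * np.log(2 * np.pi))

# Two hypothetical vehicle classes with distinct harmonic amplitude profiles.
X_a = rng.normal([1.0, 0.5, 0.2], 0.05, size=(50, 3))
X_b = rng.normal([0.3, 0.9, 0.6], 0.05, size=(50, 3))
models = {"vehicleA": fit_mvg(X_a), "vehicleB": fit_mvg(X_b)}

x = np.array([0.95, 0.55, 0.25])   # harmonic feature set from one beam
label = max(models, key=lambda c: log_likelihood(x, *models[c]))
```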
To design information fusion systems, it is important to develop metrics as part of a test and evaluation strategy. In many cases, fusion systems are designed to (1) meet a specific set of user information needs (IN), (2) continuously validate information pedigree and updates, and (3) maintain this performance under changing conditions. A fusion system’s performance is evaluated in many ways. However, developing a consistent set of metrics is important for standardization. For example, many track and identification metrics have been proposed for fusion analysis. To evaluate a complete fusion system performance, level 4 sensor management and level 5 user refinement metrics need to be developed simultaneously to determine whether or not the fusion system is meeting information needs. To describe fusion performance, the fusion community needs to agree on a minimum set of metrics for user assessment and algorithm comparison. We suggest that such a minimum set should include feasible metrics of accuracy, confidence, throughput, timeliness, and cost. These metrics can be computed as confidence (probability), accuracy (error), timeliness (delay), throughput (amount) and cost (dollars). In this paper, we explore an aggregate set of metrics for fusion evaluation and demonstrate with information need metrics for dynamic situation analysis.
A physical demonstration of distributed surveillance and tracking is described. The demonstration environment is an outdoor car park overlooked by a system of four rooftop cameras. The cameras extract moving objects from the scene, and these objects are tracked in a decentralized way, over a real communication network, using the information form of the standard Kalman filter. Each node therefore has timely access to the complete global picture, and because there is no single point of failure, the system is robust. The demonstration system and its main components are described here, with an emphasis on some of the lessons we have learned in applying a corpus of distributed data fusion theory and algorithms in practice. Initial results are presented, and future plans to scale up the network are outlined.
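The appeal of the information form for this kind of decentralized tracking is that each node's measurement contributes additively in information space, so fusion is order-independent and needs no central coordinator. A minimal sketch, with invented camera noise values and a direct-observation model rather than the demonstration's actual configuration:

```python
import numpy as np

R = 0.5 * np.eye(2)   # per-camera measurement noise covariance (assumed)
H = np.eye(2)         # each camera observes the object position directly

def info_contribution(z):
    """Information-space contribution of one camera's measurement."""
    Rinv = np.linalg.inv(R)
    return H.T @ Rinv @ H, H.T @ Rinv @ z   # (information matrix, info vector)

# Prior in information form (weak prior centred on the origin).
Y = 0.1 * np.eye(2)                  # prior information matrix
y = Y @ np.array([0.0, 0.0])         # prior information vector

# Measurements of the same object from three rooftop cameras.
for z in [np.array([4.8, 2.1]), np.array([5.1, 1.9]), np.array([5.0, 2.0])]:
    I, i = info_contribution(z)
    Y, y = Y + I, y + i              # additive fusion: order does not matter

estimate = np.linalg.solve(Y, y)     # convert back to state space
```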
This paper describes the development of a tool that predicts the coverage and performance of sensor networks. Specifically it examines weapon locating radars and acoustic sensors in different terrain and weather conditions. The computer environment and multiple sensor models are presented. Fusion of sensors takes multiple predicted accuracy metrics from the single sensor performance models and combines them to show networked performance. Calculations include Cramer-Rao lower bound computation of the sensors and the fused sensors source location error. Results are presented showing the outputs of the models in the form of sensor accuracy maps superimposed onto terrain maps.
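One common way to combine single-sensor Cramer-Rao lower bounds into a networked accuracy figure is to sum the individual Fisher information matrices and invert; whether the tool described above does exactly this is not stated, so the sketch below is a hedged illustration, with invented per-sensor error covariances.

```python
import numpy as np

# Hypothetical source-location error covariances (m^2) for two sensor types:
# the radar is accurate in range but poor in cross-range, and vice versa.
cov_radar    = np.diag([25.0, 400.0])
cov_acoustic = np.diag([900.0, 36.0])

def fuse_crlb(covs):
    """Fused error covariance under the additivity of Fisher information."""
    fisher = sum(np.linalg.inv(c) for c in covs)
    return np.linalg.inv(fisher)

cov_fused = fuse_crlb([cov_radar, cov_acoustic])
# The fused bound is tighter than either sensor's bound along every axis,
# which is what a networked accuracy map would display over the terrain.
```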
The Seawolf Mid-Life Update (SWMLU) programme is a major upgrade of the UK Royal Navy's principal point-defence weapon system. The addition of an electro-optic (EO) sensor to the pre-existing 'I'- and 'K'-band radars presents a significant engineering challenge. Processing the data from the three sensors into a fused picture, so that a coherent view is formed of which objects represent targets and which represent own missiles, is a key element of the overall system design and is critical to achieving the required system performance. Without this coherent view of the detected objects, incorrect guidance commands will be issued to the Seawolf missiles, resulting in a failure to intercept the target. This paper reviews the sensor data association problem as it relates to the SWMLU system and outlines identified solution strategies. The SWMLU sensors provide complementary data that can be exploited through data association to maximise tracking accuracy and to maintain performance under sensor lost-lock conditions. The sensor data association approach utilises radar and EO properties from the spatial and temporal domains. These characteristics are discussed in terms of their contribution to the SWMLU data association problem, where it is shown that the use of object attributes from the EO sensor, and their behaviour over time, is a critical performance factor.
A simplified model of information processing in the brain can be constructed using primary sensory input from two modalities (auditory and visual) and recurrent connections to the limbic subsystem. Information fusion would then occur in Area 37 of the temporal cortex. The creation of meta-concepts from the low-order primary inputs is managed by models of isocortex processing. Isocortex algorithms are used to model parietal (auditory), occipital (visual), and temporal (polymodal fusion) cortex and the limbic system. Each of these four modules is constructed out of five cortical stacks, in which each stack consists of three vertically oriented six-layer isocortex models. The input-to-output training of each cortical model uses the OCOS (on-center, off-surround) and FFP (folded feedback pathway) circuitry of (Grossberg, 1), which is inherently a recurrent-network type of learning characterized by the identification of perceptual groups. Models of this sort are thus closely related to cognitive models, as it is difficult to divorce the sensory processing subsystems from the higher-level processing in the associative cortex. The overall software architecture presented is biologically based and is offered as a potential architectural prototype for the development of novel sensory fusion strategies. The algorithms are motivated to some degree by specific data from projects on musical composition and autonomous fine-art painting programs, but only in the sense that these projects use two specific types of auditory and visual cortex data. Hence, the architectures are presented for an artificial information processing system that utilizes two disparate sensory sources; the exact nature of the two primary sensory input streams is irrelevant.
We describe the design and ongoing research of a Disaster Recovery System that features uncertainty visualization and distributed computing. Our system, DiRect, is intended for use by firefighters, police, and rescue personnel to aid in disaster recovery missions. These missions run the gamut from natural disasters such as earthquakes and fires to situations like 9/11. The use of visuals and a distributed communication infrastructure will help rescuers make better-informed, more accurate, and quicker decisions. In this paper, we discuss the need for such a system as well as the resulting infrastructure architecture we have developed. We further describe how information is collected using the ICS (Incident Command System) protocol and is stored and fused. Finally, we describe how this information will be visualized by the system. A goal of DiRect is to capture the uncertainties in the information and present them visually for more informed decisions. Current results are presented.
In this paper two different methods for fusing data from optical and active radar sensors are studied. The first method fuses data prior to feature extraction; the second fuses data, in the more traditional way, after feature extraction. The advantage of fusing before feature extraction is that no information is lost prior to the fusion. The sensor data share one common dimension, azimuth, but the radar has lower resolution. The algorithms are tested on real measurements from Ku-band and millimetre-wave radar combined with an infrared or TV camera. The study is in its initial phase, and the two methods studied are simple in nature. The study aims to reveal differences between a raw-data method and a feature-based method, and should later result in a more complex and robust method.
A statistical approach to distributed edge detection in wireless sensor networks is proposed in this research. Rather than trying to estimate the real edge of a phenomenon, the technique of edge sensor detection is adopted to determine whether a target sensor is located within the edge area or not based on information collected from its neighboring sensors. Due to the nature of wireless sensor networks, it is desirable to have a distributed algorithm that has a low computational complexity and a low data communication cost among sensors. With some reasonable assumptions and the aid of composite hypothesis testing, we propose data fusion as well as decision fusion methods for edge sensor detection to fulfill the aforementioned constraints. Numerical experiments are used to demonstrate the efficiency of the proposed algorithm.
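The decision-fusion flavor of edge sensor detection can be sketched as follows: each neighbor reports a binary local detection (inside or outside the phenomenon), and the target sensor declares itself an edge sensor when its neighborhood is sufficiently mixed. This is a hedged illustration of the idea only; the threshold is an assumed parameter, not the paper's composite-hypothesis test statistic.

```python
def is_edge_sensor(neighbor_decisions, threshold=0.25):
    """Fuse neighbors' binary detections (list of 0/1 values).

    A neighborhood that is neither mostly 0 nor mostly 1 straddles the
    phenomenon boundary, so the target sensor lies in the edge area.
    """
    if not neighbor_decisions:
        return False
    frac = sum(neighbor_decisions) / len(neighbor_decisions)
    return threshold <= frac <= 1.0 - threshold

mixed    = is_edge_sensor([1, 1, 0, 0, 1, 0])   # straddles the boundary
interior = is_edge_sensor([1, 1, 1, 1, 1, 1])   # deep inside the phenomenon
```

Only the binary decisions cross the network here, which is what keeps the communication cost low relative to shipping raw sensor readings.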
In this paper, the Bayesian Data Reduction Algorithm (BDRA) is extended to model uncertainty in feature information. The new method works by spreading each data observation over multiple discretized bins, in a way that is analogous to convolving it with a blur function prior to quantization. The motivation is to enforce some notion of closeness between discretized bins that actually are close prior to quantization; the original model incorporates no such notion, and in fact the number of training observations in one bin bears no stronger relationship to a nearby bin than to a distant one, in the sense of the underlying unquantized data. This has the effect that performance of the BDRA can be improved in difficult classification situations involving very small numbers of training data. The BDRA is based on the assumption that the discrete symbol probabilities of each class are a priori uniformly Dirichlet distributed, and it employs a "greedy" approach (similar to a backward sequential feature search) for removing irrelevant features from the training data of each class. Notice that removing irrelevant features is synonymous here with selecting those features that provide the best classification performance; the metric for making data-reduction decisions is an analytic formula for the probability of error conditioned on the training data. To illustrate the performance of the new extended algorithm, results are shown using both real and simulated data.
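The bin-spreading idea can be sketched as fractional counting with a small blur kernel, after which the uniform Dirichlet prior yields smoothed symbol probabilities. The kernel weights and bin edges below are illustrative choices, not the paper's.

```python
import numpy as np

def blurred_histogram(samples, edges, kernel=(0.25, 0.5, 0.25)):
    """Discretize samples, spreading each observation's count over
    neighboring bins instead of incrementing a single bin."""
    counts = np.zeros(len(edges) - 1)
    half = len(kernel) // 2
    for x in samples:
        b = int(np.clip(np.searchsorted(edges, x) - 1, 0, len(counts) - 1))
        for k, w in enumerate(kernel):
            j = b + k - half
            if 0 <= j < len(counts):          # mass falling off the ends is dropped
                counts[j] += w                # fractional count in nearby bins
    return counts

edges = np.linspace(0.0, 1.0, 6)              # five bins on [0, 1]
counts = blurred_histogram([0.11, 0.13, 0.52], edges)
# With a uniform Dirichlet prior, the symbol probabilities are the
# standard add-one-smoothed estimates over the (now fractional) counts.
probs = (counts + 1.0) / (counts.sum() + len(counts))
```

Note how bin 3 receives mass even though no sample quantizes into it directly; that leakage between adjacent bins is exactly the closeness the extension is meant to encode.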
Eigenvalue-based corner detection is known to be effective in detecting corners of objects in noise. In this paper, a corner detection technique is introduced that includes the orientation and the angle of the corner in addition to its eigenvalue. It is shown that both orientation and corner-angle information improve the detectability of corners. Moreover, corners selected via the new technique are more likely to be detected in subsequent frames and therefore improve the performance of an object tracker. This modification adds only a minor computational load to our tracking scheme. Real and synthetic images are used to evaluate the detection performance as well as the effect on tracking.
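The baseline eigenvalue corner score that the paper builds on is typically the minimum eigenvalue of the local structure tensor (Shi-Tomasi style): it is large only where the image gradient varies in two directions. A sketch on synthetic patches, offered as an assumed baseline rather than the paper's exact detector:

```python
import numpy as np

def min_eig_corner_score(patch):
    """Minimum eigenvalue of the structure tensor summed over a patch."""
    gy, gx = np.gradient(patch.astype(float))    # row and column gradients
    M = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                  [np.sum(gx * gy), np.sum(gy * gy)]])
    return np.linalg.eigvalsh(M)[0]              # smallest eigenvalue

flat   = np.zeros((7, 7))                                  # no structure
edge   = np.tile(np.r_[np.zeros(4), np.ones(3)], (7, 1))   # vertical edge
corner = np.zeros((7, 7)); corner[:4, :4] = 1.0            # L-shaped corner

s_flat   = min_eig_corner_score(flat)    # zero: no gradients at all
s_edge   = min_eig_corner_score(edge)    # zero: gradient in one direction only
s_corner = min_eig_corner_score(corner)  # positive: gradients in two directions
```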
Robust, real-time, user-friendly, non-restrictive, and fully automatic natural human-computer interfaces are required to move away from today's machine-empowered technologies toward future human-empowered technologies (HET). As one such HET interface technology, this paper presents cost-effective stereo face detection and tracking of facial features for determining facial pose. Object features are extracted using max-median filters and a progressive threshold algorithm, and the face is verified against a 'prominent feature configuration template.' Once the face is confirmed, the features are tracked using a dynamic-programming filter. The results are impressive. Video clips will be shown during the symposium presentation.
To assess the state of targets on a battlefield after firing, it is possible to send a video sensor with the same ballistics as real ammunition. This article describes the modeling of observation ammunition using the CHORALE and AMOCO simulation workshops. Before real trials on a battlefield, and to define the final features of an embedded video sensor, it is necessary to model each aspect of the ammunition's behavior. The goal of this simulation is to determine how the ammunition, with its motions of rotation, pitch, and yaw, views the proving ground, and what parameter values can be set in the embedded sensor. To perform the simulation, we need a ballistics model that takes into account the geometric form of the projectile. The ammunition flies over a battlefield, so we also need a realistic 3D scene model that accounts for the atmospheric conditions, a real target, a geometric target, and everything else present on a real proving ground, such as buildings and forest. The frame seen by the sensor is modified by its transfer function. The video sensor model takes into account many functional parameters covering the optics, detector, and electronics. An algorithm at the input stage of the sensor model accounts for the rotation during the integration time. Each model computes independently.
The health of a sensor and system is monitored using information gathered from the sensor. A normal mode of operation is established; any deviation from the normal behavior indicates a change. An RC network is used to model the main process, which is defined by a step-up (charging), drift, and step-down (discharging). Sensor disturbances and a spike are added while the system is in drift. The system runs for a period of at least three time constants of the main process every time a process feature occurs (e.g., a step change). Each point of the signal is then selected together with a window of trailing data collected previously. Two trailing window lengths are selected: one equal to two time constants of the main process, and the other equal to two time constants of the sensor disturbance. Next, the DC component is removed from each set of data, the data are passed through a window, and the spectrum of each set is calculated. To extract features, the signal power, peak, and spectral area are plotted versus time. The results show distinct shapes corresponding to each process.
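The per-window feature extraction described above (DC removal, windowing, spectrum, then power/peak/spectral-area summaries) can be sketched as follows; the sampling rate, window taper, and signal shapes are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def spectral_features(segment):
    """Summarize one trailing window of sensor data."""
    x = segment - segment.mean()            # remove the DC component
    x = x * np.hanning(len(x))              # taper before the spectrum
    spec = np.abs(np.fft.rfft(x))
    return {
        "power": float(np.mean(x ** 2)),    # signal power in the window
        "peak": float(spec.max()),          # dominant spectral component
        "area": float(spec.sum()),          # total spectral content
    }

t = np.arange(0, 2.0, 0.01)                            # 2 s at 100 Hz (assumed)
drift = 0.2 * t                                        # slow drift of the main process
spike = np.where(np.abs(t - 1.0) < 0.02, 1.0, 0.0)     # added sensor spike
f_drift = spectral_features(drift)
f_spike = spectral_features(drift + spike)
# Plotting these three features versus time is what produces the distinct
# shapes for each process described in the abstract.
```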
The Levinson-Durbin (LD) algorithm has been used for decades as an alternative to Fast Fourier Transforms (FFTs) in cases where several cycles of a signal are not available or are too expensive to obtain. We describe a new application of the LD algorithm, using spectral estimation to locate a magnetic dipole, such as a submarine or magnetic mine, relative to a high-sensitivity probe (i.e., a gradiometer/magnetometer sensor) moving through the magnetic field. A weakness of the FFT is its assumption of periodic inputs: when the sampled segment ends at a different level than it begins, the FFT incorrectly inserts a step at the 'break' between cycles. The LD algorithm benefits from assuming that nothing outside the sampling window changes the spectrum. The iterative LD algorithm is also well suited for real-time operation, since it can be solved continuously while the probe moves toward the subject. By establishing spectral templates for different measurement paths relative to the source dipole, we use correlation in the spectral domain to estimate the distance of the dipole from our current path. Direction, and thus location, is obtained by simultaneously sending a second probe to complement the information gained by the first probe, together with a multidimensional LD algorithm.
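The Levinson-Durbin recursion that the application above builds on solves the Toeplitz normal equations for autoregressive (AR) model coefficients from autocorrelation lags in O(p²) rather than O(p³). A sketch follows; the autocorrelation sequence is a synthetic AR(1) example, not magnetic-probe data.

```python
import numpy as np

def levinson_durbin(r, order):
    """Solve the AR normal equations given autocorrelation lags r[0..order].

    Returns the AR polynomial coefficients a (a[0] = 1) and the final
    prediction error power.
    """
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for m in range(1, order + 1):
        # Reflection coefficient from the current prediction residual.
        k = -(r[m] + a[1:m] @ r[m - 1:0:-1]) / err
        a[1:m] = a[1:m] + k * a[m - 1:0:-1]   # update interior coefficients
        a[m] = k
        err *= (1.0 - k * k)                  # error shrinks each order
    return a, err

# AR(1) process x[n] = 0.9 x[n-1] + w[n] has autocorrelation r[k] ~ 0.9^k,
# so the recursion should recover the pole at 0.9.
r = 0.9 ** np.arange(3)
a, err = levinson_durbin(r, 1)
```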
Invited Panel Discussion: Unsolved, Difficult, and Misunderstood Problems/Approaches in Fusion Research
This report describes several information-processing configurations for hierarchical tracking, compares their performance, and discusses the selection of one of them.