This PDF file contains the front matter associated with SPIE Proceedings Volume 7336, including the Title Page, Copyright information, Table of Contents, Introduction (if any), Conference Committee listing, and Panel Session papers and slides.
Multisensor Fusion, Multitarget Tracking, and Resource Management I
We study 17 distinct methods to approximate the gradient of the log-homotopy for nonlinear filters.
This is a challenging problem because the data are given as function values at random points in high
dimensional space. This general problem is important in optimization, financial engineering,
quantum chemistry, chemistry, physics and engineering. The best general method that we have
developed so far uses a simple idea borrowed from geology combined with a fast approximate k-NN
algorithm. Extensive numerical experiments for five classes of problems show that we obtain excellent performance.
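The abstract does not spell out the gradient approximation itself; as a rough illustration of the general idea of estimating a gradient from function values at scattered points using a k-nearest-neighbor search and a local linear least-squares fit, a minimal sketch follows. The test function, sample counts, and k are made up for the example and are not from the paper.

```python
import numpy as np

def knn_gradient(points, values, query, k=10):
    """Estimate the gradient of f at `query` from scattered samples (points, values).

    A local linear model f(x) ~ f(q) + g.(x - q) is fit by least squares to the
    k nearest neighbors of the query point; the fitted slope g approximates grad f.
    """
    d2 = np.sum((points - query) ** 2, axis=1)
    idx = np.argpartition(d2, k)[:k]           # k nearest neighbors (exact search here)
    X = points[idx] - query                    # centered neighbor coordinates
    A = np.hstack([X, np.ones((k, 1))])        # columns: offsets + intercept
    coef, *_ = np.linalg.lstsq(A, values[idx], rcond=None)
    return coef[:-1]                           # slope part = gradient estimate

# toy check on f(x) = x0^2 + 3*x1 in 2-D
rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, size=(500, 2))
vals = pts[:, 0] ** 2 + 3.0 * pts[:, 1]
print(knn_gradient(pts, vals, np.array([0.2, -0.3])))   # roughly [0.4, 3.0]
```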
We solve the fundamental and well-known problem in particle filters, namely "particle collapse" or "particle degeneracy" as a result of Bayes' rule. We do not resample, and we do not use any proposal density; this is a radical departure from other particle filters. The new filter implements Bayes' rule using particle flow rather than a pointwise multiplication of two functions. We
show numerical results for a new filter that is vastly superior to the classic particle filter and the
extended Kalman filter. In particular, the computational complexity of the new filter is many orders
of magnitude less than the classic particle filter with optimal estimation accuracy for problems with
dimension greater than 4. Moreover, our new filter is two orders of magnitude more accurate than
the extended Kalman filter for quadratic and cubic measurement nonlinearities. We also show
excellent accuracy for problems with multimodal densities.
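The flow equations themselves are not given in this abstract; a commonly cited special case from the particle-flow literature is the exact flow for a linear-Gaussian measurement, which moves each particle by integrating dx/dλ = A(λ)x + b(λ) for λ from 0 to 1. The sketch below assumes a linear measurement matrix H, measurement covariance R, prior covariance P, and prior mean x_mean, and uses simple Euler steps; it is an illustration of that special case, not the paper's full nonlinear implementation.

```python
import numpy as np

def exact_flow_update(particles, x_mean, P, H, R, z, n_steps=30):
    """Move particles from the prior to the posterior by integrating the
    linear-Gaussian exact flow dx/dlambda = A(lambda) x + b(lambda),
    lambda from 0 to 1 (Euler steps)."""
    I = np.eye(P.shape[0])
    dlam = 1.0 / n_steps
    lam = 0.0
    for _ in range(n_steps):
        lam += dlam
        S = lam * H @ P @ H.T + R
        A = -0.5 * P @ H.T @ np.linalg.solve(S, H)
        b = (I + 2 * lam * A) @ ((I + lam * A) @ P @ H.T @ np.linalg.solve(R, z)
                                 + A @ x_mean)
        particles = particles + dlam * (particles @ A.T + b)
    return particles
```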
This paper discusses a target tracking system that provides improved estimates of target states using target
orientation information in addition to standard kinematic measurements. The objective is to improve state
estimation of highly maneuverable targets with noisy kinematic measurements. One limiting factor in obtaining
accurate state estimates of highly maneuvering targets is the high level of uncertainty in velocity and acceleration.
Target orientation information helps alleviate this problem by allowing the velocity and acceleration components to be determined more accurately. However, there is no sensor that explicitly measures target orientation. In this paper, the Observable Operator Model (OOM) is used together with information from multiple sensors to estimate a target orientation measurement. This is done by processing the sensor feature measurements from different aspect angles; the estimated orientation measurement is then used in conjunction with the kinematic measurements to estimate the target states. Simulation results show that the incorporation of target orientation can
enhance the tracking performance in the presence of fast moving and/or maneuvering targets. In addition, the
Posterior Cramer-Rao lower bound (PCRLB) that quantifies the achievable performance is derived. It is shown
that the proposed estimator meets the PCRLB.
An important component of tracking fusion systems is the ability to fuse various sensors into a coherent picture of the
scene. When multiple sensor systems are being used in an operational setting, the types of data vary. A significant but
often overlooked concern of multiple sensors is the incorporation of measurements that are unobservable. An
unobservable measurement is one that may provide information about the state, but cannot recreate a full target state. A
line of bearing measurement, for example, cannot provide complete position information. Often, such measurements
come from passive sensors such as a passive sonar array or an electronic surveillance measure (ESM) system.
Unobservable measurements will, over time, cause the measurement uncertainty to grow without bound. While some
tracking implementations have triggers to protect against the detrimental effects, many maneuver tracking algorithms
avoid discussing this implementation issue.
One maneuver tracking technique is the neural extended Kalman filter (NEKF). The NEKF is an adaptive estimation
algorithm that estimates the target track while training a neural network on-line to reduce the error between the a priori target motion model and the actual target dynamics. The weights of the neural network are trained in a manner similar to the state-estimation/parameter-estimation Kalman filter techniques. The NEKF has been shown to improve target tracking accuracy through maneuvers and has been used to predict target behavior using the new model that consists of the a priori
model and the neural network.
The key to the on-line adaptation of the NEKF is the fact that the neural network is trained using the same residuals as
the Kalman filter for the tracker. The neural network weights are treated as augmented states to the target track.
Through the state-coupling function, the weights are coupled to the target states. Thus, if the measurements cause the
states of the target track to be unobservable, then the weights of the neural network have unobservable modes as well. In
recent analysis, the NEKF was shown to have a significantly larger growth in the eigenvalues of the error covariance
matrix than the standard EKF tracker when the measurements were purely bearings-only. This caused detrimental
effects to the ability of the NEKF to model the target dynamics. In this work, the analysis is expanded to determine the
detrimental effects of bearings-only measurements of various uncertainties on the performance of the NEKF when these
unobservable measurements are interlaced with completely observable measurements. This analysis provides the ability
to put implementation limitations on the NEKF when bearings-only sensors are present.
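The abstract's key mechanism, appending the network weights to the track state and correcting them with the same Kalman residuals, can be sketched with a tiny one-layer network and a numerically computed Jacobian. Everything below (the prior model f_prior, the network size, the noise matrices) is an illustrative assumption, not the NEKF formulation of the paper.

```python
import numpy as np

def nn_correction(x, w, n_hidden=4):
    """Tiny one-layer network correction to the a priori motion model.
    w must have length 2 * n_hidden * len(x)."""
    n = x.size
    W1 = w[: n_hidden * n].reshape(n_hidden, n)
    W2 = w[n_hidden * n:].reshape(n, n_hidden)
    return W2 @ np.tanh(W1 @ x)

def augmented_predict_update(x, w, P, z, H, Q, R, f_prior):
    """One NEKF-style cycle on the augmented state s = [x; w].
    The weights are corrected by the same residual as the track states."""
    s = np.concatenate([x, w])
    nx = x.size

    def f_aug(s):  # weights follow a random walk in the dynamics
        return np.concatenate([f_prior(s[:nx]) + nn_correction(s[:nx], s[nx:]), s[nx:]])

    # numerical Jacobian of the augmented dynamics (keeps the sketch short)
    eps = 1e-6
    F = np.column_stack([(f_aug(s + eps * e) - f_aug(s)) / eps for e in np.eye(s.size)])
    s_pred = f_aug(s)
    P_pred = F @ P @ F.T + Q
    H_aug = np.hstack([H, np.zeros((H.shape[0], w.size))])  # only track states are measured
    S = H_aug @ P_pred @ H_aug.T + R
    K = P_pred @ H_aug.T @ np.linalg.inv(S)
    s_new = s_pred + K @ (z - H_aug @ s_pred)
    P_new = (np.eye(s.size) - K @ H_aug) @ P_pred
    return s_new[:nx], s_new[nx:], P_new
```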
To enhance target-tracking accuracy during maneuvers, we develop (1) a post-update compensation (PUC) method to
contain the maneuvering errors and (2) a maneuver indicator to signify the start and end of a maneuver. Tracking of a
maneuvering target is formulated as post-update compensation (PUC), in which a non-maneuvering tracker such as the
α-β filter is allowed to propagate and update its estimates based on the innovations (defined as the difference between a
measurement and its prediction/a priori) without maneuver consideration. Maneuver-induced errors are then removed
from the state updates/a posteriori, yielding compensated estimates based on the residuals (defined as the difference
between a measurement and the one generated from the state update). This post-update compensation (PUC) scheme is
equivalent to Dale Blair's [3] two-stage estimator but simpler in formulation. Simulation results are presented to
illustrate the PUC scheme with error analysis, as well as the implications of the enhanced tracking methods for increasing track life, reducing the location errors of maneuvering targets, and informing sensor-management decisions about when to schedule an observation for target identification.
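The compensation step itself is not detailed in the abstract; the sketch below only shows a plain non-maneuvering α-β cycle and the two quantities the PUC scheme distinguishes, the innovation (measurement minus prediction) and the residual (measurement minus state update). The gains alpha and beta are illustrative values.

```python
import numpy as np

def alpha_beta_step(x, v, z, dt, alpha=0.5, beta=0.1):
    """One cycle of a non-maneuvering alpha-beta tracker, returning the two
    quantities the PUC scheme distinguishes."""
    x_pred = x + dt * v                  # predict position
    innovation = z - x_pred              # a priori difference
    x_new = x_pred + alpha * innovation  # update position
    v_new = v + (beta / dt) * innovation
    residual = z - x_new                 # a posteriori difference
    # PUC (per the abstract) would estimate and remove maneuver-induced error
    # from (x_new, v_new) using these residuals; that step is not shown here.
    return x_new, v_new, innovation, residual
```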
Recently considerable research has been undertaken into estimating the quality of information (QoI) delivered by
military sensor networks. QoI essentially estimates the probability that the information available from the network is
correct. Knowledge of the QoI would clearly be of great use to decision makers using a network. An important class of sensors that provide inputs to real-life networks is concerned with target tracking. Assessing the tracking
performance of these sensors is an essential component in estimating the QoI of the whole network.
We have investigated three potential QoI metrics for estimating the dynamic target tracking performance of systems
based on some state estimation algorithms. We have tested them on different scenarios with varying degrees of tracking
difficulty. We performed experiments on simulated data so that we have a ground truth against which to assess the
performance of each metric. Our measure of ground truth is the Euclidean distance between the estimated position and
the true position. Recently researchers have suggested using the entropy of the covariance matrix as a metric of QoI
[1][2]. Two of our metrics were based on this approach: the first is the entropy of the covariance matrix relative to an ideal distribution, and the second is the information gain at each update of the covariance matrix. The third metric was
calculated by smoothing the residual likelihood value at each new measurement point, similar to the model update
likelihood function in an IMM filter.
Our experimental results show that reliable QoI metrics cannot be formulated using the covariance matrices alone. In other words, it is possible for a covariance matrix to have high information content while the position estimate is wrong. On the other hand, the smoothed residual likelihood does correlate well with tracking performance, and can be
measured without knowledge of the true target position.
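For reference, the quantities behind these metrics can be sketched as follows: the differential entropy of a Gaussian with covariance P, and an exponentially smoothed innovation likelihood of the sort used for model probabilities in an IMM filter. The smoothing constant gamma is an assumed illustrative value, not one from the paper.

```python
import numpy as np

def gaussian_entropy(P):
    """Differential entropy (nats) of a Gaussian with covariance P."""
    n = P.shape[0]
    return 0.5 * (n * np.log(2 * np.pi * np.e) + np.linalg.slogdet(P)[1])

def smoothed_residual_likelihood(prev, innovation, S, gamma=0.9):
    """Exponentially smoothed likelihood of the innovation under N(0, S),
    similar to the model-update likelihood in an IMM filter."""
    m = innovation.size
    quad = innovation @ np.linalg.solve(S, innovation)
    like = np.exp(-0.5 * quad) / np.sqrt((2 * np.pi) ** m * np.linalg.det(S))
    return gamma * prev + (1 - gamma) * like
```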
Multisensor Fusion, Multitarget Tracking, and Resource Management II
Any implementation of a real-time tracking system is subject to capacity constraints in terms of how many measurements or tracks can be processed per unit time. This paper addresses the problem of selecting which
measurements or tracks should be discarded to maximize the expected number of targets of interest being both tracked
and correctly associated with remote datasets. In particular, the problem is addressed when only a single dataset is
available.
Evaluating the effectiveness of fusion systems in a multi-sensor, multi-platform environment has historically been a
tedious and time-consuming process. Typically it has been necessary to perform data collection and analysis in different
code baselines, which requires error-prone data format conversions and manual spatial-temporal data registration. The
Metrics Assessment System (MAS) has been developed to provide automated, real-time metrics calculation and display.
MAS presents metrics in tables, graphs, and overlays within a tactical display. Comparative assessments are based on
truth tracks, including position, velocity, and classification information. The system provides tabular history drill-down
for each metric and each track. MAS, which is currently being evaluated on anti-submarine warfare scenarios, can be a
valuable tool both in objectively evaluating the performance of tracking and fusion algorithms and in identifying asset and target interactions that cause the fused tracks to deviate from the true ones.
Multisensor applications rely on effectively managing sensor resources. In particular, next-generation multifunctional
agile radars demand innovative resource management techniques to achieve a common sensing goal
while satisfying resource constraints. We consider an active sensing platform where multiple waveform-agile
radars scan a hostile surveillance area for targets. A central controller adaptively selects which transmitters
should be active and which waveforms should be transmitted. The controller's goal is to choose the sequence
of (transmitter, waveform) pairs that yields the most accurate tracking estimate. We formulate this problem as
a partially observable Markov decision process (POMDP), and propose a novel "two-level" scheduling scheme
that uses two distinct schedulers: (1) at the lower level, a myopic waveform scheduler; and (2) at the upper
level, a non-myopic transmitter scheduler. Scheduling decisions at these two levels are carried out differently.
While waveforms are updated at every radar scan, a new set of transmitters only becomes active if the overall
tracking accuracy falls below a given threshold, or if the "detection risk" is exceeded, given by a limit on the
number of consecutive scans during which a set of transmitters is active. By simultaneously exploiting myopic
and non-myopic scheduling schemes, we benefit from trading off short-term for long-term performance, while
maintaining low computational costs. Moreover, in certain situations, the myopic scheduling of waveforms at
each radar scan improves on non-myopic actions taken in the past. Monte Carlo simulations are used to evaluate
the performance of the proposed adaptive sensing scheme in a multitarget tracking setting.
A dynamic path-planning algorithm is proposed for UAV tracking. Based on the tangent lines between the dynamic UAV turning circle and an objective circle, an analytical optimal path is derived under UAV operational constraints, given a target position and the current UAV dynamic state. In this paper, we first show that path planning for a UAV tracking a ground target can be formulated as an optimal control problem consisting of a system dynamic model, a set of boundary conditions, control constraints, and a cost criterion. We then derive a closed-form solution to initiate dynamic tangent lines between the UAV turning-limit circle and an objective circle, which is a desired orbit pattern over a target. Basic tracking strategies are illustrated for finding the optimal path for UAV tracking. A particle filter is applied when the target moves on a defined road network. Obstacle avoidance strategies are also addressed. With the help of computer simulations, we show that the algorithm provides efficient and effective tracking performance in various
scenarios, including a target moving according to waypoints (time-based and/or speed-based) or a random kinematics
model.
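The closed-form tangent construction of the paper is not reproduced here, but the standard geometry for the outer tangent lines between two circles (for example, the UAV turning circle and the objective circle) is a short calculation, sketched below under the assumption that the circles are separated enough for outer tangents to exist.

```python
import numpy as np

def outer_tangent_points(c1, r1, c2, r2):
    """Tangent points of the two outer tangent lines between circles
    (c1, r1) and (c2, r2). Requires d > |r1 - r2|."""
    d = np.linalg.norm(c2 - c1)
    theta = np.arctan2(c2[1] - c1[1], c2[0] - c1[0])   # center-to-center bearing
    alpha = np.arccos((r1 - r2) / d)                    # offset of the shared tangent normal
    lines = []
    for s in (+1, -1):                                  # the two outer tangents
        n = np.array([np.cos(theta + s * alpha), np.sin(theta + s * alpha)])
        lines.append((c1 + r1 * n, c2 + r2 * n))        # tangent points on each circle
    return lines
```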
Many algorithms may be applied to solve the target tracking problem, including the Kalman Filter and different types of
nonlinear filters, such as the Extended Kalman Filter (EKF), Unscented Kalman Filter (UKF) and Particle Filter (PF).
This paper describes an intelligent algorithm that was developed to elegantly select the appropriate filtering technique
depending on the problem and the scenario, based upon a sliding window of the Normalized Innovation Squared (NIS).
This technique shows promise for the single target, single radar tracking problem domain. Future work is planned to
expand the use of this technique to multiple targets and multiple sensors.
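The paper's selection logic is not given in the abstract; a sliding-window NIS consistency check of the kind such a selector could use is sketched below, with an illustrative chi-square acceptance test (the window length and significance level are assumptions).

```python
import numpy as np
from collections import deque
from scipy.stats import chi2

class NISMonitor:
    """Sliding-window Normalized Innovation Squared (NIS) consistency check."""
    def __init__(self, window=20, dim_z=2, alpha=0.05):
        self.buf = deque(maxlen=window)
        self.dim_z = dim_z
        self.alpha = alpha

    def update(self, innovation, S):
        self.buf.append(innovation @ np.linalg.solve(S, innovation))

    def consistent(self):
        """True if the windowed NIS sum lies inside the chi-square bounds,
        i.e. the currently selected filter still matches the data."""
        n = len(self.buf)
        if n == 0:
            return True
        total = sum(self.buf)                       # sum of n chi2(dim_z) variables
        lo = chi2.ppf(self.alpha / 2, n * self.dim_z)
        hi = chi2.ppf(1 - self.alpha / 2, n * self.dim_z)
        return lo <= total <= hi
```

A selector could switch to a more capable (but more expensive) nonlinear filter whenever consistent() returns False, and back again once the simpler filter's window is accepted.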
Multisensor Fusion Methodologies and Applications I
The theoretical foundation for the probability hypothesis density (PHD) filter is the FISST multitarget differential
and integral calculus. The "core" PHD filter presumes a single sensor. Theoretically rigorous formulas
for the multisensor PHD filter can be derived using the FISST calculus, but are computationally intractable. A
less theoretically desirable solution, the iterated-corrector approximation, must be used instead. Recently, it
has been argued that an "elementary" methodology, the "Poisson-intensity approach," renders FISST obsolete.
It has further been claimed that the iterated-corrector approximation is suspect, and in its place an allegedly
superior "general multisensor intensity filter" has been proposed. In this and a companion paper I demonstrate
that it is these claims which are erroneous. The companion paper introduces formulas for the actual "general
multisensor intensity filter." In this paper I demonstrate that (1) the "general multisensor intensity filter" fails
in important special cases; (2) it will perform badly in even the easiest multitarget tracking problems; and (3)
these rather serious missteps suggest that the "Poisson-intensity approach" is inherently faulty.
The theoretical foundation for the probability hypothesis density (PHD) filter is the FISST multitarget differential
and integral calculus. The "core" PHD filter presumes a single sensor. Theoretically rigorous formulas
for the multisensor PHD filter can be derived using the FISST calculus, but are computationally intractable. A
less theoretically desirable solution, the iterated-corrector approximation, must be used instead. Recently, it
has been argued that an "elementary" methodology, the "Poisson-intensity approach," renders FISST obsolete.
It has further been claimed that the iterated-corrector approximation is suspect, and in its place an allegedly
superior "general multisensor intensity filter" has been proposed. In this and a companion paper I demonstrate
that it is these claims which are erroneous. This paper introduces formulas for the actual "general multisensor
intensity filter." In the companion paper I demonstrate that the "general multisensor intensity filter" will perform
badly in even the easiest multitarget tracking problems; and argue that this suggests that the "Poisson-intensity
approach" is inherently faulty.
The Probability Hypothesis Density (PHD) filter is a computationally tractable alternative to the optimal nonlinear
filter. The PHD filter propagates the first moment instead of the full posterior density. Evaluation of
the PHD enables one to extract the number of targets as well as their individual states from noisy data with
data association uncertainties. Recently, a smoothing algorithm was proposed by the authors to improve the
capability of PHD based tracking. Smoothing produces delayed estimates, which yield better estimates not only
for the target states but also for the unknown number of targets. However, in the case of the maneuvering target
tracking problem, this single model method may not provide accurate estimates. In this paper, a multiple model
PHD smoothing method is proposed to improve the tracking of multiple maneuvering targets. A fast sequential
Monte Carlo implementation for a special case is also provided. Simulations involving multiple maneuvering targets are performed with the proposed method. The results confirm the improved performance of the
proposed algorithm.
A multi-object Bayes filter analogous to the single-object Bayes filter can be derived using Finite Set Statistics for
the estimation of an unknown and randomly varying number of target states from random sets of observations.
The joint target-detection and tracking (JoTT) filter is a truncated version of the multi-object Bayes filter for
the single target detection and tracking problem. Despite the success of Finite-Set Statistics for multi-object
Bayesian filtering, the problem of multi-object smoothing with Finite Set Statistics has yet to be addressed. I
propose multi-object Bayes versions of the forward-backward and two-filter smoothers and derive optimal non-linear
forward-backward and two-filter smoothers for jointly detecting, estimating and tracking a single target
in cluttered environments. I also derive optimal Probability Hypothesis Density (PHD) smoothers, restricted to
a maximum of one target and show that these are equivalent to their Bayes filter counterparts.
Multisensor Fusion Methodologies and Applications II
We further develop our previous work on sensor management of disparate and dispersed sensors for tracking
geosynchronous satellites presented last year at this conference by extending the approach to a network of Space Based
Visible (SBV) type sensors on board LEO platforms. We demonstrate novel multisensor-multiobject algorithms which
account for complex space conditions such as the phase angles and Earth occlusions. Phase angles are determined by
the relative orientation of the sun, the SBV sensor, and the object, and are an important factor in determining the
probability of detection for the objects. To optimally and simultaneously track multiple geosynchronous satellites, our
tracking algorithms are based on the Probability Hypothesis Density (PHD) approximation of multiobject densities,
its regularized particle filter implementations (regularized PHD-PF), and a sensor management objective function, the
Posterior Expected Number of Objects.
Optimal sensor management of dispersed and disparate sensors for tracking Low Earth Orbit (LEO) objects
presents a daunting theoretical and practical challenge since it requires the optimal utilization of different types of
sensors and platforms that include Ground Based Radars (GBRs) positioned throughout the globe, and the Space
Based Visible (SBV) sensor on board LEO platforms. We derive and demonstrate new computationally efficient algorithms
for multisensor-multiobject tracking of LEO objects. The algorithms are based on the Posterior Expected
Number of Objects as the sensor management objective function, observation models for the sensors/platforms, and
the Probability Hypothesis Density Particle Filter (PHD-PF) tracker.
The paper presents a formal approach for mapping from an entity-relationship model of a selected application domain
to the functional components of the JDL fusion model. The resultant functional decomposition supports both
traditional sensor, as well as human-generated text input. To demonstrate the generality of the mapping, examples are
offered for three distinct application domains: (1) Intelligence Fusion, (2) Aircraft Collision Avoidance, and (3)
Robotic Control. The first-principles-based approach begins by viewing fusion as the composition of similar and
dissimilar entities. Next, the fusion triple (entity, location, time) is defined where entities can be either physical or
non-physical. Coupling the fusion triple with this generalized view of fusion leads to the identification of eight base-level
fusion services that serve as the building blocks of individual composition products.
Receiver operating characteristic (ROC) curves provide a means to
evaluate the performance of ATR systems. Of specific interest is
the ability to evaluate the performance of fused ATR systems in
order to gain information on how well the combined system performs
with respect to, for instance, single systems, other fusion
methods, or pre-specified performance criteria. Although various
ROC curves for fused systems have been demonstrated by many
researchers, information regarding the bounds for these curves has
not been examined thoroughly. This paper seeks to describe several
bounds that exist on ROC curves from fused, correlated ATR
systems. These bounds include the lower bound for systems fused
using Boolean rules and bounds based on a measure of the variance
of the ATR system. Examples using simulated ROC curves generated
from correlated and uncorrelated ATR systems will be given as well
as a discussion of how correlation affects these bounds. Examining
such bounds for a set of candidate fusion rules a priori can focus
efforts towards those fused systems that better meet specified ATR
system performance criteria.
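For comparison with the correlated bounds discussed here, the standard independence-based operating points for Boolean OR and AND fusion of two detectors are simple to state; the sketch below covers only that textbook case, not the correlated bounds of the paper.

```python
def fuse_or(pd1, pfa1, pd2, pfa2):
    """Boolean OR fusion of two independent detectors (declare if either fires)."""
    return 1 - (1 - pd1) * (1 - pd2), 1 - (1 - pfa1) * (1 - pfa2)

def fuse_and(pd1, pfa1, pd2, pfa2):
    """Boolean AND fusion of two independent detectors (declare only if both fire)."""
    return pd1 * pd2, pfa1 * pfa2

# sweeping both detector thresholds and taking the upper envelope of the fused
# (Pfa, Pd) points traces out the fused ROC curve under independence
```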
An Automatic Target Recognition (ATR) system with N possible output labels (or decisions) will have N(N − 1) possible
errors. The Receiver Operating Characteristic (ROC) manifold was created to quantify all of these errors. When multiple
ATR systems are fused, the assumption of independence is usually made in order to mathematically combine the individual
ROC manifolds for each system into one ROC manifold. This paper will investigate the label fusion (also called decision
fusion) of multiple classification systems that have the same number of output labels. Boolean rules do not exist for multiple
symbols; thus, we derive possible Boolean-like rules as well as other rules that yield label fusion rules. The
formula for the resultant ROC manifold of the fused classification systems which incorporates the individual classification
systems will be derived. Specifically, given a fusion rule and two classification systems, the ROC manifold for the fused
system is produced. We generate formulas for the Boolean-like OR rule, Boolean-like AND rule, and other rules and give
the resultant ROC manifold for the fused system. Examples will be given that demonstrate how each formula is used.
This paper investigates the problem of non-myopic multiple platform trajectory control in a multiple target search and
track setting. It presents a centralized receding discrete time horizon controller (RHC) with variable-step look-ahead for
motion planning of a heterogeneous ensemble of airborne sensor platforms. The controller operates in a closed feedback
loop with a Multiple Hypothesis Tracker (MHT) that fuses the disparate sensor data to produce target declarations and
state estimates. The RHC action space for each air vehicle is represented via a maneuver automaton with simple motion
primitives. The reward function is based on expected Fisher information gain and priority scaling of target tracks and
ground regions. A customized Particle Swarm Optimizer (PSO) is developed to handle the resulting non-Markovian,
time-varying, multi-modal, and discontinuous reward function. The algorithms were evaluated by simulating ground
surveillance scenarios using representative sensors with varying fields of view and typical target densities and motion
profiles. Simulation results show improved aggregate target detection, track accuracy, and track maintenance for closed-loop
operation as compared with typical open-loop surveillance plans.
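The customized PSO of the paper is not reproduced; the sketch below is a generic particle swarm optimizer over a bounded continuous search space that a controller of this kind could call to maximize a reward function. All swarm parameters are illustrative assumptions.

```python
import numpy as np

def pso_maximize(reward, dim, n_particles=30, iters=100, bounds=(-1.0, 1.0),
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer for a (possibly multimodal,
    discontinuous) reward function."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([reward(p) for p in x])
    g = pbest[np.argmax(pbest_val)].copy()          # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([reward(p) for p in x])
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[np.argmax(pbest_val)].copy()
    return g, pbest_val.max()
```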
Military tanks, cargo or troop carriers, missile carriers, and rocket launchers often hide from detection in forests, which complicates the problem of locating these hidden targets. An electro-optic camera mounted on a surveillance aircraft or unmanned aerial vehicle is used to capture images of the forests with possible hidden targets, e.g., rocket launchers. We consider random forests with longitudinal and latitudinal correlations.
Specifically, foliage coverage is encoded with a binary representation (i.e., foliage or no foliage), and is correlated in
adjacent regions. We address the detection problem of camouflaged targets hidden in random forests by building
memory into the observations. In particular, we propose an efficient algorithm to generate random forests,
ground, and camouflage of hidden targets with two dimensional correlations. The observations are a sequence
of snapshots consisting of foliage-obscured ground or target. Theoretically, detection is possible because there
are subtle differences in the correlations of the ground and camouflage of the rocket launcher. However, these
differences are well beyond human perception. To detect the presence of hidden targets automatically, we develop
a Markov representation for these sequences and modify the classical filtering equations to allow the Markov
chain observation. Particle filters are used to estimate the position of the targets in combination with a novel
random weighting technique. Furthermore, we give positive proof-of-concept simulations.
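The paper's generator is not specified in the abstract; as a toy illustration of a binary foliage map with two-dimensional correlations, the sketch below couples each cell to its left and upper neighbors. The base probability and coupling strength are made-up parameters.

```python
import numpy as np

def correlated_foliage(rows, cols, p_base=0.5, coupling=0.3, seed=0):
    """Binary foliage map (1 = foliage) where each cell's probability is pulled
    toward the values of its left and upper neighbors."""
    rng = np.random.default_rng(seed)
    f = np.zeros((rows, cols), dtype=int)
    for i in range(rows):
        for j in range(cols):
            p = p_base
            if j > 0:
                p += coupling * (f[i, j - 1] - p_base)   # longitudinal correlation
            if i > 0:
                p += coupling * (f[i - 1, j] - p_base)   # latitudinal correlation
            f[i, j] = rng.random() < np.clip(p, 0.0, 1.0)
    return f
```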
Multisensor Fusion Methodologies and Applications III
Structured pedigree is a way to compress pedigree information. When applied to distributed fusion systems, the
approach avoids the well known problem of information double counting resulting from ignoring the cross-correlation
among fused estimates. Other schemes that attempt to compute optimal fused estimates require the transmission of full
pedigree information or raw data. This usually cannot be implemented in practical systems because of the enormous
requirements in communications bandwidth. The Structured Pedigree approach achieves data compression by
maintaining multiple covariance matrices, one for each uncorrelated source in the network. These covariance matrices
are transmitted by each node along with the state estimate. This represents a significant compression when compared to
full pedigree schemes. The transmission of these covariance matrices (or a subset of these covariance matrices) allows
for an efficient fusion of the estimates, while avoiding information double counting and guaranteeing consistency of the estimates. This is achieved by exploiting the additional partial knowledge of the correlation between the estimates. The
approach uses a generalized version of the Split Covariance Intersection algorithm that applies to multiple estimates and
multiple uncorrelated sources. In this paper we study the performance of the proposed distributed fusion system by
analyzing a simple but instructive example.
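The paper's Split Covariance Intersection generalizes the basic two-estimate Covariance Intersection, which is sketched below with the mixing weight chosen to minimize the trace of the fused covariance; this is the standard CI, not the generalized algorithm of the paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def covariance_intersection(x1, P1, x2, P2):
    """Basic Covariance Intersection fusion of two estimates with unknown
    cross-correlation."""
    def fused(omega):
        Pinv = omega * np.linalg.inv(P1) + (1 - omega) * np.linalg.inv(P2)
        P = np.linalg.inv(Pinv)
        x = P @ (omega * np.linalg.solve(P1, x1) + (1 - omega) * np.linalg.solve(P2, x2))
        return x, P
    # choose the mixing weight that minimizes the trace of the fused covariance
    res = minimize_scalar(lambda w: np.trace(fused(w)[1]), bounds=(0.0, 1.0),
                          method="bounded")
    return fused(res.x)
```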
Bioterrorism can be a very sophisticated and catastrophic way of attacking a nation. Countering it requires the development of a complete architecture designed specifically for this purpose, which includes but is not limited to sensing/detection, tracking and fusion, communication, and other components. In this paper we focus on one such architecture and evaluate its performance. Various sensors for this specific purpose have been studied. The accent has been on the use of distributed systems such as ad-hoc networks and on the application of epidemic data fusion algorithms to better manage the bio-threat data. The emphasis has been on understanding the performance characteristics of these algorithms under diverse real-time scenarios, which are implemented through extensive Java-based simulations. Through comparative studies on communication and fusion, the performance of the channel filter algorithm for biological sensor data fusion is validated.
Multisensor Fusion Methodologies and Applications IV
In this paper, a cost function for setting the optimal geometry of multiple sensor locations is proposed, and related theorems for optimal sensor-location geometries are obtained. In order to keep the sensors in an optimal deployment for a moving target, a self-adjusting method is devised, and AOA-based algorithms for self-adjusting the optimal sensor locations and estimating the moving target's location are developed. To check the efficiency of the new algorithms, simulation results are also provided.
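The cost function and self-adjustment rules are not reproduced here; as a reference for the estimation side, a standard least-squares AOA (bearings-only) position fix from multiple sensors is sketched below.

```python
import numpy as np

def aoa_fix(sensor_pos, bearings):
    """Least-squares target position from bearings theta_i (radians, measured
    from the x-axis) taken at sensor positions p_i. Each bearing defines the
    line sin(t)*x - cos(t)*y = sin(t)*px - cos(t)*py."""
    A = np.column_stack([np.sin(bearings), -np.cos(bearings)])
    b = np.sum(A * sensor_pos, axis=1)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos
```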
A fusion-based localization technique for location-based services in indoor environments is introduced herein, based on
ultrasound time-of-arrival measurements from multiple off-the-shelf range estimating sensors which are used in a
market-available localization system. In-situ field measurement results indicated that the respective off-the-shelf system was unable to estimate position in most cases, while the underlying sensors are of low quality and yield
highly inaccurate range and position estimates. An extensive analysis is performed and a model of the sensor-performance
characteristics is established. A low-complexity but accurate sensor fusion and localization technique is
then developed, which consists of evaluating multiple sensor measurements and selecting the one that is considered
most-accurate based on the underlying sensor model. Optimality, in the sense of a genie selecting the optimum sensor, is
subsequently evaluated and compared to the proposed technique. The experimental results indicate that the proposed
fusion method exhibits near-optimal performance and, albeit being theoretically suboptimal, it largely overcomes most
flaws of the underlying single-sensor system resulting in a localization system of increased accuracy, robustness and
availability.
In this work we analyze the performance of several approaches to sniper localization in a network of mobile sensors.
Mobility increases the complexity of calibration (i.e., self localization, orientation, and time synchronization) in
a network of sensors. The sniper localization approaches studied here rely on time-difference of arrival (TDOA)
measurements of the muzzle blast and shock wave from multiple, distributed single-sensor nodes. Although
these approaches eliminate the need for self-orienting, node position calibration and time synchronization are
still persistent problems. We analyze the influence of geometry and the sensitivity to time synchronization and
node location uncertainties. We provide a Cramer-Rao bound (CRB) for location and bullet trajectory estimator
errors for each respective approach. When the TDOA is taken as the difference between the muzzle blast and
shock wave arrival times, the resulting localization performance is independent of time synchronization and is
less affected by geometry compared to other approaches.
Multisensor Fusion Methodologies and Applications V
Performance measures for families of classification systems that rely upon the analysis of receiver operating
characteristics (ROCs), such as area under the ROC curve (AUC), often fail to fully address the issue of risk,
especially for classification systems involving more than two classes. For the general case, we denote matrices
of class prevalence, costs, and class-conditional probabilities, and assume that costs are subjectively fixed, that acceptable estimates for the expected values of the class-conditional probabilities exist, and that a variable in one such matrix is mutually independent of those in any other matrix. The ROC Risk Functional (RRF), valid for any finite number
of classes, has an associated parameter argument, which specifies a member of a family of classification systems,
and for which there is an associated classification system minimizing Bayes risk over the family. We typify
joint distributions for class prevalences over standard simplices by means of uniform and beta distributions, and
create a family of classification systems using actual data, testing independence assumptions under two such
class prevalence distributions. Examples are given where the risk is minimized under two different sets of costs.
The Receiver Operating Characteristic (ROC) curve is typically used to quantify the performance of Automatic Target Recognition (ATR) systems. When multiple systems are to be fused, assumptions are made in order to mathematically combine the individual ROC curves for each of these ATR systems in order to form the ROC curve of the fused system. Often, one of these assumptions is independence between the systems. However, correlation may exist between the classifiers, processors, sensors and the outcomes used to generate each ROC curve. This paper will demonstrate a method for creating a ROC curve of the fused systems which incorporates the correlation that exists between the individual systems. Specifically, we will use the derived covariance matrix between multiple systems to compute the existing correlation and level of dependence between pairs of systems. The ROC curve for the fused system is then produced, adjusting for this level of dependency, using a given fusion rule. We generate the formula for the Boolean OR and AND rules, giving the exact ROC curve for the fused system, that is, not a bound.
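As a sketch of the inclusion-exclusion step such a method rests on: once the joint declaration probability of the two systems is known (for example, from the derived covariance between their binary decisions), the OR- and AND-rule operating points are immediate. The function below assumes that joint probability is supplied; deriving it from the covariance matrix, as the paper does, is not shown.

```python
def fuse_correlated(p1, p2, p_joint):
    """Operating point of correlated Boolean fusion. p1, p2 are the marginal
    probabilities that each system declares (use Pd's or Pfa's consistently);
    p_joint = P(both declare) = p1*p2 + cov(decision1, decision2)."""
    p_and = p_joint
    p_or = p1 + p2 - p_joint
    return p_or, p_and

# independence is the special case p_joint = p1 * p2; positive correlation
# raises the AND-rule probability and lowers the OR-rule probability
# relative to the independence formulas
```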
The universal continuous-discrete tracking problem requires the solution of a Fokker-Planck-Kolmogorov forward
equation (FPKfe) for an arbitrary initial condition. Using results from quantum mechanics, the exact
fundamental solution for the FPKfe is derived for the state model of arbitrary dimension with Benes drift that
requires only the computation of elementary transcendental functions and standard linear algebra techniques;
no ordinary or partial differential equations need to be solved. The measurement process may be an arbitrary,
discrete-time nonlinear stochastic process, and the time step size can be arbitrary. Numerical examples are
included, demonstrating its utility in practical implementation.
Many radar tracking problems involve a subset of range, angle, and Doppler measurements. A naïve grid-based approach is computationally infeasible. In fact, even an approach based on the use of
sparse tensors and a fixed, Cartesian grid is computationally very expensive. It is shown that an adaptive grid
that is partly based on measurements reduces the computational load drastically so that the filtered solution
is obtainable in real-time. A numerical example is included to demonstrate the reliable and highly accurate
solutions obtainable using the proposed combination of measurement-based adaptive grid and sparse tensors.
Multisensor Fusion Methodologies and Applications VI
In this effort we acquired and registered a multi-spectral dynamic image test set with the intent of using the imagery to
assess the operational effectiveness of static and dynamic image fusion techniques for a range of relevant military tasks.
This paper describes the image acquisition methodology, the planned human visual performance task approach, the
lessons learned during image acquisition and the plans for a future, improved image set, resolution assessment
methodology and human visual performance task.
Multi-sensor management for data fusion in target tracking concerns issues of sensor assignment and scheduling by
managing or coordinating the use of multiple sensor resources. Since a centralized sensor management technique has a
crucial limitation in that the failure of the central node would cause whole system failure, a decentralized sensor
management (DSM) scheme is increasingly important in modern multi-sensor systems. DSM is afforded in modern
systems through increased bandwidth, wireless communication, and enhanced power. However, protocols for system
control are needed to manage device access. As game theory offers learning models for distributed allocation of surveillance resources and provides mechanisms to handle the uncertainty of the surveillance area, we propose an agent-based negotiable game-theoretic approach for decentralized sensor management (ANGADS). With the decentralized sensor management scheme, sensor assignment occurs locally; there is no central node, which reduces the risk of
whole-system failure. Simulation results for a multi-sensor target-tracking scenario demonstrate the applicability of the
proposed approach.
In many real-world situations and applications that involve humans or machines (e.g., situation awareness, scene
understanding, driver distraction, workload reduction, assembly, robotics, etc.) multiple sensory modalities (e.g., vision,
auditory, touch, etc.) are used. The incoming sensory information can overwhelm processing capabilities of both humans
and machines. An approach for estimating what is most important in our sensory environment (bottom-up or goal-driven)
and using that as a basis for workload reduction or taking an action could be of great benefit in applications
involving humans, machines or human-machine interactions. In this paper, we describe a novel approach for determining
high saliency stimuli in multi-modal sensory environments, e.g., vision, sound, touch, etc. In such environments, the high
saliency stimuli could be a visual object, a sound source, a touch event, etc. The high saliency stimuli are important and
should be attended to from a perception, cognition, and/or action perspective. The system accomplishes this by the fusion
of saliency maps from multiple sensory modalities (e.g., visual and auditory) into a single, fused multimodal saliency
map that is represented in a common, higher-level coordinate system. This paper describes the computational model and
method for generating a multi-modal, or fused, saliency map. The fused saliency map can be used to determine primary and
secondary foci of attention as well as for active control of a hardware/device. Such a computational model of fused
saliency map would be immensely useful for a machine-based or robot-based application in a multi-sensory
environment. We describe the approach, system and present preliminary results on a real-robotic platform.
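The full model is richer than this, but the basic operation of fusing saliency maps from different modalities can be sketched as normalizing each map, resampling it onto a common grid, and taking a weighted sum whose maximum gives the primary focus of attention. The weights, grid, and nearest-neighbor resampling below are illustrative assumptions, not the paper's coordinate transformation.

```python
import numpy as np

def fuse_saliency(maps, weights, out_shape):
    """Fuse saliency maps from different modalities on a common grid.
    Each map is normalized to [0, 1], resampled to out_shape by nearest-neighbor
    indexing, and combined as a weighted sum."""
    fused = np.zeros(out_shape)
    for m, w in zip(maps, weights):
        m = (m - m.min()) / (m.max() - m.min() + 1e-12)       # normalize
        ri = np.arange(out_shape[0]) * m.shape[0] // out_shape[0]
        ci = np.arange(out_shape[1]) * m.shape[1] // out_shape[1]
        fused += w * m[np.ix_(ri, ci)]                        # resample + weight
    focus = np.unravel_index(np.argmax(fused), fused.shape)   # primary focus
    return fused, focus
```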
Modern sensors have a range of modalities including SAR, EO, and IR. Registration of multimodal imagery from such
sensors is becoming an increasingly common pre-processing step for various image exploitation activities such as image
fusion for ATR. Over the past decades, several approaches to multisensor image registration have been developed.
However, performance of these image registration algorithms is highly dependent on scene content and sensor operating
conditions, with no single algorithm working well across the entire operating conditions space. To address this problem,
in this paper we present an approach for dynamic selection of an appropriate registration algorithm, tuned to the scene
content and feature manifestation of the imagery under consideration. We consider feature-based registration using
Harris corners, Canny edge detection, and CFAR features, as well as pixel-based registration using cross-correlation and
mutual information. We develop an approach for selecting the optimal combination of algorithms to use in the dynamic
selection algorithm. We define a performance measure which balances contributions from convergence redundancy and
convergence coverage components calculated over sample imagery, and optimize the measure to define an optimal
algorithm set. We present numerical results demonstrating the improvement in registration performance through use of
the dynamic algorithm selection approach over results generated through use of a fixed registration algorithm approach.
The results provide registration convergence probabilities for geo-registering test SAR imagery against associated EO
reference imagery. We present convergence results for various match score normalizations used in the dynamic selection
algorithm.
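One of the pixel-based scores named above, mutual information, can be estimated from a joint intensity histogram; a minimal sketch follows (the bin count is an arbitrary choice, and this is not the paper's selection framework).

```python
import numpy as np

def mutual_information(img1, img2, bins=32):
    """Mutual information between two equally sized images, estimated from the
    joint intensity histogram (a common pixel-based registration score)."""
    joint, _, _ = np.histogram2d(img1.ravel(), img2.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```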
The problem of real-time image geo-referencing is encountered in all vision based cognitive systems. In this
paper we present a model-image feedback approach to this problem and show how it can be applied to image
exploitation from Unmanned Aerial Vehicle (UAV) vision systems. By calculating reference images from a known terrain
database, using a novel ray trace algorithm, we are able to eliminate foreshortening, elevation, and lighting distortions,
introduce registration aids and reduce the geo-referencing problem to a linear transformation search over the two
dimensional image space. A method for shadow calculation that maintains real-time performance is also presented.
The paper then discusses the implementation of our model-image feedback approach in the Perspective View
Nascent Technology (PVNT) software package and provides sample results from UAV mission control and target
mensuration experiments conducted at China Lake and Camp Roberts, California.
We extend recent automated computer vision algorithms to reconstruct the
global three-dimensional structures for photos and videos shot at fixed
points in outdoor city environments. Mosaics of digital stills and
embedded videos are georegistered by matching a few of their 2D features
with 3D counterparts in aerial ladar imagery. Once image planes are
aligned with world maps, abstract urban knowledge can propagate from the
latter into the former. We project geotagged annotations from a 3D map
into a 2D video stream and demonstrate their tracking of buildings and streets
in a clip with significant panning motion. We also present an interactive
tool which enables users to select city features of interest in video
frames and retrieve their geocoordinates and ranges. Implications of this
work for future augmented reality systems based upon mobile smart phones
are discussed.
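The annotation-propagation step described above amounts to projecting geotagged 3D points through the recovered camera geometry into pixel coordinates. The following is a minimal pinhole-projection sketch under assumed intrinsics and pose; the values are stand-ins, not parameters from the paper.

```python
# Minimal, hypothetical sketch of pushing a 3D geotagged point into a
# georegistered video frame with a standard pinhole projection. The camera
# intrinsics/extrinsics below are invented for illustration.
import numpy as np

def project(points_world, K, R, t):
    """Project Nx3 world points to Nx2 pixel coordinates (pinhole model)."""
    cam = R @ points_world.T + t.reshape(3, 1)   # world -> camera frame
    pix = K @ cam                                # camera -> image plane
    return (pix[:2] / pix[2]).T

if __name__ == "__main__":
    K = np.array([[1000.0, 0, 640], [0, 1000.0, 360], [0, 0, 1]])  # assumed intrinsics
    R, t = np.eye(3), np.array([0.0, 0.0, 5.0])                    # assumed pose
    landmarks = np.array([[0.0, 0.0, 10.0], [2.0, -1.0, 20.0]])    # geotagged points
    print(project(landmarks, K, R, t))   # pixel locations at which to draw annotations
```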
Advances in understanding the biology of vision show that humans use not only bottom-up, feature-based information in
visual analysis, but also top-down contextual information. To reflect this method of processing, we developed a
technology called CASSIE for Science Applications International Corporation (SAIC) that uses low-level image features
and contextual cues to determine the likelihood that a certain target will be found in a given area.
CASSIE is a tool by which information from various data layers can be probabilistically combined to determine spatial
and informational context within and across different types of data. It is built on a spatial foundation consisting of a two-dimensional
hexagonal, hierarchical grid structure for data storage and access. This same structure facilitates very fast
computation of information throughout the hierarchy for all data layers, as well as fast propagation of probabilistic
information derived from those layers.
Our research with CASSIE investigates the effectiveness of generated probability maps to reflect a human interpretation,
potential benefits in terms of accuracy and processing speed for subsequent target detection, and methods for
incorporating feedback from target detection algorithms to apply additional contextual constraints (for example,
allowable or expected target groupings). We discuss further developments such as learning in CASSIE and how to
incorporate additional data modalities.
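The sketch below illustrates only the general layer-fusion idea behind such probability maps: per-layer likelihoods are combined per cell under a naive independence assumption. A plain square grid stands in for CASSIE's hexagonal, hierarchical structure, and the layers are synthetic.

```python
# Hypothetical sketch of combining several contextual data layers into a single
# target-likelihood surface under a naive independence assumption. A square grid
# stands in for CASSIE's hexagonal, hierarchical grid.
import numpy as np

def fuse_layers(layer_likelihoods, prior=None):
    """Combine per-cell likelihoods from several layers in the log domain."""
    stack = np.stack(layer_likelihoods)                 # (n_layers, H, W)
    log_post = np.sum(np.log(stack + 1e-12), axis=0)
    if prior is not None:
        log_post += np.log(prior + 1e-12)
    post = np.exp(log_post - log_post.max())
    return post / post.sum()                            # normalised probability map

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    layers = [rng.random((64, 64)) for _ in range(3)]   # stand-in context layers
    prob_map = fuse_layers(layers)
    print(prob_map.shape, float(prob_map.sum()))
```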
FLIR images are essential for the detection and recognition of ground targets. Small targets can be enhanced using super-resolution
techniques to improve the effective resolution of the target area using a sequence of low-resolution images.
However, when there is significant cloud cover, several problems can arise: clouds can obscure a target (partially or
fully), they can affect the accuracy of image registration algorithms, and they can reduce the contrast of the object
against the background. To reconstruct an image in the presence of cloud cover, image correlation metrics from optical
flow and a robust super-resolution algorithm have been used to compile a 'best' frame.
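As a stand-in for the frame-compilation idea (not the robust super-resolution algorithm used in the paper), the snippet below scores each low-resolution frame by its correlation with the temporal median of the sequence and discards heavily cloud-corrupted frames before fusing the rest. The keep fraction and the median reference are illustrative choices.

```python
# Stand-in for selecting a 'best' frame under cloud cover: frames that correlate
# poorly with the temporal median (e.g., because clouds obscure the target) are
# excluded before fusion. Illustrative only.
import numpy as np

def correlation(a, b):
    a = a.ravel() - a.mean()
    b = b.ravel() - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def compile_best_frame(frames, keep_fraction=0.5):
    reference = np.median(frames, axis=0)                      # robust to outlier frames
    scores = np.array([correlation(f, reference) for f in frames])
    keep = np.argsort(scores)[::-1][:max(1, int(keep_fraction * len(frames)))]
    return frames[keep].mean(axis=0), keep                     # fused 'best' frame, kept indices

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    clean = rng.random((32, 32))
    frames = np.stack([clean + 0.05 * rng.standard_normal((32, 32)) for _ in range(8)])
    frames[2] *= 0.2                                           # simulate a cloud-obscured frame
    fused, kept = compile_best_frame(frames)
    print(kept)
```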
Roads are a necessary condition for the social and economic development of regions. We present a methodology for rural road extraction from SPOT images. Our approach is centered on a fusion algorithm based on the Hermite transform that increases the spatial resolution to 2.5 m. The Hermite transform is an image representation model that mimics some of the most important properties of human vision, such as multiresolution and the Gaussian derivative model of early vision. Analyzing the directional energy of the expansion coefficients allows the image to be classified according to the local pattern dimensionality; roads are associated with 1D patterns.
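The snippet below is a simplified surrogate for that directional-energy test, not the Hermite-transform fusion itself: Gaussian-derivative responses feed a local structure tensor whose coherence is high for 1D (road-like) patterns and low for flat or isotropic regions. The scales and the SciPy filtering are assumptions for illustration.

```python
# Simplified surrogate for the directional-energy analysis: structure-tensor
# coherence from Gaussian-derivative responses flags 1D (road-like) patterns.
import numpy as np
from scipy.ndimage import gaussian_filter

def road_coherence(img, sigma=2.0, window=5.0, eps=1e-9):
    ix = gaussian_filter(img, sigma, order=(0, 1))   # d/dx (along columns)
    iy = gaussian_filter(img, sigma, order=(1, 0))   # d/dy (along rows)
    # local averaging of structure-tensor components
    jxx = gaussian_filter(ix * ix, window)
    jyy = gaussian_filter(iy * iy, window)
    jxy = gaussian_filter(ix * iy, window)
    lam_diff = np.sqrt((jxx - jyy) ** 2 + 4 * jxy ** 2)
    return lam_diff / (jxx + jyy + eps)              # ~1 along 1D structures, ~0 elsewhere

if __name__ == "__main__":
    img = np.zeros((64, 64))
    img[:, 30:33] = 1.0                              # synthetic vertical "road"
    coh = road_coherence(img)
    print(float(coh[32, 31]), float(coh[32, 5]))     # high near the road, near zero in flat area
```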
Spectral sensors are commonly used to measure the intensity of optical radiation and to provide spectral information
about the distribution of material components in a given scene, over a limited number of wave bands. By exploiting the
polarization of light to measure information about the vector nature of the optical field across a scene, collected
polarimetric images have the potential to provide additional information about the shape, shading, roughness, and
surface features of targets of interest. The overall performance of target detection algorithms could thus be increased by
exploiting these polarimetric signatures to discriminate man-made objects against different natural backgrounds. This is
achieved through the use of performance metrics, derived from the computed Stokes parameters, defining the degree of
polarization of man-made objects. This paper describes performance metrics that have been developed to optimize the
image acquisition of selected polarization angle and degree of linear polarization, by using the Poincare sphere and
Stokes vectors from previously acquired images, and then by extracting some specific features from the polarimetric
images. Polarimetric signatures of man-made objects have been acquired using a passive polarimetric imaging sensor
developed at DRDC Valcartier. The sensor operates concomitantly (bore-sighted images, aligned polarizations) in the
visible, shortwave infrared, midwave infrared, and the long-wave infrared bands. Results demonstrate the improvement
of using these performance metrics to characterize the degree of polarization of man-made objects using passive
polarimetric images.
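The quantities underlying such metrics are the standard Stokes-vector descriptors. The snippet below computes the degree and angle of linear polarization from intensities measured behind linear analyzers at 0, 45, 90, and 135 degrees; this four-analyzer arrangement is a common textbook scheme and is not necessarily the exact configuration of the DRDC Valcartier sensor.

```python
# Standard Stokes-vector quantities behind degree-of-polarization metrics.
import numpy as np

def stokes_from_analyzers(i0, i45, i90, i135):
    s0 = i0 + i90          # total intensity
    s1 = i0 - i90          # horizontal vs. vertical preference
    s2 = i45 - i135        # +45 vs. -45 degree preference
    return s0, s1, s2

def degree_of_linear_polarization(s0, s1, s2, eps=1e-12):
    return np.sqrt(s1**2 + s2**2) / (s0 + eps)

def angle_of_polarization(s1, s2):
    return 0.5 * np.arctan2(s2, s1)           # radians

if __name__ == "__main__":
    # e.g. a strongly polarized man-made surface pixel
    s0, s1, s2 = stokes_from_analyzers(0.9, 0.5, 0.1, 0.5)
    print(degree_of_linear_polarization(s0, s1, s2),
          np.degrees(angle_of_polarization(s1, s2)))
```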
We present a multi-scale framework for man-made structure cuing in satellite image regions. The approach is based on
a hierarchical image segmentation followed by structural analysis. A hierarchical segmentation produces an image
pyramid that contains a stack of irregular image partitions, represented as polygonized pixel patches, of successively
reduced levels of detail (LODs). We start from the over-segmented image, represented by polygons attributed
with spectral and texture information. The image is represented as a proximity graph with vertices corresponding to the
polygons and edges reflecting polygon relations. This is followed by the iterative graph contraction based on Boruvka's
Minimum Spanning Tree (MST) construction algorithm. The graph contractions merge the patches based on their
pairwise spectral and texture differences. Concurrently with the construction of the irregular image pyramid, structural
analysis is done on the agglomerated patches. Man-made object cuing is based on the analysis of shape properties of the
constructed patches and their spatial relations. The presented framework can be used as a pre-scanning tool for wide-area monitoring to quickly guide further analysis to regions of interest.
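The following is a simplified sketch of one Boruvka contraction round on the polygon proximity graph: each component picks its cheapest outgoing edge (smallest spectral/texture difference) and the endpoints are merged. The edge weights below are hypothetical dissimilarities, and the union-find bookkeeping is a generic implementation, not the paper's.

```python
# One Boruvka-style contraction round on a patch proximity graph.
# edges: list of (u, v, dissimilarity); parent: union-find parent array.
def find(parent, x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]   # path compression
        x = parent[x]
    return x

def boruvka_round(edges, parent):
    cheapest = {}
    for u, v, w in edges:
        ru, rv = find(parent, u), find(parent, v)
        if ru == rv:
            continue
        for r in (ru, rv):
            if r not in cheapest or w < cheapest[r][2]:
                cheapest[r] = (ru, rv, w)
    merged = 0
    for ru, rv, _ in cheapest.values():
        ru, rv = find(parent, ru), find(parent, rv)
        if ru != rv:
            parent[rv] = ru
            merged += 1
    return merged

if __name__ == "__main__":
    # 4 patches; weights are hypothetical spectral/texture differences
    edges = [(0, 1, 0.1), (1, 2, 0.9), (2, 3, 0.2), (0, 3, 0.8)]
    parent = list(range(4))
    while boruvka_round(edges, parent):
        pass
    print([find(parent, i) for i in range(4)])   # all patches end in one component
```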
The Quad-Emissive Display (QED) is a device that is designed to provide suitable emissive energy in four spectral
bands to permit the simultaneous evaluation of sensors with different spectral sensitivities. A changeable target pattern,
such as a Landolt C, a tumbling "E", a triangle or a bar pattern, is fabricated as a stencil (cutout) that is viewed against a
second, black surface located several centimeters behind the stencil and thermally isolated from the stencil target. The
sensor spectral bands of interest are visible (0.4 to 0.7 microns), near infrared (0.7 to 1.0 microns), short wave infrared
(1.0 to 3.0 microns) and the long wave infrared (8.0 to 14.0 microns). This paper presents the details of the structure of
the QED and preliminary results on the types of sensor/display resolution measurements and psychophysical studies
that can be accomplished using the QED.
Higher-order spectral analysis is one approach to detecting deviations from normality in a received signal. In
particular the auto-bispectral density function or "bispectrum" has been used in a number of detection applications.
Both Type-I and Type-II errors associated with bispectral detection schemes are well understood if the
processing is performed on the received signal directly or if the signal is pre-processed by a linear, time invariant
filter. However, there does not currently exist an analytical expression for the bispectrum of a non-Gaussian
signal pre-processed by a nonlinear filter. In this work we derive such an expression and compare the performance
of bispectral-based detection schemes using both linear and nonlinear receivers. Comparisons are presented in
terms of both Type-I and Type-II detection errors using Receiver Operating Characteristic curves. It is shown
that using a nonlinear receiver can offer some advantages over a linear receiver. Additionally, the nonlinear
receiver is optimized using genetic programming (differential evolution) to choose the filter coefficients.
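For reference, the auto-bispectrum in question can be estimated with the standard direct (FFT-based) method, B(f1, f2) = E[X(f1) X(f2) X*(f1 + f2)], averaged over data segments. The segmenting and windowing choices in the sketch below are generic defaults, not values from the paper.

```python
# Direct (FFT-based) bispectrum estimate, averaged over non-overlapping segments.
import numpy as np

def bispectrum(x, nfft=64):
    segs = len(x) // nfft
    B = np.zeros((nfft, nfft), dtype=complex)
    f1, f2 = np.meshgrid(np.arange(nfft), np.arange(nfft), indexing="ij")
    for k in range(segs):
        X = np.fft.fft(x[k * nfft:(k + 1) * nfft] * np.hanning(nfft))
        B += X[f1] * X[f2] * np.conj(X[(f1 + f2) % nfft])
    return B / max(segs, 1)

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    gauss = rng.standard_normal(4096)
    skewed = gauss ** 2 - 1.0                    # non-Gaussian (skewed) test signal
    print(np.abs(bispectrum(gauss)).mean(), np.abs(bispectrum(skewed)).mean())
    # the skewed signal typically shows a noticeably larger mean bispectral magnitude
```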
Targeting people or objects with passive acoustic sensors is of considerable interest in several military and civil applications, spanning from surveillance and patrolling systems to teleconferencing and human-robot interaction. To date, methods and patents have focused solely on the use of beamforming algorithms to compute the time of arrival of sounds detected by sparsely deployed omnidirectional microphones (OMs). This paper describes the preliminary results of a novel approach devoted to the localization of ground-borne acoustic sources. It is demonstrated that an array made of at least three unidirectional microphones can be exploited to detect the position of the source. Pulse features extracted either in the time domain or in the frequency domain are used to identify the direction of the incoming sound. This information is then fed into a semi-analytical algorithm devoted to the identification of the source location. The novelty of the method presented here consists in the use of unidirectional microphones rather than omnidirectional microphones and in the ability to extract the sound direction from features such as the pulse amplitude rather than the pulse arrival time. It is believed that this method may pave the way toward a new generation of reduced-size sound detectors and localizers.
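The basic principle can be illustrated as follows, though this is not the paper's semi-analytical algorithm: directional microphones with known directivity patterns scale the same pulse differently, so the observed amplitude ratios encode the bearing. The sketch assumes cardioid patterns aimed 120 degrees apart and recovers the bearing by a grid search over candidate angles; all of these choices are hypothetical.

```python
# Illustration of amplitude-ratio bearing estimation with three assumed cardioid
# microphones; not the semi-analytical method of the paper.
import numpy as np

MIC_AIM = np.radians([0.0, 120.0, 240.0])        # assumed microphone boresights

def cardioid_gain(bearing, aim):
    return 0.5 * (1.0 + np.cos(bearing - aim))

def amplitudes(bearing, source_level=1.0):
    return source_level * cardioid_gain(bearing, MIC_AIM)

def estimate_bearing(measured, n_grid=3600):
    """Grid search over bearings; amplitude ratios remove the unknown source level."""
    grid = np.linspace(0, 2 * np.pi, n_grid, endpoint=False)
    meas_unit = measured / np.linalg.norm(measured)
    cost = [np.linalg.norm(meas_unit - amplitudes(b) / np.linalg.norm(amplitudes(b)))
            for b in grid]
    return grid[int(np.argmin(cost))]

if __name__ == "__main__":
    true_bearing = np.radians(75.0)
    obs = amplitudes(true_bearing, source_level=3.2)   # unknown level, known ratios
    print(np.degrees(estimate_bearing(obs)))           # ~75 degrees
```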
Current sonar and radar applications use interferometry to estimate the arrival angles of backscattered signals at the time-sampling rate. This direction-finding method is based on a phase-difference measurement between two close
receivers. To quantify the associated bathymetric measurement quality, it is necessary to model the statistical
properties of the interferometric-phase estimator. Thus, this paper investigates the received signal structure,
decomposing it into three different terms: a part correlated on the two receivers, an uncorrelated part and an
ambient noise term.
This paper shows that the uncorrelated part and the noise term can be merged into a unique, random term
damaging the measurement performance. Concerning the correlated part, its modulus can be modeled either as
a random or a constant variable according to the type of underwater acoustic application. The existence of these
two statistical behaviors is verified on real data collected from different underwater scenarios such as a horizontal
emitter-receiver communication and a bathymetric seafloor survey. The physical understanding of the resulting
phase distributions makes it possible to model and simulate the interferometric-signal variance (associated with
the measurement accuracy) according to the underwater applications through simple hypotheses.
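A small simulation of the three-term signal model helps fix ideas: each receiver sees a common (correlated) component carrying the interferometric phase, plus an uncorrelated component and ambient noise, and the phase is estimated as the argument of the sample cross-correlation between receivers. The power levels below are arbitrary illustrative values.

```python
# Toy simulation of the correlated / uncorrelated / noise decomposition and the
# standard interferometric-phase estimator (argument of the cross-correlation).
import numpy as np

def simulate_receivers(n, phase, corr_power=1.0, uncorr_power=0.2, noise_power=0.05, seed=0):
    rng = np.random.default_rng(seed)
    def cgauss(p):
        return np.sqrt(p / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    common = cgauss(corr_power)
    r1 = common + cgauss(uncorr_power) + cgauss(noise_power)
    r2 = common * np.exp(1j * phase) + cgauss(uncorr_power) + cgauss(noise_power)
    return r1, r2

def interferometric_phase(r1, r2):
    return np.angle(np.vdot(r1, r2))          # arg of sum(conj(r1) * r2)

if __name__ == "__main__":
    r1, r2 = simulate_receivers(20000, phase=np.radians(30.0))
    print(np.degrees(interferometric_phase(r1, r2)))   # close to 30 degrees
```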
Complex (dusty) plasmas - consisting of micron-sized grains within an ion-electron plasma - are a unique vehicle for
studying the kinematics of microscopic systems. Although they are mesoscopic, they embody many of the major
structural properties of conventional condensed matter systems (fluid-like and crystal-like states) and they can be used
to probe the structural dynamics of such complex systems. Modern state estimation and tracking techniques allow
complex systems to be monitored automatically and provide mechanisms for deriving mathematical models for the
underlying dynamics - identifying where known models are deficient and suggesting new dynamical models that better
match the experimental data. This paper discusses how modern tracking and state estimation techniques can be used to
explore and control important physical processes in complex plasmas: such as phase transitions, wave propagation and
viscous flow.
A fuzzy logic perimeter intrusion classification algorithm (FLPICA) has been developed to determine when intruders
have crossed a perimeter and to classify the type of intrusion. The FLPICA works in real-time and makes its decisions
based on data from single or multiple geophones. Extensive discussions of some of the fuzzy decision trees and fuzzy
membership functions making up the algorithm are provided. The geophones can have one to three axes, i.e., they can provide information in one dimension or in three. The FLPICA uses various signal processing algorithms to
extract the quantities that facilitate decisions. The parameters that are extracted in real time from the data are the
cadence of walkers, runners, jumpers, etc.; the bearing of intruders; power measures for the signal; and the signal's
kurtosis. The FLPICA is applicable to many different environments and can be retrained as needed. It is based on rules
born of human expertise. The FLPICA is applicable to many different scenarios, e.g., classifying intruders as walkers,
runners, creepers, orbiters, jumpers, vehicles, animals, etc. It can also make a declaration as to the threat status of the
intruder. The FLPICA can separate the signal of a human intruder on foot from those of vehicles and other noise
sources. Examples where the intruder exhibits the behaviors of walkers, runners, creepers and orbiters are provided.
Theoretical and simulation results are discussed.
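As a toy illustration of the ingredients named above (not the FLPICA itself), the sketch below computes an excess-kurtosis feature from geophone samples and evaluates trapezoidal fuzzy membership functions over cadence, combining them with a simple min/max rule. All breakpoints, classes, and rules are invented for illustration.

```python
# Toy fuzzy-classification sketch: kurtosis feature + trapezoidal memberships
# over cadence, combined with a Mamdani-style min/max rule. Illustrative only.
import numpy as np

def excess_kurtosis(x):
    x = x - np.mean(x)
    return np.mean(x**4) / (np.mean(x**2) ** 2 + 1e-12) - 3.0

def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: 0 below a, ramps to 1 on [b, c], 0 above d."""
    return np.clip(np.minimum((x - a) / (b - a + 1e-12), (d - x) / (d - c + 1e-12)), 0.0, 1.0)

def classify(cadence_hz, kurt):
    impulsive = trapezoid(kurt, 0.5, 2.0, 50.0, 60.0)      # footfall-like signals
    smooth    = trapezoid(kurt, -3.0, -3.0, 0.5, 1.5)      # vehicle-like signals
    scores = {
        "walker":  min(trapezoid(cadence_hz, 1.2, 1.6, 2.2, 2.6), impulsive),
        "runner":  min(trapezoid(cadence_hz, 2.4, 2.8, 3.6, 4.0), impulsive),
        "vehicle": min(trapezoid(cadence_hz, 0.0, 0.0, 0.8, 1.2), smooth),
    }
    return max(scores, key=scores.get), scores

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    signal = rng.standard_normal(2000)
    signal[::1000] += 8.0                      # impulsive footfalls -> elevated kurtosis
    print(classify(cadence_hz=1.9, kurt=excess_kurtosis(signal)))
```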
Many different lasers are deployed in the battlefield for range finding, target designation, communications, dazzle,
location of targets, munitions guidance, and destruction. Laser warning devices on military systems detect and
identify lasers striking them in order to assess their threat level and plan avoidance or retaliation. Types of
lasers and their characteristics are discussed: power, frequency, coherence, bandwidth, direction, pulse length
and modulation. We describe three approaches for laser warning devices from which specific cases may be tailored: simultaneous estimation of direction and wavelength with a grating, direction-only estimation of the wavefront at low light levels with lenses, and absolute wavelength-only estimation with a Fizeau interferometer. We
investigate the feasibility and compare the suitability of these approaches for different applications.
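The grating approach rests on the grating equation, d (sin(theta_m) - sin(theta_i)) = m * lambda: observing two diffraction orders lets both the incidence direction and the wavelength be solved simultaneously. The sketch below inverts the equation for the +1 and -1 orders; the groove spacing and angles are illustrative values, not parameters of any fielded device.

```python
# Solving incidence direction and wavelength from the +1 and -1 diffraction
# orders of a grating, d*(sin(theta_m) - sin(theta_i)) = m*lambda. Illustrative values.
import numpy as np

def solve_direction_and_wavelength(theta_plus1, theta_minus1, groove_spacing_m):
    """Invert the grating equation given the measured +1 and -1 order angles (radians)."""
    sin_incidence = 0.5 * (np.sin(theta_plus1) + np.sin(theta_minus1))
    wavelength = 0.5 * groove_spacing_m * (np.sin(theta_plus1) - np.sin(theta_minus1))
    return np.arcsin(sin_incidence), wavelength

if __name__ == "__main__":
    d = 1.0e-6                                  # 1 micron groove spacing (illustrative)
    lam = 532e-9                                # a green laser line
    theta_i = np.radians(5.0)
    # forward model: angles of the +1 and -1 orders for this incidence and wavelength
    tp = np.arcsin(np.sin(theta_i) + lam / d)
    tm = np.arcsin(np.sin(theta_i) - lam / d)
    est_dir, est_lam = solve_direction_and_wavelength(tp, tm, d)
    print(np.degrees(est_dir), est_lam * 1e9)   # ~5 degrees, ~532 nm
```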
Radars are used for various purposes, and we need flexible methods to explain radar response phenomena. In general,
modeling radar response and backscatterers can help in data analysis by providing possible explanations for
measured echoes. However, extracting exact physical parameters of a real world scene from radar measurements
is an ill-posed problem.
Our study aims to enhance radar signal interpretation and further to develop data classification methods. In
this paper, we introduce an approach for finding physically sensible explanations for response phenomena during
a long illumination. The proposed procedure uses our comprehensive response model to decompose measured
radar echoes. The model incorporates both a radar model and a backscatterer model. The procedure adapts
the backscatterer model parameters to catch and reproduce a measured Doppler spectrum and its dynamics at a
particular range and angle. A filter bank and a set of features are used to characterize these response properties.
The procedure defines a number of point-scatterers for each frequency band of the measured Doppler spectrum.
Using the same features calculated from the simulated response, it then matches the parameters (the number of individual backscatterers, their radar cross sections, and their velocities) to the joint Doppler and amplitude behavior of the
measurement. Hence we decompose the response toward its origin. The procedure is scalable and can be applied
to adapt the model to various other features as well, even those of more complex backscatterers. Performance
of the procedure is demonstrated with radar measurements on a controlled arrangement of backscatterers with a
variety of motion states.
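A minimal forward model clarifies what such matching compares against: a handful of point scatterers, each with an amplitude and a radial velocity, produce a slow-time signal whose FFT is the simulated Doppler spectrum. The radar parameters and scatterer values below are invented, and the fitting step itself is not shown.

```python
# Forward model: Doppler spectrum of a few point scatterers over one coherent
# processing interval. Radar parameters are illustrative.
import numpy as np

def doppler_spectrum(amplitudes, velocities_mps, prf_hz=1000.0, n_pulses=256,
                     wavelength_m=0.03):
    """Slow-time signal and its Doppler spectrum for a set of point scatterers."""
    t = np.arange(n_pulses) / prf_hz
    doppler_hz = 2.0 * np.asarray(velocities_mps) / wavelength_m
    signal = sum(a * np.exp(2j * np.pi * fd * t)
                 for a, fd in zip(amplitudes, doppler_hz))
    spectrum = np.abs(np.fft.fftshift(np.fft.fft(signal * np.hanning(n_pulses))))
    freqs = np.fft.fftshift(np.fft.fftfreq(n_pulses, d=1.0 / prf_hz))
    return freqs, spectrum

if __name__ == "__main__":
    freqs, spec = doppler_spectrum(amplitudes=[1.0, 0.3], velocities_mps=[2.0, -4.5])
    print(freqs[int(np.argmax(spec))])   # Doppler peak of the strongest scatterer (~133 Hz)
```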
The atmosphere surrounding the Earth's surface plays an important role in the daily life of human beings. Total suspended
particulates (TSP) found in the atmosphere are made up of many compounds including soil, nitrate, sulphate, soot and
organic carbon. In this study, an optical sensor has been designed and developed for measuring TSP concentrations in
air. Our aim is to measure TSP concentrations in polluted air by using a newly developed optical sensor. The developed
spectral sensor was based on the measured radiation transmitted through the air samples. Three pairs of visible LEDs and photodiodes are used as emitters and detectors, respectively. The transmitted radiation was measured in terms of
the output voltage of the photodetector of the sensor system. A new multispectral algorithm has been developed to
correlate TSP concentrations and the transmitted radiation. The newly developed system produced a high degree of
accuracy, with a correlation coefficient of 0.999 and a root-mean-square error of 5.535 mg/l. The TSP concentration
can be measured and monitored accurately using this sensor.
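A generic illustration of the kind of multispectral calibration described above (not the authors' algorithm) is a least-squares fit of concentration against the photodetector voltages at the three LED wavelengths; the synthetic attenuation model and coefficients below are invented for the example.

```python
# Generic multispectral calibration sketch: regress TSP concentration on three
# photodetector voltages with ordinary least squares. Synthetic data only.
import numpy as np

def fit_calibration(voltages, concentrations):
    """voltages: (n_samples, 3) sensor outputs; returns least-squares coefficients."""
    X = np.column_stack([voltages, np.ones(len(voltages))])   # add intercept term
    coef, *_ = np.linalg.lstsq(X, concentrations, rcond=None)
    return coef

def predict(coef, voltages):
    X = np.column_stack([voltages, np.ones(len(voltages))])
    return X @ coef

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    true_conc = rng.uniform(0, 100, size=50)                  # synthetic TSP levels
    # synthetic voltages: each channel attenuates with concentration, plus noise
    volts = np.column_stack([np.exp(-true_conc * k) for k in (0.010, 0.015, 0.020)])
    volts += 0.01 * rng.standard_normal(volts.shape)
    coef = fit_calibration(volts, true_conc)
    rmse = np.sqrt(np.mean((predict(coef, volts) - true_conc) ** 2))
    print(rmse)                                               # calibration fit error on this synthetic set
```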
Atmospheric fine particles with diameters less than 10 microns (PM10) have become a major public health concern worldwide. These particles can be breathed deeply into the lungs and cause adverse health effects. The need to prevent long exposure to these harmful atmospheric particles motivates a growing interest in developing efficient methods for fine-particle monitoring. In this study, we propose that an internet protocol camera and LANDSAT5 satellite images be used to monitor temporal and spatial air quality. We developed an algorithm to convert multispectral image pixel values acquired from digital images into quantitative values of PM10 concentration. This algorithm was developed from a regression analysis of the relationship between the measured reflectance and the components reflected from the surface material and the ambient air. The computed PM10 values were compared with standard values measured by a DustTrak meter. The correlation results showed that the newly developed algorithm produced a high degree of accuracy, as indicated by a high correlation coefficient (R2) and a low root-mean-square error (RMSE). This study indicates that the temporal and spatial development of air quality can be monitored using internet protocol cameras and LANDSAT5 images.
The online version of the following paper was replaced on 28 May 2019 with a revision provided by the author:
Ronald Mahler "The multisensor PHD filter: II. Erroneous solution via Poisson magic," Proc. SPIE 7336, Signal Processing, Sensor Fusion, and Target Recognition XVIII, 73360D (11 May 2009); doi: 10.1117/12.818025
The online version of the following paper was replaced on 28 May 2019 with a revision provided by the author:
Ronald Mahler "The multisensor PHD filter: I. General solution via multitarget calculus," Proc. SPIE 7336, Signal Processing, Sensor Fusion, and Target Recognition XVIII, 73360E (11 May 2009); doi: 10.1117/12.818024