This PDF file contains the front matter associated with SPIE
Proceedings Volume 8392, including the Title Page, Copyright
information, Table of Contents, and the Conference Committee listing.
Multisensor Fusion, Multitarget Tracking, and Resource Management I
Fusion of passive electronic support measures (ESM) with active radar data enables tracking and identification of
platforms in air, ground, and maritime domains. An effective multi-sensor fusion architecture adopts hierarchical
real-time multi-stage processing. This paper focuses on the recursive filtering challenges. The first challenge is to
achieve effective platform identification based on noisy emitter type measurements; we show that while optimal
processing is computationally infeasible, a good suboptimal solution is available via a sequential measurement
processing approach. The second challenge is to process waveform feature measurements that enable
disambiguation in multi-target scenarios where targets may be using the same emitters. We show that an approach
that explicitly considers the Markov jump process outperforms the traditional Kalman filtering solution.
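As a hedged illustration of the sequential measurement-processing idea, the toy sketch below (our own numbers and likelihood table, not the paper's) performs one recursive Bayes update per noisy emitter-type measurement:

```python
def update_platform_id(prior, emitter_lik, z):
    """One step of recursive platform identification from a noisy
    emitter-type measurement z.  emitter_lik[p][z] is the (assumed)
    probability of measuring emitter type z given platform type p.
    Processing measurements sequentially keeps each step cheap."""
    post = [prior[p] * emitter_lik[p][z] for p in range(len(prior))]
    s = sum(post)
    return [x / s for x in post]

# Two hypothetical platform types and three emitter types.
L = [[0.7, 0.2, 0.1],   # platform 0 mostly carries emitter type 0
     [0.1, 0.3, 0.6]]   # platform 1 mostly carries emitter type 2
p = [0.5, 0.5]
for z in [0, 0, 1]:      # a sequence of noisy emitter-type measurements
    p = update_platform_id(p, L, z)
print(p)  # probability mass concentrates on platform 0
```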
This work derives the Cramer-Rao lower bound (CRLB) for an acoustic target and sensor localization system
in which the noise characteristics depend on the location of the source. The system itself has been previously
examined, but without deriving the CRLB and showing the statistical efficiency of the estimator used. Two
different versions of the CRLB are derived, one in which direction of arrival (DOA) and range measurements
are available ("full-position CRLB"), and one in which only DOA measurements are available ("bearing-only
CRLB"). In both cases, the estimator is found to be statistically efficient; but, depending on the sensor-target
geometry, the range measurements may or may not significantly contribute to the accuracy of target localization.
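A generic sketch of the bearing-only CRLB computation (the sensor geometry and noise level below are invented for illustration; this is the standard DOA Fisher-information construction, not necessarily the paper's exact derivation):

```python
def bearing_only_crlb(target, sensors, sigma):
    """CRLB for a 2-D target position from DOA-only measurements.

    Each sensor i measures theta_i = atan2(y - y_i, x - x_i) + noise,
    with noise ~ N(0, sigma^2).  The Fisher information is the sum of
    (1/sigma^2) * g_i g_i^T, where g_i is the gradient of the bearing
    with respect to the target position (x, y)."""
    x, y = target
    J = [[0.0, 0.0], [0.0, 0.0]]          # 2x2 Fisher information matrix
    for (sx, sy) in sensors:
        dx, dy = x - sx, y - sy
        r2 = dx * dx + dy * dy
        g = (-dy / r2, dx / r2)           # gradient of the bearing
        for a in range(2):
            for b in range(2):
                J[a][b] += g[a] * g[b] / sigma**2
    # invert the 2x2 FIM to get the CRLB on position
    det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
    return [[ J[1][1] / det, -J[0][1] / det],
            [-J[1][0] / det,  J[0][0] / det]]

# three sensors with good angular diversity around the target
crlb = bearing_only_crlb((0.0, 0.0),
                         [(100.0, 0.0), (0.0, 100.0), (-100.0, -100.0)],
                         sigma=0.01)
print(crlb[0][0], crlb[1][1])  # variance lower bounds on x and y
```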
We consider the problem of estimating the performance of a system that tracks moving objects on the ground using
airborne sensors. Expected Track Life (ETL) is a measure of performance that indicates the ability of a tracker to
maintain track for extended periods of time. The most desirable method for computing ETL would involve the use of
large sets of real data with accompanying truth. This accurately accounts for sensor artifacts and data characteristics,
which are difficult to simulate. However, datasets with these characteristics are difficult to collect because the coverage
area of the sensors is limited, the collection time is limited, and the number of objects that can realistically be truthed is
also limited. Thus when using real datasets, many tracks are terminated because the objects leave the field of view or the
end of the dataset is reached. This induces a bias in the estimation when the ETL is computed directly from the tracks.
An alternative to direct ETL computation is the use of Markov-Chain models that use track break statistics to estimate
ETL. This method provides unbiased ETL estimates from datasets much shorter than what would be required for direct
computation. In this paper we extend previous work in this area and derive an explicit expression of the ETL as a
function of track break statistics. An example illustrates the properties and advantages of the method.
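The flavor of the Markov-chain idea can be sketched with the standard censored-exponential estimator (an assumption on our part; the paper's explicit expression may differ): censored track endings contribute observation time but no break event, which removes the truncation bias of the direct average.

```python
def expected_track_life(track_durations, censored_flags):
    """Unbiased ETL estimate from censored track data, assuming track
    breaks are Poisson events with rate lam.  Tracks that ended because
    the object left the field of view or the dataset ended
    (censored_flags[i] == True) contribute observation time but no break
    event.  The MLE is lam = (# breaks) / (total time), so ETL = 1/lam."""
    total_time = sum(track_durations)
    breaks = sum(1 for c in censored_flags if not c)
    return total_time / breaks

# four tracks; the last two were cut off by the end of the dataset
etl = expected_track_life([30.0, 50.0, 20.0, 40.0],
                          [False, False, True, True])
print(etl)  # (30+50+20+40) time units over 2 true breaks
```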
This paper will introduce a new Multitarget Multi-Bernoulli (MeMBer) recursion for tracking targets traveling
under multiple motion models. The proposed interacting multiple model MeMBer (IMM-MeMBer) filter uses
Jump Markov Models (JMM) to extend the basic MeMBer recursion to allow for multiple motion models. This
extension is implemented using both the SMC and GM based MeMBer approximations. The recursive prediction
and update equations are presented for both implementations. Each multiple model implementation is validated
against its respective standard MeMBer implementation as well as against each other. This validation is done
using a simulated scenario containing multiple maneuvering targets. A variety of metrics are observed including
target detection capability, estimate accuracy and model likelihood determination.
Target tracking with ambiguous Doppler measurements as well as position measurements is investigated. This paper
presents a method using Gaussian Mixture representation of the Doppler measurement uncertainty. The conditional
probability of target Doppler given an ambiguous Doppler measurement is approximated by a Gaussian sum over the several possible unambiguous Doppler values. Then a Gaussian Mixture filter based on the unscented Kalman filter (UKF) is
presented to solve the problem of state estimation from measurements with Doppler ambiguity. Simulation results
demonstrate the effectiveness of this approach.
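A minimal sketch of the Gaussian-mixture construction, assuming equal prior weights over the ambiguity hypotheses (the numbers are invented):

```python
def doppler_mixture(d_meas, v_amb, sigma, n_folds=2, v_max=None):
    """Gaussian-mixture representation of an ambiguous Doppler measurement.

    The true Doppler could be d_meas + k * v_amb for any integer k; each
    hypothesis becomes one mixture component N(d_meas + k*v_amb, sigma^2),
    here with equal prior weight.  Hypotheses outside the physically
    plausible range |v| <= v_max are pruned."""
    means = [d_meas + k * v_amb for k in range(-n_folds, n_folds + 1)]
    if v_max is not None:
        means = [m for m in means if abs(m) <= v_max]
    w = 1.0 / len(means)
    return [(w, m, sigma) for m in means]

components = doppler_mixture(d_meas=12.0, v_amb=50.0, sigma=1.0,
                             n_folds=2, v_max=80.0)
print([m for (_, m, _) in components])  # candidate unambiguous Dopplers
```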
Multisensor Fusion, Multitarget Tracking, and Resource Management II
There is a host of target tracking algorithm approaches, each suited to particular scenario operating conditions (e.g.,
sensors, targets, and environments). Due to the application complexity, no algorithm is general enough to be widely
applicable, nor is a tailored algorithm able to meet variations in specific scenarios. Thus, to meet real world goals,
multitarget tracking (MTT) algorithms need to undergo performance assessment for (a) bounding performance over
various operating conditions, (b) managing expectations and applicability for user acceptance, and (c) understanding the
constraints and supporting information for reliable and robust performance. To meet these challenges, performance
assessment should strive for three goals: (1) challenge problem scenarios with a rich variety of operating conditions, (2) a
standard, but robust, set of metrics for evaluation, and (3) design of experiments for sensitivity analysis over parameter
variation of models, uncertainties, and measurements.
In this article, we present an evaluation of several multi-target tracking methods based on simulated scenarios in the
maritime domain. In particular, we consider variations of the Joint Integrated Probabilistic Data Association (JIPDA)
algorithm, namely the Linear Multi-Target IPDA (LMIPDA), Linear Joint IPDA (LJIPDA), and Markov Chain Monte
Carlo Data Association (MCMCDA). The algorithms are compared with respect to an extension of the Optimal
Subpattern Assignment (OSPA) metric, the Hellinger distance, and further performance measures. As no single algorithm
is equally well suited to all tested scenarios, our results show which algorithm fits best for specific scenarios.
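For reference, the base (unextended) OSPA metric can be computed by brute force for small sets; the cutoff c and order p below are illustrative:

```python
import itertools
import math

def ospa(X, Y, c=10.0, p=2):
    """Base OSPA distance between two finite point sets (1-D for brevity).
    c is the cutoff that penalises cardinality errors; p is the order.
    Brute-force over assignments, so only suitable for small sets."""
    if len(X) > len(Y):
        X, Y = Y, X                      # ensure |X| <= |Y|
    m, n = len(X), len(Y)
    if n == 0:
        return 0.0
    best = min(
        sum(min(abs(x - y), c) ** p for x, y in zip(X, perm))
        for perm in itertools.permutations(Y, m)
    )
    return ((best + c ** p * (n - m)) / n) ** (1.0 / p)

# two well-localised targets plus one missed target in the estimate
print(ospa([1.0, 5.0], [1.2, 5.1, 40.0]))
```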
The increased availability of Graphical Processing Units (GPUs) in personal computers has made parallel programming worthwhile, but not necessarily easier. This paper will take advantage of the power of a GPU, in
conjunction with the Central Processing Unit (CPU), in order to simulate target trajectories for large-scale
scenarios, such as wide-area maritime or ground surveillance. The idea is to simulate the motion of tens of
thousands of targets using a GPU by formulating an optimization problem that maximizes the throughput. To
do this, the proposed algorithm is provided with input data that describes how the targets are expected to
behave, path information (e.g., roadmaps, shipping lanes), and available computational resources. Then, it is
possible to break down the algorithm into parts that are done in the CPU versus those sent to the GPU. The
ultimate goal is to compare processing times of the algorithm with a GPU in conjunction with a CPU to those
of the standard algorithms running on the CPU alone. In this paper, the optimization formulation for utilizing
the GPU, simulation results on scenarios with a large number of targets and conclusions are provided.
Information based sensor management principles have been applied to an ad hoc network of commercial off the shelf
(COTS) unattended ground sensors (iScout®) utilized for container security in order to extend the sensor platform's
operational life by minimizing power consumption. The methodology and results of a simple field demonstration will be
presented. Each unattended ground sensor contains a multiplicity of heterogeneous sensors and is treated as a sensor
platform. While the use of an unmodified COTS sensor platform precluded implementation of a complete information
based sensor management (IBSM) system, advantage was taken of the redundant coverage of clustered sensor platforms
and individual sensor platform control to extend their operational life without loss of situation awareness.
As the number and diversity of sensing assets available for intelligence, surveillance and reconnaissance (ISR)
operations continues to expand, the limited ability of human operators to effectively manage, control and exploit the ISR
ensemble is exceeded, leading to reduced operational effectiveness. Automated support, both in the processing of
voluminous sensor data and in sensor asset control, can relieve the burden on human operators and support operation of larger
ISR ensembles. In dynamic environments it is essential to react quickly to current information to avoid stale, sub-optimal
plans. Our approach is to apply the principles of feedback control to ISR operations, "closing the loop" from the sensor
collections through automated processing to ISR asset control.
Previous work by the authors demonstrated non-myopic multiple platform trajectory control using a receding horizon
controller in a closed feedback loop with a multiple hypothesis tracker applied to multi-target search and track
simulation scenarios in the ground and space domains. This paper presents extensions in both size and scope of the
previous work, demonstrating closed-loop control, involving both platform routing and sensor pointing, of a multisensor,
multi-platform ISR ensemble tasked with providing situational awareness and performing search, track and
classification of multiple moving ground targets in irregular warfare scenarios. The closed-loop ISR system is fully realized
using distributed, asynchronous components that communicate over a network. The closed-loop ISR system has
been exercised via a networked simulation test bed against a scenario in the Afghanistan theater implemented using
high-fidelity terrain and imagery data. In addition, the system has been applied to space surveillance scenarios requiring
tracking of space objects where current deliberative, manually intensive processes for managing sensor assets are
insufficiently responsive. Simulation experiment results are presented.
The algorithm to jointly optimize sensor schedules against search, track, and classify is based on recent work by
Papageorgiou and Raykin on risk-based sensor management. It uses a risk-based objective function and attempts to
minimize and balance the risks of misclassifying and losing track on an object. It supports the requirement to generate
tasking for metric and feature data concurrently and synergistically, and account for both tracking accuracy and object
characterization, jointly, in computing reward and cost for optimizing tasking decisions.
Multisensor Fusion Methodologies and Applications I
In several previous publications the first author has proposed a "generalized likelihood function" (GLF) approach
for processing nontraditional measurements such as attributes, features, natural-language statements, and inference
rules. The GLF approach is based on random set "generalized measurement models" for nontraditional
measurements. GLFs are not conventional likelihood functions, since they are not density functions and their
integrals are usually infinite, rather than equal to 1. For this reason, it has been unclear whether or not the
GLF approach is fully rigorous from a strict Bayesian point of view. In a recent paper, the first author demonstrated
that the GLF of a specific type of nontraditional measurement, namely quantized measurements, is rigorously
Bayesian. In this paper we show that this result can be generalized to arbitrary nontraditional measurements,
thus removing any doubt that the GLF approach is rigorously Bayesian.
The Probability Hypothesis Density (PHD) filter is a multitarget tracker for recursively estimating the number
of targets and their state vectors from a set of observations. The PHD filter is capable of working well in
scenarios with false alarms and missed detections. Two distinct PHD filter implementations are available in the
literature: the Sequential Monte Carlo Probability Hypothesis Density (SMC-PHD) and the Gaussian Mixture
Probability Hypothesis Density (GM-PHD) filters. The SMC-PHD filter uses particles to provide target state
estimates, which can lead to a high computational load, whereas the GM-PHD filter does not use particles, but
is restricted to linear Gaussian mixture models. The SMC-PHD filter technique provides only weighted samples
at discrete points in the state space instead of a continuous estimate of the probability density function of the
system state and thus suffers from the well-known degeneracy problem. This paper proposes a B-Spline based
Probability Hypothesis Density (S-PHD) filter, which has the capability to model any arbitrary probability
density function. The resulting algorithm can handle linear, non-linear, Gaussian, and non-Gaussian models and
the S-PHD filter can also provide continuous estimates of the probability density function of the system state. In
addition, by moving the knots dynamically, the S-PHD filter ensures that the splines cover only the region where
the probability of the system state is significant, hence the high efficiency of the S-PHD filter is maintained at
all times. Also, unlike the SMC-PHD filter, the S-PHD filter is immune to the degeneracy problem due to its
continuous nature. The S-PHD filter derivations and simulations are provided in this paper.
Target class measurements, if available from automatic target recognition systems, can be incorporated into
multiple target tracking algorithms to improve measurement-to-track association accuracy. In this work, the
performance of the classifier is modeled as a confusion matrix, whose entries are target class likelihood functions
that are used to modify the update equations of the recently derived multiple models CPHD (MMCPHD)
filter. The result is the new classification aided CPHD (CACPHD) filter. Simulations on multistatic sonar datasets
with and without target class measurements show the advantage of including available target class information
into the data association step of the CPHD filter.
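The core idea, weighting an association likelihood by the relevant confusion-matrix entry, can be sketched as follows (the matrix and numbers are invented, and this is a simplification of the CPHD update):

```python
def class_aided_likelihood(kinematic_lik, confusion, target_class, measured_class):
    """Combine a kinematic association likelihood with a target-class
    measurement via the classifier confusion matrix.
    confusion[i][j] = P(classifier outputs class j | true class is i)."""
    return kinematic_lik * confusion[target_class][measured_class]

# two classes; a hypothetical classifier that is 90% accurate
C = [[0.9, 0.1],
     [0.1, 0.9]]
# same kinematic likelihood, but the class measurement says "class 0":
l0 = class_aided_likelihood(0.5, C, target_class=0, measured_class=0)
l1 = class_aided_likelihood(0.5, C, target_class=1, measured_class=0)
print(l0, l1)  # the association now strongly favours the class-0 track
```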
In this paper, we introduce a decentralized fusion and tracking approach based on distributed multi-source multitarget
filtering and robust communication with the following features: (i) data reduction; (ii) a disruption-tolerant dissemination
procedure that takes advantage of storage and mobility; and (iii) efficient data set reconciliation algorithms.
We developed and implemented a complex high-fidelity marine application demonstration of this approach that encompasses
all relevant environmental parameters. In the simulated example, multi-source information is fused by
exploiting sensors from disparate Unmanned Underwater Vehicles (UUV) and Unmanned Surface Vehicle (USV)
multi-sensor platforms. Communication links among the platforms are continuously established and broken depending
on the time-changing geometry. We compare and evaluate the developed algorithms by assessing their performance
against different scenarios.
Recent work has shown that the random finite set (RFS) approach to multi-target tracking is a computationally
viable alternative to the traditional data association based approaches. An assumption in these approaches is
that the targets move independently of each other. In this paper, we introduce the concept of a random finite
graph and study its application to the tracking of interacting vehicles in traffic. A random finite graph is a
random variable taking values in the set of finite directed graphs. The graph describes the influence of vehicles
on the motion of other vehicles. The connected components of the graph define groups of vehicles that move
independently of other groups. We treat the connected components as the independent entities upon which to
perform RFS-based tracking. The approach is illustrated with an arterial traffic simulation in which vehicles
interact among themselves through car-following and with traffic control devices at intersections.
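A minimal sketch of how connected components of the influence graph yield independently trackable groups (union-find over the undirected version of the graph; the example edges are invented):

```python
def independent_groups(n, edges):
    """Connected components of the (undirected view of the) influence
    graph: vehicles in different components move independently, so each
    component can be treated as its own entity for RFS-based tracking.
    edges: directed (i -> j) influence pairs among vehicles 0..n-1."""
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a
    for i, j in edges:
        parent[find(i)] = find(j)          # union the two components
    groups = {}
    for v in range(n):
        groups.setdefault(find(v), []).append(v)
    return sorted(groups.values())

# vehicle 1 follows vehicle 0, vehicle 3 follows vehicle 2; vehicle 4 is alone
print(independent_groups(5, [(1, 0), (3, 2)]))
```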
Most multitarget tracking algorithms, such as JPDA, MHT, and the PHD and CPHD filters, presume the following
measurement model: (a) targets are point targets, (b) every target generates at most a single measurement,
and (c) any measurement is generated by at most a single target. However, the most familiar sensors, such as
surveillance and imaging radars, violate assumption (c). This is because they are actually superpositional; that
is, any measurement is a sum of signals generated by all of the targets in the scene. At this conference in 2009, the
first author derived exact formulas for PHD and CPHD filters that presume general superpositional measurement
models. Unfortunately, these formulas are computationally intractable. In this paper, we modify and generalize
a Gaussian approximation technique due to Thouin, Nannuru, and Coates to derive a computationally tractable
superpositional-CPHD filter. Implementation requires sequential Monte Carlo (particle filter) techniques.
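A toy illustration of a superpositional measurement model (the 1-D array and signal profile below are invented, not the sensor model of the paper):

```python
def superpositional_measurement(target_states, noise):
    """Superpositional sensor model: each array cell measures the SUM of
    the signal contributions of all targets plus noise, so assumption (c)
    (each measurement generated by at most one target) is violated.
    Hypothetical signal model: power falls off as 1/(1 + d^2) with the
    distance d between the target position and the cell index."""
    cells = range(5)                       # a toy 1-D sensor array
    z = []
    for c in cells:
        s = sum(1.0 / (1.0 + (c - x) ** 2) for x in target_states)
        z.append(s + noise[c])
    return z

# two targets at positions 1.0 and 3.0; noise-free for clarity
z = superpositional_measurement([1.0, 3.0], noise=[0.0] * 5)
print(z)  # every cell mixes contributions from both targets
```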
This paper describes a general approach for deriving PHD/CPHD filters that must estimate the background
clutter process, rather than being provided with it a priori. I first derive general time- and measurement-update
equations for clutter-agnostic PHD filters. I then consider two different Markov motion models. For
the Uncoupled Motion (UM) model, targets can transition only to targets, and clutter generators can transition
only to clutter generators. For the Coupled Motion (CM) model, targets can transition to clutter generators and
vice-versa. I demonstrate that R. Streit's "multitarget intensity filter" (MIF) is actually a PHD filter with a CM
model. Streit has made the following claims for the MIF: it subsumes the conventional PHD filter as a special
case, and can estimate both the clutter rate λ_{k+1} and the target-birth rate B_{k+1|k}. I exhibit counterexamples to
these claims. Because of the CM model, the MIF (1) does not subsume the conventional PHD filter as a special
case; (2) cannot estimate B_{k+1|k} when there are no clutter generators; and (3) cannot estimate λ_{k+1} when
the target birth-rate and target death-rate are "conjugate." By way of contrast, PHD filters with UM models
do include the PHD filter as a special case, and can estimate the clutter intensity function κ_{k+1}(z). I also show
that the MIF is essentially identical to the UM-model PHD filter when the target birth-rate and death-rate are
both small.
Multisensor Fusion Methodologies and Applications II
We introduce a digital microarray analogy for accomplishing multi-intelligence source (multiINT) fusion. Spatial Voting
is used to combine disparate information sources (numerical, text, imagery, binary, and temporal). We discuss how in
previous work we have overcome several of the key challenges for multi-source intelligence data fusion. We then show
how text information can be combined with numerical information and imagery to provide a fractal fingerprint for
behavior recognition and anomaly detection, resulting in a direct analogy to microarray analysis. We show that the
information extracted from our fusion methods is in a form suitable for prediction. We demonstrate how this is
accomplished, providing examples and details.
We present an approach for discriminating among different classes of imagery in a scene. Our intended application
is the detection of small watercraft in a littoral environment where both targets and land- and sea-based clutter
are present. The approach works by training different overcomplete dictionaries to model the different image
classes. The likelihood ratio obtained by applying each model to the unknown image is then used as the
discriminating test statistic. We first demonstrate the approach on an illustrative test problem and then apply
the algorithm to short-wave infrared imagery with known targets.
In maritime operational scenarios, such as smuggling, piracy, or terrorist threats, it is relevant not only who or what an
observed object is, but also where it is now and where it has been in the past, in relation to other (geographical) objects. In situation and
impact assessment, this information is used to determine whether an object is a threat. Single-platform (ship, harbor) or
single sensor information will not provide all this information. The work presented in this paper focuses on the sensor
and object levels that provide a description of currently observed objects to situation assessment. To use information
about objects at higher information levels, it is necessary to have a good description of observed objects not only at this
moment, but also of their past. Therefore, currently observed objects have to be linked to previous occurrences.
Kinematic features, as used in tracking, are of limited use, as uncertainties over longer time intervals are so large that no
unique associations can be made. Features extracted from different sensors (e.g., ESM, EO/IR) can be used for both
association and classification. Features and classifications are used to associate current objects to previous object
descriptions, allowing objects to be described better, and provide position history.
This paper describes a high-level architecture in which such multi-sensor association is used.
Results of an assessment of the usability of several features from ESM (from spectrum), EO and IR (shape, contour,
keypoints) data for association and classification are shown.
Visible and infrared video cameras are the most common imaging sensors used for video surveillance systems. Fusing
concurrent visible and infrared imageries may further improve the overall detection and tracking performance of a video
surveillance system. We performed image fusion using 13 pixel-based image fusion algorithms and examined their
effects on the detection and tracking performance of a given target tracker. We identified five pyramid-based methods
that produced significantly better performance, three of which also managed to achieve that with a relatively high
efficiency.
The probability of detection is a key performance metric that assesses a receiver's ability to detect the presence of a
signal. Receiver performance is evaluated by comparing empirical measurements against an exact or a bounded
theoretical limit. If the detection statistic is based on multiple, independent measurements, it is relatively straight forward
to formulate the joint probability density function (PDF) as a multi-variate Gaussian distribution (MVG). In this work,
we consider the detection statistic that arises when combining correlated measurements from a two-dimensional array of
sensors. The joint PDF does not readily fit into a multi-variate Gaussian model. We illustrate a method by which we can
construct a block-diagonal covariance matrix that can be used to cast the joint PDF into the standard MVG form. This
expression can then be evaluated numerically to compute a theoretical probability of detection. We validate the joint PDF using Monte Carlo simulations and quantify the impact of correlated measurements on the
probability of detection.
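The effect of correlation on the detection probability can be illustrated with a small Monte Carlo sketch. This is not the paper's implementation: the sum statistic, the 2x2 correlated blocks, and all numerical parameters below are assumptions chosen for illustration.

```python
import numpy as np

def detection_probability_mc(mean, cov, threshold, n_trials=100_000, rng=None):
    """Monte Carlo estimate of P(detection statistic > threshold) when the
    array measurements are jointly Gaussian with the given mean and covariance.
    The detection statistic here is simply the sum of the measurements."""
    rng = np.random.default_rng(rng)
    samples = rng.multivariate_normal(mean, cov, size=n_trials)
    stat = samples.sum(axis=1)
    return (stat > threshold).mean()

# Block-diagonal covariance: two sensor pairs, correlation rho within each pair.
rho = 0.5
block = np.array([[1.0, rho], [rho, 1.0]])
cov = np.block([[block, np.zeros((2, 2))],
                [np.zeros((2, 2)), block]])
pd = detection_probability_mc(mean=np.ones(4), cov=cov, threshold=2.0, rng=0)
```

With these values the sum statistic is Gaussian with mean 4 and variance 6 (the sum of all covariance entries), so the estimate should land near 0.79; ignoring the off-diagonal correlation would give a different, incorrect figure.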
Conventional trackers provide the human operator with estimated target tracks. It is desirable to make higher
level inference of the target behaviour/intent (e.g., trajectory inference) in an automated manner. One such
approach is to use stochastic context-free grammars (SCFGs) and the Earley-Stolcke parsing algorithm. The problem of
inference is reformulated as one of parsing. In this paper, the consistency of stochastic context-free grammars
is reviewed. Some examples illustrating the constraints on SCFGs due to consistency are presented, including a
toy SCFG that has been used to successfully parse real GMTI radar data.
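The consistency constraint reviewed above can be checked numerically. A standard sufficient condition is that the spectral radius of the nonterminal expectation matrix be below one; the toy grammar below (S -> a S S with probability p, else S -> a) is our own illustration, not necessarily the paper's example.

```python
import numpy as np

def is_consistent(expectation_matrix):
    """Check SCFG consistency via the spectral radius of the expectation
    matrix M, where M[i, j] is the expected number of occurrences of
    nonterminal j produced by one expansion of nonterminal i."""
    eigenvalues = np.linalg.eigvals(np.asarray(expectation_matrix, dtype=float))
    return bool(np.max(np.abs(eigenvalues)) < 1.0)

# For S -> a S S (prob p) | a (prob 1-p), one expansion of S yields
# 2p nonterminals in expectation, so M = [[2p]].
consistent_low = is_consistent([[2 * 0.4]])   # p = 0.4
consistent_high = is_consistent([[2 * 0.6]])  # p = 0.6
```

For this grammar the criterion reduces to p < 0.5: above that, the expected derivation length diverges and the rule probabilities leak mass to infinite trees.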
Multisensor Fusion Methodologies and Applications III
This survey paper reviews tools and concepts of visual analytics and the challenges faced by researchers developing applications for knowledge discovery. A comparison is made based on analytic features, the ability to categorize data, modeling procedures, visual representation, interoperability, reliability, and portability. Issues related to heterogeneous data, scalability, and multi-dimensionality are also explored. An efficient, intelligent, interactive, and robust visual analytics system allows the discovery of information hidden in massive and dynamic volumes of data, especially in a surveillance system, thus creating effective situation awareness of the environment. While visual analytics is hugely important in knowledge discovery, developers must avoid information overload due to inappropriate, irrelevant, and uncertain data arising from random or fuzzy sensor inputs, also known as noise. The discovered knowledge is the basis for adaptive situation awareness, as it often provides information beyond the perception of the human cognitive mind. The tools and concepts researched for this article include the human-computer interaction aspect of intelligent, adaptive decision making from multiple information resources. This paper attempts to combine the strengths of smart search and data analysis with the visual perception and interactive analysis capability of the user.
Application of acoustic sensors in Persistent Surveillance Systems (PSS) has received considerable attention over the last two decades because such sensors can be rapidly deployed and have low cost. Conventional uses of acoustic sensors in PSS span a wide range of applications including vehicle classification, target tracking, activity understanding, speech recognition, and shooter detection. This paper presents a current survey of physics-based acoustic signature classification techniques for outdoor sound recognition and understanding. In particular, it focuses on the taxonomy and ontology of acoustic signatures resulting from group activities. The taxonomy and supporting ontology considered include human-vehicle, human-object, and human-human interactions. The paper examines the applicability of several spectral analysis techniques as a means to maximize the likelihood of correct acoustic source detection, recognition, and discrimination. Spectral analysis techniques based on the Fast Fourier Transform, Discrete Wavelet Transform, and Short-Time Fourier Transform are considered for extraction of features from acoustic sources. In addition, comprehensive overviews of current research activities related to the scope of this work are presented along with their applications. Finally, potential future directions of research for improving acoustic signature recognition and classification technology suitable for PSS applications are discussed.
Human Activity Discovery and Recognition (HADR) is a complex, diverse, and challenging task, yet an active area of ongoing research in the Department of Defense. By detecting, tracking, and characterizing cohesive human interactional activity patterns, potential threats can be identified, which can significantly improve situation awareness, particularly in Persistent Surveillance Systems (PSS). Understanding the nature of such dynamic activities inevitably involves interpreting a collection of spatiotemporally correlated activities with respect to a known context. In this paper, we present a state transition model for recognizing the characteristics of human activities with a link to a prior context-based ontology. Modeling the state transitions between successive evidential events determines the activities' temperament. The proposed model comprises six categories of state transitions: object handling, visibility, entity-entity relations, human postures, human kinematics, and distance to target. The model generates semantic annotations describing human interactional activities via a technique called Causal Event State Inference (CESI). The proposed approach uses a low-cost Kinect depth camera for indoor monitoring and a conventional optical camera for outdoor monitoring. Experimental results are presented to demonstrate the effectiveness and efficiency of the proposed technique.
Kinect cameras produce low-cost depth-map video streams applicable to conventional surveillance systems. However, commonly applied image processing techniques are not directly applicable to depth-map video. Kinect depth-map images contain range measurements of objects at the expense of suppressing the objects' spatial features. For example, typical object attributes such as texture, color tone, intensity, and other characteristic attributes cannot be fully realized by processing depth-map imagery. In this paper, we demonstrate the application of Kinect depth-map and optical imagery to the characterization of indoor and outdoor group activities. A Causal Event State Inference (CESI) technique is proposed for spatiotemporal recognition of and reasoning about group activities. CESI uses an ontological scheme to represent the causal distinctiveness of a priori known group activities. By tracking and serializing distinctive atomic group activities, CESI allows the discovery of more complex group activities. A Modified Sequential Hidden Markov Model (MS-HMM) is implemented for trail analysis of atomic events representing correlated group activities. CESI reasons about five levels of group activity: merging, planning, cooperation, coordination, and dispersion. We present results demonstrating the capability of the CESI approach to characterize group activities taking place both indoors and outdoors. Based on spatiotemporal pattern matching of atomic activities representing a known group activity, CESI is able to discriminate suspicious group activities from normal ones. The paper also presents technical details of the imagery techniques implemented for detection, tracking, and characterization of atomic events based on Kinect depth-map and optical imagery data sets. Various experimental scenarios, indoors and outdoors (e.g., loading and unloading of objects, human-vehicle interactions), are carried out to demonstrate the effectiveness and efficiency of the proposed model for characterizing distinctive group activities.
In this paper, we propose a unified cooperative control architecture (UCCA) that supports effective cooperation of Unmanned Aerial Vehicles (UAVs) and learning capabilities for UAV missions. The main features of the proposed UCCA are: i) a modular structure in which each function module focuses on a particular type of task and provides services to other function modules through well-defined interfaces; ii) efficient sharing of UAV control and onboard resources among the function modules, with the ability to handle multiple simultaneous objectives in UAV operation; iii) facilitation of cooperation among different function modules; iv) support for effective cooperation among multiple UAVs on a mission's tasks; and v) an objective-driven learning approach that allows UAVs to systematically explore uncertain mission environments to increase situation awareness in pursuit of their mission/task objectives.
Multisensor Fusion Methodologies and Applications IV
Distributed processing of multiple sensor data has advantages over centralized processing because of lower bandwidth
for communicating data and lower processing load at each site. However, distributed fusion has to address dependence
issues not present in centralized fusion. Bayesian distributed fusion combines local probabilities or estimates to generate
the results of centralized fusion by identifying and removing redundant common information. Approximation of
Bayesian distributed fusion provides practical algorithms when it is difficult to identify the common information.
Another distributed fusion approach combines estimates with known means and cross covariances according to some
optimality criteria. Distributed object tracking involves both track-to-track association and track state estimate fusion
given an association. Track state estimate fusion equations can be obtained from distributed estimation equations by
treating the state as a random process with measurements that are accumulated over time. For objects with deterministic
dynamics, the same fusion equations for static states can be used. When the object state has non-deterministic dynamics,
reconstructing the centralized estimate from the local estimates is usually not possible, but fusion equations based on
means and cross covariances are still optimal with respect to their criteria. It is possible to fuse local estimates to
duplicate the results of centralized tracking but the local estimates are not locally optimal and the weighting matrices
depend on covariance matrices from other sensors.
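In the special case of uncorrelated local estimation errors, the mean-and-covariance fusion described above reduces to the familiar information-form combination. The sketch below uses hypothetical one-dimensional values; with cross-covariances present, the weights would change as the text notes.

```python
import numpy as np

def fuse_estimates(x1, P1, x2, P2):
    """Fuse two local state estimates with known covariances, assuming the
    estimation errors are uncorrelated: weight each estimate by its inverse
    covariance (information matrix) and renormalize."""
    P1, P2 = np.asarray(P1, float), np.asarray(P2, float)
    information = np.linalg.inv(P1) + np.linalg.inv(P2)
    P_fused = np.linalg.inv(information)
    x_fused = P_fused @ (np.linalg.inv(P1) @ x1 + np.linalg.inv(P2) @ x2)
    return x_fused, P_fused

# Equal covariances: the fused estimate is the midpoint, variance is halved.
x_fused, P_fused = fuse_estimates(np.array([1.0]), np.array([[2.0]]),
                                  np.array([3.0]), np.array([[2.0]]))
```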
We respond to the very thorough analysis of
singularities in the incompressible flow for one-dimensional
nonlinear filters by Chen and Mehra. We emphasize that
the singularities occur at a few points in d-dimensional state
space, and thus the chance of hitting a singularity is very
small for dimensions higher than one. Furthermore, in the
unlikely event of hitting a singularity, there is ample room to
flow around it in spaces of dimension higher than one.
Moreover, the deep mathematical theory of incompressible
particle flow that was developed recently by Shnirelman can
be used to provide insight into why our particle flow
algorithms work so well.
Determining a decision from data is an important DoD research area with far-reaching applications. In particular,
the long-elusive goal of autonomous machines discovering the relations between entities within a situation has
proved to be extremely difficult. Many current sensing systems are devoted to fusing information from a variety
of heterogeneous sensors in order to characterize the entities and relationships in the data. This leads to the
need for representations of relationships and situations which can model the uncertainty that is present in any
system. We develop mathematics for representing a situation where the relations are uncertain and use the work
of Meng to show how to compare probabilistic relations and situations.
A detection system outputs two distinct labels. Thus, there are two errors it can make. The Receiver Operating Characteristic
(ROC) function quantifies both of these errors as parameters vary within the system. Combining two detection systems
typically yields better performance when a combination rule is chosen appropriately. When multiple detection systems
are combined, the assumption of independence is usually made in order to mathematically combine the individual ROC
functions for each system into one ROC function. This paper investigates feature fusion of multiple detection systems.
Given that one knows the ROC function for each individual detection system, we seek a formula with the resultant ROC
function of the fused detection systems as a function (specifically, a transformation) of the respective ROC functions. In
this paper we derive this transformation for a certain class of feature rules. An example will be given that demonstrates
this transformation.
Multisensor Fusion Methodologies and Applications V
A classification system with N possible output labels (or decisions) will have N(N-1) possible errors. The Receiver Operating Characteristic (ROC) manifold was created to quantify all of these errors. When multiple classification systems are fused, the assumption of independence is usually made in order to mathematically combine the individual ROC manifolds for each system into one ROC manifold. This paper investigates the label fusion (also called decision fusion) of multiple classification systems that have the same number of output labels. Boolean rules do not exist for multiple symbols; thus, we derive possible Boolean-like rules, as well as other rules, that yield label fusion rules. The formula for the resultant ROC manifold of the fused classification systems, which incorporates the individual classification systems, is derived. Specifically, given a label rule and two classification systems, the ROC manifold for the fused system is produced. We generate formulas for other non-Boolean-like OR and non-Boolean-like AND rules and give the resultant ROC manifold for the fused system. Examples are given that demonstrate how each formula is used.
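For the binary (two-label) special case, the Boolean OR and AND label-fusion rules have closed forms under the independence assumption. The sketch below shows that special case only; the paper's contribution concerns the N-label generalization, where these simple formulas do not suffice.

```python
def fuse_roc_or(pd1, pf1, pd2, pf2):
    """OR-rule fusion of two independent binary detectors: declare a
    detection if either system does. Both the detection probability and the
    false-alarm probability increase."""
    return 1 - (1 - pd1) * (1 - pd2), 1 - (1 - pf1) * (1 - pf2)

def fuse_roc_and(pd1, pf1, pd2, pf2):
    """AND-rule fusion: declare a detection only if both systems do. Both
    probabilities decrease."""
    return pd1 * pd2, pf1 * pf2

pd_or, pf_or = fuse_roc_or(0.9, 0.1, 0.8, 0.05)
pd_and, pf_and = fuse_roc_and(0.9, 0.1, 0.8, 0.05)
```

Sweeping each system's threshold traces out the fused ROC curve from the individual ROC curves, which is the binary analogue of the ROC-manifold transformation derived in the paper.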
Bayesian network (BN) structure learning is an NP-hard problem. In this paper, we present an improved approach to enhance the efficiency of BN structure learning. To avoid the premature convergence of a traditional single-population genetic algorithm (GA), we propose an immune allied genetic algorithm (IAGA) in which multiple populations and an allied strategy are introduced. Moreover, the algorithm applies prior knowledge by injecting an immune operator into individuals, which effectively prevents degeneration. Experimental results are presented to illustrate the effectiveness of the proposed technique.
A new Bayesian network (BN) learning method using a hybrid algorithm and chaos theory is proposed. The principles of mutation and crossover from genetic algorithms and a cloud-based adaptive inertia weight are incorporated into the proposed simple particle swarm optimization (sPSO) algorithm to achieve better diversity and improve convergence speed. By means of the ergodicity and randomicity of a chaos algorithm, the initial population of network structures is generated using a chaotic mapping with uniform search under structure constraints. When the algorithm converges to a local minimum, a chaotic search is started to escape the local minimum and identify a potentially better network structure. Experimental results show that this algorithm can be effectively used for BN structure learning.
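A common device behind chaotic initialization and chaotic search is the logistic map, which is ergodic on (0, 1) at full chaos and therefore covers the search space without the clumping of pseudo-random draws. The paper's exact chaotic mapping is not specified here; this sketch assumes the logistic map as a representative choice.

```python
def logistic_map_sequence(x0, n, r=4.0):
    """Generate n values of the logistic map x <- r * x * (1 - x).
    With r = 4 the map is chaotic and ergodic on (0, 1), so the sequence
    can seed a diverse initial population for structure search."""
    values = []
    x = x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        values.append(x)
    return values

seq = logistic_map_sequence(0.3, 5)
```

Each value can be mapped onto a candidate structure (e.g., thresholded into edge indicators under the structure constraints), and restarting the map from the current best solution gives the local-minimum escape step.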
In this paper, I discuss the basic notions of the Dempster-Shafer theory. Using a simple engineering example, I highlight sources of confusion in the Dempster-Shafer literature and some questions that arise in the course of applying the Dempster-Shafer algorithm. Finally, I discuss the measure-theoretic foundation that reveals the intimate connections between Dempster-Shafer theory and probability theory.
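The combination step usually meant by "the Dempster-Shafer algorithm" is Dempster's rule, sketched below over frozenset focal elements. The two-element frame and the mass assignments are hypothetical values for illustration, not the paper's engineering example.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination: multiply masses over all pairs of
    focal elements, accumulate products on the intersections, discard the
    conflicting (empty-intersection) mass, and renormalize."""
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        intersection = a & b
        if intersection:
            combined[intersection] = combined.get(intersection, 0.0) + wa * wb
        else:
            conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("total conflict: combination is undefined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

A, B, AB = frozenset({"a"}), frozenset({"b"}), frozenset({"a", "b"})
m = dempster_combine({A: 0.6, AB: 0.4}, {B: 0.5, AB: 0.5})
```

The renormalization by 1 - conflict is precisely the step that generates much of the confusion the paper discusses, since it silently redistributes the conflicting evidence.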
Signal and Image Processing, and Information Fusion Applications I
A class of adaptive data compression routines based on a data-dependent transformation is presented. The proposed methods identify improved information association by prioritizing eigenvectors rather than eigenvalues. The Karhunen-Loeve (KL) transform limits the importance of the data according to the eigenvalues of the data covariance matrix, which leads to truncation of the eigenvectors by their energy levels. The data retained by the KL transform method therefore represent only those contents carrying the majority of the signal energy, as identified by the prioritized eigenvalues of the data covariance matrix. The method presented in this work retains the desired data structure and enables a more exact representation of the information in the data, preserving data that can contain significantly relevant information even though its energy content may be relatively low compared to other bases. This work presents a description of this idea along with an error analysis relative to the original data. The simulation work applies the technique to image data by association of neighboring data samples. A discussion of the simulation results through relevant metrics completes the analysis.
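The baseline that the text contrasts its method against, energy-ranked KL truncation, can be sketched as follows. This is the classical eigenvalue-prioritized scheme, not the paper's eigenvector-based alternative; the data here are synthetic, and with k equal to the full dimension the reconstruction is exact (compression uses k smaller than the dimension).

```python
import numpy as np

def kl_compress(data, k):
    """Classical KL (PCA) truncation: project zero-meaned data onto the k
    eigenvectors of its covariance with the largest eigenvalues (i.e., the
    highest-energy directions), then reconstruct."""
    mean = data.mean(axis=0)
    centered = data - mean
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)          # ascending eigenvalues
    top_k = np.argsort(eigvals)[::-1][:k]           # energy-ranked selection
    basis = eigvecs[:, top_k]
    return (centered @ basis) @ basis.T + mean

rng = np.random.default_rng(0)
data = rng.normal(size=(100, 4))
recon = kl_compress(data, k=4)  # full basis: reconstruction is exact
```

The paper's point is that the `top_k` selection above discards any direction whose eigenvalue (energy) is small, even when that direction carries structurally important information.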
Current aviation security relies heavily on personnel screening using X-ray backscatter systems or other advanced imaging technologies. Passenger privacy concerns and screening times can be reduced through the use of low-dose two-sided X-ray backscatter (Bx) systems, which also have the ability to collect transmission (Tx) X-ray data. Bx images reveal objects placed on the body, such as contraband and security threats, as well as anatomical features at or close to the surface, such as lung cavities and bones. While the quality of the transmission images is lower than medical imagery due to the low X-ray dose, Tx images can be of significant value in interpreting features in the Bx images, such as lung cavities, which can cause false alarms in automated threat detection (ATD) algorithms. Here we demonstrate an ATD processing chain fusing both Tx and Bx images. The approach employs automatically extracted fiducial points on the body and localized active contour methods to segment the lungs in acquired Tx and Bx images. Additionally, we derive metrics from the Tx image that can be related to the probability of observing internal body structure in the Bx image. The combined use of Tx and Bx data can enable improved overall system performance.
Resolution is often provided as one of the key parameters addressing the quality capability of a sensor. One traditional
approach to determining the resolution of a sensor/display system is to use a resolution target pattern to find the smallest
target size for which the critical target element can be "resolved" using the sensor/display system, which usually requires
a human in the loop to make the assessment. In previous SPIE papers we reported on a synthetic observer approach to
determining the point at which a Landolt C resolution target was resolved; a technique with marginal success when
compared to human observers. This paper compares the results of the previously developed synthetic observer approach
using a Landolt C with a new synthetic observer approach based on Triangle Orientation Detection (TOD). A large
collection of multi-spectral (visible, near-infrared, and thermal) sensor images of triangle and Landolt C resolution
targets were recorded at a wide range of distances. Each image contained both the triangle and the Landolt C resolution
targets as well as a person holding a weapon or other object. The images were analyzed using the two different synthetic
observer approaches, one for triangles and one for Landolt Cs, and the results compared with each other for the three
different sensors. This paper describes the results and planned future effort to compare the results with human visual
performance for both the resolution targets and for the hand-held objects.
Surveillance with automated positioning and tracking of subjects and vehicles in 3D is desired in many defence
and security applications. Camera systems with stereo or multiple cameras are often used for 3D positioning.
In such systems, accurate camera calibration is needed to obtain a reliable 3D position estimate. There is also a
need for automated camera calibration to facilitate fast deployment of semi-mobile multi-camera 3D positioning
systems. In this paper we investigate a method for automatic calibration of the extrinsic camera parameters
(relative camera pose and orientation) of a multi-camera positioning system. It is based on estimation of the
essential matrix between each camera pair using the 5-point method for intrinsically calibrated cameras. The
method is compared to a manual calibration method using real HD video data from a field trial with a multi-camera positioning system. The method is also evaluated on simulated data from a stereo camera model. The
results show that the reprojection error of the automated camera calibration method is close to or smaller than
the error for the manual calibration method and that the automated calibration method can replace the manual
calibration.
Fast moving cameras often generate distorted and blurred images characterized by reduced sharpness (due to motion
blur) and insufficient dynamic range. Reducing the sensor integration time to minimize blur is a common remedy, but the light intensity and image Signal-to-Noise Ratio (SNR) are reduced as well. We propose a Motion Adaptive Signal
Integration (MASI) algorithm that operates the sensor at a high frame rate, with real time alignment of individual image
frames to form an enhanced quality video output. This technique enables signal integration in the digital domain,
allowing both high SNR performance and low motion blur induced by the camera motion. We also show, in an
Extended MASI (EMASI) algorithm, that high dynamic range can be achieved by combining high frame rate images of
varying exposures. EMASI broadens the dynamic range of the sensor and extends the sensitivity to work in low light
and noisy conditions. In a moving platform, it also reduces static noise in the sensor. This technology can be used in
aerial surveillance, satellite imaging, border securities, wearable sensing, video conferencing and camera phone
imaging applications.
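The core of the integration step can be sketched as align-then-average: shift each high-rate frame back by its estimated camera motion, then sum in the digital domain. This is an illustration under strong simplifications, not the MASI implementation; real systems estimate sub-pixel shifts, whereas the integer `np.roll` shifts below keep the sketch exact.

```python
import numpy as np

def integrate_frames(frames, shifts):
    """Align each frame by undoing its (integer) camera-motion shift, then
    average: signal integration in the digital domain, so the effective
    exposure grows without accumulating motion blur."""
    aligned = [np.roll(f, (-dy, -dx), axis=(0, 1))
               for f, (dy, dx) in zip(frames, shifts)]
    return np.mean(aligned, axis=0)

# Synthetic test: a single bright pixel drifting across three frames.
base = np.zeros((8, 8))
base[4, 4] = 1.0
motions = [(0, 0), (1, 0), (0, 1)]
frames = [np.roll(base, (dy, dx), axis=(0, 1)) for (dy, dx) in motions]
out = integrate_frames(frames, motions)
```

Averaging the aligned frames recovers the sharp point source; averaging the unaligned frames would smear it over three pixels, which is the motion blur the method avoids.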
Signal and Image Processing, and Information Fusion Applications II
Target recognition can be enhanced by reducing image degradation due to atmospheric turbulence. This is
accomplished by an adaptive optic system. We discuss the forms of degradation when a target is viewed through the atmosphere [1]: scintillation from ground targets on a hot day in visible or infrared light; beam spreading and wavering in time; and atmospheric turbulence caused by motion of the target or by weather. We can use a beacon laser that reflects back from the target into a wavefront detector to measure the effects of turbulence on propagation to and from the target before imaging [1]. A deformable mirror then corrects the wavefront shape of the transmitted, reflected, or scattered light for enhanced imaging.
of targets is enhanced by performing accurate distance measurements to localized parts of the target using lidar.
Distance is obtained by sending a short pulse to the target and measuring the time for the pulse to return. There
is inadequate time to scan the complete field of view so that the beam must be steered to regions of interest such
as extremities of the image during image recognition. Distance is particularly valuable to recognize fine features
in range along the target or when segmentation is required to separate a target from background or from other
targets. We discuss the issues involved.
A video data conditioner (VDC) for automated full-motion video (FMV) detection, classification, and tracking is described. VDC extends our multi-stage image data conditioner (IDC) to video. Key features include robust detection of compact objects in motion imagery, coarse classification of all detections, and tracking of fixed and moving objects. An implementation of the detection and tracking components of the VDC on an Apple iPhone is discussed. Preliminary tracking results of naval ships captured during the Phoenix Express 2009 Photo Exercise are presented.
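The abstract does not describe the VDC's internal algorithms; purely as a generic illustration of detecting compact moving objects in motion imagery, here is a minimal frame-differencing sketch (the function name and threshold are assumptions, not the VDC method):

```python
import numpy as np

def detect_moving_pixels(prev_frame, frame, thresh=25):
    """Flag pixels whose grayscale change between consecutive frames
    exceeds a fixed threshold."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > thresh

# Two tiny synthetic 8-bit frames: one bright "compact object" moves right.
f0 = np.zeros((5, 5), dtype=np.uint8); f0[2, 1] = 200
f1 = np.zeros((5, 5), dtype=np.uint8); f1[2, 2] = 200
mask = detect_moving_pixels(f0, f1)
print(mask.sum())  # both the old and new object positions register as change
```

Real FMV pipelines add background modeling, morphological clean-up, and blob grouping on top of a change mask like this before any classification or tracking.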
In this paper, we propose and illustrate a methodology for classifying the change-detection results generated from repeat-pass
polarimetric RADARSAT-2 images and segmenting only the changes of interest to a given user while suppressing
all other changes. The detected changes are first classified based on a supervised ground-cover classification of
the polarimetric SAR images between which the changes were detected. In the absence of the reliable ground truth needed for
generating supervised classification training sets, we rely on periodically acquired high-resolution, multispectral
optical imagery to classify the manually selected training sets before computing their classes' statistics
from the SAR images. The classified detected changes can then be segmented to isolate the changes of interest, as
specified by the user, and to suppress all other changes. The proposed polarimetric change detection, classification, and
segmentation method overcomes some of the challenges encountered when visualizing and interpreting typical raw
change results. Such unclassified change-detection results tend to be too crowded, since they show all the changes,
including those of interest to the user as well as other, non-relevant changes. Some of the changes are also difficult to
interpret, especially those attributed to a mixture of backscatters. We illustrate how to generate,
classify, and segment polarimetric change-detection results from two SAR images over a selected region of interest.
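The classify-then-segment step can be pictured as masking a change map by a ground-cover class map; a toy sketch (class labels, shapes, and names are illustrative only, not the paper's data):

```python
import numpy as np

def segment_changes(change_mask, class_map, classes_of_interest):
    """Keep only the detected changes whose ground-cover class is among
    those the user cares about; suppress all other changes."""
    keep = np.isin(class_map, list(classes_of_interest))
    return change_mask & keep

# Toy example: classes 0=water, 1=forest, 2=urban; user wants urban changes.
change_mask = np.array([[True, True], [False, True]])
class_map   = np.array([[2, 1], [2, 0]])
out = segment_changes(change_mask, class_map, {2})
print(out)  # only the change that falls on an urban pixel survives
```

Filtering this way is what de-clutters the raw change map: changes on non-relevant classes are zeroed before visualization.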
We present a portable wireless multi-camera network system that quickly recognizes the faces of human subjects.
The system uses low-power embedded cameras to acquire video frames of subjects in an uncontrolled environment
and opportunistically extracts frontal face images in real time. The extracted images may exhibit heavy motion blur,
low resolution, and large pose variability. A quality-based selection process is first employed to discard images
that are unsuitable for recognition. The face images are then geometrically normalized to a pool of four standard
resolutions using the coordinates of the detected eyes. The images are transmitted to a fusion center that holds a
multi-resolution gallery of templates. An optimized two-stage recognition algorithm based on Gabor filters and a
simplified Weber local descriptor is implemented to extract features from the normalized probe face images. At the
fusion center, the gallery images are compared against probe images acquired by a wireless network of seven
embedded cameras. A score-fusion strategy is adopted to produce a single matching score. The performance of the
proposed algorithm is compared to the commercial face recognition engine Faceit G8 by L1 and to other well-known
methods based on local descriptors. The experiments show that the overall system provides recognition performance
similar to or better than the commercial engine, with a shorter computational time, especially for low-resolution
face images. In conclusion, the designed system can detect and recognize individuals in near real time.
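The abstract does not give the exact score-fusion rule; one common strategy is to average the per-camera match scores after normalizing them to a common range, sketched below (the score range and the averaging rule are assumptions):

```python
def fuse_scores(scores, lo=0.0, hi=1.0):
    """Fuse per-camera match scores into one score: clip to [lo, hi],
    min-max normalize, then average (a simple sum-rule fusion)."""
    norm = [(min(max(s, lo), hi) - lo) / (hi - lo) for s in scores]
    return sum(norm) / len(norm)

# Similarity scores for one probe/gallery pair from seven cameras.
fused = fuse_scores([0.9, 0.8, 0.85, 0.7, 0.95, 0.6, 0.75])
print(round(fused, 3))
```

Averaging dampens the effect of a single blurred or badly posed view, which is the point of fusing evidence across the camera network.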
Ensuring security in high risk areas such as an airport is an important but complex problem. Effectively tracking
personnel, containers, and machines is a crucial task. Moreover, security and safety require understanding the interaction
of persons and objects. Computer vision (CV) has been a classic tool; however, variable lighting, imaging, and random
occlusions present difficulties for real-time surveillance, resulting in erroneous object detection and trajectories.
Determining object ID via CV at any instant of time in a crowded area is computationally prohibitive, yet the
trajectories of personnel and objects should be known in real time. Radio Frequency Identification (RFID) can be used to
reliably identify target objects and can even locate targets at coarse spatial resolution, while CV provides fuzzy features
for target ID at finer resolution. Our research demonstrates benefits obtained when most objects are "cooperative" by
being RFID tagged. Fusion provides a method to simplify the correspondence problem in 3D space. A surveillance
system can query for unique object ID as well as tag ID information, such as target height, texture, shape and color,
which can greatly enhance scene analysis. We extend geometry-based tracking so that intermittent information on ID
and location can be used in determining a set of trajectories of N targets over T time steps. We show that partial target
information obtained through RFID can reduce computation time (by 99.9% in some cases) and also increase the
likelihood of producing correct trajectories. We conclude that real-time decision-making should be possible if the
surveillance system can integrate information effectively between the sensor level and activity understanding level.
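The way RFID identity can prune the CV correspondence problem can be sketched as nearest-neighbour association restricted to tag-consistent pairs (the data layout and names are illustrative, not the paper's algorithm):

```python
def associate(tracks, detections):
    """Greedy nearest-neighbour track-to-detection association, but only
    among detections whose RFID tag matches the track's tag -- shrinking
    the correspondence search from N*M pairs to tag-consistent ones."""
    pairs, used = [], set()
    for tid, (tag, tx, ty) in tracks.items():
        best, best_d = None, float("inf")
        for j, (dtag, dx, dy) in enumerate(detections):
            if j in used or dtag != tag:
                continue  # RFID rules this pairing out immediately
            d = (tx - dx) ** 2 + (ty - dy) ** 2
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            pairs.append((tid, best))
            used.add(best)
    return pairs

tracks = {"T1": ("tagA", 0.0, 0.0), "T2": ("tagB", 5.0, 5.0)}
detections = [("tagB", 5.1, 5.2), ("tagA", 0.2, -0.1)]
print(associate(tracks, detections))
```

With cooperative (tagged) objects, most candidate pairings are eliminated before any geometric cost is even evaluated, which is where the large computation savings come from.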
Signal and Image Processing, and Information Fusion Applications III
This paper presents a LADAR recognition challenge problem. There are two unique components of this challenge
problem: the focus on confidence estimates and the explicit model of the recognition output as a function
of operating conditions (OCs). To promote the development of exploitation algorithms that map OCs onto
confidence estimates, a set of synthetic data has been generated that explicitly samples specific OC dimensions:
resolution, noise, aspect, obscuration, and library. Submitted algorithms will be evaluated based on their ability
to correctly estimate the confidence based on OC knowledge. Tools will be provided to generate performance
metrics on data sets. Participants will submit their algorithms for evaluation on a sequestered set of data. The
resulting performance metrics will be made available online so participants can evaluate their algorithms relative
to their peers.
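The challenge's actual scoring metrics ship with its tools; purely as an illustration of how declared confidence can be scored against ground truth, here is a Brier-score sketch (not the challenge's metric):

```python
def brier_score(confidences, outcomes):
    """Mean squared gap between a declared confidence (P[decision correct])
    and the observed 0/1 outcome; lower means better-calibrated estimates."""
    return sum((c - o) ** 2 for c, o in zip(confidences, outcomes)) / len(outcomes)

# Declared confidences for four recognition decisions vs. their correctness.
score = brier_score([0.9, 0.8, 0.3, 0.6], [1, 1, 0, 1])
print(score)
```

A metric of this kind rewards an algorithm for knowing when it is likely wrong (e.g. under heavy obscuration), not merely for being right.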
The radar detection of targets in the presence of sea clutter has relied upon the radial velocity of targets with respect to
the radar platform, either by exploiting the relative target Doppler frequency (for targets with sufficient radial velocity) or
by discerning the paths targets traverse from scan to scan. For targets with little to no radial velocity component, though,
it can become quite difficult to differentiate targets from the surrounding sea clutter. The present paper addresses the detection
of slow-moving targets in sea clutter using high-resolution radar (HRR) based on the generalized detector
(GD) constructed in accordance with the generalized approach to signal processing (GASP) in noise, such that the target
has perceptible extent in range. Under the assumption of completely random sea clutter spikes based on an ε-contaminated
mixture model with the signal and clutter powers known, the best detection performance results from using the GD,
which is compared with the likelihood ratio test (LRT GD). For realistic sea clutter, the clutter spikes tend to be a localized
phenomenon. Based upon observations from real radar data measurements, a heuristic approach exploiting a salient
aspect of the idealized GD is developed, which is shown to perform well and to outperform the LRT GD
when applied to real measured sea clutter.
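An ε-contaminated mixture treats clutter as mostly background samples with an occasional high-power "spike" component; a toy sampler (the parameter values and function name are illustrative):

```python
import random

def clutter_samples(n, eps=0.05, sigma_bg=1.0, sigma_spike=5.0, seed=0):
    """ε-contaminated mixture: each sample is a background draw
    N(0, sigma_bg^2) with probability 1 - eps, or a high-power 'spike'
    draw N(0, sigma_spike^2) with probability eps."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        sigma = sigma_spike if rng.random() < eps else sigma_bg
        out.append(rng.gauss(0.0, sigma))
    return out

samples = clutter_samples(10_000)
var = sum(s * s for s in samples) / len(samples)
# Mixture variance is (1-eps)*sigma_bg^2 + eps*sigma_spike^2 = 0.95 + 1.25 = 2.2,
# so 'var' should land near 2.2: the rare spikes dominate the tails.
print(len(samples))
```

The heavy tails produced by the spike component are exactly what makes a Gaussian-clutter detector threshold poorly on real sea clutter.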
Principal Component Analysis (PCA) has been used in a variety of applications, such as feature extraction for
classification, data compression, and dimensionality reduction. Often, a small set of principal components is
sufficient to capture the largest variations in the data. As a result, the eigenvalues of the data covariance matrix
with the lowest magnitudes are ignored (along with their corresponding eigenvectors), and the remaining eigenvectors
are used for a 'coarse' representation of the data. It is well known that this process of choosing a few
principal components naturally induces a loss of information from a signal-reconstruction standpoint. We propose
a new technique to represent the data in terms of a new set of basis vectors in which high-frequency detail is
preserved, at the expense of a 'feature-scale blurring'. In other words, the 'blurring' that occurs due to possible
co-linearities in the basis vectors is relative to the eigen-features' scales; this is inherently different from a systematic
blurring function. Instead of thresholding the eigenvalues, we retain all eigenvalues and apply thresholds to the
components of each eigenvector separately. The resulting basis vectors can no longer be interpreted as eigenvectors
and will, in general, lose their orthogonality properties, but they offer benefits in terms of preserving detail that
is crucial for classification tasks. We test the merits of this new basis representation for magnitude synthetic
aperture radar (SAR) Automatic Target Recognition (ATR). A feature vector is obtained by projecting a SAR image
onto the aforementioned basis. Decision engines such as support vector machines (SVMs) are trained on example
feature vectors per class and are ultimately used to recognize the target class in real time. Experimental validation is
performed on the MSTAR database and involves comparisons against a PCA-based ATR algorithm.
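The proposed basis construction can be sketched as follows: keep every eigenvector of the data covariance, but zero each eigenvector's small components against a per-vector threshold (the specific threshold rule below is an assumption; the abstract does not state one):

```python
import numpy as np

def sparse_basis(data, frac=0.5):
    """Eigen-decompose the data covariance, keep ALL eigenvectors, and zero
    each eigenvector's components smaller than frac * its largest |component|.
    The resulting columns are, in general, no longer orthogonal."""
    cov = np.cov(data, rowvar=False)
    _, vecs = np.linalg.eigh(cov)          # columns are eigenvectors
    basis = vecs.copy()
    for j in range(basis.shape[1]):
        col = basis[:, j]
        col[np.abs(col) < frac * np.abs(col).max()] = 0.0
    return basis

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))              # stand-in for vectorized SAR chips
B = sparse_basis(X)
features = X @ B                           # feature vectors for a classifier
print(B.shape, features.shape)
```

Contrast this with PCA, which would drop whole columns of `vecs`; here the dimensionality is preserved and only small within-vector components are discarded.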
In estimating the state of thrusting/ballistic endoatmospheric projectiles for the end purpose of impact point
prediction (IPP), the total observation time, the wind effect, and the sensor accuracy significantly affect the IPP
performance. First, a tracker accounting for the wind effect is presented. Then, based on the recently developed
multiple interacting multiple model (MIMM) estimator, a sensitivity study of the IPP performance with
respect to the total observation time, the wind (strength and direction), and the sensor accuracy is presented.
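Wind enters impact point prediction through the projectile's air-relative velocity; a minimal fixed-step sketch of how a constant horizontal wind shifts the predicted impact point (this is a toy point-mass model, not the paper's MIMM estimator; all parameter values are illustrative):

```python
def impact_point(x0, y0, vx, vy, wind_x=0.0, drag=0.05, g=9.81, dt=0.01):
    """Predict the ground-impact x-coordinate (y = 0) of a point-mass
    projectile, with linear drag acting on the velocity relative to a
    constant horizontal wind. Simple fixed-step Euler integration."""
    x, y = x0, y0
    while y > 0.0 or vy > 0.0:
        rvx, rvy = vx - wind_x, vy          # air-relative velocity
        vx += -drag * rvx * dt
        vy += (-g - drag * rvy) * dt
        x += vx * dt
        y += vy * dt
    return x

# A tailwind carries the projectile farther than still air.
still = impact_point(0.0, 0.0, 100.0, 100.0)
tail  = impact_point(0.0, 0.0, 100.0, 100.0, wind_x=15.0)
print(tail > still)
```

Even this toy model makes the sensitivity question concrete: the predicted impact point moves with wind strength and direction, so unmodeled wind maps directly into IPP error.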