This PDF file contains the front matter associated with SPIE
Proceedings Volume 7697, including the Title Page, Copyright
information, Table of Contents, and the Conference Committee listing.
Multisensor Fusion, Multitarget Tracking, and Resource Management I
Multiple-Input Multiple-Output (MIMO) radars with widely separated antennas have attracted much attention
in the recent literature. The strong target-detection performance of widely separated MIMO radars compared
to multistatic radars has been widely studied. However, multiple-target localization with this architecture
has not been sufficiently explored. While Multiple Hypothesis Tracking (MHT) based methods have previously
been applied to target localization, in this paper the well-known 2-D assignment method is used instead in
order to avoid the computational cost of MHT. The assignment-based algorithm works at the signal level: signals
at the receivers are first matched to the different transmitters, and the outputs of the matched filters are then
used to compute the cost of each combination in the 2-D assignment. The main benefit of 2-D assignment is that
new targets are easily incorporated, which suits targets with multiple scatterers, where a target may otherwise
be unobservable in some transmitter-receiver pairs. Simulation results confirm the capability of the 2-D
assignment method to tackle multiple-target localization problems, even at relatively low SNRs.
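As a minimal illustration of the assignment step (the matched-filter front end that produces the costs is not shown, and all names are assumptions), the 2-D assignment can be solved optimally in polynomial time, with dummy columns letting any measurement open a new target:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_2d(cost, new_target_cost):
    """cost[i, j]: negative log-likelihood that measurement i belongs to
    candidate target j (derived from matched-filter outputs). Dummy
    columns let any measurement start a new target at a fixed cost."""
    m, n = cost.shape
    # Large finite cost forbids a measurement using another's dummy column.
    aug = np.hstack([cost, np.full((m, m), 1e9)])
    aug[np.arange(m), n + np.arange(m)] = new_target_cost
    rows, cols = linear_sum_assignment(aug)
    # j < n: matched to an existing candidate; otherwise a new target.
    return [(i, j if j < n else None) for i, j in zip(rows, cols)]

rng = np.random.default_rng(0)
print(assign_2d(rng.uniform(0.0, 10.0, size=(3, 4)), new_target_cost=6.0))
```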
In this paper, the problem of tracking multiple targets in an unknown clutter background using the Joint Integrated
Probabilistic Data Association (JIPDA) tracker and the Multiple Hypothesis Tracker (MHT) is studied. In real
tracking problems it is common to have little or no prior information on the clutter background. Furthermore, the
clutter background may be dynamic and evolve with time. Thus, in order to obtain accurate tracking results, trackers
need to estimate the parameters of the clutter background at each sampling instant and use the estimate to improve
tracking. In this paper, a method based on non-homogeneous Poisson point processes is proposed, in combination
with the JIPDA tracker or the MHT algorithm, to estimate the intensity function of the non-homogeneous clutter
background. In the proposed method, an approximate Bayesian estimate of the intensity of the non-homogeneous
clutter is updated iteratively through the Normal-Wishart Mixture Probability Hypothesis Density (PHD) filter
technique. The clutter density estimate is then used in the JIPDA and MHT algorithms for multitarget tracking.
It is demonstrated through simulations that the proposed clutter background estimation method improves the
performance of the JIPDA tracker in unknown clutter background.
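The Normal-Wishart mixture PHD estimator itself is not spelled out in the abstract; the following is only a crude grid-based stand-in illustrating the general idea of recursively estimating a non-homogeneous clutter intensity from the measurements a tracker leaves unassociated. The class name and the forgetting-factor scheme are assumptions for illustration.

```python
import numpy as np

class GridClutterEstimator:
    """Crude stand-in for a non-homogeneous clutter intensity estimator:
    accumulate unassociated measurements on a spatial grid with
    exponential forgetting (NOT the paper's Normal-Wishart mixture PHD)."""

    def __init__(self, bounds, shape, forgetting=0.9):
        self.bounds, self.shape, self.forgetting = bounds, shape, forgetting
        (x0, x1), (y0, y1) = bounds
        self.cell_area = ((x1 - x0) / shape[0]) * ((y1 - y0) / shape[1])
        self.counts = np.zeros(shape)

    def update(self, unassociated):
        """unassociated: (k, 2) measurements left over after association."""
        self.counts *= self.forgetting
        (x0, x1), (y0, y1) = self.bounds
        ix = np.clip(((unassociated[:, 0] - x0) / (x1 - x0)
                      * self.shape[0]).astype(int), 0, self.shape[0] - 1)
        iy = np.clip(((unassociated[:, 1] - y0) / (y1 - y0)
                      * self.shape[1]).astype(int), 0, self.shape[1] - 1)
        np.add.at(self.counts, (ix, iy), 1.0)

    def intensity(self):
        """Clutter intensity map: expected clutter per unit area per scan."""
        n_eff = 1.0 / (1.0 - self.forgetting)  # effective number of scans
        return self.counts / (n_eff * self.cell_area)
```

In a JIPDA-style tracker, the intensity evaluated at each measurement would replace the assumed-known clutter density in the association weights.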
We have invented a new theory of exact particle flow for nonlinear filters. This
generalizes our theory of particle flow, which is already many orders of magnitude faster than
standard particle filters and several orders of magnitude more accurate than the
extended Kalman filter for difficult nonlinear problems. The new theory generalizes our recent
log-homotopy particle flow filters in three ways: (1) the particle flow corresponds to the exact flow
of the conditional probability density; (2) roughly speaking, the old theory was based on
incompressible flow (like subsonic flight in air), whereas the new theory allows compressible flow
(like supersonic flight in air); (3) the old theory suffers from obstruction of particle flow as well as
singularities in the equations for flow, whereas the new theory has no obstructions and no
singularities. Moreover, our basic filter theory is a radical departure from all other particle filters in
three ways: (a) we do not use any proposal density; (b) we never resample; and (c) we compute
Bayes' rule by particle flow rather than as a pointwise multiplication.
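For reference, a minimal sketch of the flow in its linear-Gaussian special case, where the exact flow dx/dλ = A(λ)x + b(λ) has a reported closed form; the Euler integration and all names are illustrative, and the general nonlinear flow is not shown.

```python
import numpy as np

def exact_flow_linear_gaussian(particles, xbar, P, H, R, z, n_steps=20):
    """Euler integration of the exact flow dx/dlam = A(lam) x + b(lam)
    over lam in [0, 1] for a linear measurement z = Hx + v, v ~ N(0, R),
    and Gaussian prior N(xbar, P) -- the closed-form special case."""
    lam, dlam = 0.0, 1.0 / n_steps
    I = np.eye(P.shape[0])
    for _ in range(n_steps):
        S = lam * (H @ P @ H.T) + R
        A = -0.5 * P @ H.T @ np.linalg.solve(S, H)
        b = (I + 2.0 * lam * A) @ ((I + lam * A) @ P @ H.T
                                   @ np.linalg.solve(R, z) + A @ xbar)
        particles = particles + dlam * (particles @ A.T + b)
        lam += dlam
    return particles

# 2-D toy check against the Kalman posterior mean for z = x[0] + noise.
rng = np.random.default_rng(1)
xbar, P = np.zeros(2), np.eye(2)
H, R, z = np.array([[1.0, 0.0]]), np.array([[0.5]]), np.array([1.2])
moved = exact_flow_linear_gaussian(
    rng.multivariate_normal(xbar, P, 2000), xbar, P, H, R, z)
print(moved.mean(axis=0))   # close to the Kalman update [0.8, 0.0]
```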
We give a fresh perspective on research into nonlinear filters with particle flow.
Such research is an interesting mixture of theory and numerical experiments, as well as of
implementation tradeoffs involving GPUs, fast approximate k-NN algorithms, and fast approximate
Poisson solvers. Our fundamental idea is to compute Bayes' rule using an ordinary differential
equation (ODE) rather than a pointwise multiplication; this solves the problem of particle
degeneracy. Our filter is many orders of magnitude faster than standard particle filters for high-dimensional
problems, and it is several orders of magnitude more accurate than the EKF for
difficult nonlinear problems, including problems with multimodal densities.
In this paper, a study of the particle flow filter proposed by Daum and Huang is conducted. It is found
that for certain initial conditions, the desired particle flow that brings a particle from a good location in the
prior distribution to a good location of equal value in the posterior distribution does not exist. This explains
the phenomenon of outliers experienced by Daum and Huang. Several ways of dealing with the singularity of the
gradient are discussed, including (1) not moving the particles that have no flow solution, (2) stopping the
flow entirely when it approaches the singularity, and (3) stopping for one step and restarting in the next. In each
case the resulting set of particles is examined, and it is doubtful that they form a valid set of samples for
approximating the desired posterior distribution. With the last method (stop and go), the particles
mostly concentrate on the mode of the desired distribution (but fail to represent the whole distribution),
which may explain the "success" reported in the literature so far. An established method of moving particles,
the well-known Population Monte Carlo method, is briefly presented in this paper for ease of reference.
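Since Population Monte Carlo is presented for reference, a minimal sketch may help: propose from Gaussian kernels centred on the current particles, importance-weight against the target, and resample. Kernel width and the bimodal example target are assumptions for illustration.

```python
import numpy as np
from scipy.special import logsumexp

def pmc(log_target, n=1000, iters=10, dim=1, prop_std=1.0, seed=0):
    """Minimal Population Monte Carlo: Gaussian kernel proposals,
    importance weighting, multinomial resampling."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 5.0, size=(n, dim))       # diffuse initial population
    for _ in range(iters):
        centers = x
        x = centers + prop_std * rng.standard_normal((n, dim))
        # Mixture proposal q(x) = (1/n) sum_j N(x; centers_j, prop_std^2 I)
        d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        log_q = (logsumexp(-0.5 * d2 / prop_std**2, axis=1) - np.log(n)
                 - 0.5 * dim * np.log(2 * np.pi * prop_std**2))
        log_w = log_target(x) - log_q
        w = np.exp(log_w - logsumexp(log_w))
        x = x[rng.choice(n, size=n, p=w)]          # resample
    return x

# Bimodal example target: mixture of N(-3, 1) and N(3, 1) in one dimension.
samples = pmc(lambda x: np.logaddexp(-0.5 * (x[:, 0] - 3.0) ** 2,
                                     -0.5 * (x[:, 0] + 3.0) ** 2))
print(samples.mean(), samples.std())   # both modes should be populated
```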
Recently, it has been shown that the continuous-discrete and continuous-continuous nonlinear filtering problems
can be formulated and solved in terms of Feynman path integrals. A physical and conceptual explanation of
the central results is presented. The major role played by such techniques in modern theoretical physics and
pure mathematics is briefly reviewed. Several advantages of the proposed formulation (over other approaches in
standard filtering theory literature) are discussed. Also clarified are the origin of some filtering theory results,
such as the Yau algorithm for continuous-continuous filtering, and the relation between certain nonlinear filtering
systems and Euclidean quantum physics.
The fundamental solution of continuous-time filtering problems can be expressed in terms of Feynman path
integrals. This enables one to view the solution of the filtering problem in terms of an effective action that is a function
of the signal and measurement models. The practical utility of the path integral formula is demonstrated via
some nontrivial examples. Specifically, it is shown that the simplest approximation of the path integral formula
for the fundamental solution of the Fokker-Planck-Kolmogorov forward equation (termed the Dirac-Feynman
approximation) can be applied to solve nonlinear continuous-discrete filtering problems quite accurately using
sparse-grid filtering and Monte Carlo approaches.
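As a concrete illustration of the Dirac-Feynman idea for a scalar diffusion dx = f(x) dt + σ dW (an assumed simplified model, not the paper's general formulation), the one-slice approximation of the FPK fundamental solution over a short step ε is the exponential of the discretized Onsager-Machlup action:

\[
p(x, t+\epsilon \mid x', t) \;\approx\; \frac{1}{\sqrt{2\pi\sigma^{2}\epsilon}}\,
\exp\!\left[-\frac{\epsilon}{2\sigma^{2}}\left(\frac{x-x'}{\epsilon}-f(x')\right)^{2}\right].
\]

Chaining such slices and integrating over intermediate states recovers the path integral; keeping a single slice between measurement times gives a cheap propagator for continuous-discrete filtering.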
Multisensor Fusion, Multitarget Tracking, and Resource Management II
An alternative approach to data association is analyzed. The technique, based on an automatic target recognition
scheme, uses an image correlation method that relies on the phase-only filter. The phase-only filter can compare
track-level data (or tagged data) over multiple scans simultaneously. The approach can also combine kinematic and
attribute data and score them simultaneously. The technique requires that the track information be mapped into an
image representation, referred to as a tile. The tile can be generated from an amplitude representation of the
target-track variations; alternatively, phase variations rather than amplitude can be used. These tiles can
represent multiple track attributes over multiple reports. The capabilities of the phase-only filter correlation
technique are compared to the standard chi-squared metric.
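As a minimal sketch of the correlation engine (tile construction from track data is not shown; names are illustrative), the phase-only filter whitens the reference spectrum so that only phase carries information:

```python
import numpy as np

def phase_only_correlate(tile_a, tile_b, eps=1e-12):
    """Correlate two tiles with a phase-only filter built from tile_a:
    the reference spectrum is normalized to unit magnitude, then the
    correlation is formed in the frequency domain."""
    A = np.fft.fft2(tile_a)
    B = np.fft.fft2(tile_b)
    pof = np.conj(A) / (np.abs(A) + eps)       # phase-only filter
    corr = np.real(np.fft.ifft2(pof * B))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    return corr, peak

# Toy check: a shifted copy of a random tile produces a peak at the shift.
rng = np.random.default_rng(2)
a = rng.standard_normal((64, 64))
b = np.roll(a, (5, 3), axis=(0, 1))
_, peak = phase_only_correlate(a, b)
print(peak)   # (5, 3)
```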
The subject of traffic flow modeling began over fifty years ago when Lighthill and Whitham used the flow-continuity
equation from fluid dynamics to describe traffic behavior. Since then, a multitude of models, broadly classified into
macroscopic, mesoscopic, and microscopic models, has been developed. Macroscopic models describe the space-time
evolution of aggregate quantities such as traffic flow density, whereas microscopic models describe the behavior of
individual drivers/vehicles in the presence of other vehicles. In this paper, we consider tracking of vehicles using a
specific microscopic model known as the intelligent driver model (IDM). As in other microscopic models, the IDM
equations of motion of a vehicle are nonlinearly coupled to those of neighboring vehicles, with the magnitudes of the
coupling terms growing as vehicles get closer and shrinking as vehicles get farther apart. In our approach, the state
of weakly coupled groups of vehicles is represented by separate probability distributions. When the vehicles move
closer to each other, the state is represented by a joint probability distribution that accounts for the interaction
among vehicles. We use a sum-of-Gaussians approach to represent the underlying interaction structure for state
estimation and to reduce computational complexity. We describe our approach and illustrate it with
simulated examples.
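The IDM acceleration law is standard in the literature; a minimal sketch follows, with parameter values that are illustrative defaults rather than the ones used in the paper.

```python
import numpy as np

def idm_accel(v, gap, dv, v0=30.0, T=1.5, a_max=1.0, b=2.0, s0=2.0, delta=4):
    """IDM acceleration. v: own speed [m/s]; gap: bumper-to-bumper
    distance to the leader [m]; dv: approach rate v - v_leader [m/s]."""
    s_star = s0 + max(0.0, v * T + v * dv / (2.0 * np.sqrt(a_max * b)))
    return a_max * (1.0 - (v / v0) ** delta - (s_star / gap) ** 2)

# Two-vehicle platoon with simple Euler steps; the leader holds 20 m/s.
dt = 0.1
x = np.array([0.0, 50.0])     # follower, leader positions [m]
v = np.array([25.0, 20.0])    # follower, leader speeds [m/s]
for _ in range(600):
    gap = x[1] - x[0] - 5.0   # 5 m vehicle length
    a_f = idm_accel(v[0], gap, v[0] - v[1])
    x += v * dt
    v[0] += a_f * dt
print(gap, v)                 # follower settles behind the leader
```

The nonlinear coupling through the gap term is what forces the joint (rather than independent) state representation when vehicles close on each other.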
In surveillance and reconnaissance applications, moving objects are followed by tracking filters using sequential
measurements. There are two popular implementations of tracking filters: one is the covariance or Kalman filter
and the other is the information filter. Evaluation of tracking filters is important for performance optimization,
not only in tracking filter design but also in resource management. The information matrix is the inverse of the
covariance matrix. Covariance filter-based approaches attempt to minimize scalar indexes of the covariance
matrix, whereas information filter-based methods aim to maximize scalar indexes of the information matrix.
Such scalar performance measures include the trace, determinant, norms (1-norm, 2-norm, infinity-norm, and
Frobenius norm), and eigenstructure of the covariance matrix or the information matrix, and their variants. A
natural question is whether the scalar performance measures applied to the covariance matrix are equivalent to
those applied to the information matrix. In this paper we show that most of the scalar performance indexes are
equivalent, yet some are not. As a result, the indexes, if used improperly, would provide an "optimized" solution
that is wrong with respect to track accuracy. Our simulations indicated that all seven indexes were successful when
applied to the covariance matrix. For the information filter, however, the trace and the four norms (as defined in
MATLAB) of the information matrix failed, whereas the determinant and a properly selected eigenvalue of the
information matrix successfully selected the optimal sensor update configuration. This evaluation analysis can
serve as a guideline for determining the suitability of performance measures for tracking filter design and
resource management.
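A quick numerical check of the central point, using an assumed random covariance: determinant-based indexes agree across the covariance and information forms, while trace-based indexes do not.

```python
import numpy as np

rng = np.random.default_rng(3)
M = rng.standard_normal((4, 4))
P = M @ M.T + 4.0 * np.eye(4)   # a covariance matrix
J = np.linalg.inv(P)            # the corresponding information matrix

# Determinant-based criteria are consistent: det(J) = 1/det(P), so
# maximizing det(J) is exactly minimizing det(P).
print(np.linalg.det(J), 1.0 / np.linalg.det(P))   # equal

# Trace-based criteria are not: trace(J) != 1/trace(P) in general, so
# maximizing trace(J) is a different criterion from minimizing trace(P).
print(np.trace(J), 1.0 / np.trace(P))             # different
```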
This paper develops and evaluates a game-theoretic approach to distributed sensor-network management for target
tracking via sensor-based negotiation. We present a distributed sensor-based negotiation game model for sensor
management in multi-sensor multi-target tracking situations. In our negotiation framework, each negotiation agent
represents a sensor, and each sensor maximizes its utility using a game-theoretic approach. The greediness of each
sensor is limited by the fact that sensor-to-target assignment efficiency decreases if too many sensor resources are
assigned to the same target. This is similar to real-world market mechanisms, such as agreements between buyers and
sellers in an auction market. Sensors are willing to switch targets so that they can obtain the highest utility and the
most efficient use of their resources. Our subgame perfect equilibrium-based negotiation strategies assign sensors
to targets dynamically and in a distributed manner. Numerical simulations demonstrate our sensor-based negotiation
approach for distributed sensor management.
Coordination and deployment of multiple unmanned air vehicles (UAVs) requires substantial human resources to
carry out a successful mission. The complexity of such a surveillance mission increases significantly in an
urban environment, where targets can easily escape the UAV's field of view (FOV) due to intervening buildings and
line-of-sight obstruction. In the proposed methodology, we focus on the control and coordination of multiple UAVs
with gimbaled video sensors onboard for tracking multiple targets in an urban environment. We developed optimal
path planning algorithms with emphasis on dynamic target prioritization and persistent target updates. The command
center is responsible for target prioritization and autonomous control of the UAVs, enabling a single operator to
monitor and control a team of UAVs from a remote location. Results are obtained using extensive 3D simulations in
Google Earth with Tangent plus Lyapunov vector field guidance for target tracking.
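The guidance law itself is not given in the abstract; the sketch below shows one common form of the Lyapunov guidance vector field from the standoff-tracking literature, which converges to a loiter circle of radius rd about the target. The paper's "Tangent plus Lyapunov" variant may differ, and parameter values are illustrative.

```python
import numpy as np

def lyapunov_guidance_field(rel_pos, v0=20.0, rd=200.0):
    """Desired ground-velocity command: the UAV spirals onto a standoff
    circle of radius rd [m] about the target at constant speed v0 [m/s].
    rel_pos = UAV position minus target position in the ground plane."""
    x, y = rel_pos
    r = np.hypot(x, y)
    c = -v0 / (r * (r**2 + rd**2))
    return np.array([c * (x * (r**2 - rd**2) + y * 2.0 * r * rd),
                     c * (y * (r**2 - rd**2) - x * 2.0 * r * rd)])

# Euler integration: start 600 m from the target and converge to orbit.
p = np.array([600.0, 0.0])
for _ in range(2000):
    p += 0.1 * lyapunov_guidance_field(p)
print(np.hypot(*p))   # approximately rd = 200
```

The field has constant magnitude v0, and its radial component is negative whenever r > rd, which is what drives the UAV onto the standoff circle.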
Multisensor Fusion Methodologies and Applications I
This paper generalizes the PHD filter to the case of target-dependent clutter. It is assumed that a distinct a
priori Poisson clutter process is associated with each target. Multitarget calculus techniques are used to derive
formulas for the measurement-update step. These formulas require combinatorial sums over all partitions of the
current measurement set. Further research is required to address the resulting computational issues.
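To illustrate why the computational issues arise, the number of partitions of a measurement set grows as the Bell numbers; a small enumeration sketch (names are illustrative):

```python
def partitions(s):
    """Yield all partitions of the list s as lists of blocks."""
    if not s:
        yield []
        return
    head, rest = s[0], s[1:]
    for part in partitions(rest):
        yield [[head]] + part                  # head in its own block
        for i in range(len(part)):             # or head joins block i
            yield part[:i] + [[head] + part[i]] + part[i + 1:]

print(sum(1 for _ in partitions([1, 2, 3, 4])))   # 15 = Bell(4)
# Bell numbers grow super-exponentially: Bell(10) is already 115975.
```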
The conventional PHD and CPHD filters presume that the probability pD(x) that a measurement will
be collected from a target with state-vector x (the state-dependent probability of detection) is known a priori.
In many applications, however, this presumption is false. A few methods have been devised for estimating the
probability of detection, but they typically presume that pD(x) is constant in both time and the region of interest.
This paper introduces CPHD/PHD filters that are capable of multitarget track-before-detect operation even when
the probability of detection is not known and, moreover, not necessarily constant, either temporally or
spatially. Furthermore, these filters are potentially computationally tractable. We begin by deriving CPHD/PHD
filter equations for the case when the probability of detection is unknown but the clutter model is known a priori.
Then, building on the results of a companion paper, we note that CPHD/PHD filters can be derived for the case
when neither the probability of detection nor the background clutter is known.
A group target is a collection of individual targets (for example, a convoy of articulated vehicles or a crowd
of football supporters) and can be represented mathematically as a spatial cluster process. The process
of detecting, tracking and identifying group targets requires the estimation of the evolution of such a dynamic
spatial cluster process in time based on a sequence of partial observation sets. A suitable generalisation of the
Bayes filter for this system would provide us with an optimal (but computationally intractable) estimate of a
multi-group multi-object state based on measurements received up to the current time-step. In this paper, we
derive the first-moment approximation of the multi-group multi-target Bayes filter, inspired by the first-moment
multi-object Bayes filter derived by Mahler. Such approximations are Bayes optimal and provide estimates for
the number of clusters (groups) and their positions in the group state-space, as well as estimates for the number
of cluster components (object targets) and their positions in target state-space.
Multisensor Fusion Methodologies and Applications II
Multisource passive acoustic tracking is useful in animal bio-behavioral studies, replacing or augmenting human
involvement during and after field data collection. Multiple simultaneous vocalizations are a common occurrence
in a forest or a jungle, where many species are encountered. Given a set of nodes capable of producing
multiple direction-of-arrival (DOA) estimates, such data need to be combined into meaningful estimates. Random
Finite Set theory provides a suitable probabilistic model for the analysis and synthesis of an optimal estimation
algorithm. The proposed algorithm is verified using a simulation and a controlled test experiment.
Dynamic sensor management of heterogeneous and distributed sensors presents a daunting theoretical and practical
challenge. We present a Situational Awareness Sensor Management (SA-SM) algorithm for the tracking of ground
targets moving on a road map. It is based on the previously developed information-theoretic Posterior Expected
Number of Targets of Interest (PENTI) objective function, and utilizes combined measurements from an airborne
GMTI radar and a space-based EO/IR sensor. The resulting filtering methods and techniques are tested and evaluated,
and different scan rates for the GMTI radar and the EO/IR sensor are compared.
The detection and tracking of collision events involving existing Low Earth Orbit (LEO) Resident Space Objects
(RSOs) is becoming increasingly important with the growing volume of LEO space-object traffic, which is anticipated
to increase even further in the near future. Changes in velocity that can lead to a collision are hard to detect
early, before the collision happens. Several collision events can occur at the same time, and continuous monitoring
of the LEO regime is necessary in order to determine and implement collision avoidance strategies. We present a
simulation of a constellation system consisting of multiple platforms carrying EO/IR sensors for the detection of such
collisions. The simulation encompasses the full complexity of LEO trajectory changes that can lead to collisions
with currently operating satellites. An efficient multitarget filter with information-theoretic multisensor management
is implemented and evaluated on different constellations.
This paper presents a comparison of stochastic optimizers running inside a centralized sensor resource manager (SRM)
for scheduling the tasks (observations) of an ensemble of space-observing kinematic sensors. The manager is designed
to operate as a receding horizon controller in a closed feedback loop with a linear-filter-based multiple hypothesis
tracker (MHT) that fuses the disparate sensor data to produce target declarations and state estimates. The reward
function is based on the expected entropic information gain of satellite tracks over the planning horizon. A comparison
of several stochastic optimizers, namely particle swarm optimization (PSO), evolutionary algorithms (EA), and the
simultaneous perturbation stochastic approximation (SPSA) algorithm, is performed over the resulting high-dimensional,
Markovian, and discontinuous reward function. The algorithms were evaluated by simulating space surveillance
scenarios using idealized optical sensors, satellite two-line element (TLE) sets from the US Space Track catalog, and
relevant factors such as line-of-sight visibility. Simulation results show that a hybrid PSO and EA algorithm
outperforms the other algorithms over the tests performed.
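Of the optimizers compared, SPSA is compact enough to sketch: it estimates the gradient from only two evaluations of the (possibly noisy) objective per iteration, regardless of dimension. A minimal minimization form with Spall-style gain sequences; the constants are illustrative.

```python
import numpy as np

def spsa(f, theta0, n_iters=500, a=0.1, c=0.1, alpha=0.602, gamma=0.101, seed=0):
    """Simultaneous perturbation stochastic approximation (minimization)."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    for k in range(1, n_iters + 1):
        ak = a / k**alpha                      # step-size gain
        ck = c / k**gamma                      # perturbation gain
        delta = rng.choice([-1.0, 1.0], size=theta.shape)   # Rademacher
        # Two-sided gradient estimate from just two function evaluations.
        ghat = (f(theta + ck * delta) - f(theta - ck * delta)) / (2.0 * ck * delta)
        theta -= ak * ghat
    return theta

print(spsa(lambda t: ((t - 3.0) ** 2).sum(), np.zeros(5)))  # ~[3 3 3 3 3]
```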
Multisensor Fusion Methodologies and Applications III
The Gaussian mixture model (GMM) has been used in many dynamic state estimation applications such as target tracking
and distributed fusion. However, the number of components in the mixture distribution tends to grow rapidly when
multiple GMMs are combined. In order to keep the computational complexity bounded, it is necessary to approximate a
Gaussian mixture by one with a reduced number of components. Gaussian mixture reduction is traditionally conducted by
recursively selecting the two components that appear most similar to each other and merging them, and various
definitions of the similarity measure have been used in the literature. For one-dimensional Gaussian mixtures,
K-means algorithms and some variations have recently been proposed to cluster Gaussian mixture components into
groups, use a center component to represent all components in each group, readjust the parameters of the center
components, and finally perform weight optimization. In this paper, we focus on multi-dimensional Gaussian mixture
models. From a variety of reduction algorithms and possible combinations, we developed a hybrid algorithm with
constraint-optimized weight adaptation to minimize the integrated squared error (ISE). In addition, extensive
simulations show that the proposed algorithm provides efficient and effective Gaussian mixture reduction in various
random scenarios.
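The pairwise merge at the heart of traditional reduction preserves the first two moments of the merged pair; a minimal sketch follows (the paper's hybrid clustering and ISE-based weight optimization are not shown).

```python
import numpy as np

def merge_two(w1, m1, P1, w2, m2, P2):
    """Moment-preserving merge of two Gaussian components: the merged
    component matches the weight, mean, and covariance of the pair."""
    w = w1 + w2
    m = (w1 * m1 + w2 * m2) / w
    d1 = (m1 - m)[:, None]
    d2 = (m2 - m)[:, None]
    # Within-component covariance plus spread of the means about m.
    P = (w1 * (P1 + d1 @ d1.T) + w2 * (P2 + d2 @ d2.T)) / w
    return w, m, P

w, m, P = merge_two(0.6, np.zeros(2), np.eye(2),
                    0.4, np.ones(2), 2.0 * np.eye(2))
print(w, m, P)
```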
Many applications require measuring the distance between mixture distributions. For example, in content-based
image retrieval (CBIR) systems and audio speech identification, a distance measure between mixture models is often
required. This is also an important element of multisensor tracking and fusion, where different types of state
representations employed by distributed agents need to be correlated. Various distance metrics have been developed to
serve this purpose. The performance of these metrics can be evaluated by comparing the probabilities of correct
correlation versus false detection as a function of a predetermined threshold on the calculated distance. In this
paper, we compare several distance metrics for mixture distributions. Specifically, we focus on three such distance
measures, namely the integral square error distance, the Bhattacharyya distance, and the Kullback-Leibler distance.
To ensure that these techniques can be applied to general distributions, not just the Gaussian mixture model (GMM),
we use them in conjunction with a distance metric designed specifically for mixtures, called the general mixture
distance (GMD). For evaluation purposes, we use GMMs in simulation as a test example of mixture models.
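For reference, the Kullback-Leibler divergence has a closed form between individual Gaussian components, though not between mixtures (one motivation for comparing metrics in the first place); a minimal sketch:

```python
import numpy as np

def kl_gauss(m0, S0, m1, S1):
    """Closed-form KL divergence KL(N(m0, S0) || N(m1, S1))."""
    k = m0.size
    S1_inv = np.linalg.inv(S1)
    d = m1 - m0
    return 0.5 * (np.trace(S1_inv @ S0) + d @ S1_inv @ d - k
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

print(kl_gauss(np.zeros(2), np.eye(2), np.ones(2), 2.0 * np.eye(2)))
```

Between full mixtures, such divergences are typically approximated, e.g. by Monte Carlo sampling or by pairwise component bounds.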
Genetic algorithms (GAs) have been applied to many difficult optimization problems, such as track assignment and
hypothesis management for multisensor integration and data fusion. However, premature convergence has been a main
problem for GAs. In order to prevent premature convergence, we introduce an allied strategy based on biological
evolution and present a parallel genetic algorithm with the allied strategy (PGAAS). The PGAAS prevents
premature convergence, increases optimization speed, and has been successfully applied in several applications. In
this paper, we first present a Markov chain model of the PGAAS. Based on this model, we analyze the convergence
properties of PGAAS and then present a proof of global convergence for the PGAAS algorithm. Experimental results
show that PGAAS is an efficient and effective parallel genetic algorithm. Finally, we discuss several potential
applications of the proposed methodology.
We investigate the assignment of assets to tasks where each asset can potentially execute any of the tasks,
but executes them with a probabilistic outcome of success. There is a cost associated with each possible
assignment of an asset to a task, and if a task is not executed there is also a cost associated with its
non-execution. Thus any assignment of assets to tasks results in an expected overall cost, which we
wish to minimise. We propose an approach based on the Random Neural Network (RNN) which is fast and of
low polynomial complexity. The evaluation indicates that the proposed RNN approach comes within
10% of the cost obtained by the optimal solution in all cases.
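The RNN itself is not specified in the abstract; the sketch below only formalizes the expected-cost objective under an assumed one-to-one assignment and solves it exactly with the Hungarian algorithm, the kind of optimal baseline the 10% figure is measured against. All names are illustrative.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def expected_cost_assignment(c, p, d):
    """c[i, j]: cost of assigning asset i to task j; p[i, j]: probability
    asset i executes task j successfully; d[j]: cost of task j going
    unexecuted. Expected cost of pairing (i, j): pay c always, d on failure."""
    E = c + (1.0 - p) * d[None, :]
    rows, cols = linear_sum_assignment(E)
    # Tasks left without any asset incur their full non-execution cost.
    total = E[rows, cols].sum() + d.sum() - d[cols].sum()
    return list(zip(rows, cols)), total

rng = np.random.default_rng(6)
c = rng.uniform(1.0, 5.0, (3, 5))
p = rng.uniform(0.5, 1.0, (3, 5))
d = rng.uniform(5.0, 10.0, 5)
print(expected_cost_assignment(c, p, d))
```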
Multi-class assignment is often used to aid the exploitation of data in the Intelligence, Surveillance, and
Reconnaissance (ISR) community. For example, tracking systems collect detections into tracks, and recognition systems
classify objects into various categories. The reliability of these systems is highly contingent upon the correctness
of the assignments. Conventional methods and metrics for evaluating assignment correctness convey only partial
information about system performance and are usually tied to the specific type of system being evaluated. Recently,
information theory has been successfully applied to the tracking problem to develop an overall performance evaluation
metric. In this paper, the information-theoretic framework is extended to measure the overall performance of any
multi-class assignment system, specifically, any system that can be described using a confusion matrix. Performance
is evaluated based on the amount of truth information captured and the amount of false information reported by the
system. The information content is quantified through conditional entropy and mutual information computations using
numerical estimates of the association probabilities. The end result is analogous to the Receiver Operating
Characteristic (ROC) curve used in signal detection theory. This paper compares these information quality metrics
to existing metrics and demonstrates how to apply them to evaluate the performance of a recognition system.
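A minimal sketch of the underlying computation, assuming only that the system is summarized by a confusion matrix of counts (the paper's full framework is not reproduced):

```python
import numpy as np

def confusion_information(C):
    """Mutual information I(truth; decision) and conditional entropy
    H(truth | decision) in bits, from a confusion matrix of counts
    C[i, j] = number of truth-class-i objects reported as class j."""
    P = C / C.sum()                        # joint distribution estimate
    pi = P.sum(axis=1, keepdims=True)      # truth marginal
    pj = P.sum(axis=0, keepdims=True)      # decision marginal
    nz = P > 0
    mi = (P[nz] * np.log2(P[nz] / (pi @ pj)[nz])).sum()
    h_cond = -(P[nz] * np.log2((P / pj)[nz])).sum()
    return mi, h_cond

C = np.array([[90.0, 10.0], [20.0, 80.0]])
print(confusion_information(C))   # higher MI = more truth captured
```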
Probabilistic inference for hybrid Bayesian networks, which involve both discrete and continuous variables, has
been an important research topic in recent years. This is not only because a number of efficient inference
algorithms have been developed and matured for simpler network types such as purely discrete models, but
also because of the practical need for continuous variables, which are inevitable when modeling complex systems.
Pearl's message passing algorithm provides a simple framework for computing posterior distributions by propagating
messages between nodes, and can provide exact answers for polytree models with purely discrete or continuous
variables. In addition, applying Pearl's message passing to networks with loops usually converges and results in
good approximations. For hybrid models, however, a general message passing algorithm between different
types of variables is needed. In this paper, we develop a method called Direct Message Passing (DMP) for exchanging
messages between discrete and continuous variables. Based on Pearl's algorithm, we derive formulae to compute
messages for variables in the various dependence relationships encoded in conditional probability distributions.
Mixtures of Gaussians are used to represent continuous messages, with the number of mixture components up to the
size of the joint state space of all discrete parents. For polytree Conditional Linear Gaussian (CLG) Bayesian
networks, DMP has the same computational requirements as the Junction Tree (JT) algorithm and provides the same
exact solution. However, while JT works only for the CLG model, DMP can be applied to general nonlinear,
non-Gaussian hybrid models to produce approximate solutions using the unscented transformation and loopy
propagation. Furthermore, the algorithm can be scaled by restricting the number of mixture components in the
messages. Empirically, we found that the approximation errors are relatively small, especially
for nodes that are far away from the discrete parent nodes. Numerical simulations show encouraging results.
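A minimal sketch of the basic unscented transformation used to pass continuous messages through nonlinear relationships (the DMP bookkeeping itself is not shown; kappa and the sigma-point scheme are textbook defaults, not necessarily the paper's):

```python
import numpy as np

def unscented_transform(mean, cov, g, kappa=0.0):
    """Propagate a Gaussian N(mean, cov) through a nonlinearity g using
    the basic 2n+1 sigma-point unscented transform."""
    n = mean.size
    S = np.linalg.cholesky((n + kappa) * cov)   # columns are offsets
    sigma = np.vstack([mean, mean + S.T, mean - S.T])
    w = np.full(2 * n + 1, 1.0 / (2.0 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    y = np.array([g(x) for x in sigma])
    my = w @ y
    dy = y - my
    return my, (w[:, None] * dy).T @ dy          # mean, covariance of g(x)

# Example: Cartesian-to-polar conversion of a 2-D Gaussian.
m, P = unscented_transform(
    np.array([1.0, 1.0]), 0.1 * np.eye(2),
    lambda x: np.array([np.hypot(*x), np.arctan2(x[1], x[0])]))
print(m, P)
```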
Multisensor Fusion Methodologies and Applications IV
A classification system, such as an automatic target recognition (ATR) system, with N possible output labels (or
decisions) has N(N-1) possible errors. The Receiver Operating Characteristic (ROC) manifold was created to quantify
all of these errors. Finite truth data produces only an approximation to the ROC manifold. How well does the
approximate ROC manifold approximate the true ROC manifold? Several metrics exist that quantify the approximation
ability, but researchers really wish to quantify the confidence in the approximate ROC manifold. This paper reviews
different confidence definitions for ROC curves and derives an expression for the confidence of a ROC manifold. The
foundation of the confidence expression is the Chebyshev inequality.
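For reference, the Chebyshev inequality underlying the confidence expression states that for any random variable X with mean \(\mu\) and standard deviation \(\sigma\),

\[
\Pr\bigl(\lvert X-\mu\rvert \ge k\sigma\bigr) \;\le\; \frac{1}{k^{2}},
\]

so each error rate estimated from finite truth data lies within \(k\sigma\) of its true value with confidence at least \(1 - 1/k^{2}\), without any distributional assumptions.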
We present a data-driven approach to classification of Quartz Crystal Microbalance (QCM) sensor data. The sensor
is a nano-nose gas sensor that detects concentrations of analytes down to ppm levels using plasma-polymerized
coatings. Each sensor experiment takes approximately one hour, hence the amount of available training data is
limited. We suggest a data-driven classification model which works from few examples. The paper compares
a number of data-driven classification and quantification schemes able to detect the gas and the concentration
level. The data-driven approaches are based on state-of-the-art machine learning methods and the Bayesian
learning paradigm.
The goal of this paper is to compare the performance of an algorithm employing the Integrated Ornstein-Uhlenbeck
process with a genetic algorithm based method for ship track modeling. The positional measurements, received at
irregular time intervals, are assumed to have heteroscedastic and correlated errors and to be available in batches.
The quality of the produced tracks is assessed using several simulated scenarios and evaluated statistically. The
results of this performance evaluation facilitate selecting the appropriate approach to data processing in maritime
surveillance applications, and hence contribute to increased maritime domain awareness.
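For context, the Integrated Ornstein-Uhlenbeck model drives position with a mean-reverting velocity; below is a minimal Euler-Maruyama simulation sketch (parameters are illustrative, and this is not the paper's batch estimator).

```python
import numpy as np

def simulate_iou(theta=0.05, sigma=1.0, dt=1.0, n=500, seed=0):
    """Simulate a 1-D Integrated Ornstein-Uhlenbeck track: velocity
    mean-reverts toward zero with rate theta while position integrates
    velocity, producing smooth, ship-like trajectories."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    v = np.zeros(n)
    for k in range(1, n):
        v[k] = v[k - 1] - theta * v[k - 1] * dt \
               + sigma * np.sqrt(dt) * rng.standard_normal()
        x[k] = x[k - 1] + v[k - 1] * dt
    return x, v

x, v = simulate_iou()
print(x[-1], v.std())
```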
Multisensor Fusion Methodologies and Applications V
Indoor localization is considered a key aspect of future context-aware, ubiquitous, and pervasive systems, while
Wireless Sensor Networks (WSNs) are expected to constitute the critical infrastructure for sensing and interacting
with the surrounding environment. In the context of developing ambient-assisted living and crisis-mitigation
services, we are implementing WAX-ROOM, a WSN specially developed for indoor localization but at the same time
able to sense and interact with the environment. Currently, WAX-ROOM incorporates three different localization
techniques and an optimal fusion rule. The proposed WSN architecture and its advantages, as well as measurement
results regarding its localization accuracy, are presented herein, demonstrating the suitability of the proposed
platform for indoor localization.
A fully functional prototype night vision camera system is described that produces true-color imagery by fusing a
visible/near-infrared (VNIR) color EMCCD camera with the output of a thermal long-wave infrared (LWIR)
microbolometer camera. The fusion is performed in a manner that displays the complementary information from both
sources without destroying the true-color information. The system can run in true-color mode from daylight down to
about quarter-moon conditions; below this light level the system can function in a monochrome VNIR/LWIR fusion mode.
An embedded processor performs the fusion in real time at 30 frames/second and produces both digital and analog
color video outputs. The system can accommodate a variety of modifications to meet specific user needs, and
additional fusion algorithms can be incorporated, making the system a test-bed for real-time fusion technology under
a variety of conditions.
Motivated by biologically inspired architectures for video analysis and object recognition, a new single-band
electro-optical (EO) object detector is described for aerial reconnaissance and surveillance applications. Our
bio-inspired target screener (BiTS) uses a bank of Gabor filters to compute a vector of texture features over a range
of scales and orientations. The filters are designed to exploit the spatial anisotropy of manmade objects relative to
the background. The background, which is assumed to be predominantly natural clutter, is modeled by its global mean
and covariance. The Mahalanobis distance measures deviations from the background model on a pixel-by-pixel basis,
and possible manmade objects occur at peaks in the distance image. We measured the performance of BiTS on a set of
100 ground-truthed images taken under different operating conditions (resolution, sensor geometry, object spacing,
background clutter, etc.) and found its probability of detection (PD) was 12% higher than that of an RX anomaly
detector, with half the number of false alarms at a PD of 80%.
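A minimal sketch of this kind of front end, with a small Gabor bank, a global Gaussian background model, and a per-pixel Mahalanobis distance map; the kernel parameters and bank size are assumptions, not the paper's tuned configuration.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(theta, lam=8.0, sigma=4.0, gamma=0.5, size=21):
    """Real (even) Gabor kernel at orientation theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return (np.exp(-(xr**2 + (gamma * yr)**2) / (2.0 * sigma**2))
            * np.cos(2.0 * np.pi * xr / lam))

def anomaly_map(img, n_orient=4):
    """Per-pixel texture features from a Gabor bank, then squared
    Mahalanobis distance from global background statistics."""
    feats = np.stack([np.abs(fftconvolve(img, gabor_kernel(t), mode='same'))
                      for t in np.linspace(0, np.pi, n_orient, endpoint=False)],
                     axis=-1)
    f = feats.reshape(-1, n_orient)
    mu = f.mean(axis=0)
    cov = np.cov(f, rowvar=False) + 1e-6 * np.eye(n_orient)
    d = f - mu
    m2 = np.einsum('ij,jk,ik->i', d, np.linalg.inv(cov), d)
    return m2.reshape(img.shape)   # peaks suggest manmade structure

img = np.random.default_rng(4).standard_normal((128, 128))
img[60:68, 20:100] += 3.0          # an anisotropic "manmade" structure
print(np.unravel_index(anomaly_map(img).argmax(), img.shape))
```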
The automatic detection and classification of manmade objects in overhead imagery is key to generating
geospatial intelligence (GEOINT) from today's high space-time bandwidth sensors in a timely manner. A
flexible multi-stage object detection and classification capability known as the IMINT Data Conditioner
(IDC) has been developed that can exploit different kinds of imagery using a mission-specific processing
chain. A front-end data reader/tiler converts standard imagery products into a set of tiles, which
facilitates parallel processing on multiprocessor/multithreaded systems. The first stage of processing
contains a suite of object detectors, designed to exploit different sensor modalities, that locate and chip out
candidate object regions. The second processing stage segments object regions, estimates their length, width,
and pose, and determines their geographic location. The third stage classifies detections into one of K
predetermined object classes (specified in a models file) plus clutter. Detections are scored based on their
salience, size/shape, and spatial-spectral properties. Detection reports can be output in a number of popular
formats including flat files, HTML web pages, and KML files for display in Google Maps or Google Earth.
Several examples illustrating the operation and performance of the IDC on Quickbird, GeoEye, and DCS
SAR imagery are presented.
Multi-modal image registration is an especially difficult problem when one of the images contains only a small
region of the other. In this case there is generally a need to search for the alignment parameters over the entire
image; if an associated local optimization scheme is initialized far from the global solution, it usually converges
to a local optimum. This paper presents a multi-scale approach that greatly enlarges the basin of attraction of the
global optimum, and demonstrates how to find the correct solution to the optimization problem efficiently and
reliably by solving a small number of local optimization problems and tracking a small number of likely candidates.
Recent work in computer vision has demonstrated the potential to automatically recover camera and scene geometry
from large collections of uncooperatively collected photos. At the same time, aerial ladar and Geographic Information
System (GIS) data are becoming more readily accessible. In this paper, we present a system for fusing these data
sources in order to transfer 3D and GIS information into outdoor urban imagery. Applying this system to 1000+ pictures
shot of the lower Manhattan skyline and the Statue of Liberty, we present two proof-of-concept examples of geometry-based
photo enhancement which are difficult to perform via conventional image processing: feature annotation and
image-based querying. In these examples, high-level knowledge projects from 3D world-space into georegistered 2D
image planes and/or propagates between different photos. Such automatic capabilities lay the groundwork for future
real-time labeling of imagery shot in complex city environments by mobile smart phones.
Image processing applications typically parallelize well. This gives a developer interested in data throughput
several implementation options, including multiprocessor machines, general-purpose computation on the graphics
processor, and custom gate-array designs. Herein, we investigate the first two options for dictionary learning and
sparse reconstruction, focusing on the K-SVD algorithm for dictionary learning and Batch Orthogonal Matching Pursuit
for sparse reconstruction. These methods have been shown to provide state-of-the-art results for image denoising,
classification, and object recognition. We explore the GPU implementation and show that GPUs are not significantly
better or worse than CPUs for this application.
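For reference, the greedy core of Orthogonal Matching Pursuit, in a plain unbatched sketch: Batch-OMP accelerates the same logic by precomputing the Gram matrix D.T @ D, and the K-SVD dictionary update is not shown.

```python
import numpy as np

def omp(D, x, k):
    """Greedily select k atoms of the column-normalized dictionary D
    to approximate the signal x; returns the sparse coefficient vector."""
    residual = x.copy()
    support = []
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))   # most correlated atom
        support.append(j)
        Ds = D[:, support]
        coef, *_ = np.linalg.lstsq(Ds, x, rcond=None)  # re-fit on support
        residual = x - Ds @ coef
    z = np.zeros(D.shape[1])
    z[support] = coef
    return z

rng = np.random.default_rng(5)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)
x0 = np.zeros(256); x0[[10, 100, 200]] = [1.0, -2.0, 1.5]
print(np.nonzero(omp(D, D @ x0, k=3))[0])   # recovers {10, 100, 200}
```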
Planar surfaces are important characteristics of man-made environments. Planes have many practical applications in
computer vision and computer graphics, including camera calibration and interactive modeling. Here, we develop a
plane detection method for piecewise-planar image pairs taken in urban environments. All potential planes are
detected based on planar homographies estimated with the Levenberg-Marquardt algorithm. In order to extract whole
planes, the normalized cut method is used to segment the original images, and we select the segmented regions that
best fit the features satisfying the planar homographies as the whole planes. We illustrate the algorithm's
performance on gray and color image pairs.
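The homography estimation step can be illustrated with the standard direct linear transform (DLT), which in a pipeline like this would provide the initial estimate that Levenberg-Marquardt then refines; a minimal sketch from four or more point correspondences:

```python
import numpy as np

def homography_dlt(src, dst):
    """DLT estimate of the 3x3 homography mapping src -> dst, each an
    (N, 2) array of corresponding points with N >= 4."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    H = Vt[-1].reshape(3, 3)     # null-space vector of the stacked system
    return H / H[2, 2]

# Synthetic check: project points through a known H and recover it.
H_true = np.array([[1.0, 0.2, 5.0], [-0.1, 0.9, 2.0], [1e-4, 2e-4, 1.0]])
src = np.array([[0, 0], [100, 0], [0, 100], [100, 100], [50, 30]], float)
proj = np.hstack([src, np.ones((5, 1))]) @ H_true.T
dst = proj[:, :2] / proj[:, 2:]
print(np.allclose(homography_dlt(src, dst), H_true, atol=1e-6))   # True
```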
In this paper we present methods for scene understanding, localization, and classification of complex, visually
heterogeneous objects from overhead imagery. Key features of this work include: determining boundaries of objects
within large field-of-view images, classifying increasingly complex object classes through hierarchical
descriptions, and exploiting automatically extracted hypotheses about the surrounding region to improve
classification of a more localized region. Our system uses a principled probabilistic approach to classify
increasingly large and complex regions, and then iteratively uses this automatically determined contextual
information to reduce false alarms and misclassifications.
The conspicuity of different targets in image sequences taken by approaching sensors is of interest in applications
such as the assessment of camouflage effectiveness or the performance evaluation of autonomous systems. In such
evaluations, the consideration of background characteristics is essential due to the propensity to confuse target and
background signatures. Several discriminating features of target and background signatures can be derived.
Furthermore, the changing aspect and spatial resolution during an approach to a target have to be taken into account.
Considering salient points in image sequences, we perform a nominal/actual value comparison by evaluating the
receiver operating characteristic (ROC) curve for the detections in each image. To this end, reference regions for
targets and backgrounds are provided for the entire image sequence by means of robust image registration. Considering
the uncertainty in the temporal progression of the ROC curve enables hypothesis testing for well-founded statements
about the significance of the determined distinctiveness of targets with respect to their background. The approach is
neither restricted to images taken by IR sensors nor limited to low-level image analysis steps, but can be considered
a general method for the assessment of feature evaluation and target distinctiveness.
The proposed analysis method facilitates an objective comparison of object appearance with both its relevant
background and other targets, using different image analysis features. The feasibility and usefulness of the approach
are demonstrated with real data recorded with a FLIR sensor during a field trial on bare and mock-up targets.
Real-time Unmanned Aerial Vehicle (UAV) image registration is achieved by stimulating one eye
with a live video image from a flying UAV while stimulating the other eye with calculated images. The
calculated image is initialized by telemetry signals from the UAV and corrected using the Perspective View
Nascent Technology (PVNT) software package's model-image feedback algorithm. Live and registered
calculated images are superimposed, allowing command functions including target geo-location, UAV
sensor slewing, tracking, and waypoint flight control. When the same equipment is used with the naked
eye, the forward observer function can be implemented to produce accurate target coordinates.
The paper then discusses UAV mission control and forward observer target tracking
experiments conducted at Camp Roberts, California.
Human motion in visual and long-wave infrared video imagery is investigated. A simple moving-target
tracker is used to segment out the subject of interest in a video sequence. Pixel-level changes of the subject's
size and position within the image are then used to form a pair of signals, and standard signal processing
techniques are applied to find features of interest. The long-term goals of this work are to find a means of
associating tracks of humans in optical video data with poorly resolved micro-Doppler RF signals, and to
determine intent.
Holger Jaenisch, James Handley, Nathaniel Albritton, John Koegler, Steven Murray, Willie Maddox, Stephen Moren, Tom Alexander, William Fieselman, et al.
We present a method for deriving an automatic target recognition (ATR) system using geospatial features and a Data
Model populated decision architecture in the form of a self-organizing knowledge base. The goal is to derive an ATR
that recognizes targets it has seen before while minimizing false alarms, ideally to zero. We present an investigation
of the performance of analytical Data Models as a sensor and data fusion process for ATR, and summarize results,
including a 2 km background run in which no false alarms were encountered.
In this paper, we describe COGNIVA, a closed-loop Cognitive-Neural method and system for image and video analysis
that combines recent technological breakthroughs in bio-vision cognitive algorithms and neural signatures of human
visual processing. COGNIVA is an "operational neuroscience" framework for intelligent and rapid search and
categorization of Items Of Interest (IOI) in imagery and video. The IOI could be a single object, a group of objects,
specific image regions, a specific spatio-temporal pattern or sequence, or even the category that the image itself belongs to
(e.g., vehicle or non-vehicle). There are two main types of approaches for rapid search and categorization of IOI in
imagery and video. The first uses conventional machine vision or bio-inspired cognitive algorithms; these
usually need a predefined set of IOI and suffer from high false alarm rates. The second class of algorithms is based on
neural signatures of target detection; these usually break the entire image into sub-images, present them to a human
observer, and classify them based on the observer's EEG responses. This approach may also suffer from high false alarms
and is slow because the entire image is chipped and presented to the observer. COGNIVA overcomes the
limitations of both methods by combining them, resulting in a low false alarm rate and high detection with high
throughput, making it applicable to both image and video analysis. In its most basic form, COGNIVA first uses
bio-inspired cognitive algorithms to identify potential IOI in a sequence of images or video. These potential IOI are then
shown to a human, and neural signatures of visual detection of IOI are collected and processed. The resulting signatures
are used to categorize and provide the final IOI. We present the concept and typical results of COGNIVA for detecting
Items Of Interest in image data.
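The two-stage cascade can be caricatured in a few lines of Python. The sketch below is purely conceptual: `cognitive_saliency` and `classify_eeg_epoch` are hypothetical placeholders standing in for the bio-inspired front end and the EEG-based neural-signature stage, not COGNIVA's actual components.

```python
import numpy as np

def cognitive_saliency(chip):
    """Hypothetical stand-in for the bio-inspired cognitive front end:
    scores how likely a chip is to contain an Item Of Interest."""
    return float(chip.mean())  # placeholder heuristic, not the real algorithm

def classify_eeg_epoch(chip):
    """Hypothetical stand-in for the neural-signature stage: in the real
    system the chip would be flashed to an observer and the EEG epoch
    classified for a target-detection response."""
    return np.random.rand() > 0.5  # placeholder

def cogniva_like_triage(chips, k=10):
    """Two-stage cascade: cognitive algorithms nominate candidate IOI so
    that only the top-k chips (not the whole image) reach the human/EEG
    stage, which is what buys throughput and cuts false alarms."""
    ranked = sorted(chips, key=cognitive_saliency, reverse=True)[:k]
    return [c for c in ranked if classify_eeg_epoch(c)]

chips = [np.random.rand(64, 64) for _ in range(500)]  # image broken into chips
final_ioi = cogniva_like_triage(chips)
print(f"{len(final_ioi)} chips confirmed as IOI")
```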
In this conference last year, we proposed free-space gratings, Fizeau interferometers, and wavefront estimation for
detecting the different lasers deployed in the battlefield for range finding, target designation, communications,
dazzle, location of targets, munitions guidance, and destruction. Since last year, advanced laser weapons of the
electron-cyclotron type, such as the free-electron laser, have been in development; these are tunable and, unlike
conventional bound-electron-state lasers, can be used at any wavelength from microwaves to soft X-rays. We list the
characteristics of the nine dominant laser weapons because we assume that free-electron lasers will initially
use one of the current threat wavelengths, owing to the availability of components and instrumentation. In this
paper we replace the free-space grating with a higher-performing arrayed waveguide grating (AWG) integrated-optic chip,
similar to those used in telecommunications, because integrated circuits are more robust and less expensive. The chip
consists of a star coupler that fans the input out among waveguides of different lengths, followed by a second star
coupler that focuses different wavelengths onto different outputs in order to separate them. Design equations are
derived to cover a range of frequencies at a specific frequency spacing relevant to this application.
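While the paper derives its own design equations, the first-order behavior of an arrayed waveguide grating follows standard textbook relations, sketched below in Python; every parameter value here is an assumption chosen for illustration, not taken from the paper.

```python
# First-order AWG design relations (standard textbook formulas).
C = 299_792_458.0                 # speed of light, m/s

lam_c = 1064e-9                   # assumed center wavelength, m
n_c = 1.45                        # assumed effective index of array waveguides
n_g = 1.47                        # assumed group index
m = 30                            # assumed grating (diffraction) order

# Path-length increment between adjacent array waveguides: n_c * dL = m * lam_c
dL = m * lam_c / n_c

# Free spectral range, in wavelength and in frequency
fsr_lam = lam_c**2 / (n_g * dL)
fsr_nu = C / (n_g * dL)

# Number of channels that fit for an assumed channel (frequency) spacing
dnu = 100e9                       # assumed 100 GHz spacing
n_channels = int(fsr_nu // dnu)

print(f"dL = {dL * 1e6:.2f} um, FSR = {fsr_lam * 1e9:.2f} nm "
      f"({fsr_nu / 1e12:.2f} THz), {n_channels} channels at 100 GHz")
```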
We consider the problem of through-the-wall radar imaging when no a priori knowledge of the image
statistics is available. The approach presented in this paper allows for automatic target detection. It integrates
independent component analysis (to reduce clutter and noise artifacts), spatial features, and histogram-based
thresholding. The proposed detection schemes are evaluated using both real and synthesized data.
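The paper does not specify which histogram-based thresholding scheme it uses; the Python sketch below illustrates one common choice, Otsu's method, on a synthetic image with an embedded target. All data and parameters are illustrative assumptions.

```python
import numpy as np

def otsu_threshold(img, bins=256):
    """One common histogram-based threshold (Otsu): pick the cut that
    maximizes between-class variance."""
    hist, edges = np.histogram(img.ravel(), bins=bins)
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                       # class-0 probability up to each cut
    mu = np.cumsum(p * centers)             # cumulative mean
    mu_t = mu[-1]
    # Between-class variance; empty classes yield nan and are ignored
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1 - w0))
    return centers[np.nanargmax(sigma_b)]

# Illustrative use on a synthetic image: bright target on clutter
rng = np.random.default_rng(1)
img = rng.rayleigh(0.5, (128, 128))         # clutter/noise background
img[60:68, 60:68] += 3.0                    # embedded target response
mask = img > otsu_threshold(img)
print(f"{mask.sum()} pixels declared target")
```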
We consider the problem of waveform design for multiple-input multiple-output (MIMO) radar systems employing the
generalized detector constructed from the generalized approach to signal processing in noise. We investigate
the case of an extended target without limiting ourselves to orthogonal waveforms. Instead, we develop a procedure
to design the optimal waveform that maximizes the signal-to-interference-plus-noise ratio (SINR) at the generalized
detector output. The optimal waveform requires knowledge of both target and clutter statistics. We also develop several
suboptimal waveforms requiring knowledge of target statistics only, clutter statistics only, or both. Thus, the transmit
waveforms are adjusted based on target and clutter statistics. A model for the radar returns that incorporates the transmit
waveforms is developed, and the target detection problem is formulated for that model. Optimal and suboptimal algorithms
are derived for designing the transmit waveforms under different assumptions about the statistical information available
to the generalized detector. The performance of these algorithms is illustrated by computer simulation.
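As a rough illustration of SINR-maximizing waveform design (a common simplification, not the paper's generalized-detector derivation), the sketch below solves the generalized eigenproblem for a stochastic extended target with signal-independent clutter-plus-noise; all covariances are randomly generated placeholders.

```python
import numpy as np
from scipy.linalg import eigh

# For a stochastic extended target with covariance R_t and signal-independent
# interference covariance R_i, the SINR-maximizing transmit waveform under a
# unit-energy constraint is the dominant generalized eigenvector of (R_t, R_i).
rng = np.random.default_rng(2)
n = 32                                    # waveform length (assumed)

def random_psd(n):
    """Random Hermitian positive-definite matrix as a covariance stand-in."""
    a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return a @ a.conj().T / n + 1e-3 * np.eye(n)

R_t = random_psd(n)                       # target (extended-scatterer) covariance
R_i = random_psd(n)                       # clutter-plus-noise covariance

# Solve R_t v = lambda R_i v; eigh returns eigenvalues in ascending order
w, v = eigh(R_t, R_i)
s = v[:, -1] / np.linalg.norm(v[:, -1])   # dominant eigenvector, unit energy

sinr = np.real(s.conj() @ R_t @ s) / np.real(s.conj() @ R_i @ s)
print(f"output SINR of designed waveform: {sinr:.2f} "
      f"(= top generalized eigenvalue {w[-1]:.2f})")
```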
An analysis was performed, using MODTRAN, to determine the best filters for detecting the
muzzle flash of an AK-47 in daylight conditions in the desert. Filters with bandwidths of 0.05, 0.1,
0.5, 1.0, 3.0, and 5.0 nanometers (nm) were analyzed to understand how the optical bandwidth affects
the signal-to-solar-clutter ratio. These filters were evaluated near the potassium D1 and D2 doublet
emission lines at 769.89 nm and 766.49 nm, respectively, which are observed where projectile
propellants are used. The maximum spectral radiance from the AK-47 muzzle flash is
1.88 × 10⁻² W/cm²·sr·µm, approximately equal to the daytime atmospheric spectral radiance. The
increased emission due to the potassium doublet lines and the decreased atmospheric transmission due to
oxygen absorption combine to create a condition where the signal-to-solar-clutter ratio is greater than
1. The 3 nm filter, with a signal-to-solar-clutter ratio of 2.09 when centered at 765.37 nm, provides
the best combination of cost and signal sensitivity.
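The band-integration logic behind a signal-to-solar-clutter ratio can be sketched as follows. The spectra here are crude synthetic stand-ins, not MODTRAN outputs, and all shapes and amplitudes are assumptions; only the line positions and the list of bandwidths come from the text above.

```python
import numpy as np

wl = np.linspace(760.0, 775.0, 3000)             # wavelength grid, nm

def gauss(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

# Flash: continuum plus potassium D1/D2 lines at 769.89 / 766.49 nm (toy shapes)
flash = 0.2 + 1.0 * gauss(wl, 769.89, 0.1) + 1.2 * gauss(wl, 766.49, 0.1)
# Solar background suppressed near 765 nm by oxygen absorption (toy dip)
solar = 1.0 - 0.8 * gauss(wl, 765.0, 2.0)

def band_scr(center, width):
    """Signal-to-clutter ratio for an ideal rectangular filter."""
    band = (wl > center - width / 2) & (wl < center + width / 2)
    return np.trapz(flash[band], wl[band]) / np.trapz(solar[band], wl[band])

for bw in (0.05, 0.1, 0.5, 1.0, 3.0, 5.0):
    print(f"{bw:4.2f} nm filter at 765.37 nm: SCR = {band_scr(765.37, bw):.2f}")
```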
There are advantages to using the motion vectors obtained from MPEG video coding to identify targets of interest
in the field. In practice, however, environmental noise, time-varying conditions, and uncertainty factors affect how
reliably and accurately targets of interest can be detected. In this paper, we propose a novel low-rank filtering
method based on the L1 norm in order to suppress single rogue vectors or outliers, which show up fairly often. Finally,
a simple average smoothing filter is applied to reduce vector quantization noise. Using L1-norm-based low-rank
filtering, the dominant motion vectors from MPEG video coding can be extracted appropriately with respect to target
operational responses and used for robust identification of moving targets. The performance of the proposed
approach was evaluated on a set of experimental camera motions. The motions, including pan, tilt, and zoom, were
computed from the motion vectors, and the residual vectors not described by the camera motion are regarded
as generated by moving blobs; events can then be detected from these moving blobs. It is demonstrated that the
approach yields very promising results, in that motion vectors obtained from MPEG video coding can be used
efficiently to detect and identify moving targets in the field.
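The camera-motion/residual split can be illustrated with a least-squares pan-tilt-zoom fit. Note that this Python sketch substitutes ordinary least squares for the paper's L1-norm low-rank filtering, and all motion-vector data are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Assumed decoder output: block centers (x, y) relative to the image center,
# with one motion vector (u, v) per block.
xy = rng.uniform(-160, 160, (400, 2))
pan_tilt, zoom = np.array([3.0, -1.0]), 0.02
mv = pan_tilt + zoom * xy + rng.normal(0, 0.2, xy.shape)  # camera motion + noise
mv[:20] += np.array([8.0, 5.0])                           # a moving object's blocks

# Model mv = [pan, tilt] + zoom * xy: linear in the 3 unknowns (pan, tilt, zoom)
n = len(xy)
Ax = np.column_stack([np.ones(n), np.zeros(n), xy[:, 0]]) # u = pan + zoom * x
Ay = np.column_stack([np.zeros(n), np.ones(n), xy[:, 1]]) # v = tilt + zoom * y
A = np.vstack([Ax, Ay])
b = np.concatenate([mv[:, 0], mv[:, 1]])
pan, tilt, z = np.linalg.lstsq(A, b, rcond=None)[0]

# Vectors not explained by camera motion are moving-blob candidates
residual = mv - (np.array([pan, tilt]) + z * xy)
blobs = np.linalg.norm(residual, axis=1) > 3.0            # assumed threshold
print(f"pan={pan:.2f}, tilt={tilt:.2f}, zoom={z:.3f}; "
      f"{blobs.sum()} blocks flagged as moving blobs")
```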
Removing noise in real time has become a high priority when analyzing data corrupted by additive noise; it is a major
problem in applications such as speech, image processing, and real-time multimedia services. Although
considerable interest has arisen in recent years in wavelets as a new transform technique for many applications,
the linear adaptive decomposition transform (LDT) has yielded results superior to the discrete wavelet transform (DWT),
not only by using fewer decomposition levels but also by achieving a smaller percentage normalized
approximation error in the reconstructed signal. In this paper, a novel noise reduction method is introduced, based on
a modified noncausal smoothing filter and low-rank approximation under the minimum sum of error magnitudes criterion
(i.e., the l1 norm), that distinguishes itself from these other methods. The performance of the proposed approach was
evaluated on one-dimensional data sets as well as speech samples. It is demonstrated that the approach yields very
promising results on the Donoho and Johnstone test signals as well as on speech signals.
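The DWT baseline against which LDT is compared can be reproduced in a few lines with PyWavelets. The sketch below applies standard universal soft thresholding to a piecewise-constant "Blocks"-like test signal (synthesized here rather than taken from Donoho and Johnstone's suite); LDT itself is not sketched.

```python
import numpy as np
import pywt

rng = np.random.default_rng(4)
n = 2048
x = np.zeros(n)
for pos, jump in [(200, 4.0), (600, -3.0), (1100, 5.0), (1600, -6.0)]:
    x[pos:] += jump                                   # Blocks-style jumps
y = x + rng.normal(0, 1.0, n)                         # additive noise

coeffs = pywt.wavedec(y, "db4", level=5)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise estimate, finest scale
thr = sigma * np.sqrt(2 * np.log(n))                  # universal threshold
den = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
x_hat = pywt.waverec(den, "db4")[:n]

err = np.linalg.norm(x_hat - x) / np.linalg.norm(x)   # normalized approx. error
print(f"percentage normalized error: {100 * err:.1f}%")
```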
A proper conflict measure is a prerequisite for properly handling the conflict between pieces of evidence. Existing
conflict measures do not consider the potential conflict, which cannot be neglected in many cases. After analyzing the
potential conflict proposed qualitatively by Daniel [15], this paper first defines the local potential conflict
quantitatively; the local conflicting mass is a special case of local potential conflict. The concept of generalized
evidence conflict is then proposed: the sum of all the local potential conflicts and the local conflicting masses.
Generalized evidence conflict is more comprehensive and can measure the conflict among more than two pieces of
evidence. When there is no potential conflict, it reduces to Shafer's conflicting mass.
Numerical simulations demonstrate the rationality of this measure.
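For reference, the classical quantity that the generalized measure reduces to, Shafer's conflicting mass K, can be computed directly from two mass functions. The frame and mass assignments in this Python sketch are illustrative, not taken from the paper.

```python
from itertools import product

def shafer_conflict(m1, m2):
    """Classical conflicting mass K: the total mass assigned by the two
    sources to pairs of focal elements with empty intersection."""
    return sum(a_mass * b_mass
               for (a, a_mass), (b, b_mass) in product(m1.items(), m2.items())
               if not (a & b))

# Mass functions on the frame {x, y, z}; focal elements are frozensets
m1 = {frozenset("x"): 0.6, frozenset("yz"): 0.4}
m2 = {frozenset("y"): 0.7, frozenset("xz"): 0.3}
print(f"Shafer conflicting mass K = {shafer_conflict(m1, m2):.2f}")  # -> 0.42
```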