This PDF file contains the front matter associated with SPIE Proceedings Volume 9091, including the Title Page, Copyright information, Table of Contents, Introduction, Invited Panel Discussion, and Conference Committee listing.
Multisensor Fusion, Multitarget Tracking, and Resource Management I
In this paper, data obtained from a wireless unattended ground sensor network are used to track multiple ground targets (vehicles, pedestrians, and animals) moving on and off the road network. The goal of the study is to evaluate several data fusion algorithms and select the best approach for establishing tactical situational awareness. The ground sensor network is composed of heterogeneous sensors (optronic, radar, seismic, acoustic, and magnetic) and data fusion nodes. The fusion nodes are small hardware platforms placed in the surveillance area that communicate with one another. To satisfy operational needs and the limited communication bandwidth between the nodes, we study several data fusion algorithms to track and classify targets in real time. A multiple target tracking (MTT) algorithm is integrated in each data fusion node, taking embedded constraints into account; the choice of MTT algorithm is motivated by the limits of the chosen technology. In the fusion nodes, the distributed MTT algorithm exploits road network information to constrain the multiple dynamic models, so a variable structure interacting multiple model (VS-IMM) estimator is adapted to the road network topology. This algorithm is well known in centralized architectures, but it requires modifying the other data fusion algorithms to preserve tracking performance under constraints. Based on this VS-IMM MTT algorithm, we adapt classical data fusion techniques to make them work in three architectures: centralized, distributed, and hierarchical. The sensor measurements are considered asynchronous, but the fusion steps are synchronized across all sensors. The performance of the data fusion algorithms is evaluated using simulated data and also validated on real data. The scenarios under analysis contain multiple targets with close and crossing trajectories, involving data association uncertainties.
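At the core of the VS-IMM machinery described in this abstract is the standard IMM model-probability update, which can be sketched as follows. The two-model set (on-road/off-road), transition matrix, and likelihood values below are illustrative assumptions, not figures from the paper.

```python
import numpy as np

def imm_model_probs(mu, trans, likelihoods):
    """One IMM cycle for the model probabilities.

    mu          -- prior model probabilities, shape (M,)
    trans       -- Markov model-transition matrix, trans[i, j] = P(model j | model i)
    likelihoods -- measurement likelihood under each model's filter, shape (M,)
    """
    predicted = trans.T @ mu              # predicted probability of each model
    posterior = likelihoods * predicted   # Bayes update with the filter likelihoods
    return posterior / posterior.sum()

# Two models: "on-road" (road-segment constrained) vs. "off-road" (free motion).
trans = np.array([[0.95, 0.05],
                  [0.10, 0.90]])
mu = np.array([0.5, 0.5])
# Here the on-road filter explains the measurement far better.
mu = imm_model_probs(mu, trans, np.array([0.9, 0.1]))
```

In a VS-IMM, the model set itself (and hence `trans`) would vary with the road topology near the track; the probability update step stays the same.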
Over the last two decades, many solutions have arisen to combine target tracking estimation with classification methods. Target tracking includes developments from linear to non-linear and from Gaussian to non-Gaussian processing. Pattern recognition includes detection, classification, recognition, and identification methods. Integrating tracking and pattern recognition has resulted in numerous approaches, and this paper seeks to organize them. We discuss the terminology so as to have a common framework for various standards such as NATO STANAG 4162, Identification Data Combining Process. In a use case, we provide a comparative example highlighting that location information (as an example), together with additional mission objectives from geographical, human, social, cultural, and behavioral modeling, is needed to determine identification, since classification alone does not establish identification or intent.
Moving emitters in dense environments present challenges for conventional, single-INT, SIGINT-based location estimation. The emergence of wide field of view, high resolution, persistent EO imaging from airborne sensors introduces the possibility of a multi-INT approach via video-SIGINT data fusion. Video-based object extraction techniques can identify moving objects with very high spatial precision, precision that can be leveraged to locate moving emitters if a means of associating extracted movers with SIGINT observations can be demonstrated. To examine the feasibility of improving SIGINT location estimates in this manner, we conducted a simulation study in which we correlated simulated video tracks and SIGINT observations. In this study, we generated simulated vehicle movement over a road network under varying levels of mover density. Simulated SIGINT was generated via a conventional multi-collector location estimation approach under varying levels of SIGINT processing noise. Association of the simulated SIGINT to the video tracks was performed via a fusion algorithm that used a physical model to re-process the SIGINT observables under constraints derived from the video tracks. Our results suggest that with only a few SIGINT observations from a given moving emitter, the associated mover can be identified at a low error rate, even under levels of processing noise that would yield extremely high location estimate uncertainty, suggesting the potential utility of our approach.
Occlusions can degrade object tracking performance in sensor imaging systems. This paper describes a robust approach to object tracking that fuses video frames with RF data in a Bayes-optimal way to overcome occlusion. We fuse data from these heterogeneous sensors and show how our approach enables tracking when each modality cannot track individually. We provide the mathematical framework for our approach, details about sensor operation, and a description of a multisensor detection and tracking experiment that fuses real collected image data with radar data. Finally, we illustrate two benefits of fusion: improved track hold during occlusion and diminished error.
Multisensor Fusion, Multitarget Tracking, and Resource Management II
Long-term tracking is important for maritime situational awareness, in order to recognize whether a currently observed ship has been encountered before. In cases of, for example, piracy and smuggling, analysis of past locations and behavior is useful to determine whether a ship is of interest. Furthermore, it is beneficial to make this assessment with sensors (such as cameras) at a distance, to avoid the cost of bringing one's own asset closer to the ship for verification. The emphasis of the research presented in this paper is on the use of several feature extraction and matching methods for recognizing ships from electro-optical imagery within different categories of vessels. We compared central moments, SIFT with localization, and SIFT with Fisher vectors. From the evaluation on imagery of ships, an indication of discriminative power is obtained between and within different categories of ships. This is used to assess the usefulness in persistent tracking, from short intervals (track improvement) to larger intervals (re-identifying ships). The result of this assessment on real data is used in a simulation environment to determine how track continuity is improved. The simulations showed that even limited recognition will improve tracking, connecting tracks both at short intervals and over several days.
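As a rough illustration of the central-moment features compared in this abstract, the following sketch computes normalized central moments of a binary ship silhouette with NumPy. The function and the uniform test image are illustrative, not the paper's implementation.

```python
import numpy as np

def central_moments(img, orders=((2, 0), (0, 2), (1, 1))):
    """Normalized central moments of a (grayscale or binary) silhouette image."""
    ys, xs = np.nonzero(img)
    w = img[ys, xs].astype(float)
    m00 = w.sum()
    xbar = np.average(xs, weights=w)
    ybar = np.average(ys, weights=w)
    # Centering on (xbar, ybar) removes dependence on position; dividing by
    # m00^(1 + (p+q)/2) normalizes for object size (in the continuous limit).
    return [np.sum(w * (xs - xbar) ** p * (ys - ybar) ** q) / m00 ** (1 + (p + q) / 2)
            for p, q in orders]

# A symmetric blob: mu20 == mu02 and mu11 vanishes.
feats = central_moments(np.ones((4, 4)))
```

Such moment vectors give a cheap, rotation-sensitive shape signature; matching would then compare the vectors of two detections, e.g. by Euclidean distance.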
We introduce a detection and tracking algorithm for panoramic imaging systems intended for operations in high-clutter environments. The algorithm combines correlation- and model-based tracking in a manner that is robust to occluding objects but without the need for a separate collision prediction module. Large data rates associated with the panoramic imager necessitate the use of parallel computation on graphics processing units. We discuss the queuing and tracking algorithms as well as practical considerations required for real-time implementation.
The magnetic signal from a ferromagnetic object at a large distance can be modelled as that of a magnetic dipole. In many applications, the inverse problem of estimating the position, velocity and magnetic moment vector is of interest. Given the relationship between these parameters and the magnetic signature, this can be formulated as a Bayesian state estimation problem. In this paper, the extended Kalman filter is used to estimate the kinematic states as well as the dipole moment, given the total field measurements.
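The forward model behind this formulation can be sketched as follows: the point-dipole field and the scalar total-field measurement h(x) that an extended Kalman filter would linearize. The state layout [position, velocity, moment] is an assumption for illustration.

```python
import numpy as np

MU0_OVER_4PI = 1e-7  # SI units, T·m/A

def dipole_field(r, m):
    """Flux density of a point dipole with moment m at sensor offset r (meters)."""
    rn = np.linalg.norm(r)
    rhat = r / rn
    return MU0_OVER_4PI * (3.0 * np.dot(m, rhat) * rhat - m) / rn**3

def total_field(x, sensor_pos):
    """Scalar measurement model h(x) for the EKF.

    Assumed state layout: x = [px, py, pz, vx, vy, vz, mx, my, mz]."""
    r = sensor_pos - x[:3]
    return np.linalg.norm(dipole_field(r, x[6:9]))
```

The EKF would evaluate the Jacobian of `total_field` with respect to `x` (analytically or by finite differences) at each measurement update; a constant-velocity model would propagate the kinematic states between updates.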
Information Fusion Methodologies and Applications I
The “clutter-agnostic” CPHD filter was introduced at the 2010 SPIE Defense, Security and Sensing Symposium and has been investigated in subsequent papers. This “κ-CPHD filter” is capable of multitarget detection and tracking in unknown, dynamically changing clutter backgrounds. It is also capable of estimating the entire intensity function of the clutter process. The purpose of this paper is to introduce a generalization of this filter. The generalized κ-CPHD filter has the following two major improvements: (1) a formula for the probability distribution of the number of targets (target cardinality distribution); and (2) a formula for the probability distribution of the number of clutter measurements (clutter cardinality distribution). More generally, the entire probability distribution of the clutter process can be estimated.
The “background-agnostic” CPHD filter was introduced at the 2010 SPIE Defense, Security and Sensing Symposium. It is a CPHD filter that is capable of operation when both the clutter background and the target-detection profile are unknown and dynamically changing. These CPHD filters are also capable of on-the-fly estimation of the intensity function and cardinality distribution of the clutter process. Leveraging ideas of Ristic, Clark, and Vo, this paper describes a generalization of the background-agnostic CPHD filter that can also estimate the intensity function and cardinality distribution of the target-appearance process. The probability distributions of the clutter and target-birth processes can also be estimated.
We propose, for the super-positional sensor scenario, a hybrid between the multi-Bernoulli filter and the cardinalized probability hypothesis density (CPHD) filter. We use a multi-Bernoulli random finite set (RFS) to model existing targets, and we use an independent and identically distributed cluster (IIDC) RFS to model newborn targets and targets with low probability of existence. Our main contributions are providing the update equations of the hybrid filter and identifying computationally tractable approximations. We achieve this by defining conditional probability hypothesis densities (PHDs), where the conditioning is on one of the targets having a specified state. The filter performs an approximate Bayes update of the conditional PHDs. In parallel, we perform a cardinality update of the IIDC RFS component in order to estimate the number of newborn targets. We provide an auxiliary particle filter based implementation of the proposed filter and compare it with the CPHD and multi-Bernoulli filters in a simulated multitarget tracking application.
This paper examines the limits of performance for an ensemble of cooperating, mobile sensing agents executing an undersea surveillance mission. The objective of the multi-agent ensemble is to minimize uncertainty concerning the presence and location of targets as the multi-target system evolves over time. Each agent is capable of sensing, communicating with other agents, processing data to infer states of interest (fusion), and deciding on and executing motion commands. Each agent continually executes a perception-action cycle in which it fuses information to determine its best estimate of the multi-target system state and decides on its next (and possibly future) motion action(s) to optimize a criterion related to its entropic state (quantification of information gain or loss).
Each agent's perception of the states of interest is derived from measurements captured by its own sensor(s) and information communicated by other agents. Each agent's decisions are based on its estimates of the multi-target system state, its entropic state, and its predictions of peer agent actions. The multi-agent cooperative decision making can be modeled as a cyclic optimization whereby the joint decision vector is optimized by sequentially optimizing each individual agent's decision vector while holding the others fixed. Moreover, the problem is a cyclic stochastic optimization (CSO) whereby only noisy measurements of the objective function are available to each agent. Preliminary theoretical results have recently emerged regarding convergence conditions and sub-optimality for CSO. This paper examines the implications and applicability of CSO convergence and sub-optimality via simulation-based experiments in the context of a cooperating multi-agent ensemble of undersea sensing agents searching a region for new targets and maintaining track on all discovered targets. Simulation results indicate that the theoretical results provide useful guidance on predicting the empirically observed limits of performance of the multi-agent ensemble.
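A toy version of the cyclic stochastic optimization scheme can be sketched as follows: each agent's scalar decision is optimized in turn, holding the others fixed, using only noisy evaluations of a joint objective. The quadratic objective, noise level, and step sizes below are illustrative assumptions, not the paper's mission model.

```python
import numpy as np

rng = np.random.default_rng(0)
IDEAL = np.array([2.0, -1.0, 0.5])  # illustrative optimal sensing decisions

def noisy_objective(x):
    """Joint ensemble objective observed only through noise (e.g. measured entropy)."""
    return np.sum((x - IDEAL) ** 2) + rng.normal(scale=0.01)

def cyclic_stochastic_opt(x0, n_cycles=50, step=0.05, fd=1e-2):
    """One decision variable per agent, optimized in turn while the rest are
    held fixed, using a two-sided finite difference of the noisy objective."""
    x = np.array(x0, dtype=float)
    for _ in range(n_cycles):
        for i in range(len(x)):           # one cycle = one pass over the agents
            e = np.zeros_like(x)
            e[i] = fd
            grad_i = (noisy_objective(x + e) - noisy_objective(x - e)) / (2 * fd)
            x[i] -= step * grad_i
    return x

x = cyclic_stochastic_opt(np.zeros(3))
```

The measurement noise keeps the iterate from settling exactly at the optimum; the residual scatter around `IDEAL` is the kind of performance limit the CSO theory characterizes.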
Fusion of imaging data with auxiliary signals such as EW data for multitarget classification poses daunting theoretical and practical challenges. The problem is exacerbated by issues such as asynchronous data flow, uneven feature quality, and object occlusion. In our approach, we assign prior probabilities to image and signal feature elements to handle these practical issues in a unified manner. Current state and class probability distributions estimated from previous instances are fused with new outputs from individual classifiers immediately after the outputs become available, to establish updated state and class probability distributions in a Bayesian framework. Results are presented that demonstrate joint segmentation and tracking, target classification using imaging data, and fusion of imaging data with noisy and asynchronous auxiliary EW information under realistic simulation scenarios.
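The class-probability part of such a Bayesian update can be sketched, under an assumed conditional-independence model, as a recursive product of classifier outputs with the current class distribution. The two-class prior and likelihood values are illustrative, not taken from the paper.

```python
import numpy as np

def fuse_class_probs(prior, classifier_outputs):
    """Fold each classifier output into the class distribution as it arrives,
    assuming outputs are conditionally independent given the true class."""
    post = np.asarray(prior, dtype=float)
    for lik in classifier_outputs:
        post = post * np.asarray(lik, dtype=float)  # Bayes numerator
        post /= post.sum()                          # renormalize
    return post

# Two classifier reports, each favoring class 0, arriving at different times.
post = fuse_class_probs([0.5, 0.5], [[0.8, 0.2], [0.8, 0.2]])
```

Because the update is a running product, asynchronous arrivals pose no difficulty: each report is folded in whenever it becomes available.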
Mitigation of possible collision threats to current and future operations in space environments is an important and challenging task, considering the high nonlinearity of orbital dynamics and discrete measurement updates. Such discrete observations are relatively scarce with respect to space dynamics, including possible unintentional or intentional rocket-propulsion-based maneuvers, even in scenarios where measurement collections are focused on a single target of interest. In our paper, this problem is addressed in terms of multihypothesis and multimodel estimation in conjunction with multi-agent, multigoal, game-theoretic guaranteed evasion strategies. Collision threat estimation is formulated using conditional probabilities of time-dependent hypotheses and spacecraft controls, which are computed using a Lyapunov-like approach. Based on this formulation, time-dependent functional forms of multi-objective utility functions are derived given collision threat risk levels. To demonstrate the developed concepts, numerical methods are developed using nonlinear filtering methodology for updating hypothesis sets and the corresponding conditional probabilities. Space platform sensor resources are managed using previously developed and demonstrated information-theoretic objective functions and optimization methods. Finally, the estimation and numerical methods are evaluated and demonstrated on a realistic Low Earth Orbit collision encounter.
Information Fusion Methodologies and Applications II
We describe many new ideas for research in particle flow corresponding to Bayes’ rule. Some of these ideas were inspired by quantum field theory and classical electromagnetism whereas others were taken from mathematics or signal processing. For example, we discuss renormalization group flow, Ricci flow, Yang-Mills equations, the Dirac approximation, Kronecker product expansions to estimate the covariance matrix, charge quantization, etc. Three new algorithms for particle flow inspired by renormalization group flow are derived in detail.
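For context, the pre-existing exact particle flow for a scalar linear-Gaussian update (the Daum-Huang flow dx/dλ = A(λ)x + b(λ)) can be sketched as follows; the renormalization-group-inspired flows derived in the paper are not reproduced here, and the numeric values are illustrative.

```python
import numpy as np

def exact_flow(particles, xbar, P, h, R, z, n_steps=1000):
    """Migrate prior particles to the posterior by Euler-integrating
    dx/dlam = A(lam) * x + b(lam) from lam = 0 to lam = 1 (scalar case)."""
    x = np.array(particles, dtype=float)
    dlam = 1.0 / n_steps
    for k in range(n_steps):
        lam = k * dlam
        A = -0.5 * P * h * h / (lam * h * P * h + R)
        b = (1.0 + 2.0 * lam * A) * ((1.0 + lam * A) * P * h * z / R + A * xbar)
        x += dlam * (A * x + b)
    return x

# Prior N(0, 1), measurement z = x + v with v ~ N(0, 1), observed z = 2;
# the Kalman posterior is N(1, 0.5), so the particle at the prior mean
# should flow to 1 and the one-sigma particle to 1 + sqrt(0.5).
flowed = exact_flow([0.0, 1.0], xbar=0.0, P=1.0, h=1.0, R=1.0, z=2.0)
```

The flow moves each particle smoothly as the likelihood is homotopy-blended into the prior, avoiding the weight degeneracy of importance-sampling updates.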
Supersymmetry (SUSY), or Bose-Fermi symmetry, is an attempt to provide a unified description of all of the fundamental interactions. Although originally introduced in the quantum field theory context, it was noted by Witten that certain features of SUSY are simpler to understand in the quantum mechanical setting. Some aspects of the vast subject of supersymmetric quantum mechanics, and of the supersymmetric Fokker-Planck equation, of relevance to continuous-time nonlinear filtering theory are briefly reviewed. The applicability of certain remarkable results in supersymmetric quantum mechanics to nonlinear filtering is noted and illustrated with a few examples.
The continuous-time nonlinear filtering problem involves the solution of a partial differential equation. In a recent paper, it was shown that supersymmetry enables one to obtain the exact fundamental solution of such PDEs using operator methods. In this paper, Feynman path integral methods are utilized. It is shown that such methods enable one to study higher dimensional nonlinear filtering problems more elegantly and systematically.
Graphical fusion methods are popular for describing distributed sensor applications such as target tracking and pattern recognition. Additional graphical methods include network analysis for social, communications, and sensor management. With the growing availability of various data modalities, graphical fusion methods are widely used to combine data from multiple sensors and modalities. To better understand the usefulness of graph fusion approaches, we address visualization to increase user comprehension of multi-modal data. The paper demonstrates a use case that combines graphs from text reports and target tracks to associate events and activities of interest, with visualization for testing Measures of Performance (MOP) and Measures of Effectiveness (MOE). The analysis includes the presentation of the separate graphs and then graph-fusion visualization for linking network graphs for tracking and classification.
Target detection is limited by a specific sensor's capability; however, the combination of multiple sensors can improve the confidence of target detection. Confidence in detecting, tracking, and identifying a target in a multi-sensor environment depends on intrinsic and extrinsic sensor qualities, e.g., target geo-location registration, and environmental conditions [1]. Determination of the optimal sensors and classification algorithms required to assist in specific target detection has largely been accomplished with empirical experimentation. Formulation of a multi-sensor effectiveness metric (MuSEM) for sensor combinations is presented in this paper. Leveraging one sensor or a combination of sensors should provide a higher confidence of target classification. This metric incorporates Dempster-Shafer theory for decision analysis. MuSEM is defined for weakly labeled multimodal data and is modeled and trained with empirical fused sensor detections; this metric is compared to Boolean algebra algorithms from decision fusion research. Multiple sensor-specific classifiers are compared and fused to characterize sensor detection models and the likelihood functions of the models. For area under the curve (AUC), MuSEM attained values as high as 0.97, with an average difference of 5.33% from the Boolean fusion rules. Data were collected from the Air Force Research Lab’s Minor Area Motion Imagery (MAMI) project. This metric is efficient and effective, providing a confidence of target classification based on sensor combinations.
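The Dempster-Shafer combination underlying such a metric can be sketched as follows; the frame of discernment and the mass values are illustrative assumptions, not MuSEM's trained masses.

```python
def dempster_combine(m1, m2):
    """Dempster's rule for two mass functions keyed by frozenset focal elements."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb          # mass falling on the empty set
    # Renormalize by 1 - K, discarding the conflicting mass K.
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Two sensors reporting over the frame {target, clutter}; values are invented.
T, C = frozenset({"target"}), frozenset({"clutter"})
m_radar = {T: 0.7, T | C: 0.3}
m_eo = {T: 0.6, C: 0.1, T | C: 0.3}
fused = dempster_combine(m_radar, m_eo)
```

Note how mass left on the whole frame (T | C) expresses each sensor's ignorance; combination concentrates belief on {target} as the sensors agree.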
Information Fusion Methodologies and Applications III
Recent technological evolutions and developments allow gathering huge amounts of data stemming from different types of sensors, social networks, intelligence reports, distributed databases, etc. Data quantity and heterogeneity have forced information systems to evolve: nowadays, information systems are based on complex information processing techniques at multiple processing stages. Unfortunately, possessing large quantities of data and being able to implement complex algorithms do not guarantee that the extracted information will be of good quality. Decision-makers need good-quality information in the process of decision-making. We insist that, for a decision-maker, both the information and its quality, viewed as meta-information, are of great importance. A system that does not present information quality to its user risks being used incorrectly or, in more dramatic cases, not being used at all. Some information quality evaluation methodologies can be found in the literature, especially in organizations management and in information retrieval, but none of them allows information quality evaluation in complex and changing environments. We propose a new information quality methodology capable of estimating information quality dynamically as the data and/or the information system's internals change. Our methodology is able to instantaneously update the quality of the system's output. To capture how information quality changes through the system, we introduce the notion of a quality transfer function. It is the quality-level analogue of the transfer function in signal processing: the quality transfer function describes the influence of a processing module on information quality. We also present two different views of the notion of information quality: a global one, characterizing the entire system, and a local one, for each processing module.
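The quality transfer function idea can be sketched as a composition of per-module maps from input-quality scores to output-quality scores; the module names, quality dimensions, and specific functions below are illustrative assumptions, not the paper's methodology.

```python
def detector_quality(q):
    """Hypothetical detector module: degrades accuracy slightly, keeps timeliness."""
    return {"accuracy": 0.9 * q["accuracy"], "timeliness": q["timeliness"]}

def tracker_quality(q):
    """Hypothetical tracker module: accuracy degrades with poor input, adds latency."""
    return {"accuracy": q["accuracy"] ** 1.2, "timeliness": 0.8 * q["timeliness"]}

def system_quality(source_quality, modules):
    """Global quality = composition of the modules' local transfer functions."""
    q = dict(source_quality)
    for transfer in modules:
        q = transfer(q)
    return q

q_out = system_quality({"accuracy": 0.95, "timeliness": 1.0},
                       [detector_quality, tracker_quality])
```

When a source's quality or a module's behavior changes, re-evaluating the composition instantaneously updates the system-level output quality.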
Multi-Intelligence (multi-INT) data includes video, text, and signals that require analysis by operators. Analysis methods include information fusion approaches such as filtering, correlation, and association. In this paper, we discuss the Video Event Segmentation with Text (VEST) method, which provides event boundaries of an activity to compile related message and video clips for future interest. VEST infers meaningful activities by clustering multiple streams of time-sequenced multi-INT intelligence data and derived fusion products. We discuss exemplar results that segment raw full-motion video (FMV) data by using extracted commentary message timestamps, FMV metadata, and user-defined queries.
Information fusion is required for many mission-critical intelligence analysis tasks. Using knowledge extracted from various sources, including entities, relations, and events, intelligence analysts respond to commanders' information requests, integrate facts into summaries about current situations, augment existing knowledge with inferred information, make predictions about the future, and develop action plans. However, information fusion solutions often fail because of conflicting and redundant knowledge contained in multiple sources. Most knowledge conflicts in the past were due to translation errors and reporter bias, and thus could be managed. Current and future intelligence analysis, especially in denied areas, must deal with open source data processing, where there is a much greater presence of intentional misinformation.
In this paper, we describe a model for detecting conflicts in multi-source textual knowledge. Our model is based on constructing semantic graphs representing patterns of multi-source knowledge conflicts and anomalies, and detecting these conflicts by matching the pattern graphs against a data graph constructed using soft co-reference between entities and events in multiple sources. The conflict detection process maintains the uncertainty throughout all phases, providing full traceability and enabling incremental updates of the detection results as new knowledge or modifications to previously analyzed information are obtained. Detected conflicts are presented to analysts for further investigation. In an experimental study with the SYNCOIN dataset, our algorithms achieved perfect conflict detection in the ideal situation (no missing data) while producing 82% recall and 90% precision in a realistic noise situation (15% missing attributes).
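A much-simplified version of the attribute-conflict pattern (two sources asserting different values for the same attribute of a co-referenced entity) can be sketched as follows. The report tuples and entity names are invented for illustration, and the soft co-reference and uncertainty handling of the actual model are omitted.

```python
def detect_conflicts(reports):
    """Flag attribute conflicts among reports tied to the same entity.

    reports -- iterable of (source, entity, attribute, value) tuples, where
    the entity field is assumed to already reflect co-reference resolution."""
    seen, conflicts = {}, []
    for src, entity, attr, value in reports:
        key = (entity, attr)
        if key in seen and seen[key][1] != value:
            conflicts.append((key, seen[key], (src, value)))
        seen.setdefault(key, (src, value))   # keep the first-seen assertion
    return conflicts

# Invented reports: two sources disagree on the color of the same vehicle.
reports = [("HUMINT-1", "veh-12", "color", "white"),
           ("SIGINT-3", "veh-12", "color", "blue"),
           ("HUMINT-2", "veh-12", "type", "truck")]
conflicts = detect_conflicts(reports)
```

In the full model, the hard equality test on entities and values would be replaced by soft, probabilistic matching over the semantic data graph.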
Today’s operators face a “double whammy”: the need to process increasing amounts of information, including “Twitter-INT”1 (social information such as Facebook, YouTube videos, blogs, and Twitter), as well as the need to discern threat signatures in new security environments, including those in which the airspace is contested. To do this will require the Air Force to “fuse and leverage its vast capabilities in new ways.”2 For starters, the integration of quantitative and qualitative information must be done in a way that preserves important contextual information, since the goal increasingly is to identify and mitigate violence before it occurs. Doing so requires a more nuanced understanding of the environment being sensed, including the human environment, ideally from the “emic” perspective; that is, from the perspective of the individual or group in question. This requires not only data and information that inform the understanding of how individuals and/or groups see themselves and others (social identity), but also information on how that identity filters information in their environment, which, in turn, shapes their behaviors.3 The goal is to piece together the individual and/or collective narratives regarding threat, the threat narrative, from various sources of information. Is there a threat? If so, what is it? What is motivating the threat? What is the intent of those who pose the threat, and what are their capabilities and their vulnerabilities?4 This paper describes preliminary investigations into the application of a prototype hybrid information fusion method based on the threat narrative framework.
Information Fusion Methodologies and Applications IV
Understanding of Group Activities (GA) has significant applications in civilian and military domains. The process of understanding GA typically involves spatiotemporal analysis of multi-modality sensor data. Video imagery is one popular sensing modality that offers rich data; however, data from an imagery source may become fragmented and discontinuous for a number of reasons (e.g., data transmission losses, or observation obstructions and occlusions). Making sense of video imagery is therefore a real challenge. It requires an inference working model capable of analyzing video imagery frame by frame, extracting and inferring spatiotemporal information pertaining to observations, and developing an incremental perception of the GA as they emerge over time. In this paper, we propose an ontology-based GA recognition approach in which three inference Hidden Markov Models (HMMs) are used for predicting group activities taking place in outdoor environments under different task operational taxonomies. The three competing models are: a concatenated HMM, a cascaded HMM, and a context-based HMM. The proposed ontology-based GA-HMM was evaluated on a set of semantically annotated visual observations from outdoor group activity experiments. Experimental results from GA-HMM are presented with technical discussion of the design of each model and its potential implications for Persistent Surveillance Systems (PSS).
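The idea of scoring competing HMMs on an annotated observation sequence can be sketched with the standard forward algorithm; the two toy models, their parameters, and the observation alphabet below are illustrative assumptions, not the paper's trained models.

```python
def forward_likelihood(obs, start, trans, emit):
    """Forward algorithm: total probability of an observation sequence
    under a discrete HMM given as start/transition/emission tables."""
    states = list(start)
    alpha = {s: start[s] * emit[s][obs[0]] for s in states}
    for o in obs[1:]:
        alpha = {s: emit[s][o] * sum(alpha[r] * trans[r][s] for r in states)
                 for s in states}
    return sum(alpha.values())

# Two toy competing models scored on one annotated observation sequence.
start = {"idle": 0.6, "active": 0.4}
trans = {"idle": {"idle": 0.7, "active": 0.3},
         "active": {"idle": 0.4, "active": 0.6}}
emit_a = {"idle": {"still": 0.9, "move": 0.1},
          "active": {"still": 0.2, "move": 0.8}}
emit_b = {"idle": {"still": 0.9, "move": 0.1},
          "active": {"still": 0.7, "move": 0.3}}
obs = ["move", "move", "move"]
best = max(["model_a", "model_b"],
           key=lambda m: forward_likelihood(obs, start, trans,
                                            emit_a if m == "model_a" else emit_b))
```

The model whose likelihood is largest is taken as the recognized activity; in a competing-model setup all models share the observation sequence and only their parameters differ.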
The present work is part of an ongoing larger project.2, 3, 11, 12 The goal of this project is to develop a system
capable of automatic threat assessment for instances of firearms use in public places. The main components
of the system are: an ontology of firearms;1, 14 algorithms to create the visual footprint of the firearms,1, 14 to
compare visual information,2, 3, 11, 12 to facilitate search in the ontology, and to generate the links between the
conceptual and visual ontologies; as well as a formula to calculate the threat of individual firearms, firearms
classes, and ammunition types in different environments.
One part of the dual-level ontology for the properties of the firearms captures key visual features used to
identify their type or class in images, while the other part captures their threat-relevant conceptual properties.
The visual ontology is the result of image segmentation and matching methods, while the conceptual ontology
is designed using knowledge-engineering principles and populated semi-automatically from Web resources.
The focus of the present paper is two-fold. On the one hand, we report on an update of the initial threat formula, based on the substantially increased population of the firearm ontology, now including ammunition types and comparisons to actual incidents, which allows for an overall more accurate assessment. On the other hand, the linking algorithms between the visual and conceptual ontologies are elaborated for faster transfer of information, leading to improved accuracy of the threat assessment.
In this paper we propose a dynamic DBSCAN-based method to cluster and visualize unclassified and potentially dangerous obstacles in data sets recorded by a LiDAR sensor. The sensor delivers data sets at short time intervals, so a spatial superposition of multiple data sets is created. We use this superposition to create clusters incrementally. Knowledge about the position and size of each cluster is used to fuse clusters and to stabilize them across multiple time frames. Cluster stability is a key feature for providing a smooth, non-distracting visualization for the pilot: only a few lines indicate the positions of threatening unclassified points where a hazardous situation could arise if the helicopter comes too close. Clustering and visualization form part of an entire synthetic vision processing chain, in which the LiDAR points support the generation of a real-time synthetic view of the environment.
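The core DBSCAN labeling logic can be sketched in pure Python on 2D points; this is a minimal illustration only, not the paper's incremental, superposition-based variant.

```python
from math import dist

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN: label each point with a cluster id, or -1 for noise."""
    labels = [None] * len(points)
    cluster = -1
    def neighbors(i):
        return [j for j in range(len(points)) if dist(points[i], points[j]) <= eps]
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbors(i)
        if len(seeds) < min_pts:
            labels[i] = -1          # provisional noise; may later become a border point
            continue
        cluster += 1
        labels[i] = cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster  # noise reached by expansion becomes a border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            more = neighbors(j)
            if len(more) >= min_pts:        # core point: keep expanding
                queue.extend(k for k in more if labels[k] is None)
    return labels

# Two dense groups plus one far outlier, as in a superposed LiDAR sweep.
pts = [(0, 0), (0.2, 0), (0.1, 0.2), (5, 5), (5.1, 5.2), (5.2, 5.0), (20, 20)]
labels = dbscan(pts, eps=0.5, min_pts=2)
```

The outlier stays labeled -1 (noise) and would be the kind of unclassified point flagged to the pilot.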
Analysts who use visualization methods for big-data concept exploration increasingly expect to comprehend more distinct relationships and prominent concepts in support of their hypotheses or decisions. To expedite this knowledge discovery process, Vector Space Modeling (VSM) in conjunction with probabilistic analysis enables rapid knowledge-based relationship discovery and allows exploration of multi-embedded concepts that would otherwise be difficult to perceive. In this paper, we present a technique for intrinsic ontology concept similarity matching, based on VSM, for exploitation and knowledge discovery from multimodality sensor metadata generated in Persistent Surveillance Systems (PSS). To reduce data dimensionality, Principal Component Analysis (PCA) and Latent Dirichlet Allocation (LDA) are applied to arrive at more abstract concepts. The proposed technique is able to reveal intrinsic concept relationships from multi-dimensional metadata structures. Experimental results demonstrate the effectiveness of this approach for exploiting analytical ontological patterns, and the expediency of the technique for a Visual Analytics application is demonstrated. The results indicate that the newly developed system can significantly enhance situation awareness and expedite actionable decision making.
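The VSM similarity matching at the heart of such a technique can be sketched as cosine similarity between term-count concept vectors; the vocabulary and counts below are hypothetical placeholders for PSS metadata, and the dimensionality-reduction step is omitted.

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two term-frequency vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu, nv = sqrt(sum(a * a for a in u)), sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

vocab = ["vehicle", "person", "stop", "enter", "exit"]
# Hypothetical concept vectors built from PSS metadata term counts.
loading_dock = [4, 3, 2, 3, 3]
parking_lot  = [5, 2, 3, 1, 1]
crowd_event  = [0, 6, 1, 2, 2]
sims = {name: cosine(loading_dock, vec)
        for name, vec in [("parking_lot", parking_lot), ("crowd_event", crowd_event)]}
closest = max(sims, key=sims.get)
```

In the full pipeline the vectors would first be projected to a lower-dimensional abstract-concept space (e.g., via PCA) before the similarity comparison.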
The axiomatic development of information theory during the 1960s led to the discovery of various composition laws. The Wiener-Shannon law is well understood, but the Inf law holds particular interest because it creates a connection with the Dempster-Shafer theory. Proceeding along these lines, in a previous paper I demonstrated the connection between the Dempster-Shafer theory and information theory. In 1954, Gustave Choquet developed the theory of capacities in connection with potential theory. The basic concepts of capacity theory arise from electrostatics, but a capacity is a generalization of the concept of measure in Analysis. It is well known that Belief and Plausibility in the Dempster-Shafer theory are Choquet capacities. However, it is not well known that the inverse of an information measure is a Choquet capacity. The objective of this paper is to demonstrate the connections among the Dempster-Shafer theory, information theory, and Choquet's theory of capacities.
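The claim that Belief and Plausibility are monotone set functions (Choquet capacities) can be made concrete with a toy mass function; this sketch only illustrates the standard Dempster-Shafer definitions, not the paper's Inf-law construction.

```python
def belief(mass, event):
    """Bel(A) = sum of masses of focal sets contained in A."""
    return sum(m for s, m in mass.items() if s <= event)

def plausibility(mass, event):
    """Pl(A) = sum of masses of focal sets intersecting A."""
    return sum(m for s, m in mass.items() if s & event)

# A mass function on the frame {a, b, c}; keys are frozensets of outcomes.
m = {frozenset("a"): 0.5, frozenset("bc"): 0.3, frozenset("abc"): 0.2}
A = frozenset("ab")
bel_A, pl_A = belief(m, A), plausibility(m, A)
```

Note Bel(A) <= Pl(A), both grow monotonically with A, and Bel of the full frame is 1: exactly the capacity behavior the abstract refers to.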
Signal and Image Processing, and Information Fusion Applications I
Humans recognize faces with binocular vision, while computers typically use a single face image. It is known that the performance of face recognition (by a computer) can be improved using score fusion of multimodal images and multiple algorithms. A natural question is: can we apply stereo vision to a face recognition system? Human binocular vision has many advantages, such as stereopsis (3D vision), binocular summation, and singleness of vision including fusion of binocular images (the cyclopean image). For face recognition, a 3D face or 3D facial features are typically computed from a pair of stereo images. In human visual processing, binocular summation and singleness of vision are similar to image fusion processes. In this paper, we propose an advanced face recognition system with stereo imaging capability, comprising two 2-in-1 multispectral (visible and thermal) cameras and three recognition algorithms (circular Gaussian filter, face pattern byte, and linear discriminant analysis [LDA]). Specifically, we present and compare stereo fusion at three levels (image, feature, and score) using stereo images from the left and right cameras. Image fusion is achieved with three methods (Laplacian pyramid, wavelet transform, and averaging); feature fusion is done with three logical operations (AND, OR, XOR); and score fusion is implemented with four classifiers (LDA, k-nearest neighbor, support vector machine, and binomial logistic regression). System performance is measured by the probability of correct classification (PCC) rate (reported as accuracy rate in this paper) and the false accept rate (FAR). The proposed approaches were validated with a multispectral stereo face dataset from 105 subjects. Experimental results show that any type of stereo fusion can improve the PCC while reducing the FAR. Stereo image/feature fusion appears superior to stereo score fusion in terms of recognition performance.
Further score fusion after image/feature fusion yields little (or no) additional performance improvement, which may imply that image/feature/score fusion should not be applied twice in a face recognition pipeline.
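Score-level fusion of the two cameras can be sketched as min-max normalization followed by the sum rule, a simple classifier-free baseline rather than the paper's trained classifiers; the gallery scores below are hypothetical.

```python
def minmax(scores):
    """Min-max normalize a list of match scores to [0, 1]."""
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) if hi > lo else 0.0 for s in scores]

def fuse(left, right):
    """Sum-rule score fusion of left- and right-camera matcher scores."""
    return [(a + b) / 2 for a, b in zip(minmax(left), minmax(right))]

# Hypothetical gallery scores for one probe from the two stereo cameras.
left_scores  = [0.2, 0.9, 0.4]
right_scores = [0.1, 0.7, 0.8]
fused = fuse(left_scores, right_scores)
best_match = fused.index(max(fused))
```

Normalizing before fusing keeps one matcher's score scale from dominating the combined decision.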
The study in this paper is part of a more general research effort to discover facial sub-clusters in face databases of different ethnicities. These new sub-clusters, along with other metadata (such as race, sex, etc.), lead to a vector for each face in the database, where each vector component represents the likelihood that a given face belongs to each cluster. This vector is then used as a feature vector in a human identification and tracking system based on face and other biometrics. The first stage in this system involves a clustering method which evaluates and compares the clustering results of five different clustering algorithms (average, complete, and single hierarchical algorithms, k-means, and DIGNET), and selects the best strategy for each data collection. In this paper we present the comparative performance of the clustering results of DIGNET and four clustering algorithms (average, complete, and single hierarchical, and k-means) on fabricated 2D and 3D samples, and on actual face images from various databases, using four different standard metrics: the silhouette figure, the mean silhouette coefficient, the Hubert test Γ coefficient, and the classification accuracy of each clustering result. The results showed that, in general, DIGNET gives more trustworthy results than the other algorithms when the metric values are above a specific acceptance threshold. However, when the evaluation metrics have values lower than the acceptance threshold but not too low (too low corresponds to ambiguous or false results), it is necessary for the clustering results to be verified by the other algorithms.
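The mean silhouette coefficient, one of the evaluation metrics above, can be computed directly from a labeling; the 2D samples below are fabricated for illustration.

```python
from math import dist

def mean_silhouette(points, labels):
    """Mean silhouette coefficient: (b - a) / max(a, b) averaged over points,
    where a = mean intra-cluster distance and b = smallest mean distance to
    any other cluster."""
    vals = []
    for i, p in enumerate(points):
        same = [dist(p, q) for j, q in enumerate(points)
                if j != i and labels[j] == labels[i]]
        others = {}
        for j, q in enumerate(points):
            if labels[j] != labels[i]:
                others.setdefault(labels[j], []).append(dist(p, q))
        if not same or not others:
            continue
        a = sum(same) / len(same)
        b = min(sum(d) / len(d) for d in others.values())
        vals.append((b - a) / max(a, b))
    return sum(vals) / len(vals)

pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
good = mean_silhouette(pts, [0, 0, 0, 1, 1, 1])   # tight, well-separated clusters
bad  = mean_silhouette(pts, [0, 1, 0, 1, 0, 1])   # labels scrambled across blobs
```

A value near 1 indicates trustworthy clustering; a value near or below 0 is the "below the acceptance threshold" case where cross-verification with other algorithms is warranted.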
Recognizing faces acquired in the thermal spectrum from a gallery of visible face images is a desired capability for the military and homeland security, especially for nighttime surveillance and intelligence gathering. However, thermal-to-visible face recognition is a highly challenging problem due to the large modality gap between thermal and visible imaging. In this paper, we propose a thermal-to-visible face recognition approach based on multiple kernel learning (MKL) with support vector machines (SVMs). We first subdivide the face into non-overlapping spatial regions or blocks using a method based on coalitional game theory. For comparison purposes, we also investigate uniform spatial subdivisions. Following this subdivision, histogram of oriented gradients (HOG) features are extracted from each block and used to compute a kernel for each region. We apply sparse multiple kernel learning (SMKL), an MKL-based approach that learns a set of sparse kernel weights as well as the decision function of a one-vs-all SVM classifier for each of the subjects in the gallery. We also apply equal (non-sparse) kernel weights and obtain one-vs-all SVM models for the same subjects in the gallery. Only visible images of each subject are used for MKL training, while thermal images are used as probe images during testing. With the subdivision generated by game theory, we achieved a Rank-1 identification rate of 50.7% for SMKL and 93.6% for equal kernel weighting on a multimodal dataset of 65 subjects. With uniform subdivisions, we achieved a Rank-1 identification rate of 88.3% for SMKL, but 92.7% for equal kernel weighting.
In law enforcement and security applications, the acquisition of face images is critical in producing key trace evidence for the successful identification of potential threats. The goal of this study is to demonstrate that steroid usage significantly affects human facial appearance and, hence, the performance of commercial and academic face recognition (FR) algorithms. In this work, we evaluate the performance of state-of-the-art FR algorithms on two unique face image datasets of subjects before (gallery set) and after (probe set) steroid (or human growth hormone) usage. For the purposes of this study, datasets of 73 subjects were created from multiple sources found on the Internet, containing images of men and women before and after steroid usage. Next, we geometrically pre-processed all images of both face datasets. Then, we applied image restoration techniques to the same face datasets, and finally, we applied FR algorithms to match the pre-processed face images of our probe datasets against the face images of the gallery set. Experimental results demonstrate that only a specific set of FR algorithms obtains accurate results (in terms of the rank-1 identification rate). This is because several factors influence the efficiency of face matchers, including (i) the time lapse between the before and after face photos, (ii) the usage of different drugs (e.g., Dianabol, Winstrol, and Decabolan), (iii) the usage of different cameras to capture face images, and (iv) the variability of standoff distance, illumination, and other noise factors (e.g., motion noise). All of these complicated scenarios make clear that cross-scenario matching is a very challenging problem and, thus, further investigation is required.
Signal and Image Processing, and Information Fusion Applications II
A method for fusing imagery from mobile devices with map data in real time is described. A camera model for iOS devices equipped with a camera, GPS, and compass is developed. The parameters of the camera model are determined from information supplied by the device's onboard sensors. The camera model projects photo and video data into the ground plane so they can be combined and exploited with map data.
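Assuming a flat ground plane and a simple pinhole model with known camera height and pitch (a simplification of the full sensor-driven camera model; all parameter values below are hypothetical), the ground projection can be sketched as a ray-plane intersection:

```python
from math import radians, sin, cos

def pixel_to_ground(px, py, f, cx, cy, height, pitch_deg):
    """Project an image pixel onto the flat ground plane (z = 0) for a
    pinhole camera at the given height, pitched down by pitch_deg.
    Returns (x, y) in meters relative to the point below the camera,
    or None for pixels at or above the horizon."""
    th = radians(pitch_deg)
    u, v = px - cx, py - cy
    denom = v * cos(th) + f * sin(th)   # downward component of the view ray
    if denom <= 0:
        return None                      # ray never hits the ground
    t = height / denom                   # scale to reach z = 0
    return (t * u, t * (f * cos(th) - v * sin(th)))

# Principal point of a 640x480 image; camera 1.5 m up, pitched 45 deg down.
center = pixel_to_ground(320, 240, f=500.0, cx=320, cy=240,
                         height=1.5, pitch_deg=45.0)
```

For the 45-degree pitch the principal ray lands 1.5 m ahead of the camera, matching the expected geometry; pixels above the horizon are rejected rather than projected.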
We explore the potential of on-line adjustment of sensory controls for improved object identification and discrimination
in the context of a simulated high resolution camera system carried onboard a maneuverable robotic
platform that can actively choose its observational position and pose. Our early numerical studies suggest the
significant efficacy and enhanced performance achieved by even very simple feedback-driven iteration of the view
in contrast to identification from a fixed pose, uninformed by any active adaptation. Specifically, we contrast the
discriminative performance of the same conventional classification system when informed by: a random glance
at a vehicle; two random glances at a vehicle; or a random glance followed by a guided second look. After
each glance, edge detection algorithms isolate the most salient features of the image and template matching
is performed through the use of the Hausdorff distance, comparing the simulated sensed images with reference
images of the vehicles. We present initial simulation statistics that overwhelmingly favor the third scenario.
We conclude with a sketch of our near-future steps in this study that will entail: the incorporation of more
sophisticated image processing and template matching algorithms; more complex discrimination tasks such as
distinguishing between two similar vehicles or vehicles in motion; more realistic models of the observer's mobility
including platform dynamics and eventually environmental constraints; and expanding the sensing task beyond
the identification of a specified object selected from a pre-defined library of alternatives.
With the recent influx of inexpensive depth sensors such as the Microsoft Kinect, systems for 3D reconstruction and visual odometry utilizing depth information have garnered new interest. Often these processes are highly parallel and can be realized in real-time through parallel computing architectures for the GPU. We represent fused depth maps as a Truncated Signed Distance Function (TSDF) which is a grid of voxels that contain the distance to the nearest surface. Point clouds of subsequent captures are aligned to the TSDF volume through error minimization, taking advantage of the fact that surfaces are implicitly defined where the distance is zero. We present a new method for minimizing the error based on absolute orientation that does not require linearization or a weighting function. To evaluate the proposed method we compare the number of iterations required for convergence, time per iteration, and final alignment error with existing Gauss-Newton nonlinear minimization methods. While we use the Microsoft Kinect due to its fused depth and color capabilities, the alignment only requires depth and is applicable to current active fused lidar systems with VNIR, SWIR, MWIR or LWIR sensors.
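The absolute-orientation idea, closed-form alignment without linearization, can be illustrated in 2D, where the optimal rotation has a direct atan2 solution; the paper's method operates on 3D points against the TSDF, so this sketch only conveys the principle.

```python
from math import atan2, sin, cos, pi

def align_2d(src, dst):
    """Closed-form absolute orientation in 2D: the rotation angle and
    translation that best map src points onto dst (least squares)."""
    n = len(src)
    cs = (sum(p[0] for p in src) / n, sum(p[1] for p in src) / n)
    cd = (sum(p[0] for p in dst) / n, sum(p[1] for p in dst) / n)
    # Accumulate cross- and dot-products of centered point pairs.
    s_cross = s_dot = 0.0
    for (sx, sy), (dx, dy) in zip(src, dst):
        ax, ay = sx - cs[0], sy - cs[1]
        bx, by = dx - cd[0], dy - cd[1]
        s_cross += ax * by - ay * bx
        s_dot += ax * bx + ay * by
    theta = atan2(s_cross, s_dot)        # optimal rotation, no iteration needed
    c, s = cos(theta), sin(theta)
    t = (cd[0] - (c * cs[0] - s * cs[1]), cd[1] - (s * cs[0] + c * cs[1]))
    return theta, t

def transform(p, theta, t):
    c, s = cos(theta), sin(theta)
    return (c * p[0] - s * p[1] + t[0], s * p[0] + c * p[1] + t[1])

# Recover a known 90-degree rotation plus a (1, 2) shift.
src = [(0, 0), (1, 0), (0, 1), (2, 2)]
dst = [transform(p, pi / 2, (1.0, 2.0)) for p in src]
theta, t = align_2d(src, dst)
```

Because the optimum is obtained in one step, no iteration count or weighting function enters the comparison, which is the contrast drawn against Gauss-Newton minimization.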
Understanding of group activity based on analysis of spatiotemporally correlated acoustic sound events has received minimal attention in the literature and hence is not well understood. Identification of group sub-activities such as Human-Vehicle Interactions (HVI), Human-Object Interactions (HOI), and Human-Human Interactions (HHI) can significantly improve Situational Awareness (SA) in Persistent Surveillance Systems (PSS). In this paper, salient sound events associated with group activities are first identified and used to train a Gaussian Mixture Model (GMM) whose features are employed as feature vectors for training acoustic sound recognition algorithms. Discrimination of salient sounds associated with HVI, HHI, and HOI events is achieved via a Correlation Based Template Matching (CMTM) classifier. To interlink salient events representing an ontology-based hypothesis, a Hidden Markov Model (HMM) is employed to recognize spatiotemporally correlated events. Once such a connection is established, the system generates an annotation of each perceived sound event. This paper discusses the technical aspects of this approach and presents experimental results for several outdoor group activities monitored by an array of acoustic sensors.
Signal and Image Processing, and Information Fusion Applications III
Market analysis studies of recent years have shown a steady and significant increase in the use of RFID technology. Key factors for this growth are the decreased cost of passive RFID tags and their improved performance compared to other identification technologies. Beyond the benefits of RFID technologies in supply chains, warehousing, and traditional inventory and asset management applications, RFID has proven itself worth exploiting, at both the experimental and commercial level, in other sectors such as healthcare, transport, and security. Within the security sector, airport security is one of the biggest challenges. Airports are extremely busy public places and thus prime targets for terrorism, with aircraft, passengers, crew, and airport infrastructure all subject to terrorist attacks. Inside this labyrinth of security challenges, the long-range detection capability of passive UHF RFID technology can be turned into a very important tracking tool that may overcome the limitations of barcode tracking inside the current airport security control chain. The Integrated Systems Lab of NCSR Demokritos has developed an RFID-based luggage and passenger tracking system within the TASS (FP7-SEC-2010-241905) EU research project. This paper describes application scenarios of the system, categorized according to the structured nature of the environment, presents the system architecture, and reports evaluation results extracted from measurements with a group of different mass-production GEN2 UHF RFID tags that are widely available on the world market.
In remote sensing, accurate identification of distant concealed objects is difficult. Here, wideband technology is utilized to detect concealed objects from a distance. Because wideband data include a broad range of frequencies, they can reveal information about both the surface of an object and its content. To better detect the object and improve the accuracy of target identification, the collected wideband data are processed in the wavelet domain. Information about the target is spread over different wavelet subbands, making it possible to better discriminate the target from a background whose frequency content lies in the same frequency range. Simulations are performed at different frequency ranges and different power levels to identify targets. In conclusion, wavelet-based processing of collected wideband data helps to reliably estimate the presence of a target in the scene and improves the process of target identification.
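A single Haar analysis step (a stand-in for whichever wavelet family the simulations use; the signal values are fabricated) shows how a sharp target return separates into a high-pass subband distinct from the slowly varying background:

```python
from math import sqrt

def haar_step(signal):
    """One level of the Haar wavelet transform: split an even-length signal
    into low-pass (approximation) and high-pass (detail) subbands."""
    lo = [(signal[2 * i] + signal[2 * i + 1]) / sqrt(2)
          for i in range(len(signal) // 2)]
    hi = [(signal[2 * i] - signal[2 * i + 1]) / sqrt(2)
          for i in range(len(signal) // 2)]
    return lo, hi

# A slow trend (background/surface return) with one sharp spike (target).
signal = [1.0, 1.0, 1.0, 1.0, 1.0, 9.0, 1.0, 1.0]
lo, hi = haar_step(signal)
spike_band = max(range(len(hi)), key=lambda i: abs(hi[i]))
```

The detail subband is zero everywhere except at the spike, so a target whose broadband signature overlaps the background in raw frequency content can still stand out in the wavelet domain; the orthonormal filters also preserve total signal energy across the subbands.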
A small form factor, low cost radar named rScene® has been designed by McQ Inc. for the unattended detection, classification, tracking, and speed estimation of people and vehicles. This article describes recent performance enhancements added to rScene® and presents results on detection range and false alarms. Additionally, a low-power (<1 W) processing scheme is described that allows the rScene® to be deployed for longer durations while still detecting the desired target scenarios. Using the rScene® to detect other targets of interest, such as boats over water, is also addressed. Lastly, we address the absence of performance degradation when the rScene® is concealed, for example behind walls, doors, foliage, or camouflage material. rScene® provides a variety of options to integrate the device into both wired and wireless communication infrastructures. Its sophisticated signal processing algorithms classify targets and reject clutter, allowing operation in challenging urban environments in which traditional unattended ground sensor modalities are less effective.
The accurate detection of a diverse set of targets often requires the use of multiple sensor modalities and algorithms. Fusion approaches can be used to combine information from multiple sensors or detectors, but typical fusion approaches are not suitable when detectors do not operate on all of the same locations of interest, or when detectors are specialized to detect disjoint sets of target types. Run Packing is an algorithm we developed previously to optimally combine detectors whose outputs never coincide, which can be expected when the detectors are specialized to detect different target types. But when asynchronous detectors sometimes coincide, or specialized detectors sometimes detect the same target, Run Packing ignores this coincidence information and thus may be suboptimal in certain cases. In this paper, we show how multi-detector fusion involving partially coinciding alarms can be re-framed as an equivalent fusion problem that is optimally addressed by Run Packing. This amounts to a hierarchical or hybrid approach: fusion methods first join coinciding alarms at the same location into a single unified alarm, and Run Packing then optimally fuses the resulting set of non-coinciding alarms. We report preliminary results of applying the method in a few typical landmine detection scenarios to demonstrate its potential utility.
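The pre-fusion step of joining coinciding alarms can be sketched as greedy grouping by position with noisy-OR confidence combination; the radius and the combination rule here are illustrative assumptions, not the paper's exact method.

```python
def merge_coinciding(alarms, radius):
    """Greedy pre-fusion step: join alarms from different detectors that fall
    within `radius` of each other into one unified alarm whose confidence is
    combined by noisy-OR; the result contains no coinciding alarms."""
    merged = []
    for pos, conf in sorted(alarms):
        for group in merged:
            if abs(group["pos"] - pos) <= radius:
                group["members"].append((pos, conf))
                group["conf"] = 1 - (1 - group["conf"]) * (1 - conf)
                break
        else:
            merged.append({"pos": pos, "conf": conf, "members": [(pos, conf)]})
    return merged

# Alarm positions (1-D lane coordinate) and confidences from two detectors.
alarms = [(10.0, 0.6), (10.3, 0.5), (25.0, 0.8), (40.0, 0.4)]
unified = merge_coinciding(alarms, radius=0.5)
```

The output set never contains two alarms within the radius of each other, so it satisfies the non-coincidence precondition under which Run Packing is optimal.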
Signal and Image Processing, and Information Fusion Applications IV
Target recognition and classification in a 3D point cloud is a non-trivial process due to the nature of the data collected from a sensor system. The signal can be corrupted by noise from the environment, the electronic system, the A/D converter, etc. Therefore, an adaptive system with a desired tolerance is required to perform classification and recognition optimally. The feature-based pattern recognition algorithm architecture described below is devised specifically to solve a single-sensor classification problem non-parametrically. A feature set is extracted from an input point cloud, normalized, and classified by a neural network classifier. For instance, automatic target recognition in an urban area would require different feature sets from one in a dense foliage area.
The figure above (see manuscript) illustrates the architecture of the feature-based adaptive signature extraction for 3D point clouds, including LIDAR, RADAR, and electro-optical data. This network takes a 3D cluster and classifies it into a specific class. The algorithm is a supervised, adaptive classifier with two modes: a training mode and a performing mode. For the training mode, a number of novel patterns are selected from actual or artificial data. A particular 3D cluster is input to the network, as shown above, to produce the decision class output. The network consists of three sequential functional modules. The first module performs feature extraction, reducing the input cluster to a set of singular-value features, or feature vector. The feature vector is then passed to the feature normalization module, which normalizes and balances it before it is fed to the neural net classifier for classification. The neural net can be trained on actual or artificial novel data until each trained output reaches the declared output within the defined tolerance. If new novel data is added after the neural net has been trained, training resumes until the network has incrementally learned the new data. The associative memory capability of the neural net enables this incremental learning. The back-propagation algorithm or a support vector machine can be utilized for the classification and recognition.
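A rough sketch of the three-module pipeline follows: singular-value feature extraction, normalization, and a classifier. The paper's classifier is a neural network (back-propagation or SVM); here a nearest-centroid classifier stands in only to show where the classifier slots into the pipeline, and all function names are illustrative assumptions.

```python
import numpy as np

def singular_value_features(cluster, k=3):
    """Singular values of a centered 3D point cluster (N x 3 array).
    These capture the cluster's spatial extent along its principal
    axes, independent of orientation."""
    centered = cluster - cluster.mean(axis=0)
    s = np.linalg.svd(centered, compute_uv=False)
    return s[:k]

def normalize(features, mean, std):
    """Z-score normalization so no single feature dominates the classifier."""
    return (features - mean) / std

def classify(features, centroids):
    """Stand-in classifier: nearest centroid in feature space.
    `centroids` maps class label -> feature-space centroid."""
    dists = {label: np.linalg.norm(features - c) for label, c in centroids.items()}
    return min(dists, key=dists.get)
```

An elongated cluster (e.g., a vehicle) yields a dominant first singular value, while a compact cluster does not, which is the kind of orientation-invariant cue such a feature vector can carry.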
In this paper, we propose a vision-based target detection scheme based on video saliency to find rescue targets automatically. A new definition of video saliency is first proposed to capture the target properties based on structure similarity among adjacent video frames. Then, a cascaded target detection is devised with the local image feature and regional video saliency. We treat the salient objects in aerial images as a basic feature and filter out false candidates based on the region-based video saliency. The proposed method can improve the search and rescue of targets and reduce the economic losses in marine accidents.
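The abstract does not give its exact saliency definition, but the idea of scoring regions by structure similarity among adjacent frames can be illustrated crudely as follows: blocks whose structure changes between consecutive frames score high. The block size and the correlation-based similarity are assumptions for illustration only.

```python
import numpy as np

def block_saliency(prev, curr, block=8):
    """Crude per-block video saliency: 1 minus the normalized
    correlation between corresponding blocks of adjacent frames.
    Blocks whose structure changes between frames score high."""
    h, w = curr.shape
    sal = np.zeros((h // block, w // block))
    for by in range(h // block):
        for bx in range(w // block):
            p = prev[by*block:(by+1)*block, bx*block:(bx+1)*block].ravel()
            c = curr[by*block:(by+1)*block, bx*block:(bx+1)*block].ravel()
            p = p - p.mean()
            c = c - c.mean()
            denom = np.linalg.norm(p) * np.linalg.norm(c)
            corr = (p @ c) / denom if denom > 1e-9 else 1.0
            sal[by, bx] = 1.0 - corr
    return sal
```

In a cascaded scheme, candidate detections from local image features would then be kept or discarded according to the regional saliency score.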
Signal Processing, Information Fusion, and Understanding Aspects of Cyber Physical Systems
In this work, we propose a new ground moving target indicator (GMTI) radar based ground
vehicle tracking method which exploits domain knowledge. Multiple state models are considered
and a Monte-Carlo sampling based algorithm is preferred due to the manoeuvring of the ground
vehicle and the non-linearity of the GMTI measurement model. Unlike the commonly used
algorithms such as the interacting multiple model particle filter (IMMPF) and bootstrap multiple
model particle filter (BS-MMPF), we propose a new algorithm integrating the more efficient
auxiliary particle filter (APF) into a Bayesian framework. Moreover, since the movement of
the ground vehicle is likely to be constrained by the road, this information is taken as the
domain knowledge and applied together with the tracking algorithm for improving the tracking
performance. Simulations are presented to show the advantages of both the new algorithm and
incorporation of the road information by evaluating the root mean square error (RMSE).
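The road-network domain knowledge mentioned above can be illustrated with a simple geometric building block: projecting a position estimate onto a piecewise-linear road. The paper embeds the constraint in a Bayesian particle-filter framework rather than as a hard projection, so this sketch shows only the geometry, and the polyline representation is an assumption.

```python
import numpy as np

def project_onto_road(point, road):
    """Project a 2D position estimate onto a piecewise-linear road.

    `road` is an (M, 2) array of waypoints. Returns the closest point
    on any road segment. In a road-constrained tracker, this kind of
    projection (or an equivalent pseudo-measurement) pulls particles
    toward the road.
    """
    best, best_d2 = None, np.inf
    for a, b in zip(road[:-1], road[1:]):
        ab = b - a
        # parameter of the closest point on segment a->b, clamped to [0, 1]
        t = np.clip(np.dot(point - a, ab) / np.dot(ab, ab), 0.0, 1.0)
        p = a + t * ab
        d2 = np.dot(point - p, point - p)
        if d2 < best_d2:
            best, best_d2 = p, d2
    return best
```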
Postural stability characteristics are considered important in maintaining functional independence, freedom from falls, and a healthy lifestyle, especially for the growing elderly population. This study focuses on developing tools of clinical value in fall prevention: 1) implementing sensors that are minimally obtrusive and reliably record movement data; 2) unobtrusively gathering data from wearable sensors at four community centers; and 3) developing and implementing linear and non-linear signal analysis algorithms to extract clinically relevant information using wearable technology. In total, 100 community-dwelling elderly individuals (66 non-fallers and 34 fallers) participated in the experiment. All participants were asked to stand still in eyes-open (EO) and eyes-closed (EC) conditions on a force plate with one wireless inertial sensor affixed at sternum level. Participants' history of falls had been recorded for the last 2 years, with emphasis on the frequency and characteristics of falls. Any participant with at least one fall in the prior year was classified as a faller and the others as non-fallers. The results indicated several key factors/features of postural characteristics relevant to balance control and stability during quiet stance and showed good predictive capability of fall risks among older adults. Wearable technology allowed us to gather data where it matters most for answering fall-related questions, i.e., in community settings. This study opens new prospects for clinical testing using postural variables with a wearable sensor that may be relevant for assessing fall risks at home and in patient environments in the near future.
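Linear sway features of the kind such studies commonly extract from a sternum-mounted inertial sensor can be sketched as follows. The specific feature set here (RMS amplitude, range, mean sway velocity) is an assumption for illustration; the study also applies non-linear measures not shown.

```python
import numpy as np

def sway_features(accel, fs):
    """Linear postural-sway features from a 1-D sternum acceleration
    trace sampled at `fs` Hz: RMS amplitude, range, and mean velocity
    of the cumulative sway path.
    """
    x = accel - accel.mean()               # remove gravity/offset component
    rms = np.sqrt(np.mean(x ** 2))
    amp_range = x.max() - x.min()
    path = np.abs(np.diff(x)).sum()        # total sway path length
    mean_velocity = path * fs / len(x)     # path traversed per second
    return {"rms": rms, "range": amp_range, "mean_velocity": mean_velocity}
```

Features like these, computed separately for eyes-open and eyes-closed trials, would then feed a faller/non-faller classifier.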
Senior Collapse Detection (SCD) is a system that uses cyber-physical techniques to create a “smart home” system to predict and detect falls of senior/geriatric participants in home environments. This software application addresses the needs of millions of senior citizens who live at home by themselves and can find themselves in situations where they have fallen and need assistance. We discuss how SCD fuses imagery, depth and audio in a system that does not require the senior to wear any devices, allowing them to be more autonomous. The Microsoft Kinect Sensor is used to collect imagery, depth and audio. We begin by discussing the physical attributes of the “collapse detection problem”. Next, we discuss the task of feature extraction resulting in skeleton and joint tracking. Improvements in error detection of joint tracking are highlighted. Next, we discuss the main module of “fall detection” using our mid-level skeleton features. Attributes including acceleration, position and room environment factor into the SCD fall detection decision. Finally, we present how a detected fall and the resultant emergency response are handled. Results in a home environment are given.
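A toy decision rule in the spirit of the skeleton-feature approach might combine a rapid downward motion with a low final position of a tracked joint. The thresholds and the head-height feature below are illustrative assumptions, not values or features from the paper.

```python
def detect_fall(head_heights, fs, v_thresh=-1.5, h_thresh=0.5):
    """Toy fall detector over a sequence of head-joint heights (meters)
    sampled at `fs` Hz: flag a fall when downward velocity exceeds
    `v_thresh` (m/s) at some point and the head ends up below
    `h_thresh` (m)."""
    fast_drop = False
    for h0, h1 in zip(head_heights, head_heights[1:]):
        if (h1 - h0) * fs < v_thresh:    # frame-to-frame vertical velocity
            fast_drop = True
    return fast_drop and head_heights[-1] < h_thresh
```

A real system, as the abstract notes, would also weigh acceleration, position, and room-environment attributes before triggering an emergency response.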
The potential benefits of distributed robotics systems in applications requiring situational awareness, such as search-and-rescue in emergency situations, are indisputable. The efficiency of such systems requires robotic agents capable of coping with uncertain and dynamic environmental conditions. For example, after an earthquake, a tremendous effort is spent for days to reach surviving victims, a task in which robotic swarms or other distributed robotic systems could play a great role by accomplishing it faster. However, current technology falls short of offering centimeter scale mobile agents that can function effectively under such conditions. Insects, the inspiration of many robotic swarms, exhibit an unmatched ability to navigate through such environments while successfully maintaining control and stability. We have benefitted from recent developments in neural engineering and neuromuscular stimulation research to fuse the locomotory advantages of insects with the latest developments in wireless networking technologies to enable biobotic insect agents to function as search-and-rescue agents. Our research efforts towards this goal include development of biobot electronic backpack technologies, establishment of biobot tracking testbeds to evaluate locomotion control efficiency, investigation of biobotic control strategies with Gromphadorhina portentosa cockroaches and Manduca sexta moths, establishment of a localization and communication infrastructure, modeling and controlling collective motion by learning deterministic and stochastic motion models, topological motion modeling based on these models, and the development of a swarm robotic platform to be used as a testbed for our algorithms.
The explosive increase in the availability of personal mobile devices has brought about a significant number of peer-to-peer communication opportunities when devices encounter one another, which can be exploited to realize distributed message transmission among mobile devices. However, the opportunistic encountering among mobile devices, which is determined by the mobility of their holders, introduces great difficulty in efficiently transmitting a message to its designated destination. In practice, people usually exhibit a certain pattern in their daily mobility. Further, device holders often belong to a certain social network community. Therefore, in this paper, we propose a social-based cyber-physical system for distributed message transmission, namely SocMessaging, by integrating both the mobility pattern and the social network of device holders. When selecting an encountered node for message relay, in addition to the node's historical encountering records with the destination node, SocMessaging also considers its social closeness with the destination node. The message is thus always transmitted to the node that is most likely to meet its destination. As a result, SocMessaging closely connects the cyber world (i.e., the network), the physical world (i.e., people) and the social network (i.e., social connections). Finally, our experimental results demonstrate the efficiency of the proposed system in message transmission between device holders.
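The relay-selection idea, scoring each encountered node by both its encounter history with the destination and its social closeness to the destination, can be sketched as below. The abstract does not give SocMessaging's exact combination rule, so the weighted-sum form and the weight `alpha` are assumptions.

```python
def relay_score(encounter_prob, social_closeness, alpha=0.5):
    """Combined relay-selection score: a weighted mix of the candidate
    node's historical encounter probability with the destination and
    its social closeness to the destination (both in [0, 1])."""
    return alpha * encounter_prob + (1 - alpha) * social_closeness

def choose_relay(candidates):
    """Pick the encountered node most likely to deliver the message.
    `candidates` maps node id -> (encounter_prob, social_closeness)."""
    return max(candidates, key=lambda n: relay_score(*candidates[n]))
```

For example, a node with balanced encounter history and social closeness can outrank one that is strong on only a single dimension.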
Question and Answering (Q/A) systems aggregate the collected intelligence of all users to provide satisfying answers to questions. A well-developed Q/A system should provide a high question response rate, low response delay and good answer quality. Previous works use reputation systems to achieve these goals. However, these reputation systems evaluate a user with an overall rating for all questions the user has answered, regardless of the question categories; thus the reputation score cannot accurately reflect the user's ability to answer a question in a specific category. In this paper, we propose TtustQ, a category-reputation-based Q/A system. TtustQ evaluates users' willingness and capability to answer questions in different categories. Considering that a user has different willingness to answer questions from different users, TtustQ lets each node evaluate the reputation of other nodes in answering its own questions. User a calculates user b's final reputation by considering both user a's direct rating and the indirect ratings on user b from other nodes. The reputation values facilitate forwarding a question to potential answerers, which improves the question response rate, response delay and answer quality. Our trace-driven simulation on PeerSim demonstrates the effectiveness of TtustQ in providing a good user experience in terms of response rate, latency, and answer quality.
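The final-reputation computation described above, combining a node's direct rating with indirect ratings from other nodes, can be sketched as follows. The weighted-average form and the weight `w` are assumptions for illustration; the abstract does not give the exact formula, and in TtustQ this would be kept per question category.

```python
def final_reputation(direct_rating, indirect_ratings, w=0.7):
    """User a's final reputation for user b in one question category:
    a weighted combination of a's own direct rating of b and the
    indirect ratings on b reported by other nodes."""
    if not indirect_ratings:
        return direct_rating
    indirect = sum(indirect_ratings) / len(indirect_ratings)
    return w * direct_rating + (1 - w) * indirect
```

Weighting the direct rating more heavily reflects that a node trusts its own experience with an answerer over second-hand reports.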
In this paper we introduce High Assurance SPIRAL to solve the last mile problem for the synthesis of high assurance implementations of controllers for vehicular systems that are executed in today’s and future embedded and high performance embedded system processors. High Assurance SPIRAL is a scalable methodology to translate a high level specification of a high assurance controller into a highly resource-efficient, platform-adapted, verified control software implementation for a given platform in a language like C or C++. High Assurance SPIRAL proves that the implementation is equivalent to the specification written in the control engineer’s domain language. Our approach scales to problems involving floating-point calculations and provides highly optimized synthesized code. It is possible to estimate the available headroom to enable assurance/performance trade-offs under real-time constraints, and to synthesize multiple implementation variants to make attacks harder. At the core of High Assurance SPIRAL is the Hybrid Control Operator Language (HCOL), which leverages advanced mathematical constructs expressing the controller specification to provide high quality translation capabilities. Combined with a verified/certified compiler, High Assurance SPIRAL provides a comprehensive solution to the efficient synthesis of verifiable high assurance controllers. We demonstrate High Assurance SPIRAL's capability by co-synthesizing proofs and implementations for attack detection and sensor spoofing algorithms and deploying the code as ROS nodes on the Landshark unmanned ground vehicle and on a Synthetic Car in a real-time simulator.
Denial of Service (DoS) attacks disable network services for legitimate users. A McAfee report shows that eight out of ten Critical Infrastructure Providers (CIPs) surveyed had a significant Distributed DoS (DDoS) attack in 2010.1 Researchers have proposed many approaches for detecting these attacks in the past decade. Anomaly-based DoS detection is the most common. In this approach, the detector uses statistical features, such as the entropy of incoming packet header fields like source IP addresses or protocol type. It calculates the observed statistical feature and triggers an alarm if an extreme deviation occurs. However, intrusion detection systems (IDS) using entropy-based detection can be fooled by spoofing. An attacker can sniff the network to collect header field data of network packets coming from distributed nodes on the Internet and fuse it to calculate the entropy of normal background traffic. Then s/he can spoof attack packets to keep the entropy value in the expected range during the attack. In this study, we present a proof of concept entropy spoofing attack that deceives entropy based detection approaches. Our preliminary results show that spoofing attacks cause significant detection performance degradation.
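The entropy-based detection mechanism that the attack targets can be sketched in a few lines: compute the Shannon entropy of a header field over a traffic window and alarm on extreme deviation from a learned baseline. The thresholding form is a common textbook variant, shown here only to make the attack surface concrete.

```python
import math
from collections import Counter

def header_entropy(values):
    """Shannon entropy (bits) of a packet header field, e.g. the
    source IP addresses observed in a traffic window. A spoofing
    attacker crafts packets so this value stays in the expected
    range during an attack."""
    counts = Counter(values)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def is_anomalous(entropy, baseline, tolerance):
    """Simple anomaly rule: alarm when the observed entropy strays
    too far from the learned baseline."""
    return abs(entropy - baseline) > tolerance
```

A flood from a single source collapses the entropy toward zero and trips the alarm; an entropy-spoofing attacker instead distributes spoofed source addresses to mimic the baseline distribution.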
A variety of approaches exist for combining data from multiple sensors. The model-based approach combines data based on its support for or refutation of elements of the model, which in turn can be used to evaluate an experimental thesis. This paper presents a collection of algorithms for mapping various types of sensor data onto a thesis-based model and evaluating the truth or falsity of the thesis, based on the model. The use of this approach for autonomously arriving at findings and for prioritizing data is considered. Techniques for updating the model (instead of arriving at a true/false assertion) are also discussed.
The fast capture of the spread spectrum code is one of the key technologies of Direct Sequence Spread Spectrum (DSSS) communication. There are several traditional methods for fast capture, such as XFAST and AVERAGE. In this paper we propose a new algorithm based on time-domain samples and binary search according to the autocorrelation of the PN code. First, the sample rate of the received spread-spectrum signal is reduced to a quarter of the chip rate and determined with a specific method; then the signal is divided into four parts by the local PN code and accumulated into a new sequence. Finally, the synchronous pseudo-code is captured via the correlation of the two new reference sequences. Experimental results demonstrate that the proposed algorithm significantly improves the efficiency of capturing long PN codes in DSSS compared with traditional methods.
Parameter estimation is an important component of frequency-hopping communication. In particular, the accuracy and efficiency of hop-cycle estimation are significant for these applications. Traditional time-frequency methods, e.g., the Short Time Fourier Transform, cannot achieve high resolution in both time and frequency, owing to Heisenberg's uncertainty principle. In this paper we propose a novel algorithm based on the Short Time Fourier Transform (STFT) and Sparse Linear Regression (SLR). First, the signal is preprocessed by STFT and the peak information is extracted by a first-order differential method. Second, the hopping segment data is processed with SLR according to the dual time-frequency sparsity of the hopping signal. Finally, combining the statistical transition moments, an accurate estimate of the hop cycle is achieved. Simulation results demonstrate that the estimation algorithm is more accurate and efficient at low SNR than the traditional STFT.
To solve the problem of pulse sorting in a complex electromagnetic environment, we propose in this paper an improved pulse sorting method based on an in-depth analysis of the principle, advantages, and disadvantages of the PRI transform algorithm. The method builds on the traditional PRI transform algorithm, using spectral analysis of the PRI transform spectrum to estimate the PRI centre value of a jittered signal. Simulation results indicate that the improved sorting method overcomes the shortcoming of the traditional PRI jitter separation algorithm, which cannot effectively sort jittered pulse sequences, while remaining simple and accurate.
The collection of environmental light pollution data related to sea turtle nesting sites is a laborious and time consuming effort entailing the use of several pieces of measurement equipment, their transportation and calibration, the manual logging of results in the field, and subsequent transfer of the data to a computer for post-collection analysis. Serendipitously, the current generation of mobile smart phones (e.g., iPhone® 5) contains the requisite measurement capability, namely location data in aided GPS coordinates, magnetic compass heading, and elevation at the time an image is taken, image parameter data, and the image itself. The Turtle Habitat Environmental Light Measurement App (THELMA) is a mobile phone app whose graphical user interface (GUI) guides an untrained user through the image acquisition process in order to capture 360° of images with pointing guidance. It subsequently uploads the user-tagged images, all of the associated image parameters, and position, azimuth, elevation metadata to a central internet repository. Provision is also made for the capture of calibration images and the review of images before upload. THELMA allows for inexpensive, highly-efficient, worldwide crowdsourcing of calibratable beachfront lighting/light pollution data collected by untrained volunteers. This data can be later processed, analyzed, and used by scientists conducting sea turtle conservation in order to identify beach locations with hazardous levels of light pollution that may alter sea turtle behavior and necessitate human intervention after hatchling emergence.