This PDF file contains the front matter associated with SPIE
Proceedings Volume 8407, including the Title Page, Copyright
information, Table of Contents, and the Conference Committee listing.
An architecture for neural net multi-sensor data fusion is introduced and analyzed. The architecture consists of a set of independent sensor neural nets, one for each sensor, coupled to a fusion net. The neural net of each sensor is trained (from a representative data set of that particular sensor) to map to a hypothesis-space output. The decision outputs from the sensor nets are then used to train the fusion net to an overall decision. To begin the processing, the 3D point-cloud LIDAR data is classified into clustered objects based on a multi-dimensional mean-shift segmentation and classification. Similarly, the multi-band HSI data is spectrally classified by Stochastic Expectation-Maximization (SEM) into a classification map containing pixel classes. For sensor fusion, the spatial detections and spectral detections complement each other. They are fused into final detections by a cascaded neural network consisting of two levels of neural nets. The first level is the sensor level, comprising two neural nets: a spatial neural net and a spectral neural net. The second level consists of a single neural net, the fusion neural net. The success of the system in exploiting sensor synergism for enhanced classification is clearly demonstrated by applying this architecture to a November 2010 airborne data collection of LIDAR and HSI over the Gulfport, MS, area.
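A minimal sketch of the two-level cascade, using scikit-learn with synthetic stand-ins for the LIDAR and HSI features (the feature dimensions and network sizes are illustrative assumptions, not the authors' configuration):

```python
# Two-level cascaded fusion: one net per sensor, then a fusion net trained
# on the sensor nets' hypothesis-space (class-probability) outputs.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n, n_classes = 1000, 3
y = rng.integers(0, n_classes, n)
X_spatial = rng.normal(y[:, None], 1.0, (n, 8))    # stand-in for LIDAR cluster features
X_spectral = rng.normal(y[:, None], 1.5, (n, 20))  # stand-in for HSI class-map features

spatial_net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X_spatial, y)
spectral_net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X_spectral, y)

# The fusion net consumes the concatenated decision outputs of the sensor nets.
Z = np.hstack([spatial_net.predict_proba(X_spatial),
               spectral_net.predict_proba(X_spectral)])
fusion_net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500).fit(Z, y)
print("fused training accuracy (smoke test):", fusion_net.score(Z, y))
```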
Intensity interferometry (II) holds tremendous potential for remote sensing of space objects. We investigate the properties of a hybrid intensity interferometer concept in which information from an II is fused with information from a traditional imaging telescope. Although the instrument is not an imager, hybrid intensity interferometry measurements can be used to reconstruct an image. In previous work we investigated the effects of poor SNR on this image formation process. In this work, we go beyond the obviously deleterious effects of SNR to investigate reconstructed image quality as a function of the chosen support constraint. The benefits of fusing assumed perfect-yet-partial a priori information with traditional intensity interferometry measurements are explored and shown to result in increased sensitivity and improved reconstructed-image quality.
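The abstract does not specify the reconstruction algorithm; as a hedged illustration of how a support constraint enters image formation from Fourier-magnitude data, the sketch below runs a classic error-reduction iteration on synthetic data:

```python
# Support-constrained reconstruction from Fourier magnitudes (error-reduction
# iteration); the paper's actual image-formation method may differ.
import numpy as np

rng = np.random.default_rng(1)
N = 64
truth = np.zeros((N, N))
truth[24:40, 24:40] = rng.random((16, 16))     # object inside the known support
support = np.zeros((N, N), bool)
support[24:40, 24:40] = True

magnitude = np.abs(np.fft.fft2(truth))          # II supplies |F|, but no phase

img = rng.random((N, N))                        # random starting estimate
for _ in range(200):
    F = np.fft.fft2(img)
    F = magnitude * np.exp(1j * np.angle(F))    # impose the measured magnitude
    img = np.real(np.fft.ifft2(F))
    img[~support] = 0                           # impose the support constraint
    img = np.clip(img, 0, None)                 # and non-negativity

print("relative error:", np.linalg.norm(img - truth) / np.linalg.norm(truth))
```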
This paper addresses the problem of multi-source object classification in a context where objects of interest are part of a known taxonomy and the classification sources report at varying levels of specificity. This problem must address several technical challenges: a) support fusion of heterogeneous classification inputs, b) provide a computationally scalable approach that accommodates taxonomies with thousands of leaf nodes, and c) provide outputs that support tactical decision aids and are suitable inputs for subsequent fusion processes. This paper presents an approach that employs the Transferable Belief Model, Pignistic Transforms, and Bayesian Fusion to address these challenges.
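As a hedged illustration of the middle ingredient, the sketch below computes the pignistic transform of a TBM mass function over a small hypothesis frame; the frame and mass values are invented for the example.

```python
# Pignistic transform BetP: each mass m(A) is split equally over the
# elements of A, after normalising away any mass on the empty set.
def pignistic(m):
    """m maps frozensets (focal elements) to masses; returns BetP over singletons."""
    empty = m.get(frozenset(), 0.0)
    bet = {}
    for A, mass in m.items():
        if not A:
            continue
        share = mass / (len(A) * (1.0 - empty))
        for x in A:
            bet[x] = bet.get(x, 0.0) + share
    return bet

# A source reporting at varying specificity: "tank", "vehicle", "anything".
m = {frozenset({"tank"}): 0.5,
     frozenset({"tank", "truck"}): 0.3,
     frozenset({"tank", "truck", "car"}): 0.2}
print(pignistic(m))   # {'tank': 0.717, 'truck': 0.217, 'car': 0.067} approx.
```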
In this paper, we describe the progress we have achieved in developing a computationally efficient, grid-based
Bayesian fusion tracking system. In our approach, the probability surface is represented by a collection of
multidimensional polynomials, each computed adaptively on a grid of cells representing state space. Time
evolution is performed using a hybrid particle/grid approach and knowledge of the grid structure, while sensor
updates use a measurement-based sampling method with a Delaunay triangulation. We present an application
of this system to the problem of tracking a submarine target using a field of active and passive sonar buoys.
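A minimal sketch of a grid-based Bayesian predict/update cycle on a one-dimensional state grid; the paper's adaptive polynomial cells and Delaunay-based measurement sampling are well beyond this toy, and all numbers are illustrative:

```python
# Grid-based Bayes filter: diffuse the density for target motion, then
# multiply by a measurement likelihood and renormalise.
import numpy as np

cells = np.linspace(0.0, 100.0, 201)            # target position grid (km)
prior = np.ones_like(cells) / cells.size        # flat prior

def predict(p, sigma=2.0):
    """Time evolution: convolve with a Gaussian motion kernel."""
    dx = cells[1] - cells[0]
    kernel = np.exp(-0.5 * (np.arange(-10, 11) * dx / sigma) ** 2)
    kernel /= kernel.sum()
    return np.convolve(p, kernel, mode="same")

def update(p, z, sigma_z=3.0):
    """Sensor update: Gaussian range-measurement likelihood, e.g. from a buoy."""
    like = np.exp(-0.5 * ((cells - z) / sigma_z) ** 2)
    post = p * like
    return post / post.sum()

p = update(predict(prior), z=42.0)
print("MAP position estimate:", cells[np.argmax(p)])
```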
Layered sensing systems involve the operation of several layers of sensing with different capabilities integrated into one whole system. The integrated layers of sensing must share information and local decisions across layers for better situation awareness. This research focuses on the development of a model for decision making and fusion at the information level in layered sensing systems, using the cloud model for uncertainty processing. The addition of a new processing level to the Joint Directors of Laboratories (JDL) processing model is proposed, called "Information Assessment, Fusion, and Control (IAFC)". Through this level, the different layers of a layered sensing system evaluate information about a given situation in terms of threat level and make a decision. The information assessment and control processing module was able to assess the threat level of a situation accurately and to exchange assessments among all layers in order to determine the overall situation's threat level. The uncertain decisions were fused into a unified decision using the cloud model of uncertainty processing. This methodology adds a cognitive element to the information assessment module, leading to more accurate situation awareness.
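A hedged sketch of the cloud model's basic primitive, the normal cloud generator: expectation Ex, entropy En, and hyper-entropy He jointly encode the randomness and fuzziness of a qualitative concept (the "high threat" parameters below are invented for the example).

```python
# Normal cloud generator: each drop gets its own entropy En' ~ N(En, He^2),
# a position x ~ N(Ex, En'^2), and a certainty degree mu(x).
import numpy as np

def cloud_drops(Ex, En, He, n=1000, seed=2):
    rng = np.random.default_rng(seed)
    Enp = np.abs(rng.normal(En, He, n)) + 1e-12     # per-drop entropy (fuzziness)
    x = rng.normal(Ex, Enp)                          # drop positions (randomness)
    mu = np.exp(-(x - Ex) ** 2 / (2.0 * Enp ** 2))   # certainty degree of each drop
    return x, mu

# e.g. a "high threat" concept centred on a threat score of 0.8
x, mu = cloud_drops(Ex=0.8, En=0.1, He=0.02)
print("mean certainty of drops near 0.8:", round(mu[np.abs(x - 0.8) < 0.05].mean(), 3))
```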
Provenance is the information about the origin of the data inputs and the manipulations applied to obtain a final result. With the huge amount of information input and potential processing available in sensor networks, provenance is crucial for understanding the creation, manipulation, and quality of data and processes; maintaining provenance in a sensor network therefore has substantial advantages. In this paper, we concentrate on showing how provenance improves the outcome of a multi-modal sensor network with fusion. To make the ideas concrete and to show what maintaining provenance provides, we use as an example a sensor network composed of binary proximity sensors and cameras that monitors intrusions. Provenance provides improvements in many respects, such as sensing energy consumption, network lifetime, result accuracy, and node failure rate. We illustrate the improvement in the accuracy of the intruder's position in a target localization network by simulations.
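As one hedged illustration, the sketch below shows a possible provenance record for a fused detection in the proximity-sensor/camera example; the schema and field names are assumptions for illustration only.

```python
# A provenance record keeps the contributing sources and operations so the
# creation, manipulation, and quality of a fused result can be traced.
from dataclasses import dataclass, field

@dataclass
class Provenance:
    node_id: str                      # sensor or fusion node that produced the datum
    operation: str                    # e.g. "proximity-detect", "camera-confirm", "fuse"
    inputs: list = field(default_factory=list)   # upstream Provenance records

@dataclass
class Detection:
    position: tuple                   # estimated intruder location
    confidence: float
    provenance: Provenance

prox = Provenance("prox-17", "proximity-detect")
cam = Provenance("cam-03", "camera-confirm")
fused = Detection((12.4, 7.1), 0.93, Provenance("fusion-1", "fuse", [prox, cam]))
print(fused.provenance)
```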
It has been widely accepted that data fusion and information fusion methods can improve the accuracy and robustness of decision-making in structural health monitoring systems. Decision-level fusion is arguably just as beneficial when applied to integrated health monitoring systems: several decisions at low levels of abstraction may be produced by different decision-makers, but decision-level fusion is required at the final stage of the process to provide an accurate assessment of the health of the monitored system as a whole. An example of such integrated systems with complex decision-making scenarios is the integrated health monitoring of aircraft. A thorough understanding of the characteristics of the decision-fusion methodologies is a crucial step for successful implementation of such decision-fusion systems. In this paper, we first present the major information fusion methodologies reported in the literature, i.e., probabilistic, evidential, and artificial-intelligence-based methods. The theoretical basis and characteristics of these methodologies are explained and their performance is analyzed. Second, candidate methods from the above fusion methodologies, i.e., Bayesian, Dempster-Shafer, and fuzzy logic algorithms, are selected and their application is extended to decision fusion. Finally, fusion algorithms are developed based on the selected fusion methods, and their performance is tested on decisions generated from synthetic data and from experimental data. This paper also presents a modeling methodology, the cloud model, for generating synthetic decisions. Using the cloud model, both types of uncertainty involved in real decision-making, randomness and fuzziness, are modeled. Synthetic decisions are generated with an unbiased process and varying interaction complexities among decisions to provide a fair performance comparison of the selected decision-fusion algorithms. For verification purposes, implementation results of the developed fusion algorithms on structural health monitoring data collected from experimental tests are reported.
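As a hedged illustration of one of the candidate evidential methods, the sketch below combines two decision-makers' mass functions with Dempster's rule over a two-state health frame; the frame and mass values are invented for the example.

```python
# Dempster's rule of combination: multiply masses over intersecting focal
# elements and renormalise by the non-conflicting mass.
def dempster(m1, m2):
    combined, conflict = {}, 0.0
    for A, a in m1.items():
        for B, b in m2.items():
            C = A & B
            if C:
                combined[C] = combined.get(C, 0.0) + a * b
            else:
                conflict += a * b
    return {C: v / (1.0 - conflict) for C, v in combined.items()}

H, D = frozenset({"healthy"}), frozenset({"damaged"})
either = H | D                        # ignorance: "healthy or damaged"
m1 = {H: 0.6, D: 0.1, either: 0.3}    # decision-maker 1
m2 = {H: 0.5, D: 0.3, either: 0.2}    # decision-maker 2
print(dempster(m1, m2))               # fused masses sum to 1
```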
We consider the problem of distributed sensor information fusion by multiple autonomous robots within the
context of landmine detection. We assume that different landmines can be composed of different types of material
and robots are equipped with different types of sensors, while each robot has only one type of landmine detection
sensor on it. We introduce a novel technique that uses a market-based information aggregation mechanism
called a prediction market. Each robot is provided with a software agent that uses sensory input of the robot
and performs calculations of the prediction market technique. The result of the agent's calculations is a 'belief'
representing the confidence of the agent in identifying the object as a landmine. The beliefs from different
robots are aggregated by the market mechanism and passed on to a decision maker agent. The decision maker
agent uses this aggregate belief information about a potential landmine to decide which other robots should be deployed to its location, so that the landmine can be confirmed rapidly and accurately. Our experimental results show that, for identical data distributions and settings, our prediction market-based information aggregation technique yields higher object classification accuracy than two other commonly used techniques.
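A hedged sketch of market-style belief aggregation, with each robot agent's report weighted by its accumulated prediction score; the weighting rule, threshold, and numbers are assumptions, and the paper's actual market mechanism may differ.

```python
# Score-weighted aggregation of per-robot beliefs that an object is a landmine.
def aggregate(reports):
    """reports: list of (belief in [0,1], agent score from past accuracy)."""
    total = sum(score for _, score in reports)
    return sum(belief * score for belief, score in reports) / total

reports = [(0.9, 4.0),   # metal-detector agent, historically reliable
           (0.6, 1.5),   # IR-sensor agent
           (0.2, 0.5)]   # GPR agent, poor record on this soil type
belief = aggregate(reports)
deploy_more_robots = belief > 0.7     # decision-maker agent's threshold
print(round(belief, 3), deploy_more_robots)   # 0.767 True
```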
A novel approach for sharing knowledge between widely heterogeneous robotic agents is presented, drawing upon Gärdenfors' Conceptual Spaces approach [4]. The target microrobotic platforms considered are impoverished in computation, power, sensing, and communications compared to more traditional robotic platforms, owing to their small size. This produces novel challenges for the system to converge on an interpretation of events within the world, here focusing specifically on the task of recognizing the concept of a biohazard in an indoor setting.
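As a hedged illustration of the Conceptual Spaces idea, the sketch below represents concepts as prototype points in a space of quality dimensions and categorizes an observation by weighted nearest prototype; the dimensions, weights, and prototypes are invented for the example.

```python
# Concepts as prototypes in a space of quality dimensions; categorization is
# nearest-prototype under a salience-weighted metric shared between agents.
import numpy as np

# quality dimensions: (temperature, humidity, chemical-sensor response)
prototypes = {
    "normal":    np.array([22.0, 0.40, 0.05]),
    "biohazard": np.array([30.0, 0.80, 0.90]),
}
weights = np.array([0.1, 1.0, 5.0])   # per-dimension salience weights

def categorize(obs):
    dists = {c: np.sqrt((weights * (obs - p) ** 2).sum())
             for c, p in prototypes.items()}
    return min(dists, key=dists.get)

print(categorize(np.array([28.0, 0.75, 0.7])))   # -> "biohazard"
```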
We consider the scenario in which an autonomous platform searching or traversing a building may observe unstable masonry or may need to travel over unstable rubble. A purely behaviour-based system may handle these challenges but produce behaviour that works against long-term goals, such as reaching a victim as quickly as possible. We extend our work on ADAPT, a cognitive robotics architecture that incorporates 3D simulation and image fusion, to allow the robot to predict the behaviour of physical phenomena, such as falling masonry, and to take actions consonant with long-term goals. We experimentally evaluate a cognitive-only and a reactive-only approach to traversing a building filled with various numbers of challenges and compare their performance. The reactive-only approach succeeds only 38% of the time, while the cognitive-only approach succeeds 100% of the time. While the cognitive-only approach produces very impressive behaviour, our results indicate how much better the combination of cognitive and behaviour-based approaches can be.
We describe a spatial cognition model based on the rat's brain neurophysiology as a basis for new robotic navigation
architectures. The model integrates allothetic (external visual landmarks) and idiothetic (internal kinesthetic information)
cues to train either rat or robot to learn a path enabling it to reach a goal from multiple starting positions. It stands in
contrast to most robotic architectures based on SLAM, where a map of the environment is built to provide probabilistic
localization information computed from robot odometry and landmark perception. Allothetic cues suffer in general from perceptual ambiguity when trying to distinguish between places with equivalent visual patterns, while idiothetic cues suffer from imprecise motions and limited memory recall. We experiment with both types of cues in different maze configurations by training rats and robots to find the goal starting from a fixed location, and then testing them to reach the same target from new starting locations. We show that the robot, after having pre-explored a maze, can find a goal with improved efficiency, being able to (1) learn the correct route to reach the goal, (2) recognize places already visited, and (3) exploit allothetic and idiothetic cues to improve its performance. We finally contrast our biologically inspired approach with more traditional robotic approaches and discuss current work in progress.
We are building a robot cognitive architecture that constructs a real-time virtual copy of itself and its environment,
including people, and uses the model to process perceptual information and to plan its movements. This paper describes
the structure of this architecture.
The software components of this architecture include PhysX for the virtual world, OpenCV and the Point Cloud Library
for visual processing, and the Soar cognitive architecture that controls the perceptual processing and task planning. The
RS (Robot Schemas) language is implemented in Soar, providing the ability to reason about concurrency and time. This
Soar/RS component controls visual processing, deciding which objects and dynamics to render into PhysX, and the
degree of detail required for the task.
As the robot runs, its virtual model diverges from physical reality, and errors grow. The Match-Mediated Difference
component monitors these errors by comparing the visual data with corresponding data from virtual cameras, and
notifies Soar/RS of significant differences, e.g. a new object that appears, or an object that changes direction
unexpectedly.
Soar/RS can then run PhysX much faster than real-time and search among possible future world paths to plan the robot's
actions. We report experimental results in indoor environments.
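A minimal sketch of the difference-monitoring idea: compare a real camera frame against the corresponding virtual-camera render tile by tile, and report tiles whose mismatch is significant. The tiling, threshold, and function names are assumptions, not the Match-Mediated Difference component's actual design.

```python
# Flag regions where the physical camera and the virtual camera disagree,
# e.g. a new object that the virtual model does not yet contain.
import numpy as np

def significant_differences(real, virtual, tile=16, threshold=0.15):
    """Return (row, col, error) for tiles whose mean absolute difference is large."""
    flags = []
    for r in range(0, real.shape[0], tile):
        for c in range(0, real.shape[1], tile):
            err = np.abs(real[r:r+tile, c:c+tile] - virtual[r:r+tile, c:c+tile]).mean()
            if err > threshold:
                flags.append((r // tile, c // tile, round(float(err), 3)))
    return flags

rng = np.random.default_rng(9)
virtual = rng.random((64, 64))
real = virtual.copy()
real[32:48, 16:32] += 0.5            # a new object the virtual model lacks
print(significant_differences(real, virtual))   # tiles to report upward
```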
Command of support robots by the warfighter requires intuitive interfaces to quickly communicate high degree-of-freedom (DOF) information while leaving the hands unencumbered. Stealth operations rule out voice commands and
vision-based gesture interpretation techniques, as they often entail silent operations at night or in other low visibility
conditions. Targeted at using bio-signal inputs to set navigation and manipulation goals for the robot (say, simply by
pointing), we developed a system based on an electromyography (EMG) "BioSleeve", a high density sensor array for
robust, practical signal collection from forearm muscles. The EMG sensor array data is fused with inertial measurement
unit (IMU) data. This paper describes the BioSleeve system and presents initial results of decoding robot commands
from the EMG and IMU data using a BioSleeve prototype with up to sixteen bipolar surface EMG sensors. The
BioSleeve is demonstrated on the recognition of static hand positions (e.g. palm facing front, fingers upwards) and on
dynamic gestures (e.g. hand wave). In preliminary experiments, over 90% correct recognition was achieved on five static
and nine dynamic gestures. We use the BioSleeve to control a team of five LANdroid robots in individual and
group/squad behaviors. We define a gesture composition mechanism that allows the specification of complex robot
behaviors with only a small vocabulary of gestures/commands, and we illustrate it with a set of complex orders.
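A hedged sketch of one plausible decoding pipeline for such data: RMS features per EMG channel concatenated with IMU means, fed to a linear SVM over synthetic trials. The real BioSleeve decoder is not described at this level in the abstract.

```python
# Gesture decoding from fused EMG + IMU features (synthetic stand-in data).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)
n_trials, n_emg, n_samples = 200, 16, 256      # 16 bipolar surface EMG channels
gestures = rng.integers(0, 5, n_trials)        # 5 static gestures
emg = rng.normal(0, 1 + 0.3 * gestures[:, None, None], (n_trials, n_emg, n_samples))
imu = rng.normal(gestures[:, None], 0.2, (n_trials, 3))   # mean roll/pitch/yaw

rms = np.sqrt((emg ** 2).mean(axis=2))         # one RMS amplitude per EMG channel
X = np.hstack([rms, imu])                      # fused EMG + IMU feature vector
clf = SVC(kernel="linear").fit(X[:150], gestures[:150])
print("held-out accuracy:", clf.score(X[150:], gestures[150:]))
```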
Information Fusion Approaches and Algorithms III (Biometrics-related)
Due to the rapid growth of biometric technology, template protection has become crucial to secure the integrity of biometric security systems and to prevent unauthorized access. Cancelable biometrics is emerging as one of the best solutions for securing biometric identification and verification systems. We present a novel technique for robust cancelable template generation that takes advantage of multimodal biometrics using feature-level fusion. Feature-level fusion of different facial features is applied to generate the cancelable template. The proposed algorithm is based on multi-fold random projection and a fuzzy communication scheme. One of the main difficulties in cancelable template generation is preserving the interclass variance of the features. We have found that interclass variations of the features that are lost during multi-fold random projection can be recovered by fusing different feature subsets and projecting into a new feature domain. By applying the multimodal technique at the feature level, we enhance the interclass variability and hence improve the performance of the system. We have tested the system with classifier fusion for different feature subsets and different cancelable template fusions. Experiments have shown that the cancelable template improves the performance of the biometric system compared with the original template.
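A minimal sketch of the random-projection ingredient: a user-key-seeded projection yields a template that can be revoked simply by issuing a new key. The paper's multi-fold projection and feature-level fusion scheme is more elaborate than this.

```python
# Cancelable template via key-seeded random projection: the key determines
# the projection matrix, so a compromised template is revoked by re-keying.
import numpy as np

def cancelable_template(feature_vec, user_key, out_dim=64):
    rng = np.random.default_rng(user_key)
    R = rng.normal(0, 1, (out_dim, feature_vec.size)) / np.sqrt(out_dim)
    return R @ feature_vec          # non-invertible for out_dim < feature dim

face_features = np.random.default_rng(4).normal(0, 1, 256)
t1 = cancelable_template(face_features, user_key=1234)
t2 = cancelable_template(face_features, user_key=9999)   # re-issued template
print("old/new template correlation:", round(np.corrcoef(t1, t2)[0, 1], 3))
```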
Gait recognition has recently become a popular topic in the field of biometrics. However, the main hurdle is the insufficient recognition rate in the presence of low-quality samples. The main focus of this paper is to investigate how the performance of a gait recognition system can be improved using additional information about users' behavioral patterns and the context in which samples were taken. The results show that combining context information with biometric data improves the performance of the system at very low cost. The amount of improvement depends on the distinctiveness of the behavioral patterns and the quality of the gait samples. Using appropriately distinctive behavioral models, it is possible to achieve a 100% recognition rate.
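One hedged way to read "combining context with biometric data": fuse a gait-matcher likelihood ratio with a behavioural-context prior via Bayes' rule. The paper's actual combination method is not specified here, and the numbers are illustrative.

```python
# Bayesian fusion of a gait score with a behavioural-context prior:
# posterior odds = likelihood ratio * prior odds.
def fuse(gait_likelihood_ratio, context_prior):
    prior_odds = context_prior / (1 - context_prior)
    odds = gait_likelihood_ratio * prior_odds
    return odds / (1 + odds)

# A low-quality gait sample gives a weak 3:1 likelihood ratio for user A,
# but user A habitually passes this door at this time (context prior 0.6).
print(round(fuse(3.0, 0.6), 3))   # 0.818, versus 0.75 with a flat 0.5 prior
```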
This paper presents a novel application of Evidential Reasoning to Threat Assessment for critical infrastructure
protection. A fusion algorithm based on the PCR5 Dezert-Smarandache fusion rule is proposed which fuses alerts
generated by a vision-based behaviour analysis algorithm and a priori watch-list intelligence data. The fusion algorithm
produces a prioritised event list according to a user-defined set of event-type severity or priority weightings. Results
generated from application of the algorithm to real data and Behaviour Analysis alerts captured at London's Heathrow
Airport under the EU FP7 SAMURAI programme are presented. A web-based demonstrator system is also described
which implements the fusion process in real-time. It is shown that this system significantly reduces the data deluge
problem, and directs the user's attention to the most pertinent alerts, enhancing their Situational Awareness (SA). The
end-user is also able to alter the perceived importance of different event types in real-time, allowing the system to adapt
rapidly to changes in priorities as the situation evolves. One of the key challenges associated with fusing information
deriving from intelligence data is the issue of Data Incest. Techniques for handling Data Incest within Evidential
Reasoning frameworks are proposed, and comparisons are drawn with respect to Data Incest management techniques
that are commonly employed within Bayesian fusion frameworks (e.g. Covariance Intersection). The challenges associated with simultaneously dealing with conflicting information and Data Incest in Evidential Reasoning frameworks are also discussed.
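A minimal sketch of PCR5 for two sources over singleton hypotheses: conflicting mass is redistributed back to the conflicting hypotheses in proportion to the masses that generated the conflict. The frame and masses are invented; the real system fuses richer behaviour-alert and watch-list evidence.

```python
# PCR5 combination of two basic belief assignments over singletons.
def pcr5(m1, m2):
    hyps = set(m1) | set(m2)
    out = {X: m1.get(X, 0.0) * m2.get(X, 0.0) for X in hyps}   # conjunctive part
    for X in hyps:
        for Y in hyps:
            if X == Y:
                continue
            a, b = m1.get(X, 0.0), m2.get(Y, 0.0)
            if a + b > 0:
                out[X] += a * a * b / (a + b)   # X's share of conflict m1(X)m2(Y)
            c, d = m2.get(X, 0.0), m1.get(Y, 0.0)
            if c + d > 0:
                out[X] += c * c * d / (c + d)   # X's share of conflict m2(X)m1(Y)
    return out

alerts = {"hostile": 0.7, "benign": 0.3}       # behaviour-analysis source
watchlist = {"hostile": 0.4, "benign": 0.6}    # a priori intelligence source
print(pcr5(alerts, watchlist))                  # masses still sum to 1
```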
In the early phase of the Pleiades program, CNES (the French Space Agency) specified and developed a fully automatic mosaicing processing unit in order to generate satellite image mosaics under operational conditions. This tool can automatically put each input image into a common geometry, homogenize the radiometry, and generate orthomosaics using stitching lines.

As the image quality commissioning phase of Pleiades1A is ongoing, this mosaicing process is being tested for the first time under operational conditions. The newly launched French high-resolution satellite can acquire adjacent images for the French Civil and Defense User Ground Segments. This paper presents the very first results of mosaicing Pleiades1A images.

Beyond Pleiades' use, our mosaicing tool can process a wide variety of images, including other satellite and airborne acquisitions, using automatically extracted or external ground control points, and offering time-based image superposition and more. This paper also presents the design of the mosaicing tool and describes the processing workflow and the additional capabilities and applications.
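A toy sketch of the radiometric homogenization and stitching-line steps on two overlapping, already-georeferenced strips; this is entirely illustrative, as the operational tool's algorithms are not described at this level.

```python
# Gain-match two overlapping strips, then place a stitching line where
# the strips agree best within the overlap.
import numpy as np

rng = np.random.default_rng(10)
scene = rng.random((100, 160))
left = scene[:, :100].copy()
right = scene[:, 60:].copy() * 1.2            # 40-px overlap, radiometric drift

ov_l, ov_r = left[:, 60:100], right[:, :40]
right *= ov_l.mean() / ov_r.mean()            # radiometric homogenization

diff = np.abs(left[:, 60:100] - right[:, :40]).mean(axis=0)
seam = int(np.argmin(diff))                   # stitching line within the overlap
mosaic = np.hstack([left[:, :60 + seam], right[:, seam:]])
print(mosaic.shape, "seam at overlap column", seam)
```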
A novel multispectral video system is currently being developed that continuously and autonomously optimizes both its spectral range channels and the exposure time of each channel under dynamic scenes, varying from a short-range clear scene to long-range poor visibility. In a highly scattering medium, the transparency and contrast of channels with spectral ranges in the near infrared are superior to those of the visible channels, particularly the blue range. Longer-wavelength spectral ranges that induce higher contrast are therefore favored. Images of three spectral channels are fused and displayed for (pseudo) color visualization as an integrated high-contrast video stream.

In addition to the dynamic optimization of the spectral channels, an optimal real-time exposure time is adjusted simultaneously and autonomously for each channel, using a criterion of maximum average signal derived dynamically from previous frames of the video stream (Patent Application, International Publication Number WO2009/093110 A2, 30.07.2009). This configuration enables dynamic compatibility with the optimal exposure time of a dynamically changing scene. It also maximizes the signal-to-noise ratio and compensates each channel for the specified value of daylight reflections and sensor response in each spectral range.

A possible implementation is a color video camera based on four synchronized, highly responsive CCD imaging detectors, attached to a 4-CCD dichroic prism and combined with a common, color-corrected lens. A Principal Components Analysis (PCA) technique is then applied for real-time "dimensional collapse" in color space, in order to select and fuse, for clear color visualization, the three most significant principal channels out of at least four, characterized by high contrast and rich detail in the image data.
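A hedged sketch of the PCA "dimensional collapse": project a four-channel frame onto its three most significant principal components and normalize them for pseudo-color display. The frame here is random stand-in data, not imagery from the described camera.

```python
# Collapse a 4-band frame to the 3 most significant principal components
# and map them to an RGB image for pseudo-colour visualization.
import numpy as np

rng = np.random.default_rng(5)
h, w, bands = 120, 160, 4
frame = rng.random((h, w, bands))                 # stand-in for a B/G/R/NIR frame

X = frame.reshape(-1, bands)
X = X - X.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(X.T))    # channel covariance (4x4)
top3 = eigvecs[:, np.argsort(eigvals)[::-1][:3]]  # 3 most significant axes
pcs = (X @ top3).reshape(h, w, 3)

# normalise each component to [0, 1] for display
rgb = (pcs - pcs.min((0, 1))) / np.ptp(pcs, axis=(0, 1))
print(rgb.shape, float(rgb.min()), float(rgb.max()))
```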
We present the design and evaluation of a new demonstrator rifle-sight viewing system containing direct view, a red aim point, and fusion of an (uncooled, LWIR) thermal sensor with a digital image intensifier. Our goal is to create a system
that performs well under a wide variety of (weather) conditions during daytime and nighttime and combines the
advantages of the various sensor systems. A real-time colour image with salient hot targets is obtained from the night
vision sensors by implementing the Colour-the-Night fusion method (Hogervorst & Toet, 2010) on the on-board
processor. The prototype system was evaluated in a series of field trials with military observers performing detection and
identification tasks. The tests showed that during daytime the addition of a thermal image to direct vision is
advantageous, e.g. for the detection of hot targets. At nighttime, the fusion of thermal and image intensified imagery
results in increased situational awareness and improved detection of (hot) targets. For identification of small (handheld)
objects, the technology needs to be further refined.
Information Fusion Approaches and Algorithms IV (Human-in-the-loop)
Information fusion is becoming increasingly human-centric. While past systems typically relegated humans to the role of
analyzing a finished fusion product, current systems are exploring the role of humans as integral elements in a modular
and extensible distributed framework where many tasks can be accomplished by either human or machine performers.
For example, "participatory sensing" campaigns give humans the role of "soft sensors" by uploading their direct
observations or as "soft sensor platforms" by using mobile devices to record human-annotated, GPS-encoded high
quality photographs, video, or audio. Additionally, in the "human-in-the-loop" role, individuals or teams using advanced human-computer interface (HCI) tools, such as stereoscopic 3D visualization, haptic interfaces, or aural "sonification" interfaces, can effectively engage the innate human capability for pattern matching, anomaly identification, and semantic-based contextual reasoning to interpret an evolving situation.
The Pennsylvania State University is participating in a Multi-disciplinary University Research Initiative (MURI)
program funded by the U.S. Army Research Office to investigate fusion of hard and soft data in counterinsurgency
(COIN) situations. In addition to the importance of this research for Intelligence Preparation of the Battlefield (IPB),
many of the same challenges and techniques apply to health and medical informatics, crisis management, crowd-sourced
"citizen science", and monitoring environmental concerns. One of the key challenges that we have encountered is the
development of data formats, protocols, and methodologies to establish an information architecture and framework for
the effective capture, representation, transmission, and storage of the vastly heterogeneous data and accompanying
metadata, including capabilities and characteristics of human observers, uncertainty of human observations, "soft"
contextual data, and information pedigree. This paper describes our findings and offers insights into the role of data
representation in hard/soft fusion.
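As one hedged illustration of the data-representation challenge, the sketch below defines a possible record for a single soft observation with source-characterization, uncertainty, and pedigree fields; the field names are assumptions for illustration, not the MURI program's actual schema.

```python
# A candidate record format for one human ("soft") observation.
from dataclasses import dataclass, field

@dataclass
class SoftObservation:
    observer_id: str
    report_text: str                     # free-text human annotation
    gps: tuple                           # (lat, lon) from the mobile device
    timestamp: str                       # ISO 8601
    observer_reliability: float          # source characterization, 0..1
    uncertainty: str                     # hedged language, e.g. "possibly"
    pedigree: list = field(default_factory=list)   # chain of prior handlers

obs = SoftObservation("obs-042", "two trucks near the checkpoint", (34.52, 69.17),
                      "2012-04-23T14:05:00Z", 0.8, "possibly",
                      ["uploaded-via-app", "transcribed"])
print(obs)
```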
Utilization of human participants as "soft sensors" is becoming increasingly important for gathering information related
to a wide range of phenomena including natural and man-made disasters, environmental changes over time, crime
prevention, and other roles of the "citizen scientist." The ubiquity of advanced mobile devices is facilitating the role of
humans as "hybrid sensor platforms", allowing them to gather data (e.g. video, still images, GPS coordinates), annotate
it based on their intuitive human understanding, and upload it using existing infrastructure and social networks.
However, this new paradigm presents many challenges related to source characterization, effective tasking, and
utilization of massive quantities of physical sensor, human-based, and hybrid hard/soft data in a manner that facilitates
decision making instead of simply amplifying information overload.
In the Joint Directors of Laboratories (JDL) data fusion process model, "level 4" fusion is a meta-process that attempts
to improve performance of the entire fusion system through effective source utilization. While there are well-defined
approaches for tasking and categorizing physical sensors, these methods fall short when attempting to effectively utilize
a hybrid group of physical sensors and human observers. While physical sensor characterization can rely on statistical
models of performance (e.g. accuracy, reliability, specificity, etc.) under given conditions, "soft" sensors add the
additional challenges of characterizing human performance, tasking without inducing bias, and effectively balancing
strengths and weaknesses of both human and physical sensors. This paper addresses the challenges of the evolving
human-centric fusion paradigm and presents cognitive, perceptual, and other human factors that help to understand,
categorize, and augment the roles and capabilities of humans as observers in hybrid systems.
In this paper, we describe the construction of a soundtrack that fuses stock market data with information taken from
tweets. This soundtrack, or auditory display, presents the numerical and text data in such a way that anomalous events
may be readily detected, even by untrained listeners. The soundtrack generation is flexible, allowing an individual
listener to create a unique audio mix from the available information sources. Properly constructed, the display exploits
the auditory system's sensitivities to periodicities, to dynamic changes, and to patterns. This type of display could be
valuable in environments that demand high levels of situational awareness based on multiple sources of incoming
information.
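A minimal sketch of the idea: map a normalized data series to sine-tone pitches and write the result as a WAV soundtrack. The pitch mapping and tone length are illustrative choices, not the authors' design.

```python
# Sonify a data stream as a sequence of sine tones written to a WAV file.
import numpy as np
import wave

rate, tone_len = 44100, 0.25                   # samples/s, seconds per data point
data = np.cumsum(np.random.default_rng(6).normal(0, 1, 40))   # stand-in series
norm = (data - data.min()) / (data.max() - data.min())
freqs = 220.0 * 2 ** (norm * 2)                # map to two octaves above A3

t = np.arange(int(rate * tone_len)) / rate
audio = np.concatenate([np.sin(2 * np.pi * f * t) for f in freqs])
pcm = (audio * 32767 * 0.5).astype(np.int16)   # 16-bit PCM at half amplitude

with wave.open("soundtrack.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(rate)
    w.writeframes(pcm.tobytes())
print("wrote", len(pcm) / rate, "seconds of audio")
```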
One of the most keenly felt issues in the defence domain is that of having huge quantities of data, stored in databases and acquired from field sensors, without being able to infer information from them. Databases are usually updated continuously with observations and contain heterogeneous data. Deep and continuous analysis of the data could mine useful correlations, explain relations existing among data, and cue searches for further evidence.

The solution to this problem appears to involve both the domain of Data Mining and the domain of high-level Data Fusion, that is, Situation Assessment, Threat Assessment, and Process Refinement, also synthesised as Situation Awareness.

The focus of this paper is the definition of an architecture for a system that adopts data mining techniques to adaptively discover clusters of information and relations among them, to classify the observations acquired, and to use the derived knowledge model and classification to assess situations and threats and to refine the search for evidence. The sources of information taken into account are those related to the intelligence domain, such as IMINT, HUMINT, ELINT, COMINT, and other non-conventional sources. The algorithms applied are unsupervised and supervised classification for rule exploitation, and adaptively built Hidden Markov Models for situation and threat assessment.
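As a hedged illustration of the last ingredient, the sketch below runs the HMM forward algorithm over discretized intelligence cues to maintain a posterior over threat levels; all matrices are invented for the example, whereas the paper builds its models adaptively.

```python
# HMM forward algorithm: hidden states are threat levels, observations are
# discretised cues mined from the intelligence streams.
import numpy as np

A = np.array([[0.8, 0.2, 0.0],     # transition: low -> {low, med, high}
              [0.1, 0.7, 0.2],
              [0.0, 0.3, 0.7]])
B = np.array([[0.7, 0.2, 0.1],     # emission: P(cue | threat level)
              [0.2, 0.5, 0.3],
              [0.1, 0.3, 0.6]])
pi = np.array([0.9, 0.09, 0.01])   # initial threat-level distribution

obs = [0, 1, 2, 2, 2]              # cue sequence from IMINT/HUMINT/ELINT/COMINT
alpha = pi * B[:, obs[0]]
for o in obs[1:]:
    alpha = (alpha @ A) * B[:, o]
alpha /= alpha.sum()
print("P(threat level | observations):", alpha.round(3))
```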
Analysts are faced with mountains of data, and finding that relevant piece of information is the proverbial needle in a haystack, only with dozens of haystacks. Analysis tools that facilitate identifying causal relationships across multiple data sets are sorely needed. 21st Century Systems, Inc. (21CSi) has initiated research called Causal-View, a causal data-mining visualization tool, to address this challenge. Causal-View is built on an agent-enabled framework, and much of its processing happens in the background. When a user requests information, Data Extraction Agents launch to gather it. This initial search is a raw, Monte Carlo-type search designed to gather everything available that may have relevance to an individual, location, associations, and more. This data is then processed by Data-Mining Agents, which are driven by user-supplied feature parameters. If the analyst is looking to see whether the individual frequents a known haven for insurgents, he may request information on the individual's last known locations. Or, if the analyst is trying to see whether there is a pattern in the individual's contacts, the mining agent can be instructed with the type and relevance of the information fields to examine. The same data is extracted from the database, but the Data-Mining Agents customize the feature set to determine the causal relationships the user is interested in. At this point, Hypothesis Generation and Data Reasoning Agents take over to form conditional hypotheses about the data and to pare the data, respectively. The newly formed information is then published to the agent communication backbone of Causal-View to be displayed. Causal-View provides causal analysis tools to fill the gaps in the causal chain. We present here the Causal-View concept, our initial research into data mining tools that assist in forming causal relationships, and our initial findings.
Information Fusion Systems and Evaluation Measures
Information assurance is a critical component of any organization's data network. Trustworthiness of the sensor data,
especially in the case of wireless sensor networks (WSNs), is an important metric for any application that requires
situational awareness. In a WSN, information packets are typically not encrypted and the nodes themselves could be
located in the open, leaving them susceptible to tampering and physical degradation. In order to develop a method to
assess trustworthiness in WSNs, we have utilized statistical trustworthiness metrics and have implemented an agent-based simulation platform that can perform various trustworthiness measurement experiments for various WSN
operating scenarios. Different trust metrics are used against multiple vulnerabilities to detect anomalous behavior and
node failure as well as malicious attacks. The simulation platform simulates WSNs with various topologies, routing
algorithms, battery and power consumption models, and various types of attacks and defense mechanisms. Additionally,
we adopt information entropy based techniques to detect anomalous behavior. Finally, detection techniques are fused to
provide various metrics, and various trustworthiness metrics are fused to provide aggregate trustworthiness for the
purpose of situational awareness.
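A minimal sketch of an information-entropy check of the kind described: flag a node whose recent reading entropy deviates from its own history, a signal that can then be fused with other trust metrics. The thresholds, sensor range, and data are illustrative assumptions.

```python
# Entropy-based anomaly detection for a WSN node: a tampered or failed node
# often produces readings with abnormally low (or high) entropy.
import numpy as np

def reading_entropy(samples, bins=20, value_range=(0.0, 40.0)):
    """Shannon entropy of readings over the sensor's fixed output range."""
    counts, _ = np.histogram(samples, bins=bins, range=value_range)
    p = counts[counts > 0] / counts.sum()
    return -(p * np.log2(p)).sum()

rng = np.random.default_rng(7)
history = [reading_entropy(rng.normal(20, 2, 100)) for _ in range(50)]  # normal node
mu, sd = np.mean(history), np.std(history)

stuck = np.full(100, 20.0) + rng.normal(0, 0.01, 100)   # tampered/failed node
h = reading_entropy(stuck)
print("anomalous" if abs(h - mu) > 3 * sd else "normal", round(float(h), 2))
```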
Sensor data fusion is and has been a topic of considerable research, but a rigorous and quantitative understanding of the benefits of fusing specific types of sensor data remains elusive. Often, sensor fusion is performed on an ad hoc basis with the assumption that overall detection capabilities will improve, only to discover later, after expensive and time-consuming laboratory and/or field testing, that little advantage was gained.

The work presented here discusses these issues with theoretical and practical considerations in the context of fusing chemical sensors with binary outputs. Results are given for the potential performance gains one could expect from such systems, as well as the practical difficulties involved in implementing an optimal Bayesian fusion strategy in realistic scenarios. Finally, the biases that inaccurate statistical estimates introduce into the results, and their consequences, are discussed.
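A hedged sketch of the optimal Bayesian fusion rule for binary detectors (the classic Chair-Varshney form), which also shows where inaccurate estimates of the detection and false-alarm rates would bias the result; all rates and the prior are invented for the example.

```python
# Chair-Varshney fusion: each binary decision contributes a log-likelihood-
# ratio weight derived from the sensor's estimated Pd and Pfa.
import numpy as np

pd = np.array([0.90, 0.75, 0.60])     # per-sensor detection probabilities
pfa = np.array([0.05, 0.10, 0.20])    # per-sensor false-alarm probabilities
prior_present = 0.01                  # prior probability the chemical is present

def fused_llr(decisions):
    """decisions: array of 0/1 sensor outputs."""
    w1 = np.log(pd / pfa)                 # weight when a sensor says "detect"
    w0 = np.log((1 - pd) / (1 - pfa))     # weight when it says "no detect"
    return np.where(decisions == 1, w1, w0).sum()

threshold = np.log((1 - prior_present) / prior_present)
u = np.array([1, 1, 0])
# two of three detections still fail to overcome the low prior here
print("declare detection:", fused_llr(u) > threshold)
```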
An adaptive image fusion system based on neural network principles has been realized. It works with digitized video sequences from visible-band and infrared-band sensors, and is able to produce an optimally fused image for a wide range of lighting conditions through an adaptive change of the fusion algorithm. The change is driven by a change in the measured statistics of the input images. The best algorithm for a particular input is found with the help of an objective measurement of fusion-process quality.
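A minimal sketch of the adaptive selection loop: apply several candidate fusion algorithms to a registered visible/IR pair and keep the one that scores best on an objective quality measure, here plain image entropy as a stand-in for the system's actual metric; the candidates and data are illustrative.

```python
# Pick the fusion algorithm that maximises an objective quality score.
import numpy as np

def entropy(img, bins=64):
    counts, _ = np.histogram(img, bins=bins, range=(0, 1))
    p = counts[counts > 0] / counts.sum()
    return -(p * np.log2(p)).sum()

candidates = {
    "average":     lambda v, ir: 0.5 * (v + ir),
    "max-select":  np.maximum,
    "ir-weighted": lambda v, ir: 0.3 * v + 0.7 * ir,
}

rng = np.random.default_rng(8)
vis, ir = rng.random((2, 120, 160))       # stand-ins for the two registered sensors
scores = {name: entropy(f(vis, ir)) for name, f in candidates.items()}
best = max(scores, key=scores.get)
print("selected fusion algorithm:", best, scores)
```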