This PDF file contains the front matter associated with SPIE
Proceedings Volume 7345, including the Title Page, Copyright
information, Table of Contents, and the Conference Committee listing.
We have tested a prototype dual-band NVG system consisting of two NVGs fitted with filters that split the NVG
sensitive range into a short (visual) and a long wavelength (NIR) band. The Color-the-night technique (see Hogervorst &
Toet, SPIE D&S '08) was used to fuse the images of the two sensors. We designed a color scheme especially optimized
for the detection of camouflaged targets. The added value of this system was evaluated in an experiment in which
observers had to detect targets (green and blue tubes). Daytime images were taken with a normal camera, and NVG
images were recorded at night. Performance was tested for i) the visual band only, ii) the NIR-band only, iii) normal
NVG, iv) daytime imagery, and v) color-fused dual-band NVG. The results show that each individual band revealed only some of the targets, whereas most targets were detected with the dual-band system, with performance comparable to that of whichever band presented a particular target best. Our evaluation shows the added value of dual-band over single-band NVG for the detection of targets, and suggests improved situational awareness and perceived depth.
We present the design and first test results of the TRICLOBS (TRI-band Color Low-light OBServation) system. The TRICLOBS is an all-day, all-weather surveillance and navigation tool. Its sensor suite consists of two digital image
intensifiers (Photonis ICUs) and an uncooled longwave infrared microbolometer (XenICS Gobi 384). The night vision
sensor suite registers the visual (400-700 nm), the near-infrared (700-1000 nm) and the longwave infrared (8-14 μm)
bands of the electromagnetic spectrum. The optical axes of the three cameras are aligned, using two dichroic beam
splitters: an ITO filter to reflect the LWIR part of the incoming radiation into the thermal camera, and a B43-958 hot
mirror to split the transmitted radiation into a visual and NIR part. The individual images can be monitored through two
LCD displays. The TRICLOBS provides both digital and analog video output. The digital video signals can be
transmitted to an external processing unit through an Ethernet connection. The analog video signals can be digitized and
stored on on-board hard disks. An external processor is deployed to apply a fast lookup-table-based color transform (the
Color-the-Night color mapping principle) to represent the TRICLOBS image in natural daylight colors (using
information in the visual and NIR bands) and to maximize the detectability of thermal targets (using the LWIR signal).
The external processor can also be used to enhance the quality of all individual sensor signals, e.g. through noise
reduction and contrast enhancement.
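The lookup-table color mapping described above can be illustrated with a small sketch. The following Python fragment is not the authors' Color-the-Night implementation; it merely shows, under assumed inputs (a co-registered daytime RGB reference scene, normalized visual/NIR frames, and an arbitrary bin count), how a two-band intensity pair can be mapped to natural daylight colors through a precomputed table.

```python
import numpy as np

def build_lut(vis_ref, nir_ref, rgb_ref, bins=64):
    """Build a (bins x bins x 3) lookup table mapping quantized (visual, NIR)
    intensity pairs to the average daytime RGB color observed for that pair
    in a co-registered reference scene."""
    vi = np.clip((vis_ref * bins).astype(int), 0, bins - 1)
    ni = np.clip((nir_ref * bins).astype(int), 0, bins - 1)
    lut_sum = np.zeros((bins, bins, 3))
    lut_cnt = np.zeros((bins, bins, 1))
    np.add.at(lut_sum, (vi.ravel(), ni.ravel()), rgb_ref.reshape(-1, 3))
    np.add.at(lut_cnt, (vi.ravel(), ni.ravel()), 1.0)
    return lut_sum / np.maximum(lut_cnt, 1.0)

def apply_lut(vis, nir, lut):
    """Color a night-time (visual, NIR) image pair with the daytime LUT."""
    bins = lut.shape[0]
    vi = np.clip((vis * bins).astype(int), 0, bins - 1)
    ni = np.clip((nir * bins).astype(int), 0, bins - 1)
    return lut[vi, ni]                        # H x W x 3 false-color image

# Example with random stand-in imagery (all inputs normalized to [0, 1]).
h, w = 240, 320
vis_ref, nir_ref = np.random.rand(h, w), np.random.rand(h, w)
rgb_ref = np.random.rand(h, w, 3)             # daytime reference colors
lut = build_lut(vis_ref, nir_ref, rgb_ref)
colored = apply_lut(np.random.rand(h, w), np.random.rand(h, w), lut)
```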
The real-time fusion of imagery from two or more complementary sensors offers significant operational benefits for both
operator-in-the-loop and automated processing systems. This paper reports on a new image fusion framework that can be
used to maximise detection, recognition and identification performance within the context of low false-alarm rate
operation. The Intelligent Image Fusion (I2F) architecture presented here allows exploitation of data at the information
level as well as at the pixel-level, and can do so in an adaptable and intelligent manner. In this paper the architecture is
examined in terms of design, applicability to a range of tasks, and performance factors such as adaptability, flexibility
and utility. The relationship between algorithm design and hardware implementation, and the consequential impact on
system performance, is also reviewed. Particular consideration is given to size, weight and power constraints that exist
for some systems and their implications for processing optimisation and implementation on different processing
platforms. Results are presented from the outcome of quantitative studies, development programmes and system trials.
We describe a novel scalable approach for the management of a large number of Pan-Tilt-Zoom (PTZ) cameras
deployed outdoors for persistent tracking of humans and vehicles, without resorting to the large fields of view of
associated static cameras. Our system, Active Collaborative Tracking - Vision (ACT-Vision), is essentially a real-time
operating system that can control hundreds of PTZ cameras to ensure uninterrupted tracking of target objects while
maintaining image quality and coverage of all targets using a minimal number of sensors. The system ensures the
visibility of targets between PTZ cameras by using criteria such as distance from sensor and occlusion.
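The abstract does not detail the hand-off logic, but the kind of criterion-based camera-to-target assignment it mentions (distance from sensor, occlusion, minimal number of active sensors) can be sketched as a simple greedy rule. The data structures and the occlusion test below are illustrative assumptions, not the ACT-Vision algorithm.

```python
import math

def assign_cameras(targets, cameras, occluded):
    """Greedy hand-off sketch: give each target to the closest camera that is
    not occluded for it, preferring cameras that are still unused so the
    number of active sensors stays small.  `occluded(cam_id, tgt_id)` is a
    caller-supplied visibility test (e.g., from a site model)."""
    assignment, used = {}, set()
    for tgt_id, tgt_pos in targets.items():
        candidates = [
            (cam_id in used, math.dist(cam_pos, tgt_pos), cam_id)
            for cam_id, cam_pos in cameras.items()
            if not occluded(cam_id, tgt_id)
        ]
        if candidates:
            _, _, best = min(candidates)   # unused cameras first, then nearest
            assignment[tgt_id] = best
            used.add(best)
    return assignment

cameras = {"ptz1": (0.0, 0.0), "ptz2": (50.0, 10.0)}
targets = {"person_a": (12.0, 3.0), "vehicle_b": (45.0, 8.0)}
print(assign_cameras(targets, cameras, occluded=lambda cam, tgt: False))
```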
Target tracking for network surveillance systems has gained significant interest especially in sensitive areas such as homeland
security, battlefield intelligence, and facility surveillance. Most of the current sensor network protocols do not address the need
for multi-sensor fusion-based target tracking schemes, which is crucial for the longevity of the sensor network. In this paper,
we present an efficient fusion model for target tracking in cluster-based large-scale sensor networks. The new scheme is inspired by image processing: the sensor network is perceived as an energy map of sensor stimuli, and typical image processing operations such as filtering, convolution, clustering, and segmentation are applied to this map to achieve high-level perception and understanding of the situation. The new fusion model is called Soft Adaptive Fusion of Sensor Energies
(SAFE). SAFE performs soft fusion of the energies collected by a local region of sensors in a large-scale sensor network. This
local fusion is then transmitted by the head node to a base station to update the common operational picture with evolving events of interest. Simulated scenarios show that SAFE is promising, demonstrating significant improvements in target tracking reliability and efficiency and a reduction in uncertainty.
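To make the energy-map analogy concrete, the following sketch (not the SAFE model itself) rasterizes sensor stimuli onto a grid, smooths the map with a Gaussian filter, and segments the strongest blobs as candidate target locations; the grid size, kernel width, and threshold are assumed values.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, label

def energy_map(sensor_xy, energies, grid=(64, 64), extent=100.0):
    """Accumulate per-sensor stimulus energies onto a regular grid."""
    emap = np.zeros(grid)
    for (x, y), e in zip(sensor_xy, energies):
        i = min(int(y / extent * grid[0]), grid[0] - 1)
        j = min(int(x / extent * grid[1]), grid[1] - 1)
        emap[i, j] += e
    return emap

def detect_targets(emap, sigma=1.5, thresh_factor=0.5):
    """Filter the map, threshold it, and return centroids of connected blobs."""
    smooth = gaussian_filter(emap, sigma)
    mask = smooth > thresh_factor * smooth.max()
    labels, n = label(mask)
    return [np.argwhere(labels == k).mean(axis=0) for k in range(1, n + 1)]

sensors = [(20.0, 30.0), (22.0, 31.0), (70.0, 75.0)]
energies = [0.9, 1.1, 0.2]
print(detect_targets(energy_map(sensors, energies)))
```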
Properly architected avionics systems can reduce the costs of periodic functional improvements, maintenance, and
obsolescence. With this in mind, the U.S. Army Aviation Applied Technology Directorate (AATD) initiated the
Manned/Unmanned Common Architecture Program (MCAP) in 2003 to develop an affordable, high-performance
embedded mission processing architecture for potential application to multiple aviation platforms.
MCAP analyzed Army helicopter and unmanned air vehicle (UAV) missions, identified supporting subsystems,
surveyed advanced hardware and software technologies, and defined computational infrastructure technical
requirements. The project selected a set of modular open systems standards and market-driven commercial-off-the-shelf (COTS) electronics and software, and developed experimental mission processors, network architectures, and
software infrastructures supporting the integration of new capabilities, interoperability, and life cycle cost reductions.
MCAP integrated the new mission processing architecture into an AH-64D Apache Longbow and participated in
Future Combat Systems (FCS) network-centric operations field experiments in 2006 and 2007 at White Sands
Missile Range (WSMR), New Mexico and at the Nevada Test and Training Range (NTTR) in 2008. The MCAP
Apache also participated in PM C4ISR On-the-Move (OTM) Capstone Experiments 2007 (E07) and 2008 (E08) at
Ft. Dix, NJ and conducted Mesa, Arizona local area flight tests in December 2005, February 2006, and June 2008.
Maritime surveillance of large volume traffic demands robust and scalable network architectures for distributed
information fusion. Operating in an adverse and unpredictable environment, the ability to flexibly adapt to
dynamic changes in the availability of mobile resources and the services they provide is critical for the success
of the surveillance and rescue missions. We present here an extended and enhanced version of the Dynamic
Resource Configuration & Management Architecture (DRCMA), with new features and improved algorithms
to better address the adaptability requirements of such a resource network. The DRCMA system concept is
described in abstract functional and operational terms based on the Abstract State Machine (ASM) paradigm
and the CoreASM open-source tool environment for modeling dynamic properties of distributed systems.
The challenges of predictive battlespace awareness and transformation of TCPED to TPPU processes in a net-centric
environment are numerous and complex. One of these challenges is how to post the information with the
right metadata so that it can be effectively discovered and used in an ad hoc manner. We have been working on the
development of a semantic enrichment capability that provides concept and relationship extraction and automatic
metadata tagging of multi-INT sensor data. Specifically, this process maps multi-source data to concepts and
relationships specified within a semantic model (ontology). We are using semantic enrichment for development of
data fusion services to support Army and Air Force programs. This paper presents an example of using the semantic
enrichment architecture for concept and relationship extraction from USMTF data. The process of semantic
enrichment adds semantic metadata tags to the original data enabling advanced correlation and fusion. A geospatial
user interface leverages the semantically-enriched data to provide powerful search, correlation, and fusion
capabilities.
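As a toy illustration of the tagging step (not the actual USMTF enrichment pipeline), the sketch below attaches ontology concept tags to a free-text record by matching terms against a small, hypothetical lexicon.

```python
# Hypothetical lexicon mapping surface terms to ontology concepts.
CONCEPT_LEXICON = {
    "convoy": "Vehicle.Convoy",
    "bridge": "Infrastructure.Bridge",
    "mortar": "Weapon.IndirectFire",
}

def enrich(record):
    """Attach semantic metadata tags to a raw text record by matching known
    terms against the concept lexicon (toy stand-in for ontology-driven
    concept and relationship extraction)."""
    text = record["text"].lower()
    tags = sorted({concept for term, concept in CONCEPT_LEXICON.items()
                   if term in text})
    return {**record, "semantic_tags": tags}

msg = {"id": "MSG-001", "text": "Convoy observed crossing bridge at 0630Z."}
print(enrich(msg))   # adds ['Infrastructure.Bridge', 'Vehicle.Convoy']
```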
Situational awareness is a critical issue for modern battle and security systems; improving it will increase human performance efficiency. There are multiple research projects and development efforts based on omni-directional (fish-eye) electro-optical and other-frequency sensor fusion systems that implement head-mounted visualization. However, the efficiency of these systems is limited by the perceptual limitations of the human eye-brain system. Humans naturally perceive the situation in front of them, but interpreting omni-directional visual scenes increases the user's mental workload, fatigue, and disorientation, and requires more effort for object recognition. It is especially important to reduce this workload by making perception of rear scenes intuitive in battlefield situations where a combatant can be attacked from the rear as well as the front.
This paper describes an experimental model of the system fusion architecture of Visual Acoustic Seeing (VAS), which represents a spatial 3D geometric model in the form of 3D volumetric sound. Current research in the area of auralization points to the possibility of identifying sound direction. However, for complete spatial perception it is necessary to identify both the direction of and the distance to an object from the volumetric sound; we initially assume that the distance can be encoded by the sound frequency. The chain object features -> sensor -> 3D geometric model -> auralization constitutes Volumetric Acoustic Seeing (VAS).
The paper describes VAS experimental research on representing and perceiving spatial information by means of human hearing cues in more detail.
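As a toy illustration of the assumed distance-to-frequency encoding (not the authors' VAS prototype), the following sketch synthesizes a stereo tone whose pitch falls with object distance and whose left/right balance encodes azimuth; the mapping constants are arbitrary.

```python
import numpy as np

def sonify(distance_m, azimuth_deg, fs=44100, dur=0.5):
    """Encode distance as tone frequency and azimuth as inter-aural level
    difference (simple panning), returning an (N, 2) stereo buffer."""
    freq = 2000.0 / (1.0 + distance_m)           # nearer objects sound higher
    t = np.linspace(0.0, dur, int(fs * dur), endpoint=False)
    tone = np.sin(2.0 * np.pi * freq * t)
    pan = np.clip((azimuth_deg + 90.0) / 180.0, 0.0, 1.0)  # 0 = left, 1 = right
    return np.stack([tone * (1.0 - pan), tone * pan], axis=1)

buf = sonify(distance_m=5.0, azimuth_deg=30.0)   # object 5 m away, 30 deg right
```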
High-level fusion systems based on the JDL model are relatively immature. Current solutions lack a comprehensive ability to manage multi-source data in a multi-dimensional vector space, and generally do not integrate collection-to-action models in a cohesive thread. Recombinant Cognition Synthesis (RCS) leverages best-of-breed techniques with a
geospatial, temporal and semantic data model to provide a unified methodology that recombines multi-source data with
analytic and predictive algorithms to synthesize actionable intelligence. This architecture framework enables the
traversal of entity relationships at different levels of granularity and the discovery of latent knowledge, thereby facilitating domain problem analysis and the development of a Course-of-Action to mitigate adversarial threats.
RCS also includes process refinement techniques to achieve superior information dominance, by incorporating
specialized metadata. This comprehensive and unified methodology delivers enhanced utility to the intelligence analyst,
and addresses key issues of relevancy, timeliness, accuracy, and uncertainty by providing metrics via feedback loops
within the RCS infrastructure that augment the efficiency and effectiveness of the end-to-end fusion processing chain.
Predicting a single agency's effectiveness to reduce the consequences of a malicious event is a complex problem. It is
even more complex to predict the overall effectiveness of a group of agencies considering the possible interdependency
of their portfolio of actions. However, this is an essential task in the disaster management arena. This work proposes a method to fuse individual effectiveness estimates provided by subject matter experts, considering the dependency among agencies,
to predict the holistic effectiveness. It can be applied to agency groups that are dependent, partially dependent, or
completely independent. Simulation results illustrate the method.
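The abstract does not give the fusion rule itself. Purely as an illustration of how dependency could be parameterized, the sketch below combines per-agency effectiveness values as complementary probabilities when agencies are independent, caps the result at the best single agency when they are fully dependent, and interpolates in between; this is an assumed formulation, not the paper's method.

```python
def fuse_effectiveness(e, dependency=0.0):
    """Illustrative (not the paper's) fusion of per-agency effectiveness
    values e in [0, 1].  dependency=0 treats agencies as independent,
    dependency=1 as fully redundant; intermediate values interpolate."""
    independent = 1.0
    for ei in e:
        independent *= (1.0 - ei)
    independent = 1.0 - independent          # 1 - prod(1 - e_i)
    dependent = max(e)                       # no better than the best agency
    return dependency * dependent + (1.0 - dependency) * independent

print(fuse_effectiveness([0.6, 0.5], dependency=0.0))   # ~0.80
print(fuse_effectiveness([0.6, 0.5], dependency=1.0))   # 0.60
print(fuse_effectiveness([0.6, 0.5], dependency=0.5))   # ~0.70
```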
The Force Protection Joint Experiment (FPJE) was an experiment to consider the best way to create a system of systems in the realm of Force Protection. It
was sponsored by Physical Security Equipment Action Group (PSEAG) and Joint Program Manager - Guardian (JPMG),
and was managed by the Product Manager - Force Protection Systems (PM-FPS). The experiment attempted to
understand the challenges associated with integrating disparate systems into a cohesive unit, and then the compounding
challenge of handling the flow of data into the system and its dispersion to all subscribed Command and Control (C2)
nodes. To handle this data flow we created the Data Fusion Engine (DFE), based on the framework of the Joint Battlespace Command and
Control System for Manned and Unmanned Assets (JBC2S).
The DFE is a data server that receives information from the network of systems via the Security Equipment Integration
Working Group (SEIWG) ICD-0100 protocol, processes the data through static fusion algorithms, and then publishes the
fused data to the C2 nodes, in this case JBC2S and the Tactical Automated Security System (TASS). The DFE uses only
known concepts and algorithms for its fusion efforts. This paper discusses the analyzed impact of the fusion on C2 node displays and, in turn, on the operators. It also discusses the lessons learned about networked control combined with DFE-generated automatic response. Finally, it discusses possible future efforts and their benefits for providing a useful operational picture to the operator.
The correlation of information from disparate sources has long been an issue in data fusion research. Traditional
data fusion addresses the correlation of information from sources as diverse as single-purpose sensors to all-source
multi-media information. Information system vulnerability information is similar in its diversity of sources and
content, and in the desire to draw a meaningful conclusion, namely, the security posture of the system under
inspection. FuzzyFusion™, a data fusion model that is being applied to the computer network operations domain, is
presented. This model has been successfully prototyped in an applied research environment and represents a next
generation assurance tool for system and network security.
Raman spectra have fingerprint regions that are highly distinctive and can in principle be used for identification of
explosive residues. However, under most field situations strong illumination by sunlight, impurities in the explosives, or
the presence of a substrate or matrix, cause the Raman spectra to have a strong fluorescence background. Using spectra
of pure explosives, spectra of highly-fluorescent clutter materials including asphalt, cement, sand, soil, and paint chips,
and some spectra of pre-mixed explosive and clutter, we synthesized a library of admixed spectra varying from 5%
explosives and 95% clutter spectra up to 100% explosives and 0% clutter spectra. This represented a signal-to-noise ratio for the explosive peaks varying from 0.04 to 5933. Using this library to train a support vector machine known as a kernel adatron, we obtained very good discrimination of explosive vs. non-explosive spectra. We performed a 40-fold cross-validation with leave-100-out for evaluation. Our results show 99.8% correct classification with 0.2% false positives.
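The workflow of synthesizing admixed spectra and training a classifier can be sketched as follows. The kernel adatron is not available in common libraries, so this illustrative example substitutes scikit-learn's generic SVC, and the spectra, mixing rule, and labels are synthetic placeholders rather than measured data.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_channels = 200
explosive = rng.random((5, n_channels))   # stand-ins for pure explosive spectra
clutter = rng.random((8, n_channels))     # stand-ins for fluorescent clutter spectra

def synthesize(n, frac_range=(0.05, 1.0)):
    """Make admixed spectra f * explosive + (1 - f) * clutter, labeled 1 if
    any explosive is present (f > 0), else 0."""
    X, y = [], []
    for _ in range(n):
        f = rng.uniform(*frac_range) if rng.random() < 0.5 else 0.0
        e = explosive[rng.integers(len(explosive))]
        c = clutter[rng.integers(len(clutter))]
        X.append(f * e + (1.0 - f) * c)
        y.append(int(f > 0))
    return np.array(X), np.array(y)

X, y = synthesize(2000)
clf = SVC(kernel="rbf", gamma="scale")          # generic SVM in place of a kernel adatron
print(cross_val_score(clf, X, y, cv=5).mean())  # rough analogue of the cross-validation
```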
A team of robots working to explore and map a space may need to share information about landmarks so as to register local
maps and to plan effective exploration strategies. In this paper we investigate the use of spatial histograms (spatiograms)
as a common representation for exchanging landmark information between robots. Each robot can use sonar, stereo,
laser and image information to identify potential landmarks. The sonar, laser and stereo information provide the spatial
dimension of the spatiogram in a landmark-centered coordinate frame while video provides the image information. We
call the result a terrain spatiogram. This representation can be shared between robots in a team to recognize landmarks
and to fuse observations from multiple sensors or multiple platforms. We report experimental results sharing indoor and
outdoor landmark information between two different models of robot equipped with differently configured stereo cameras, and show that the terrain spatiogram (1) allows the robots to recognize landmarks seen only by the other with high confidence, and (2) allows multiple views of a landmark to be fused in a useful fashion.
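A minimal version of the underlying spatiogram representation (per intensity bin: the pixel fraction plus the mean and covariance of pixel coordinates) is sketched below; the similarity measure shown is a simple illustrative comparison, not the paper's terrain-spatiogram matching.

```python
import numpy as np

def spatiogram(gray, bins=16):
    """Second-order spatiogram: for each intensity bin, the fraction of pixels
    in the bin and the mean / covariance of their (row, col) coordinates."""
    h, w = gray.shape
    rows, cols = np.mgrid[0:h, 0:w]
    idx = np.clip((gray * bins).astype(int), 0, bins - 1)
    feats = []
    for b in range(bins):
        mask = idx == b
        n = mask.sum()
        if n == 0:
            feats.append((0.0, np.zeros(2), np.eye(2)))
            continue
        pts = np.stack([rows[mask], cols[mask]], axis=1).astype(float)
        cov = np.cov(pts.T) if n > 1 else np.eye(2)
        feats.append((n / gray.size, pts.mean(axis=0), cov))
    return feats

def similarity(s1, s2, scale=50.0):
    """Toy similarity: Bhattacharyya-like weight term damped by the distance
    between per-bin spatial means."""
    total = 0.0
    for (w1, m1, _), (w2, m2, _) in zip(s1, s2):
        total += np.sqrt(w1 * w2) * np.exp(-np.linalg.norm(m1 - m2) / scale)
    return total

a, b = np.random.rand(120, 160), np.random.rand(120, 160)
print(similarity(spatiogram(a), spatiogram(b)))
```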
Rapidly advancing hardware technology, smart sensors, and sensor networks are advancing environment sensing. One major application of this technology is Large-Scale Surveillance Systems (LS3), especially for homeland security, battlefield intelligence, facility guarding, and other civilian applications. The efficient and effective deployment of LS3 requires addressing a number of aspects that impact the scalability of such systems. The scalability factors relate to computation and memory utilization efficiency; communication bandwidth utilization; network topology (e.g., centralized, ad-hoc, hierarchical, or hybrid); network communication protocols and data routing schemes; and local and global data/information fusion schemes for situational awareness. Although many models have been proposed to address one aspect or another of these issues, few have addressed the need for a multi-modality, multi-agent data/information fusion approach with characteristics that satisfy the requirements of current and future intelligent sensors and sensor networks. In this paper, we present a novel scalable fusion engine for multi-modality, multi-agent information fusion for LS3. The new fusion engine is based on a concept we call Energy Logic. Experimental results, compared with a fuzzy logic model, strongly support the validity of the new model and suggest future directions for different levels of fusion and different applications.
The FPJE was an experiment to consider the best way to develop and evaluate a system of systems approach to Force
Protection. It was sponsored by Physical Security Equipment Action Group (PSEAG) and Joint Program Manager -
Guardian (JPM-G), and was managed by the Product Manager - Force Protection Systems (PM-FPS). The experiment
was an effort to utilize existing technical solutions from all branches of the military in order to provide more efficient
and effective force protection. The FPJE consisted of four separate Integration Assessments (IA), which were intended
as opportunities to assess the status of integration, automation and fusion efforts, and the effectiveness of the current
configuration and "system" components. The underlying goal of the FPJE was to increase integration, automation, and
fusion of the many different sensors and their data to provide enhanced situational awareness and a common operational
picture.
One such sensor system is the Battlefield Anti-Intrusion System (BAIS), which is a system of seismic and acoustic
unmanned ground sensors. These sensors were originally designed for employment by infantry soldiers at the platoon
level to provide early warning of personnel and vehicle intrusion in austere environments. However, when employed
around airfields and high traffic areas, the sensitivity of these sensors can cause an excessive number of detections.
During the second FPJE-IA all of the BAIS detections and the locations of all Opposing Forces were logged and
analyzed to determine the accuracy rate of the sensors. This analysis revealed that with minimal filtering of detections,
the number of false positives and false negatives could be reduced substantially, to manageable levels, even when the sensors were used in extreme operational acoustic and seismic noise conditions beyond their design requirements.
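The filtering rule itself is not specified in the abstract; as one illustration of "minimal filtering of detections", the sketch below raises an alarm only when a sensor produces at least k raw detections within a sliding time window, which suppresses isolated nuisance triggers (the parameters are hypothetical, not those used in the FPJE-IA analysis).

```python
from collections import defaultdict, deque

def filter_detections(events, k=3, window_s=10.0):
    """Report (sensor_id, time) alarms only when a sensor has produced at
    least k raw detections within the last window_s seconds (illustrative
    m-of-n style filter)."""
    recent = defaultdict(deque)
    alarms = []
    for t, sensor in sorted(events):
        q = recent[sensor]
        q.append(t)
        while q and t - q[0] > window_s:
            q.popleft()
        if len(q) >= k:
            alarms.append((sensor, t))
            q.clear()              # re-arm after raising an alarm
    return alarms

raw = [(0.0, "bais-7"), (1.2, "bais-7"), (2.5, "bais-7"), (40.0, "bais-7")]
print(filter_detections(raw))      # one alarm at t=2.5; the lone hit at 40 s is ignored
```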
In this paper, we describe a method and system for robust and efficient goal-oriented active control of a machine (e.g.,
robot) based on processing, hierarchical spatial understanding, representation and memory of multimodal sensory inputs.
This work assumes that a high-level plan or goal is known a priori or is provided by an operator interface, which
translates into an overall perceptual processing strategy for the machine. Its analogy to the human brain is the download
of plans and decisions from the pre-frontal cortex into various perceptual working memories as a perceptual plan that
then guides the sensory data collection and processing. For example, a goal might be to look for specific colored objects
in a scene while also looking for specific sound sources. This paper combines three key ideas and methods into a single
closed-loop active control system. (1) Use high-level plan or goal to determine and prioritize spatial locations or
waypoints (targets) in multimodal sensory space; (2) collect/store information about these spatial locations at the
appropriate hierarchy and representation in a spatial working memory. This includes invariant learning of these spatial
representations and how to convert between them; and (3) execute actions based on ordered retrieval of these spatial
locations from hierarchical spatial working memory and using the "right" level of representation that can efficiently
translate into motor actions. In its most specific form, the active control is described for a vision system (such as a pan-tilt-zoom camera system mounted on a robotic head and neck unit) which finds and then fixates on high-saliency visual
objects. We also describe the approach where the goal is to turn towards and sequentially foveate on salient multimodal
cues that include both visual and auditory inputs.
The benefits of image fusion for man-in-the-loop Detection, Recognition, and Identification (DRI) tasks are well known.
However, the performance of conventional image fusion systems is typically sub-optimal, as they fail to capitalise on
high-level information which can be abstracted from the imagery. As part of a larger study into an Intelligent Image
Fusion (I2F) framework, this paper presents a novel approach which exploits high-level cues to adaptively enhance the
fused image via feedback to the pixel-level processing. Two scenarios are chosen for illustrative application of the
approach, Situational Awareness and Anomalous Object Detection (AOD). In the Situational Awareness scenario,
motion and other cues are used to enhance areas of the image according to predefined tasks, such as the detection of
moving targets of a certain size. This yields a large increase in Local Signal-to-Clutter Ratio (LSCR) when compared to
a baseline, non-adaptive approach. In the AOD scenario, spatial and spectral information is used to direct a foveal-patch
image fusion algorithm. This demonstrates a significant increase in the Probability of Detection on test imagery whilst
simultaneously reducing the mean number of false alarms when compared to a baseline, non-foveal approach. This paper
presents the rationale for the I2F approach and details two specific examples of how it can be applied to address very
different applications. Design details and quantitative performance analysis results are reported.
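The exact LSCR definition is not given in the abstract; a common form, assumed here purely for illustration, is the difference between the target-window mean and the local-background mean divided by the local-background standard deviation:

```python
import numpy as np

def lscr(image, target_box, border=8):
    """Local Signal-to-Clutter Ratio around a target box (assumed definition):
    (mean_target - mean_background) / std_background, with the background
    taken from a frame of `border` pixels around the box."""
    r0, r1, c0, c1 = target_box
    target = image[r0:r1, c0:c1]
    R0, R1 = max(r0 - border, 0), min(r1 + border, image.shape[0])
    C0, C1 = max(c0 - border, 0), min(c1 + border, image.shape[1])
    local = image[R0:R1, C0:C1].copy()
    mask = np.ones_like(local, dtype=bool)
    mask[r0 - R0:r1 - R0, c0 - C0:c1 - C0] = False   # exclude the target itself
    background = local[mask]
    return (target.mean() - background.mean()) / (background.std() + 1e-9)

img = np.random.rand(128, 128)
img[60:70, 60:70] += 1.0                             # synthetic bright target
print(lscr(img, (60, 70, 60, 70)))
```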
Variation in the number of targets and sensors needs to be addressed in any realistic sensor system. Targets may
move into or out of a region or may suddenly stop emitting a detectable signal. Sensors can be subject to failure for
many reasons. We derive a tracking algorithm with a model that includes these variations using Random Finite
Set Theory (RFST). RFST is a generalization of standard probability theory into the finite set theory domain.
This generalization does come with additional mathematical complexity. However, many of the manipulations
in RFST are similar in behavior and intuition to those of standard probability theory.
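The abstract does not state which RFST recursion is used; for reference, one widely used first-moment construction in RFST, the Probability Hypothesis Density (PHD) filter, propagates an intensity function D whose integral over a region gives the expected number of targets there, with birth, spawning, survival, detection, and clutter terms capturing exactly the kinds of target and sensor variation described above:

\[ D_{k|k-1}(x) = b_k(x) + \int \left[ p_S(\xi)\, f_{k|k-1}(x \mid \xi) + \beta_{k|k-1}(x \mid \xi) \right] D_{k-1}(\xi)\, d\xi \]

\[ D_k(x) = \left[ 1 - p_D(x) + \sum_{z \in Z_k} \frac{p_D(x)\, g_k(z \mid x)}{\kappa_k(z) + \int p_D(\xi)\, g_k(z \mid \xi)\, D_{k|k-1}(\xi)\, d\xi} \right] D_{k|k-1}(x) \]

where b_k models target births, \beta_{k|k-1} spawning, p_S the survival probability, p_D the detection probability, g_k the measurement likelihood, and \kappa_k the clutter intensity.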
In order to effectively evaluate information fusion systems or emerging technologies, it is critical to quickly, efficiently,
and accurately collect functional and observational data about such systems. One of the best ways to test a system's
capabilities is to have an end user operate it in controlled but realistic field-based situations. Evaluation data of the
systems' performance as well as observational data of the user's interactions can then be collected and analyzed. This
analysis often gives insight into how the system may perform in the intended environment and of any potential areas for
improvement. One common method for collection of this data involves an evaluator/observer generating hand-written
notes, comments, and sketches. This often proves to be inefficient in complex sensor technology field-based evaluation
environments. Personnel at the National Institute of Standards and Technology (NIST) have been tasked with collecting
such evaluation data for emerging soldier-worn sensor systems. Lessons learned from the on-going development of
efficient field-based evaluation data collection techniques will be discussed. The most recent evaluation data collection, using a personal digital assistant (PDA)-style system, and details of its use during a multi-team evaluation study will also be described.
Local Bayesian fusion approaches aim to reduce the high storage and computational costs of Bayesian fusion while avoiding fixed modeling assumptions. Using the small-world formalism, we argue why this approach conforms to Bayesian theory. Then, we concentrate on the realization of local Bayesian fusion by focusing the fusion process solely on local regions that are task-relevant with high probability. The resulting local models then correspond to restricted versions of the original one. In a previous publication, we used bounds for the probability of misleading evidence to show the validity of the pre-evaluation of task-specific knowledge and prior information that we perform to build local models. In this paper, we prove the validity of this approach using information-theoretic arguments. For additional efficiency, local Bayesian fusion can be realized in a distributed manner: several local Bayesian fusion tasks are evaluated and unified after the actual fusion process. For the practical realization of distributed local Bayesian fusion, software agents are well suited. There is a natural analogy between the resulting agent-based architecture and criminal investigations in real life. We show how this analogy can be used to further improve the efficiency of distributed local Bayesian fusion.
Using a landscape model, we present an experimental study of distributed local Bayesian fusion in the field of
reconnaissance, which highlights its high potential.
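The bounds referred to above are developed in the authors' earlier publication; a classical example of such a bound, for simple hypotheses, is the universal bound on the probability of misleading evidence: if the data Z are generated under H_0, then

\[ P_{H_0}\!\left(\frac{p(Z \mid H_1)}{p(Z \mid H_0)} \ge k\right) \le \frac{1}{k}, \]

which follows from Markov's inequality because the likelihood ratio has expectation 1 under H_0.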
This paper describes an efficient method and system for representing, processing and understanding multi-modal sensory
data. More specifically, it describes a computational method and system for how to process and remember multiple
locations in multimodal sensory space (e.g., visual, auditory, somatosensory, etc.). The multimodal representation and
memory is based on a biologically-inspired hierarchy of spatial representations implemented with novel analogues of
real representations used in the human brain. The novelty of the work is in the computationally efficient and robust
spatial representation of 3D locations in multimodal sensory space as well as an associated working memory for storage
and recall of these representations at the desired level for goal-oriented action. We describe (1) a simple and efficient method for human-like hierarchical spatial representations of sensory data and how to associate, integrate, and convert between these representations (head-centered coordinate system, body-centered coordinate system, etc.); (2) a robust method for
training and learning a mapping of points in multimodal sensory space (e.g., camera-visible object positions, location of
auditory sources, etc.) to the above hierarchical spatial representations; and (3) a specification and implementation of a
hierarchical spatial working memory based on the above for storage and recall at the desired level for goal-oriented
action(s). This work is most useful for any machine or human-machine application that requires processing of
multimodal sensory inputs, making sense of it from a spatial perspective (e.g., where is the sensory information coming
from with respect to the machine and its parts) and then taking some goal-oriented action based on this spatial
understanding. A multi-level spatial representation hierarchy means that heterogeneous sensory inputs (e.g., visual,
auditory, somatosensory, etc.) can map onto the hierarchy at different levels. When controlling various machine/robot
degrees of freedom, the desired movements and action can be computed from these different levels in the hierarchy. The
most basic embodiment of this machine could be a pan-tilt camera system, an array of microphones, a machine with
arm/hand-like structure, and/or a robot with some or all of the above capabilities. We describe the approach and system, and present preliminary results on a real robotic platform.
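One step of such a hierarchy, converting a point from head-centered to body-centered coordinates given the head pose, can be sketched as follows; the pose parameters are hypothetical and the snippet is only meant to illustrate what "converting between representations" involves.

```python
import numpy as np

def rot_z(yaw):
    """Rotation matrix about the z-axis by `yaw` radians."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def head_to_body(p_head, head_yaw, head_offset):
    """Convert a 3D point from head-centered to body-centered coordinates:
    rotate by the head's yaw about the body z-axis, then translate by the
    head's position in the body frame."""
    return rot_z(head_yaw) @ np.asarray(p_head) + np.asarray(head_offset)

# Hypothetical pose: head turned 30 degrees left, mounted 0.3 m above the body origin.
p_body = head_to_body([1.0, 0.0, 0.0],
                      head_yaw=np.deg2rad(30.0),
                      head_offset=[0.0, 0.0, 0.3])
print(p_body)      # the same target expressed in the body frame
```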
In this paper, we propose a method of sampled data compression and reconstruction using the theory of distributed
compressed sensing for wireless sensor networks, in which the correlation between sensors is exploited for joint sparsity representation, compression, and reconstruction of the signals. An incoherent random-projection CS matrix in each sensor serves as the encoding matrix to generate compressed measurements for storage, delivery, and processing. A reconstruction algorithm with both acceptable complexity and precision is developed for noise-corrupted measurements by fully exploiting the diversity of correlations. Simulations show that a number of measurements only slightly larger than the sparsity of the sampled sensor data is needed for successful recovery.
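A toy single-sensor version of this pipeline (random-projection encoding followed by sparse recovery) is sketched below using a basic orthogonal matching pursuit solver; the joint-sparsity, multi-sensor aspect and the paper's actual reconstruction algorithm are not reproduced.

```python
import numpy as np

def omp(A, y, sparsity):
    """Orthogonal Matching Pursuit: greedily pick the column of A most
    correlated with the residual, then re-fit by least squares."""
    residual, support = y.copy(), []
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coeffs
    x = np.zeros(A.shape[1])
    x[support] = coeffs
    return x

rng = np.random.default_rng(1)
n, m, k = 256, 64, 5                       # signal length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(size=k)   # k-sparse signal
A = rng.normal(size=(m, n)) / np.sqrt(m)   # incoherent random projection matrix
y = A @ x + 0.01 * rng.normal(size=m)      # noisy compressed measurements
x_hat = omp(A, y, sparsity=k)
print(np.linalg.norm(x - x_hat) / np.linalg.norm(x))   # small relative error
```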