The Data and Information Fusion domains have for some time addressed the issues involved in Situation Estimation and Situation Refinement as part of the characterization of the "higher" levels of fusion processing, that is, those levels of processing that deal with the more abstract and complex world states of interest that people call "situations". It is generally agreed, however, that to date research in the Data and Information Fusion (DIF) field has focused largely on estimating single-object, and sometimes multiple-object, attributes from composite observational data, usually gathered by electronic or physics-based sensing devices such as radars and imaging systems; that is, on the so-called "lower" levels of fusion. As both the world and the technology have changed, and as research in the DIF arena has matured, there has been considerable interest in directing research toward methods for estimating the higher state levels of DIF, usually called Situation Refinement and Threat or Impact Refinement, and related to "Level 2" and "Level 3" of the well-known "JDL" DIF process model (Ref 1). Note that the "refinement" term is important: it implies an awareness that the focus of DIF processing is almost always on dynamic events in the world, and it also reflects the need for a temporally-adaptive, recursive state estimation process.
Maritime Domain Awareness (MDA) is important for both civilian and military applications. An important part of MDA is the detection of unusual vessel activities such as piracy, smuggling, poaching, and collisions. Today's interconnected sensor systems provide huge amounts of information over large geographical areas, which can push operators to their cognitive capacity and cause them to miss important events. We propose an agent-based situation management system that automatically analyses sensor information to detect unusual activity and anomalies. The system combines knowledge-based detection with data-driven anomaly detection, and it is evaluated using information from both radar and AIS sensors.
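To make the combination concrete, here is a minimal Python sketch of how a rule-based check and a simple statistical anomaly score might be fused for vessel tracks; the zone, thresholds, and features are illustrative assumptions, not details from the paper.

import statistics

RESTRICTED_ZONE = (57.0, 58.0, 11.0, 12.0)  # hypothetical lat/lon bounding box

def rule_based_alerts(track):
    """Knowledge-based detection: hand-coded expert rules."""
    alerts = []
    lat_min, lat_max, lon_min, lon_max = RESTRICTED_ZONE
    for lat, lon, speed in track:
        if lat_min <= lat <= lat_max and lon_min <= lon <= lon_max:
            alerts.append("vessel inside restricted zone")
        if speed < 0.5:
            alerts.append("loitering (near-zero speed)")
    return alerts

def statistical_alerts(track, fleet_speeds, z_threshold=3.0):
    """Data-driven detection: flag speeds far from the fleet distribution."""
    mu = statistics.mean(fleet_speeds)
    sigma = statistics.stdev(fleet_speeds)
    return ["anomalous speed %.1f kn" % s
            for _, _, s in track
            if abs(s - mu) / sigma > z_threshold]

def fused_alerts(track, fleet_speeds):
    # union of knowledge-based and data-driven detections
    return rule_based_alerts(track) + statistical_alerts(track, fleet_speeds)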
Situational awareness is critical on the modern battlefield. A large amount of intelligence information is collected to improve decision-making processes, but in many cases this information load actually slows analysis and decision-making because of the lack of suitable tools and methods for processing it. To improve the decision-making process and situational awareness, much research has been done on analyzing and visualizing intelligence information automatically. Different data fusion and mining techniques are applied to produce an understandable situational picture. The automated processes are based on a data model that is used in information exchange between operators. The data model requires formal message structures, which in many cases makes information processing simpler. In this paper, generated formal intelligence message data is visualized and analyzed using the self-organizing map (SOM). The SOM is a widely used neural network model that has shown its effectiveness in representing multi-dimensional data in two- or three-dimensional space. The results show that multidimensional intelligence data can be visualized and classified with this technique. The SOM can be used for monitoring intelligence message data (e.g., for error detection), for message classification, and for finding correlations. Thus, with the SOM it is possible to speed up the intelligence process and make better and faster decisions.
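As an illustration of the technique, a minimal SOM trainer in NumPy is sketched below; the map size, learning schedule, and random toy data are assumptions, not the paper's settings. Messages encoded as feature vectors are mapped so that similar messages land on nearby map nodes.

import numpy as np

def train_som(data, rows=10, cols=10, epochs=50, lr0=0.5, sigma0=3.0):
    """Train a 2-D self-organizing map on (n_samples x n_features) data."""
    rng = np.random.default_rng(0)
    weights = rng.random((rows, cols, data.shape[1]))
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                indexing="ij"), axis=-1)
    n_steps = epochs * len(data)
    for step in range(n_steps):
        x = data[rng.integers(len(data))]
        lr = lr0 * np.exp(-step / n_steps)          # decaying learning rate
        sigma = sigma0 * np.exp(-step / n_steps)    # shrinking neighborhood
        # best-matching unit: node whose weight vector is closest to x
        dist = np.linalg.norm(weights - x, axis=2)
        bmu = np.unravel_index(np.argmin(dist), dist.shape)
        # neighborhood function pulls nodes near the BMU toward x
        d2 = np.sum((grid - np.array(bmu)) ** 2, axis=2)
        h = np.exp(-d2 / (2 * sigma ** 2))[..., None]
        weights += lr * h * (x - weights)
    return weights

messages = np.random.rand(200, 8)   # placeholder message feature vectors
som = train_som(messages)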
The ability to track and identify developing situations is of great importance within the surveillance domain. We refer to this as situation recognition and believe that it can enhance situation awareness for decision makers. Situation recognition requires that many subproblems be solved. For instance, we need to establish which situations are interesting, how to represent these situations, and which inferable events and states can be used to represent them. We also need to know how to track and identify situations and how to correlate current information about situations with prior knowledge. For some of these subproblems data-driven approaches are suitable, whilst knowledge-driven approaches are more suitable for others. In this paper we discuss our current research efforts and goals concerning template-based situation recognition. We provide a categorization of approaches to situation recognition together with a formalization of the template-based situation recognition problem, and we discuss this formalization in the light of a pick-pocket scenario. Finally, we discuss future directions for our research on situation recognition. We conclude that situation recognition is an important problem to investigate for enhancing the overall situation awareness of decision makers.
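As a rough sketch of what template-based recognition can look like (the event kinds, time window, and matching scheme below are hypothetical, not the paper's formalization), a template can be treated as an ordered sequence of event kinds that one actor must produce within a time window.

from dataclasses import dataclass

@dataclass
class Event:
    kind: str      # e.g. "approach", "bump", "reach", "depart"
    actor: str
    time: float

# Hypothetical pick-pocket template: these event kinds, in this order.
PICKPOCKET_TEMPLATE = ["approach", "bump", "reach", "depart"]

def matches_template(events, template, max_window=60.0):
    """True if some actor produces the template's event kinds in order
    within max_window seconds."""
    by_actor = {}
    for e in sorted(events, key=lambda e: e.time):
        by_actor.setdefault(e.actor, []).append(e)
    for actor, seq in by_actor.items():
        idx, start = 0, None
        for e in seq:
            if e.kind == template[idx]:
                start = e.time if idx == 0 else start
                idx += 1
                if idx == len(template):
                    if e.time - start <= max_window:
                        return True
                    idx, start = 0, None   # matched too slowly; restart
    return False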
Research on information fusion and situation management within the military domain is often focused on data-driven approaches for aiding decision makers in achieving situation awareness. In a companion paper we have identified situation recognition as an important topic for further studies of knowledge-driven approaches. When developing new algorithms it is of utmost importance to have data for studying the problem at hand (as well as for evaluation purposes). This often becomes a problem within the military domain, where a high level of secrecy results in a lack of data, so that one often needs to resort to artificial data. Many tools and simulation environments can be used for constructing scenarios in virtual worlds. Most of these are, however, data-centered; that is, their purpose is to simulate the real world as accurately as possible, rather than to simulate complex scenarios. In high-level information fusion we can, however, often assume that the lower-level problems have already been solved (hence the separation between abstraction levels), and we should instead focus on solving problems concerning complex relationships, i.e., situations and threats. In this paper we discuss the requirements that research on situation recognition puts on simulation tools. Based on these requirements we present a component-based simulator for quickly adapting the simulation environment to the needs of the research problem at hand. This is achieved by defining new components that define the behaviors of entities in the simulated world.
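A minimal sketch of the component idea, under the assumption that a behavior component is an object updated once per simulation tick (the class and method names are illustrative, not the simulator's actual API):

class BehaviorComponent:
    """Base class for pluggable entity behaviors."""
    def update(self, entity, dt):
        raise NotImplementedError

class MoveToward(BehaviorComponent):
    def __init__(self, target, speed):
        self.target, self.speed = target, speed
    def update(self, entity, dt):
        tx, ty = self.target
        x, y = entity.pos
        dx, dy = tx - x, ty - y
        d = (dx * dx + dy * dy) ** 0.5 or 1.0
        step = min(self.speed * dt, d)
        entity.pos = (x + step * dx / d, y + step * dy / d)

class Entity:
    def __init__(self, pos, components):
        self.pos = pos
        self.components = components   # behaviors attached to this entity
    def update(self, dt):
        for c in self.components:
            c.update(self, dt)

# New scenarios are built by swapping in new BehaviorComponent subclasses.
world = [Entity((0.0, 0.0), [MoveToward((100.0, 50.0), speed=5.0)])]
for _ in range(100):               # fixed-step simulation loop
    for e in world:
        e.update(dt=1.0)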
Wireless Sensor Networks (WSNs) are systems that may contain hundreds to thousands of low-power, low-cost sensor nodes. The potential applicability of such systems is enormous: security surveillance and intrusion detection systems for smart buildings and military bases, monitoring of chemical plants for safety, wireless body area networks for first responders, and monitoring of habitats and natural environments for scientific and other purposes, among others. As sensor network technology matures, we expect to witness an increasing number of such systems deployed in the real world. Because they are deployed in remote, untrusted, hostile environments, sensor networks become accessible to a wide variety of possible attacks and functional faults. While basic cryptographic building blocks and hardened hardware architectures are currently available for most sensor network platforms and allow for protection on a single-node basis, such building blocks are not effective in preventing wider-scale attacks once a node has been compromised. To this end, UtopiaCompression is proposing a Proactive Trust Management System (PTMS) for WSNs. Our solution is based on an easily extensible framework, tailored to resource-constrained WSNs, and uses a combination of novel outlier detection mechanisms and trust management algorithms to effectively cope with common sensor faults and network attack models. Moreover, our solution is based on distributed in-network processing, which significantly improves scalability and extends the lifetime of the system. This paper also discusses the implementation and evaluation of our solution on Sun SPOT sensors.
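As a generic illustration of how outlier detection can drive trust maintenance (a simple median/MAD scheme, not the actual PTMS algorithms), each node might adjust its trust in a neighbor according to how far the neighbor's reports fall from the local consensus.

import statistics

def update_trust(trust, readings, tolerance=2.0, gain=0.1):
    """One round of neighbor trust maintenance on a single node.

    trust    : dict neighbor_id -> trust in [0, 1]
    readings : dict neighbor_id -> reported sensor value
    """
    med = statistics.median(readings.values())
    mad = statistics.median(abs(v - med) for v in readings.values()) or 1e-6
    for nid, value in readings.items():
        outlier = abs(value - med) / mad > tolerance
        # reward agreement with the neighborhood consensus, punish outliers
        target = 0.0 if outlier else 1.0
        trust[nid] += gain * (target - trust[nid])
    return trust

trust = {"n1": 0.5, "n2": 0.5, "n3": 0.5}
trust = update_trust(trust, {"n1": 20.1, "n2": 19.8, "n3": 35.0})
# n3's reading is far from the consensus, so its trust decreases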
This work is based on the interactions among the nodes of a wireless sensor network (WSN) that cooperatively process data from multiple sensors. Quality-of-service (QoS) metrics are associated with the quality of fused information: throughput, delay, packet error rate, etc. A multivariate point process (MVPP) model of discrete random events in WSNs establishes the stochastic characteristics of optimal cross-layer protocols. In previous work by the author, discrete-event, cross-layer interactions in the MANET protocol were modeled in very general analytical terms, with a set of concatenated design parameters and associated resource levels, by multivariate point processes (MVPPs). Characterization of the "best" cross-layer designs for the MANET is formulated by applying the general theory of martingale representations to controlled MVPPs. Performance is described in terms of concatenated protocol parameters and controlled through the conditional rates of the MVPPs. Assumptions on WSN characteristics simplify the dynamic programming conditions to yield mathematically tractable descriptions of the optimal routing protocols. Modeling limitations on the determination of closed-form solutions versus iterative explicit solutions for ad hoc WSN controls are presented.
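For orientation, a controlled MVPP can be summarized roughly as follows; this is a generic textbook-style formulation, and the author's actual model and notation may differ. Each event stream $i$ is a counting process $N_i(t)$ whose conditional rate depends on the control $u$:

\[
\Pr\{\, dN_i(t) = 1 \mid \mathcal{F}_t \,\} = \lambda_i\big(t, u(t)\big)\, dt, \qquad i = 1, \dots, n,
\]

and an optimal cross-layer design chooses the control policy $u$ to minimize an expected cost

\[
J(u) = \mathbb{E}\!\left[\int_0^T c\big(x(t), u(t)\big)\, dt\right],
\]

where the state $x(t)$ aggregates the concatenated protocol parameters and resource levels, and the control enters only through the conditional rates $\lambda_i$.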
The applicability of a sparse sensor network, with only two sensor nodes and a small number of directional LIDAR sensors, to detecting and tracking humans in an area of surveillance is investigated. The detection and tracking performance is evaluated for various positions of the two nodes as a function of the number of sensors per node and the sensor beamwidths. A quality factor incorporating the area coverage ratio and the position error is introduced to find the best network configuration with a minimal number of sensors that yields a position accuracy sufficient for the task at hand. Extensive simulations and measurements, with two laser scanners emulating the LIDAR sensors, were carried out for straight trajectories uniformly distributed over the area of surveillance. To improve the tracking performance, we used a Kalman filter based approach. As in our application a spatial mean RMS position error of approximately 0.6 m is sufficient, each of the two sensor nodes must be equipped with 4 LIDAR sensors with a -3 dB beamwidth of 12°.
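The tracking stage can be illustrated with a standard constant-velocity Kalman filter on 2-D position measurements (the time step and noise covariances below are assumptions, not the values used in the paper):

import numpy as np

dt = 0.1                      # assumed update interval [s]
F = np.array([[1, 0, dt, 0],  # constant-velocity state transition
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], float)
H = np.array([[1, 0, 0, 0],   # only position (x, y) is measured
              [0, 1, 0, 0]], float)
Q = 0.01 * np.eye(4)          # process noise (assumed)
R = 0.25 * np.eye(2)          # measurement noise (assumed)

x = np.zeros(4)               # state: [px, py, vx, vy]
P = np.eye(4)

def kalman_step(x, P, z):
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update with position measurement z
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

for z in [np.array([1.0, 0.5]), np.array([1.1, 0.6])]:
    x, P = kalman_step(x, P, z)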
Detection of radioactive materials in an urban environment usually requires large, portal-monitor-style radiation detectors. However, this may not be a practical solution in many transport scenarios. Alternatively, a distributed sensor network (DSN) could complement portal-style detection of radiological materials through the implementation of arrays of low-cost, small, heterogeneous sensors with the ability to detect the presence of radioactive materials in a moving vehicle over a specific region. In this paper, we report on the use of a heterogeneous, wireless, distributed sensor network for traffic monitoring in a field demonstration. Through wireless communications, the energy spectra from different radiation detectors are combined to improve detection confidence. In addition, the DSN exploits other sensor technologies and algorithms to provide additional information about the vehicle, such as its speed, location, class (e.g., car, truck), and license plate number. The sensors operate in situ and data is processed in real time at each node. Relevant information from each node is sent to a base station computer, which is used to assess the movement of radioactive materials.
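As a simplified illustration of spectrum-level fusion (a first-order counting statistic on pooled spectra; the demonstration's actual combination method may differ):

import numpy as np

def fused_significance(spectra, backgrounds):
    """Combine energy spectra from several detectors and score the
    excess over background with a simple counting statistic.

    spectra, backgrounds : lists of per-detector count arrays (same binning)
    """
    total = np.sum(spectra, axis=0)          # pooled foreground counts
    bkg = np.sum(backgrounds, axis=0)        # pooled background counts
    signal = total.sum() - bkg.sum()
    # Poisson-style significance of the summed excess; pooling counts
    # from all detectors raises the statistic for a real source
    return signal / np.sqrt(max(bkg.sum(), 1.0))

spectra = [np.random.poisson(5, 128) for _ in range(3)]      # 3 detectors
backgrounds = [np.random.poisson(4, 128) for _ in range(3)]
print("fused sigma:", fused_significance(spectra, backgrounds))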
Threat evaluation is the process in which threat values are assigned to detected targets, based upon the inferred capabilities and intents of the targets to inflict damage on blue force defended assets. This is a high-level information fusion process of great importance, since the calculated threat values are used as input when blue force weapon systems are allocated to incoming targets, a process often referred to as weapon allocation. Threat values can be calculated from a number of different parameters, such as the position of the closest point of approach (CPA) with respect to blue force defended assets, the time required to reach the CPA, and the target's velocity and type. A number of algorithms for calculating threat values have been suggested throughout the literature; however, criteria for evaluating the performance of such algorithms seem to be lacking. In this paper, we discuss different ways to assess the performance of threat evaluation algorithms. Specifically, we describe an implemented testbed in which threat evaluation algorithms can be compared to each other based on a survivability criterion. Survivability is measured by running the threat evaluation algorithms on simulated scenarios and using the resulting threat values as input to a weapon allocation module. Depending on how well the threat evaluation is performed, the ability of the blue force weapon systems to eliminate the incoming targets will vary (and thereby also the survivability of the defended assets). Our results for two different threat evaluation algorithms are presented and analyzed.
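To make the parameters concrete, a minimal sketch of a CPA/TCPA-based threat value follows; the exponential weighting and scale constants are illustrative choices, not one of the paper's evaluated algorithms.

import math

def cpa_and_tcpa(target_pos, target_vel, asset_pos):
    """Closest point of approach of a constant-velocity target
    relative to a stationary defended asset."""
    rx, ry = target_pos[0] - asset_pos[0], target_pos[1] - asset_pos[1]
    vx, vy = target_vel
    v2 = vx * vx + vy * vy
    tcpa = max(0.0, -(rx * vx + ry * vy) / v2) if v2 > 0 else 0.0
    dx, dy = rx + vx * tcpa, ry + vy * tcpa
    return math.hypot(dx, dy), tcpa

def threat_value(target_pos, target_vel, asset_pos,
                 d_scale=5000.0, t_scale=60.0):
    """Map CPA distance and time-to-CPA to a [0, 1] threat value;
    closer and sooner means more threatening."""
    cpa, tcpa = cpa_and_tcpa(target_pos, target_vel, asset_pos)
    return math.exp(-cpa / d_scale) * math.exp(-tcpa / t_scale)

# target 10 km east, heading straight for the asset at 300 m/s
print(threat_value((10000.0, 0.0), (-300.0, 0.0), (0.0, 0.0)))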
Extensive discussions have taken place in recent years regarding impact assessment: what is it, and how can we do it? The question is especially intriguing in this modern era, where non-traditional warfare has caused either information overload or limited understanding of adversary doctrines. This work provides a methodical discussion of the key elements of a broad definition of impact assessment (IA). The discussion starts with a process flow involving the components related to IA. Two key functional components, impact estimation and threat projection, are compared and illustrated in detail. These details include a discussion of when to model red and blue knowledge. Algorithmic approaches are discussed, augmented with lessons learned from our IA development for cyber situation awareness. This paper aims to provide the community with a systematic understanding of IA and its open issues, with specific examples.
The terrorist attacks of 9/11 revealed how vulnerable the civil aviation industry is from both security and safety points of view. Dealing with several aircraft cruising in the sky over a specific region requires decision makers to have an automated system that can raise their situational awareness of how much of a threat an aircraft presents. In this research, an in-flight array of sensors was deployed in a simulated aircraft to extract knowledge-base information about how passengers and equipment behave during normal flight time; this information was used to train artificial neural networks to provide real-time streams of normal behaviours. Finally, a cascade of fuzzy logic networks is designed to measure the deviation of real-time data from the predicted values. The results suggest that neural-fuzzy networks have a promising future in raising the awareness of decision makers about certain aviation situations.
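A toy version of the final fuzzy stage might look as follows, where the deviation between observed and predicted behaviour is fuzzified and then defuzzified into a single threat score (the membership functions and scores are invented for illustration):

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def threat_level(deviation):
    """Fuzzify the deviation between observed and NN-predicted behaviour,
    then defuzzify to a single threat score via a weighted average."""
    memberships = {
        "normal":     tri(deviation, -0.1, 0.0, 0.3),
        "suspicious": tri(deviation,  0.1, 0.4, 0.7),
        "threat":     tri(deviation,  0.5, 1.0, 1.5),
    }
    scores = {"normal": 0.0, "suspicious": 0.5, "threat": 1.0}
    total = sum(memberships.values()) or 1e-9
    return sum(memberships[k] * scores[k] for k in memberships) / total

print(threat_level(0.2))   # mostly "normal", slightly "suspicious"
print(threat_level(0.9))   # predominantly "threat"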
Planning a mission to monitor, control, or prevent activity requires postulation of subject behaviours, specification of goals, and the identification of suitable effects, candidate methods, information requirements, and effective infrastructure. In an operation that comprises many missions, it is desirable to base decisions about assigning assets, computation time, or communications bandwidth on the value that doing so in a particular mission contributes to the operation. We describe initial investigations of a holistic approach to judging the value of candidate sensing service designs by stochastically modeling information delivery, knowledge building, the synthesis of situational awareness, and the selection of actions and achievement of goals. Abstracting physical and information transformations to interdependent stochastic state transition models enables the calculation of probability distributions over uncertain futures using well-characterized approximations. This complements traditional Monte Carlo war gaming, in which example futures are explored individually, by capturing probability distributions over loci of behaviours that show the importance and value of mission component designs. The overall model is driven by sensing processes that are constructed by abstracting from the physics of sensing to a stochastic model of the system's trajectories through sensing modes. This is formulated by analysing probabilistic projections of subject behaviours against functions that describe the quality of information delivered by the sensing service. This enables energy consumption predictions and, when composed into a mission model, supports calculation of situational awareness formulation and command satisfaction timing probabilities. These outcome probabilities then support calculation of relative utility and value.
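The contrast with Monte Carlo exploration can be seen in a toy Markov-chain mission model (the states and probabilities are invented for illustration): instead of sampling individual futures, the full probability distribution is propagated at each step.

import numpy as np

# Hypothetical mission states:
# 0 = searching, 1 = target information acquired, 2 = goal satisfied
P = np.array([[0.90, 0.10, 0.00],      # per-step transition probabilities
              [0.05, 0.80, 0.15],
              [0.00, 0.00, 1.00]])

dist = np.array([1.0, 0.0, 0.0])       # start: certainly "searching"
for step in range(60):
    dist = dist @ P                    # propagate one time step
print("P(goal satisfied within 60 steps) =", dist[2])

One matrix-vector recursion carries the whole distribution over futures forward, where a war-gaming run would have explored only a single sampled trajectory.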
This paper describes a bio-inspired VISion-based actionable INTelligence system (VISINT) that provides automated capabilities to (1) understand objects, patterns, events, and behaviors in vision data; (2) translate this understanding into timely recognition of novel and anomalous entities; and (3) discover underlying hierarchies and relationships between disparate labels entered by multiple users to provide a consistent data representation. VISINT is both a system and a collection of novel bio-inspired algorithms/modules. These modules can be used independently for various aspects of the actionable intelligence problem or sequenced together into an end-to-end actionable intelligence system. The algorithms can be useful in many other applications, such as scene understanding, behavioral analysis, and automatic surveillance systems. The bio-inspired algorithms are a novel combination of hierarchical spatial and temporal networks based on Adaptive Resonance Theory (ART). The novel aspect of this work is that it is an end-to-end system for actionable intelligence that combines existing and new implementations of various modules in innovative ways to develop a system concept for actionable intelligence. Although other algorithms and implementations exist for several of the modules in VISINT, they suffer from various limitations, and system integration is often not considered. The overall VISINT system can be viewed as an incremental learning system in which no offline training is required and data from multiple sources and times can be seamlessly integrated. The user is in the loop, but due to the semi-supervised nature of the underlying algorithms, only significant variations of entities, not all false alarms, are shown to the user. The system does not forget the past even with new learning. While VISINT is designed as a vision-based system, it could also work with other kinds of sensor data from which individual objects in the scene can be recognized and located. Beyond that stage of object recognition and localization, all aspects of VISINT are applicable to other kinds of sensor data.
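For readers unfamiliar with ART, a minimal ART-1-style clusterer is sketched below (binary inputs, invented vigilance setting); it shows the incremental, non-forgetting behaviour the paper relies on, though VISINT's hierarchical spatial/temporal networks are far richer.

import numpy as np

def art1(patterns, vigilance=0.6, beta=1.0):
    """Minimal ART-1 clustering of binary patterns. Categories are created
    on the fly (no offline training), and committed categories are only
    ever refined, never erased, which is how ART avoids forgetting."""
    categories = []                          # binary weight vectors
    labels = []
    for p in patterns:
        p = np.asarray(p, dtype=bool)
        # try categories in order of the choice function |p&w| / (beta+|w|)
        order = sorted(range(len(categories)), key=lambda j: -(
            np.sum(p & categories[j]) / (beta + np.sum(categories[j]))))
        for j in order:
            match = np.sum(p & categories[j]) / max(np.sum(p), 1)
            if match >= vigilance:           # resonance: pattern fits
                categories[j] = p & categories[j]
                labels.append(j)
                break
        else:                                # no category matched: new one
            categories.append(p.copy())
            labels.append(len(categories) - 1)
    return labels, categories

labels, cats = art1([[1,1,0,0], [1,1,1,0], [0,0,1,1], [0,1,1,1]])
print(labels)   # -> [0, 0, 1, 1]: two categories learned online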
In previous work by the author, parameters across network protocol layers were selected as features in supervised algorithms that detect and identify certain intrusion attacks on wireless ad hoc sensor networks (WSNs) carrying multi-sensor data. The algorithms improved the residual performance of the intrusion prevention measures provided by any dynamic key-management schemes and trust models implemented among network nodes.

The approach of this paper does not train algorithms on the signatures of known attack traffic; instead, it is based on unsupervised anomaly detection techniques that learn the signature of normal network traffic. Unsupervised learning does not require the data to be labeled or to be purely of one type, i.e., normal or attack traffic. The approach can be augmented by adding any security attributes and quantified trust levels, established during data exchanges among nodes, to the set of cross-layer features from the WSN protocols.

A two-stage framework is introduced for the security algorithms to overcome the problems of input size and resource constraints. The first stage is an unsupervised clustering algorithm that reduces the payload of network data packets to a tractable size. The second stage is a traditional anomaly detection algorithm based on a variation of support vector machines (SVMs), whose efficiency is improved by the availability of data in the packet payload. In the first stage, selected algorithms are adapted to WSN platforms to meet system requirements for simple parallel distributed computation, distributed storage, and data robustness. A set of mobile software agents, acting like an ant colony in securing the WSN, is distributed at the nodes to implement the algorithms. The agents move among the layers involved in the network response to the intrusions at each active node and trustworthy neighborhood, collecting parametric values and executing assigned decision tasks. This minimizes the need to move large amounts of audit-log data through resource-limited nodes and locates routines closer to that data.

The performance of the unsupervised algorithms is evaluated against the network intrusions of black hole, flooding, Sybil, and other denial-of-service attacks in simulations of published scenarios. Results for scenarios with intentionally malfunctioning sensors show the robustness of the two-stage approach to intrusion anomalies.
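A compact sketch of such a two-stage pipeline, using k-means for the reduction stage and a one-class SVM for the anomaly stage (stand-ins for the adapted algorithms described above; the feature vectors here are random placeholders):

import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import OneClassSVM

# Stage 1: unsupervised clustering compresses raw per-packet features
# into distances to a small set of cluster centroids.
normal_traffic = np.random.rand(1000, 20)       # placeholder features
kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(normal_traffic)
compressed = kmeans.transform(normal_traffic)   # 20-D -> 8-D representation

# Stage 2: a one-class SVM learns the signature of normal traffic only;
# no labeled attack data is needed.
detector = OneClassSVM(kernel="rbf", nu=0.05).fit(compressed)

def is_anomalous(packet_features):
    z = kmeans.transform(packet_features.reshape(1, -1))
    return detector.predict(z)[0] == -1          # -1 marks an outlier

print(is_anomalous(np.random.rand(20)))          # typical traffic: usually False
print(is_anomalous(np.full(20, 10.0)))           # far outside training: likely True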
Wireless sensor networks have become viable solutions for many commercial and military applications. This research focuses on utilizing the I-TRM to develop an architecture that supports adaptive, self-healing, and self-aware intelligent wireless sensor networks capable of supporting mobile nodes. Sensor subsystems are crucial in the development of projects to test complex systems such as the Future Combat System, a multi-layered system consisting of soldiers and 18 subsystems connected by a network. The proposed architecture utilizes the Sensor Web Enablement (SWE), a standard for sensor networks being developed by the Open Geospatial Consortium (OGC), and the Integrated Technical Reference Model (I-TRM), a multi-layered technical reference model consisting of a behavior-centric technical reference model, an information-centric technical reference model, and a control technical reference model. The designed architecture has been implemented on MPR2400CA motes using the nesC programming language. Preliminary results show that the architecture meets the needs of systems such as the Future Combat System. The architecture supports standard and tailored sensors as well as mobile and immobile sensor nodes, and it is scalable. Also, functionality was implemented that produces adaptive, self-healing, and self-aware behavior in the wireless sensor network.
Daily sensor data volumes are increasing from gigabytes to multiple terabytes, but the manpower and resources needed to analyze the growing amount of data are not increasing at the same rate. Current volumes of diverse data, both live streaming and historical, are not fully analyzed. Analysts are left mostly to analyzing the individual data sources manually, which is both time consuming and mentally exhausting, and expanding data collections only exacerbate the problem. Improved data management techniques and analysis methods are required to process the increasing volumes of historical and live streaming data sources simultaneously, to reduce an analyst's decision response time, and to enable more intelligent and immediate situation awareness. This paper describes the Sensor Data and Analysis Framework (SDAF) system, built to provide analysts with the ability to pose integrated queries over diverse live and historical data sources and to plug in needed algorithms for upstream processing and filtering. The SDAF system was inspired by input and feedback from field analysts and experts. This paper presents SDAF's capabilities, its implementation, and the reasoning behind the implementation decisions. Finally, lessons learned from preliminary tests and deployments are captured for future work.
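A toy version of the plug-in idea (invented class and method names, not SDAF's actual interfaces), in which data sources and filter algorithms are registered against a common query entry point:

from typing import Callable, Dict, Iterable, List

class QueryFramework:
    """Toy registry that fans one query out over live and historical
    sources and applies pluggable filter algorithms to the results."""
    def __init__(self):
        self.sources: Dict[str, Callable[[str], Iterable[dict]]] = {}
        self.filters: List[Callable[[dict], bool]] = []

    def register_source(self, name, query_fn):
        self.sources[name] = query_fn

    def register_filter(self, predicate):
        self.filters.append(predicate)

    def query(self, q: str) -> List[dict]:
        hits = [rec for fn in self.sources.values() for rec in fn(q)]
        return [r for r in hits if all(f(r) for f in self.filters)]

fw = QueryFramework()
fw.register_source("historical", lambda q: [{"src": "db", "text": q}])
fw.register_source("live", lambda q: [{"src": "stream", "text": q}])
fw.register_filter(lambda r: r["src"] != "noisy-feed")   # upstream filter
print(fw.query("vehicle sightings near checkpoint 4"))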
In this paper, we describe the problem of efficiently supplying high-level fusion services (situation and impact assessment) with adequate information using semantic technology, and we formulate an optimization problem version of it. We begin by discussing situation awareness and the need for computer tools that assist human analysts and decision makers with their sense-making. Such tools are necessary in part because of the vast amount of information that is available for analysis in today's command and control systems: the human operators need help to sort out the relevant parts. This kind of filtering requirement is, however, not limited to humans: automatic or semi-automatic fusion tools also need to limit the information they use in their processing. Simple filtering of this kind could be based on geographical location, but as the number of advanced fusion services used in the command and control system increases, more advanced techniques need to be used. We describe the information supply process when dealing with several (possibly heterogeneous) sources of differing quality, and we introduce the concepts of information view and information scope. We describe how semantic queries can be used to achieve such filtering, and in particular describe an implementation of this for Impactorium, a framework tool for situation and impact assessment developed by FOI. The threat models in Impactorium previously relied solely on simple indicator tags for information supply; this can be done more robustly by adding semantic queries to the threat models. The paper concludes with a summary and some discussion of future work in this area.
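As an illustration of semantic filtering (using rdflib and an invented miniature ontology, not Impactorium's actual models), a SPARQL query can stand in for an indicator by selecting exactly the observations an assessment service needs:

from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/fusion#")   # hypothetical ontology
g = Graph()
g.add((EX.truck17, RDF.type, EX.Vehicle))
g.add((EX.truck17, EX.observedNear, EX.checkpoint4))
g.add((EX.truck17, EX.speed, Literal(85)))
g.add((EX.car3, RDF.type, EX.Vehicle))
g.add((EX.car3, EX.speed, Literal(40)))

# A semantic query replaces a simple indicator tag: fetch only the
# observations relevant to a "speeding vehicle near checkpoint" indicator.
results = g.query("""
    PREFIX ex: <http://example.org/fusion#>
    SELECT ?v ?s WHERE {
        ?v a ex:Vehicle ;
           ex:observedNear ex:checkpoint4 ;
           ex:speed ?s .
        FILTER (?s > 60)
    }""")
for row in results:
    print(row.v, row.s)      # only truck17 qualifies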
To evaluate security systems, the observer must be analyzed relative to the area of observation and the targets expected to be observed therein. The algorithm constructs a probability model for detection, in both spatial and temporal terms, within the area of observation. Targets must be identified with characteristics such as size and event duration. Additional criteria from a physical security risk assessment can characterize the area of observation and the expected targets. Inputs for the equations include base terrain coordinates in a regularly gridded system, expected view-obstruction changes across a time period, sensor and recording limitations, and events of interest. An event is an action of a physical nature whose occurrence the observation platform is expected to detect, corroborate, or validate. Temporal and spatial models of observation system weaknesses are generated, and optimal sensor positioning is computed under constraints. Incorporating this into the results allows for the consideration that a sensor system may provide only limited characteristics of an event to help identify objects or individuals within the scene. Consideration is given to the ability to immediately respond to an event versus placing reliance on the system for evidentiary purposes. The rigorous algorithm can be used to analyze physical site security, to identify improvements in observation platforms, and to compute optimal sensor placement. Noted constraints of the current research include optical sensors, predictable obstruction causes, and sensor placement with characterized pan-tilt-zoom functionality.
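One standard way to compute sensor positioning under such a probability model is a greedy coverage heuristic over the gridded area; the sketch below is a generic illustration, not the paper's algorithm.

import numpy as np

def greedy_placement(p_detect, n_sensors):
    """Greedy coverage heuristic on a gridded area.

    p_detect : array of shape (n_candidate_sites, n_grid_cells) giving the
               detection probability each candidate sensor site provides
               for each terrain cell (from the spatial model).
    Returns the chosen site indices.
    """
    n_sites, n_cells = p_detect.shape
    p_miss = np.ones(n_cells)            # P(no chosen sensor detects the cell)
    chosen = []
    for _ in range(n_sensors):
        # residual miss mass if each remaining site were added next
        gains = ((1 - p_detect) * p_miss).sum(axis=1)
        gains[chosen] = np.inf           # do not reuse a site
        best = int(np.argmin(gains))
        chosen.append(best)
        p_miss *= (1 - p_detect[best])
    return chosen

p = np.random.rand(20, 400) * 0.6        # 20 candidate sites, 20x20 grid
print(greedy_placement(p, n_sensors=3))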
Supporting human decision making in tasks such as disaster planning and time-sensitive targeting is challenging because of the breadth and depth of knowledge that goes into the decision-making process and the need to reason with this knowledge within tight time constraints. Ontologies are well suited for representing the concepts that humans use in describing the domain of interest. However, ontologies can be costly to develop and, by themselves, are inadequate to capture the kinds of decision-making knowledge that arise in practice, for instance, knowledge that refers to multiple ontologies or to established precedent. Such decision-making knowledge can be represented using a knowledge representation formalism that we call decision rules. These decision rules are similar to the rules used in rule-based systems but can (a) include primitives from multiple ontologies as well as primitives that are defined by algorithms running outside the rule framework, (b) be time dependent, and (c) incorporate default assumptions. We report on our ongoing experience in using such a combination of ontologies and decision rules in building a decision support application for time-sensitive targeting.
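A minimal sketch of what such a rule object might look like (hypothetical names and structure, not the authors' formalism), showing the three listed capabilities:

import time
from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class DecisionRule:
    """Illustrative rule shape: (a) conditions may call out to external
    algorithms, (b) validity is time dependent, (c) unknown facts fall
    back to default assumptions."""
    condition: Callable[[Dict], bool]
    action: str
    valid_from: float = 0.0
    valid_until: float = float("inf")
    defaults: Dict = field(default_factory=dict)

    def fire(self, facts, now=None):
        now = time.time() if now is None else now
        if not (self.valid_from <= now <= self.valid_until):
            return None                       # rule not currently in force
        merged = {**self.defaults, **facts}   # defaults fill missing facts
        return self.action if self.condition(merged) else None

weather_risk = lambda f: f["wind_kts"] > 30   # stands in for an external algorithm
rule = DecisionRule(
    condition=lambda f: f["target_confirmed"] and not weather_risk(f),
    action="request strike approval",
    defaults={"wind_kts": 0},                 # assume calm if no report
)
print(rule.fire({"target_confirmed": True}))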
Context plays a significant role in situation resolution by intelligent agents (human or machine) by affecting how the
situations are recognized, interpreted, acted upon or predicted. Many definitions and formalisms for the notion of context
have emerged in various research fields including psychology, economics and computer science (computational
linguistics, data management, control theory, artificial intelligence and others). In this paper we examine the role of
context in situation management, particularly how to resolve situations that are described by using fuzzy (inexact)
relations among their components. We propose a language for describing context sensitive inexact constraints and an
algorithm for interpreting relations using inexact (fuzzy) computations.
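A toy illustration of a context-sensitive inexact constraint (invented functions, not the proposed language): the fuzzy relation "near" is parameterized by a scale drawn from the context, and a conjunction of constraints is interpreted with the standard fuzzy minimum.

def near(distance_m, context_scale_m):
    """Context-sensitive fuzzy relation: how 'near' two objects are
    depends on the scale supplied by the context."""
    x = distance_m / context_scale_m
    return max(0.0, 1.0 - x)          # membership degree in [0, 1]

def evaluate(constraints, context):
    """Interpret a conjunction of inexact constraints as the minimum
    of their membership degrees (a standard fuzzy AND)."""
    return min(c(context) for c in constraints)

urban = {"scale_m": 50.0}
maritime = {"scale_m": 5000.0}
constraint = lambda ctx: near(400.0, ctx["scale_m"])
print(evaluate([constraint], urban))      # 400 m is not 'near' in a city: 0.0
print(evaluate([constraint], maritime))   # 400 m is quite 'near' at sea: 0.92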
When to implement a standard, and how much benefit would result from its implementation, is often a seat-of-the-pants value judgment. We address the lack of cost/benefit metrics for interoperability standards by presenting a generalized model of the interoperability problem that defines the tasks required to implement an NxN matrix of interoperating system types. The model is then used to assess the work load required to achieve interoperability and to quantify the extent to which the introduction of standards reduces that work load as a function of delineated standards characteristics. Characteristics such as format, execution, speed, bandwidth, and, most notably, knowledge definition mechanisms are delineated. Standards effectiveness in terms of task costs is then estimated as a function of standards characteristics, latent ambiguities, and the number of interoperating nodes. Use case studies of several standards, and guidelines for evaluating standards effectiveness, are also discussed.
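The flavor of the workload argument can be seen in a common back-of-the-envelope comparison (not necessarily the paper's model): pairwise translators grow quadratically in the number of system types, while a shared standard grows linearly.

def translators_needed(n_system_types, with_standard):
    """Without a standard, every ordered pair of system types needs its
    own translator; with a standard, each type needs only one encoder
    and one decoder to and from the common format."""
    if with_standard:
        return 2 * n_system_types
    return n_system_types * (n_system_types - 1)

for n in (5, 10, 20):
    print(n, translators_needed(n, False), translators_needed(n, True))
# 5  ->  20 vs 10
# 10 ->  90 vs 20
# 20 -> 380 vs 40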
In this paper, the controllability of a Mecanum omnidirectional vehicle (ODV) is investigated. An adaptive drive controller is developed that guides the ODV over irregular and unpredictable driving surfaces. Using sensor fusion with appropriate filtering, the ODV obtains an accurate perception of the conditions it encounters and then adapts to them to robustly control its motion. Current applications of Mecanum ODVs are designed for use on smooth, regular driving surfaces and do not actively detect the characteristics of disturbances in the terrain. The intention of this work is to take advantage of the mobility of ODVs in environments where they were not originally intended to be used. The methods proposed in this paper were implemented in hardware on an ODV. The experimental results did not match the design expectations, owing to incorrect assumptions and over-simplification of the system model. Future work will concentrate on developing more robust control schemes to account for the unknown nonlinear dynamics inherent in the system.
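For reference, the standard inverse kinematics of a four-wheel Mecanum platform with 45° rollers, which such a drive controller sits on top of (textbook form; sign conventions depend on wheel mounting, and the geometry values here are placeholders):

def mecanum_wheel_speeds(vx, vy, omega, r=0.05, L=0.2, W=0.15):
    """Inverse kinematics for a 4-wheel Mecanum platform.

    vx, vy : desired body velocities [m/s]; omega : yaw rate [rad/s]
    r : wheel radius [m]; L, W : half wheelbase / half track width [m]
    Returns wheel angular velocities [rad/s]: FL, FR, RL, RR.
    """
    k = L + W
    fl = (vx - vy - k * omega) / r
    fr = (vx + vy + k * omega) / r
    rl = (vx + vy - k * omega) / r
    rr = (vx - vy + k * omega) / r
    return fl, fr, rl, rr

# pure sideways translation: the wheels spin in an alternating pattern
print(mecanum_wheel_speeds(vx=0.0, vy=0.5, omega=0.0))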
A chipless sensor tag-based radio frequency identification (RFID) technology that allows wireless collection of information from the environment, and the monitoring and accessing of that information through cyberspace, is presented. The developed system consists of a cyber-enabled RFID reader and passive chipless RFID sensor tags. The reader comprises an analog part that wirelessly communicates with the sensor tags and a single-board computer (SBC) part. Each passive chipless sensor tag consists of a microstrip antenna and a sensor. The sensor information is amplitude modulated onto the backscattered signal of the tag. The analog reader part receives the backscattered signal and feeds it to the SBC, which encodes the sensor information into a 96-bit serialized global trade item number (SGTIN-96) electronic product code (EPC). Moreover, the SBC makes the information available on a cyberspace-accessible secure user interface. The reported system has been applied to temperature sensing, where a change in temperature at the tag ranging from 27°C to 140°C resulted in a 28% amplitude change at the analog part of the reader. The temperature at the tag has been monitored by accessing the reader through cyberspace using a web-based user interface developed for the SBC.
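Assuming the reported 28% amplitude change over 27-140°C is approximately linear (an assumption; a real deployment would use a calibration curve), the SBC-side conversion is a simple inversion:

def amplitude_to_temperature(rel_amplitude,
                             t_min=27.0, t_max=140.0,
                             a_min=1.00, a_max=0.72):
    """Invert the tag's amplitude response to a temperature estimate.
    Assumes a linear response over the reported range; a_min/a_max are
    the backscatter amplitudes (normalized to the 27 degC value) at the
    two calibration temperatures.
    """
    frac = (rel_amplitude - a_min) / (a_max - a_min)  # 0 at 27 C, 1 at 140 C
    return t_min + frac * (t_max - t_min)

print(amplitude_to_temperature(0.86))  # midway amplitude -> about 83.5 degC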
We develop a hierarchical immunological model to detect bot activities in a computer network. In the proposed model, antibody (detector)-antigen (foreign object) reactions are defined using a negative selection based approach, and system properties are defined by various temporal as well as non-temporal system features. The theory of sequential hypothesis testing has been used in the literature for identifying spatial-temporal correlations among malicious remote hosts and among the bots within a botnet; we use it here for combining multiple immunocomputing based decisions as well. A negative selection based approach defines a self and helps identify non-selves. We define non-selves with respect to various system characteristics and then use different combinations of non-selves to design bot detectors. Each detector operates at the client sites of the network under surveillance. A match with any of the detectors suggests the presence of a bot. Preliminary results suggest that the proposed model based solutions can improve the identification of bot activities.
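The negative selection core can be sketched as follows (toy features and radii; the paper's detectors combine multiple system characteristics): candidate detectors are generated at random and retained only if they match no "self" sample.

import random

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def train_detectors(self_samples, n_detectors, radius, dim, rng):
    """Negative selection: keep only random candidates that do NOT
    match any 'self' (normal behaviour) sample."""
    detectors = []
    while len(detectors) < n_detectors:
        cand = [rng.random() for _ in range(dim)]
        if all(dist(cand, s) > radius for s in self_samples):
            detectors.append(cand)
    return detectors

def is_nonself(sample, detectors, radius):
    """A match with any detector suggests non-self (possible bot activity)."""
    return any(dist(sample, d) <= radius for d in detectors)

rng = random.Random(0)
normal = [[rng.gauss(0.5, 0.05) for _ in range(3)] for _ in range(200)]
detectors = train_detectors(normal, n_detectors=50, radius=0.15, dim=3, rng=rng)
print(is_nonself([0.5, 0.5, 0.5], detectors, 0.15))   # self region: normally False
print(is_nonself([0.95, 0.1, 0.9], detectors, 0.15))  # far from self: likely True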
Detection and patching of coverage holes in Wireless Sensor Networks (WSNs) are important measures of Quality of Service (QoS) for security and other applications that emphasize sensor network coverage. In this paper, we model a WSN using simplicial complexes built from its communication graph, by which the network can be represented as connections of sensor nodes without knowing the exact locations of the nodes. Thus, the coverage problem is converted into a connectivity problem under some assumptions presented in the paper. We discuss two major topics in this paper, namely sensor network coverage hole detection and patching. We present a novel, decentralized, coordinate-free, node-based coverage hole detection algorithm. The algorithm can be implemented on a single node using connectivity information gathered from its one-hop neighbors. Thus, the coverage hole detection algorithm can be run on individual nodes and does not require time-consuming, centralized data processing. The hole-patching algorithm is based on the concept of the perpendicular bisector line: every hole-boundary edge has a corresponding perpendicular bisector, and new sensor nodes are deployed on the hole-boundary bisectors. The deployment of new sensor nodes maintains network connectivity while reducing coverage holes.
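The patching geometry is straightforward to sketch: given a hole-boundary edge, a new node is placed on the edge's perpendicular bisector, offset from the midpoint toward the hole (the offset value here is an illustrative choice):

def bisector_patch_point(a, b, offset):
    """Place a new node on the perpendicular bisector of hole-boundary
    edge (a, b), 'offset' meters from the edge midpoint; the sign of
    'offset' selects which side of the edge (i.e., toward the hole)."""
    mx, my = (a[0] + b[0]) / 2, (a[1] + b[1]) / 2     # edge midpoint
    ex, ey = b[0] - a[0], b[1] - a[1]                 # edge direction
    length = (ex * ex + ey * ey) ** 0.5
    nx, ny = -ey / length, ex / length                # unit normal to edge
    return (mx + offset * nx, my + offset * ny)

# boundary edge between two nodes; deploy the patch 5 m into the hole
print(bisector_patch_point((0.0, 0.0), (10.0, 0.0), offset=5.0))
# -> (5.0, 5.0): on the edge's perpendicular bisector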
Detecting network intruders and malicious software is a significant problem for network administrators and security
experts. New threats are emerging at an increasing rate, and current signature and statistics-based techniques are failing
to keep pace. Intelligent systems that can adapt to new threats are needed to mitigate these new strains of malware as
they are released. This research develops a system that uses contextual relationships and information across different
layers of abstraction to detect malware based on its qualia, or essence. By looking for the underlying concepts that make
a piece of software malicious, this system avoids the pitfalls of static solutions that focus on predefined signatures or
anomaly thresholds. If successful, this type of qualia-based system would provide a framework for developing intelligent
classification and decision-making systems for any number of application areas.