Novel methods of detecting cyber attacks on networks have been developed that are able to detect an increasingly diverse variety of malicious cyber events. However, this has only added to the information burden on the network analyst. The integration of distributed evidence from multiple sources is missing, or ad hoc at best. Only by fusing the multi-source evidence can we reason at a higher semantic level to detect and identify attacks and attackers. Further, integration at a higher semantic level will reduce the cognitive load on the security officer and make reasonable responses possible. This paper presents an overview of the D-Force system, which uses a Bayesian evidential framework to fuse multi-source evidence in a network in order to detect and recognize attacks. Attack hypotheses are generated as a result of evidence at the different network and host sensors. The hypotheses are verified or denied with additional evidence. Based on our initial experiments and tests, the D-Force system promises to be a powerful tool in the information security officer's arsenal.
Situation awareness has emerged as an important concept in military and public security environments. Situation analysis is defined as a process, the examination of a situation, its elements, and their relations, to provide and maintain a product, i.e., a state of situation awareness for the decision maker(s). It is well established that information fusion, defined as the process of utilizing one or more information sources over time to assemble a representation of aspects of interest in an environment, is a key enabler to meeting the demanding requirements of situation analysis. However, although information fusion is important, developing and adopting a knowledge-centric view of situation analysis should provide a more holistic perspective of this process. This is based on the notion that awareness ultimately has to do with having knowledge of something. Moreover, not all of the situation elements and relationships of interest are directly observable. Those aspects of interest that cannot be observed must be inferred, i.e., derived as a conclusion from facts or premises, or by reasoning from evidence. This paper discusses aspects of knowledge, and how it can be acquired from experts, formally represented and stored in knowledge bases to be exploited by computer programs, and validated. Knowledge engineering is reviewed, with emphasis given to cognitive and ontological engineering. Facets of reasoning are discussed, along with inferencing methods that can be used in computer applications. Finally, combining elements of information fusion and knowledge-based systems, an overall approach and framework for the building of situation analysis support systems is presented.
To achieve greater situation awareness, it is necessary to identify relations between individual entities and their immediate surroundings, neighboring entities, and important landmarks. The idea is that long-term intentions and situations can be identified from patterns of more rudimentary behavior: in essence, situations formed by combinations of different basic relationships. In this paper we present a rule-based situation assessment system that utilizes both COTS and in-house software. It is built upon an agent framework, which speeds up development since it takes care of many of the infrastructural issues of such a communication-intensive application, and a rule-based reasoner that can reason about situations that develop over time. The situation assessment system is designed to be simple, but structurally close to an operational system, with connections to outside data sources, graphical editors, and data displays. It was developed with a specific, simple sea-surveillance scenario in mind, which we also present, but the ideas behind the system are general and valid for other areas as well.
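The paper's reasoner and agent framework are COTS/in-house components not reproduced here. As a minimal sketch under assumed data structures (the vessel track, zone distance, and thresholds are all hypothetical), a temporal rule of the kind described might look like this:

```python
from dataclasses import dataclass

@dataclass
class Observation:
    vessel_id: str
    t: float              # time in seconds
    dist_to_zone: float   # km to a protected zone (a basic relationship)

def assess(track: list[Observation], radius_km=2.0, min_dwell_s=600.0):
    """Rule: a vessel within radius_km of the zone for min_dwell_s is loitering.
    (Sketch ignores gaps in the dwell interval.)"""
    inside = [o for o in track if o.dist_to_zone <= radius_km]
    if inside and inside[-1].t - inside[0].t >= min_dwell_s:
        return f"LOITERING: vessel {inside[0].vessel_id} near protected zone"
    return None

track = [Observation("V42", t, 1.5) for t in range(0, 700, 100)]
print(assess(track))
```

The point of the sketch is the temporal aspect: the situation is asserted only when a basic spatial relationship persists over time.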
Situation Awareness (SA) problems all require an understanding of current activities, an ability to anticipate what may happen next, and techniques to analyze the threat or impact of current activities and predictions. These processes of SA are common regardless of the domain and can be applied to the detection of cyber attacks. This paper will describe the application of a SA framework to implementing Cyber SA, describe some metrics for measuring and evaluating systems implementing Cyber SA, and discuss ongoing work in this area. We conclude with some ideas for future activities.
Information Fusion Engine for Real-time Decision Making (INFERD) is a tool that was developed to supplement current graph matching techniques in Information Fusion models. Based on sensory data and a priori models, INFERD dynamically generates, evolves, and evaluates hypotheses on the current state of the environment. The a priori models developed are hierarchical in nature, lending themselves to a multi-level Information Fusion process whose primary output provides situational awareness of the environment of interest in the context of the models running. In this paper we look at INFERD's multi-level fusion approach and provide insight into inherent problems in the approach, such as fragmentation, and the research being undertaken to mitigate those deficiencies. Because of the large variance of data in disparate environments, the awareness of situations in those environments can be drastically different. To accommodate this, the INFERD framework provides support for plug-and-play fusion modules which can be developed specifically for domains of interest. However, because the models running in INFERD are graph based, some default measurements can be provided and are discussed in the paper. Among these are a Depth measurement to determine how much danger is presented by the action taking place, a Breadth measurement to gauge the scale of an attack that is currently happening, and a Reliability measure to tell the user the credibility of a particular hypothesis. All of these results are demonstrated in the Cyber domain, which recent research has shown to be a well-defined and bounded area, so that new models and algorithms can be developed and evaluated.
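INFERD's actual graph measurements are only named in the abstract. The following hypothetical sketch shows one plausible way depth, breadth, and reliability could be computed for an attack track matched against a template graph; the stage ordering, host counts, and confidence handling are invented for illustration:

```python
# Hypothetical attack template: stages in escalating order of danger.
STAGES = ["recon", "intrusion", "privilege_escalation", "exfiltration"]

def measures(matched_alerts, total_hosts):
    """matched_alerts: list of (stage, host, confidence) tuples."""
    stages = {s for s, _, _ in matched_alerts}
    hosts = {h for _, h, _ in matched_alerts}
    # Depth: how far along the template the track has progressed.
    depth = max(STAGES.index(s) for s in stages) / (len(STAGES) - 1)
    # Breadth: fraction of the protected network the track touches.
    breadth = len(hosts) / total_hosts
    # Reliability: mean confidence of the supporting alerts.
    reliability = sum(c for *_, c in matched_alerts) / len(matched_alerts)
    return depth, breadth, reliability

print(measures([("recon", "h1", 0.9), ("intrusion", "h2", 0.8)], total_hosts=10))
```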
Intent inference involves analyzing the actions and activities of an adversarial force or target of interest to reach a conclusion (prediction) about its purpose. In this paper, we report on our research on intent inference to determine the likelihood that an attack aircraft tracked by a military surveillance system is about to deliver its weapon. Effective intent inference greatly enhances the defense capability of a military force in taking preemptive action against potential adversaries. It serves as early warning and assists the commander in decision making. For an air defense system, the ability to accurately infer the likelihood of a weapon delivery by an attack aircraft is critical, and the inference must also be timely. We propose a solution based on the analysis of flight profiles for offset pop-up delivery. Simulation tests are carried out on flight profiles generated using different combinations of delivery parameters. In each simulation test, the state vectors of the tracked aircraft are updated via the Interacting Multiple Model filter. Relevant variables of the filtered track (flight trajectory) are used as inputs to a Mamdani-type fuzzy inference system, whose output is the inferred possibility of the tracked aircraft carrying out a pop-up delivery. We present experimental results to support our claim that the proposed solution is feasible and provides timely inference that will assist in the decision making cycle.
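The paper's fuzzy sets and rule base are not given in the abstract. As a minimal Mamdani sketch with invented input variables and rules (dive angle and range stand in for the paper's filtered-track variables), the min-implication / max-aggregation / centroid pipeline looks like this:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet at a, c and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Universe of the output: possibility of a pop-up delivery, in [0, 1].
y = np.linspace(0.0, 1.0, 201)

def infer(dive_angle_deg, range_km):
    # Hypothetical input memberships (not the paper's actual fuzzy sets).
    steep = tri(dive_angle_deg, 10, 30, 50)   # "steep dive"
    close = tri(range_km, 0, 2, 6)            # "close range"
    # Rule 1: steep AND close -> possibility HIGH (min implication).
    r1 = np.minimum(min(steep, close), tri(y, 0.5, 1.0, 1.5))
    # Rule 2: NOT steep -> possibility LOW.
    r2 = np.minimum(1.0 - steep, tri(y, -0.5, 0.0, 0.5))
    agg = np.maximum(r1, r2)                  # max aggregation of rules
    return (y * agg).sum() / agg.sum()        # centroid defuzzification

print(f"pop-up delivery possibility = {infer(dive_angle_deg=35, range_km=3):.2f}")
```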
The ability of contemporary military commanders to estimate and understand complicated situations already suffers from information overload, and the situation can only grow worse. We describe a prototype application that uses abductive inferencing to fuse information from multiple sensors to evaluate the evidence for higher-level hypotheses that are close to the levels of abstraction needed for decision making (approximately JDL levels 2 and 3). Abductive inference (abduction, inference to the best explanation) is a pattern of reasoning that occurs naturally in diverse settings such as medical diagnosis, criminal investigations, scientific theory formation, and military intelligence analysis. Because abduction is part of common-sense reasoning, implementations of it can produce reasoning traces that are readily understandable to humans. Automated abductive inferencing can be deployed to augment human reasoning, taking advantage of computation to process large amounts of information and to bypass the limits of human attention and short-term memory.
We illustrate the workings of the prototype system by describing an example of its use for small-unit military operations in an urban setting. Knowledge was encoded as it might be captured prior to engagement from a standard military decision making process (MDMP) and analysis of commander's priority intelligence requirements (PIR). The system is able to reasonably estimate the evidence for higher-level hypotheses based on information from multiple sensors. Its inference processes can be examined closely to verify correctness. Decision makers can override conclusions at any level and changes will propagate appropriately.
The changing face of contemporary military conflicts has forced a major shift of focus in tactical planning and evaluation from the classical Cold War battlefield to an asymmetric guerrilla-type warfare in densely populated urban areas. The new arena of conflict presents unique operational difficulties due to factors like complex mobility restrictions and the necessity to preserve civilian lives and infrastructure. In this paper we present a novel method for autonomous agent control in an urban environment. Our approach is based on fusing terrain information and agent goals for the purpose of transforming the problem of navigation in a complex environment with many obstacles into the easier problem of navigation in a virtual obstacle-free space. The main advantage of our approach is its ability to act as an adapter layer for a number of efficient agent control techniques which normally show poor performance when applied to an environment with many complex obstacles. Because of the very low computational and space complexity at runtime, our method is also particularly well suited for simulation or control of a huge number of agents (military as well as civilian) in a complex urban environment where traditional path-planning may be too expensive or where a just-in-time decision with hard real-time constraints is required.
This paper presents the progress of an ongoing research effort in multisource information fusion for biodefense decision support. The effort concentrates on a novel machine-intelligence hybrid-of-hybrids decision support architecture, termed FLASH (Fusion, Learning, Adaptive Super-Hybrid), that we have proposed. The highlights of FLASH discussed in the paper include its cognitive-processing orientation and its hybrid nature, involving heterogeneous multiclassifier machine learning and approximate reasoning paradigms. Selected specifics of the FLASH internals, such as its feature selection techniques, supervised learning, clustering, recognition and reasoning methods, and their integration, are discussed. Results to date are presented, including background type determination and bioattack detection computational experiments using data obtained with a multisensor fusion testbed we have also developed. The processing of imprecise information originating from sources other than sensors is considered. Finally, the paper discusses the applicability of FLASH and its methods to complex battlespace management problems such as course-of-action decision support.
This paper addresses road travel time estimation on an urban axis using a classification method based on evidence theory. The travel time (TT) indicator can be used either for traffic management or for driver information. The information used to estimate travel time (induction loop sensors, cameras, probe vehicles, ...) is complementary and redundant, so multi-sensor data fusion strategies must be implemented. The selected framework is evidence theory, which better accounts for the imprecision and uncertainty of multi-source information. Two strategies were implemented. The first is classifier fusion, where each information source is considered a classifier. The second is distance-based classification for belief function modelling. Results of these approaches on data collected on an urban axis in the south of France show that the fusion strategies outperform the individual sources in this application.
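The abstract does not give its belief-function models. As a small illustration of the underlying machinery only, here is Dempster's rule of combination for two sources assigning mass over travel-time classes (the class labels and mass values are invented):

```python
from itertools import product

# Frame of discernment: hypothetical travel-time classes.
FREE, DENSE, JAM = "free", "dense", "congested"
THETA = frozenset({FREE, DENSE, JAM})

# Basic belief assignments from two hypothetical sources (loop, probe).
m1 = {frozenset({FREE}): 0.6, frozenset({FREE, DENSE}): 0.3, THETA: 0.1}
m2 = {frozenset({DENSE}): 0.5, frozenset({FREE, DENSE}): 0.4, THETA: 0.1}

def dempster(m1, m2):
    """Dempster's rule: conjunctive combination, renormalized by conflict."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

for subset, mass in dempster(m1, m2).items():
    print(set(subset), round(mass, 3))
```

Mass assigned to non-singleton sets is what lets the framework represent imprecision explicitly, which is the property the abstract highlights.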
A system was designed to locate and correct errors in large transcribed corpora. The program, called CommonSense, relies on a set of rules that identify mistakes related to homonyms, words with distinct definitions but identical pronunciations. The system was run on the 1996 and 1997 Broadcast News Speech Corpora, and correctly identified more than 400 errors in these data. Future work may extend CommonSense to automatically correct errors in hypothesis files created as the output of speech recognition systems.
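CommonSense's actual rule set is not published in the abstract. The toy rules below illustrate the idea of flagging a homonym that conflicts with its local context; the word lists are invented:

```python
# Toy homonym rules for transcribed speech: the preceding word strongly
# suggests which spelling of a homonym set is correct.
RULES = {
    ("know", "no"): {"i": "know", "you": "know"},     # "i no" -> "i know"
    ("hear", "here"): {"can": "hear"},                # "can here" -> "can hear"
}

def check(tokens):
    """Flag token i when its left context suggests a different homonym."""
    fixes = []
    for i in range(1, len(tokens)):
        for homonyms, context in RULES.items():
            want = context.get(tokens[i - 1])
            if want and tokens[i] in homonyms and tokens[i] != want:
                fixes.append((i, tokens[i], want))
    return fixes

print(check("i no that is true".split()))   # -> [(1, 'no', 'know')]
```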
Security systems increasingly rely on Automated Video Surveillance (AVS) technology. In particular, digital video lends itself to internet and local communications, remote monitoring, and computer processing. AVS systems can perform many tedious and repetitive tasks currently performed by trained security personnel. AVS technology has already made significant steps towards automating basic security functions such as motion detection, object tracking, and event-based video recording. However, there are still many problems associated with these automated functions that need to be addressed further, for example the high false alarm rate and the loss of track under total or partial occlusion when operating over a wide range of operational parameters (day, night, sunshine, cloud, fog, range, viewing angle, clutter, etc.). Current surveillance systems work well only under a narrow range of operational parameters and therefore need to be hardened against a wide range of operational conditions. In this paper, we present a multi-spectral fusion approach to perform accurate pedestrian segmentation under varying operational parameters. Our fusion method combines the "best" detection results from the visible images with the "best" from the thermal images. Commonly, motion detection results in visible images are easily affected by noise and shadows, while objects in the thermal image are relatively stable but may be missing parts that thermally blend with the background. Our method makes use of the "best" object components and de-emphasizes the rest.
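The paper's component-selection scheme is not specified here. A minimal sketch, assuming registered binary foreground masks from each modality, is to suppress shadow-like visible pixels and let each modality fill the other's gaps:

```python
import numpy as np

def fuse_masks(vis_mask, vis_shadow, thermal_mask):
    """Fuse binary pedestrian masks from registered visible/thermal frames.

    vis_mask:     foreground from visible-band motion detection (bool array)
    vis_shadow:   pixels the visible detector marks as probable shadow
    thermal_mask: foreground from thermal segmentation (bool array)
    """
    vis_clean = vis_mask & ~vis_shadow   # drop shadow false positives
    # Keep a pixel if either cleaned modality supports it: thermal fills
    # shadowed regions, visible fills thermally blended body parts.
    return vis_clean | thermal_mask

vis = np.array([[1, 1, 0], [1, 0, 0]], bool)
shadow = np.array([[0, 1, 0], [0, 0, 0]], bool)
thermal = np.array([[1, 0, 0], [0, 0, 1]], bool)
print(fuse_masks(vis, shadow, thermal).astype(int))
```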
In this paper we utilize Projection-Slice Synthetic Discriminant Function (PSDF) filters in concert with an Independent Component Analysis technique to simultaneously reduce the data set that represents each training image and to emphasize subtle differences between the training images. These differences are encoded into the PSDF in order to improve the filter's sensitivity for recognizing and identifying protein images formed by cryo-electron microscopic imaging. The PSDF and Independent Component Analysis provide a basis not only for identifying the class of structures under consideration, but also for detecting the orientation of the structures in these images. The protein structures found in cryo-electron microscopic imaging represent a class of objects with low resolution and contrast and subtle variation. This poses a challenge in designing filters to recognize these structures, because false targets often have characteristics similar to the protein structures. Incorporating component analysis and eigenvalue conditioning into the filter provides an enhanced approach for de-correlating images prior to their incorporation into the filter. We present our method of filter synthesis and the results of applying this modified filter to a protein structure recognition problem.
Faculty from the University of Tennessee at Chattanooga and the University of Tennessee College of Medicine, Chattanooga Unit, have used data mining techniques and neural networks to examine a set of fourteen features, data items, and HUMINT assessments for 2,148 emergency room patients with symptoms possibly indicative of Acute Coronary Syndrome. Specifically, the authors have generated Bayesian networks describing linkages and causality in the data, and have compared them with neural networks. The data includes objective information routinely collected during triage and the physician's initial case assessment, a HUMINT appraisal. Both the neural network and the Bayesian network were used to fuse the disparate types of information with the goal of forecasting thirty-day adverse patient outcome. This paper presents details of the methods of data fusion including both the data mining techniques and the neural network. Results are compared using Receiver Operating Characteristic curves describing the outcomes of both methods, both using only objective features and including the subjective physician's assessment. While preliminary, the results of this continuing study are significant both from the perspective of potential use of the intelligent fusion of biomedical informatics to aid the physician in prescribing treatment necessary to prevent serious adverse outcome from ACS and as a model of fusion of objective data with subjective HUMINT assessment. Possible future work includes extension of successfully demonstrated intelligent fusion methods to other medical applications, and use of decision level fusion to combine results from data mining and neural net approaches for even more accurate outcome prediction.
Interest in the distribution of processing in unattended ground sensing (UGS) networks has resulted in new technologies and system designs targeted at reduction of communication bandwidth and resource consumption through managed sensor interactions. A successful management algorithm should not only address the conservation of resources, but also attempt to optimize the information gained through each sensor interaction so as to not significantly deteriorate target tracking performance. This paper investigates the effects of Distributed Cluster Management (DCM) on tracking performance when operating in a deployed UGS cluster. Originally designed to reduce communications bandwidth and allow for sensor field scalability, the DCM has also been shown to simplify the target tracking problem through reduction of redundant information. It is this redundant information that in some circumstances results in secondary false tracks due to multiple intersections and increased uncertainty during track initiation periods. A combination of field test data playback and Monte Carlo simulations are used to analyze and compare the performance of a distributed UGS cluster to that of an unmanaged centralized cluster.
A fusion approach in a query-based information system is presented. The system is designed for querying multimedia databases and is here applied to target recognition using heterogeneous data sources. The recognition process is coarse-to-fine, with an initial attribute estimation step followed by a matching step. Several sensor types and algorithms are involved in each of these two steps. The matching results are observed to be independent of the origin of the estimation results. This allows data to be distributed between algorithms in an intermediate fusion step, without risk of data incest, and increases the overall chance of recognizing the target. An implementation of the system is described.
We address the problem of selecting features to improve automated video tracking of targets that undergo multiple mutual occlusions. As targets are occluded, different feature subsets and combinations of those features are effective in identifying the target and improving tracking performance. We use Combinatorial Fusion Analysis to develop a metric for dynamically selecting the subset of features that will produce the most accurate tracking. In particular, we show that the combination of a pair of features A and B improves accuracy only if (a) A and B have relatively high performance, and (b) A and B are diverse. We present experimental results to illustrate the performance of the proposed metric.
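The paper's exact diversity measure comes from Combinatorial Fusion Analysis; the sketch below uses one common instantiation, rank correlation between the two features' scorings, together with the stated rule: combine A and B only when both perform well and they are diverse. The thresholds and scores are hypothetical:

```python
import numpy as np
from scipy.stats import spearmanr

def should_combine(scores_a, scores_b, perf_a, perf_b,
                   perf_min=0.7, diversity_min=0.3):
    """Combine features A and B only if both perform well and are diverse.

    scores_a/b: per-candidate scores from each feature on the same frame.
    perf_a/b:   measured tracking accuracy of each feature alone, in [0, 1].
    Diversity is taken here as 1 - rank correlation of the two scorings.
    """
    rho, _ = spearmanr(scores_a, scores_b)
    diversity = 1.0 - rho
    return perf_a >= perf_min and perf_b >= perf_min and diversity >= diversity_min

a = np.array([0.9, 0.1, 0.4, 0.7])
b = np.array([0.2, 0.8, 0.5, 0.6])
print(should_combine(a, b, perf_a=0.8, perf_b=0.75))   # -> True
```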
This paper describes technology being developed at 21st Century Technologies to automate Computer Network Operations (CNO). CNO refers to DoD activities related to attacking and defending computer networks (CNA & CND). Next-generation cyber threats are emerging in the form of powerful Internet services and tools that automate intelligence gathering, planning, testing, and surveillance. We focus on "search-engine hacks": queries that can retrieve lists of router/switch/server passwords, control panels, accessible cameras, software keys, VPN connection files, and vulnerable web applications. Examples include the "Titan Rain" attacks against DoD facilities and the Santy worm, which identifies vulnerable sites by searching Google for URLs containing application-specific strings. This trend will result in increasingly sophisticated and automated intelligence-driven cyber attacks, coordinated across multiple domains, that are difficult to defeat or even understand with current technology. One traditional method of CNO relies on surveillance detection as an attack predictor. Unfortunately, surveillance detection is difficult because attackers can perform search-engine-driven surveillance, such as with Google hacks, and avoid touching the target site. As a result, attack observables represent only about 5% of the attacker's total attack time and are inadequate to provide warning. In order to predict attacks and defend against them, CNO must employ more sophisticated techniques and work to understand the attacker's Motives, Means, and Opportunities (MMO). CNO must use automated reconnaissance tools, such as Google, to identify information vulnerabilities, and then utilize Internet tools to observe the intelligence gathering, planning, testing, and collaboration activities that represent the other 95% of the attacker's effort.
In this paper, a new family of approaches to fuse inconsistent knowledge sources is introduced in a standard logical setting. They combine two preference criteria to arbitrate between conflicting information: the minimization of falsified formulas and the minimization of the number of the different atoms that are involved in those formulas. Although these criteria exhibit a syntactical flavor, the approaches are semantically-defined.
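As a worked miniature of the two preference criteria (interpretations ranked first by how many source formulas they falsify, then by how many distinct atoms those falsified formulas involve), here is a brute-force model selector over a tiny propositional knowledge set. The encoding is my own illustration, not the paper's formalism:

```python
from itertools import product

# Tiny inconsistent knowledge set over atoms p, q, r.
# Each formula: (name, atoms it involves, predicate over an interpretation).
formulas = [
    ("p",      {"p"},      lambda i: i["p"]),
    ("p -> q", {"p", "q"}, lambda i: (not i["p"]) or i["q"]),
    ("not q",  {"q"},      lambda i: not i["q"]),
    ("r",      {"r"},      lambda i: i["r"]),
]

def rank(interp):
    falsified = [(name, atoms) for name, atoms, f in formulas if not f(interp)]
    n_falsified = len(falsified)
    n_atoms = len(set().union(*(a for _, a in falsified)) if falsified else set())
    return (n_falsified, n_atoms)   # lexicographic preference

best = min(
    (dict(zip("pqr", bits)) for bits in product([False, True], repeat=3)),
    key=rank,
)
print(best, rank(best))   # a model falsifying one single-atom formula wins
```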
Theoretically it is possible for two sensors to reliably send data at rates smaller than the sum of the necessary data rates for sending the data independently, essentially taking advantage of the correlation of sensor readings to reduce the data rate. In 2001, Caltech researchers Michelle Effros and Qian Zhao developed new techniques for data compression code design for correlated sensor data, which were published in a paper at the 2001 Data Compression Conference (DCC 2001). These techniques take advantage of correlations between two or more closely positioned sensors in a distributed sensor network. Given two signals, X and Y, the X signal is sent using standard data compression. The goal is to design a partition tree for the Y signal. The Y signal is sent using a code based on the partition tree. At the receiving end, if ambiguity arises when using the partition tree to decode the Y signal, the X signal is used to resolve the ambiguity. We have extended this work to increase the efficiency of the code search algorithms. Our results have shown that development of a highly integrated sensor network protocol that takes advantage of a correlation in sensor readings can result in 20-30% sensor data transport cost savings. In contrast, the best possible compression using state-of-the-art compression techniques that did not take into account the correlation of the incoming data signals achieved only 9-10% compression at most. This work was sponsored by MDA, but has very widespread applicability to ad hoc sensor networks, hyperspectral imaging sensors and vehicle health monitoring sensors for space applications.
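The Caltech code-design algorithms are beyond an abstract-length sketch. As an assumed illustration of the decoding idea only, the snippet below sends Y coarsely (several values share one codeword, as leaves of a partition) and resolves the ambiguity at the receiver by picking the candidate closest to the correlated side signal X:

```python
# Toy side-information decoding: a partition groups Y values that share a
# codeword; correlation with X (|X - Y| assumed small) disambiguates them.

partition = {            # codeword -> candidate Y values (hypothetical)
    0: [0, 4, 8],
    1: [1, 5, 9],
    2: [2, 6, 10],
    3: [3, 7, 11],
}

def encode_y(y):
    return y % 4         # 2 bits instead of log2(12) bits per sample

def decode_y(codeword, x_side):
    # Resolve the ambiguity: the true Y is the candidate nearest to X.
    return min(partition[codeword], key=lambda c: abs(c - x_side))

x, y = 6, 7              # correlated readings from two nearby sensors
cw = encode_y(y)
print(cw, decode_y(cw, x))   # -> 3 7
```

The rate saving comes from the coarse codeword; the partition must be designed so that candidates sharing a codeword are far enough apart for the side information to separate them, which is what the cited code-design techniques optimize.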
Change detection is an important task in remotely monitoring and diagnosing equipment and other processes. Specifically, early detection of differences that indicate abnormal conditions promises considerable savings by averting secondary damage and preventing system outage. Of course, accurate early detection has to be balanced against the successful rejection of false positive alarms. In noisy environments, such as aircraft engine monitoring, this proves to be a difficult undertaking for any one algorithm. In this paper, we investigate the performance improvement that can be gained by aggregating the information from a set of diverse change detection algorithms. Specifically, we examine a set of change detectors that utilize a variety of techniques, such as neural nets, random forests, and support vector machines, and that have different detection sensitivities. To aggregate their outputs we consider averaging schemes, a regression technique that operates well for time series, and a meta-classifier. We provide results using illustrative examples from aircraft engine monitoring.
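The paper's specific aggregation schemes are only named above. A minimal averaging/voting sketch over hypothetical detector scores shows the basic idea of tempering one detector's false alarm with the others:

```python
import numpy as np

def aggregate(scores, weights=None, threshold=0.5):
    """Fuse per-detector change scores (each in [0, 1]) for one time step.

    scores:  outputs from diverse detectors (NN, random forest, SVM, ...).
    weights: optional reliability weights, e.g. estimated on validation data.
    Returns (weighted_average_score, majority_vote_alarm).
    """
    scores = np.asarray(scores, float)
    w = np.ones_like(scores) if weights is None else np.asarray(weights, float)
    fused = float(np.average(scores, weights=w))
    vote = (scores >= threshold).sum() > len(scores) / 2
    return fused, bool(vote)

# Three hypothetical detectors disagree; averaging tempers the false alarm.
print(aggregate([0.9, 0.2, 0.4], weights=[1.0, 2.0, 1.5]))
```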
A multi-sensor system has been proposed to measure electric current in a non-contact way. According to Ampère's law, the value of the current flowing in a conductor can be obtained by processing the outputs of magnetic sensors placed around the conductor. Because the discrete form of Ampère's law is applied, measurement noise is introduced when there is an interference magnetic field induced by nearby current-carrying conductors. In this paper, the measurement noise of a multi-sensor system measuring DC current is examined to reveal the impact of the interference magnetic field and of the number of magnetic sensors on measurement accuracy. A noise reduction method based on Kalman filtering is presented. Computer simulation and experimental results show that the method greatly improves accuracy without seriously increasing the computational load in comparison with other approaches.
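As a sketch of the discrete Ampère's-law estimate plus Kalman smoothing (the sensor geometry, noise levels, and filter tuning below are hypothetical, not the paper's), N tangential-field sensors on a circle of radius R around the conductor give I ≈ (2πR/(μ₀N)) Σ Bᵢ, and a scalar Kalman filter with a constant-current model suppresses interference-induced noise:

```python
import numpy as np

MU0 = 4e-7 * np.pi           # vacuum permeability (H/m)
N, R = 8, 0.05               # 8 tangential-field sensors on a 5 cm circle

def ampere_estimate(b_tangential):
    """Discrete Ampere's law: I ~ (2*pi*R / (mu0*N)) * sum(B_i)."""
    return 2 * np.pi * R * np.sum(b_tangential) / (MU0 * N)

rng = np.random.default_rng(0)
true_i = 100.0                              # DC current (A)
b_clean = MU0 * true_i / (2 * np.pi * R)    # ideal field at each sensor

# Scalar Kalman filter with a constant-current model x_k = x_{k-1} + w.
x, p = 0.0, 1e3              # state estimate and its variance
q, r_meas = 1e-4, 12.0       # assumed process / measurement variances
for _ in range(50):
    b = b_clean + rng.normal(0.0, 0.1 * b_clean, N)   # interference noise
    z = ampere_estimate(b)   # raw per-step current estimate
    p += q                   # predict
    k = p / (p + r_meas)     # Kalman gain
    x += k * (z - x)         # update with the new measurement
    p *= 1.0 - k
print(f"filtered current = {x:.2f} A")
```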
Current practice for combating cyber attacks typically uses Intrusion Detection Sensors (IDSs) to passively detect and block multi-stage attacks. This work leverages Level-2 fusion that correlates IDS alerts belonging to the same attacker, and proposes a threat assessment algorithm to predict potential future attacker actions. The algorithm, TANDI, reduces the problem complexity by separating the models of the attacker's capability and opportunity, and fuses the two to determine the attacker's intent. Unlike traditional Bayesian-based approaches, which require assigning a large number of edge probabilities, the proposed Level-3 fusion procedure uses only four parameters. TANDI has been implemented and tested with randomly created attack sequences. The results demonstrate that TANDI predicts future attack actions accurately as long as the attack is not part of a coordinated attack and contains no insider threats. In the presence of abnormal attack events, TANDI alerts the network analyst for further analysis. This attempt to evaluate a threat assessment algorithm via simulation is the first in the literature, and should open up a new avenue in the area of high-level fusion.
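TANDI's actual parameterization is not reproduced here. As a loose, hypothetical sketch of the separation it describes, next-action scores can be formed by fusing what the attacker has demonstrated it can do (capability) with what the network currently exposes (opportunity); the actions, scores, and product fusion rule below are all invented:

```python
# Hypothetical next-action scoring in the spirit of capability/opportunity
# separation; not TANDI's actual models or parameters.
CAPABILITY = {          # inferred from correlated alerts so far
    "scan": 1.0, "exploit_web": 0.8, "pivot_internal": 0.3,
}
OPPORTUNITY = {         # what the current network state exposes
    "scan": 0.9, "exploit_web": 0.6, "pivot_internal": 0.7,
}

def predict_next_actions():
    """Intent score = capability x opportunity, ranked descending."""
    scores = {a: CAPABILITY[a] * OPPORTUNITY[a] for a in CAPABILITY}
    return sorted(scores.items(), key=lambda kv: -kv[1])

for action, score in predict_next_actions():
    print(f"{action:15s} {score:.2f}")
```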
A key enabler for Network Centric Warfare (NCW) is a sensor network that can collect and fuse vast amounts of disparate and complementary information from sensors that are geographically dispersed throughout the battlespace. This information will lead to better situation awareness, so that commanders will be able to act faster and more effectively. However, these benefits are possible only if the sensor data can be fused and synthesized for distribution to the right user, in the right form, at the right time, within the constraints of available bandwidth.
In this paper we consider the problem of developing Level 1 data fusion algorithms for disparate fusion in NCW. These algorithms must be capable of operating in a fully distributed (or decentralized) manner, must be able to scale to extremely large numbers of entities, and must be able to combine many disparate types of data.
To meet these needs we propose a framework that consists of three main components: an attribute-based state representation that treats an entity state as a collection of attributes; new methods or interpretations of uncertainty; and robust algorithms for distributed data fusion. We illustrate the discussion in the context of maritime domain awareness, mobile ad hoc networks, and multispectral image fusion.
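A minimal rendering of the attribute-based state idea, assuming each attribute carries its own value and uncertainty so that disparate reports can be fused attribute by attribute (all field names, values, and the inverse-variance rule are illustrative assumptions, not the paper's framework):

```python
from dataclasses import dataclass, field

@dataclass
class Attribute:
    value: float
    variance: float

@dataclass
class EntityState:
    """An entity is a bag of attributes; sources may each report a subset."""
    attributes: dict[str, Attribute] = field(default_factory=dict)

    def fuse(self, name, value, variance):
        """Inverse-variance fusion of one attribute (assumes independence)."""
        if name not in self.attributes:
            self.attributes[name] = Attribute(value, variance)
            return
        a = self.attributes[name]
        w1, w2 = 1.0 / a.variance, 1.0 / variance
        a.value = (w1 * a.value + w2 * value) / (w1 + w2)
        a.variance = 1.0 / (w1 + w2)

ship = EntityState()
ship.fuse("speed_kts", 12.0, 4.0)   # radar report
ship.fuse("speed_kts", 10.0, 1.0)   # AIS report, more certain
print(ship.attributes["speed_kts"])  # pulled toward the more certain report
```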
A new architecture for fusing information and data from heterogeneous sources is proposed. The approach takes criminalistics as a model. In analogy to the work of detectives who investigate crimes, software agents are initiated that pursue clues and try to consolidate or dismiss hypotheses. Like their human counterparts, they can consult expert agents if questions beyond their competence arise. Within the context of a certain task, region, and time interval, specialized operations are applied to each relevant information source, e.g. IMINT, SIGINT, ACINT, ..., HUMINT, databases etc., in order to establish hit lists of first clues. Each clue is described by its pertaining facts, uncertainties, and dependencies in the form of a local degree-of-belief (DoB) distribution in a Bayesian sense. For each clue an agent is initiated, which cooperates with other agents and experts. Expert agents help make use of different information sources; consultations of experts capable of accessing certain information sources result in changes to the DoB of the pertaining clue. According to the significance and concentration of their DoB distributions, clues are abandoned or pursued further to formulate task-specific hypotheses. Communications between the agents serve to find out whether different clues belong to the same cause and thus can be put together. At the end of the investigation process, the different hypotheses are evaluated by a jury and a final report is created that constitutes the fusion result.
The proposed approach avoids calculating global DoB distributions by adopting a local Bayesian approximation, and thus substantially reduces the complexity of the exact problem.
Different information sources are transformed into DoB distributions using the maximum entropy paradigm, with known facts as constraints. Nominal, ordinal, and cardinal quantities can all be treated within this framework. The architecture is scalable by tailoring the number of agents according to the available computer resources, the priority of tasks, and the maximum duration of the fusion process. Furthermore, the architecture allows cooperative work by human and automated agents and experts, as long as not all subtasks can be accomplished automatically.
Alarm-based sensor systems are being explored as a tool to expand perimeter security for facilities and force protection. However, the increased volume of collected sensor data has produced an unsatisfactory solution that includes faulty data points. Data analysis is needed to reduce nuisance and false alarms, which will improve officials' decision making and their confidence in the system's alarms. Moreover, operational costs can be reduced and losses mitigated if authorities are alerted only when a real threat is detected. In the current system, heuristics such as the persistence of an alarm and the type of sensor that detected an event are used to guide officials' responses. We hypothesize that fusing data from heterogeneous sensors in the sensor field can provide more complete situational awareness than looking at individual sensor data. We propose a two-stage approach to reduce false alarms. First, we use self-organizing maps to cluster sensors based on global positioning coordinates, and train classifiers on the within-cluster data to obtain a local view of the event. Next, we train a classifier on the local results to compute a global solution. We investigate the use of machine learning techniques, such as k-nearest neighbor, neural networks, and support vector machines, to improve alarm accuracy. On simulated sensor data, the proposed approach identifies false alarms with greater accuracy than a weighted voting algorithm.
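A compact sketch of the two-stage idea with scikit-learn, on synthetic data and with KMeans substituted for the self-organizing map (a common simplification, not the paper's choice): cluster-local k-NN classifiers produce local scores, and a global SVM fuses them.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 3))            # [lat, lon, sensor reading]
y = (X[:, 2] > 0.8).astype(int)          # 1 = real event (synthetic labels)

# Stage 1: cluster sensors spatially, then train one local classifier
# per cluster on the within-cluster data.
clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X[:, :2])
local_scores = np.zeros(len(X))
for c in range(4):
    idx = np.where(clusters == c)[0]
    if len(np.unique(y[idx])) < 2:       # degenerate cluster: pass labels on
        local_scores[idx] = y[idx]
        continue
    knn = KNeighborsClassifier(n_neighbors=5).fit(X[idx], y[idx])
    local_scores[idx] = knn.predict_proba(X[idx])[:, 1]

# Stage 2: a global classifier fuses local scores with the raw features.
# (Training accuracy only; a real evaluation needs held-out data.)
features = np.column_stack([X, local_scores])
global_clf = SVC(probability=True).fit(features, y)
print("global alarm accuracy:", global_clf.score(features, y))
```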
Recently, XML heterogeneity has become a new challenge. In this paper, a novel clustering strategy is proposed to regroup heterogeneous XML sources, since searching in a relatively small space with a certain similarity can reduce cost. The strategy consists of four steps. We first extract path features and map them into a high-dimension vector space (HDVS). In the data pre-processing, two algorithms are applied to diminish redundancies in the XML sources. Then the heterogeneous documents are clustered. Finally, multivalued dependency (MVD) is introduced, since MVD can be redefined according to the range of constraints of XML. This paper also proposes a novel algorithm for discovering minimal MVDs, based on rough sets handling non-integrity data. It solves the problem that non-integrity XML data hinder the discovery of MVDs, so that patterns can be extracted from each cluster.
Image registration is a technique for precisely aligning the content of two or more images. It is often used as a preprocessing stage for further analysis, such as automatic target recognition, change detection, and environmental remote sensing. However, many different registration algorithms are available to the image analyst, and these algorithms have a multitude of settings and parameters that must be given proper values for best results. Consequently, it is often difficult to know which algorithm will perform best for a particular pair of images, under constraints of time or accuracy. We propose constructing an expert system, with rules based on experimental results, that will automatically select the appropriate registration algorithm and perform the appropriate preprocessing steps to prepare the images for registration.
Intensity-based image registration is one of the most widely used methods for automatic image registration. In the recent past, various improvements have been suggested, ranging from variations in the similarity metric (correlation ratio, mutual information, etc.) to improvements in the interpolation technique. The performance of one method over another is observed either from the final registration results or from the visual presence of artifacts in plots of the objective function (similarity metric) versus the transformation parameters. Neither is a standard representation of the quality of improvement. The final results are not indicative of the effect of the suggested improvement, as they depend on various other components of the registration process, and visual assessment of the presence of artifacts is feasible only when the number of transformation parameters is at most two. In this paper, we introduce a novel approach and a metric to quantify the presence of artifacts, which in turn determines the performance of the registration algorithm. This metric is based on the quality of the objective-function landscape. Unlike existing methods of comparison, this metric provides a quantitative measure that can be used to rank different algorithms. In this paper, we compare and rank different interpolation techniques based on this metric. Our experimental results show that the relative ordering provided by the metric is consistent with observations made by traditional approaches, such as visual interpretation of the similarity metric plot. We also compute the proposed metric for different variants of intensity-based registration methods.
MeRIS was launched in March 2002 and has been providing images since June 2002. Before its launch, we implemented a method to improve its resolution by merging its images with Landsat ETM images, in order to preserve the best characteristics of the two image types (spatial, spectral, temporal). We now present the results of this method for real MeRIS images (levels 1b and 2) of a coastal area. The robustness of the method is studied, as well as the influence of the delay between the acquisitions of the two images.
In this paper, we present a novel ICA-domain multimodal image fusion algorithm. The conventional eigen-based algorithm is improved by weighting the transformed regions of the input images and by using a fusion metric to maximise the quality of the fused image. Experimental results confirm that the proposed methods outperform the basic eigen-based methods and, in most cases, our own prior work on the DT-CWT method. For all the standard image fusion metrics, the proposed method obtains higher quality values than the standard eigen-based algorithms.
The proposed new fusion algorithm is based on an improved pulse coupled neural network (PCNN) model, the fundamental characteristics of images, and the properties of the human vision system. Whereas in the traditional algorithm the linking strength of every neuron is the same and its value is chosen by experimentation, this algorithm uses the contrast of each pixel as its linking strength, so that the linking strength of each pixel is chosen adaptively. After PCNN processing with adaptive linking strengths, new fire-mapping images are obtained for each image taking part in the fusion. The clear objects of each original image are selected by a compare-selection operator applied to the fire-mapping images pixel by pixel, and all of them are then merged into a new clear image. Furthermore, with this algorithm other parameters, for example the threshold-adjusting constant Δ, have only a slight effect on the fused image, which overcomes the difficulty of adjusting parameters in PCNN. Experiments show that the proposed algorithm works better in preserving edge and texture information than the wavelet transform method and the Laplacian pyramid method do in image fusion.
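The full PCNN iteration is too long to sketch here; the fragment below shows only the adaptive step the abstract highlights, computing each pixel's linking strength β from its local contrast. The window size and the exact contrast formula are assumptions:

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def adaptive_linking_strength(img, window=3, eps=1e-6):
    """Per-pixel linking strength beta taken as local contrast.

    beta(x, y) = (local_max - local_min) / (local_max + local_min),
    so strongly structured regions link (fire) more readily in the PCNN.
    """
    img = img.astype(float)
    lo = minimum_filter(img, size=window)
    hi = maximum_filter(img, size=window)
    return (hi - lo) / (hi + lo + eps)

img = np.array([[10, 10, 10], [10, 200, 10], [10, 10, 10]], float)
print(adaptive_linking_strength(img).round(2))
```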
In this paper, a novel image fusion method based on the finite ridgelet transform (FRIT) is proposed. Firstly, the problem that the wavelet transform cannot efficiently represent linear or curved singularities in image processing is analyzed. Secondly, the principle of the FRIT and its good performance in expressing singularities of two or higher dimensions are studied. Finally, the feasibility of image fusion using the FRIT is discussed in detail. A new fusion method based on the FRIT and its fusion framework are proposed. The transform coefficient structure and the fusion procedure are given in detail. Experiments show that the proposed algorithm works better in preserving edge and texture information than the wavelet transform and Laplacian pyramid methods do in image fusion.
In this paper, we introduce a new image fusion method based on the contourlet transform. Firstly, the problem that the wavelet transform cannot efficiently represent linear or curved singularities in image processing is analyzed. Secondly, the principle of the contourlet transform and its good performance in expressing singularities of two or higher dimensions are studied. Finally, the feasibility of image fusion using the contourlet transform is discussed in detail. A new fusion method based on the contourlet transform and its fusion framework are proposed. The transform coefficient structure and the fusion procedure are given in detail. Experiments show that the proposed algorithm works better in preserving edge and texture information than the wavelet transform and Laplacian pyramid methods do in image fusion.
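Public contourlet implementations vary, so the sketch below demonstrates the same coefficient-domain fusion rule on a wavelet decomposition with PyWavelets as a stand-in: average the approximation band and, in each detail subband, keep the coefficient with the larger magnitude. The wavelet name and level count are arbitrary choices:

```python
import numpy as np
import pywt

def fuse_multiscale(img_a, img_b, wavelet="db2", levels=3):
    """Max-absolute fusion of detail coefficients; average approximation.

    The same rule applies unchanged to contourlet or FRIT subbands given a
    decomposition; pywt's 2-D wavelet transform stands in here.
    """
    ca = pywt.wavedec2(img_a, wavelet, level=levels)
    cb = pywt.wavedec2(img_b, wavelet, level=levels)
    fused = [(ca[0] + cb[0]) / 2.0]                  # approximation band
    for da, db in zip(ca[1:], cb[1:]):               # (cH, cV, cD) tuples
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(da, db)))
    return pywt.waverec2(fused, wavelet)

a = np.random.default_rng(0).random((64, 64))
b = np.random.default_rng(1).random((64, 64))
print(fuse_multiscale(a, b).shape)   # -> (64, 64)
```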