The potential deterioration of recognition-system performance caused by imprecise, incomplete, or imperfect training is a serious challenge inherent to most real-world applications. In certain applications this problem is referred to as degradation of performance under off-nominal conditions. This study presents the results of an investigation carried out to illustrate the scope and benefits of information fusion in such off-nominal scenarios. The research covers features-in-decision-out (FEI-DEO) fusion as well as decisions-in-decision-out (DEI-DEO) fusion; the latter spans both information sources and multiple processing tools (classifiers). The investigation delineates the corresponding fusion benefit domains using, as an example, real-world data from an audio-visual system for the recognition of French oral vowels embedded in various levels of acoustical noise.
The aim of this paper is to propose a strategy that uses data fusion at three different levels to gradually improve the performance of an identity verification system. In a first step, temporal data fusion can be used to combine multiple instances of a single (mono-modal) expert to reduce its measurement variance. If system performance after this first step is not good enough to satisfy the end-user's needs, it can be improved in a second step by fusing the results of multiple experts working on the same modality. For this approach to work, the respective classification errors of the different experts are assumed to be de-correlated. Finally, if the verification system's performance after this second step is still not good enough, one will be forced to move on to a third step in which performance can be improved by using multiple experts working on different (biometric) modalities. To be useful, however, these experts have to be chosen in such a way that adding the extra modalities increases the separation, in the multi-dimensional modality space, between the distributions of the different populations that have to be classified by the system. This kind of level-based strategy allows the performance of an identity verification system to be tuned gradually to the end-user's requirements while controlling the increase in investment costs. In this paper, results of several fusion modules are shown at each level. All experiments have been performed on the same multi-modal database so that the gain in performance can be compared each time one goes up a level.
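The three-level strategy lends itself to a compact sketch. The following is an illustrative caricature, not the paper's actual fusion modules; all function names, weights, and the threshold are hypothetical:

```python
import statistics

def fuse_temporal(scores):
    # Level 1: average repeated scores from a single mono-modal
    # expert to reduce its measurement variance.
    return statistics.mean(scores)

def fuse_experts(expert_scores, weights=None):
    # Levels 2 and 3: weighted combination of (ideally de-correlated)
    # expert scores, whether or not the experts share a modality.
    if weights is None:
        weights = [1.0 / len(expert_scores)] * len(expert_scores)
    return sum(w * s for w, s in zip(weights, expert_scores))

def accept(score, threshold=0.5):
    # Final verification decision against an application-chosen threshold.
    return score >= threshold

# Step 1 smooths one expert; step 2 combines two experts' outputs.
step1 = fuse_temporal([0.55, 0.65, 0.60])   # -> 0.60
step2 = fuse_experts([step1, 0.80])         # -> 0.70
```

Moving up a level only pays off if the added experts or modalities bring de-correlated errors, as the abstract stresses.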
Fuzzy set methods can improve the fusion of uncertain sensor data. The expected output membership function (EOMF) method uses the fuzzified inputs and possible fuzzy outputs to estimate the fused output. The most likely fuzzy output comes from fusability measures, which are calculated from the degrees of intersection of the possible fuzzy outputs with the fuzzified inputs. In the fixed case, the support lengths of the fuzzified inputs can be set proportional to the sensor variance. However, individual measurements can deviate widely from the true value even in accurate sensors. The support length of the input sets can therefore be varied by estimating the variation of the input, defined as the absolute rate of change of the input with respect to previous output estimates. This adaptation helps deal with occasional bad or noisy measurements. The EOMF can also be too wide or too narrow compared to the fuzzified inputs, and adaptive methods can help select its size. An example from the control of automated vehicles shows the effectiveness of the adaptive EOMF method compared to the fixed EOMF method and the weighted average method. The EOMF method shows robustness to outlying measurements when the average fusion operator is used.
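To make the mechanism concrete, here is a loose sketch of membership-weighted averaging with input supports tied to sensor variance. This illustrates the general idea only, not the EOMF algorithm itself; the triangular membership sets and the factor k are assumptions:

```python
def triangular(x, center, half_support):
    # Triangular membership function centered on a sensor reading.
    return max(0.0, 1.0 - abs(x - center) / half_support)

def fuse_average(readings, sigmas, k=3.0):
    # Weight each reading by how strongly the other readings' fuzzy
    # sets support it; supports are proportional to sensor sigma.
    weights = []
    for i, xi in enumerate(readings):
        support = sum(triangular(xi, xj, k * sigmas[j])
                      for j, xj in enumerate(readings) if j != i)
        weights.append(support)
    total = sum(weights)
    if total == 0.0:
        return sum(readings) / len(readings)   # no mutual support: plain mean
    return sum(w * x for w, x in zip(weights, readings)) / total

# The outlying reading 25.0 gets zero support and is ignored.
fused = fuse_average([10.0, 10.2, 25.0], [0.5, 0.5, 0.5])   # -> 10.1
```

The same membership-weighting idea underlies the robustness to outliers that the abstract reports for the average fusion operator.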
The combination operation of the conventional Dempster-Shafer algorithm has a tendency to increase exponentially the number of propositions involved in bodies of evidence by creating new ones. The aim of this paper is to explore a 'modified Dempster-Shafer' approach to fusing identity declarations emanating from different sources, including a number of radar, IFF and ESM systems, in order to limit this explosion in the number of propositions. We use a non-ad-hoc decision rule based on the expected utility interval to select the most probable object in a comprehensive Platform Data Base containing all the possible identity values that a potential target may take. We study the effect of redistributing the confidence levels of the eliminated propositions, which would otherwise overload the real-time data fusion system; these eliminated confidence levels can, in particular, be assigned to ignorance, or uniformly added to the remaining propositions and to ignorance. A scenario has been selected to demonstrate the performance of our modified Dempster-Shafer method of evidential reasoning.
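For reference, the conventional combination rule whose proposition growth the paper seeks to limit can be written in a few lines (a generic textbook sketch, not the authors' modified algorithm; the propositions are frozensets over a hypothetical identity frame):

```python
def dempster_combine(m1, m2):
    """Dempster's rule: combine two basic probability assignments,
    renormalizing by the total conflicting (empty-intersection) mass."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    norm = 1.0 - conflict
    return {k: v / norm for k, v in combined.items()}, conflict

# Two sources agree the target is an F16, or at least one of two fighters.
m1 = {frozenset({'F16'}): 0.6, frozenset({'F16', 'MiG29'}): 0.4}
m2 = {frozenset({'F16'}): 0.7, frozenset({'F16', 'MiG29'}): 0.3}
fused, conflict = dempster_combine(m1, m2)   # fused[{'F16'}] ~ 0.88
```

Every pairwise intersection can create a new focal proposition, which is exactly the combinatorial growth the modified approach suppresses by restricting propositions to the Platform Data Base.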
We discuss virtual associative networks (VANs) and their relevance for addressing computationally prohibitive sensor fusion problems. To our knowledge, this discussion of VAN technology for sensor fusion is unique, and our current results involving VANs for dynamic sensor management are the first of their kind. The following provides methodology, results, and extensions.
In this paper we present a methodology for fuzzy sensor fusion and apply it to sensor data from a gas turbine power plant. The developed fusion algorithm tackles several problems: 1) it aggregates redundant sensor information, which allows deciding which sensors should be considered for propagation of sensor information; 2) it filters noise and sensor failures out of the measurements, which allows a system to operate despite temporary or permanent failure of one or more sensors. For the fusion, we use a combination of direct and functional redundancy. The fusion algorithm uses confidence values obtained for each sensor reading from validation curves and performs a weighted average fusion. With increasing distance from the predicted value, readings are discounted through a non-linear validation function and assigned a confidence value accordingly. The predicted value in the described algorithm is obtained by applying a fuzzy exponentially weighted moving average time series predictor with adaptive coefficients. Experiments on real data from a gas turbine power plant show the robustness of the fusion algorithm, which leads to smooth controller input values.
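The predict-validate-fuse loop can be caricatured as follows. The Gaussian-shaped validation curve, the EWMA coefficient, and all parameter values are illustrative assumptions; the paper's predictor additionally adapts its coefficients with fuzzy rules:

```python
import math

def ewma_predict(history, alpha=0.3):
    # One-step prediction by exponentially weighted moving average.
    pred = history[0]
    for x in history[1:]:
        pred = alpha * x + (1 - alpha) * pred
    return pred

def confidence(reading, predicted, width=5.0):
    # Non-linear validation curve: confidence decays with distance
    # from the predicted value (a Gaussian shape is assumed here).
    return math.exp(-((reading - predicted) / width) ** 2)

def fuse(readings, predicted, width=5.0):
    # Confidence-weighted average; a failed sensor far from the
    # prediction contributes almost nothing to the fused value.
    ws = [confidence(r, predicted, width) for r in readings]
    return sum(w * r for w, r in zip(ws, readings)) / sum(ws)

pred = ewma_predict([100.0, 100.2, 99.9])
fused = fuse([99.0, 101.0, 150.0], pred)   # the 150.0 outlier is discounted
```

Discounting rather than hard-rejecting outliers is what keeps the controller input smooth when a sensor misbehaves only briefly.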
Intelligent Transportation Systems (ITS), implemented all over the world, have become an important and practical traffic management technique. Among all ITS subsystems, the detection system is an integral element that provides the necessary environmental information to the ITS infrastructure. This paper describes the design of the ITS Detector testbed, currently being implemented for these potential ITS applications on State Highway 6 in College Station, Texas, which provides a multi-sensor, multi-source fusion environment for testing both multi-sensor and distributed sensor systems.
This paper addresses the navigation problem for an autonomous robot designed to inspect sewers. The robot, which is about half the diameter of the sewer, must keep its orientation in the entirely dark sewerage. The only a priori information known is the geometrical shape of the sewer, which implies a strong geometrical constraint on the environment the robot can expect. In the paper we work out an active vision system to be used on board the sewer robot. The system has two components: (a) an optical camera and (b) a laser cross-hair projector generating an ideal cross. The print left by the laser cross-hair projector on the pipe surface is snapped with the camera. The image of this 'cross trace' on the camera plane consists of two intersecting quadratic curves, whose shape gives a clue to the direction in which the robot is looking. In this paper we investigate the curves that are the images of the laser 'cross traces' as seen on the camera plane for a simulated environmental model. We show how the shape of the curves viewed by the camera depends on the particular relative position and orientation of the camera and laser, assuming a cylindrical sewer pipe. We give a strategy for aligning the robot with the sewer axis on the basis of the curve images.
Vision is an excellent example of the rich interplay between computational and biological approaches to the understanding of complex information processing tasks. Studies of biological solutions to the computational problems of vision, such as contrast masking, movement detection, and orientation selectivity, have created many controversies in visual neuroscience. Recent neurobiological findings suggest an experimental paradigm that emphasizes strategies relying on the combined activities of cells or cell assemblies for information transforms. This new perspective explains an integrated synaptic facilitation that is contingent upon the emergent spatial and temporal properties of cell activities. The paper briefly presents a novel biologically inspired adaptive architecture for analyzing cell response dynamics to encode analog visual sensory data under varying conditions. The key is the active representation of the temporal characteristics of visual objects, i.e., the exposure time and the syntactic structure, to achieve invariance for the fundamental problems of scene segmentation and figure-ground separation. The basic neural mechanism is that of a plastic relationship between and within participating cells or cellular groups with known receptive field organizations. Our system's behavior is tested against numerous parametric psychophysical data, and the selected simulation samples predict that only active integration over multiple exposures to the sequence of sensory visual information can yield encoding reliable enough to extract salient features of visual objects in partially unknown and possibly changing environments.
Critical elements of future exoatmospheric interceptor systems are intelligent processing (IP) techniques which can effectively combine data from disparate sensors. This paper summarizes the impact on discrimination performance of several feature and classifier fusion techniques, which can be used as part of the overall IP approach. These techniques are implemented either within the fused sensor discrimination testbed or off-line as building blocks that can be modified to assess differing fusion approaches and classifiers and their impact on interceptor requirements. Several optional approaches for combining the data at the different levels, i.e., the feature and classifier levels, are discussed in this paper and a comparison of performance results is shown. Approaches yielding promising results must still operate within the timeline and memory constraints on board the interceptor. A hybrid fusion approach is implemented at the feature level through the use of feature sets input to specific classifiers. The output of the fusion process contains an estimate of the confidence in the data and the discrimination decisions. The confidence in the data and decisions can be used in real time to dynamically select different sensor feature data or classifiers, or to request additional sensor data on specific objects that have not been confidently identified as 'lethal' or 'non-lethal'. However, dynamic selection requires an understanding of the impact of various combinations of feature sets and classifier options. Accordingly, the paper presents the various tools for exploring these options and illustrates their usage with data sets generated to realistically simulate the world of Ballistic Missile Defense interceptor applications.
The Dempster-Shafer theory of evidential reasoning may be useful in handling issues associated with theater ballistic missile discrimination. This paper highlights the Dempster-Shafer theory and describes how this technique was implemented and applied to data collected by two IR sensors on a recent flight test.
The mission of the Night Vision and Electronic Sensors Directorate, Survivability/Camouflage, Concealment and Deception Division is to provide affordable aircraft and ground electronic sensors/systems and signature management technologies which enhance the survivability and lethality of US and international forces. Since 1992, efforts have been undertaken in the area of Situational Awareness and Dominant Battlespace Knowledge. These include the Radar Deception and Jamming Advanced Technology Demonstration (ATD), Survivability and Targeting System Integration, Integrated Situation Awareness and Targeting ATD, Combat Identification, Ground Vehicle Situational Awareness, and Combined Electronic Intelligence Target Correlation.
This paper will address the Situational Awareness process as it relates to the integration of Electronic Warfare (EW) with targeting, intelligence, and information warfare systems. Discussion will be presented on Sensor Fusion, Situation Assessment and Response Management strategies. Sensor Fusion includes the association, correlation, and combination of data and information from single and multiple sources to achieve refined position and identity estimates, and complete and timely assessments of situations and threats as well as their significance. Situation Assessment includes the process of interpreting and expressing the environment based on situation abstract products and information from technical and doctrinal databases. Finally, Response Management provides the centralized, adaptable control of all renewable and expendable countermeasure assets, resulting in an optimized response to the threat environment.
This paper presents results from an Adaptable Data Fusion Testbed (ADFT) which has been constructed to analyze simulated or real data with the help of modular algorithms for each of the main fusion functions and image interpretation algorithms. The results obtained from the fusion of information coming from an imaging Synthetic Aperture Radar (SAR) and non-imaging sensors (ESM, IFF, 2-D radar) on board an airborne maritime surveillance platform are presented for two typical scenarios of Maritime Air Area Operations and Direct Fleet Support. An extensive set of realistic databases has been created that contains over 140 platforms, carrying over 170 emitters and representing targets from 24 countries. A truncated Dempster-Shafer evidential reasoning scheme is used that proves robust under countermeasures and deals efficiently with uncertain, incomplete or poor-quality information. The evidential reasoning scheme can yield both a single ID with an associated confidence level and more generic propositions of interest to the Commanding Officer. For nearly electromagnetically silent platforms, the Spot Adaptive mode of the SAR, which is appropriate for naval targets, is shown to be invaluable in providing long-range features that are treated by a 4-step classifier to yield ship category, type and class. Our approach of reasoning over attributes provided by the imagery will allow the ADFT to process in the next phase (currently under way) both FLIR imagery and SAR imagery in different modes (RDP for naval targets, Strip Map and Spotlight Non-Adaptive for land targets).
A subpixel-resolution image registration algorithm based on the nonlinear projective transformation model is proposed to account for camera translation, rotation, zoom, pan, and tilt. Typically, parameter estimation techniques for rigid-body transformation require the user to manually select feature point pairs between the images undergoing registration. In this research, the block matching algorithm is used to automatically select correlated feature point pairs between two images; these features are then used to calculate an iterative least squares estimate of the nonlinear projective transformation parameters. Since block matching is only capable of estimating accurate displacement vectors in image regions containing a large number of edges, inaccurate feature point pairs are statistically eliminated prior to computing the least squares parameter estimate. Convergence of the registration algorithm is generally achieved in several iterations. Simulations show that the algorithm estimates accurate integer- and subpixel-resolution registration parameters for similar sensor data sets, such as intensity image sequence frames, as well as for dissimilar sensor images, such as multimodality slices from the Visible Human Project. Through subpixel-resolution registration, integrating the registered pixels from a short sequence of low-resolution video frames generates a high-resolution video still. Experimental results are also shown in which dissimilar data registration followed by vector quantization is used to segment tissues from multimodality Visible Human Project image slices.
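The statistical elimination of inaccurate feature-point pairs can be sketched as a simple k-sigma test on the block-matching displacement vectors (an illustration under assumed conventions; the iterative least squares estimation of the projective parameters is not reproduced here):

```python
import statistics

def filter_pairs(pairs, k=2.0):
    """Keep only feature-point pairs whose displacement vector lies
    within k standard deviations of the mean displacement.
    Each pair is ((x, y), (x_prime, y_prime))."""
    dx = [q[0] - p[0] for p, q in pairs]
    dy = [q[1] - p[1] for p, q in pairs]
    mx, my = statistics.mean(dx), statistics.mean(dy)
    sx = statistics.pstdev(dx) or 1e-9   # guard against zero spread
    sy = statistics.pstdev(dy) or 1e-9
    return [pq for pq, ddx, ddy in zip(pairs, dx, dy)
            if abs(ddx - mx) <= k * sx and abs(ddy - my) <= k * sy]

# Nine consistent pairs plus one mismatch from a low-edge region.
pairs = [((i, i), (i + 1, i + 1)) for i in range(9)] + [((0, 0), (101, 101))]
inliers = filter_pairs(pairs)   # the mismatched pair is eliminated
```

Only the surviving inlier pairs would then feed the least squares estimate, keeping a few bad block matches from skewing the transformation parameters.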
This paper presents an approach to automatically register IR and millimeter wave images for concealed weapons detection applications. The distortion between the two images is assumed to be a rigid-body transformation, and we assume that the scale factor can be calculated from the sensor parameters and the ratio of the two distances from the object to the imagers. Therefore, the only pose parameters that need to be found are the x-displacement and y-displacement. Our registration procedure involves image segmentation, binary correlation and some other image processing algorithms. Experimental results indicate that the automatic registration procedure performs fairly well.
A number of sensors are being developed for concealed weapon detection, and use of the appropriate sensor or combination of sensors will be very important to the success of such technologies. Assuming that two identical sensors are used to collect data on a target from different angular views, this paper addresses the registration problem associated with the collected scenes. Theory and application to real data are presented.
Area-based methods, such as the Laplacian pyramid and Fourier-transform-based phase matching, benefit from highlighting high spatial frequencies to reduce sensitivity to the feature-inconsistency problem in multisensor image registration. Feature extraction and matching methods are more powerful and versatile for processing poor-quality IR images. We implement multi-scale hierarchical edge detection and edge focusing, and introduce a new salience measure for the horizon, for multisensor image registration. The common features extracted from images of two modalities can still differ in detail. Therefore, transformation-space matching methods with the Hausdorff distance measure are more suitable than direct feature matching methods. We have introduced an image quadtree partition technique to the Hausdorff distance matching that dramatically reduces the size of the search space. Image registration of real-world visible/IR images of battlefields is shown.
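The measure used for transformation-space matching is, in its plain bidirectional form, the following (a generic definition; the paper's quadtree pruning of the search space is not shown, and the point sets here are hypothetical edge features):

```python
def directed_hausdorff(A, B):
    # Greatest distance from any point of A to its nearest point in B.
    return max(min(((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
                   for bx, by in B)
               for ax, ay in A)

def hausdorff(A, B):
    # Symmetric Hausdorff distance between point sets A and B.
    return max(directed_hausdorff(A, B), directed_hausdorff(B, A))

edges_visible = [(0.0, 0.0), (1.0, 0.0)]
edges_ir = [(0.0, 0.0), (3.0, 0.0)]
d = hausdorff(edges_visible, edges_ir)   # -> 2.0
```

Because it measures set-to-set proximity rather than demanding one-to-one feature correspondences, the Hausdorff measure tolerates the detail differences between modalities noted above.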
This paper describes an engineering approach toward implementing the current neuroscientific understanding of how the primate brain fuses, or integrates, 'information' in the decision-making process. We describe a System of Systems (SoS) design for improving the overall performance, capabilities, operational robustness, and user confidence of identification (ID) systems and show how it could be applied to biometric security. We use the physio-associative temporal sensor integration algorithm (PATSIA), which is motivated by observed functions and interactions of the thalamus, hippocampus, and cortical structures in the brain. PATSIA utilizes signal theory mathematics to model how the human efficiently perceives and uses information from the environment. The hybrid architecture implements a possible SoS-level description of the Joint Directors of US Laboratories Fusion Working Group's functional description involving 5 levels of fusion and their associated definitions. This SoS architecture proposes dynamic sensor and knowledge-source integration by implementing multiple Emergent Processing Loops for predicting, feature extraction, matching, and searching of both static and dynamic databases, like MSTAR's PEMS loops. Biologically, this effort demonstrates these objectives by modeling similar processes from the eyes, ears, and somatosensory channels, through the thalamus, and to the cortices as appropriate, while using the hippocampus for short-term memory search and storage as necessary. The particular approach demonstrated incorporates commercially available speaker verification and face recognition software and hardware to collect data and extract features for the PATSIA. The PATSIA maximizes the confidence levels for target identification or verification in dynamic situations using a belief filter. The proof of concept described here is easily adaptable and scalable to other military and non-military sensor fusion applications.
A common approach to fusing measurements of differing dimensions is to employ a sequential Kalman filter. The sequential filter is particularly appropriate in the case of asynchronous measurements; however, it is likely to diverge should any one of the sensors degrade. As an alternative, this paper presents a partially decentralized, or augmented-track, architecture characterized by multiple local tracks.
In order to fuse the information provided by a pulse Doppler radar (range and angle) with that of an infrared imager (angle only), the latter 'incomplete' measurement is augmented by assigning to it a pseudo-range term prior to track-level fusion. This configuration offers a potential advantage in terms of robustness, in that the local radar track cannot be contaminated if the imager's measurements degrade.
A simulation of the proposed architecture is included, and the results are compared to those achieved with a sequential filter for the same scenarios. A variety of data rates are used. In the general case, the partially decentralized filter requires additional time alignment logic to prepare the separate radar and imager tracks for the fusion process.
The ultimate goal of this investigation is to determine if the partially decentralized filter is a suitable alternative to the sequential filter, and to investigate a possible application to the Canadian Forces' Light Armored Vehicle Reconnaissance Variant.
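In a one-dimensional caricature, the pseudo-range augmentation and the subsequent track-level combination look like this (variance-weighted fusion of scalar track states; the actual filters operate on full state vectors and covariance matrices, and all values here are made up):

```python
def augment_with_pseudorange(bearing, radar_range):
    # Give the imager's angle-only measurement a pseudo-range borrowed
    # from the radar track, so both sensors yield full (range, bearing)
    # measurements ahead of track-level fusion.
    return (radar_range, bearing)

def fuse_tracks(x1, p1, x2, p2):
    # Variance-weighted (convex) combination of two scalar track
    # estimates -- the simplest track-level fusion rule.
    w = p2 / (p1 + p2)
    return w * x1 + (1.0 - w) * x2, p1 * p2 / (p1 + p2)

z_imager = augment_with_pseudorange(0.30, 1500.0)   # (pseudo-range, bearing)
x, p = fuse_tracks(10.0, 1.0, 12.0, 1.0)            # -> (11.0, 0.5)
```

Because each sensor maintains its own local track, a degraded imager leaves the radar track untouched, which is the robustness argument the abstract makes.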
The paper presents the concept of, and initial tests from, the hardware implementation of a low-power, high-speed reconfigurable sensor fusion processor. The Extended Logic Intelligent Processing System (ELIPS) processor is developed to seamlessly combine rule-based systems, fuzzy logic, and neural networks to achieve parallel fusion of sensor data in compact, low-power VLSI. The first demonstration of the ELIPS concept targets interceptor functionality; other applications, mainly in robotics and autonomous systems, are considered for the future. The main assumption behind ELIPS is that fuzzy, rule-based and neural forms of computation can serve as the main primitives of an 'intelligent' processor. Thus, in the same way classic processors are designed to optimize the hardware implementation of a set of fundamental operations, ELIPS is developed as an efficient implementation of computational intelligence primitives, and relies on a set of fuzzy set, fuzzy inference and neural modules built in programmable analog hardware. The hardware programmability allows the processor to reconfigure into different machines, taking the most efficient hardware implementation during each phase of information processing. Following software demonstrations on several interceptor data sets, three important ELIPS building blocks have been fabricated in analog VLSI hardware and have demonstrated microsecond processing times.
Developments in the data fusion domain call for a software architecture that provides a reliable, modifiable, platform- and operating-system-independent, distributed, and non-proprietary program development environment in which to test and realize the actual fusion system. With such an architecture, domain-specific 'fusion' problems can be the focus of attention and new approaches can easily be applied to existing systems. A preliminary evaluation of a software architecture based on the Common Object Request Broker Architecture (CORBA) for command and control systems is presented from both performance and architectural points of view. Although CORBA and its implementations offer a platform-independent, non-proprietary software architecture for large-scale systems, some important issues remain open, such as reliability, fault tolerance and real-time (fast enough) QoS behavior of the system. In this study, we present our experience and efforts in designing and implementing an infrastructure and, based on this infrastructure, a software architecture for data fusion and command and control systems, which provides a reliable, fault-tolerant, distributed and scalable programming environment by utilizing currently available Commercial Off-The-Shelf products as much as possible.
In this paper a general method of software design for multisensor data fusion is discussed in detail, adopting object-oriented technology under the UNIX operating system. The software for multisensor data fusion is divided into six functional modules: data collection, database management, GIS, target display, alarming, and data simulation. Furthermore, the primary function, the components, and some realization methods of each module are given, and the interfaces among these functional modules are discussed. Data exchange among the functional modules is performed by interprocess communication (IPC), including message queues, semaphores, and shared memory. Thus, each functional module executes independently, which reduces the dependence among functional modules and helps software programming and testing. The software for multisensor data fusion is designed as a hierarchical structure using the inheritance property of classes. Each functional module is abstracted and encapsulated through a class structure, which avoids software redundancy and enhances readability.
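The paper's modules exchange data over UNIX IPC (message queues, semaphores, shared memory). As a minimal, portable sketch of the same decoupling idea — not the paper's implementation — the message-queue pattern can be illustrated with Python's standard `queue` and `threading` modules; the module names and message fields here are hypothetical:

```python
import queue
import threading

def data_collection(msg_q, readings):
    # Hypothetical collection module: push sensor readings onto the queue.
    for reading in readings:
        msg_q.put(reading)
    msg_q.put(None)  # sentinel: end of stream

def target_display(msg_q, displayed):
    # Hypothetical display module: consume readings independently of the
    # producer; the queue decouples the two modules, as IPC does in the paper.
    while True:
        msg = msg_q.get()
        if msg is None:
            break
        displayed.append("target %d at %.1f m" % (msg["id"], msg["range"]))

msg_q = queue.Queue()
displayed = []
producer = threading.Thread(target=data_collection,
                            args=(msg_q, [{"id": 1, "range": 120.0},
                                          {"id": 2, "range": 85.5}]))
consumer = threading.Thread(target=target_display, args=(msg_q, displayed))
producer.start(); consumer.start()
producer.join(); consumer.join()
```

Because neither module calls the other directly, each can be developed and tested in isolation, which is the maintainability benefit the paper attributes to IPC.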
Most data association routines in implementation are based on a statistical measure that usually requires the track and measurement statistics to be Gaussian. This Gaussian assumption has worked exceedingly well for a number of years. Even when the assumption was violated by the introduction of nonlinearities between the measurement and track spaces, these techniques still performed well.
Recently, many data fusion algorithms have attempted to incorporate environmental, terrain, and other sources of information. This new information can result in a severe loss of Gaussianity. To overcome this problem, we have developed a fuzzy-logic-based technique to perform association.
This association technique is based on a linguistic interpretation of the chi-squared metric. We use the covariances and the residuals as inputs. This information is then processed using fuzzy memberships and inference engines to produce a probability score that a particular measurement associates with a given track. The two key elements of the routine are the 'normalization' of the residuals and the removal of covariance overlap. First, we use the covariance information to define the parameters that describe the residual's membership functions: for example, if both covariances are large, the concept of a small residual is much greater in absolute size than when both covariances are very small. Second, we incorporate the concept of the area of probability overlap between the two covariances; we can then remove portions of that area based on rules accounting for sensor blockage and incompatible terrain.
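The covariance-scaled "small residual" idea can be sketched in one dimension. This is an illustrative reading of the approach, not the paper's actual membership functions: the triangular membership shape, the 3-sigma support, and the `overlap_penalty` parameter are assumptions.

```python
import math

def small_residual_membership(residual, cov_track, cov_meas):
    # Scale the notion of "small" by the combined covariance, echoing the
    # chi-squared metric: judge the residual relative to sqrt(S), where
    # S = track covariance + measurement covariance.
    s = math.sqrt(cov_track + cov_meas)
    x = abs(residual) / s          # normalized residual (1-D sketch)
    # Triangular membership: fully "small" at 0, not small beyond 3 sigma.
    return max(0.0, 1.0 - x / 3.0)

def association_score(residual, cov_track, cov_meas, overlap_penalty=0.0):
    # Combine the fuzzy "small residual" grade with a penalty standing in
    # for covariance-overlap area removed by terrain/blockage rules.
    mu = small_residual_membership(residual, cov_track, cov_meas)
    return mu * (1.0 - overlap_penalty)
```

With large covariances, a large absolute residual can still score as "small", which is the behavior the linguistic interpretation is after.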
Humans exhibit remarkable abilities to estimate, filter, predict, and fuse information in target tracking tasks. To improve track quality, we extend previous tracking approaches by investigating human cognitive-level fusion for constraining the set of plausible targets when the number of targets is not known a priori. The target tracking algorithm predicts a belief in the position and pose of a set of targets, and an automatic target recognition algorithm uses the pose estimate to calculate an accumulated target-belief classification confidence measure. The human integrates the target track information and classification confidence measures to determine the number and identity of targets. This paper implements the cognitive belief filtering approach for sensor fusion and resolves target identity through a set-theoretic approach by determining a plausible set of targets being tracked.
This paper presents a new tracking filter capable of soft switching between two kinematic target models without requiring any a priori knowledge of the target state's transition probability matrix. The target models used are both constant velocity models, one with low state process noise and one with high state process noise. Simulations are performed to show the soft-switching capability of the new filter as well as its performance; the newly derived filter significantly outperforms a well-known variable dimension filter. The results of this paper constitute a first step toward designing a new class of filters that are capable of soft switching between different target kinematic models without requiring a priori knowledge of the target state's transition likelihoods.
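The soft-switching idea can be sketched by running two Kalman filters in parallel and blending their estimates with weights derived only from measurement likelihoods, with no transition matrix. This sketch reduces the constant velocity models to scalar (position-only) filters for brevity; the noise values and weight-update rule are illustrative assumptions, not the paper's derivation.

```python
import math

class ScalarKalman:
    # One-state (position) Kalman filter; the paper's models are constant
    # velocity, reduced here to a scalar sketch.
    def __init__(self, q, r, x0=0.0, p0=1.0):
        self.q, self.r, self.x, self.p = q, r, x0, p0

    def step(self, z):
        # Predict, then update; return the measurement likelihood.
        self.p += self.q
        s = self.p + self.r                      # innovation variance
        nu = z - self.x                          # innovation
        k = self.p / s                           # Kalman gain
        self.x += k * nu
        self.p *= (1.0 - k)
        return math.exp(-0.5 * nu * nu / s) / math.sqrt(2 * math.pi * s)

def soft_switch_track(measurements, q_low=0.01, q_high=1.0, r=0.5):
    # Run a low- and a high-process-noise filter in parallel and blend their
    # estimates with likelihood-derived weights -- no transition probability
    # matrix is assumed, only the current measurement likelihoods.
    f_low, f_high = ScalarKalman(q_low, r), ScalarKalman(q_high, r)
    w_low = w_high = 0.5
    estimates = []
    for z in measurements:
        l_low, l_high = f_low.step(z), f_high.step(z)
        w_low, w_high = w_low * l_low, w_high * l_high
        total = (w_low + w_high) or 1.0          # guard against underflow
        w_low, w_high = w_low / total, w_high / total
        estimates.append(w_low * f_low.x + w_high * f_high.x)
    return estimates, (w_low, w_high)
```

The weights shift continuously toward whichever model better explains the recent measurements, which is the "soft" part of the switching.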
Many multi-sensor target tracking systems are developed under the assumptions that data association is too complex and the computational requirements too excessive for centralized fusion approaches to be practical. In addition, it is often assumed that the noise component is relatively small, that there are no missed detections, that the scanning interval is relatively short, and so on. Many multi-sensor tracking systems have been shown to perform effectively when tested with simulated data generated under these assumptions. However, careful investigation into the characteristics of several sets of real data reveals that these assumptions cannot always be made validly. In this paper we first describe the characteristics of a real multisensor tracking environment and explain why existing systems may not be able to perform their task effectively in such an environment. We then present a data fusion technique that can overcome some of the weaknesses of these systems. The technique consists of three steps: (1) estimation of the synchronization error using an adaptive learning approach; (2) adjustment of the measured positions of a target in case of missed detections; and (3) prediction of the next target position using a fuzzy-logic-based algorithm. For performance evaluation, we tested the technique using different sets of real and simulated data; the results obtained are very satisfactory.
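Steps (1) and (2) can be sketched as follows. These are illustrative stand-ins, not the paper's actual algorithms: the exponential-averaging learning rule, the learning rate, and the linear-extrapolation fill for missed detections are all assumptions.

```python
def estimate_sync_error(offsets, rate=0.2):
    # Sketch of step (1): track the synchronization error between two
    # sensors by exponentially averaging the observed time offsets
    # (a simple adaptive-learning rule).
    estimate = 0.0
    for offset in offsets:
        estimate += rate * (offset - estimate)
    return estimate

def fill_missed_detection(track):
    # Sketch of step (2): when a detection is missed (None), substitute the
    # linear extrapolation of the two preceding positions.
    filled = []
    for i, pos in enumerate(track):
        if pos is None and i >= 2:
            pos = 2 * filled[i - 1] - filled[i - 2]
        filled.append(pos)
    return filled
```

The estimate converges to the true offset when the offset is stationary, while the adaptive rate lets it follow slow drift.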
This paper presents a new track-to-track association method mixing kinematic data from the radar and identification data from the ESM sensor. In classical track-to-track association methods, only kinematic data from the radar are used. In this paper, we show how to improve the association using both kinds of information, although they are of different types. We also introduce a track identification algorithm in order to improve the performance of the method. Considering two tracks, the problem is formulated as the following hypothesis test: H0, both observed tracks are generated by the same target; H1, the observed tracks are generated by different targets. We then compute a likelihood ratio mixing kinematic and identification data; the identification algorithm's results enter the likelihood ratio, which is compared to a threshold. This technique makes it possible to evaluate the performance of the algorithm in terms of the probability of correct association and the probability of false association, and the threshold is chosen to constrain the probability of false association to a small value. This method, valid for any kind of track, can easily be generalized to more than two tracks. It has the double advantage of providing information about the common origin of the tracks and an identification of each track.
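The hypothesis test can be sketched as a combined log-likelihood ratio with a kinematic term and an identification term. This is a 1-D illustration under assumed models, not the paper's formulation: the Gaussian H0 model, the "diffuse" H1 variance factor, and the default threshold are all assumptions.

```python
import math

def association_llr(delta, var, p_id_h0, p_id_h1, diffuse=100.0):
    # Kinematic term: under H0 (same target) the track difference delta is
    # modeled as N(0, var); under H1 as a diffuse N(0, diffuse*var).
    # log p(delta|H0) - log p(delta|H1) simplifies to:
    llr_kin = (0.5 * math.log(diffuse)
               - 0.5 * delta * delta / var
               + 0.5 * delta * delta / (diffuse * var))
    # Identification term: likelihoods of the observed ESM identity data
    # under each hypothesis, as supplied by the identification algorithm.
    llr_id = math.log(p_id_h0 / p_id_h1)
    return llr_kin + llr_id

def same_target(delta, var, p_id_h0, p_id_h1, threshold=0.0):
    # Declare "same target" when the combined ratio clears the threshold;
    # the threshold is tuned offline to cap the false-association rate.
    return association_llr(delta, var, p_id_h0, p_id_h1) >= threshold
```

Raising the threshold trades missed associations for a lower probability of false association, which is the design knob the paper describes.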
This paper discusses some problems in evaluating the performance of multi-target tracking (MTT) systems. Various performance measures for MTT systems are first described, including correlation statistics, track purity, track maintenance statistics, and kinematic statistics. Examples of single measures of performance are also given, and the issues involved in the analytical prediction of performance are briefly discussed. Detailed descriptions of the computer simulation evaluation of MTT systems cover test scenario selection, sensor modeling, data collection, and the analysis of results. Two performance evaluation methods, namely a two-step method and a track classification approach, are explored in this paper. The performance evaluation techniques are being incorporated in an MTT testbed developed in the Department of Electrical and Computer Engineering at the Royal Military College of Canada, Kingston, Ontario, Canada.
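Of the measures listed, track purity has a particularly compact definition: the fraction of a track's assigned measurements that originate from its dominant true target. A minimal sketch (the exact definition used by the paper may differ in detail):

```python
def track_purity(assigned_truth_ids):
    # Track purity: fraction of the measurements assigned to a track that
    # come from the track's dominant (most frequent) true target.
    if not assigned_truth_ids:
        return 0.0
    counts = {}
    for tid in assigned_truth_ids:
        counts[tid] = counts.get(tid, 0) + 1
    return max(counts.values()) / len(assigned_truth_ids)
```

A purity of 1.0 means the track never picked up measurements from another target; values below 1.0 quantify association errors.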
In both military and civilian applications, increasing interest is being shown in fusing IR and vision images for improved situational awareness. In previous work, the authors developed a fusion method for combining thermal and vision images into a single image emphasizing the most salient features of the surrounding environment. This approach is based on the assumption that although the thermal and vision data are uncorrelated, they are complementary and can be fused using a suitable disjunctive function. This paper, as a continuation of that work, describes the development of an information-based, real-time, data-level fusion method. In addition, the applicability of the algorithms we developed for data-level fusion to feature-level techniques is investigated.
The work described in this paper focuses on cross-band pixel selection as applied to pixel-level multi-resolution image fusion, in which multi-resolution analysis and synthesis are realized via QMF sub-band decomposition techniques. Cross-band pixel selection is considered with the aim of reducing the contrast and structural distortion artifacts produced by existing wavelet-based, pixel-level image fusion schemes. Preliminary subjective image fusion results clearly demonstrate the advantage that the proposed cross-band selection technique offers when compared to conventional area-based pixel selection.
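The core idea — select the source for all sub-bands at a position jointly, rather than per band — can be sketched as follows. The paper's actual selection rule is not reproduced here; the cross-band "activity vote" below is an illustrative assumption.

```python
def cross_band_select(coeffs_a, coeffs_b):
    # coeffs_a / coeffs_b: per-band lists of sub-band coefficients at
    # corresponding positions for two source images. Instead of picking the
    # larger coefficient independently in every band (which can mix sources
    # and distort structure), pick one source per position by a cross-band
    # activity vote and take all bands from that source.
    fused = [list(band) for band in coeffs_a]
    n_pos = len(coeffs_a[0])
    for i in range(n_pos):
        act_a = sum(abs(band[i]) for band in coeffs_a)
        act_b = sum(abs(band[i]) for band in coeffs_b)
        src = coeffs_a if act_a >= act_b else coeffs_b
        for b in range(len(fused)):
            fused[b][i] = src[b][i]
    return fused
```

Keeping the selection consistent across bands at each position is what avoids the contrast and structural artifacts that independent per-band selection can introduce.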
The availability of different imaging modalities requires techniques to process and combine information from different images of the same phenomena. We present a symmetry-based approach for combining information from multiple images. Fusion is performed at the data level: actual object boundaries and shape descriptors are recovered directly from the raw sensor output(s). The method is applicable to an arbitrary number of images in arbitrary dimensions.
This paper describes two practical fusion techniques for automatic target cueing (ATC) that combine features derived from each sensor's data at the object level. In the hybrid fusion method, each sensor's input data is prescreened before the fusion stage. The cued fusion method assumes that one of the sensors is designated as the primary sensor, and thus ATC is applied only to its input data. If one of the sensors exhibits a higher probability of detection and/or a lower false alarm rate, it can be selected as the primary sensor. However, if the ground coverage can be segmented into regions in which one of the sensors is known to exhibit better performance, then cued fusion can be applied locally and adaptively by switching the choice of primary sensor; otherwise, cued fusion is applied both ways and the outputs of each cued mode are combined. Both fusion approaches use a back-end discrimination stage that is applied to a combined feature vector to reduce false alarms. The two fusion processes were applied to spectral and radar sensor data and were shown to provide substantial false alarm reduction. The approaches are easily extendable to more than two sensors.
Algorithms: Fusion of Image-derived Information II
We propose an unbiased multifeature fusion Pulse Coupled Neural Network (PCNN) algorithm. The method shares linking among several PCNNs running in parallel. We illustrate the PCNN fusion technique with a clean and a noisy three-band color image example.
This paper describes a preliminary approach to the fusion of multi-spectral image data for the analysis of cervical cancer. The long-term goal of this research is to define spectral signatures and automatically detect cancer cell structures. The approach combines a multi-spectral microscope with an image analysis tool suite, MathWeb. The tool suite incorporates a concurrent Principal Component Transform (PCT) that is used to fuse the multi-spectral data. This paper describes the general approach and the concurrent PCT algorithm. The algorithm is evaluated from both the perspective of image quality and performance scalability.
Facing the increasing availability of remote sensing imagery, the compression of information and the combining of multi-spectral and multi-sensor image data are becoming increasingly important. This paper presents a new image fusion scheme, based on PCA and the multi-resolution analysis of wavelet theory, for fusing high-resolution panchromatic and multi-spectral images. It works in two ways: (a) by replacing some wavelet coefficients of the k principal components with the corresponding coefficients of the high-resolution panchromatic image; and (b) by adding the wavelet coefficients of the high-resolution panchromatic image directly to the k principal components. The proposed approach is used to fuse SPOT panchromatic and Landsat multi-spectral images. Experimental results demonstrate that the proposed approach not only preserves the spectral characteristics of the multi-spectral images, but also improves their definition and spatial quality. Compared with the PCA fusion method, the proposed scheme performs much better and is more adaptable.
This paper describes a contrast-based monochromatic fusion process aimed at on-board, real-time use: it maximizes the information content in the combined image while retaining visual cues that are essential for navigation and piloting tasks. The method is a multi-scale fusion process that provides a combination of pixel selection from a single image and a weighting of the two (or multiple) images. Each spectral input is divided into spatial sub-bands of different scales and orientations, and within each scale a combination rule is applied to the corresponding pixels taken from the two components. Even when the combination rule is a binary selection, the fused image may contain pixel values taken from both components at various scales, since the selection is made at each scale. The visual-band input is given preference in the low-scale, large-feature fusion. This fusion process provides a fused image better tuned to natural and intuitive human perception, which is necessary for pilotage and navigation under stressful conditions, while maintaining or enhancing the target detection and recognition performance of proven display fusion methodologies. The fusion concept was demonstrated on imagery from image intensifiers and forward-looking IR sensors currently used by the US Navy for navigation and targeting. The approach is easily extendable to more than two bands.
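The per-scale combination idea can be sketched with a minimal two-scale, 1-D decomposition: binary selection on the detail (high-frequency) part and a visual-biased weighting on the coarse part. This is a toy sketch, not the paper's sub-band scheme; the 3-tap mean split and the 0.7 visual weight are illustrative assumptions.

```python
def two_scale_fuse(visual, thermal, visual_weight=0.7):
    # Split each 1-D signal into a coarse part (3-tap mean) and a detail
    # part, select the higher-contrast detail per pixel (binary rule), and
    # bias the coarse part toward the visual band, mirroring the preference
    # given to the visual input at large scales.
    def split(sig):
        coarse = []
        for i in range(len(sig)):
            window = sig[max(0, i - 1):i + 2]
            coarse.append(sum(window) / len(window))
        detail = [s - c for s, c in zip(sig, coarse)]
        return coarse, detail

    cv, dv = split(visual)
    ct, dt = split(thermal)
    fused = []
    for i in range(len(visual)):
        base = visual_weight * cv[i] + (1 - visual_weight) * ct[i]
        det = dv[i] if abs(dv[i]) >= abs(dt[i]) else dt[i]  # binary selection
        fused.append(base + det)
    return fused
```

Because the binary selection is made independently at each scale, a single fused pixel can still draw on both inputs, which is the behavior noted in the abstract.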
This paper presents a multiple-sensor approach to tracking mobile human targets. The goal of this research is to have a video camera automatically monitor a moving human subject in an environment that may contain multiple subjects and clutter. Real-time range data, obtained from arrays of acoustic sensors, are input to a hidden Markov model (HMM) and processed in order to predict the target location. The problem amounts to solving for and maximizing P(O|λ), the probability of obtaining an observation sequence O given an HMM λ. First, the probability is calculated using the forward-backward recursive algorithm. Second, the parameters of the HMM are optimized using Baum-Welch iteration to maximize P(O|λ); the maximization procedure ceases when an acceptable tolerance, consistent with obtaining accurate prediction probabilities, is reached. The target track is extracted from the model using the Viterbi algorithm. The hidden Markov models were formulated analytically and were initially trained and tested using synthetic data. Results obtained for single human targets moving at random in a large room show a close correlation between the HMM output and the actual target tracks.
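The forward pass of the forward-backward algorithm computes P(O|λ) directly. A minimal pure-Python sketch is below; the two-state range model used in the test is invented for illustration, and Baum-Welch re-estimation and Viterbi decoding would build on the same alpha recursion.

```python
def forward_probability(obs, states, start_p, trans_p, emit_p):
    # Forward algorithm: alpha[t][s] = P(o_1..o_t, state_t = s | lambda).
    # Returns P(O | lambda), the total probability of the observation
    # sequence under the model.
    alpha = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    for o in obs[1:]:
        prev = alpha[-1]
        alpha.append({
            s: sum(prev[p] * trans_p[p][s] for p in states) * emit_p[s][o]
            for s in states})
    return sum(alpha[-1].values())
```

Baum-Welch repeatedly adjusts `start_p`, `trans_p`, and `emit_p` to increase this quantity, stopping when the improvement falls below the chosen tolerance.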
We consider the problem of identity fusion for a multi-sensor target tracking system in which sensors generate reports on the target identities. Since the sensor reports are typically fuzzy, 'incomplete', and inconsistent, we base the fusion approach on the minimization of inconsistencies between the sensor reports using convex quadratic programming (QP) and linear programming (LP) formulations. In contrast to the Dempster-Shafer evidential reasoning approach, which suffers from exponentially growing complexity, our approach is highly efficient. Moreover, our approach is capable of fusing 'ratio type' sensor reports, and is thus more general than evidential reasoning theory. When the sensor reports are consistent, the solution generated by the new fusion method can be shown to converge to the true probability distribution. Simulation work shows that our method generates reasonable fusion results, and when only 'subset type' sensor reports are presented, it produces fusion results similar to those obtained via evidential reasoning theory.
The prodigious amount of information provided by surveillance systems and other information sources has created unprecedented opportunities for achieving situation awareness. Because mission users' needs are constantly evolving, fusion control strategies must adapt to these changing requirements. However, the optimal control problem with the desired adaptive control capabilities is enormously complex. Therefore, we solve the adaptive fusion control problem approximately using a methodology called Neuro-Dynamic Programming (NDP) that combines elements of dynamic programming, simulation-based reinforcement learning, and statistical inference techniques. This work demonstrates the promise of using NDP for adaptive fusion control by applying it to allocate computational resources to Bayesian belief networks that use a variety of data types to track and identify clusters of vehicles. We have significantly extended previous work by using NDP to adapt the fusion process itself in addition to deciding which clusters should have their inference updated. Fusion within the Bayesian networks was adapted by using NDP to select the subset of available data to be used when updating the inference. We also extended previous work by using a dynamic scenario with moving vehicles for training and testing models.
We present a realistic neural network, the canonical cortical module, built on basic principles of cortical organization: the opponent-cells principle, the canonical cortical circuit principle, and the modular principle. When applied to visual images, the network explains the orientation and spatial-frequency filtering functions of neurons in the striate cortex. Two patterns of joint distribution of opponent cells in the inhibitory cortical layer are presented, pinwheel and circular, providing two Gestalt descriptions of the local visual image: circle-ness and cross-ness. These modules were shown to have power for shape detection and texture discrimination; they also provide an enhancement of the signal-to-noise ratio of input images. Being modality independent, the canonical cortical module seems to be a good tool for bio-fusion in intelligent system control.
The Multi-Sensor Fusion Management (MSFM) algorithm positions multiple, detection-only, passive sensors in a 2D plane to optimize the fused probability of detection using a simple decision fusion method. Previously the MSFM algorithm was evaluated on two synthetic problem domains comprising both static and moving targets.
In the original formulation the probability distribution of the target location was modelled using a non-parametric approach. The logarithm of the fused detection probability was used as a criterion function for the optimisation of the sensor positions. This optimisation used a straightforward gradient ascent approach, which occasionally found local optima. Following the placement optimisation the sensors were deployed and the individual sensor detections combined using a logical OR fusion rule. The target location distribution could then be updated using the method of sampling importance resampling (SIR).
In the current work the algorithm is extended to admit a richer variety of behaviour. More realistic sensor characteristic models are used which include detection-plus-bearing sensors and false alarm probabilities commensurate with actual sonar sensor systems. In this paper the performance of the updated MSFM algorithm is illustrated on a realistic anti-submarine warfare (ASW) application in which the placement of the sensors is carried out incrementally, allowing for the optimisation of both the location and the number of sensors to be deployed.
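The OR-rule criterion and the gradient ascent placement step can be sketched in one dimension. This is a toy illustration of the original formulation, not the MSFM implementation: the exponential range-dependent detection model and the step-size settings are assumptions.

```python
import math

def fused_pd(sensor_positions, target, pd_max=0.9, scale=5.0):
    # Logical OR fusion: the target is detected if any sensor detects it,
    # so P_fused = 1 - prod(1 - p_i). Each sensor's detection probability
    # decays with range (the exponential model here is an assumption).
    p_miss = 1.0
    for s in sensor_positions:
        p_i = pd_max * math.exp(-abs(s - target) / scale)
        p_miss *= 1.0 - p_i
    return 1.0 - p_miss

def optimize_positions(positions, target, step=0.1, iters=200, eps=1e-4):
    # Straightforward gradient ascent (finite differences) on the log of the
    # fused detection probability, mirroring the original MSFM placement
    # step; like the original, it can stop in a local optimum.
    pos = list(positions)
    for _ in range(iters):
        for i in range(len(pos)):
            base = math.log(fused_pd(pos, target))
            pos[i] += eps
            grad = (math.log(fused_pd(pos, target)) - base) / eps
            pos[i] += step * grad - eps   # undo probe, apply ascent step
    return pos
```

Each iteration nudges every sensor in the direction that raises the fused log-probability, drawing the sensors toward the target location.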
The Data Fusion Model maintained by the Joint Directors of Laboratories (JDL) Data Fusion Group is the most widely used method for categorizing data fusion-related functions. This paper discusses the current effort to revise and expand this model to facilitate the cost-effective development, acquisition, integration, and operation of multi-sensor/multi-source systems. Data fusion involves combining information, in the broadest sense, to estimate or predict the state of some aspect of the universe. These states may be represented in terms of attributive and relational states. If the job is to estimate the state of people, it can be useful to include consideration of informational and perceptual states in addition to the physical state. Developing cost-effective multi-source information systems requires a method for specifying data fusion processing and control functions, interfaces, and associated databases. The lack of common engineering standards for data fusion systems has been a major impediment to the integration and re-use of available technology: current developments do not lend themselves to objective evaluation, comparison, or re-use. This paper reports on proposed revisions and expansions of the JDL Data Fusion model to remedy some of these deficiencies. This involves broadening the functional model and related taxonomy beyond the original military focus, and integrating the Data Fusion Tree Architecture model for system description, design, and development.
This paper studies the classification properties and mechanisms of outer-supervised feed-forward neural network classifiers (FNNCs). It is shown that nonlinear FNNCs can break through the 'bottleneck' behaviors of linear FNNCs. Assuming that the FNNCs involved associate only one output node with each class, it is shown that once the global minimum solutions with null cost based on batch-style learning are obtained, in the case of linear-output network classifiers the class weight vectors corresponding to different output nodes are orthogonal, while in the case of sigmoid output activation functions the jth class weight vector must be situated in the negative direction of the ith (i ≠ j) class weight vector.