This paper describes the design and implementation of multiple model nonlinear filters (MMNLF) for ground target tracking using Ground Moving Target Indicator (GMTI) radar measurements. The MMNLF is based on a general theory of hybrid continuous-discrete dynamics. The motion model state is discrete, and its stochastic dynamics are a continuous-time Markov chain. For each motion model, the continuum dynamics are a continuous-state Markov process described here by appropriate Fokker-Planck equations. This is illustrated by a specific two-model MMNLF in which one motion model incorporates terrain, road, and vehicle motion constraints derived from battlefield observations; the second model is slow diffusion in speed and heading. The target state conditional probability density is discretized on a moving grid and recursively updated with sensor measurements via Bayes' formula. The conditional density is time-updated between sensor measurements using Alternating Direction Implicit (ADI) finite difference methods. In simulation testing against low signal-to-clutter-plus-noise ratio (SCNR) targets, the MMNLF is able to maintain track in situations where single-model filters based on either of the component models fail. Potential applications of this work include detection and tracking of foliage-obscured moving targets.
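The measurement update at the heart of such a grid-based filter is Bayes' formula applied pointwise on the discretized density. The sketch below shows a 1-D version; the grid, prior, and Gaussian measurement model are illustrative only, not the paper's models:

```python
import numpy as np

def bayes_update(prior, likelihood):
    """Pointwise Bayes' formula on a discretized density: posterior ~ prior * likelihood."""
    posterior = prior * likelihood
    return posterior / posterior.sum()

# 1-D grid of candidate target positions (illustrative)
grid = np.linspace(-5.0, 5.0, 201)
prior = np.exp(-0.5 * grid**2)            # broad Gaussian prior on position
prior /= prior.sum()

z = 1.0                                    # sensor measurement
sigma = 0.5                                # measurement noise std (assumed)
likelihood = np.exp(-0.5 * ((z - grid) / sigma) ** 2)

posterior = bayes_update(prior, likelihood)
mean_est = (grid * posterior).sum()        # posterior mean shifts toward z
```

With a unit-variance prior and measurement variance 0.25, the discretized posterior mean closely matches the analytic Kalman value of 0.8.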
The Interactive Multiple Model (IMM) algorithm is a well-known multiple model technique for tracking maneuvering targets. In its design, the IMM assumes that the target transition likelihood is known a priori. In practice, however, this assumption is violated. The objective of this paper is to design a multiple model tracker without assuming any a priori knowledge of the characteristics of the target motion. The performance of the newly developed filter is compared to that of an IMM tracker.
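The a priori knowledge the IMM relies on is the Markov model-switching matrix used in its mixing step. A minimal sketch of that step for two motion models follows; the transition matrix and model probabilities are illustrative, not taken from the paper:

```python
import numpy as np

# Assumed Markov model-switching matrix: row i gives switching probabilities
# from model i. This is exactly the a priori knowledge discussed in the text.
P_trans = np.array([[0.95, 0.05],
                    [0.10, 0.90]])
mu = np.array([0.7, 0.3])           # current model probabilities (illustrative)

# Predicted model probabilities: mu_pred[j] = sum_i P_trans[i, j] * mu[i]
mu_pred = P_trans.T @ mu

# Mixing weights: probability the system was in model i given it is now in model j;
# each column sums to one, and these weights blend the model-conditioned estimates.
mix = (P_trans * mu[:, None]) / mu_pred[None, :]
```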
While there are many techniques for Bearings-Only Tracking (BOT) in the ocean environment, they do not apply directly to the land situation. Generally, for tactical reasons, the land observer platform is stationary; however, it has two sensors, visual and infrared, for measuring bearings, and a laser range finder (LRF) for measuring range. There is a requirement to develop a new BOT data fusion scheme that fuses the two sets of bearing readings and, together with a single LRF measurement, produces a unique track. This paper first develops a parameterized solution for the target speeds prior to the occurrence of the LRF measurement, when the problem is unobservable. At, and after, the LRF measurement, a BOT formulated as a least squares (LS) estimator then produces a unique LS estimate of the target states. Bearing readings from the other sensor serve as instrumental variables in a data fusion setting to eliminate the bias in the BOT estimator. The result is a recursive, unbiased, decentralized data fusion scheme. Results from two simulation experiments corroborate the theoretical development and show that the scheme is optimal.
Tracking energy on an intensity-modulated sensor output typically requires windowing, thresholding, and/or interpolation to arrive at point measurements to feed the tracking algorithm. Conventional trackers are point trackers, and point measurement estimation procedures pose problems for tracking signal energy that is distributed across many sensor cells. Such signals are sometimes termed over-resolved. Large arrays provide greater resolution with the potential for improved detection and classification performance, but higher resolution is in direct conflict with tracking over-resolved signals. The Histogram-Probabilistic Multi-Hypothesis Tracker (H-PMHT) algorithm addresses these issues and provides a means for modeling and tracking signals that are spread across several contiguous sensor cells. H-PMHT models the cell responses as a received energy histogram, and the probability density underlying this histogram is modeled by a mixture density. Elements of the H-PMHT signal model, theory, and algorithm are presented for linear Gauss-Markov targets. Tracking examples using simulated azimuth beam data are presented.
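The core H-PMHT idea, treating cell energies as histogram counts drawn from a mixture density, can be illustrated with a single EM-style step. The cell energies, target component, and flat clutter floor below are invented for illustration and are not the paper's models:

```python
import numpy as np

def gaussian(x, mean, sigma):
    """Gaussian pdf evaluated at x."""
    return np.exp(-0.5 * ((x - mean) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

cells = np.arange(10, dtype=float)                      # sensor cell centers
energy = np.array([1, 1, 2, 8, 9, 3, 1, 1, 1, 1.0])     # received energy histogram

target_pdf = gaussian(cells, 4.0, 1.0)   # one over-resolved target near cell 4
clutter_pdf = np.full_like(cells, 0.1)   # flat clutter/noise component
pi_t, pi_c = 0.5, 0.5                    # mixing proportions (illustrative)

# E-step: responsibility of the target component for each cell's energy
resp = pi_t * target_pdf / (pi_t * target_pdf + pi_c * clutter_pdf)

# M-step-style estimate: energy-weighted centroid of the target component
centroid = (cells * energy * resp).sum() / (energy * resp).sum()
```

Because the energy in the peak cells is apportioned by responsibility rather than thresholded, no point-measurement extraction is needed.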
Algorithms for the handling of target type information in an operational multi-sensor tracking system are presented. The paper discusses recursive target type estimation, computation of crosses from passive data (strobe track triangulation), and the computation of the quality of the crosses for deghosting purposes. The focus is on Bayesian algorithms that operate in the discrete target type probability space, and on the approximations introduced for computational complexity reduction. The centralized algorithms are able to fuse discrete data from a variety of sensors and information sources, including IFF equipment, ESMs, and IRSTs, as well as flight envelopes estimated from track data. All algorithms are asynchronous and can be tuned to handle clutter, erroneous associations, and missed and erroneous detections. A key to obtaining this ability is the inclusion of data forgetting, via a procedure for propagating target type probability states between measurement time instances. Other important properties of the algorithms are their ability to handle ambiguous data and scenarios. The above aspects are illustrated in a simulation study. The simulation setup includes 46 air targets of 6 different types that are tracked by 5 airborne sensor platforms using ESMs and IRSTs as data sources.
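The data-forgetting propagation described above, relaxing the discrete type probabilities between measurements, can be sketched as follows. The forgetting factor and report likelihoods are illustrative, not the paper's:

```python
import numpy as np

def forget(p, alpha):
    """Relax type probabilities toward uniform between measurements.

    alpha = 1 keeps the probabilities unchanged; alpha = 0 resets them,
    implementing 'data forgetting' so stale evidence decays over time.
    """
    uniform = np.full_like(p, 1.0 / len(p))
    return alpha * p + (1.0 - alpha) * uniform

def update(p, likelihood):
    """Bayes update of the discrete type probabilities with a sensor report."""
    post = p * likelihood
    return post / post.sum()

p = np.array([1 / 3, 1 / 3, 1 / 3])          # three candidate types, flat prior
lik_report = np.array([0.8, 0.15, 0.05])     # assumed likelihood of an ESM report
p = update(forget(p, 0.9), lik_report)       # propagate, then fuse the report
```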
Nowadays, naval combat systems have to deal with various kinds of fast incoming missiles, including weaving sea-skimmers and bunt missiles. Their high maneuver capabilities frequently cause track lock-off. The article addresses this situation and describes an angle-only Multiple Model Kalman Filter (MMKF) devoted to self-defense against head-on maneuvering missiles. The application is tracking by passive search sensors such as Infrared Search and Track (IRST) sensors. An example of the implementation of the MMKF in a sensor suite processing chain is then provided. The case of a fully silent search platform including an IRST and an ESM is studied with different incoming missile scenarios (Monte Carlo analysis).
Multiple-sensor data fusion is becoming increasingly important within the defense community as technology evolves. The use of multiple-sensor information reduces ambiguity and presents the operator with an enhanced tactical picture of the surveillance volume. The crucial step in the fusion process is the data association of the information received from the sensors. If the associations are made incorrectly, the fused data could give rise to estimates that are worse than those of a single sensor. In this paper, we explore the problem of associating Electronic Support Measure (ESM) tracks with one or more possible radar tracks. We examine the performance of different track-to-track data association algorithms through simulations, based on the Probability of False Association (Pfa) and the Probability of Correct Association (Pc).
Most real-world engineering problems are imprecise and carry a certain degree of fuzziness in the description of their nature. Fuzzy logic is a design methodology that can be used to solve such problems. It has the advantages of lower development costs, superior features, and better end-product performance. Fuzzy logic makes it possible to describe complex systems using expert experience and knowledge in English-like rules, which are easy to learn and use, even by non-experts. The fuzzy technique does not require system modeling or complex mathematical equations. The design methodology is to first understand and characterize the system behavior using basic knowledge and experience, and then design the algorithm using fuzzy rules that describe the relationship between its input and output. The design is then debugged through simulations; if the performance is not satisfactory, only a few fuzzy rules need to be modified or added. There exists considerable literature on target tracking based on Kalman filtering and probabilistic data association (PDA) techniques. Few of these techniques yield acceptable results in a high-density clutter environment, owing to the complexity of combined target-and-measurement-to-track association or to the simplifications assumed in these techniques. This paper presents the use of fuzzy association rules in the data association of target measurements under high-density clutter. The fuzzy tracker is used to track a target, and its performance is compared with a standard PDA filter for various signal-to-noise ratios (SNR).
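As a minimal illustration of a fuzzy association rule, the sketch below maps an innovation distance to an association weight through triangular membership functions. The membership breakpoints and rule base are invented for illustration and are not the paper's:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b on support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def association_weight(distance):
    """Fuzzy rule base mapping a measurement-to-track distance to a weight.

    Rule 1: IF distance is near THEN weight is 1.0
    Rule 2: IF distance is far  THEN weight is 0.0
    Defuzzification is a simple weighted average of the rule outputs.
    """
    near = tri(distance, -1.0, 0.0, 2.0)   # membership in "near"
    far = tri(distance, 1.0, 3.0, 5.0)     # membership in "far"
    if near + far == 0.0:
        return 0.0                          # outside all membership supports
    return (near * 1.0 + far * 0.0) / (near + far)
```

A close measurement (distance 0) gets full weight, a distant one (distance 4) gets none, and distances in the overlap region are blended smoothly.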
We describe ongoing work applying Finite Set Statistics (FISST) filtering techniques to a Synthetic Aperture Radar (SAR) Automatic Target Recognition (ATR) problem: identifying ground targets from SAR imagery. The paper summarizes recent results from this ongoing project. The signatures for these targets are ambiguous because of extended operating conditions; that is, the images have uncharacterizable noise introduced in the form of mud, dents, etc. We propose a number of mechanisms for compensating for this noise.
A variety of ATR algorithms promise improved performance that has not yet been realized operationally. Typically, good results have been reported on data sets of limited size tested in a laboratory environment, only to see the performance degrade when stressed with real-world target and environmental variability. To investigate exact signature reproduction requirements, along with target and environment variability issues for stressing new ATR metrics, the U.S. Army's National Ground Intelligence Center (NGIC) and Targets Management Office (TMO) originated, sponsored, and directed a signature project plan to acquire multiple-target full-polarimetric Ka-band radar signature data at Eglin AFB, as well as its submillimeter-wave compact radar range equivalent using high-fidelity exact 1/16th-scale replicas fabricated by the ERADS program. To understand signature reproduction requirements through the variability of multiple-target RCS characteristics, TMO and NGIC sponsored researchers at UMass Lowell's Submillimeter-Wave Technology Laboratory (STL) and Simulation Technologies (SimTech) to analyze the intra-class and inter-class variability of the full-scale Ka-band turntable signature data. NGIC, TMO, STL, and SimTech researchers then traveled to the location of the vehicles measured at Eglin AFB and conducted extensive documentation and mensuration on these vehicles. Using this information, ERADS built high-fidelity, articulatable exact replicas for measurement in the NGIC's compact radar ranges. Signal processing software established by STL researchers in an NGIC-directed signature study was used to execute an HRR and ISAR cross-correlation study of the field and scale-model signature data. The quantified signature-to-signature variability is presented, along with a description and examples of the signature analysis techniques exploited.
This signature data is available from NGIC on request for Government Agencies and Government Contractors with an established need-to-know.
We present an algorithm for SAR and FLIR image registration with the goal of facilitating the matching, or association, of potential targets detected in both images. Our algorithm is a feature-point-based approach; the preprocessing is therefore relatively simple. We use available sensor truth information to establish an initial registration, which we argue is a good approximation to the actual one up to a linear translation caused by errors in two parameters of the sensor truth. A generalized Hough transform (GHT) in 2-D is then used to find the residual translational registration error by finding a maximal set of matching feature points from the SAR and FLIR. The registration transformation is then updated using the set of matching feature points found by the GHT. Owing to the robustness of the GHT to noise in the form of uncorrelated clutter feature points, our algorithm can tolerate a significant level of false alarms in the feature detection process, thereby allowing simple detection processes to be used while still achieving a very high overall target detection rate after the registration process.
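The residual-translation search by GHT-style voting can be sketched as follows: every pairwise offset between feature points casts a vote, and the true translation accumulates the most votes even when clutter points are present. The point sets and vote quantum below are illustrative:

```python
from collections import Counter

def estimate_translation(pts_a, pts_b, quantum=1.0):
    """Vote over all pairwise offsets between two feature-point sets.

    True matching pairs all vote for the same (quantized) offset, so the
    translation between the images wins the vote despite clutter points.
    """
    votes = Counter()
    for ax, ay in pts_a:
        for bx, by in pts_b:
            key = (round((bx - ax) / quantum), round((by - ay) / quantum))
            votes[key] += 1
    (tx, ty), _ = votes.most_common(1)[0]
    return tx * quantum, ty * quantum

# Illustrative feature points: the second set is the first shifted by (4, -3),
# plus one uncorrelated clutter point that casts only stray votes.
sar = [(0, 0), (5, 2), (9, 7), (3, 8)]
flir = [(x + 4, y - 3) for x, y in sar] + [(20, 20)]
```

Here the true offset collects four votes while every spurious offset collects one, so the clutter point does not disturb the estimate.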
Land mine detection using metal detector (MD) and ground penetrating radar (GPR) sensors in hand-held units is a difficult problem. Detection difficulties arise due to: 1) the varying composition and type of metal in land mines, 2) the time-varying nature of the background, and 3) the variation in height and velocity of the hand-held unit during data measurement. This research introduces new spatially distributed MD features for differentiating land mine signatures from background. The spatially distributed features involve correlating sequences of MD energy values with six weighted density distribution functions. These features are evaluated using a standard backpropagation neural network on real data sets containing more than 2,300 mine encounters of different size, shape, content, and metal composition, measured under different soil conditions.
Designing SAR sensors is an extremely complex process, so it is very important to keep in mind the goal for which the SAR sensor is to be built. For military purposes, the detection and recognition of vehicles is essential. To give recommendations for the design and use of SAR sensors, we carried out interpreter experiments. To assess interpreter performance, we measured parameters such as detection rate and false alarm rate. The following questions were of interest: How do the SAR sensor parameters bandwidth and incidence angle influence interpreter performance? Can the length, width, and orientation of vehicles be measured in SAR images? Which information (size, signature, ...) is used by the interpreters for vehicle recognition? Using our SaLVe evaluation testbed, we prepared a large set of images from the experimental SAR system DOSAR (EADS Dornier) and defined several military interpretation tasks for the trials. In a four-week experiment, 30 German military photo interpreters had to detect and classify tanks and trucks in X-band images of different resolutions. To accustom the interpreters to SAR image interpretation, they first carried out a computer-based SAR tutorial. To complete the investigation, the interpreters also provided subjective assessments of image quality.
This paper addresses the detection and classification of buried objects in shallow and very shallow water. A new 3D high-resolution sonar technique is presented. A specific suspension and tracking sonar frame was designed with the constraint of being close to a realistic future operational configuration. This equipment was deployed first in a pool and then at sea, where it was mounted 2 meters above a rough sand sea floor in which objects and rocks were buried. The detection and classification of objects have been successfully demonstrated for burial depths of up to half a meter of sediment. This paper presents these preliminary results and shows the technique's interest for mine-hunting operations, buried-cable inspection, archaeological research, etc.
Invited Session: Multisensor Fusion Methodologies and Applications I
The theoretically optimal approach to multitarget detection, tracking, and identification is a suitable generalization of the recursive Bayes nonlinear filter. This approach will never be of practical interest without the development of drastic but principled approximation strategies. In single-target problems, the computationally fastest approximate filtering approach is the constant-gain Kalman filter. This filter propagates a first-order statistical moment of the single-target system (the posterior expectation) in place of the posterior distribution. This paper describes an analogous strategy: propagation of a first-order statistical moment of the multitarget system. This moment, the probability hypothesis density (PHD), is the density function on single-target state space that is uniquely defined by the following property: its integral over any region of state space is the expected number of targets in that region. We describe recursive Bayes filter equations for the PHD that account for multiple sensors, missed detections and false alarms, and appearance and disappearance of targets.
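The defining property of the PHD can be checked numerically: with a PHD built as a sum of two unit-weight Gaussian components (one per expected target), its integral over any region approximates the expected target count in that region. The component locations and widths below are illustrative:

```python
import numpy as np

def gaussian(x, mean, sigma):
    """Unit-integral Gaussian pdf, used as one unit-weight PHD component."""
    return np.exp(-0.5 * ((x - mean) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# 1-D single-target state space discretized for numerical integration
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]

# PHD with two well-separated unit-weight components (two expected targets)
phd = gaussian(x, -3.0, 0.5) + gaussian(x, 4.0, 0.5)

total = phd.sum() * dx          # integral over all of state space -> about 2
left = phd[x < 0].sum() * dx    # integral over the region x < 0 -> about 1
```

Unlike a probability density, the PHD integrates to the expected number of targets (here 2), not to 1.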
The work presented here is a continuation of research first reported in Mahler et al. Our goal is a generalization of Bayesian filtering and estimation theory to the problem of multisensor, multitarget, multi-evidence unified joint detection, tracking, and target identification. Our earlier efforts focused on integrating the Statistical Features (StaF) algorithm with a Bayesian nonlinear filter, allowing simultaneous determination of target position, velocity, pose, and type via maximum a posteriori estimation. In this paper we continue to address the problem of target classification based on high range resolution radar signatures. While we continue to consider feature-based techniques, as in StaF and our earlier work, instead of taking the locations and magnitudes of peaks in a signature as our features, we consider three alternative feature sets, obtained by applying either a Wavelet Decomposition, Principal Component Analysis, or Linear Discriminant Analysis to the signature. In the wavelet decomposition setting, we also briefly discuss the challenge of attaching a measure of uncertainty to a classification decision.
The theoretically optimal approach to multitarget detection, tracking, and identification is a suitable generalization of the recursive Bayes nonlinear filter. However, this optimal filter is so computationally challenging that it must usually be approximated. We report on a novel approximation of multi-target nonlinear filtering based on the spectral compression (SPECC) nonlinear filter implementation of Stein-Winter probability hypothesis densities (PHDs). In its current implementation, SPECC is a two-dimensional, four-state, FFT-based filter that is Bayes-closed. It replaces a log-posterior or log-likelihood with an approximate log-posterior or log-likelihood that is a truncation of a Fourier basis. This approximation is based on minimizing the least-squares error of the log-densities. The ultimate operational utility of our approach depends on its computational efficiency and robustness when compared with similar approaches. Another novel aspect of the proposed algorithm is the propagation of a first-order statistical moment of the multitarget system. This moment, the probability hypothesis density (PHD), is a density function on single-target state space uniquely defined by the following property: its integral over any region of state space is the expected number of targets in that region. It is the expected value of the point process of the random track set (i.e., of the density function whose integral over any region of state space is the actual number of targets in the region). The adequacy and accuracy of the algorithm when applied to simulated and real scenarios involving ground targets are demonstrated.
A group target is a collection of individual targets that are part of some larger military formation, such as a brigade, tank column, aircraft carrier group, etc. Unlike conventional targets, group targets are fuzzy in the sense that it is not possible to precisely define their identities in actual battlefield situations. It is also not necessarily possible to detect (let alone track or identify) each and every platform in a given group. Force aggregation (also known as situation assessment, or Level 2 data fusion) is the process of detecting, tracking, and identifying group targets. A suitable generalization of the Bayes recursive filter is the theoretically optimal basis for detection, tracking, and identification of multiple targets using multiple sensors. However, it is not obvious what filtering even means in the context of group targets. In this paper we present a theoretically unified, rigorous, and potentially practical approach to force aggregation. Using finite-set statistics (FISST), we show how to construct a theoretically optimal recursive Bayes filter for the multisensor-multigroup problem. Potential computational tractability is achieved by generalizing the concept of a probability hypothesis density (PHD).
It is supposed that there is a multisensor system in which each sensor performs sequential detection of a target. The binary decisions on target presence and absence are then transmitted to a fusion center, which combines them to improve the performance of the system. We assume that the sensors are multichannel systems, each possibly having a different number of channels. Sequential detection of a target in each sensor is done by implementing a generalized Wald sequential probability ratio test, which is based on the maximum likelihood ratio statistic and allows one to fix the false alarm rate and the misdetection rate at specified levels. We first show that this sequential detection procedure is asymptotically optimal for general statistical models in the sense of minimizing the expected sample size as the probabilities of errors vanish. We then construct the optimal non-sequential fusion rule, which waits until all the local decisions in all sensors are made and then fuses them. It is optimal in the sense of maximizing the probability of target detection for a fixed probability of false alarm, or of minimizing the maximal probability of error (minimax criterion). An analysis shows that the final decision can be made substantially more reliable even for a small number of sensors (3-5). The performance of the system is illustrated by the example of detecting a deterministic signal in correlated (colored) Gaussian noise. In this example, we provide both the results of theoretical analysis and the results of a Monte Carlo experiment. These results allow us to conclude that the use of the sequential detection algorithm substantially reduces the required resources of the system compared to the best non-sequential algorithm.
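The single-channel building block of such a scheme, Wald's sequential probability ratio test for a known mean shift in Gaussian noise, can be sketched as follows. The hypotheses and error rates are illustrative and do not reproduce the paper's generalized multichannel test:

```python
import math

def sprt(samples, mu1=1.0, sigma=1.0, alpha=0.01, beta=0.01):
    """Wald SPRT: H0 is N(0, sigma^2), H1 is N(mu1, sigma^2).

    Thresholds follow Wald's approximations, fixing the false alarm rate
    near alpha and the misdetection rate near beta.
    """
    a = math.log(beta / (1 - alpha))        # lower threshold: accept H0
    b = math.log((1 - beta) / alpha)        # upper threshold: accept H1
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        # log-likelihood ratio increment of H1 vs H0 for one Gaussian sample
        llr += (mu1 * x - 0.5 * mu1 ** 2) / sigma ** 2
        if llr >= b:
            return "target", n              # local decision: target present
        if llr <= a:
            return "no target", n           # local decision: target absent
    return "undecided", len(samples)        # sample budget exhausted
```

The decision sample number `n` is what the sequential procedure minimizes on average, which is the source of the resource savings discussed above.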
Filtering is a method of estimating the conditional probability distribution of a signal based upon a noisy, partial, corrupted sequence of observations of the signal. Particle filters are a method of filtering in which the conditional distribution of the signal state is approximated by the empirical measure of a large collection of particles, each evolving in the same probabilistic manner as the signal itself. In filtering, it is often assumed that we have a fixed model for the signal process. In this paper, we allow unknown parameters to appear in the signal model, and present an algorithm to estimate simultaneously both the parameters and the conditional distribution of the signal state using particle filters. This method is applicable to general nonlinear discrete-time stochastic systems and can be used with various types of particle filters. It is believed to produce asymptotically optimal estimates of the state and the true parameter values, provided reasonable initial parameter estimates are given and further estimates are constrained to be in the vicinity of the true parameters. We demonstrate this method in the context of a search-and-rescue problem using two different particle filters and compare the effectiveness of the two filters.
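A minimal bootstrap particle filter of the kind described, here without the parameter-estimation extension, might look like the following toy 1-D random-walk example (all model values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter(observations, n_particles=500, q=0.1, r=0.5):
    """Bootstrap particle filter for a 1-D random walk observed in Gaussian noise.

    q: signal (process) noise std; r: observation noise std. Both assumed known
    here; the paper's scheme would estimate such parameters alongside the state.
    """
    particles = rng.normal(0.0, 1.0, n_particles)   # initial particle cloud
    estimates = []
    for z in observations:
        # Evolve each particle in the same probabilistic manner as the signal
        particles = particles + rng.normal(0.0, q, n_particles)
        # Weight by the observation likelihood, then normalize
        w = np.exp(-0.5 * ((z - particles) / r) ** 2)
        w /= w.sum()
        # Resample: duplicate high-weight particles, discard low-weight ones
        particles = particles[rng.choice(n_particles, n_particles, p=w)]
        estimates.append(particles.mean())          # empirical-measure mean
    return estimates

est = particle_filter([1.0, 1.1, 0.9, 1.0])
```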
Particle-based nonlinear filters provide a mathematically sound method, optimal in the limit, for solving a number of difficult filtering problems. However, several practical difficulties can occur when applying particle-based filtering techniques to real-world problems, including highly directed signal dynamics, highly definitive observations, and clipped observation data. Current approaches to solving these problems generally require increasing the number of particles, but to obtain a given level of performance the number of particles required may be extremely large. We propose a number of techniques to ameliorate these difficulties. Adopting the ideas of simulated annealing, we add noise that is damped in time to the particle states when they are evolved or duplicated, and also add noise that is damped in time to the filter's interpretation of the observations, to deal with the signal dynamics and observation problems. We modify the method by which particles are duplicated to account for the different information flows into the system depending on the location of the particle. We discuss the success we have had with these solutions on some of the problems of interest to Lockheed Martin and the MITACS-PINTS research center.
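The annealing-style remedy described above, adding noise whose scale is damped in time to the particle states, can be sketched as follows; the geometric decay schedule is illustrative:

```python
import numpy as np

def roughening_scale(sigma0, decay, step):
    """Damped noise scale: sigma0 * decay**step, vanishing as the filter settles."""
    return sigma0 * decay ** step

rng = np.random.default_rng(1)
particles = np.zeros(1000)      # stand-in particle states

scales = []
for k in range(5):
    s = roughening_scale(0.5, 0.6, k)
    scales.append(s)
    # Early steps get large roughening noise to keep particle diversity under
    # highly directed dynamics; later steps add almost none.
    particles = particles + rng.normal(0.0, s, particles.size)
```

Early in the run the extra noise prevents the cloud from collapsing onto a wrong mode; as it decays, the filter recovers its usual behavior.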
The problem described in this paper deals with tracking the optical path perturbations introduced by the atmosphere when illuminating a target (missile) with laser light. Due to atmospheric irregularities, the optical path from an observer to an in-flight missile deviates from a straight line, and also changes in time. If the goal of the system is to point a laser beam at a specific point (or area) of the missile body for a given period of time, these optical path variations should be tracked and compensated for when pointing the laser beam. The laser beam should be pointed not at the true but at the apparent location of the desired spot. In the actual system, the missile is illuminated with several lasers (forming a broad beam), and an image of the missile (distorted through the atmosphere) is obtained from the backscattered light. This image contains all the information available about the optical path. The purpose of the work presented here is to estimate the apparent location of five different spots on the missile (distributed evenly along the longitudinal axis, from the nose up to mid-body) from the backscattered images and a priori information that includes the size and speed of the missile. The data available are high-fidelity simulated data, and the apparent locations of the desired spots (over time) are known. Two approaches are considered here. The first approach is based on breaking the problem into two parts: a measurement part and an estimation part. For the measurement part, a neural network is used to infer a mapping from the image to the apparent location of the points of interest (known for the simulated data). Those measured locations are then used by a Kalman filter to estimate the apparent locations. The Kalman filter exploits the fact that the optical paths (from different spots along the longitudinal axis) are correlated in time. This correlation is caused by the missile's displacement through the atmosphere.
The second approach is to compute the centroids of the images and use the resulting points as estimates of the apparent location of a point on the missile. For the first approach, simulation results show a noticeable decrease in the rms error of the apparent location estimates when compared to the average location (mean value). The second approach, while simple, was found to perform quite well.
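The centroid approach amounts to an intensity-weighted mean of the pixel coordinates; a minimal sketch, assuming a single-channel image array:

```python
import numpy as np

def image_centroid(img):
    """Intensity-weighted centroid (row, col) of an image, usable as a
    simple estimate of the apparent target location."""
    img = np.asarray(img, dtype=float)
    rows, cols = np.indices(img.shape)
    total = img.sum()
    return (rows * img).sum() / total, (cols * img).sum() / total

# a single bright pixel at (2, 3) puts the centroid exactly there
frame = np.zeros((5, 5))
frame[2, 3] = 1.0
r, c = image_centroid(frame)
```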
Conditional probability logics (CPLs), such as Adams', while producing many satisfactory results, do not agree with commonsense reasoning for a number of key entailment schemes, including transitivity and contraposition. Also, CPLs and Bayesian techniques often: (1) use restrictive independence/simplification assumptions; (2) lack a rationale behind the choice of prior distribution; (3) require highly complex implementation calculations; (4) introduce ad hoc techniques. To address the above difficulties, a new CPL is being developed: CRANOF (Complexity Reducing Algorithm for Near Optimal Fusion), based upon three factors: (i) second-order probability logic (SOPL), i.e., probability of probabilities within a Bayesian framework; (ii) justified use of Dirichlet family priors, based on an extension of Lukacs' characterization theorem; and (iii) replacement of the theoretical optimal solution by a near-optimal one where the complexity of computations is reduced significantly. A fundamental application of CRANOF to correlation and tracking is provided here through a generic example in a form similar to transitivity: two track histories are to be merged or left alone, based upon observed kinematic and non-kinematic attribute information and conditional probabilities connecting the observed data to the degrees of matching of attributes, as well as relating the matching of prescribed groups of attributes from each track history to the correlation level between the histories.
When trying to determine the identity of individual target classes, a model database is often used which categorizes the known individuals. The models contain attributes that characterize each individual, such as color, propulsion type, shape, and number of wheels, based on sensed characteristics. Often, new targets are introduced that are not modeled in the database but are of interest to the observer. Historical classification systems attempt to map measurements of new targets into the known model base as best as possible. This paper presents an approach for automatically generating a none-of-the-above (NOTA) class to allow for the detection of new classes.
It has been noted recently that, in a number of applications, effective approximations to complex posterior probabilities can be computed through the framework of probability propagation in Bayesian networks. In this paper, we develop a Bayesian network for the problem of target detection and recognition. Our multiply-connected Bayesian network is based on a distribution decomposition of the form p(y,t,e)=p(y|t,e)p(t|e)p(e), where y is an observed image, t is a set of target pixels together with identifying labels, and e is a set of edge pixels. Running a probability propagation algorithm on this network leads to an approximation of the desired posterior probability p(t|y) as a product of terms that correlate the conditional observation distribution p(y|t,e) and target distribution p(t|e) with a posterior edge distribution p(e|y). We describe approaches for generating the required posterior edge distribution and for calculating the correlations through template matching. The result is a computationally-efficient algorithm for computing posterior target probabilities that can be used either to generate hard decisions or for fusion with other information. Target detection based on the posterior probability p(t|y) is discussed in the paper.
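A toy discrete instance of the decomposition p(y,t,e) = p(y|t,e)p(t|e)p(e) shows how the posterior p(t|y) follows by marginalizing over the edge variable; the numbers are illustrative, and the exact sum stands in for the propagation-based approximation used on the real, multiply-connected network.

```python
import numpy as np

# Illustrative tables: 2 edge states e, 2 target states t, one fixed observed y.
p_e = np.array([0.7, 0.3])                 # p(e)
p_t_given_e = np.array([[0.9, 0.1],        # p(t|e), indexed [e, t]
                        [0.4, 0.6]])
p_y_given_te = np.array([[0.2, 0.8],       # p(y=observed | t, e), indexed [e, t]
                         [0.5, 0.9]])

# posterior p(t|y) is proportional to sum_e p(y|t,e) p(t|e) p(e)
joint_t = (p_y_given_te * p_t_given_e * p_e[:, None]).sum(axis=0)
p_t_given_y = joint_t / joint_t.sum()
```

The resulting p(t|y) can drive a hard detection decision directly (threshold the target probability) or be passed on for fusion with other information, as the paper describes.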
For the past two years at this conference we have described results in the practical implementation of a unified, scientific approach to performance measurement for data fusion algorithms. Our approach is based on finite-set statistics (FISST), a generalization of conventional statistics to multisource, multitarget problems. Finite-set statistics makes it possible to directly extend Shannon-type information metrics to multisource, multitarget problems in such a way that information can be defined and measured even though any given end-user may have conflicting or even subjective definitions of what information means. In this follow-on paper we describe the performance of FISST-based metrics that take into account a user's definition of information, and develop a rigorous theory of partial information for multisource, multitarget problems.
Modern naval battleforces generally include many different platforms each with onboard sensors such as radar, ESM, and communications. The sharing of information measured by local sensors via communication links across the battlegroup should allow for optimal or near optimal decisions. A fuzzy logic expert system has been developed that automatically allocates electronic attack (EA) resources in real-time. Genetic algorithm based optimization is conducted to determine the form of the membership functions for the fuzzy root concepts. The resource manager is made up of four trees, the isolated platform tree, the multi-platform tree, the fuzzy parameter selection tree and the fuzzy strategy tree. The isolated platform tree provides a fuzzy decision tree that allows an individual platform to respond to a threat. The multi-platform tree allows a group of platforms to respond to a threat in a collaborative fashion. The fuzzy parameter selection tree is designed to make optimal selections of root concept parameters. The strategy tree is a fuzzy tree that an agent uses to try to predict the behavior of an enemy. Finally, an example is provided that shows how the resource manager uses its subtrees to deliver high quality responses for many demanding scenarios.
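As a sketch of what the genetic algorithm might tune for each fuzzy root concept, consider a triangular membership function whose break points (a, b, c) form the chromosome; this parameterization is an illustrative assumption, not necessarily the form used in the resource manager.

```python
def tri_membership(x, a, b, c):
    """Triangular fuzzy membership: 0 outside [a, c], rising to 1 at b.
    A GA individual would encode one (a, b, c) triple per root concept."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)
```

A fitness function scoring the decision trees' outputs on training scenarios would then drive selection and crossover over these triples.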
For the last two years at this conference, we have described the implementation of a unified, scientific approach to performance measurement for data fusion algorithms based on finite-set statistics (FISST). FISST makes it possible to directly extend Shannon-type information metrics to multisource, multitarget problems. In previous papers we described the application of information Measures of Effectiveness (MoEs) to multisource-multitarget data fusion and to non-distributed sensor management. In this follow-on paper we show how to generalize this work to distributed sensor management and adaptive data fusion.
Target observation is a problem where the application of multiple sensors can improve the probability of detection and observation of the target. Team formation is one method by which seemingly unsophisticated heterogeneous sensors may be organized to achieve a coordinated observation system. The sensors, which we shall refer to as agents, are situated in an area of interest with the goal of observing a moving target. We apply a team approach to this problem, which combines the strengths of individual agents into a cohesive entity: the team. In autonomous systems, the mechanisms that underlie the formation of a team are of interest. Teams may be formed by various mechanisms, which include an externally imposed grouping of agents, or an internal, self-organized (SO) grouping of agents. Internally motivated mechanisms are particularly challenging, but offer the benefit of being unsupervised, an important quality for groups of autonomous cooperating machines. This is the focus of our research. By studying natural systems such as colonies of ants, we obtain insight into these mechanisms of self-organization. We propose that the team is an expression of a distributed agent-self, and that a particular realization of the agent-self exists whilst the environmental conditions are conducive to that existence. We describe an algorithm for agent team formation that is inspired by the self-organizing behavior of ants, and describe simulation results for team formation amongst a lattice of networked sensors.
This paper will provide a description of multiple research efforts in the area of information fusion being conducted at the Fusion Technology Branch, Air Force Research Laboratory. It will describe a series of innovative approaches that augment traditional fusion algorithms with heuristic reasoning techniques to improve situational assessment and threat prediction. Major features of the presentation will cover Bayesian techniques, Knowledge Based approaches, Artificial Neural Systems (Neural Networks), Fuzzy Logic, and Genetic Algorithms.
Situation Awareness (SAW) is essential for commanders to conduct decision-making (DM) activities. Situation Analysis (SA) is defined as a process, the examination of a situation, its elements, and their relations, to provide and maintain a product, i.e., a state of SAW for the decision maker. Operational trends in warfare put the situation analysis process under pressure. This emphasizes the need for a real-time computer-based Situation Analysis Support System (SASS) to aid commanders in achieving the appropriate situation awareness, thereby supporting their response to actual or anticipated threats. Data fusion is clearly a key enabler for SA and a SASS. Since data fusion is used for SA in support of dynamic human decision-making, the exploration of the SA concepts and the design of data fusion techniques must take into account human factor aspects in order to ensure a cognitive fit of the fusion system with the decision-maker. Indeed, the tight integration of the human element with the SA technology is essential. Regarding these issues, this paper provides a description of CODSI (Command Decision Support Interface), an operational-like human-machine interface prototype for investigations in computer-based SA and command decision support. With CODSI, one objective was to apply recent developments in SA theory and information display technology to the problem of enhancing SAW quality. It thus provides a capability to adequately convey tactical information to command decision makers. It also supports the study of human-computer interactions for SA, and methodologies for SAW measurement.
A unified perceptual reasoning system framework for adaptive sensor fusion and situation assessment is presented. The concept and application of perception, the resultant system architecture, and its candidate renditions using knowledge-based systems and associative memory are discussed. The perceptual reasoning system is shown to be a natural governing mechanism for extracting, associating, and fusing information from multiple sources while adaptively controlling the Joint Directors of Laboratories (JDL) Fusion Model processes for optimum fusion system performance. The unified modular system construct is shown to provide a formal framework to accommodate various implementation alternatives. The application of this architectural concept is illustrated for a representative network-centric surveillance system architecture. A target identification system using Dempster-Shafer declaration-level fusion is used to demonstrate the benefits of the adaptive perceptual reasoning system and the iterative evidential reasoning method.
Multisensor Fusion Methodologies and Applications II
One issue that concerns the data fusion community is whether or not fusion of sensory information is beneficial. The benefit can be understood from a simple logical argument: if two types of sensors are measuring an object and only one source is available, a system restricted to the other source fails, whereas a fusion system does not. Such an argument holds for a person identification system fusing audio and video information: if only video is available, a system based on audio alone could not identify the person. We further refine the fusion benefit to assess the measure of fusion gain. A fusion gain system operator characteristic (FG-SOC) metric and a system reliability (SR) metric are used to define a fusion gain. A multi-source data fusion system performance model directly addresses both system performance and data sufficiency using system simulation and functional modeling methods. The FG-SOC approach models the relationships between sensor performance, revisit rate, and object density by extending current statistical tracking performance models to asynchronous sensing situations. The FG-SOC application establishes a method for the relative comparison of multiple sensor collection alternatives using a functional performance characterization and can be used to evaluate sensor fusion planning and control alternatives based on fusion system performance.
The classical Bayesian method of decision fusion applies a single detection strategy that maximizes some objective function (minimizes the cost of declaring an error). The distributed detection process follows a similar approach, except that here a set of Boolean fusion rules is used to account for each relevant association condition. Each fusion rule applies a coupled decision statistic that adjusts detection performance thresholds for each change detector based on the combined performance of the other detectors according to overall system performance expectations (CFAR setting), localized conditions, and which combination of change detection algorithms best contributes to the fusion process (the most appropriate fusion rule). However, applying these rules to a fixed CFAR shows that, based on environmental conditions, certain rules result in a much higher CFAR error than others (based on the Probability of False Alarm), even though the corresponding Probability of Detection gives high performance. Adapting the CFAR level to minimize this error for each fusion rule gives a more realistic performance criterion. The conditions under evaluation involve the use of three image change detection algorithms (two using SAR images, and one using electro-optical imagery). Each change detection algorithm provides a unique observation of the environment. The Adaptive Boolean Decision Fusion process provides a basis for fusing and interpreting these change events.
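The rule dependence of the false-alarm behavior can be made concrete under an independence assumption (ours, for illustration): the fused probability of false alarm of any Boolean rule follows by enumerating detector outcomes.

```python
from itertools import product

def fused_pfa(pfas, rule):
    """False-alarm probability of a Boolean fusion rule over independent
    detectors, by enumerating all 2**k outcome combinations."""
    total = 0.0
    for outcome in product([0, 1], repeat=len(pfas)):
        p = 1.0
        for bit, pfa in zip(outcome, pfas):
            p *= pfa if bit else (1.0 - pfa)  # joint probability of this outcome
        if rule(outcome):
            total += p
    return total

AND = all
OR = any
TWO_OF_THREE = lambda o: sum(o) >= 2
```

For three detectors each at a per-detector PFA of 0.1, AND fusion yields 0.001, OR fusion 0.271, and 2-of-3 fusion 0.028, which is why a fixed per-detector CFAR setting produces very different fused error rates from rule to rule.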
Improving change detection performance (probability of detection/false alarm rate) is an important goal of DARPA's Dynamic Database (DDB) program. We describe a novel approach based on fusing the outputs from two complementary image-based change detection algorithms. Both use historical imagery over the region of interest to construct normalcy models for detecting change. Image-level change detection (ILCD) segments the set of images into temporally co-varying pixel sets that are spatially distributed throughout the image, and uses spatial normalcy models constructed over these pixel sets to detect change in a new image. Object-level change detection (OLCD) segments each image into a set of spatially compact objects, and uses temporal normalcy models constructed over objects associated over time to detect change in the new image. Because of the orthogonal manner in which ILCD and OLCD operate in space-time, false alarms tend to decorrelate. We develop signal-level statistical models to predict the performance gain (output/input signal-to-noise ratio) of each algorithm individually, and combined using AND fusion. Experimental results using synthetic aperture radar (SAR) images are presented. Fusion gains ranging from slightly greater than unity in low clutter backgrounds (e.g., open areas) to more than 20 dB in complex backgrounds containing man-made objects such as vehicles and buildings have been achieved and are discussed.
Human perceptual performance was tested with images of nighttime outdoor scenes. The scenes were registered both with a dual-band (visual and near-infrared) image-intensified low-light CCD camera (DII) and with a thermal middle-wavelength-band (3-5 μm) infrared (IR) camera. Fused imagery was produced through a pyramid image merging scheme, in combination with different color mappings. For all (individual and fused) image modalities, small patches of the scenes, displaying a range of different objects and materials, were briefly presented to human observers. The sensitivity of human observers was tested for different recognition tasks. The results show that grayscale image fusion yields improved performance levels for most perceptual tasks investigated here. When an appropriate color mapping scheme is applied, the addition of color to grayscale fused imagery significantly increases observer sensitivity for a given condition and a certain task. However, inappropriate use of color significantly decreases observer performance compared to straightforward grayscale image fusion. This suggests that color mapping should adapt to the visual task and the conditions (scene content) at hand.
The Force Detection and Identification System (FDIS) provides an operationally capable near-real-time platform appropriate for the evaluation of state-of-the-art imagery exploitation components and architectural design principles on scalable platforms. FDIS's architectural features include: a highly modular component design allowing rapid component interchange, multiple intercomponent datapaths which support both fine- and coarse-grained parallelism, an infrastructure which supports heterogeneous computing on a range of high-performance computing platforms, and conceptual decoupling between image processing and non-image processing components while supporting multi-level evidence fusion. While none of these features is individually unique, when combined they represent a state-of-the-art imagery exploitation system. FDIS has demonstrated probability of detection and false alarm rates consistent with other SAR-based exploitation systems. FDIS, however, requires fewer computing resources, supports rapid insertion of new or changed components to support emerging technologies with an ease not encountered in legacy systems, and is smoothly scalable. In addition to exploiting these novel architectural features, FDIS includes a new multi-dimensional evidence fusion component: Force Estimation (FE). Previous exploitation systems have demonstrated the positive impact on probability of detection and false alarm rates obtained by clustering vehicle detections into groups. FE, however, as a fusion component extends evidence accrual beyond simple spatial characteristics. Based on a fast multipole algorithm, FE accrues probabilistic evidence on models of military unit compositions. Fused evidence includes vehicle classifications, cultural and terrain features, and electronic emission features. FE's algorithmic speed allows operation in near-real-time without requiring excessive computational resources.
FE has demonstrated improved force detection results over a wide range of operational conditions.
The accuracy with which an object can be localized is key to the success of a targeting system. Localization is generally achieved by a single sensor, most notably Synthetic Aperture Radar (SAR) or an Infra-Red (IR) device, supported by an Inertial Navigation System (INS) and/or a Global Positioning System (GPS). Future targeting systems are expected to contain (or to have access to data from) multiple sensors, thus enabling data fusion to be conducted and an improved estimate of target location to be deduced. This paper presents a sensor fusion testbed for fusing data from multiple sensors. Initially, a simple, optimal static fusion scheme is illustrated; then, focusing on air-to-ground targeting applications, example results are given for single- and multiple-platform sorties involving a variety of sensor combinations. Consideration is given to the most appropriate sensor mix across single and multiple aircraft, as well as architectural implementation issues and effects. The sensitivity of the fusion method to key parameters is then discussed before some conclusions are drawn about the behavior, implications, and benefit of this approach to improving targeting.
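One standard candidate for a simple optimal static fusion scheme is inverse-variance weighting of independent, unbiased location estimates; whether the testbed uses exactly this scheme is not stated, so treat the following as a sketch.

```python
import numpy as np

def fuse_estimates(estimates, variances):
    """Fuse independent, unbiased estimates of the same quantity by
    inverse-variance weighting; returns the fused value and its variance."""
    w = 1.0 / np.asarray(variances, dtype=float)
    fused = (w * np.asarray(estimates, dtype=float)).sum() / w.sum()
    return fused, 1.0 / w.sum()

# two sensors reporting 10.0 and 12.0 with equal variance 1.0
value, var = fuse_estimates([10.0, 12.0], [1.0, 1.0])
```

The fused variance 1/Σ(1/σ²) is never larger than the smallest input variance, which is the quantitative sense in which adding a sensor improves the target location estimate.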
Area surveillance for guarding and intruder detection with a combined camera-radar sensor is considered. This specific sensor combination is attractive since complementary information is provided by the respective elements. Thus, a more complete description of objects of interest can be obtained. Several strategies to fuse the data are discussed. Results obtained with live experiments are presented. When compared to the camera alone, a significant reduction in the number of false tracks is achieved.
Fusion of radar and EO sensors is investigated for the purpose of surveillance in littoral waters. All sensors are considered to be co-located with respect to the distance, typically 1 to 10 km, of the area under surveillance. The sensor suite is a coherent polarimetric radar in combination with a set of cameras sensitive to visible light, near infrared, mid infrared, and far infrared. Although co-located, the sensors are dissimilar and not necessarily synchronized. A critical aspect for beneficial fusion in this application is correct association of information from these sensors. Various architectures are considered, and it is argued that a fuse-while-track algorithm is the most suitable in this case. We discuss how such an algorithm is designed and applied. To improve association reliability, non-kinematic features of both sensor types are also considered. In particular, we investigate which features from contacts measured with the polarimetric radar and the EO sensors are correlated. These features and their correlations are incorporated in the tracking process. Preliminary results are shown.
The Derived Radar and Infra-red Voxel Energy operator (DRIVE) has been proposed for the fusion of IR and radar data as a pre-detection nonlinear operation of the soft-decision type, with the radar and IR data representing the range and cross-range information, respectively. Most IR and radar data fusion operators fuse the data after feature extraction, whereas the DRIVE operator fuses data prior to information extraction. The advantage is that no information is considered redundant prior to the fusion. An experiment has been made to verify the functionality of the DRIVE operator. Data are simultaneously collected with the DDRE (Danish Defense Research Establishment) MRS high range resolution radar and a commercially available IR camera. The radar cross section as well as the temperature of the target are fully controlled, and a dataset (DRIVE I) is compiled, covering a range of signal-to-noise ratios. The DRIVE operator is applied to the dataset with focus on the region where neither radar data nor IR data alone gives an extractable signal. Statistical analyses of the results are applied to test the hypothesis of improved target detection at marginal signal-to-noise ratios. In general the DRIVE operator is a copula that should place the probability mass where it benefits fusion. The applied copula is static and obtained from the product of the voltage received from each range gate and each pixel (i.e., power), normalized by taking the square root. An adaptive optimization procedure, aimed at identifying the best copula under predefined conditions, is considered.
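The static copula described in the abstract, the product of the radar range-gate power and the IR pixel power normalized by a square root, reduces to a geometric mean; a sketch with illustrative inputs:

```python
import numpy as np

def drive_operator(radar_power, ir_power):
    """Static product copula in the spirit of DRIVE: the geometric mean
    of the radar and IR powers for each voxel (illustrative sketch)."""
    return np.sqrt(np.asarray(radar_power, dtype=float) *
                   np.asarray(ir_power, dtype=float))
```

With illustrative powers of 4 and 9, neither of which might clear a single-sensor detection threshold on its own, the fused statistic is 6, which a suitably set fused threshold can detect; this marginal-SNR regime is the one the DRIVE I analysis focuses on.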
The medical domain makes intensive use of information fusion. In particular, gastroenterology is a discipline where physicians can choose among several imaging modalities that offer complementary advantages. Among all existing systems, videoendoscopy (based on a CCD sensor) and echoendoscopy (based on an ultrasound sensor) are the most efficient. The use of each system corresponds to a given step in the physician's elaboration of a diagnosis. Nowadays, several works aim to achieve automatic interpretation of videoendoscopic sequences. These systems can quantify color and superficial textures of the digestive tube. Unfortunately, the relief information, which is important for the diagnosis, is very difficult to retrieve. On the other hand, some studies have shown that 3D information can be easily quantified using echoendoscopy image sequences. That is why the idea of combining this information, acquired from two very different points of view, can be considered a real challenge for the medical image fusion topic. In this paper, after a review of current work on the numerical exploitation of videoendoscopy and echoendoscopy, the following question will be discussed: how can the complementary aspects of the different systems ease the automatic exploitation of videoendoscopy? We will then evaluate the feasibility of a realistic 3D reconstruction based on information given by both echoendoscopy (relief) and videoendoscopy (texture). An enumeration of potential applications of such a fusion system will follow. Further discussions and perspectives will conclude this first study.
In this paper, the Mean-Field Bayesian Data Reduction Algorithm is developed to adaptively train on data containing missing values. In the basic data model for this algorithm, each feature vector of a given class contains a class-labeling feature. Thus, the methods developed here are used to demonstrate performance for problems in which it is desired to adapt the existing training data with data containing missing values, such as the class-labeling feature. Given that, the Mean-Field Bayesian Data Reduction Algorithm labels the adapted data while simultaneously determining those features that provide the best classification performance. That is, performance is improved by reducing the data to mitigate the effects of the curse of dimensionality. Further, to demonstrate performance, the algorithm is compared to a classifier that does not adapt and bases its decisions only on the prior training data, and also to the optimal clairvoyant classifier.
A system is developed for tracking moving objects through natural scenery. A technique is presented for performing change detection on imagery to determine the difference between two images or a sequence of images. From there, an algorithm is presented to detect the appearance of new objects and/or the disappearance of objects. Then the application of a Variable Structure Interacting Multiple Model tracking filter is presented. The method of change detection is based on the concept of image subspace projection. A set of basis image maps is formed which, when combined with a mixing matrix, can recreate the original image. Subsequent images are then projected into this basis. The projected image is then subtracted from the original image to perform the change detection. Spatial filtering is applied to increase the contrast between the change and the background, and an adaptive filter is then applied to pass the locations of changes in the images to the tracking filter. Tracking is performed through the use of multiple motion models. The filter's motion models are adaptively added or deleted as required by the moving object's dynamics. The moving object's state is estimated through extended Kalman filtering.
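The subspace-projection change detection can be sketched in a few lines: project the new frame onto the span of the basis image maps (a least-squares estimate of the mixing coefficients) and subtract the projection, so the residual highlights what the basis cannot explain. The basis choice and least-squares solver here are illustrative assumptions:

```python
import numpy as np

def change_map(basis_images, new_image):
    """Project the new frame onto the span of the basis images, then subtract
    the projection; the residual marks changes the basis cannot represent."""
    A = np.stack([b.ravel() for b in basis_images], axis=1)  # columns = basis maps
    y = new_image.ravel()
    coeff, *_ = np.linalg.lstsq(A, y, rcond=None)            # mixing coefficients
    return (y - A @ coeff).reshape(new_image.shape)          # residual = change

rng = np.random.default_rng(0)
background = rng.random((8, 8))
frame = background.copy()
frame[2, 3] += 5.0                                           # a new object appears
residual = change_map([background, np.ones((8, 8))], frame)  # peaks at (2, 3)
```

Spatial filtering and thresholding of `residual` would then produce the change locations fed to the tracking filter.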
Measurements of a reduced Mueller matrix in backscattering from diffusive, dielectric targets are reported as a function of the angle of incidence. It was found that the off-diagonal elements depend greatly on the angle of incidence, increasing to a maximum near grazing incidence. A theoretical model that accounts for the non-trivial behavior in the off-diagonal elements of the Mueller matrix is presented. We comment on the applicability of this model to the determination of the shape of the targets.
The computer-based recognition of facial expressions has been an active area of research for quite a long time. The ultimate goal is to realize intelligent and transparent communications between human beings and machines. Neural network (NN) based recognition methods have been found particularly promising, since an NN is capable of implementing a mapping from the feature space of face images to the facial expression space. However, finding a proper network size has always been a frustrating and time-consuming experience for NN developers. In this paper, we propose to use constructive one-hidden-layer feedforward neural networks (OHL-FNNs) to overcome this problem. The constructive OHL-FNN obtains in a systematic way a proper network size as required by the complexity of the problem being considered. Furthermore, the computational cost involved in network training can be considerably reduced when compared to standard back-propagation (BP) based FNNs. In our proposed technique, the 2-dimensional discrete cosine transform (2-D DCT) is applied over the entire difference face image to extract relevant features for recognition. The lower-frequency 2-D DCT coefficients obtained are then used to train a constructive OHL-FNN. An input-side pruning technique previously proposed by the authors is also incorporated into the constructive learning process to reduce the network size without sacrificing the performance of the resulting network. The proposed technique is applied to a database consisting of images of 60 men, each having 5 facial expression images (neutral, smile, anger, sadness, and surprise).
Images of 40 men are used for network training, and the remaining images are used for generalization and testing. Confusion matrices calculated in both network training and testing for 4 facial expressions (smile, anger, sadness, and surprise) are used to evaluate the performance of the trained network. Extensive simulations show that, compared with the BP-based method, the proposed technique constructs OHL-FNNs with a significantly smaller number of hidden units and weights while simultaneously yielding improved recognition performance.
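The feature-extraction front end described above (a 2-D DCT of the difference face image, keeping only low-frequency coefficients) can be sketched as follows; the image size, the 8x8 coefficient block, and the hand-rolled unnormalized DCT-II are illustrative assumptions:

```python
import numpy as np

def dct2(img):
    """Unnormalized 2-D DCT-II (a minimal stand-in for a library DCT routine)."""
    n0, n1 = img.shape
    m0 = np.cos(np.pi * np.outer(np.arange(n0), np.arange(n0) + 0.5) / n0)
    m1 = np.cos(np.pi * np.outer(np.arange(n1), np.arange(n1) + 0.5) / n1)
    return m0 @ img @ m1.T          # DCT along rows, then along columns

def dct_features(diff_face, keep=8):
    """Keep only the low-frequency (top-left) coefficients as network inputs."""
    return dct2(diff_face)[:keep, :keep].ravel()

neutral = np.ones((32, 32))
smile = neutral + 0.1 * np.sin(np.linspace(0, np.pi, 32))[None, :]  # toy expression
features = dct_features(smile - neutral)    # 64 inputs for the OHL-FNN
```

The difference image removes the subject-specific appearance, so the low-frequency coefficients mostly encode the expression-induced deformation.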
A standard prerequisite for object recognition in image processing is the computation of features. The features are subsequently employed by a classifier to assign objects to classes. As feature candidates, geometrical invariants are often used to classify objects in binary images. Objects in grey-scale images, however, have an additional contrast property. In order to correctly classify objects that are geometrically similar but differ in contrast into the same class, features that are invariant to both geometry and contrast are required. In this paper the concept of physical similarity is used to compute geometrically and contrast invariant features from objects in grey-scale images. The images are represented by a two-dimensional intensity function. The introduction of a third variable representing the grey scale leads to a three-dimensional image function. Furthermore, physical dimensions are assigned to the intensity function consistently and lead to dimensional higher-order moments. By the use of dimensional analysis, dimensionless moments can be computed which are invariant against geometric transformations and changes in contrast. The three-dimensional intensity function lies in the Hilbert space of square-integrable functions and can thus be expanded into a general Fourier series. As shown in previous work, it is therefore possible to recompute objects from their features. This back transform from feature space to object space can be used to examine and visualize the class boundaries through the construction of a feature editor for image features. By this means, the use of dimensionless moments for geometrically and contrast invariant classification will be investigated.
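The flavor of a dimensionless moment can be illustrated with a textbook construction: normalize a central moment by powers of the total intensity and of a characteristic length (the radius of gyration) so that both the geometric scale and the contrast factor cancel. This is a generic sketch of the idea, not the paper's dimensional-analysis derivation:

```python
import numpy as np

def dimensionless_moment(img, p, q):
    """Central moment mu_pq normalized by total intensity and by the radius of
    gyration, so both geometric scale and contrast (intensity) scale cancel."""
    y, x = np.mgrid[: img.shape[0], : img.shape[1]].astype(float)
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00      # centroid
    mu = lambda a, b: ((x - xc) ** a * (y - yc) ** b * img).sum()
    r = np.sqrt((mu(2, 0) + mu(0, 2)) / m00)                   # radius of gyration
    return mu(p, q) / (m00 * r ** (p + q))                     # dimensionless

img = np.zeros((16, 16)); img[4:12, 4:12] = 1.0
a = dimensionless_moment(img, 2, 0)
b = dimensionless_moment(3.0 * img, 2, 0)    # contrast change: same feature value
```

Multiplying the image by a contrast factor scales `mu` and `m00` equally and leaves `r` unchanged, so the feature is identical for both versions of the object.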
In this paper we present two methods for increasing the spatial resolution of images using image sequences in which all frames contain the same static scene with unknown shifts. Because of the subpixel shifts, aliased frequencies appear slightly differently in each image, making it possible to reconstruct frequencies above the Nyquist frequency and thus improve the resolution. To this end, we estimate the parameters of the affine transform relating the images to each other from the sequence. To show the applicability of the algorithms, many experiments have been carried out, mainly using image sequences captured by a TV camera rather than only synthetic image sequences. The results from one TV-camera sequence are presented in this report. Measurements of the PSF and MTF have been carried out, and the results show that we can increase the spatial resolution by almost a factor of two. This technique can be used for target identification/recognition as well as for visualization. The second method (interpolation) can be implemented in real time.
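The core idea, that subpixel-shifted undersampled views jointly determine a higher-rate signal, is easiest to see in one dimension with a known half-sample shift; real imagery requires estimating the affine shifts first, and this toy interleaving is an illustrative simplification:

```python
import numpy as np

def interleave_frames(frame_a, frame_b):
    """Two undersampled views of the same scene, offset by exactly half a
    sample, interleave into a signal at twice the sampling rate."""
    out = np.empty(frame_a.size * 2)
    out[0::2] = frame_a            # samples at integer positions
    out[1::2] = frame_b            # samples at half-integer positions
    return out

fine = np.sin(2 * np.pi * 3 * np.arange(32) / 32)   # ground-truth fine signal
low_a, low_b = fine[0::2], fine[1::2]               # two shifted coarse frames
recovered = interleave_frames(low_a, low_b)         # equals the fine signal
```

With arbitrary (estimated) shifts the combination step becomes an interpolation or regularized inversion onto the fine grid rather than a pure interleave, which is what distinguishes the two methods compared in the paper.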
The performance of the Metropolis Monte Carlo (MMC) technique is compared to that of an iterative deconvolution method. In the MMC approach to deconvolution, two Monte Carlo procedures (MCPs) are run at the same time: in one, the blurred data is used as a distribution function for the selection of pixels, and the second MCP decides whether or not to place a grain in the estimate of the true input. We show that this approach improves the reconstruction process. The blurred data is obtained by convolving a 24-point input signal that has three peaks with a 21-point-wide Gaussian impulse response function (IRF). The mean squared error (MSE) is used to compare the two techniques; it is calculated by comparing the reconstructed input signal with the true input signal. The signal-to-noise ratios (SNRs) studied range from 10 to 150. The type of noise used in this study is Gaussian-distributed additive noise.
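The evaluation setup described (a three-peak 24-point input convolved with a 21-point Gaussian IRF, scored by MSE against the truth) can be reproduced directly; the Gaussian width `sigma` and the peak positions/heights are assumptions, since the abstract states only the signal lengths:

```python
import numpy as np

def gaussian_irf(width=21, sigma=3.0):
    """21-point Gaussian impulse response (sigma is an illustrative choice)."""
    t = np.arange(width) - width // 2
    irf = np.exp(-t ** 2 / (2 * sigma ** 2))
    return irf / irf.sum()                    # unit-area blur kernel

def mse(estimate, truth):
    """Mean squared error between a reconstruction and the true input."""
    return float(np.mean((estimate - truth) ** 2))

truth = np.zeros(24); truth[[5, 12, 18]] = [1.0, 0.7, 0.4]   # three-peak input
blurred = np.convolve(truth, gaussian_irf(), mode="same")     # data to deconvolve
baseline = mse(blurred, truth)    # any useful reconstruction should beat this
```

Gaussian noise at the studied SNRs would be added to `blurred` before either deconvolution method is applied.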
There is a need to replace hazardous radioluminescent light sources with a means of illumination that is environmentally friendly. This paper describes an electronic source developed as a potential candidate to replace low-intensity tritium in a military system. It employs an LED for illumination and a 3-volt coin cell battery as a power source. This new light source is electronically invisible, requires minimal maintenance, and provides the lowest practical illumination to preclude detection by optical means. The low intensity requires that the LED be driven at DC current levels, resulting in poor luminous efficiency. Therefore, in an effort to maximize battery life, the LED is pulsed into a more optically efficient mode of operation. However, conventional pulsing techniques are not employed because of concerns that the electronics could be identified by conspicuous power spectral density (PSD) components in the electromagnetic spectrum generated by a pulsed LED. Therefore, flicker noise concepts have been employed to drive the LED efficiently while generating a virtually undetectable spectral signature. Although ideally the pulse durations, magnitudes, and spacings should all be random, a significant reduction in conspicuous PSD components can be achieved even under practical constraints: the dominant components of the power spectrum are significantly reduced using fixed pulse durations and magnitudes while varying only the pulse spacing. The mean duty cycle is set to provide the same effective illumination as DC operation while generating a PSD normally associated with natural phenomena.
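The drive scheme (fixed pulse duration and magnitude, randomized spacing, mean duty cycle matched to DC operation) can be sketched numerically; the slot granularity and the Bernoulli spacing model are illustrative assumptions, not the paper's circuit:

```python
import numpy as np

rng = np.random.default_rng(1)

def pulse_train(n_slots=4096, duty=0.1):
    """Fixed-magnitude, fixed-duration pulses with randomized spacing: each
    time slot independently fires with probability `duty`, so the mean duty
    cycle (and hence mean illumination) matches DC operation at that level."""
    return (rng.random(n_slots) < duty).astype(float)

drive = pulse_train()
mean_level = drive.mean()    # ~= duty: same effective illumination as DC
# Spreading the pulse energy: no periodic pulsing, so no dominant spectral line.
spectrum = np.abs(np.fft.rfft(drive - mean_level))
```

A strictly periodic train of the same duty cycle would concentrate its energy in discrete harmonics; randomizing only the spacing spreads that energy across the band while leaving the mean light output unchanged.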
In previous work, a novel approach was described that used automatic target detection together with compression techniques to achieve intelligent compression by exploiting knowledge of the image content. In this paper an extension to this work is presented in which a set of standard feature detectors, such as HV-quadtrees, approximate entropy, and phase congruency, are used as target discriminators. These detectors all attempt to find potential areas of interest within an image but will inevitably differ slightly in their estimates. A probabilistic (Bayesian belief) network is then used to fuse this information into a single hypothesis of interesting areas within an image. A wavelet-based decomposition can then be applied to the image in which selective destruction of wavelet coefficients is performed outside the cued areas of interest (in effect concentrating the wavelet information into the required areas) prior to encoding with a version of the progressive SPIHT encoder. One difficulty with this approach is that discarding large quantities of wavelet coefficients can lead to abrupt changes at a mask boundary, resulting in (visually) undesirable effects in the reconstructed image. An improvement is to modify the fused feature image using morphology in order to arrive at a multi-level fuzzy mask, which can then be used to gradually reduce the significance of coefficients as the distance from the mask increases. Results will illustrate how this approach can be used for the detection and compression of airborne reconnaissance imagery.
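The multi-level fuzzy mask idea, full weight inside the fused area of interest with a gradual falloff outside, can be sketched with a distance-based decay; the exponential profile and `falloff` constant are illustrative assumptions (the paper builds the mask with morphology):

```python
import numpy as np

def fuzzy_mask(interest, falloff=3.0):
    """Weight 1 inside the area of interest, decaying with distance outside,
    so coefficients are attenuated gradually rather than cut at the boundary.
    Distances are computed by brute force for brevity."""
    ys, xs = np.nonzero(interest)
    pts = np.stack([ys, xs], axis=1).astype(float)
    gy, gx = np.mgrid[: interest.shape[0], : interest.shape[1]]
    grid = np.stack([gy.ravel(), gx.ravel()], axis=1).astype(float)
    d = np.sqrt(((grid[:, None, :] - pts[None, :, :]) ** 2).sum(-1)).min(1)
    return np.exp(-d.reshape(interest.shape) / falloff)

interest = np.zeros((16, 16), dtype=bool); interest[6:10, 6:10] = True
mask = fuzzy_mask(interest)
# Wavelet coefficients outside the cued area shrink smoothly: coeffs * mask
```

Multiplying the wavelet coefficients by `mask` before SPIHT encoding removes the hard boundary that caused the visible reconstruction artifacts.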
A new algorithm is developed to implement automatic eye tracking without a prior reference model or prior knowledge of the size, orientation, shape, color, or other properties of the human eyes. The algorithm is based on the analysis of eye features such as eye contrast and eye blinking. It consists of two stages. In the initialization stage the algorithm locates the approximate head location from two consecutive video frames, and the size of three equally sized blocks is determined; these are used to detect the left and right eyes. The two eyes are symmetric and blink simultaneously at all times. The algorithm extracts the similarity features of the two eyes and the dissimilarity of the eyes from the region between them, represented by the middle block. The measures are implemented through an analysis of correlation and horizontal contrast properties. The algorithm is able to detect the status of blinking eyes and of eyes closed for a period of time in a video frame sequence. This is a dynamic automatic eye tracking system that can adapt to environmental changes and reinitialize if the track is lost. Experiments with this method show satisfactory results in terms of accuracy and reasonable time complexity, and show that the method can be applied to eye tracking regardless of skin color, head orientation, head size, background changes, or other constraints. The experiments are conducted on a moving head using a 20 frames/second video sequence.
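The similarity measure between the two eye blocks, and their dissimilarity from the middle block, can be expressed as a normalized cross-correlation; this sketch (with synthetic blocks) shows only that correlation term, not the horizontal contrast analysis:

```python
import numpy as np

def ncc(block_a, block_b):
    """Normalized cross-correlation between two same-size image blocks;
    the detector relies on the two eye blocks correlating strongly with
    each other and weakly with the between-eyes block."""
    a = block_a - block_a.mean()
    b = block_b - block_b.mean()
    return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))

rng = np.random.default_rng(2)
eye = rng.random((8, 8))                          # shared eye appearance
left = eye + 0.05 * rng.random((8, 8))            # left-eye block (noisy copy)
right = eye + 0.05 * rng.random((8, 8))           # right-eye block (noisy copy)
middle = rng.random((8, 8))                       # region between the eyes
```

Because both eyes blink together, the left/right correlation stays high across the blink, while neither block correlates with the middle region, giving a cue that needs no prior eye model.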
This paper describes the design and development of a generic Monitor, Recorder and Simulator Unit (MRSU) that can be used during the development, trials, and operational phases of a radar signal processing unit (SPU). The Monitor permits internal radar parameters and variables from within the processing chain to be retrieved, viewed, and changed in real time. The Recorder can be used to record and manipulate real-time data, to play back (real and synthetic) data into the radar SPU, and to record the outputs. The Simulator can be used in conjunction with a radar reference model, data from the Recorder, or any other pre-recorded data to verify the accuracy of the SPU implementation. The design of the MRSU adheres to three constraints: (a) it shall be re-usable across other radar systems (i.e. generic); (b) it shall employ only commercial-off-the-shelf (COTS) hardware and software; (c) it shall be an integrated unit. Traditional methods fail to exploit the growth opportunities and flexibility afforded by COTS and reuse. It is demonstrated that focusing on these issues from the outset results in improved productivity and quality, reduced timescales, and economic benefit.
A technique is presented for generating a modified cross-spectral density matrix (CSDM) that is sensitive to fluctuations in the frequency-bin amplitudes and phases of sinusoidal signals (plus noise) and noise. The beamformed output of such a fluctuation-based CSDM provides signal-to-noise ratio (SNR) gains in excess of those achieved using a conventional CSDM. The additional gains are inversely proportional to the beamformed ratio of the signal-bin fluctuations to the noise-bin fluctuations. An unalerted auto-detection capability is another advantage of this new technique.
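The abstract does not define the modified matrix; one plausible reading is a CSDM formed from snapshot deviations about the mean rather than from the raw snapshots, so it responds to bin fluctuations rather than mean power. This sketch is that interpretation only, and the paper's exact construction may differ:

```python
import numpy as np

def csdm(snapshots):
    """Conventional CSDM from per-sensor frequency-bin snapshots
    (array shape: sensors x snapshots)."""
    return snapshots @ snapshots.conj().T / snapshots.shape[1]

def fluctuation_csdm(snapshots):
    """Assumed fluctuation-based variant: the CSDM of snapshot deviations
    about the mean, sensitive to amplitude/phase fluctuations per bin."""
    dev = snapshots - snapshots.mean(axis=1, keepdims=True)
    return dev @ dev.conj().T / snapshots.shape[1]

rng = np.random.default_rng(3)
snaps = rng.normal(size=(4, 100)) + 1j * rng.normal(size=(4, 100))
R = csdm(snaps)                 # both matrices are Hermitian by construction
F = fluctuation_csdm(snaps)
```

Either matrix would then be passed to the same beamformer; the claimed gain comes from the signal and noise bins fluctuating by different amounts.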
The aim of this investigation was to develop data fusion algorithms for aerial and satellite pictures taken in different seasons from differing viewpoints, or formed by different kinds of sensors (visible, IR, SAR). This task cannot be solved using traditional correlation-based approaches, so we chose structural juxtaposition of the stable characteristic details of the pictures as the general technique for image matching and fusion. Structural matching has usually been applied in expert systems, where rather reliable results were based on target-specific algorithms. In contrast to such classifiers, our algorithm deals with aerial and satellite photographs of arbitrary content, for which application-specific algorithms cannot be used. To handle arbitrary images we chose a structural description alphabet based on simple contour components: arcs, angles, segments of straight lines, and line branchings. This alphabet is applicable to arbitrary images, and its elements, due to their simplicity, are stable under different image transformations and distortions. To distinguish between similar simple elements in the huge multitude of image contours, we applied hierarchical contour descriptions: we grouped the contour elements belonging to uninterrupted lines or to separate image regions. Different types of structural matching were applied, based on simulated annealing and on a restricted examination of all hypotheses. The matching results were reliable for both multi-season and multi-sensor images.
Goal lattices are a method for ordering the goals of a system and associating with each goal the value of performing that goal in terms of how much it contributes to the accomplishment of the topmost goal of a system. This paper presents a progress report on the development of a web-based implementation of the George Mason University Goal Lattice Engine (GMUGLE). GMUGLE allows a user to interactively create goal lattices, add/delete goals, and specify their ordering relations through a web-based interface. The database portion automatically computes the GLB and LUB of pairs of goals which have been entered to form them into a lattice. Yet to be implemented is the code to input goal values, automatically apportion the values among included goals, and accrue value among the included goals.
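The lattice operations that the database portion computes, the GLB and LUB of pairs of goals, can be sketched with a brute-force routine over a finite partial order; the `up` upward-closure representation and the tiny example lattice are illustrative assumptions, not GMUGLE's schema:

```python
def glb_lub(up, a, b):
    """GLB and LUB of goals a, b in a finite lattice; up[g] is the upward
    closure of g (every goal above-or-equal to g, including g itself)."""
    upper = up[a] & up[b]                                   # common upper bounds
    lub = max(upper, key=lambda u: len(up[u] & upper))      # the lowest of them
    lower = {g for g in up if a in up[g] and b in up[g]}    # common lower bounds
    glb = max(lower, key=lambda l: sum(l in up[g] for g in lower))  # the highest
    return glb, lub

# Tiny goal lattice: 'mission' on top, 'detect' and 'track' below, 'idle' bottom.
up = {
    "mission": {"mission"},
    "detect": {"detect", "mission"},
    "track": {"track", "mission"},
    "idle": {"idle", "detect", "track", "mission"},
}
```

Value apportionment, the part reported as unimplemented, would then propagate the topmost goal's value down through exactly these included-goal relations.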
Sensor management (SM) concerns how best to manage, coordinate, and organize the use of sensing resources in a manner that synergistically improves the process of data fusion. Based on contextual information, SM develops options for collecting further information, allocates and directs the sensors towards the achievement of the mission goals, and/or tunes their parameters for real-time improvement of the effectiveness of the sensing process. Conscious of the important role that SM has to play in modern data fusion systems, we are currently studying advanced SM concepts that would help increase the survivability of the current Halifax and Iroquois class ships, as well as their possible future upgrades. For this purpose, a hierarchical scheme has been proposed for data fusion and resource management adaptation, based on control theory, within the process refinement paradigm of the JDL data fusion model, and taking into account the multi-agent model put forward by the SASS Group for the situation analysis process. The novelty of this work lies in the unified framework that has been defined for tackling the adaptation of both the fusion process and sensor/weapon management.