Previous research on geolocation based on the time difference of arrival (TDOA) technique focused mainly on solving the nonlinear equations that relate the TDOA measurements to the unknown source location. That work, however, considered a rather simplistic scenario: a single emitter with no possibility of either missed detections or false measurements. In real-world scenarios, one must resolve the important issue of measurement-origin uncertainty before applying these techniques. This paper proposes an algorithm for the geolocation and tracking of multiple emitters in practical scenarios. The focus is on solving the all-important data association problem, i.e., deciding from which target, if any, a measurement originated. A previous solution for data association based on the assignment formulation for passive measurement tracking systems relied on solving two assignment problems: an S-dimensional (or SD, where S ≥ 3) assignment for association across sensors, and a 2D assignment for measurement-to-track association. Here, an (S + 1)D assignment algorithm, which performs the data association in one step, is introduced. As shown later, the (S + 1)D assignment formulation reduces the computational cost significantly. Incorporation of correlated measurements (which is the case with TDOA measurements) into the SD framework, which typically assumes uncorrelated measurements, is also discussed. The nonlinear TDOA equations are posed as an optimization problem and solved using SolvOpt, a nonlinear optimization solver. The interacting multiple model (IMM) estimator is used in conjunction with the unscented Kalman filter (UKF) to track the geolocated emitters.
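To make the formulation concrete, here is a minimal Python sketch (illustrative only, not the paper's implementation) that poses TDOA geolocation as nonlinear least squares over the range-difference residuals; scipy.optimize.least_squares stands in for SolvOpt, and the sensor geometry, emitter location, and noise level are invented.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical sensor positions (m); sensor 0 is the TDOA reference.
sensors = np.array([[0.0, 0.0], [10000.0, 0.0], [0.0, 10000.0], [10000.0, 10000.0]])
true_emitter = np.array([3000.0, 7000.0])

def range_differences(p):
    """Range differences (m) w.r.t. sensor 0, i.e., TDOA times propagation speed."""
    r = np.linalg.norm(sensors - p, axis=1)
    return r[1:] - r[0]

rng = np.random.default_rng(0)
measured = range_differences(true_emitter) + rng.normal(0.0, 3.0, size=3)  # ~3 m noise

# Solve the nonlinear TDOA equations as a least-squares problem.
sol = least_squares(lambda p: range_differences(p) - measured,
                    x0=np.array([5000.0, 5000.0]))
print("estimated emitter position (m):", sol.x)
```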
Detecting and tracking a moving ground target in radar imagery is a challenge intensified by clutter, sensor anomalies, and the substantial signature variations that occur when a target's aspect angle changes rapidly. In its ground moving target indication (GMTI) mode, a radar produces range-Doppler images that contain both kinematic reports and shape features. A high range resolution (HRR) signature, when formed as the Fourier transform of the range-Doppler image across its Doppler dimension, becomes a derived measurement and an alternative source of identity information. Although HRR signatures can vary enormously with even small changes in target aspect, such signatures were vital for associating kinematic reports to tracks in this work. This development started with video phase history (VPH) data recorded from a live experiment involving a GMTI radar viewing a single moving target. Since the target could appear anywhere in the range-Doppler image derived from the VPH data, the goal was to localize it in a small range-Doppler "chip" that could be extracted and used in subsequent research. Although the clutter in any given VPH frame generally caused false chips to be formed in the full range-Doppler image, at most one chip contained the target. The most effective approach for creating any chip is to ensure that the object is present in the return from each pulse that contributes to that chip, and to correct any phase distortions arising from range gate changes. Processing constraints dictated that the algorithm for target chip extraction be coded in MATLAB with a time budget of a few seconds per frame. Furthermore, templates and shape models to describe the target were prohibited. This paper describes the nonlinear filtering approach used to reason over multiple frames of VPH data. The approach automatically detects and segments potential targets in the range-Doppler imagery, and then extracts kinematic and shape features that are tracked over multiple data frames to ensure that the real target is in the declared chip. The algorithm described was used successfully to process over 84,000 frames of real data without human assistance.
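The derived HRR measurement described above can be illustrated in a few lines; this is a hedged numpy sketch with placeholder data (the paper's actual processing was coded in MATLAB):

```python
import numpy as np

# Placeholder complex range-Doppler image: rows = range bins, cols = Doppler bins.
rng = np.random.default_rng(1)
rd_image = rng.normal(size=(128, 64)) + 1j * rng.normal(size=(128, 64))

# Derived HRR measurement: Fourier transform across the Doppler dimension,
# giving one transformed profile per range bin.
hrr = np.fft.fft(rd_image, axis=1)
profile = np.abs(hrr[:, 0])      # e.g., the zeroth column as a range profile
print(profile.shape)             # (128,) -- one magnitude value per range bin
```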
Surveillance tracking is rapidly becoming an important application for GMTI radars. Surveillance tracking differs from precision tracking primarily in the scope of the problem considered: where precision tracking focuses on the highly accurate location of a small number of targets, surveillance tracking is more interested in understanding the general location of large numbers of targets.

Several challenges arise as one attempts to directly apply techniques from precision tracking applications to the surveillance realm. In the surveillance problem using GMTI radars, the revisit rate is typically lower because a larger area must be covered. As a result, in all but the most benign environments, it is difficult to generate estimates of individual targets. This challenge is further compounded by poor sensor performance, which results in large uncertainty in target positions and ambiguity from closely spaced targets. Given these issues, new techniques are required for addressing the surveillance tracking problem.

Our approach is to treat the surveillance problem in a slightly different fashion. Rather than attempting to track each of the individual targets in the surveillance region, we will focus on the bigger picture and track the groups of targets. This generalization means that measurements are not individual radar returns from a single target but rather clustered groupings of detections, each representing a single group detection with both a location and a group size. In this way, we are able to provide a true group tracking solution rather than attempting to cluster the tracks of individual targets. In order to perform this task it is necessary to cluster detections of targets into group measurements, estimate the size of the group, and provide an estimate of the location of the group. This paper will describe alternative approaches for clustering of detections and examine their performance in the overall group tracking approach. Additionally, we will describe a technique for estimating the size of the group using knowledge of sensor performance characteristics and the number of detections that are clustered together. Finally, we will describe a method for generating a reasonable estimate of the location of the group. We conclude the paper with an example that examines the overall system performance on a representative problem.
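A minimal sketch of the group-measurement idea, under stated assumptions: detections are clustered with single-linkage at an invented gate distance, the group location is the cluster centroid, and the group size is estimated as n/Pd, since the number of detections from a group of N targets each detected with probability Pd has expectation N·Pd. This is illustrative only, not the paper's clustering approaches.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

P_D = 0.8          # assumed single-target detection probability
GATE = 200.0       # assumed clustering distance (m)

def group_measurements(detections):
    """Cluster detections into group measurements; return (centroid, size estimate)."""
    labels = fcluster(linkage(detections, method="single"), t=GATE, criterion="distance")
    groups = []
    for k in np.unique(labels):
        members = detections[labels == k]
        centroid = members.mean(axis=0)     # group location estimate
        size_est = len(members) / P_D       # expected-value group-size estimate
        groups.append((centroid, size_est))
    return groups

dets = np.array([[0.0, 0.0], [50.0, 30.0], [90.0, -20.0], [5000.0, 5000.0], [5040.0, 4980.0]])
for c, n in group_measurements(dets):
    print("centroid", c, "estimated group size", round(n, 1))
```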
The combinatorial optimization problem of multidimensional assignment has been treated with renewed interest because of its extensive application in target tracking, cooperative control, robotics, and image processing. In this work, we concentrate in particular on data association in multisensor-multitarget tracking algorithms, in which solving the multidimensional assignment problem is an essential step. Current algorithms generate good suboptimal solutions to these problems in pseudo-polynomial time. However, in dense scenarios these methods can become inefficient because of the resulting dense candidate-association tree. Also, in order to generate the top m (or ranked) solutions, these algorithms need to solve a number of optimization problems, which increases the computational complexity significantly.

In this paper we develop a Randomized Heuristic Approach (RHA) for multidimensional assignment problems with decomposable costs (likelihoods). Unlike many assignment algorithms, the RHA does not need the complete candidate assignment tree to start with; instead, it constructs the tree as required. Results show that the RHA requires only a small fraction of the assignment tree, which leads to a considerable reduction in computational cost. Results also show that the RHA, on average, produces better solutions than the Lagrangian-relaxation-based multidimensional assignment algorithm, which has higher computational complexity. Furthermore, using the different solutions obtained across RHA iterations, the top m solutions can be constructed with no further computation. These solutions can be utilized in a soft-decision-based algorithm, which, as shown in this paper by a ground target tracking example, performs much better than a hard-decision-based algorithm.
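The RHA itself is the authors' contribution and is not reproduced here; the following sketch shows only the generic idea of randomized greedy construction, in which repeated randomized solutions yield a pool from which the best (and the ranked top-m) assignments are drawn at no extra cost. The 2D cost matrix and selection rule are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
cost = rng.uniform(1.0, 10.0, size=(6, 6))  # toy 2D assignment costs (rows -> columns)

def randomized_greedy(cost, iters=200, top_m=3):
    """Repeat a randomized greedy construction; keep the m best assignments seen."""
    solutions = {}
    for _ in range(iters):
        free_cols, total, assign = set(range(cost.shape[1])), 0.0, []
        for r in range(cost.shape[0]):
            cols = np.array(sorted(free_cols))
            w = 1.0 / cost[r, cols]                 # cheaper columns more likely
            c = int(rng.choice(cols, p=w / w.sum()))  # randomized greedy pick
            free_cols.remove(c)
            assign.append(c)
            total += cost[r, c]
        solutions[tuple(assign)] = total            # pool doubles as ranked-solution source
    return sorted(solutions.items(), key=lambda kv: kv[1])[:top_m]

for assign, total in randomized_greedy(cost):
    print(assign, round(total, 2))
```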
Many optimization problems that arise in multi-target tracking and fusion applications are known to be NP-complete, i.e., believed to have worst-case complexities that are exponential in problem size. Recently, many such NP-complete problems have been shown to display threshold phenomena: it is possible to define a parameter such that the probability of a random problem instance having a solution drops abruptly from 1 to 0 at a specific value of the parameter. It is also found that the amount of resources needed to solve the problem instance peaks at the transition point.

Among the problems found to display this behavior are graph coloring (also known as clustering, and relevant for multi-target tracking), satisfiability (which occurs in resource allocation and planning problems), and the travelling salesperson problem.

Physicists studying these problems have found intriguing similarities to phase transitions in spin models of statistical mechanics. Many methods previously used to analyze spin glasses have been applied to explain some of the properties of the behavior at the transition point. It turns out that the transition happens because the fitness landscape of the problem changes as the parameter is varied. Some algorithms have been introduced that exploit this knowledge of the structure of the fitness landscape. In this paper, we review some of the experimental and theoretical work on threshold phenomena in optimization problems and indicate how optimization problems from tracking and sensor resource allocation could be analyzed using these results.
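The threshold phenomenon is easy to observe empirically. The sketch below (a toy experiment, not from the paper) estimates the probability that a random 3-SAT instance is satisfiable as the clause-to-variable ratio is swept; for random 3-SAT the transition is empirically near a ratio of about 4.27, though finite-size effects blur it at the tiny instance sizes used here so that brute-force checking suffices.

```python
import itertools
import numpy as np

rng = np.random.default_rng(3)

def random_3sat(n_vars, n_clauses):
    """Random 3-SAT instance: each clause has 3 distinct variables, random signs."""
    return [
        [(int(v), rng.integers(2) == 1) for v in rng.choice(n_vars, 3, replace=False)]
        for _ in range(n_clauses)
    ]

def satisfiable(n_vars, clauses):
    """Brute-force satisfiability check (tiny n only)."""
    for bits in itertools.product([False, True], repeat=n_vars):
        if all(any(bits[v] == sign for v, sign in cl) for cl in clauses):
            return True
    return False

n, trials = 8, 30
for alpha in [2.0, 3.0, 4.0, 4.3, 5.0, 6.0]:   # clause-to-variable ratio
    sat = sum(satisfiable(n, random_3sat(n, int(alpha * n))) for _ in range(trials))
    print(f"alpha={alpha:.1f}  P(sat) ~ {sat / trials:.2f}")
```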
Infrared cameras can detect the heat signatures of missile plumes in the mid-wave infrared waveband (3-5 microns) and are being developed for use, in conjunction with advanced tracking algorithms, in Missile Warning Systems (MWS). However, infrared imagery is also liable to contain appreciable levels of noise and significant levels of thermal clutter, which can make missile detection and tracking very difficult. This paper discusses the use of motion-based methods for the detection, identification, and tracking of missiles, utilising the apparent motion of a missile plume against background clutter. Using a toolbox designed for the evaluation of missile warning algorithms, algorithms have been developed, tested, and evaluated using a mixture of real, synthetic, and composite infrared imagery.
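As a minimal illustration of motion-based detection (not one of the evaluated algorithms), the following sketch flags pixels whose frame-to-frame change exceeds a robust noise threshold; all frames and parameters are synthetic.

```python
import numpy as np

def motion_candidates(prev_frame, frame, k=5.0):
    """Flag pixels whose temporal change exceeds k robust-sigmas of the difference."""
    diff = frame.astype(float) - prev_frame.astype(float)
    sigma = 1.4826 * np.median(np.abs(diff - np.median(diff)))  # MAD noise estimate
    return np.abs(diff) > k * max(sigma, 1e-6)

rng = np.random.default_rng(4)
f0 = rng.normal(100.0, 2.0, (64, 64))
f1 = f0 + rng.normal(0.0, 2.0, (64, 64))
f1[30, 40] += 50.0                      # injected "plume" motion
print("candidate pixels:", np.argwhere(motion_candidates(f0, f1)))
```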
Multisensor Fusion, Multitarget Tracking, and Resource Management II
Distributed autonomous systems, i.e., systems that have separate, distributed components, each of which exhibits some degree of autonomy, are increasingly providing solutions to naval and other DoD problems. Recently developed control, planning, and resource allocation algorithms for two types of distributed autonomous systems will be discussed. The first distributed autonomous system (DAS) to be discussed consists of a collection of unmanned aerial vehicles (UAVs) that are under fuzzy logic control. The UAVs fly and conduct meteorological sampling in a coordinated fashion determined by their fuzzy logic controllers to determine the atmospheric index of refraction. Once in flight, no human intervention is required. A fuzzy planning algorithm determines the optimal trajectory, sampling rate, and pattern for the UAVs and an interferometer platform while taking into account risk, reliability, priority for sampling in certain regions, fuel limitations, mission cost, and related uncertainties. The real-time fuzzy control algorithm running on each UAV will give the UAV limited autonomy, allowing it to change course immediately without consulting any commander, request other UAVs to help it, alter its sampling pattern and rate when observing interesting phenomena, or terminate the mission and return to base. The algorithms developed will be compared to a resource manager (RM) developed for another DAS problem related to electronic attack (EA). This RM is based on fuzzy logic and optimized by evolutionary algorithms; it allows a group of dissimilar platforms to share EA resources distributed throughout the group. For both DAS types, significant theoretical and simulation results will be presented.
The degree of uncertainty in a track's position provides an indication of how best to allocate sensor resources. In this paper, we discuss the use of this track parameter for two tasking activities. First, we consider resolution selection for electro-optical (EO) sensor tasks. Second, we integrate our knowledge of kinematics into the process of information-theoretic task selection. We consider optimized selection of the resolution for imaging sensors such that we maximize what we term the probability of acquiring a target. The probability of acquiring, if a particular resolution is chosen, is a function of both the probability of the target being within the sensor footprint and the probability that the target is detected given that it is in the footprint. In the process of selecting sensor tasks, Toyon employs an information-theoretic metric; we apply conditionals on the entropy that are a function of the uncertainty with which the track will actually be detected during the sensor task. A tracker based on the Kalman filter is used to provide an estimate of position and an associated two-dimensional error covariance. The kinematic information is used to compute the probability that a target is within a footprint, whose size is based on the resolution. For simulation, we employ the high-fidelity Toyon-developed SLAMEM testbed.
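The probability-of-acquiring computation described above can be sketched as follows, assuming an axis-aligned Gaussian position error and a rectangular footprint (both simplifying assumptions; the footprint sizes and detection probabilities are invented):

```python
import numpy as np
from scipy.special import erf

def p_in_footprint(half_w, half_h, sigma_x, sigma_y):
    """P(target inside a +/-half_w by +/-half_h footprint) for a zero-mean,
    axis-aligned Gaussian position error with std devs sigma_x, sigma_y."""
    px = erf(half_w / (np.sqrt(2) * sigma_x))
    py = erf(half_h / (np.sqrt(2) * sigma_y))
    return px * py

def p_acquire(half_w, half_h, sigma_x, sigma_y, p_detect):
    # P(acquire) = P(in footprint) * P(detect | in footprint)
    return p_in_footprint(half_w, half_h, sigma_x, sigma_y) * p_detect

# Toy trade-off: a wider footprint raises containment but (here, by assumption)
# lowers the detection probability at the coarser resolution.
for half_w, pd in [(100.0, 0.9), (300.0, 0.7), (900.0, 0.5)]:
    print(half_w, round(p_acquire(half_w, half_w, 250.0, 250.0, pd), 3))
```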
The author's previous publications illustrated the need for a better understanding of the role of Connection Management (CNM) in spatially and geographically diverse distributed sensor networks. This construct is re-examined in a conceptual CNM architectural framework. The purpose of Connection Management is to provide seamless, demand-based resource allocation and sharing of information products. For optimum distributed information fusion performance, these systems must minimize communications delays and maximize message throughput, reduce or eliminate out-of-sequence measurements, take into account data pedigree, and at the same time optimally allocate bandwidth resources and/or encode track data (sources of information) for optimum distributed estimation of the target state. In order to achieve overall distributed "network" effectiveness, these systems must be adaptive and able to distribute data on an on-demand basis in real time. While the requirements for these systems are known, research in this area has been fragmented. Related problems, goals, and potential solutions are explored, highlighting the need for a multi-disciplinary approach among the communications, estimation, information and queuing theory, networking, optimization, and fusion communities. A CNM conceptual architecture and simulation results are illustrated for optimum demand-based bandwidth allocation.
Previous papers have introduced the concept of goal lattices (GL) and the GMUGLE(tm) software for assisting the user in entering and ordering a set of goals into a goal lattice, as well as assigning relative values to them. The previous assumption was that the GL was static; it computed the relative values of the search, track, and ID functions for a reconnaissance mission. For more complex missions in a dynamic environment with expected changes in operational mode, the concept of dynamic goals is introduced. Dynamic goals are instantiated from a set of predefined goals, along with their interconnection into the preexisting mission GL. This instantiation is done by the platform sensor manager part of the mission manager and represents a concurrent information request that exists until the platform sensor manager uninstantiates it. A representative example of how goal instantiation is implemented is presented.
In a dynamic battlespace, the utility of decisions is a sensitive function of their timeliness. The allocation of weapons and the assignment of sensing and communications resources are examples where timely decisions are crucial to long-term survival. In this paper, we investigate the application of information-based decision-making theory to a data fusion network in such an environment. Specifically, we examine the advantages of decentralised information-based control for sensor-to-target assignment for identification as well as weapon allocation, and compare different utilities for optimising the combat strategy and therefore the life of the network.

In our application, a positive identification is required before a weapon can be allocated to a target. The quality of sensor-to-target assignment is a key factor in determining whether incoming threats can be identified and destroyed before they can act. The two aspects of information-based control in our scenario are sensor-to-target assignment and weapon allocation. These can be treated as sequential decision-making processes or, more optimally, as an integrated process. In the sequential approach, the decisions from the sensor-to-target assignment are simply propagated. The integrated approach assigns sensors to targets on the basis of the utility of the information that can be gained in the context of the weapon allocation decisions that must be made; this concept is also known as information value. Both approaches are considered here.
Multisensor Fusion, Multitarget Tracking, and Resource Management III
In this paper we present the development of a multisensor-multitarget tracking testbed for large-scale distributed (or network-centric) scenarios. The project, which is in progress at McMaster University and the Royal Military College of Canada, is supported by the Department of National Defence and Raytheon Canada. The objective is to develop a testbed capable of handling multiple, heterogeneous sensors in a hierarchical architecture for maritime surveillance. The testbed consists of a scenario generator that can produce simulated data from multiple sensors, including radar, sonar, IR, and ESM, as well as a tracker framework into which different tracking algorithms can be integrated. In the first stage of the project, the IMM/Assignment tracker and the Particle Filter (PF) tracker have been implemented in a distributed architecture, and some preliminary results have been obtained. Other trackers, such as the Multiple Hypothesis Tracker (MHT), are planned for the future.
The goal of the DARPA Video Verification of Identity (VIVID) program is to develop an automated video-based ground targeting system for unmanned aerial vehicles. The system comprises several modules that interact with each other to support tracking of multiple targets, confirmatory identification, and collateral damage avoidance. The Multiple Target Tracking (MTT) module automatically adjusts the camera pan, tilt, and zoom to support kinematic tracking, multi-target track association, and confirmatory identification. The MTT system comprises: (i) a video processor that performs moving object detection and feature extraction, including object position and velocity, (ii) a multiple hypothesis tracker that processes video processor reports to generate and maintain tracks, and (iii) a sensor resource manager that aims the sensor to improve tracking of multiple targets. This paper presents a performance assessment of the current implementation of the MTT under several operating conditions. The evaluation is done using pre-recorded airborne video to assess the ability of the video tracker to detect and track ground moving objects over extended periods of time. The tests comprise a number of different operational conditions such as multiple targets and confusers under various levels of occlusion and target maneuverability, as well as different background conditions. The paper also describes the challenges that still need to be overcome to extend track life over long periods of time.
As more and more nonlinear estimation techniques become available, our interest is in finding out what performance improvement, if any, they can provide for practical nonlinear problems that have traditionally been solved using linear methods. In this paper we examine the problem of estimating spacecraft position using conical scan (conscan) for NASA's Deep Space Network antennas. We show that for additive disturbances on the antenna power measurement, the problem can be transformed into a linear one, and we present a general solution to this problem, with the least-squares solution reported in the literature as a special case. We also show that for additive disturbances on antenna position, the problem is truly nonlinear, and we present two approximate solutions, based on linearization and on the unscented transformation, respectively, and one "exact" solution based on the Markov chain Monte Carlo (MCMC) method. Simulations show that, with the amount of data collected in practice, linear methods perform almost the same as MCMC methods. Only when we artificially reduce the amount of collected data and increase the level of noise do nonlinear methods offer better accuracy than linear methods, at the expense of more computation.
Heating, ventilation, and air conditioning (HVAC) systems constitute the largest portion of energy-consuming equipment in residential and commercial facilities. Real-time health monitoring and fault diagnosis are essential for reliable and uninterrupted operation of these systems. Existing fault detection and diagnosis (FDD) schemes for HVAC systems are only suitable for a single operating mode with small numbers of faults, and most of the schemes are system-specific. A generic real-time FDD scheme, applicable to all possible operating conditions, can significantly reduce HVAC equipment downtime, thus improving the efficiency of building energy management systems. This paper presents an FDD methodology for faults in centrifugal chillers. The FDD scheme compares the diagnostic performance of three data-driven techniques, namely support vector machines (SVM), principal component analysis (PCA), and partial least squares (PLS). In addition, a nominal model of a chiller that can predict system response under new operating conditions is developed using PLS. We used benchmark data from a real 90-ton centrifugal chiller test facility, provided by the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE), to demonstrate and validate our proposed diagnostic procedure. The database consists of data from sixty-four monitored variables under nominal conditions and eight fault conditions of different severities at twenty-seven operating modes.
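As an illustration of one of the three compared techniques, here is a hedged sklearn sketch of SVM-based fault classification on synthetic stand-in data; the ASHRAE benchmark itself is not reproduced, and the class structure below is invented.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the chiller benchmark: nominal + 8 fault classes,
# each a Gaussian blob in a 64-variable measurement space.
rng = np.random.default_rng(5)
X = np.vstack([rng.normal(c, 1.0, (60, 64)) for c in range(9)])
y = np.repeat(np.arange(9), 60)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))  # scale, then classify
clf.fit(X_tr, y_tr)
print("held-out accuracy:", round(clf.score(X_te, y_te), 3))
```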
Modern combat systems must counter different kinds of threats, including surface targets and air targets. In the latter case, fast-moving incoming missiles such as BUNT missiles turn out to be the most threatening, because their maneuver capabilities are difficult for tracking processes to manage. Moreover, when a defense missile launched by the fire system is remote-controlled, threat maneuvers may decrease the interception capabilities of the combat system, i.e., the probability of kill (PK).

This article addresses that situation. A practical data fusion algorithm is presented; the case studied is a short-range fire-control platform. The combat system features a dual-sensor suite that combines a track radar and a thermal imager, while a Missile Guidance Unit (MGU) takes into account the incoming target and defense missile tracks to remote-control the missile toward its target.

Statistical results on tracking performance and interception sequences are then given for different incoming-missile scenarios and sensor suites (a radar-alone configuration, and the active sensor combined with a thermal imager). They highlight how data fusion affects the overall anti-missile fire system performance, especially the miss distance.
We propose a wavelet-based signal classification technique using generalized Gaussian distributions to obtain the smallest set of wavelet bases required to discriminate a set of high range resolution (HRR) target-class signals. The HRR target recognition approach utilizes a best-bases algorithm that relies on wavelets and local discriminants for feature extraction and for minimizing the dimension of the original target signal. The feature vectors are wavelet bases that capture unique information about each set of target-class signals; we examined six target classes in this study. To reduce the feature space and to obtain the most salient features for classification, we used a wavelet-based feature extraction method that relies on generalized Gaussians and principal component analysis. The method accurately models the marginal distributions of the wavelet coefficients using a generalized Gaussian density (GGD). Principal component analysis was then used to obtain the best set of super-wavelet coefficients/features for each target class.
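The GGD modeling step can be made concrete. The density is p(x) = β/(2αΓ(1/β)) exp(−(|x|/α)^β), and a standard moment-matching fit recovers the shape β from the ratio (E|X|)²/E[X²]; the sketch below uses Laplacian samples (β = 1) as a stand-in for wavelet subband coefficients.

```python
import numpy as np
from scipy.special import gamma
from scipy.optimize import brentq

def fit_ggd(x):
    """Moment-matching fit of a generalized Gaussian density
    p(x) = beta/(2*alpha*Gamma(1/beta)) * exp(-(|x|/alpha)**beta)."""
    m1, m2 = np.mean(np.abs(x)), np.mean(x**2)
    r = m1**2 / m2   # (E|X|)^2 / E[X^2] = Gamma(2/b)^2 / (Gamma(1/b)*Gamma(3/b))
    g = lambda b: gamma(2.0 / b) ** 2 / (gamma(1.0 / b) * gamma(3.0 / b)) - r
    beta = brentq(g, 0.1, 10.0)                          # shape parameter
    alpha = m1 * gamma(1.0 / beta) / gamma(2.0 / beta)   # scale parameter
    return alpha, beta

rng = np.random.default_rng(6)
coeffs = rng.laplace(0.0, 1.0, 20000)   # stand-in wavelet coefficients
print(fit_ggd(coeffs))                  # expect beta near 1, alpha near 1
```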
This paper examines the benefits of using reconnaissance and targeting imagery in the delivery of air-to-ground guided munitions. In particular, the paper considers the use of third-party imagery to improve the accuracy of scene-matching object localisation algorithms and to improve the delivery accuracy of air-launched seeker-guided weapons. The analysis focuses on a simulated engagement, consisting of an infrared imager placed on an airborne reconnaissance platform, a fast-jet delivery aircraft equipped with a modern electro-optical targeting pod, a seeker-guided weapon model, and a ground target moving in a highly cluttered environment. The paper assesses different strategies for utilizing the target position data from the three imaging systems (reconnaissance, targeting pod and weapon seeker).
The next generation of military reconnaissance systems will generate larger images, with more bits per pixel, combined with improved spatial, spectral and temporal resolutions. This increase in the quantity of imagery requiring analysis to produce militarily useful reports, the "data deluge", will place a significant burden upon dissemination systems and upon traditional, largely manual, exploitation techniques. To avoid the possibility that imagery derived from expensive assets may go unexploited, an increased use of automated imagery exploitation tools is required to assist the Image Analysts (IAs) in their tasks.
This paper describes the fully automated generation of time sequence datacubes from mixed imagery sources as cueing aids for IAs. In addition to facilitating quick and easy visual inspection, the datacubes provide the prealigned image sets needed for exploitation by some automated change detection and target detection algorithms. The ARACHNID system under development handles SAR, IR and EO imagery and will align image pairs obtained with widely differing sun, view and zenith angles. Edge enhancement pre-processing is employed to increase the similarity between images of disparate characteristics.
Progress is reported on the automation of this registration task, on its current performance characteristics, its potential for integration into an operational system, and on its expected utility in this context.
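A generic sketch of the registration idea (not the ARACHNID implementation): edge-enhance both images to increase cross-modal similarity, then estimate translation by phase correlation. All imagery and the test shift below are synthetic.

```python
import numpy as np
from scipy.ndimage import sobel

def register_translation(ref, img):
    """Estimate the (dy, dx) shift of img relative to ref using
    edge-enhanced phase correlation."""
    e_ref = np.hypot(sobel(ref, 0), sobel(ref, 1))   # edge magnitude images
    e_img = np.hypot(sobel(img, 0), sobel(img, 1))
    cross = np.fft.fft2(e_img) * np.conj(np.fft.fft2(e_ref))
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Unwrap shifts larger than half the image size to negative values.
    if dy > ref.shape[0] // 2: dy -= ref.shape[0]
    if dx > ref.shape[1] // 2: dx -= ref.shape[1]
    return dy, dx

rng = np.random.default_rng(7)
ref = rng.normal(size=(128, 128)); ref[40:60, 50:90] += 5.0
img = np.roll(np.roll(ref, 3, axis=0), -5, axis=1)   # known test shift
print(register_translation(ref, img))                 # expect (3, -5)
```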
Change detection plays an important role in different military areas, such as strategic reconnaissance, verification of armament and disarmament control, and damage assessment. It is the process of identifying differences in the state of an object or phenomenon by observing it at different times. The availability of spaceborne reconnaissance systems with high spatial resolution, multispectral capabilities, and short revisit times offers new perspectives for change detection. Before performing any kind of change detection, it is necessary to separate changes of interest from changes caused by differences in data acquisition parameters; in such cases, pre-processing is needed to correct or normalize the data. Image registration and, corresponding to this task, ortho-rectification of the image data are further prerequisites for change detection. If feasible, a 1-to-1 geometric correspondence should be sought. Change detection on an iconic level with subsequent interpretation of the changes by the observer is often proposed; nevertheless, automatic knowledge-based analysis delivering an interpretation of the changes on a semantic level should be the aim for the future. We present first results of change detection on a structural level for urban areas. After pre-processing, the images are segmented into areas of interest, and structural analysis is applied to these regions to extract descriptions of urban infrastructure such as buildings, roads, and refinery tanks. These descriptions are matched to detect changes and similarities.
Multisensor Fusion Methodologies and Applications I
The conventional extended Kalman filter (EKF) is based on a Taylor-series approximation of nonlinear motion and measurement models. The EKF cannot be directly generalized to multitarget situations, since the multitarget state variable X = {x1,...,xn} exhibits discontinuous jumps in target number: X = ∅, X = {x1}, X = {x1, x2}, etc. However, it is possible to extend X to a continuous multitarget state X̂ = {x̂1,..., x̂n} via the concept of a point target-cluster x̂ = (a, x), where x̂ is interpreted as multiple targets co-located at target state x, the expected number of which is a. We generalize the EKF to multitarget problems. We illustrate the approach by deriving predictor and corrector equations for a single-sensor, single-target EKF that integrates the functions of detection and tracking in the presence of missed detections and false alarms.
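For reference, the single-sensor, single-target EKF predictor and corrector being generalized have the standard textbook form (with F_k and H_k the Jacobians of the motion model f and measurement model h; these are not the paper's multitarget equations):

```latex
\begin{align*}
&\text{Predictor:} && \hat{x}_{k|k-1} = f(\hat{x}_{k-1|k-1}), \qquad
   P_{k|k-1} = F_k P_{k-1|k-1} F_k^{\top} + Q_k \\
&\text{Corrector:} && K_k = P_{k|k-1} H_k^{\top}
   \left( H_k P_{k|k-1} H_k^{\top} + R_k \right)^{-1} \\
& && \hat{x}_{k|k} = \hat{x}_{k|k-1} + K_k \left( z_k - h(\hat{x}_{k|k-1}) \right), \qquad
   P_{k|k} = \left( I - K_k H_k \right) P_{k|k-1}
\end{align*}
```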
The particle filter is an effective technique for target tracking in the presence of a nonlinear system model, a nonlinear measurement model, or non-Gaussian noise in the system and/or measurement processes. In this paper, we compare three particle filtering algorithms on a spawning ballistic target tracking scenario. One of the algorithms, the tagged particle filter (TPF), was recently developed by us. It uses separate sets of particles for separate tracks; however, data association to different tracks is interdependent. The other two algorithms implemented in this paper are the probability hypothesis density (PHD) filter and the joint multitarget probability density (JMPD). The PHD filter propagates the first-order statistical moment of the multitarget density using particles, while the JMPD stacks the states of a number of targets to form a single particle that represents the whole system. Simulation results are presented to compare the performances of these algorithms.
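All three algorithms build on the same sampling-importance-resampling (SIR) machinery; the sketch below shows one generic SIR step on an invented one-dimensional toy model. It is not the TPF, PHD, or JMPD itself.

```python
import numpy as np

rng = np.random.default_rng(8)

def sir_step(particles, weights, z, propagate, likelihood):
    """One sampling-importance-resampling step: predict, reweight, resample."""
    particles = propagate(particles)                 # predict through dynamics
    weights = weights * likelihood(z, particles)     # reweight by the measurement
    weights /= weights.sum()
    n = len(weights)                                 # systematic resampling
    idx = np.searchsorted(np.cumsum(weights), (rng.random() + np.arange(n)) / n)
    return particles[idx], np.full(n, 1.0 / n)

# Toy 1D nearly-constant-position target with Gaussian noises.
prop = lambda p: p + rng.normal(0.0, 0.1, p.shape)
lik = lambda z, p: np.exp(-0.5 * ((z - p[:, 0]) / 0.5) ** 2)
parts, w = rng.normal(0.0, 2.0, (500, 1)), np.full(500, 1 / 500)
for z in [0.9, 1.1, 1.0]:
    parts, w = sir_step(parts, w, z, prop, lik)
print("estimate:", float((w[:, None] * parts).sum()))  # near 1.0
```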
Many nonlinear filtering (NLF) algorithms have been proposed in recent years for application to single- and multi-target detection and tracking. A methodology for preliminary test and evaluation (PT&E) of these algorithms is becoming increasingly necessary. Under U.S. Army Research Office funding, Scientific Systems Co. Inc. and Lockheed Martin are developing a Multi-Environment NLF Tracking Assessment Testbed (MENTAT) to address this need. Once completed, MENTAT is to provide a "hierarchical" series of PT&E Monte Carlo simulated environments (including benchmark problems) of increasing difficulty and realism. The simplest MENTAT environment will consist of simple 2D scenarios with simple Gaussian-noise backgrounds and simple target maneuvers. The most complicated environments will involve: (1) increasingly realistic simulated low-SNR backgrounds; (2) increasing motion and sensor nonlinearity; (3) increasingly high state dimensionality; (4) increasing numbers of targets; and so on.
This paper addresses the problem of detecting and tracking an unknown number of submarines in a body of water using a known number of moving sonobuoys. We suppose there are N submarines collectively maneuvering as a weakly interacting stochastic dynamical system, where N is a random number, and we need to detect and track these submarines using M moving sonobuoys. These sonobuoys can only detect the superposition of all submarines, through corrupted and delayed sonobuoy samples of the noise emitted by the collection of submarines. The signals from the sonobuoys are transmitted to a central base for analysis, where it is required to estimate how many submarines there are, as well as their locations, headings, and velocities. The delays induced by the propagation of the submarine noise through the water mean that novel historical filtering methods need to be developed. We summarize these developments and give initial results on a simplified example.
Sensor management in support of situation assessment (SA) presents a daunting theoretical and practical challenge. We demonstrate new results using a foundational, joint control-theoretic approach to SA and SA sensor management that is based on three concepts: (1) a "dynamic situational significance map" that mathematically specifies the meaning of tactical significance for a given theater of interest at a given moment; (2) an intuitively meaningful and potentially computationally tractable objective function for SA, namely maximization of the expected number of targets of tactical interest; and (3) integration of these two concepts with approximate multitarget filters (specifically, first-order multitarget moment filters and multi-hypothesis correlator (MHC) engines). Under this approach, sensors are directed to preferentially collect observations from targets of actual or potential tactical significance, according to an adaptively modified definition of tactical significance.

Results of testing this sensor management algorithm with significance maps defined in terms of a target's location, speed, and heading are presented. Testing is performed against simulated data, and different sensor management algorithms, including the proposed one, are compared.
This is the first of two conference papers describing a unified approach to the problem of estimating the states of one or more temporally evolving objects, based on fusion of accumulating multi-source information, where such information can take one of two forms: ambiguous measurements or ambiguous state-estimates. It is often asserted that the probabilistic/Bayesian paradigm is too restrictive to successfully address information sources of this kind and that, consequently, alternative paradigms such as fuzzy logic, the Dempster-Shafer theory, or rule-based inference must be used instead. On the other hand, many Bayesians vigorously challenge the very credibility of such approaches. In this paper we show that, as most commonly employed, fusion of fuzzy and rule-based measurements can be subsumed within the Bayesian theory. Our approach is based on natural extensions of the familiar recursive Bayes filter. These extensions rely, in turn, on a systematic Bayesian analysis used in conjunction with the theory of finite-set statistics (FISST).
The ambiguity of human information sources and of a priori human context would seem to automatically preclude the feasibility of a Bayesian approach to information fusion. We show that this is not necessarily the case, and that one can model the ambiguities associated with defining a "state" or "states of interest" of an entity. We show likewise that we can model information such as natural-language statements, and hedge against the uncertainties associated with the modeling process. Likewise, a likelihood can be created that hedges against the inherent uncertainties in information generation and collection, including the uncertainties created by the passage of time between information collections. As with the processing of conventional sensor information, we use the Bayes filter to produce posterior distributions from which we can extract estimates not only of the states, but also of the reliability of those state-estimates. Results of testing this novel Bayes-filter information-fusion approach against simulated data are presented.
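One common, simple way to fold an ambiguous human report into a Bayes update is to treat its fuzzy membership function as a generalized likelihood. The toy discrete sketch below illustrates only that idea, not the FISST construction developed in the paper; the states and membership values are invented.

```python
import numpy as np

# Discrete state space (e.g., target type) and a prior over it.
states = ["truck", "tank", "decoy"]
prior = np.array([0.5, 0.3, 0.2])

# A natural-language report ("looks like a tracked vehicle") modeled as a
# membership function over states, used here as a generalized likelihood.
membership = np.array([0.2, 0.9, 0.4])

posterior = prior * membership      # Bayes update with the fuzzy "likelihood"
posterior /= posterior.sum()
print(dict(zip(states, posterior.round(3))))
```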
The bearing-only tracking problem arises in many radar and sonar tracking applications. Since the bearing measurement model is a nonlinear function of the target state, the filtering problem is nonlinear in nature. A great deal of attention has been focused on this problem because of the difficulty posed by its so-called high degree of nonlinearity (DoN); however, a quantitative measure of the DoN has not been calculated in previous works. It has been observed that the extended Kalman filter (EKF), in which the state vector consists of the Cartesian components of position and velocity, is unstable and diverges in some cases. The range-parametrized EKF (RPEKF) and the particle filter (PF) have been shown to produce improved estimates for the bearing-only tracking problem. In this paper, we calculate two measures of nonlinearity, (1) the parameter-effects curvature and (2) the intrinsic curvature, for the bearing-only tracking problem using differential-geometry measures of nonlinearity. We present numerical results using simulated data for the constant-velocity motion of a target in 2D with bearing-only measurements, where the sensor platform uses a higher-order motion than the target to achieve observability. We analyze the DoN by varying the distance between the target and the sensor.
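The nonlinearity at issue is the bearing measurement itself. Here is a minimal sketch of the measurement function and its Jacobian, as used in the Cartesian-state EKF (the state, sensor position, and numbers are invented):

```python
import numpy as np

def h_bearing(x, sensor):
    """Bearing (rad) from sensor to target; state x = [px, py, vx, vy]."""
    dx, dy = x[0] - sensor[0], x[1] - sensor[1]
    return np.arctan2(dy, dx)

def H_jacobian(x, sensor):
    """Jacobian of the bearing measurement w.r.t. the state (1 x 4)."""
    dx, dy = x[0] - sensor[0], x[1] - sensor[1]
    r2 = dx**2 + dy**2
    return np.array([[-dy / r2, dx / r2, 0.0, 0.0]])

x = np.array([1000.0, 2000.0, 5.0, -3.0])
print(h_bearing(x, (0.0, 0.0)), H_jacobian(x, (0.0, 0.0)))
```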
The random set approach to multitarget tracking is a theoretically sound framework that covers joint estimation of the number of targets and the states of the targets. This paper describes a particle filter implementation of the random set multitarget filter. The contribution of this paper to the random set tracking framework is the formulation of a measurement model in which each sensor report is assumed to contain at most one measurement. The implemented filter was tested in synthetic bearings-only tracking scenarios containing up to two targets in the presence of false alarms and missed measurements. The estimated target state consisted of 2D position and velocity components. The filter was capable of tracking the targets fairly well despite the missed measurements and the relatively high false alarm rates. In addition, the filter showed robustness against incorrect parameter values for the false alarm rates. The results obtained during the limited tests of the filter show that the random set framework has potential for challenging tracking situations. On the other hand, the computational burden of the described implementation is quite high and increases approximately linearly with the expected number of targets.
Multisensor Fusion Methodologies and Applications II
Bayesian networks have been applied widely in many areas, such as multi-sensor fusion, situation assessment, and decision making under uncertainty. It is well known that, in general, when dealing with large complex networks, exact probabilistic inference methods are computationally difficult or impossible. To deal with this difficulty, "anytime" stochastic simulation methods such as likelihood weighting and importance sampling have become popular. In this paper, we introduce a very efficient iterative importance sampling algorithm for Bayesian network inference. Much like the recently popular sequential simulation method, the particle filter, this algorithm identifies an importance function and conducts sampling iteratively. However, particle filter methods often run into the so-called "degeneration" or "impoverishment" problems due to unlikely evidence or a high-dimensional sampling space. To overcome this, the Bayesian network particle filter (BNPF) algorithm decomposes the global state space into local ones based on the network structure and learns the importance function accordingly in an iterative manner. We used large real-world Bayesian network models available in the academic community to test the inference method. The preliminary simulation results show that the algorithm is very promising.
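For readers unfamiliar with the baseline, here is a minimal likelihood-weighting sketch on a three-node toy network with invented probabilities (the exact posterior is about 0.70); the paper's BNPF algorithm goes further by learning the importance function iteratively.

```python
import numpy as np

rng = np.random.default_rng(9)

# Toy network: Rain -> WetGrass <- Sprinkler, with invented CPTs.
P_RAIN, P_SPRINKLER = 0.2, 0.1
P_WET = {(True, True): 0.99, (True, False): 0.9,
         (False, True): 0.9, (False, False): 0.01}

def likelihood_weighting(n, wet=True):
    """Estimate P(Rain | WetGrass = wet): sample non-evidence nodes from their
    priors, weight each sample by P(evidence | parents)."""
    num = den = 0.0
    for _ in range(n):
        rain = rng.random() < P_RAIN
        sprinkler = rng.random() < P_SPRINKLER
        w = P_WET[(rain, sprinkler)] if wet else 1.0 - P_WET[(rain, sprinkler)]
        num += w * rain
        den += w
    return num / den

print(round(likelihood_weighting(100_000), 3))  # ~0.70
```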
Probabilistic inference for Bayesian networks is in general NP-hard using either exact algorithms or approximate methods. However, for very complex networks, only approximate methods such as stochastic sampling can provide a solution under a time constraint. Several simulation methods are currently available. They include logic sampling (the first stochastic method proposed for Bayesian networks), the likelihood weighting algorithm (the most commonly used simulation method because of its simplicity and efficiency), the Markov blanket scoring method, and the importance sampling algorithm. In this paper, we first briefly review and compare the available simulation methods; we then propose an improved importance sampling algorithm, the linear Gaussian importance sampling algorithm for general hybrid models (LGIS). LGIS is aimed at hybrid Bayesian networks consisting of both discrete and continuous random variables with arbitrary distributions. It uses a linear function with additive Gaussian noise to approximate the true conditional probability distribution of a continuous variable given both its parents and the evidence in a Bayesian network. One of the most important features of the newly developed method is that it can adaptively learn the optimal importance function from previous samples. We test the inference performance of LGIS using a 16-node linear Gaussian model and a 6-node general hybrid model. Performance comparisons with other well-known methods, such as the junction tree (JT) and likelihood weighting (LW), show that LGIS is very promising.
Commercially available very high resolution satellite imagery has reached sub-meter ground resolution for panchromatic imagery and a few meters of resolution for multispectral imagery (e.g., QuickBird: panchromatic 0.6 m and multispectral 2.4 m). Ground targets such as vehicles can be clearly recognized in the panchromatic imagery, but are difficult to recognize in the multispectral imagery. For automatic target detection, however, sub-meter multispectral imagery is desired. This paper introduces a new wavelet-integrated image fusion approach to produce a sub-meter multispectral image by combining a sub-meter panchromatic image with a several-meter multispectral image. The characteristics of the wavelet transform for spatial-detail extraction and the advantages of IHS (Intensity Hue Saturation) fusion techniques are integrated. QuickBird panchromatic and multispectral images are fused, and the results are compared with those of other existing image fusion techniques. Visual analyses demonstrate that the new wavelet-integrated approach achieves better results for target detection.
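The IHS component of such fusion can be sketched in its simplest additive form: replace the intensity of the upsampled multispectral image with the panchromatic band. The wavelet detail-injection step that the paper integrates is omitted here, and all imagery is random placeholder data.

```python
import numpy as np

def ihs_pansharpen(ms, pan):
    """Basic additive IHS-style fusion: substitute the panchromatic band for
    the intensity component of an upsampled RGB multispectral image."""
    intensity = ms.mean(axis=2)                          # crude intensity component
    return np.clip(ms + (pan - intensity)[:, :, None], 0.0, 1.0)

rng = np.random.default_rng(11)
ms = rng.random((64, 64, 3))     # stand-in upsampled multispectral image
pan = rng.random((64, 64))       # stand-in co-registered panchromatic image
print(ihs_pansharpen(ms, pan).shape)
```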
Hyperspectral imagery (HSI) of the coastal zone often focuses on the estimation of bathymetry. However, the estimation of bathymetry requires knowledge, or the simultaneous solution, of water-column inherent optical properties (IOPs) and bottom reflectance. The numerical solution of the simultaneous set of equations for bathymetry, IOPs, and bottom reflectance places high demands on the spectral quality, calibration, atmospheric correction, and signal-to-noise ratio (SNR) of the HSI data stream.

In October 2002, a joint FERI/NRL/NAVO/USACE HSI/LIDAR experiment was conducted off Looe Key, FL. This experiment yielded high-quality HSI data at 2 m resolution and bathymetric LIDAR data at 4 m resolution. The joint data set allowed for the advancement and validation of a previously generated look-up-table (LUT) approach to the simultaneous retrieval of bathymetry, IOPs, and bottom type. Bathymetric differences between the two techniques were normally distributed around a zero mean, with the exception of two peaks. One peak related to a mechanical problem in the LIDAR detector mirrors that causes errors on the edges of the LIDAR flight lines. The other significant difference occurred in a single geographic area (Hawk Channel), suggesting an incomplete IOP or bottom-reflectance description in the LUT database. In addition, benthic habitat data from NOAA's National Ocean Service (NOS) and the Florida Wildlife Research Institute (FWRI) provided validation data for the estimation of bottom type. Preliminary analyses of the bottom-type estimation suggest that the best retrievals are for seagrass bottoms. One source of the potential difficulties may be that the LUT database was generated from a more pristine location (Lee Stocking Island, Bahamas). It is expected that fusing the HSI/LIDAR data streams should reduce the errors in bottom typing and IOP estimation.
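The core of a LUT retrieval of this kind is a nearest-spectrum search. Below is a hedged sketch with entirely invented placeholder spectra and labels, not the actual LUT database:

```python
import numpy as np

rng = np.random.default_rng(12)

# Hypothetical LUT: each row is a modeled reflectance spectrum for one
# (depth, IOP, bottom-type) combination.
n_entries, n_bands = 500, 40
lut_spectra = rng.random((n_entries, n_bands))
lut_labels = list(zip(rng.uniform(1, 20, n_entries),      # depth (m)
                      rng.uniform(0.02, 0.5, n_entries),  # an IOP, e.g. absorption
                      rng.integers(0, 3, n_entries)))     # bottom-type index

def lut_retrieve(measured):
    """Return the (depth, IOP, bottom type) of the closest LUT spectrum."""
    k = np.argmin(np.sum((lut_spectra - measured) ** 2, axis=1))
    return lut_labels[k]

print(lut_retrieve(lut_spectra[42] + rng.normal(0, 0.01, n_bands)))
```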
A method assuming linear phase drift is presented to improve radar detection performance. Its use is based on the assumption that the target illumination time comprises multiple coherent pulses or coherent processing intervals (CPIs). In a conventional scanning radar, for example, this often inaccurate information can be used for statistical data mapping to indicate possible target presence. If coherent integration is desired in a beam-agile system, the method should allow sequential detection. The discussion includes a pragmatic example of exploiting echo phase progression in the constant false alarm rate (CFAR) processing of a moving target indication (MTI) system. The detection performance is evaluated with scanning radar simulations. The method has also been tested on real-world recordings, and some observations are briefly outlined.
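For background, a minimal sketch of the cell-averaging CFAR baseline that the paper's phase-progression idea extends; the extension itself is not reproduced here, and the window sizes are placeholders.

```python
import numpy as np

def ca_cfar(power, guard=2, train=8, scale=4.0):
    """power: 1-D array of range-cell powers. Returns boolean detections."""
    n = len(power)
    hits = np.zeros(n, dtype=bool)
    for i in range(train + guard, n - train - guard):
        left = power[i - guard - train : i - guard]
        right = power[i + guard + 1 : i + guard + 1 + train]
        noise = np.concatenate([left, right]).mean()   # local noise estimate
        hits[i] = power[i] > scale * noise             # adaptive threshold
    return hits
```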
Multisensor Fusion Methodologies and Applications III
This paper introduces the merge-at-a-point (MAP) algorithm to detect vehicle convoys whose destination locations are unknown. The algorithm predicts the identification numbers of merging vehicles in an iterative manner. We applied the method to simulated Ground Moving Target Indicator (GMTI) data; the technique is similar to dead reckoning and Kalman filtering. The algorithm consists of the following steps: 1) approximate the destination location for each vehicle from its track; 2) validate which vehicles are going to merge at these predicted destination locations using the minimum error solution (MES); and 3) predict the future destination locations where the vehicles will merge for the next iteration. These steps are repeated until the predicted destination locations converge; a sketch of the core estimate appears below. The algorithm can be used to associate vehicles that will merge at some unknown destination location, and it also has the potential to identify convoy names and the threats associated with these vehicle groups.
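One plausible reading of the destination estimate, offered as a stand-in for the paper's MES (whose exact form is not given in the abstract): the point closest, in the least-squares sense, to every vehicle's projected line of travel.

```python
import numpy as np

def merge_point(positions, headings):
    """positions: (V, 2) last track positions; headings: (V, 2) unit
    velocity directions (assumed not all parallel). Solves
    min_p sum_i ||(I - d_i d_i^T)(p - p_i)||^2."""
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for p, d in zip(positions, headings):
        M = np.eye(2) - np.outer(d, d)   # projector orthogonal to the line
        A += M
        b += M @ p
    return np.linalg.solve(A, b)
```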
We use measure-theoretic methods to describe the relationship between Dempster-Shafer (DS) theory and Bayesian (i.e., probability) theory. Within this framework, we demonstrate the relationships among Shafer's belief and plausibility, Dempster's lower and upper probabilities, and inner and outer measures. Dempster's multivalued mapping is an example of a random set, a generalization of the concept of a random variable. Dempster's rule of combination is the product measure on the Cartesian product of measure spaces. The independence assumption of Dempster's rule arises from the nature of the problem, in which one knows the marginal distributions but wants to calculate the joint distribution. We present an engineering example to clarify the concepts.
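To make the product-measure reading concrete, a minimal sketch of Dempster's rule of combination for finite frames; masses are dictionaries mapping focal elements (frozensets) to values summing to one, and the example frame is hypothetical.

```python
from itertools import product

def dempster_combine(m1, m2):
    joint, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        c = a & b                      # intersection of focal elements
        if c:
            joint[c] = joint.get(c, 0.0) + wa * wb
        else:
            conflict += wa * wb        # mass landing on the empty set
    return {k: v / (1.0 - conflict) for k, v in joint.items()}

# Two sources over the frame {"friend", "foe"}:
m1 = {frozenset({"friend"}): 0.6, frozenset({"friend", "foe"}): 0.4}
m2 = {frozenset({"foe"}): 0.5, frozenset({"friend", "foe"}): 0.5}
print(dempster_combine(m1, m2))   # friend: 3/7, foe: 2/7, frame: 2/7
```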
The Receiver Operating Characteristic (ROC) curve is typically used to quantify the performance of Automatic Target Recognition (ATR) systems. When multiple classifiers are to be fused, assumptions must be made in order to combine the individual ROC curves mathematically into one fused ROC curve. Often, one of these assumptions is independence between the classifiers. However, correlation may exist between the classifiers, processors, sensors, and the outcomes used to generate each ROC curve. This paper demonstrates a method for creating a ROC curve of the fused classifiers that incorporates the correlation existing between the individual ROC curves. Specifically, we use the derived covariance matrix between multiple classifiers to compute the correlation and level of dependence between pairs of classifiers. The ROC curve for the fused system is then produced, adjusting for this level of dependency, using a given fusion rule.
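A hedged sketch of the pairwise case under an OR fusion rule: for correlated Bernoulli decisions, P(A and B) = Pa·Pb + ρ·sqrt(Pa(1−Pa)Pb(1−Pb)), so one fused ROC point follows directly. The paper's covariance-matrix derivation is more general than this two-classifier form.

```python
import numpy as np

def or_fused_point(pa, pb, rho):
    """pa, pb: detection (or false-alarm) probabilities at one threshold
    pair; rho: correlation of the two decisions. Returns fused P(A or B)."""
    p_joint = pa * pb + rho * np.sqrt(pa * (1 - pa) * pb * (1 - pb))
    return pa + pb - p_joint

# Assuming independence (rho=0) overstates the fused Pd when the
# classifiers' decisions are positively correlated:
print(or_fused_point(0.8, 0.7, 0.0), or_fused_point(0.8, 0.7, 0.5))
```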
Learning in biological systems is thought to depend on the phenomenon of clustering. For example, new experiences are most easily remembered by attaching them to a category of related prior knowledge, that is, by clustering. Clustering in the brain is thought by some to involve coupling of oscillator neurons. We therefore investigate clustering in frequency with an all-optical system. The system consists of a set of coupled microring lasers whose natural frequencies are set to the values to be clustered. We assume that closer values represent more similar information and that the values set the microring natural frequencies. A microring resonant frequency in such a system is pulled away from its natural frequency by interactions with its neighbors. As a result, the frequencies of the microrings adjust themselves into a few groups or clusters. Equations are developed for a pair of microring resonators to show that they interact to perform clustering.
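An illustrative analogue only: the paper's coupled-microring equations are not given in the abstract, but a Kuramoto-type oscillator model shows the same qualitative behaviour, i.e. coupled oscillators pulling their frequencies into a few clusters; all parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
omega = np.array([1.00, 1.03, 1.06, 2.00, 2.02, 2.05])  # "natural" values
theta = rng.uniform(0, 2 * np.pi, omega.size)
K, dt, steps = 0.5, 0.01, 50000

theta0 = theta.copy()
for _ in range(steps):
    coupling = np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
    theta = theta + dt * (omega + (K / omega.size) * coupling)

# Effective frequencies collapse into two clusters (near 1.03 and 2.02)
# rather than six distinct values.
print(np.round((theta - theta0) / (steps * dt), 3))
```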
Acoustic vehicle classification is a difficult problem due to the non-stationary nature of the signals and, especially, the lack of strong harmonic structure for most civilian vehicles with highly muffled exhausts. Acoustic signatures also vary widely depending on speed, acceleration, gear position, and even the aspect angle of the sensor. The problem becomes more complicated when the deployed acoustic sensors have less than ideal characteristics, in terms of both the frequency response of the transducers and the hardware capabilities that determine resolution and dynamic range. In a hierarchical network topology, less capable Tier 1 sensors can be tasked with reasonably sophisticated signal processing and classification algorithms, reducing energy-expensive communications with the upper layers. At Tier 2, however, more sophisticated classification algorithms exceeding the Tier 1 sensor/processor capabilities can be deployed. The focus of this paper is the investigation of a Gaussian mixture model (GMM) based classification approach for these upper nodes. The use of GMMs is motivated by their ability to model arbitrary distributions, which is very relevant for motor vehicles with varying operation modes and engines. Tier 1 sensors acquire the acoustic signal and transmit computed feature vectors up to Tier 2 processors for maximum-likelihood classification using GMMs. In a binary classification task of light versus heavy vehicles, the GMM-based approach achieves a 7% equal error rate, an approximate 49% error reduction over Tier-1-only approaches.
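A minimal sketch of the Tier 2 stage, assuming one GMM per class over acoustic feature vectors and a maximum-likelihood decision; the feature choice and mixture sizes here are placeholders, not the paper's settings.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def train(features_by_class, n_components=8):
    """features_by_class: {label: (N, D) array of feature vectors}."""
    return {lbl: GaussianMixture(n_components, covariance_type="diag",
                                 random_state=0).fit(X)
            for lbl, X in features_by_class.items()}

def classify(models, X):
    """Per-vector ML decision: the class whose GMM scores highest."""
    labels = list(models)
    ll = np.stack([models[l].score_samples(X) for l in labels])  # (C, N)
    return [labels[i] for i in ll.argmax(axis=0)]
```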
Object segmentation is an important preprocessing step for many target recognition applications. Many segmentation methods have been studied, but there is still no satisfactory effectiveness measure, which makes it hard to compare different segmentation methods, or even different parameterizations of a single method. A good segmentation evaluation method would not only enable different approaches to be compared, but could also be integrated within the target recognition system to adaptively select the appropriate granularity of the segmentation, which in turn could improve recognition accuracy. A few stand-alone effectiveness measures have been proposed, but these measures examine different fundamental criteria of the objects, or examine the same criteria in different ways, so they usually work well in some cases but poorly in others. We propose a co-evaluation framework, in which different effectiveness measures judge the performance of the segmentation in different ways and their outputs are combined by a machine learning approach that coalesces the results. Experimental results demonstrate that our method performs better than existing methods.
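A hedged sketch of the co-evaluation idea: treat each stand-alone effectiveness measure as one feature and learn a combiner that predicts segmentation quality. The measures, labels, and learner below are placeholders; the paper does not specify its learning approach in the abstract.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# X: one row per segmentation result, one column per effectiveness
# measure; y: 1 if the segmentation was judged good (synthetic here).
rng = np.random.default_rng(0)
X = rng.random((200, 4))                      # stand-in for measure scores
y = (X @ np.array([0.5, 1.0, -0.3, 0.8]) > 1.0).astype(int)

combiner = LogisticRegression().fit(X, y)
print(combiner.predict_proba(X[:3])[:, 1])    # fused quality scores
```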
A 3D solid model-aided object cueing method that matches phase angles of directional derivative vectors at image pixels to phase angles of vectors normal to projected model edges is described. It is intended for finding specific types of objects at arbitrary position and orientation in overhead images, independent of spatial resolution, obliqueness, acquisition conditions, and type of imaging sensor. It is shown that the phase similarity measure can be efficiently evaluated over all combinations of model position and orientation using the FFT. The highest degree of similarity over all model orientations is captured in a match surface of similarity values vs. model position. Unambiguous peaks in this surface are sorted in descending order of similarity value, and the small image thumbnails that contain them are presented to human analysts for inspection in sorted order.
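A sketch of phase-angle matching evaluated via the FFT, in the spirit of orientation correlation: encode each gradient direction as a unit complex number and correlate the two fields in the frequency domain, giving the match surface over all positions in one pass. The model-projection step that generates the template normals is assumed, not shown, and magnitude weighting is omitted.

```python
import numpy as np

def orientation_match_surface(image, template):
    gy, gx = np.gradient(image.astype(float))
    ty, tx = np.gradient(template.astype(float))
    f = np.exp(1j * np.arctan2(gy, gx))         # unit phasors of image edges
    t = np.zeros_like(f)                        # zero-padded template phasors
    t[:template.shape[0], :template.shape[1]] = np.exp(1j * np.arctan2(ty, tx))
    # Cross-correlation over all template positions via the FFT.
    surface = np.fft.ifft2(np.fft.fft2(f) * np.conj(np.fft.fft2(t)))
    return surface.real                         # similarity vs. position
```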
Image mosaicking is the process of mapping an image series onto a common image grid, where the resulting mosaic forms a comprehensive view of the scene. This paper presents a near-real-time, automatic image mosaicking system that is designed to operate in real-world conditions. These conditions include arbitrary camera motion, disturbances from moving objects and annotations, and luminance variations. In the proposed algorithm, matching filters are used in conjunction with automatic corner detection to find several critical points within each image, which are then used to represent the image efficiently and accurately. Numerical techniques are used to distinguish between those points belonging to the actual scene and those resulting from a disturbance, and to determine the movement of the camera. The affine model is used to describe the frame-to-frame differences that result from camera motion. A local-adaptive fine-tuning step is used to correct the approximation error due to the use of the affine model, and to compensate for any luminance variation. The mosaic is constructed progressively as new images are being added. The proposed algorithm has been extensively tested on real-world, monocular video sequences, and it is shown to be very accurate and robust.
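For the frame-to-frame model, a minimal sketch of fitting the six affine parameters from matched corner pairs by linear least squares; outlier rejection and the local-adaptive fine-tuning step described above are omitted.

```python
import numpy as np

def fit_affine(src, dst):
    """src, dst: (N, 2) matched points, N >= 3. Returns the 2x3 affine A
    with [u, v]^T = A @ [x, y, 1]^T in the least-squares sense."""
    N = src.shape[0]
    X = np.hstack([src, np.ones((N, 1))])      # (N, 3) homogeneous inputs
    # Solve X @ A.T ~= dst for both output coordinates at once.
    A_T, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return A_T.T                                # (2, 3)
```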
Multiple camera systems have been considered for a number of applications, including infrared (IR) missile detection in modern fast-jet aircraft and soldier-aiding data fusion systems. This paper details experimental work undertaken to test the image-processing and harmonisation techniques developed to align multiple camera systems. It considers systems in which the camera properties are significantly different and the camera fields of view do not necessarily overlap, in contrast to stereo calibration alignment techniques that rely on similar resolutions, fields of view, and overlapping imagery. Testing involved two visible-band cameras and attempts to harmonise a narrow field of view camera with a wide field of view camera. Consideration is also given to the applicability of the algorithms to both visible-band and IR camera systems, the use of supplementary motion information from inertial measurement systems, and the consequent system limitations.
When processing a signal or an image using the Discrete Cosine Transform (DCT) or Discrete Sine Transform (DST), a typical approach is to extract a portion of the signal by windowing and then form the DCT or DST of the window contents. By shifting the window point by point over the signal, the entire signal may be processed. DCTs and DSTs are defined with either an odd or an even integer in the denominator of the transform kernel, resulting in transforms known as the even DCT (EDCT), even DST (EDST), odd DCT (ODCT), and odd DST (ODST). Each is available in types I to IV, for a total of 16 different transforms; the widely used transform commonly called the "DCT" is actually the EDCT-II. In this paper we extend our previous work on the EDCT-II and EDST-II, and show that a similar approach yields algorithms for the ODCT-II and ODST-II. We develop algorithms to "update" the ODCT-II and ODST-II simultaneously to reflect the modified window contents, using less computation than directly evaluating the modified transform via standard fast transform algorithms. These algorithms handle arbitrary step sizes up to the length of the transform: they simultaneously update the ODCT-II and ODST-II to reflect the inclusion of r additional data points and the removal of r old points, where 1 ≤ r ≤ N−1. Example applications include target recognition, where time constraints may not permit the immediate processing of every incoming data point, adaptive system identification, etc.
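For reference, a sketch of the even/odd distinction as commonly defined (the paper's exact normalization and indexing may differ): the familiar type-II even DCT uses the even denominator 2N, while, as the naming suggests, the odd variant replaces it with an odd integer, conventionally 2N − 1.

```latex
X^{\mathrm{EDCT\text{-}II}}_k = \sum_{n=0}^{N-1} x_n
  \cos\!\left[\frac{\pi k (2n+1)}{2N}\right],
\qquad
X^{\mathrm{ODCT\text{-}II}}_k = \sum_{n=0}^{N-1} x_n
  \cos\!\left[\frac{\pi k (2n+1)}{2N-1}\right].
```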
Compression encodes data so that it needs less storage or memory space. Compressing random data matters in cases where we need to preserve data that has low redundancy and whose power spectrum is close to that of noise. The noise-like signals used in various data hiding schemes have low redundancy and a low-energy spectrum, so lossy compression may destroy that low-energy spectral content. Since LSB-plane data has low redundancy, lossless compression algorithms such as run-length coding, Huffman coding, and arithmetic coding are ineffective at providing a good compression ratio. These problems motivated the development of a new class of compression algorithms for noise-like signals. In this paper, we introduce two new compression techniques that compress random, noise-like data with reference to a known pseudo-noise sequence generated using a key. In addition, we develop a representation model for digital media using pseudo-noise signals. In simulation, we compare our methods with existing techniques such as run-length coding, showing that run-length coding cannot compress random data whereas the proposed algorithms can. Furthermore, the proposed algorithms can be extended to all kinds of random data used in various applications.
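A speculative sketch of the core idea as the abstract describes it (the actual algorithms are not specified): remove a keyed pseudo-noise (PN) reference from the data so that the residual becomes redundant enough for ordinary run-length coding.

```python
import numpy as np

def pn_stream(key, n):
    # Keyed PN bit stream; a seeded PRNG stands in for the paper's generator.
    return np.random.default_rng(key).integers(0, 2, n, dtype=np.uint8)

def rle(bits):
    runs, count = [], 1
    for prev, cur in zip(bits, bits[1:]):
        if cur == prev:
            count += 1
        else:
            runs.append((int(prev), count)); count = 1
    runs.append((int(bits[-1]), count))
    return runs

key, n = 42, 64
# Noise-like data that secretly correlates with the keyed PN stream.
data = pn_stream(key, n) ^ np.concatenate([np.zeros(32, np.uint8),
                                           np.ones(32, np.uint8)])
residual = data ^ pn_stream(key, n)     # PN removed -> long runs remain
print(len(rle(data)), len(rle(residual)))   # many runs vs. just 2
```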
This paper addresses the problem of phase cycle-slip (FM click noise) elimination. For analysis and application demonstration, the signal of interest is a commercial FM transmission, received and sampled for subsequent demodulation as is typical in software-defined radios. There are two parts to this paper. The first part investigates the advantages of using data fitting to repair the time series in the neighbourhood of a detected click. Previous papers have considered time series in which only one point in a neighbourhood was considered a click, and hence only one point needed to be repaired. We consider the more difficult and practical case of a frequency-modulated signal passed through a band-pass filter and demodulated by a (software/digital) FM limiter-discriminator. This receive system spreads click distortion over multiple samples, making the repair that much more difficult. The methods of forward-backward linear prediction (FBLP, or Wiener filtering), least-squares polynomial fitting (LSPOLY), and twin Tukey window (TTW) filtering are discussed. The results, shown empirically, demonstrate that the TTW technique outperforms the FBLP and LSPOLY techniques for the presented application.
The second part of this paper discusses potential techniques to discern samples that are clicks from samples that are normal yet click-like. We consider the combination of autocorrelation, kurtosis, the fourth-order moment, and spectral characteristics to form a threshold detection level for identifying clicks.
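A minimal sketch of an LSPOLY-style repair, one of the baselines the paper compares against TTW: fit a least-squares polynomial to clean samples on both sides of a detected click and replace the corrupted span. The context length and polynomial order are placeholders.

```python
import numpy as np

def repair_span(x, bad, context=12, order=3):
    """x: 1-D samples; bad: (start, stop) indices of the click-corrupted
    span. Returns a copy of x with the span replaced by the fit."""
    start, stop = bad
    idx = np.r_[start - context : start, stop : stop + context]
    coeffs = np.polyfit(idx, x[idx], order)    # fit the clean neighbours
    y = x.copy()
    y[start:stop] = np.polyval(coeffs, np.arange(start, stop))
    return y
```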
In this paper we discuss some implementation aspects and preliminary results of real-time rotation- and scale-invariant (RSI) template matching on streaming images using a reconfigurable and scalable platform based on PowerFFT and FPGA hardware. The PowerFFT is the world's fastest complex floating-point, FFT-centric DSP processor. We manipulate the 2D RSI template matching algorithm so that it decomposes into mainly 1D FFT and 1D interpolation building blocks.
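A hedged sketch of the standard route to RSI matching (a Fourier-Mellin-style transform): the FFT magnitude is translation-invariant, and a log-polar resampling turns rotation and scale into shifts, built from exactly the 1D-FFT and 1D-interpolation pieces mentioned above. The paper's PowerFFT mapping itself is not reproduced here.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def log_polar_spectrum(img, n_angles=128, n_radii=128):
    mag = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    cy, cx = np.array(mag.shape) / 2.0
    theta = np.linspace(0, np.pi, n_angles, endpoint=False)
    r = np.exp(np.linspace(0, np.log(min(cy, cx)), n_radii))
    # Sample the magnitude along rays: each ray is a 1D interpolation.
    rows = cy + r[None, :] * np.sin(theta)[:, None]
    cols = cx + r[None, :] * np.cos(theta)[:, None]
    return map_coordinates(mag, [rows, cols], order=1)

# Rotating img circularly shifts this array along the theta axis; scaling
# shifts it along the log-r axis; both are recoverable by 1D correlation.
```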
IR and visible sensors are commonly used in target tracking and recognition systems, and image fusion for these sensors can effectively improve tracking and detection accuracy. Because the sampling rates of such sensors usually differ, a new feature-level image fusion scheme is devised in this paper. The scheme is general and can be widely used in the detection and tracking of moving targets when the sensors' sampling rates differ greatly (e.g., radar and visible imagery). The fusion scheme is divided into two parts: asynchronous and synchronous fusion. The target's contour is represented by a dynamic contour. In asynchronous fusion, for the sensor with the high sampling rate, a multiple-sequence image fusion method based on a statistical filtering model produces a measurement estimate of the target's contour. In synchronous fusion, a real-time differential coupling is then applied between the estimate from asynchronous fusion and the image from the sensor with the low sampling rate, in order to effectively constrain the convergent shape of the dynamic contour in the visible image. A comparative simulation experiment demonstrates the scheme's efficacy: with fusion, the average tracking error in the visible image decreases by 68.31%.
IR and visible images are commonly used in the detection and tracking of moving targets. A novel contour extraction scheme for moving targets is proposed in this paper. First, a motion segmentation technique is applied to obtain the initial contour. Second, a dynamic contour is used to represent the initial contour and converge to the target's contour. Last, a novel feature-level fusion is proposed that minimizes the squared norm of the difference between the control-point vectors in the two modal images; image registration is not needed. An experiment on a moving vehicle indicates that, for the visible image, the average contour extraction error decreases by 58.14% after fusion. Meanwhile, a fast iteration algorithm for the dynamic contour based on the Newmark method is devised and contrasted with the Wilson method; the comparison indicates that its computational complexity decreases by 21.01%. The fusion is implemented using only control-point vectors and is suitable for real-time processing.
The increasing number and complexity of operational sensors (radar, infrared, hyperspectral, ...) and the availability of huge amounts of data lead to more and more sophisticated information presentations. But one key element of the IMINT line cannot be improved beyond initial system specification: the operator.
To overcome this issue, we have to better understand human visual object representation.
Object recognition theories in human vision balance between matching 2D template representations, carrying viewpoint-dependent information, and a viewpoint-invariant system based on structural description. Spatial frequency content is relevant because of early-vision filtering, and orientation in depth is an important variable for challenging object constancy.
Three objects, seen from three different points of view in a natural environment, provided the original images for this study. Test images combined spatial-frequency-filtered original images with an additive contrast level of white noise.
In the first experiment, the observer's task was a same-versus-different forced choice with a spatial alternative. Test images had the same noise level within a presentation row. The discrimination threshold was determined by modifying the white-noise contrast level by means of an adaptive method.
In the second experiment, a repetition blindness paradigm was used to further investigate the viewpoint effect on object recognition.
The results shed some light on how the human visual system processes objects displayed under different physical descriptions. This is important because targets that do not always match the physical properties of familiar visual stimuli can increase operational workload.
Feature selection algorithms based on artificial neural networks can be viewed as a special case of architecture pruning: compute the sensitivity of the network outputs with respect to pruned features. However, these methods usually require data normalization as preprocessing, which may alter characteristics of the original data that are important for classification. A neuro-fuzzy (NF) network is a fuzzy inference system (FIS) with self-learning ability. We combine it with an architecture pruning algorithm based on membership space and propose a new feature selection algorithm. Finally, experiments using both natural and synthetic data are carried out and compared with other methods; the results confirm the validity of the algorithm.
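For background, a sketch of the sensitivity-pruning idea the paper starts from: rank features by how much perturbing each one changes the trained model's outputs. The paper's neuro-fuzzy, membership-space variant is not shown; the model, data, and perturbation size below are placeholders.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def feature_sensitivity(model, X, eps=1e-2):
    base = model.predict_proba(X)
    scores = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] += eps                       # perturb one feature
        scores.append(np.abs(model.predict_proba(Xp) - base).mean())
    return np.array(scores)                   # low score -> prunable feature

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 5))
y = (X[:, 0] + 2 * X[:, 2] > 0).astype(int)   # only features 0 and 2 matter
model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                      random_state=0).fit(X, y)
print(feature_sensitivity(model, X).round(3))
```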
A continuous-wave (CW) radar has been used for the detection and classification of people based on the Doppler signatures they produce when walking. When a human walks, the motion of various components of the body, including the torso, arms, and legs, produces a very characteristic Doppler signature. Fourier transform techniques were used to analyze these signatures, and key features were identified that are highly representative of the human walking motion. Data were collected on a number of human subjects, and a simple classifier was developed to recognize people walking. The results of this study could have a wide range of security and perimeter protection applications involving the use of low-cost CW radars as remote sensors.
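An illustrative sketch only: a toy CW return whose torso Doppler line is modulated by sinusoidal limb micro-Doppler, analyzed with a short-time Fourier transform in the spirit of the paper's Fourier-based signature analysis. All signal parameters are invented.

```python
import numpy as np
from scipy.signal import spectrogram

fs, T = 2000.0, 4.0
t = np.arange(0, T, 1 / fs)
f_torso, f_gait, beta = 60.0, 2.0, 25.0    # assumed toy parameters (Hz)
# Instantaneous frequency f_torso + beta*sin(2*pi*f_gait*t): torso line
# plus periodic limb-swing sidebands (micro-Doppler).
phase = f_torso * t - (beta / (2 * np.pi * f_gait)) * np.cos(2 * np.pi * f_gait * t)
sig = np.exp(1j * 2 * np.pi * phase)
f, tt, S = spectrogram(sig, fs=fs, nperseg=256, noverlap=192,
                       return_onesided=False)
# Gait features, e.g. the limb-swing rate, appear as the sideband period in S.
```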
Many multiple-model (MM) algorithms for tracking maneuvering targets are available, but there are few comparative studies of their performance. This work compares seven MM algorithms for maneuvering target tracking in terms of tracking performance and computational complexity. Six of them are well known and widely used: the autonomous multiple-model algorithm, the generalized pseudo-Bayesian algorithms of first order (GPB1) and second order (GPB2), the interacting multiple-model (IMM) algorithm, the B-best based MM algorithm, and the Viterbi-based MM algorithm. Also considered is the reweighted interacting multiple-model algorithm, which was developed recently. The algorithms were compared using three scenarios. The first scenario consists of two segments of tangential acceleration, while the second consists of two segments of normal acceleration; both contain maneuvers that are represented by one of the models in the model set. The third scenario has a single maneuver consisting of a tangential and a normal acceleration. This maneuver is not covered by the model set and is used to see how the algorithms react to a maneuver outside the model set. Based on the study, there is no clear-cut best algorithm, but the IMM algorithm has the best computational complexity among the algorithms with acceptable tracking errors. It also showed remarkable robustness to model mismatch, and appears to be the top choice if computational cost is of concern.
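For reference, the mixing step that distinguishes the IMM estimator, in its standard textbook form (notation assumed): with model transition probabilities p_ij and model probabilities μ_i(k−1), each filter j restarts from a mixed estimate.

```latex
\mu_{i|j}(k-1) = \frac{p_{ij}\,\mu_i(k-1)}{\bar c_j}, \qquad
\bar c_j = \sum_i p_{ij}\,\mu_i(k-1),
```
```latex
\hat x_{0j}(k-1) = \sum_i \mu_{i|j}(k-1)\,\hat x_i(k-1), \qquad
P_{0j} = \sum_i \mu_{i|j}\!\left[P_i + (\hat x_i - \hat x_{0j})
         (\hat x_i - \hat x_{0j})^{\top}\right].
```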