This paper is concerned with the development of a framework in which reasonable bounds and approximations for the performance of automatic target recognition (ATR) systems may be obtained. The relative merits of evaluations focusing on various components of ATR systems are discussed. Techniques in which structural knowledge of the radar target (embodied in phenomenological and pattern models) might be used to characterize representations in pattern space and distances in decision space are described. Preliminary results corresponding to various assumptions regarding structural constraints are presented.
A database of 2 by 2 foot resolution ISAR images and flight data SAR images of the same vehicles has been used to compare the performance of several SAR algorithms for critical mobile target ATR. Algorithms evaluated include a full image nearest neighbor classifier, a nearest neighbor classifier using principal components image compression, a simple, reduced-dimension classifier where the features correspond to projections of the image onto the principal axes of the imaged object, and a MACE variant composite filter.
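A minimal sketch of the second of these ideas, nearest-neighbor classification after principal-components image compression, is shown below. Everything here (the function name `pca_nn_classifier`, the toy image sizes) is illustrative, not the study's actual implementation:

```python
import numpy as np

def pca_nn_classifier(train_imgs, train_labels, k):
    """Nearest-neighbor classifier in a k-dimensional PCA-compressed space."""
    # Stack flattened training images; each row is one image chip.
    X = np.stack([im.ravel() for im in train_imgs]).astype(float)
    mean = X.mean(axis=0)
    # Principal components via SVD of the mean-centered data.
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    basis = Vt[:k]                      # top-k principal directions
    train_proj = (X - mean) @ basis.T   # compressed training set

    def classify(img):
        z = (img.ravel().astype(float) - mean) @ basis.T
        d = np.linalg.norm(train_proj - z, axis=1)
        return train_labels[int(np.argmin(d))]
    return classify
```

Compressing to k coefficients cuts the per-template distance computation from the full pixel count down to k multiplies, which is the usual motivation for this variant.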
A new polarimetric subspace target detector based on the dihedral signal model for bright peaks within a spatially extended SAR target signature has previously been developed. This new polarimetric subspace target detector uses a very general spherically invariant random vector (SIRV) model with additive clutter for the SAR data. The SIRV model includes both the Gaussian and K-distributions which are commonly used to model SAR data. In this paper, we present performance results for several versions of this new target detector against real and simulated SAR data. We show that the Gaussian dihedral detector does a better job of separating a set of tactical military targets from natural clutter compared to detectors that assume no knowledge about the polarimetric structure of the target signal, including the PWF detector. We show that the K-distribution version of the dihedral detector performs very poorly against real SAR data. The GLRT used to develop the dihedral detector has no guarantee of optimality. We explore the accuracy of our signal model, and the accuracy and robustness of our parameter estimators, to reveal the limitations of the GLRT for non-Gaussian formulations of this problem.
The application and utility of multivariate data analysis techniques to synthetic aperture radar (SAR) automatic target recognizer (ATR) analysis and design is demonstrated on synthetically generated image data sets from the Xpatch scattering prediction code. The multivariate techniques and tools demonstrated include sampling interval estimation, intrinsic dimensionality estimation, nonparametric Bayes error estimation for performance evaluation, and estimation of the number of Gaussian modes that approximate the data sets. The utility of these techniques and tools to SAR ATR analysis and design are elucidated through quantitative results and discussions. The analysis techniques and tools discussed are enhancements of earlier ones that have been successfully applied to data sets consisting of a small number of samples of moderate dimensionality. References are given to those earlier reports that describe these methods, their theory, and earlier results. This paper focuses on the analysis and results of the enhanced methods and tools as applied to SAR data sets consisting of a small number of samples of large dimensionality. A considerable synergy of these combined multivariate statistical tools and image simulation tools is demonstrated. A general and powerful methodology for the quantification and evaluation of SAR ATR designs based upon a combination of these analysis and simulation tools is proposed.
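One of the tools mentioned, nonparametric Bayes error estimation, can be approximated with a leave-one-out 1-nearest-neighbor error rate: by the Cover-Hart bound, the Bayes error is asymptotically at least half the 1-NN error. The sketch below is a generic stand-in, not the enhanced estimator used in the paper:

```python
import numpy as np

def nn_error_rate(X, y):
    """Leave-one-out 1-NN error rate; by the Cover-Hart bound the
    Bayes error is asymptotically at least half this value."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    errs = 0
    for i in range(len(X)):
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf                      # exclude the sample itself
        errs += y[int(np.argmin(d))] != y[i]
    return errs / len(X)
```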
Given as input a large (roughly 1000 by 2000) high-resolution SAR image and a target description, the SAR automatic target recognition (ATR) problem is to locate all occurrences of the target in the image. ATR algorithms are assessed by their detection probability, their false alarm probability, and their running time. We propose an algorithm with a high detection probability, a moderate false alarm probability, and a fast running time -- it completes in seconds on a conventional workstation. Such an algorithm is suitable for the first stage in a multi-stage ATR algorithm. Our main contribution is determining that template shape is the crucial factor in fast binary template matching. The methodology presented should prove useful in other situations where binary template matching is effective in detecting targets.
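Binary template matching of the kind discussed reduces to sliding a 0/1 mask over a thresholded image and counting coincident pixels. The brute-force sketch below (names and thresholds are hypothetical) illustrates the basic operation that such a fast first stage must accelerate:

```python
def binary_match(image, template, min_hits):
    """Slide a binary template over a binary image; report the top-left
    positions where at least `min_hits` template pixels line up."""
    H, W = len(image), len(image[0])
    h, w = len(template), len(template[0])
    hits = []
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            score = sum(image[r + i][c + j] & template[i][j]
                        for i in range(h) for j in range(w))
            if score >= min_hits:
                hits.append((r, c))
    return hits
```

The naive cost is O(H*W*h*w) per template; the paper's observation that template shape drives the achievable speedup is about replacing this inner double loop with something far cheaper.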
This paper presents target detection and interrogation techniques for a foveal automatic target recognition (ATR) system based on the hierarchical scale-space processing of imagery from a rectilinear tessellated multiacuity retinotopology. Conventional machine vision captures imagery and applies early vision techniques with uniform resolution throughout the field-of-view (FOV). In contrast, foveal active vision features graded acuity imagers and processing coupled with context-sensitive gaze control, analogous to that prevalent throughout vertebrate vision. Foveal vision can operate more efficiently than uniform acuity vision in dynamic scenarios with localized relevance because resolution is treated as a dynamically allocable resource. Foveal ATR exploits the difference between detection and recognition resolution requirements and sacrifices peripheral acuity to achieve a wider FOV (e.g., faster search), greater localized resolution where needed (e.g., more confident recognition at the fovea), and faster frame rates (e.g., more reliable tracking and navigation) without increasing processing requirements. The rectilinearity of the retinotopology supports a data structure that is a subset of the image pyramid. This structure lends itself to multiresolution and conventional 2-D algorithms, and features a shift invariance of perceived target shape that tolerates sensor pointing errors and supports multiresolution model-based techniques. The detection technique described in this paper searches for regions-of-interest (ROIs) using the foveal sensor's wide-FOV peripheral vision. ROIs are initially detected using anisotropic diffusion filtering and expansion template matching to a multiscale Zernike polynomial-based target model. Each ROI is then interrogated to filter out false target ROIs by sequentially pointing a higher acuity region of the sensor at each ROI centroid and conducting a fractal dimension test that distinguishes targets from structured clutter.
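The retinotopic data structure is described as a subset of the image pyramid; a conventional pyramid built by 2x2 block averaging can be sketched as follows (a generic construction, not the paper's foveal sensor model):

```python
def build_pyramid(image, levels):
    """Build a coarse-to-fine image pyramid by 2x2 block averaging.
    `image` is a list of equal-length rows of numbers; each level
    halves both dimensions of the previous one."""
    pyr = [image]
    for _ in range(levels - 1):
        prev = pyr[-1]
        h, w = len(prev) // 2, len(prev[0]) // 2
        pyr.append([[(prev[2*r][2*c] + prev[2*r][2*c+1]
                      + prev[2*r+1][2*c] + prev[2*r+1][2*c+1]) / 4.0
                     for c in range(w)] for r in range(h)])
    return pyr
```

A foveal system differs in that only the region around the gaze point is kept at the finest level, while the periphery is represented only by the coarse levels.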
An automatic target recognition classifier is constructed that uses a set of dedicated vector quantizers (VQs). The background pixels in each input image are properly clipped out by a set of aspect windows. The extracted target area for each aspect window is then enlarged to a fixed size, after which a wavelet decomposition splits the enlarged extraction into several subbands. A dedicated VQ codebook is generated for each subband of a particular target class at a specific range of aspects. Thus, each codebook consists of a set of feature templates that are iteratively adapted to represent a particular subband of a given target class at a specific range of aspects. These templates are then further trained by a modified learning vector quantization (LVQ) algorithm that enhances their discriminatory characteristics. A recognition rate of 69.0 percent is achieved on a highly cluttered test set.
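The unsupervised codebook-generation step (before the LVQ fine-tuning) is essentially generalized Lloyd / k-means training. A toy sketch, with a farthest-point initialization chosen here purely for determinism, not taken from the paper:

```python
import numpy as np

def train_codebook(vectors, k, iters=10):
    """Generalized Lloyd (k-means) codebook training -- the unsupervised
    step that precedes any LVQ-style discriminative tuning."""
    X = np.asarray(vectors, dtype=float)
    # Farthest-point initialization keeps the starting codewords spread out.
    code = [X[0]]
    while len(code) < k:
        d = np.min([np.linalg.norm(X - c, axis=1) for c in code], axis=0)
        code.append(X[int(np.argmax(d))])
    code = np.array(code)
    for _ in range(iters):
        # Assign each training vector to its nearest codeword ...
        idx = np.argmin(np.linalg.norm(X[:, None] - code[None, :], axis=2), axis=1)
        # ... then move each codeword to the centroid of its cell.
        for j in range(k):
            if np.any(idx == j):
                code[j] = X[idx == j].mean(axis=0)
    return code
```

In the paper's scheme one such codebook is trained per subband, per target class, per aspect range; LVQ then nudges the codewords to sharpen class boundaries.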
A fundamental problem in computer vision is establishing correspondence between features in two images of the same scene. The computational burden in this problem is solving for the optimal mapping and transformation between the two scenes. In this paper we present a sieve algorithm for efficiently estimating the transformation and correspondence. A sieve algorithm uses approximations to generate a sequence of increasingly accurate estimates of the correspondence. Initially, the approximations are computationally inexpensive and are designed to quickly sieve through the space of possible solutions. As the space of possible solutions shrinks, greater accuracy is required and the complexity of the approximations increases. The features in the image are modeled as points in the plane, and the structure in the image is a planar graph between the features. By modeling the object in the image as a planar graph we allow the approximations to be designed with point-set matching algorithms, geometric invariants, and graph-processing algorithms. The sieve algorithm is demonstrated on three problems. The first is registering images of muscles taken with an electron microscope. The second is aligning images of geometric patterns taken with a charge-coupled device (CCD) camera. The third is recognizing objects in images taken with a CCD camera.
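The coarse-to-fine flavor of a sieve algorithm can be illustrated with the cheapest possible two-stage pipeline: a centroid-based translation estimate that sieves the transformation space, followed by a nearest-neighbor pass that fixes correspondence. This toy (the function name `sieve_correspond` is hypothetical) is far simpler than the graph-based approximations the paper designs:

```python
import numpy as np

def sieve_correspond(src, dst):
    """Two-stage sieve sketch for translated point sets:
    a cheap stage narrows the transformation, a finer stage
    resolves point-to-point correspondence."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    # Stage 1 (cheap): estimate the translation from the centroids.
    t = dst.mean(axis=0) - src.mean(axis=0)
    moved = src + t
    # Stage 2 (finer): nearest-neighbor correspondence under that estimate.
    pairs = []
    for i, p in enumerate(moved):
        j = int(np.argmin(np.linalg.norm(dst - p, axis=1)))
        pairs.append((i, j))
    return t, pairs
```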
We present the design of an automatic target recognition (ATR) system that is part of a hybrid system incorporating some domain knowledge. This design obtains an adaptive trade-off between training performance and memorization capacity by decomposing the learning process with respect to a relevant hidden variable. The probability of correct classification over 10 target classes is 73.4%. The probability of correct classification between the target class and the clutter class (where the clutter samples are the false alarms obtained from another ATR) is 95.1%. These performances can be improved by reducing the memorization capacity of this system, since its estimated capacity is too large.
The work of the U.S. Army Research Laboratory (ARL) in the area of algorithms for the identification of static military targets in single-frame electro-optical (EO) imagery has demonstrated great potential in platform-based automatic target identification (ATI). In this case, the term identification is used to mean being able to tell the difference between two military vehicles -- e.g., distinguishing the M60 from the T72. ARL's work includes not only single-sensor forward-looking infrared (FLIR) ATI algorithms, but also multi-sensor ATI algorithms. We briefly discuss ARL's hybrid model-based/data-learning strategy for ATI, which represents a significant step forward in ATI algorithm design. For example, in the single-sensor FLIR case it allows the human algorithm designer to build directly into the algorithm knowledge that can be adequately modeled at this time, such as the target geometry, which in the FLIR realm directly translates into the target silhouette. In addition, it allows structure that is not currently well understood (i.e., not adequately modeled) to be incorporated through automated data-learning algorithms, which in a FLIR directly translates into an internal thermal target structure signature. This paper shows the direct applicability of this strategy to both the single-sensor FLIR as well as the multi-sensor FLIR and laser radar.
Future wide-area surveillance systems mounted on unmanned air vehicles (UAVs), such as Tier II+, will be capable of collecting SAR imagery at prodigious coverage rates (greater than 1.5 km²/sec at 1 m resolution). One important consideration for making such systems economically feasible is squeezing the large amount of SAR imagery through an available communications link. When the sensor platform is beyond the line of sight from the ground processing facility, it is highly desirable to transmit the imagery via a 1.5 megabit per second T1 satellite communications data link; it would be prohibitively expensive to ensure the availability of a wider bandwidth satcom link at any point on the globe. Use of a T1 link creates an onerous burden for SAR image compression algorithms. In the Tier II+ scenario, for example, use of a T1 link implies a compression rate of less than half a bit per pixel. In the longer term, systems will have greater coverage areas and higher resolution capabilities; the compression requirement will be substantially more severe. Conventional image compression algorithms are incapable of attaining the required compression while retaining the image fidelity required for processing at the ground station. Clipping service is a system concept that reduces communication requirements by using automatic target detection and recognition (ATD/R) algorithms onboard the UAV. The ATD/R algorithms identify regions of interest in the collected imagery. In the regions of interest, the imagery is transmitted with highest fidelity. In other areas, the imagery is transmitted with less fidelity, thereby reducing the communication bandwidth required. In this paper, we describe a multiple-resolution clipping service system. In this system, regions of interest are identified by ATD/R algorithms. The regions of interest are transmitted at the finest resolution achievable by the sensor; elsewhere, imagery is transmitted with reduced resolution and reduced data rate.
The system utilizes a multiple-resolution image formation algorithm to reduce computational load: ATD/R algorithms are applied to coarse resolution imagery; the imagery is subsequently processed to fine resolution imagery only where targets are likely to be present. This reduces computation because only a fraction of the imagery is processed to fine resolution. In the paper, we determine the communication requirements for the multiscale system assuming Tier II+ parameters. We demonstrate that it is feasible to transmit Tier II+ imagery via a T1 data link using the clipping service concept.
In this paper, we propose an efficient multiscale approach for the segmentation of natural clutter, specifically grass and forest, in synthetic aperture radar (SAR) imagery. This method exploits the coherent nature of SAR sensors. In particular, we exploit the characteristic statistical differences in imagery of different clutter types, as a function of scale, due to radar speckle. We employ a recently introduced class of multiscale stochastic processes that provide a powerful framework for describing random processes and fields that evolve in scale. We build models representative of each category of clutter of interest (i.e., grass and forest), and use these models to segment the imagery into these two clutter classes. The scale-autoregressive nature of the models allows extremely efficient calculation of the relative likelihoods of different clutter classifications for windows of SAR imagery, and we use these likelihoods as the basis for classifying image pixels and for accurately estimating forest-grass boundaries. We evaluate the performance of the technique by testing it on 0.3 meter SAR data gathered with the Lincoln Laboratory millimeter-wave SAR.
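The segmentation decision itself -- score a window of pixels under each clutter model and keep the more likely class -- can be sketched with plain Gaussian per-pixel likelihoods standing in for the paper's scale-autoregressive models (all model parameters below are made up for illustration):

```python
import math

def classify_window(pixels, models):
    """Pick the clutter class whose model assigns the window the
    highest log-likelihood. `models` maps a class name to a
    (mean, std) pair of an assumed per-pixel Gaussian."""
    best, best_ll = None, -math.inf
    for name, (mu, sigma) in models.items():
        ll = sum(-0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma)
                 for x in pixels)
        if ll > best_ll:
            best, best_ll = name, ll
    return best
```

In the paper the per-pixel Gaussian is replaced by a model of how window statistics evolve across scales, which is what makes the likelihoods discriminative for speckled SAR clutter.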
The objectives of the current research in target recognition are to determine techniques for understanding the nature and special features of a target, and to use those to develop specific identification techniques. Bayesian networks have received much attention as an efficient way of combining evidence from different sources and reasoning under uncertainty. For target recognition, a Bayesian network built from the models involves both discrete and continuous variables. In this paper, an efficient algorithm based on stochastic simulation is proposed which has the following important features: (1) it can handle a generic network with non-linear, non-Gaussian, discrete-continuous variables and arbitrary topology; (2) it can pre-compute and store evidence likelihood functions for a set of Bayes nets in the library; and (3) it can efficiently compute the results incrementally, with caching capability. A method to construct a Bayesian network from a given training database is also introduced. Simulation examples with SAR data for ATR are presented.
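Stochastic simulation for Bayes-net inference can be illustrated with likelihood weighting on a minimal two-node network (Target -> Observation); the proposed algorithm handles far more general topologies, so treat this only as the core sampling idea:

```python
import random

def likelihood_weighting(p_t, p_obs_given, evidence, n=20000, seed=1):
    """Likelihood-weighted sampling for a two-node net Target -> Observation
    with Observation observed. `p_t` is the prior P(Target=1);
    `p_obs_given[t]` is P(Obs=1 | Target=t). Returns an estimate of
    P(Target=1 | Obs=evidence)."""
    rng = random.Random(seed)
    w = {0: 0.0, 1: 0.0}
    for _ in range(n):
        # Sample the unobserved node from its prior ...
        t = 1 if rng.random() < p_t else 0
        # ... and weight the sample by the likelihood of the fixed evidence.
        like = p_obs_given[t] if evidence else 1 - p_obs_given[t]
        w[t] += like
    return w[1] / (w[0] + w[1])
```

The weights are exactly the evidence likelihood functions the abstract mentions; pre-computing and caching them is what makes incremental evaluation cheap.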
Multilayer perceptrons (MLP) have been widely applied to pattern recognition. It is found that when the data has a multi-modal distribution, a standard MLP is prone to local minima and a valid neural net classifier is difficult to obtain. In this paper, we propose a two-phase learning modular (TLM) neural net architecture to tackle the local minimum problem. The basic idea is to transform the multi-modal distribution into a known and more learnable distribution before using a global MLP to classify the data. We applied the TLM to inverse synthetic aperture radar (ISAR) automatic target recognition (ATR), and compared its performance with that of the MLP. Experiments show that the MLP's learning often ends in a poor local minimum if its net size or initial point is not chosen properly. Its performance depends strongly on the number of training samples as well as the architecture parameters. On the other hand, the TLM is much easier to train and can yield good recognition accuracy, at least comparable to that of the MLP. In addition, the TLM's performance is more robust.
A multisensor feature-based fusion approach to target recognition using a framework of model theory is proposed. The best discrimination basis algorithm (BDBA), based on the best basis selection technique, and the sensory data fusion system (SDFS), based on logical models and theories, are applied for feature extraction. The BDBA selects the most discriminant basis. The SDFS first selects features, which are interpretable in terms of symbolic knowledge about the domain, from the most discriminant basis determined for each sensor separately. Then, it fuses these features into one combined feature vector. The SDFS uses formal languages to describe the domain and the sensing process. Models represent sensor data, operations on data, and relations among the data. Theories represent symbolic knowledge about the domain and about the sensors. Fusion is treated as a goal-driven operation of combining languages, models, and theories related to different sensors into one combined language, one combined model of the world, and one combined theory. The results of our simulations show that the recognition accuracy of the proposed automatic multisensor feature-based recognition system (AMFRS) is better than the recognition accuracy of a system that performs recognition using most discriminant wavelet coefficients (MDWC) as features. The AMFRS utilizes a model-theory framework (SDFS) for feature selection, while MDWC are selected from all the most discriminant bases determined for each sensor using a relative entropy measure.
This paper presents a novel goal-driven approach for designing a knowledge-based system for information extraction and decision-making for target recognition. The underlying goal-driven model uses a goal frame tree schema for target organization, a hybrid rule-based pattern-directed formalism for target structural encoding, and a goal-driven inferential control strategy. The knowledge base consists of three basic structures for the organization and control of target information: goals, target parameters, and an object-rulebase. Goal frames represent target recognition tasks as goals and subgoals in the knowledge base. Target parameters represent characteristic attributes of targets that are encoded as information atoms. Information atoms may have one or more assigned values and are used for information extraction. The object-rulebase consists of pattern/action assertional implications that describe the logical relationships existing between target parameter values. A goal realization process formulates symbolic pattern expressions whose atomic values map to target parameters contained a priori in a hierarchical database of target state information. Symbolic pattern expression creation is accomplished via the application of a novel goal-driven inference strategy that logically prunes an AND/OR tree constructed from the object-rulebase. Similarity analysis is performed via pattern matching of query symbolic patterns and a priori instantiated target parameters.
Fusion strategies, conceived for deployment in multi-sensor environments wherein individual sensors or information sources offer only binary decisions, are presented. Alternative strategies are analyzed to derive a relative assessment of their application potential. The analysis takes into account the a priori probabilities, or likelihoods, of the different decision choices in the environment when developing estimates of the processing demands corresponding to these strategies. This brings into focus the feasibility of designing the system to be adaptive, so that it takes full advantage of the information about the relative likelihoods of the different decision choices facing the processor as that information evolves during a specific instance of the decision-making process. Strategies that are insensitive to the relative probabilities of the alternative decision choices are also identified in this analysis. The relative merits of these alternatives, their preferred domains in the class a priori probability space, and their relevance to different application environments are discussed to identify the natural domains of application of these strategies.
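For independent binary sensor decisions with known operating points and class priors, the classical fusion rule (due to Chair and Varshney; the abstract does not name a specific rule, so this is context, not the paper's strategy) is a weighted log-likelihood vote. A compact sketch:

```python
import math

def fuse_decisions(decisions, pd, pfa, prior):
    """Fuse independent binary sensor decisions into a global decision.
    `decisions[i]` is sensor i's 0/1 vote, `pd[i]`/`pfa[i]` its detection
    and false-alarm probabilities, `prior` the a priori P(target)."""
    # Start from the prior odds, then add each sensor's evidence weight.
    llr = math.log(prior / (1 - prior))
    for u, d, f in zip(decisions, pd, pfa):
        llr += math.log(d / f) if u else math.log((1 - d) / (1 - f))
    return 1 if llr > 0 else 0
```

The prior term is what makes such a fuser adaptive: as the estimated likelihoods of the decision choices evolve, the same sensor votes can tip the global decision differently.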
Many previous approaches to the Neyman-Pearson constrained fusion of statistically independent sensor decisions have concentrated on optimizing system performance at the sensors or at the fusion center but not both. This paper discusses a simple method for optimizing decision fusion system performance when the thresholds at the sensors and at the fusion center are allowed to vary. Two methods for optimizing system performance are investigated. The first method seeks to optimize system performance by selecting the local minimum probability of error thresholds at the sensor followed by an optimization of the fusion center weights. The second method is similar to the first, except it examines all possible combinations of the best two local minimum probability of error thresholds. Simulation results are used to show that while these two methods do not always provide an optimal solution, they are sufficiently accurate in many applications. Both methods are easily implemented and may be useful for real-time applications in which exhaustive computation of sensor operating points is not feasible, or for providing an initial solution for other more sophisticated methods.
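A brute-force version of the first idea -- sweep candidate sensor thresholds, compute the fused system operating point, and keep the best feasible one -- looks like this for two identical Gaussian sensors under an OR fusion rule. This is a deliberately simplified stand-in for the paper's method (which optimizes fusion-center weights rather than assuming an OR rule):

```python
import math

def roc_point(thr, mu):
    """Pd and Pfa of one sensor at threshold `thr`, assuming
    noise ~ N(0,1) and signal-plus-noise ~ N(mu,1)."""
    q = lambda x: 0.5 * math.erfc(x / math.sqrt(2))  # Gaussian tail
    return q(thr - mu), q(thr)

def best_or_fusion(thresholds, mu, max_pfa):
    """Sweep a common per-sensor threshold for two identical sensors
    under OR-rule fusion; keep the highest system Pd whose system
    Pfa meets the Neyman-Pearson constraint `max_pfa`."""
    best = (0.0, None)
    for t in thresholds:
        pd, pfa = roc_point(t, mu)
        sys_pd = 1 - (1 - pd) ** 2      # OR rule, independent sensors
        sys_pfa = 1 - (1 - pfa) ** 2
        if sys_pfa <= max_pfa and sys_pd > best[0]:
            best = (sys_pd, t)
    return best
```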
In several recent papers we have demonstrated that classical single-sensor, single-source statistics can be directly extended to the multisensor, multisource case. The basis for this generalization is a special case of random set theory called 'finite-set statistics,' which allows familiar statistical techniques to be directly generalized to data fusion problems. The emphasis of previous papers has been on multisensor, multitarget detection, classification, and localization -- especially both parametric and nonparametric point estimation (MLE, MAP, and reproducing-kernel estimators). However, during the last two decades I.R. Goodman, H.T. Nguyen and others have shown that several basic aspects of expert-systems theory -- fuzzy logic, Dempster-Shafer evidential theory, and rule-based inference -- can be subsumed within a completely probabilistic framework based on random set theory. The purpose of this paper is to show that this body of research can be rigorously integrated with multisensor, multitarget estimation using random set theory as the unifying paradigm.
In a series of prior papers, Kadar et al. have designed, developed, and experimentally evaluated a family of centralized detection algorithms for multi-frequency radars operating in various degrees of clutter. Previously, two centralized fusion algorithms, called non-coherent integration (NCI) and T-squared (TSQ), followed by adaptive constant false alarm rate (ACFAR) post-processing, were evaluated. In this paper, an optimum adaptive CFAR version of the T-squared algorithm, called ATSQ, is developed. The performance of ATSQ is compared with the NCI and TSQ algorithms using measured data in clutter. The fusion performance comparisons are presented in terms of receiver operating characteristics (ROC). It is shown that the new ATSQ algorithm is robust to changes of the background clutter. Its ROC performance is near the optimum achievable, and is significantly better than either NCI or TSQ.
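The CFAR post-processing mentioned here is, in its simplest cell-averaging form, a sliding threshold set by a local noise estimate. A generic sketch (the parameters are illustrative and this is not the adaptive ACFAR/ATSQ design):

```python
def ca_cfar(samples, guard, train, scale):
    """Cell-averaging CFAR on a 1-D range profile: declare a detection
    when a cell exceeds `scale` times the mean of its training cells,
    skipping `guard` cells on each side of the cell under test."""
    hits = []
    n = len(samples)
    for i in range(n):
        cells = []
        for j in range(i - guard - train, i - guard):
            if 0 <= j < n:
                cells.append(samples[j])
        for j in range(i + guard + 1, i + guard + train + 1):
            if 0 <= j < n:
                cells.append(samples[j])
        if cells and samples[i] > scale * sum(cells) / len(cells):
            hits.append(i)
    return hits
```

Because the threshold tracks the local clutter level, the false alarm rate stays roughly constant as the background changes, which is the property the adaptive variants refine.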
In a series of papers, Thomopoulos and Okello designed and evaluated a robust CFAR detector (code-named RobCFAR) for multi-frequency radar data. Validation of the detector/fuser was done using experimental data from the Rome Lab Predetection Fusion Program. In this paper, the optimal centralized and distributed detectors for the same multi-frequency radar data are developed and their performance is compared with that of the RobCFAR detectors. Several problems occur due to the necessity of on-line evaluation of data statistics. These problems are addressed, and a comparison is made between the optimal and CFAR data fusion techniques using the experimental data from the Rome Lab Predetection Fusion Program.
A much-needed capability in today's tactical Air Force is weapons systems capable of precision guidance in all weather conditions against targets in high-clutter backgrounds. To achieve this capability, the Armament Directorate of Wright Laboratory, WL/MN, has been exploring various seeker technologies, including multi-sensor fusion, that may yield cost-effective systems capable of operating under these conditions. A critical component of these seeker systems is their autonomous acquisition and tracking algorithms. It is these algorithms which will enable the autonomous operation of the weapons systems in the battlefield. In the past, a majority of the tactical weapon algorithms were developed in a manner which resulted in codes that were not releasable to the community, either because they were considered company proprietary or competition sensitive. As a result, the knowledge gained from these efforts did not transition through the technical community, thereby inhibiting the evolution of their development. In order to overcome this limitation, WL/MN has embarked upon a program to develop non-proprietary multi-sensor acquisition and tracking algorithms. To facilitate this development, a testbed has been constructed consisting of the Irma signature prediction model, data analysis workstations, and the modular algorithm concept evaluation tool (MACET) algorithm. All three of these components have been enhanced to accommodate both multi-spectral sensor fusion systems and the three-dimensional signal processing techniques characteristic of ladar. MACET is a graphical-interface-driven system for rapid prototyping and evaluation of both unitary and fused sensor algorithms. This paper describes the MACET system and specifically elaborates on the three-dimensional capabilities recently incorporated into it.
This paper describes a project that was sponsored by the U.S. Army Space and Strategic Defense Command (USASSDC) to develop, test, and demonstrate sensor fusion algorithms for target recognition. The purpose of the project was to exploit the use of sensor fusion at all levels (signal, feature, and decision levels) and in all combinations to improve target recognition capability against tactical ballistic missile (TBM) targets. These algorithms were trained with simulated radar signatures to accurately recognize selected TBM targets. The simulated signatures represent measurements made by two radars (S-band and X-band) with the targets at a variety of aspect and roll angles. Two tests were conducted: one with simulated signatures collected at angles different from those in the training database, and one using actual test data. The test results demonstrate a high degree of recognition accuracy. This paper describes the training and testing techniques used, shows the fusion strategy employed, and illustrates the advantages of exploiting multi-level fusion.
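Decision-level fusion of the two radars' classifier outputs can be sketched, under an independence assumption, by the product (naive-Bayes) rule. This is a hypothetical illustration, not the project's actual fusion strategy, and the posterior values are invented.

```python
def fuse_decisions(posteriors):
    """Product-rule fusion: multiply per-sensor class posteriors
    elementwise, then renormalize to a proper distribution."""
    fused = [1.0] * len(posteriors[0])
    for p in posteriors:
        fused = [f * pi for f, pi in zip(fused, p)]
    total = sum(fused)
    return [f / total for f in fused]

# Two radars each report posteriors over three hypothetical TBM classes.
s_band = [0.6, 0.3, 0.1]
x_band = [0.5, 0.2, 0.3]
fused = fuse_decisions([s_band, x_band])
```

When both sensors weakly favor the same class, the product rule reinforces that class; when they disagree, it tempers both claims, which is one reason decision-level fusion can outperform either sensor alone.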
In many commercial and military activities such as manufacturing, robotics, surveillance, target tracking and military command and control, information may be gathered by a variety of sources. The types of sources which may be used cover a broad spectrum and the data collected may be either numerical or linguistic in nature. Data fusion is the process in which data from multiple sources are combined to provide enhanced information quality and availability over that which is available from any individual source. The question is how to assess these enhancements. Using the U.S. JDL Model, the process of data fusion can be divided into several distinct levels. The first three levels are object refinement, situation refinement and threat refinement. Finally, at the fourth level (process refinement) the performance of the system is monitored to enable product improvement and sensor suite management. This monitoring includes the use of measures of information from the realm of generalized information theory to assess the improvements or degradation due to the fusion processing. The premise is that decreased uncertainty equates to increased information. At each level, the uncertainty may be represented in different ways. In this paper we give an overview of the existing measures of uncertainty and information, and propose some new measures for the various levels of the data fusion process.
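The premise that decreased uncertainty equates to increased information can be made concrete with the simplest such measure, Shannon entropy: the information gained by a fusion step is the entropy of the hypothesis distribution before fusion minus the entropy after. This is a minimal sketch; the class probabilities below are invented.

```python
import math

def shannon_entropy(p):
    """Shannon entropy (bits) of a discrete probability distribution."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def information_gain(prior, posterior):
    """Uncertainty reduction achieved by a fusion step."""
    return shannon_entropy(prior) - shannon_entropy(posterior)

prior = [0.25, 0.25, 0.25, 0.25]       # four equally likely object types
posterior = [0.85, 0.05, 0.05, 0.05]   # after fusing two sensor reports
gain = information_gain(prior, posterior)
```

A negative gain would indicate that the fusion step degraded, rather than improved, the state of knowledge, which is exactly the kind of process-refinement signal the fourth JDL level monitors.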
Currently, a number of systems for determining or verifying the identity of an individual have been developed that rely on a single, intricate identifying feature such as a fingerprint or the retina of an eye. The large amount of detail required by such systems generally complicates sensing and necessitates a certain amount of direct interaction with human users. Although current systems work reasonably well, it is advantageous to explore new techniques that reduce the amount of interaction required and minimize the possibility of deception. The development of a standoff (i.e., no physical contact) biometric identification system capable of quickly determining or verifying the identity of an individual with a low probability of error is described. The low probability of error is obtained by fusing coarse features remotely acquired from the face, hand, and voice. Individually, these features provide inadequate error performance; however, complementary information obtained by fusing or combining the features in a higher-dimensional feature space enables reliable identity determination. The use of coarse features simplifies the remote sensing requirements, reduces the computing power required for feature extraction, and minimizes human interaction with the system. The simultaneous use of multiple features from multiple sensors lessens the possibility of deception. A description of the system is presented together with preliminary performance results.
This paper introduces a new architecture for a sensor measurement scheduler, as well as a dynamic sensor scheduling algorithm called the on-line, greedy, urgency-driven, preemptive scheduling algorithm (OGUPSA). OGUPSA incorporates a preemptive mechanism which uses three policies: (1) most-urgent-first (MUF), (2) earliest-completed-first (ECF), and (3) least-versatile-first (LVF). The three policies are applied successively to dynamically allocate, schedule, and distribute a set of arriving tasks among a set of sensors. OGUPSA can also detect the failure of a task to meet a deadline, and can generate an optimal schedule, in the sense of minimum makespan, for a group of tasks with the same priorities. A side benefit is OGUPSA's ability to improve dynamic load balance among all sensors while remaining a polynomial-time algorithm. Results of a simulation are presented for a simple sensor system.
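A loose sketch of how the three successive policies might interact in a greedy assignment loop follows. This is a hypothetical simplification, without preemption or deadline checking, and all names, fields, and the tie-breaking details are invented rather than taken from OGUPSA itself.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    priority: int        # larger = more urgent
    duration: float
    capable: set         # names of sensors able to execute this task

def schedule(tasks, sensors):
    """Greedy sketch of successive MUF / ECF / LVF policies."""
    free_at = {s: 0.0 for s in sensors}   # when each sensor next becomes free
    plan = []
    # Policy 1 (most-urgent-first): serve the task queue by priority.
    for t in sorted(tasks, key=lambda t: -t.priority):
        candidates = [s for s in sensors if s in t.capable]
        if not candidates:
            continue
        # Policy 3 proxy: a sensor's "versatility" = how many tasks can use it.
        def versatility(s):
            return sum(1 for u in tasks if s in u.capable)
        # Policy 2 (earliest-completed-first), ties broken by
        # least-versatile sensor so flexible sensors stay available.
        s = min(candidates, key=lambda s: (free_at[s] + t.duration, versatility(s)))
        start = free_at[s]
        free_at[s] = start + t.duration
        plan.append((t.name, s, start))
    return plan
```

Reserving the most versatile sensors for later tasks is the intuition behind the least-versatile-first tie-break: a task that only one sensor can serve should not find that sensor already occupied by a task any sensor could have handled.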
This paper demonstrates an approach to sensor scheduling and sensor management which effectively deals with the search/track decision problem. Every opportunity a sensor has to sense the environment equates to a certain amount of information which can be obtained about the state of the environment. A fundamental question is how to use this potential information to manage a suite of sensors while maximizing one's net knowledge about the state of the environment. The fundamental problem is whether to use one's resources to track targets already in track or to search for new ones. Inherent in this search/track problem is the further decision as to which sensor to use. A computer model has been developed that simulates a modest multiple sensor, multiple threat scenario. Target maneuvers are modeled using the Singer model for manned maneuvering vehicles. Each sensor's capabilities and characteristics are captured in the model by converting its energy constraints to a probability of detecting a target as a function of range and field of view (beamwidth). The environment is represented by a probability distribution of a target being at a given location. As the environment is sensed and targets are detected, the environment's probability distribution is continually updated to reflect the new probability state of the environment. This probability state represents the system's best estimate about the location of all targets in track and the probable location of as-yet-undetected targets.
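The continual update of the environment's probability distribution after a scan that finds nothing can be sketched with Bayes' rule: probability mass drains out of the scanned cells in proportion to the sensor's detection probability. This is a minimal illustration under invented assumptions, not the paper's model.

```python
import numpy as np

def update_search_map(prior, scanned, p_detect):
    """Bayesian update of a target-location probability grid after a
    scan covering `scanned` cells that reported no detection.

    P(target in cell | no detection) is proportional to
    prior * (1 - p_detect) in scanned cells, prior elsewhere.
    """
    post = prior.copy()
    post[scanned] *= (1.0 - p_detect)
    return post / post.sum()
```

Repeatedly applying this update concentrates probability in unscanned regions, which is precisely the quantity a search/track scheduler can weigh against the expected information from revisiting a target already in track.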
This paper proposes a new fuzzy logic approach for solving the data association problem typically encountered in the application of target tracking. A single massive target maneuvering in a heavily-cluttered underwater environment is considered. The proposed fuzzy data association (FDA) approach is combined with an interacting multiple model (IMM) filter. The resultant IMM-FDA tracking algorithm is applied to estimate the state of the maneuvering target, and its performance is compared to that of a combination of an IMM filter and the probabilistic data association (PDA) scheme. The obtained results indicate that the IMM-FDA significantly outperforms the IMM-PDA at the expense of requiring more computational cost and introducing a short processing lag.
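One plausible reading of fuzzy data association, sketched under generic assumptions, is to assign each validated measurement a fuzzy membership that decays with its normalized innovation distance, then use the normalized memberships as weights in a PDA-style combined innovation. The membership shape and parameters below are hypothetical, not the paper's FDA design.

```python
import numpy as np

def fuzzy_association_weights(innovations, S, spread=2.0):
    """Fuzzy membership weights for a set of validated measurements.

    innovations : list of innovation vectors (z - z_pred)
    S           : innovation covariance
    Membership decays with Mahalanobis distance from the predicted
    measurement; the weights are normalized to sum to one.
    """
    Sinv = np.linalg.inv(S)
    d2 = np.array([float(v @ Sinv @ v) for v in innovations])
    mu = np.exp(-d2 / (2.0 * spread ** 2))   # bell-shaped membership
    return mu / mu.sum()

def combined_innovation(innovations, weights):
    """Weighted innovation used in the state update, PDA-style."""
    return sum(w * v for w, v in zip(weights, innovations))
```

Unlike PDA's probabilistic weights, the fuzzy memberships need no explicit clutter-density model, which is one motivation for fuzzy association in heavily cluttered environments.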
In a multiple radar tracking environment, measurements from different sensors observing the same target or track must be combined optimally in order to provide accurate information. One problem that has to be overcome before a weighted combination of the measurements can be made is the possible difference in the scanning periods used by the sensors. The different scanning periods produce different resolutions that must be reconciled before the data are fused. Iterated function systems (IFS) have been used successfully for interpolation and data compression. When a measured track is to be fused with another of different resolution, the underlying problem is one of accurate interpolation. Tracks, be they linear or curvilinear, have a certain amount of self-similarity as geometrical objects, just as natural coastlines are found to be fractal. Linear and piecewise-linear IFS have been shown to provide excellent interpolation and compression even for non-fractal objects. In this work, we report two interpolation schemes based on linear IFS for tracks measured at different resolutions. Simulations using linear and curvilinear tracks are performed, and the results are compared to those using linear interpolation.
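A minimal sketch of linear-IFS (fractal) interpolation through track samples, following the standard fractal interpolation construction rather than the paper's specific schemes, is shown below; the vertical scaling factor d is a free, invented parameter.

```python
import numpy as np

def fif_interpolate(xs, ys, d=0.2, n_iter=8):
    """Linear fractal interpolation function through (xs, ys).

    Each affine map w_i sends the whole track onto the segment
    [x_{i-1}, x_i]; |d| < 1 is the vertical scaling factor.
    Iterating the maps on the data points densifies the track.
    """
    x0, xN = xs[0], xs[-1]
    y0, yN = ys[0], ys[-1]
    pts = np.column_stack([xs, ys]).astype(float)
    maps = []
    for i in range(1, len(xs)):
        # Coefficients force w_i(x0, y0) = (x_{i-1}, y_{i-1})
        # and w_i(xN, yN) = (x_i, y_i).
        a = (xs[i] - xs[i - 1]) / (xN - x0)
        e = (xN * xs[i - 1] - x0 * xs[i]) / (xN - x0)
        c = (ys[i] - ys[i - 1] - d * (yN - y0)) / (xN - x0)
        f = (xN * ys[i - 1] - x0 * ys[i] - d * (xN * y0 - x0 * yN)) / (xN - x0)
        maps.append((a, e, c, f))
    for _ in range(n_iter):
        pts = np.vstack([
            np.column_stack([a * pts[:, 0] + e,
                             c * pts[:, 0] + d * pts[:, 1] + f])
            for a, e, c, f in maps
        ])
    return pts[np.argsort(pts[:, 0])]
```

With d = 0 the construction reduces to ordinary piecewise-linear interpolation; nonzero d injects the self-similar detail that motivates using IFS for tracks in the first place.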
The Jindalee Operational Radar Network (JORN) consists of a network of overlapping skywave over-the-horizon radars (OTHR) which will provide surveillance of the air space and ocean off the northern coast of Australia. An OTHR can yield multiple tracks from a single target as a result of the complex structure of the ionosphere which varies spatially and temporally. Overlapping OTHRs, microwave radars and other sensors can simultaneously form tracks on the same targets. Consequently there is a heavy burden on operators to associate and combine those tracks which are derived from the same target. The broad objective of this work is to perform the fusion of OTHR tracks with tracks from other sources such as microwave radars and targets reporting Global Positioning System (GPS) data. An efficient algorithm has been developed which can incorporate advice about the propagation paths through the ionosphere and produce a list of associations of OTHR tracks and non-OTHR tracks. This algorithm is a generalization of a previously developed algorithm for associating tracks from multiple OTHRs. It can operate in real-time and allows for the possibility of refining the ionospheric advice system based on ground registered tracks. A brief description of the association algorithm is reported with discussions on the method of determining the closeness of tracks, the management of uncertainty in the track state, and the selection of associated tracks.
Design of a fuzzy controller requires specification of both membership functions and decision rules. Specification of the membership functions for a fuzzy logic controller has been an important issue; the traditional way of selecting them has been, in most cases, an ad hoc procedure. In this paper, an optimization algorithm based on simulated annealing for designing the membership functions and fuzzy rule base of a fuzzy controller is introduced. Using the proposed technique, an optimal fuzzy controller for the truck backing-up problem is designed and implemented, in which optimal fuzzy membership functions and fuzzy rules are obtained. A neural network controller for the same truck problem is also designed for comparison.
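A generic simulated-annealing loop of the kind that could tune membership-function parameters is sketched below with an invented toy cost; the paper's actual controller objective and parameterization are not given in the abstract.

```python
import math
import random

def anneal(cost, x0, step=0.1, t0=1.0, alpha=0.95, iters=2000, seed=0):
    """Simulated annealing over a parameter vector (e.g. triangle
    membership-function centers and widths), minimizing `cost`."""
    rng = random.Random(seed)
    x, fx = list(x0), cost(x0)
    best, fbest = list(x), fx
    t = t0
    for _ in range(iters):
        cand = [xi + rng.gauss(0, step) for xi in x]
        fc = cost(cand)
        # Always accept improvements; accept worse moves with a
        # probability that shrinks as the temperature cools.
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
        t *= alpha
    return best, fbest

# Toy cost: drive three membership centers toward (-1, 0, 1).
cost = lambda p: sum((pi - ti) ** 2 for pi, ti in zip(p, (-1.0, 0.0, 1.0)))
params, err = anneal(cost, [0.0, 0.0, 0.0])
```

The occasional acceptance of worse candidates is what lets annealing escape the local minima that make ad hoc or purely greedy membership-function tuning unreliable.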
In this paper, a fuzzy logic based approach to the cooling of laser materials is presented. The controller design is based on the performance characteristics of commercially available thermoelectric coolers. Simulation results are presented and discussed. The feasibility of implementing such a controller is evaluated.
This paper discusses the development of an automatic focus control unit (AFCU) for optical tracking instrumentation at White Sands Missile Range, New Mexico. The telescope system is known as the distant object attitude measurement system (DOAMS) and is used for optical data collection including attitude and miss-distance information. The AFCU will be given only target range and will provide a highly accurate focus. Fuzzy logic was chosen as the control method for this project.
This paper presents a new approach to the identification and control of dynamical systems by means of evolved radial basis function neural networks (ERBFNs). Traditionally, the radial basis function network (RBFN) parameters used for identification and control are fixed beforehand by a trial-and-error process. This process consists of finding structural and training parameters. Once these parameters are fixed, the only parameters that remain to be determined are the network weights. In general, the weights are adjusted using a gradient approach so that the network output asymptotically follows the plant output. In this paper a new approach to the selection of structural and training parameters is introduced. A hybrid system is proposed which uses an evolutionary algorithm to select optimum structural parameters and uses the LMS algorithm to adjust the network weights. In this context, RBFN parameters such as basis function centers, widths, and training parameters are chosen at random and adjusted by an evolutionary algorithm throughout the identification and control process. Experimental results show that the system is able to effectively identify and control dynamical systems.
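The LMS weight-adjustment stage, with the structural parameters held fixed (as if already chosen by the evolutionary algorithm), can be sketched as follows; all parameter values are hypothetical and the evolutionary outer loop is omitted.

```python
import numpy as np

def rbf_outputs(x, centers, widths):
    """Gaussian basis activations for a scalar input x."""
    return np.exp(-((x - centers) ** 2) / (2 * widths ** 2))

def lms_train(xs, ys, centers, widths, lr=0.1, epochs=200):
    """LMS adjustment of the linear output weights for fixed
    structural parameters (basis centers and widths)."""
    w = np.zeros(len(centers))
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            phi = rbf_outputs(x, centers, widths)
            w += lr * (y - w @ phi) * phi   # stochastic gradient step
    return w
```

In the hybrid scheme, an outer evolutionary loop would mutate `centers`, `widths`, and `lr`, score each candidate by the residual error this inner loop achieves, and keep the fittest structures.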
Estimation of the direction of arrival (DOA), also known as the direction-finding (DF) problem, has been an active research area for some time. While one DOA estimation method may be better than another depending on the application, these methods can be categorized as either subspace decomposition methods or beamforming methods. Subspace decomposition methods usually provide higher resolution, but most of them assume a relatively high signal-to-noise ratio; for low array signal-to-noise ratio (ASNR), their performance degenerates much as conventional beamforming methods do. In this paper, we introduce a new method, which we refer to as the 'MaxMax' method, for ASNR below zero. The new method does not depend entirely on either the subspace decomposition technique or the conventional beamforming technique, and is attractive for extremely low-ASNR environments with a small number of sensors, at the price of higher computational complexity. Its performance is superior to the others for multipath signals with the same number of sensors. The number of signals need not be known, and more than M-1 signals can be resolved, where M is the number of sensors. The increased computational complexity can be reduced through parallel processing implemented on massively parallel computers.
This paper presents algorithms for the implementation of the Gabor transform on parallel MIMD multicomputer networks. The discrete Gabor transform algorithms are based on two techniques. The first method computes the coefficients using the discrete Zak transform, which can be implemented using fast Fourier transforms. The second method computes the Gabor coefficients based on an optimization criterion: the minimization of the difference vector between the original signal and the signal reconstructed from the coefficients. The parallel algorithms are developed for both the forward and inverse Gabor transforms based on the data-flow in the computations and are independent of the network structure. The algorithms are modular and are designed to minimize the communications that are inherent in MIMD systems. Some results of the parallel implementation of the algorithms on hypercube transputer networks are presented. The algorithms can be used for implementation on any MIMD multicomputer network.
This paper describes the development of a hybrid formulation for calculating the scatter signature of variously shaped targets. Computational electromagnetics (CEM) is a vital tool for treating many problems arising in a wide variety of system applications, including communications, radar, biological modeling, high-speed circuitry, surveillance system analysis and design, antenna array technology, antenna-platform interaction, fusing studies against low observable vehicles, and low observable vehicle (land, air, and sea) development. Powerful tools and algorithms result from this work that make tractable the computation of very tedious and complex engineering problems in electromagnetics. The hybridization of the fast multipole method (FMM) and the physical optics method (POM) is described, and results for the FMM-POM hybrid and for the individual FMM and POM implemented on a parallel computer are presented and compared. The FMM-POM hybridization is an optimization for best accuracy and minimum computational operations (time). Individually, the FMM is most accurate and the POM least accurate; this is true most of the time, except for simple geometrical surfaces, where the accuracies are approximately equal.
A relative entropy based approach to image thresholding was proposed recently by Chang et al. They demonstrated that this method is successful for image thresholding. Relative entropy is a member of the class of Ali-Silvey distance measures. In this paper we generalize the relative entropy based approach and present image thresholding algorithms based on the class of Ali-Silvey distance measures. A number of members of this class are selected and used for implementation in image thresholding algorithms. Performance is evaluated by applying these algorithms to several images and comparing them to a few histogram based thresholding methods.
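The flavor of information-theoretic threshold selection can be illustrated with a minimum-cross-entropy-style sketch: score each candidate threshold by how well a two-level (below/above) approximation explains the histogram. This is a generic illustration in the same spirit, not necessarily Chang et al.'s exact relative-entropy formulation or any of the Ali-Silvey variants evaluated in the paper.

```python
import numpy as np

def min_cross_entropy_threshold(image, levels=256):
    """Choose the gray level whose two-region approximation best
    matches the histogram in a cross-entropy sense."""
    hist, _ = np.histogram(image, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    g = np.arange(levels)
    best_t, best_obj = 1, -np.inf
    for t in range(1, levels):
        p1, p2 = p[:t].sum(), p[t:].sum()           # region masses
        m1, m2 = (g[:t] * p[:t]).sum(), (g[t:] * p[t:]).sum()  # first moments
        obj = 0.0
        if m1 > 0:
            obj += m1 * np.log(m1 / p1)   # m1 * log(mean of lower region)
        if m2 > 0:
            obj += m2 * np.log(m2 / p2)
        if obj > best_obj:
            best_t, best_obj = t, obj
    return best_t
```

Swapping the scoring function for another Ali-Silvey distance changes only the `obj` computation, which is essentially the generalization the paper explores.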
Image sensor fusion techniques such as model-supported exploitation require accurate image support data to ensure accurate image-to-image or image-to-site-model registration. Photogrammetric control of imagery is currently a time-consuming process but necessary for these applications. For areas which are exploited repeatedly, a database of control features can be built and used by an automatic algorithm to control new images. The algorithm automatically locates the control features in the images and uses the resulting correspondences to perform a rigorous adjustment of the image support data which accurately ties the image to ground coordinates. Other source data referenced to ground coordinates are, by association, registered to the imagery and can be used to support sensor/data fusion algorithms. The approach for creating, maintaining, and applying the control features is discussed.
Certain effective object recognition schemes involve locating various distinguishing object components, extracting component features, and recognizing the object based on these features. For certain classes of object recognition problems, a critical part is the automatic location of the object components. A common method for locating object components involves point-to-point or area-to-area correlation, which can be relatively computationally expensive. This paper develops a fast, stable fuzzy approach for locating object components. The effectiveness of the approach is illustrated by an application to the human face recognition problem.
A method is presented for modeling and identifying textured images. The method is based on iterated function systems, which are used both to represent the image and to obtain characteristic measures for different textures. A set of such functions is chosen so that each captures the relationship between a small region and a larger region containing it, yielding measures of the self-similarity properties in that part of the image. These measures are then mapped to a feature map, as in self-organizing map methods, for analysis.
Extracting satellite solar-array and main-body orientation vector information from optical imagery is an integral part of space object identification analysis. We describe a model-based image analysis system which automatically estimates the 3-D orientation vector of satellites by analyzing images obtained from ground-based optical telescopes. We adopt a two-step approach. First, pose estimates are derived from comparisons with a model database, and second, pose refinements are derived from photogrammetric information. The model database is formed by representing each available training image by a set of derived geometric primitives. To obtain fast access to the model database and to increase the probability of early successful matching, a novel indexing method is introduced. We present our preliminary results, evaluate the overall performance of the technique, and suggest improvements.
Engineers at Oak Ridge National Laboratory, sponsored by the U.S. Army, have been investigating the feasibility of computer-controlled docking in resupply missions. The goal of this program is to autonomously dock an articulating robotic boom with a special receiving port. A video camera mounted on the boom provides video images of the docking port to an image processing computer that calculates the position and orientation (pose) of the target relative to the camera. The control system can then move the boom into docking position. This paper describes a method of uniquely identifying and segmenting the receiving port from its background in a sequence of video images. An array of light-emitting diodes was installed to mark the vertices of the port. The markers have a fixed geometric pattern and are modulated at a fixed frequency. An asynchronous demodulation technique to segment the flashing markers from an image of the port was developed and tested under laboratory conditions. The technique acquires a sequence of images and digitally processes them in the time domain to suppress all image features except the flashing markers. Pixels that vary at frequencies within the filter bandwidth are passed unattenuated, while variations outside the passband are suppressed. The image coordinates of the segmented markers are computed and then used to calculate the pose of the receiving port. The technique has been robust and reliable in a laboratory demonstration of autodocking.
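The time-domain demodulation idea can be sketched by correlating each pixel's intensity sequence with a tone at the flash frequency, i.e. evaluating a single DFT bin per pixel. This is a generic synchronous sketch; the paper's asynchronous technique and its filter design may differ, and all parameter values are invented.

```python
import numpy as np

def segment_flashing_markers(frames, fps, f_mod, rel_thresh=0.5):
    """Suppress everything in an image sequence except pixels that
    flash at the marker modulation frequency.

    frames : (n_frames, H, W) intensity stack
    fps    : frame rate (Hz); f_mod : marker flash frequency (Hz).
    Each pixel's time series is correlated with a complex exponential
    at f_mod (one DFT bin), so static background and variations at
    other frequencies are attenuated.
    """
    n = frames.shape[0]
    t = np.arange(n) / fps
    basis = np.exp(-2j * np.pi * f_mod * t)   # matched temporal tone
    # Remove the DC component so steady bright pixels do not leak in.
    x = frames - frames.mean(axis=0, keepdims=True)
    response = np.abs(np.tensordot(basis, x, axes=(0, 0))) / n
    return response > rel_thresh * response.max()
```

Choosing the window so that the flash frequency falls on an integer number of cycles keeps spectral leakage from static and slowly varying scene content to a minimum.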
Recent advances in passive and active imaging and non-imaging sensor technology offer the potential to detect weapons concealed beneath a person's clothing. The sensors discussed in this paper are characterized as either non-imaging or imaging. Non-imaging sensors include wideband radar and portal devices such as metal detectors. The strength of non-imaging sensors rests in the fact that they are generally inexpensive and can rapidly perform bulk separation between regions where persons are likely to be carrying concealed weapons and regions likely to contain only unarmed persons; this bulk screening is typically accomplished at the expense of false alarm rate. Millimeter-wave (MMW), microwave, x-ray, acoustic, magnetic, and infrared (IR) imaging sensor technologies provide, with greater certainty, the means to isolate persons within a crowd who are carrying concealed weapons and to identify the weapon type. The increased certainty associated with imaging sensors comes at the expense of cost and of bulk surveillance of the crowd. Concealed weapons detection (CWD) technologies have a variety of military and civilian applications. This technology focus area addresses specific military needs under the Defense Advanced Research Projects Agency's (DARPA) operations other than war/law enforcement (OOTW/LE) program. Additionally, the technology has numerous civilian law enforcement applications that are being investigated under the National Institute of Justice's (NIJ) Concealed Weapons Detection program. This paper discusses the wide variety of sensors that might be employed in support of a typical scenario, the strengths and weaknesses of each sensor relative to that scenario, and how CWD breadboards will be tested to determine the optimal CWD application. It rapidly becomes apparent that no single sensor will completely satisfy the CWD mission, necessitating the fusion of two or more of these sensors.
A novel wideband millimeter-wave imaging system is presently being developed at Pacific Northwest National Laboratory (PNNL) that will allow rapid inspection of personnel for concealed explosives, handguns, or other threats. Millimeter-wavelength electromagnetic waves are effective for this application since they readily penetrate common clothing materials while being partially reflected from the person under surveillance as well as from any concealed items. To form an image rapidly, a linear array of 128 antennas electronically scans a horizontal aperture of 0.75 meters while the array is mechanically swept over a vertical aperture of 2 meters. At each point over this 2-D aperture, coherent wideband data reflected from the target is gathered using wide-beamwidth antennas. The data is recorded coherently and reconstructed (focused) using an efficient image reconstruction algorithm developed at PNNL. This algorithm works in the near field of both the target and the scanned aperture and preserves the diffraction-limited resolution of less than one wavelength. The wide frequency bandwidth provides depth resolution, which allows the image to be fully focused over a wide range of depths, resulting in a full 3-D image; this is not possible in a normal optical (or quasi-optical) imaging system. The system has been extensively tested using concealed metal and plastic weapons, and has recently been tested using real plastic explosives (C-4 and RDX) and simulated liquid explosives concealed on personnel. Millimeter waves do not penetrate the human body, so it is necessary to view the subject from several angles in order to fully inspect for concealed weapons. Full animations containing 36-72 frames, recorded from subjects rotated in 5-10 degree increments, have been found to be extremely useful for rapid, effective inspection of personnel.
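The claim that wide bandwidth yields depth resolution follows from the standard radar relation δz = c / (2B). A one-line illustration (the bandwidth value below is hypothetical, not the PNNL system's specification):

```python
C = 299_792_458.0  # speed of light, m/s

def range_resolution_m(bandwidth_hz):
    """Down-range (depth) resolution of a coherent wideband sensor: c / (2B)."""
    return C / (2.0 * bandwidth_hz)
```

For example, a (hypothetical) 10 GHz sweep gives roughly 1.5 cm of depth resolution, which is why the image can be refocused over many depth planes into a 3-D volume.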
An integrated radar and ultrasound sensor, capable of remotely detecting and imaging concealed weapons, is being developed. A modified frequency-agile, mine-detection radar is intended to specify with high probability of detection at ranges of 1 to 10 m which individuals in a moving crowd may be concealing metallic or nonmetallic weapons. Within about 1 to 5 m, the active ultrasound sensor is intended to enable a user to identify a concealed weapon on a moving person with low false-detection rate, achieved through a real-time centimeter-resolution image of the weapon. The goal for sensor fusion is to have the radar acquire concealed weapons at long ranges and seamlessly hand over tracking data to the ultrasound sensor for high-resolution imaging on a video monitor. We have demonstrated centimeter-resolution ultrasound images of metallic and non-metallic weapons concealed on a human at ranges over 1 m. Processing of the ultrasound images includes filters for noise, frequency, brightness, and contrast. A frequency-agile radar has been developed by JAYCOR under the U.S. Army Advanced Mine Detection Radar Program. The signature of an armed person, detected by this radar, differs appreciably from that of the same person unarmed.
We define a variational method to perform frame fusion. The process has three steps. First, we estimate the velocities and occlusions using optical flow with a spatial constraint on the velocities based on the L1 norm of the divergence. We then collect non-occluded points from the sequence and estimate their locations at a chosen time, at which we perform the fusion. From this list of points, we reconstruct the super-frame by minimizing a total variation energy that forces the super-frame to resemble each (shifted) frame of the sequence while selecting among the least oscillatory solutions. We display some examples.
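The final minimization can be sketched in a drastically simplified setting: 1-D signals, frames assumed already aligned (so the optical-flow and occlusion steps are skipped), and plain subgradient descent on a quadratic fidelity term plus a total-variation penalty. This is an illustrative toy, not the paper's formulation.

```python
import numpy as np

def fuse_frames_tv(frames, lam=0.1, step=0.1, iters=500):
    """Estimate a super-frame u minimizing
        sum_k ||u - frames[k]||^2 + lam * TV(u),   TV(u) = sum_i |u[i+1]-u[i]|,
    by subgradient descent. frames: list of aligned 1-D arrays."""
    frames = [np.asarray(f, dtype=float) for f in frames]
    u = np.mean(frames, axis=0)
    for _ in range(iters):
        fid = sum(2.0 * (u - f) for f in frames)   # fidelity gradient
        s = np.sign(np.diff(u))                    # TV subgradient pieces
        tv_grad = np.zeros_like(u)
        tv_grad[:-1] -= s
        tv_grad[1:] += s
        u -= step * (fid + lam * tv_grad) / (2 * len(frames))
    return u
```

The TV term is what "selects among the least oscillatory solutions": it penalizes spurious oscillations while leaving genuine edges mostly intact.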
In this paper we present a new theoretical framework for combining sensor measurements, state estimates, or any similar quantities given only their means and covariances. The key feature of the new framework is that it permits the optimal fusion of estimates that are correlated to an unknown degree. This framework yields a new filtering paradigm that avoids all of the restrictive independence assumptions required by the standard Kalman filter, though at the cost of reduced rates of convergence in cases where independence can be established.
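The description matches the covariance intersection family of fusion rules, which combine two (mean, covariance) pairs through a convex combination of their inverse covariances and remain consistent for any unknown cross-correlation. The sketch below is one common form of that rule, offered as an illustration rather than as the authors' exact algorithm.

```python
import numpy as np

def covariance_intersection(a, Pa, b, Pb, n_omega=101):
    """Fuse estimates (a, Pa) and (b, Pb) without assuming independence.

    The fused inverse covariance is the convex combination
        Pc^-1 = w * Pa^-1 + (1 - w) * Pb^-1,
    with w chosen here to minimize the trace of the fused covariance.
    """
    Pa_inv, Pb_inv = np.linalg.inv(Pa), np.linalg.inv(Pb)
    best = None
    for w in np.linspace(0.0, 1.0, n_omega):       # simple grid search over w
        info = w * Pa_inv + (1.0 - w) * Pb_inv
        if np.linalg.det(info) <= 0:
            continue
        Pc = np.linalg.inv(info)
        if best is None or np.trace(Pc) < best[0]:
            best = (np.trace(Pc), w, Pc)
    _, w, Pc = best
    c = Pc @ (w * Pa_inv @ a + (1.0 - w) * Pb_inv @ b)
    return c, Pc
```

Unlike the Kalman update, the fused covariance never claims more information than either input justifies, which is exactly what makes the rule safe under unknown correlation.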
We pursue the idea that recent 'decentralized' Kalman filter (KF) technology, in which each participating imaging sensor is outfitted with its own dedicated 2-D Kalman filter, can serve as the basis of a sensor fusion methodology: a final collating filter assembles the data from diverse imaging sensors of various resolutions into a single image that combines all the available information, in analogy to what is already routinely done in multisensor navigation applications. The novelty lies in working out the theoretical details for 2-D filtering, assuming that the image registration problem has already been handled independently beforehand. We synchronize frame size and pixel locations so that corresponding pixels line up across sensors, with the same raster-scan speed and size used for each. For linear Kalman filters with purely Gaussian noises, the rule is that combining underlying measurements or sensor information can only help and never hurt. We interpret this approach as taking several common views of the same scene, obtained instantaneously from different sensors and stacked vertically one on top of the other, each with its own local 2-D Kalman-like image restoration filter raster-scanning in multi-layer sync. The multi-filter combining rules from decentralized filtering are then applied to the stack to obtain a single best-estimate image as the output, providing a convenient methodology for sensor fusion.
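In the static special case (registered images, known independent per-sensor noise variances), the decentralized combining rule reduces to per-pixel information-weighted averaging. The sketch below shows only that special case, not the paper's full 2-D raster-scan Kalman machinery.

```python
import numpy as np

def fuse_sensor_images(images, variances):
    """Combine co-registered images per pixel by inverse-variance weighting.

    images: list of (H, W) arrays, one per sensor (already registered).
    variances: per-sensor noise variances (scalars, assumed known).
    Returns the fused image and its per-pixel error variance.
    """
    info = sum(1.0 / v for v in variances)                       # total information
    fused = sum(img / v for img, v in zip(images, variances)) / info
    return fused, 1.0 / info
```

Because the fused variance `1/info` is smaller than any single sensor's variance, this exhibits the "combining can only help" property cited above.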
A self-organizing k-means algorithm that classifies input data into classes is presented. The algorithm addresses two well-known difficulties of k-means classification: selecting the threshold(s) and knowing the number of classes a priori. It forms clusters, removes noise, and is trained without supervision, with clustering performed on the basis of the statistical properties of the input data. The algorithm consists of two phases: the first is similar to the Carpenter/Grossberg classifier, and the second is a modified version of the k-means algorithm. An example is given to illustrate the application of this algorithm and to compare it with the standard k-means algorithm.
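A guess at how the two phases fit together: an ART-1-style vigilance pass creates a new cluster whenever a sample is too far from every existing center (so k emerges from the data), and a standard k-means pass then refines the centers. The vigilance radius and the leader-update rule below are assumptions, not the paper's exact design.

```python
import numpy as np

def self_organizing_kmeans(X, vigilance, n_iter=20):
    """Two-phase clustering: a Carpenter/Grossberg-like pass discovers the
    number of clusters, then k-means refines the centers.

    X: (N, D) data; vigilance: max distance for joining an existing cluster.
    """
    X = np.asarray(X, dtype=float)
    # Phase 1: leader clustering decides k without supervision.
    centers = [X[0].copy()]
    for x in X[1:]:
        d = [np.linalg.norm(x - c) for c in centers]
        j = int(np.argmin(d))
        if d[j] <= vigilance:
            centers[j] = 0.5 * (centers[j] + x)   # pull leader toward sample
        else:
            centers.append(x.copy())              # too far: open a new cluster
    centers = np.array(centers)
    # Phase 2: standard k-means refinement with the discovered k.
    for _ in range(n_iter):
        labels = np.argmin(
            np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2), axis=1)
        for j in range(len(centers)):
            members = X[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    labels = np.argmin(
        np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2), axis=1)
    return centers, labels
```

Note the single remaining free parameter is the vigilance radius, which replaces both the threshold choice and the a-priori class count of plain k-means.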
In this paper, a simple configurable systolic array structure for signal processing is proposed. This array is configurable not only for its dimension, but also for different kinds of signal processing algorithms. The basic connection pattern used is a spiral structure. Through a carefully designed switch, a desired array dimension can be easily configured. The configurability for various kinds of signal processing operations is achieved by designing a simple configurable processing element whose configuration is set by a control signal. This systolic array can be applied to the following areas: matrix multiplication, FIR filtering, convolution and correlation, DFT and IDFT. Examples are included to demonstrate how to use the systolic array for each of these operations. With some modification of the PE structure, more applications in signal processing could be expected. This systolic array design gives a good trade-off among simplicity, modularity, flexibility and configurability.
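To make the systolic idea concrete, the sketch below is a cycle-by-cycle software simulation of one classical configuration the paper lists, the mesh for matrix multiplication: rows of A stream in from the left, columns of B from the top, and each processing element multiply-accumulates and passes its operands on. The spiral interconnect and the configurable PE switch are not modeled here.

```python
import numpy as np

def systolic_matmul(A, B):
    """Simulate an n x p mesh of multiply-accumulate PEs computing C = A @ B.

    Inputs are injected in the usual skewed (wavefront) order so that
    A[i, k] and B[k, j] meet at PE (i, j) on cycle t = i + j + k.
    """
    n, m = A.shape
    m2, p = B.shape
    assert m == m2
    C = np.zeros((n, p))
    a_reg = np.zeros((n, p))          # operand latched from the left neighbor
    b_reg = np.zeros((n, p))          # operand latched from the neighbor above
    for t in range(m + n + p - 2):    # cycles until the last operands drain
        a_new, b_new = np.zeros_like(a_reg), np.zeros_like(b_reg)
        a_new[:, 1:] = a_reg[:, :-1]  # shift A operands right
        b_new[1:, :] = b_reg[:-1, :]  # shift B operands down
        for i in range(n):            # skewed injection at the left boundary
            k = t - i
            a_new[i, 0] = A[i, k] if 0 <= k < m else 0.0
        for j in range(p):            # skewed injection at the top boundary
            k = t - j
            b_new[0, j] = B[k, j] if 0 <= k < m else 0.0
        a_reg, b_reg = a_new, b_new
        C += a_reg * b_reg            # every PE multiply-accumulates each cycle
    return C
```

The same data-pumping discipline, with the PE operation swapped by a control signal, is what lets one array cover FIR filtering, convolution, correlation, and DFT/IDFT.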
The design of spiral systolic arrays (SAs) with asynchronous controls for efficient and flexible implementation of linear-phase FIR filters is presented. In this approach, computation time is minimized collectively by (1) reducing the filter arithmetic by exploiting coefficient symmetry, (2) converting the sequential input signal into input blocks by means of a spiral systolic mesh that is (a) suitable for highly parallel processing and (b) flexible enough to enable various array sizes, and (3) making the data streams independent of the computations executed in each processor. The SA architecture processes the input signal row by row and eliminates the complex shift-register organization of the traditional FIR realization. The design incorporates maximum parallelism and pipelinability, and a trade-off among computation, communication, and memory. Moreover, the systolic array uses simple local interconnection without undesirable properties such as preloading of input data or global broadcasting. The key component of the asynchronous spiral SA is a communication protocol that controls the input data flow properly and efficiently.
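The symmetry reduction in (1) amounts to pre-adding the two samples that share each coefficient, halving the multiplications per output. A scalar sketch of that arithmetic (not the systolic realization itself):

```python
import numpy as np

def symmetric_fir(x, h):
    """Linear-phase FIR filter exploiting tap symmetry h[n] == h[N-1-n]:
    mirrored samples are added before multiplying, so only ceil(N/2)
    multiplications are needed per output sample."""
    x, h = np.asarray(x, float), np.asarray(h, float)
    N = len(h)
    assert np.allclose(h, h[::-1]), "requires linear-phase (symmetric) taps"
    xp = np.concatenate([np.zeros(N - 1), x])      # zero initial conditions
    y = np.zeros(len(x))
    half = N // 2
    for n in range(len(x)):
        seg = xp[n:n + N][::-1]                    # x[n], x[n-1], ..., x[n-N+1]
        s = seg[:half] + seg[N - half:][::-1]      # pre-add mirrored samples
        acc = np.dot(h[:half], s)
        if N % 2:                                  # odd length: lone center tap
            acc += h[half] * seg[half]
        y[n] = acc
    return y
```

The output matches direct convolution exactly; only the operation count changes, which is what the systolic design exploits in hardware.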
The design of a system to distinguish objects from measurements of their radar backscatter signals has been a topic of considerable investigation. In identifying a particular target out of a library of possible targets, the difficulty is that the radar signals cannot be fed raw into a classifier; some signal processing must be done to generate signal features for target identification. In this paper, multifractal geometry is applied to the practical problem of discriminating among fishing boats, growlers (small pieces of glacial ice), and sea-scattered signals, which is important for search-and-rescue operations. An efficient box-counting method is used to compute the generalized dimension and the multifractal spectrum of the different targets and of sea-scattered signals. To support our study, X-band radar measurements were collected and analyzed to determine the separability of sea-surface targets and sea-scattered signals using multifractal geometry.
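A bare-bones version of the box-counting estimate of the generalized (Rényi) dimension D_q can be sketched as follows; the grid scales and normalization are assumptions, and the paper's "efficient box-counting method" may differ in detail.

```python
import numpy as np

def renyi_dimension(t, x, q, scales=(4, 8, 16, 32)):
    """Box-counting estimate of the generalized dimension D_q of the planar
    point set {(t_i, x_i)}: cover with s x s grids, form box occupancy
    probabilities p_i, and fit the partition-function scaling in log s."""
    def norm(v):
        v = np.asarray(v, dtype=float)
        span = v.max() - v.min()
        return (v - v.min()) / span if span > 0 else np.zeros_like(v)
    t, x = norm(t), norm(x)
    logs, logz = [], []
    for s in scales:
        i = np.minimum((t * s).astype(int), s - 1)
        j = np.minimum((x * s).astype(int), s - 1)
        counts = np.bincount(i * s + j, minlength=s * s).astype(float)
        p = counts[counts > 0] / counts.sum()
        if q == 1:                           # information dimension (q -> 1 limit)
            logz.append(-np.sum(p * np.log(p)))
        else:
            logz.append(np.log(np.sum(p ** q)) / (q - 1.0))
        logs.append(np.log(s))
    slope = np.polyfit(logs, logz, 1)[0]
    return slope if q == 1 else -slope
```

Sweeping q then traces out the multifractal spectrum; the discrimination idea is that boat, growler, and sea-clutter returns occupy measurably different D_q curves.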
A new approach to the identification of dynamical systems by means of evolved neural networks is presented. We implement two kinds of functional neural networks: polynomials and orthogonal basis functions. The functional neural networks contain four groups of parameters that need to be optimized: the weights, training parameters, network topology, and scaling factors. One approach to this combinatorial problem is to genetically evolve the functional neural networks. This paper presents a preliminary analysis of the proposed method for automatically selecting network parameters. The networks are encoded as chromosomes that are evolved during identification by means of genetic algorithms. Experimental results show that the method is effective for the identification of dynamical systems.
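Setting the evolutionary machinery aside, the underlying functional-network idea can be seen in a fixed-topology baseline: identify a scalar map x[t+1] = f(x[t]) with a polynomial network whose weights are fitted by least squares rather than evolved. This baseline (a sketch, not the paper's method) is what the GA would extend by also searching topology and scaling.

```python
import numpy as np

def fit_polynomial_model(x, degree=2):
    """Identify x[t+1] = f(x[t]) with a polynomial functional network.

    x: observed scalar trajectory. Returns weights in decreasing-power
    order (compatible with np.polyval). Only the weights are fitted here;
    the topology (degree) is fixed by hand.
    """
    X = np.vander(x[:-1], degree + 1)     # basis [x^d, ..., x, 1]
    w, *_ = np.linalg.lstsq(X, x[1:], rcond=None)
    return w
```

On data generated by the logistic map x[t+1] = 3.7 x (1 - x), for instance, a degree-2 fit recovers the generating coefficients almost exactly, which is the kind of sanity check a GA-evolved network would also have to pass.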
A new approach to the extraction of polygonal approximations is presented. The method obtains a smaller set of important features by means of an evolutionary algorithm. A genetic approach with some heuristics improves the contour-approximation search by starting parallel searches at various points along the contour. The algorithm encodes a polygonal approximation as a chromosome and evolves it with genetic operators to produce the final approximation. Experimental results are provided.
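One plausible encoding is a bit mask over the contour vertices, with fitness balancing chord error against vertex count. The operators and parameters below are illustrative guesses, not the paper's; the point is only to show the chromosome/fitness/evolve loop.

```python
import numpy as np

def seg_error(pts, i, j):
    """Max perpendicular distance of pts[i+1:j] to the chord pts[i]-pts[j]."""
    if j - i < 2:
        return 0.0
    a, b = pts[i], pts[j]
    ab = b - a
    L = np.linalg.norm(ab)
    mid = pts[i + 1:j]
    if L == 0:
        return float(np.max(np.linalg.norm(mid - a, axis=1)))
    d = np.abs(ab[0] * (mid[:, 1] - a[1]) - ab[1] * (mid[:, 0] - a[0])) / L
    return float(d.max())

def fitness(pts, mask, lam=0.5):
    """Approximation error plus a penalty per retained vertex."""
    idx = np.flatnonzero(mask)
    err = sum(seg_error(pts, idx[k], idx[k + 1]) for k in range(len(idx) - 1))
    return err + lam * len(idx)

def ga_polygon(pts, pop=30, gens=60, seed=0):
    """Evolve a vertex-selection mask; endpoints are always kept."""
    rng = np.random.default_rng(seed)
    n = len(pts)
    P = rng.random((pop, n)) < 0.3
    P[:, 0] = P[:, -1] = True
    for _ in range(gens):
        f = np.array([fitness(pts, m) for m in P])
        P = P[np.argsort(f)]                       # elitist: best survive
        children = P[: pop // 2].copy()
        children ^= rng.random(children.shape) < 0.05   # bit-flip mutation
        children[:, 0] = children[:, -1] = True
        P = np.vstack([P[: pop - len(children)], children])
    f = np.array([fitness(pts, m) for m in P])
    return np.flatnonzero(P[np.argmin(f)])
```

Because every individual is a full approximation, the population effectively searches from many contour points in parallel, as the abstract describes.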
This paper describes a model based approach to the detection of objects of known geometry in imagery. The algorithm is designed to be used as a front end to an automatic target recognizer. The algorithm uses probes to extract features from the neighborhood of the known boundary of the object. Neural nets are used to classify the probes as corresponding to background or an object/background boundary by estimating a posteriori probabilities. The paper investigates a number of probe geometries and nonparametric probe statistics. Experimental results demonstrate the utility of these methods in detecting ground vehicles in natural backgrounds.
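A probe can be as simple as a normalized contrast between pixel samples taken just inside and just outside the hypothesized boundary. The statistic below is a stand-in for the paper's probe geometries and nonparametric statistics, sketched only to show what a probe feature feeding the neural net might look like.

```python
import numpy as np

def probe_statistic(image, inside_px, outside_px):
    """Normalized contrast between pixel samples on either side of a
    hypothesized boundary: near zero over uniform background, large
    where the boundary separates object from background.

    inside_px / outside_px: lists of (row, col) sample coordinates.
    """
    a = image[tuple(np.asarray(inside_px).T)]
    b = image[tuple(np.asarray(outside_px).T)]
    pooled = np.sqrt(0.5 * (a.var() + b.var())) + 1e-9   # scale normalizer
    return abs(float(a.mean()) - float(b.mean())) / pooled
```

A bank of such probe values around the candidate boundary would form the feature vector whose class posterior the neural net estimates.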