Automatic target recognition (ATR) performance based on forward-looking infrared (FLIR) and laser radar (LADAR) image sensors is studied for the recognition of ground-based targets with unknown random pose. High signal-to-noise ratio results are obtained by using the Laplace approximation to simplify nuisance integrals which appear in Bayesian likelihood-ratio calculations. This analytical approach applied to simple blocks-world target models and statistical sensor models provides insight into how target and sensor parameters affect recognition performance. The Laplace method used in this paper can be applied to obtain expressions for the probability of error in binary recognition as well as more general situations such as target detection and M-ary recognition. These theoretical results are compared with computer-simulated calculations of the probability of error in binary recognition and sensor fusion scenarios.
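As an illustration of the kind of nuisance integral the Laplace method simplifies, the sketch below (a generic numerical example, not the authors' target or sensor model; the scalar pose angle and Gaussian observation model are assumptions) compares a Laplace approximation of a marginal likelihood with brute-force numerical integration.

# Minimal sketch: Laplace approximation of a marginalized likelihood
# p(data) = integral over pose theta of exp(loglik(theta)) d(theta),
# approximated by exp(loglik(theta*)) * sqrt(2*pi / |loglik''(theta*)|).
# Generic illustration only; not the paper's target/sensor model.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.integrate import quad

def loglik(theta, data, sigma=0.3):
    # Hypothetical observation model: data are noisy samples of sin(theta).
    return -0.5 * np.sum((data - np.sin(theta))**2) / sigma**2

rng = np.random.default_rng(0)
true_theta = 0.7
data = np.sin(true_theta) + 0.3 * rng.standard_normal(25)

# Mode of the log-likelihood over the nuisance pose parameter.
res = minimize_scalar(lambda t: -loglik(t, data), bounds=(0.0, np.pi/2), method="bounded")
theta_star = res.x

# Second derivative at the mode via central differences.
h = 1e-4
d2 = (loglik(theta_star + h, data) - 2*loglik(theta_star, data) + loglik(theta_star - h, data)) / h**2

laplace = np.exp(loglik(theta_star, data)) * np.sqrt(2*np.pi / abs(d2))
numeric, _ = quad(lambda t: np.exp(loglik(t, data)), 0.0, np.pi/2)
print(f"Laplace: {laplace:.4e}  numerical: {numeric:.4e}")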
The volume of data that must be processed to characterize the performance of target detection algorithms over a complex parameter space requires automated analysis. This paper discusses a methodology for automatically scoring the results from a diversity of detectors that produce several different forms of detected regions. The ability to automatically score detector outputs without using full target templates or models has advantages. Using target descriptors (primarily target sizes and locations) reduces the computational cost of matching detected regions against truthed targets in various scenes. It also reduces the size of the image-truth database and the difficulty of creating it. Theoretical considerations are presented, and the issues associated with using limited truth information, and how they are overcome, are explained. Concepts and use of the Auto-Score package are also discussed. The performances of several different laser radar (LADAR) target detectors, applied to imagery containing scenes with targets and both natural and man-made clutter, have been characterized with the aid of Auto-Score. Automatic scoring examples are taken from this domain; however, the scoring process is applicable to detectors operating on other problems and other kinds of data as well. The target-descriptor scoring concept and Auto-Score implementation were originated to support the development of a configurable automatic target recognition (ATR) system for LADAR data, under the auspices of the Office of Naval Research.
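A minimal sketch of descriptor-based scoring in the spirit described above (not the Auto-Score implementation; the detection and truth formats and the distance gate are assumptions): each detected region is reduced to a centroid and matched against unassigned truthed targets using only location and size, from which hit, false-alarm, and miss counts follow.

# Hypothetical descriptor-based scoring: match detections to truthed targets
# using only centroids and target sizes, then count hits and false alarms.
import numpy as np

def score(detections, truths, gate=1.0):
    """detections: list of (x, y) centroids.
    truths: list of (x, y, size), where size is a characteristic target dimension.
    gate: match if centroid distance <= gate * size (an assumed rule)."""
    assigned = set()
    hits = 0
    for dx, dy in detections:
        best, best_d = None, np.inf
        for i, (tx, ty, size) in enumerate(truths):
            if i in assigned:
                continue
            d = np.hypot(dx - tx, dy - ty)
            if d <= gate * size and d < best_d:
                best, best_d = i, d
        if best is not None:
            assigned.add(best)
            hits += 1
    false_alarms = len(detections) - hits
    missed = len(truths) - hits
    return hits, false_alarms, missed

dets = [(10.2, 20.5), (45.0, 47.0), (80.0, 12.0)]
truth = [(10.0, 21.0, 4.0), (44.0, 46.0, 5.0)]
print(score(dets, truth))   # -> (2, 1, 0)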
Object-image relations (O-IRs) provide a powerful approach to performing detection and recognition with laser radar (LADAR) sensors. This paper presents the basics of O-I relations and shows how they are derived from invariants. It also explains and shows results of a computationally efficient approach applying covariants to 3-D LADAR data. The approach is especially appealing because the detection and segmentation processes are integrated with recognition into a robust algorithm. Finally, the method provides a straightforward approach to handling articulation and multi-scale decomposition.
The Mobile Target Acquisition System (MTAS) is an automatic target recognition (ATR) system developed by the Naval Air Warfare Center Weapons Division, China Lake, CA, under funding by the Office of Naval Research (ONR) to detect and identify mobile target laser detection and ranging (LADAR) range signatures. The primary objective was to achieve high correct system identification rates for range signatures with relatively low numbers of pixels on target and, at the same time, maintain a low system identification false alarm rate. MTAS met this objective by stressing conservation and efficient exploitation of target information at all levels of processing. Adaptive noise cleaning conserves target information by filtering pixels only when the pixel and its neighbors satisfy the criteria for range dropouts. The MTAS detector holds false alarms to a low level by convolving synthetic templates with the gradient of the range image and fusing the resulting correlation surface with a blob size filter. Mobile target identification fuses 2-D silhouette shape with 3-D (2.5-D) volumetric shape, where the mixture of 2-D and 3-D shape is controlled by a single parameter. The match between the measured LADAR range signature and the synthetic range template efficiently and effectively exploits scarce target information by including all target and template pixels in the Fuzzy Tanimoto Distance similarity measure. This system has successfully detected and identified measured mobile LADAR target signatures with 200 or more pixels on target, with a low confuser identification rate and no system clutter identification false alarms.
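The fuzzy form of the Tanimoto similarity is commonly written as the ratio of summed elementwise minima to summed elementwise maxima of two membership maps; the sketch below illustrates that generic form on a measured signature and a synthetic template normalized to [0, 1]. The specific weighting and dropout handling used in MTAS are not reproduced.

# Generic fuzzy Tanimoto similarity between a measured range signature and a
# synthetic template, both expressed as membership values in [0, 1].
# The exact MTAS formulation is not reproduced here.
import numpy as np

def fuzzy_tanimoto(a, b, eps=1e-12):
    a = np.clip(np.asarray(a, dtype=float), 0.0, 1.0)
    b = np.clip(np.asarray(b, dtype=float), 0.0, 1.0)
    return np.minimum(a, b).sum() / (np.maximum(a, b).sum() + eps)

template = np.array([[0.0, 0.8, 0.9], [0.1, 1.0, 0.9], [0.0, 0.7, 0.8]])
measured = np.array([[0.0, 0.7, 0.9], [0.0, 0.9, 1.0], [0.2, 0.6, 0.8]])
print(f"similarity = {fuzzy_tanimoto(measured, template):.3f}")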
Correlation filters are ideally suited for recognizing patterns in 3D data. Whereas most model-based techniques tend to measure the overall dimensions of objects and their larger features, correlation filters can readily exploit intricate surface details, the gray values of surfaces, and internal structure, if any. Thus correlation filters may be the preferred approach in scenarios where intensity and range data are both available, or where the internal structure of an object has been mapped. In this paper, we outline the development of filters for 3D data that we refer to as Volume Correlation Filters, illustrate their use with range images of an object, and outline future work for the development of 3D correlation techniques.
A technique is presented, based upon the sequential test of hypotheses, which combines the classical target recognition tasks of detection, classification, and aimpoint selection into one operation. This technique exploits the fact that a sequential test of hypotheses converges, on average, faster than a test of hypotheses based on a fixed number of samples. Results of this technique are illustrated with synthetic laser range images.
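A minimal sketch of Wald's sequential probability ratio test, the standard form of the sequential test of hypotheses referred to above (the Gaussian observation model and error rates below are illustrative assumptions, not the paper's target statistics): samples accumulate a log-likelihood ratio until it crosses an acceptance or rejection threshold, so on average fewer samples are needed than in a fixed-sample test.

# Wald sequential probability ratio test (SPRT) sketch for two Gaussian
# hypotheses H0: mean mu0 vs H1: mean mu1, illustrating early stopping.
import numpy as np

def sprt(samples, mu0=0.0, mu1=1.0, sigma=1.0, alpha=0.01, beta=0.01):
    a = np.log(beta / (1 - alpha))        # accept-H0 threshold
    b = np.log((1 - beta) / alpha)        # accept-H1 threshold
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        llr += ((x - mu0)**2 - (x - mu1)**2) / (2 * sigma**2)
        if llr <= a:
            return "H0", n
        if llr >= b:
            return "H1", n
    return "undecided", len(samples)

rng = np.random.default_rng(1)
decision, n_used = sprt(rng.normal(1.0, 1.0, size=200))   # data drawn under H1
print(decision, "after", n_used, "samples")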
This paper discusses the design of a hybrid fuzzy-neural classifier for fusion of the range and intensity channels from a LADAR sensor. Fusion was performed at the feature level rather than the pixel level. Results are compared for ATR performance with and without fusion. Also discussed is the use of genetic algorithms for training and optimizing the ATR system with a limited set of ground truth.
As part of an ongoing captive flight test demonstration project at the Naval Air Warfare Center (NAWC), China Lake, CA, two Automatic Target Recognition (ATR) algorithms were implemented in hardware, integrated with the second in a series of ladar sensors, and flown aboard a T-39 aircraft against both fixed and stationary mobile target sites on the ranges of the Naval Air Weapons Station (NAWS), also at China Lake, CA. The first ATR algorithm was developed to recognize fixed targets and to select aim-points with a performance goal of a five-pixel Circular Error Probability (CEP). The second ATR algorithm was developed to detect stationary mobile targets, such as tanks and trucks. The performance goal for this algorithm was to achieve 90% probability of detection. Both of these algorithms operate by exploiting the very accurate 3D geometry provided by the ladar. This paper describes the 1999 and 2000 captive flight tests involving these two algorithms, including the flight tests themselves, the hardware implementations, and the resulting ATR performance. Additionally, the large ladar data set, collected during twenty-four two-hour flights, will be briefly described.
In this paper, we present an unsupervised scheme aimed at segmentation of laser radar (LADAR) imagery for Automatic Target Detection. A coding theoretic approach implements Rissanen's concept of Minimum Description Length (MDL) for estimating piecewise homogeneous regions. MDL is used to penalize overly complex segmentations. The intensity data is modeled as a Gaussian random field whose mean and variance functions are piecewise constant across the image. This model is intended to capture variations in both mean value (intensity) and variance (texture). The segmentation algorithm is based on an adaptive rectangular recursive partitioning scheme. We implement a robust constant false alarm rate (CFAR) detector on the segmented intensity image for target detection and compare our results with the conventional cell averaging (CA) CFAR detector.
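For reference, the conventional cell-averaging CFAR used above as the baseline works roughly as sketched below (a generic 1-D version with assumed window sizes; the paper's segmentation-based detector is not reproduced): the local noise level is estimated from reference cells around the test cell, excluding guard cells, and a detection is declared when the test cell exceeds a scaled version of that estimate.

# Generic 1-D cell-averaging CFAR sketch (assumed window sizes; not the
# paper's segmentation-based detector).
import numpy as np

def ca_cfar(x, num_ref=16, num_guard=2, pfa=1e-3):
    x = np.asarray(x, dtype=float)
    n = len(x)
    # Threshold multiplier for exponentially distributed noise power samples.
    alpha = num_ref * (pfa ** (-1.0 / num_ref) - 1.0)
    detections = np.zeros(n, dtype=bool)
    half = num_ref // 2 + num_guard
    for i in range(half, n - half):
        lead = x[i - half : i - num_guard]           # leading reference cells
        lag = x[i + num_guard + 1 : i + half + 1]    # lagging reference cells
        noise = (lead.sum() + lag.sum()) / num_ref
        detections[i] = x[i] > alpha * noise
    return detections

rng = np.random.default_rng(2)
signal = rng.exponential(1.0, 300)          # noise power samples
signal[150] += 30.0                          # injected target
print(np.flatnonzero(ca_cfar(signal)))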
Numerous approaches to segmentation exist, requiring an evaluation technique to determine the most appropriate one for a specific ladar design. A benchtop evaluation methodology based on multiple measures is used to evaluate ladar-specific image segmentation algorithms. The method combines multiple measures with an inter-algorithmic approach that was recently introduced for evaluating Synthetic Aperture Radar (SAR) imagery. Ladar imagery is considered easier to segment than SAR since it generally contains less speckle and has both a range and an intensity map to assist in segmentation. A system of multiple measures focuses on area, shape, and edge closeness to judge the segmentation. The judgment is made on the benchtop by comparing the segmentation to supervised hand-segmented images. To demonstrate the approach, a ladar image is segmented using several segmentation approaches introduced in the literature. The system of multiple measures is then demonstrated on the segmented ladar images, and an interpretation of the results is given. This paper demonstrates that the original evaluation approach designed for SAR imagery can be generalized across differing sensor modalities even though the segmentation and sensor acquisition approaches are different.
In this work we present an algorithm that automatically registers a sequence of ladar images taken from a sensor flying a known flight path. The registration is performed with no human operator intervention, and the resulting mosaic is accurate to one pixel. The algorithmic approach was developed to allow for near-real-time processing. The initial method can be extended to registration of multiple views of a scene with four degrees of freedom (translation and in-plane rotation). In this work we restrict ourselves to rigid-body transformations. The registered mosaic is an important step toward geolocation using a reference digital elevation map, which will be explored in future work.
Target detection techniques play an important role in automatic target recognition (ATR) systems because overall ATR performance depends closely on detection results. A number of detection techniques based on infrared (IR) images have been developed using a variety of pattern recognition approaches. However, target detection based on a single IR sensor is often hampered by adverse weather conditions or countermeasures, resulting in unacceptably high false alarm rates. Multiple imaging sensors in different spectral ranges, such as visible and infrared bands, are used here to reduce such adverse effects. The imaging data from the different sensors are jointly processed to exploit the spatial characteristics of the objects. Four local features are used to exploit the local characteristics of the images generated from each sensor. A confidence image is created via feature-based fusion that combines the features to obtain potential target locations. Experimental results using two test sequences are provided to demonstrate the viability of the proposed technique.
Aiming at the automatic recognition of motorized vehicles in cluttered infrared images, this paper presents an approach for grouping multiple supervised neural networks into an efficient classifier, taking into account the variety of targets and background scenarios available in the imagery provided for the EUCLID project RTP 8.2. The proposed neural network architecture consists of a modular combination of several small multi-layer-perceptron neural networks. To take into account the false targets generated by the detection stage, auxiliary neural networks that separate targets from non-targets are used alongside the networks that discriminate between a target class and all other target classes. For ambiguous situations, an additional level of neural networks, trained to discriminate between sub-groups of classes that present similar features, is also introduced. Training and testing were performed using five classes of targets within cluttered environments: tanks, trucks, cars, airplanes, and helicopters. Most of the data came from real infrared imagery, although complementary synthetic target models were also introduced to test the validity of the presented approach in a wide variety of situations.
The development of an underwater target identification algorithm capable of identifying various types of underwater targets, such as mines, under different environmental conditions poses many technical problems. Among the contributing factors: targets have diverse sizes, shapes, and reflectivity properties; the target emplacement environment is variable, and targets may be proud or partially buried; environmental properties vary significantly from one location to another; bottom features such as sand, rocks, corals, and vegetation can conceal a target whether it is partially buried or proud; and competing clutter with responses that closely resemble those of the targets may lead to false positives. Together, these problems create difficult and challenging conditions that can lead to unreliable algorithm performance with existing methods. In this paper, we develop and test a shape-dependent feature extraction scheme that provides features invariant to rotation, size scaling, and translation, properties that are extremely useful for any target classification problem. The developed schemes were tested on an electro-optical imagery data set collected under different environmental conditions with variable background, range, and target types. The electro-optic data set was collected using a Laser Line Scan (LLS) sensor by the Coastal Systems Station (CSS), located in Panama City, Florida. The performance of the developed scheme and its robustness to distortion, rotation, scaling, and translation were also studied.
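The paper's specific feature-extraction scheme is not given here, but a standard example of shape features invariant to rotation, scaling, and translation is the set of Hu moment invariants, sketched below on a binary object mask (a generic substitution offered only for illustration).

# Standard Hu moment invariants as an example of rotation-, scale- and
# translation-invariant shape features (not the paper's specific scheme).
import numpy as np

def hu_moments(mask):
    mask = np.asarray(mask, dtype=float)
    ys, xs = np.mgrid[0:mask.shape[0], 0:mask.shape[1]]
    m00 = mask.sum()
    xbar, ybar = (xs * mask).sum() / m00, (ys * mask).sum() / m00

    def mu(p, q):            # central moments (translation invariant)
        return (((xs - xbar)**p) * ((ys - ybar)**q) * mask).sum()

    def eta(p, q):           # scale-normalized central moments
        return mu(p, q) / m00**((p + q) / 2 + 1)

    n20, n02, n11 = eta(2, 0), eta(0, 2), eta(1, 1)
    n30, n03, n21, n12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    h1 = n20 + n02
    h2 = (n20 - n02)**2 + 4 * n11**2
    h3 = (n30 - 3*n12)**2 + (3*n21 - n03)**2
    h4 = (n30 + n12)**2 + (n21 + n03)**2
    return np.array([h1, h2, h3, h4])   # first four of the seven invariants

mask = np.zeros((64, 64))
mask[20:40, 15:50] = 1.0                 # a simple rectangular "target"
print(hu_moments(mask))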
In this paper, we will discuss a novel technology, which we have recently developed, for automatic target detection and recognition with a polarimetric imaging system. The technology consists of an approach to non-cooperative small target detection that uses statistical techniques to exploit a target's Stokes-vector IR signature. It is applicable to sensors whose signature measurements are sensitive to the polarization of the targets and their backgrounds. Fusion is achieved by constructing joint statistical measures of the target's polarization states, expressed in terms of the intensity, the percentage of linear polarization, and the angle of the polarization plane. Applications of the proposed approach to military targets under variations in target geometry are presented in terms of receiver operating characteristic curves. New results, obtained on data from the Air Force's IRMA polarimetric IR simulation tool, indicate the usefulness of polarimetric IR signatures for the automatic detection of small targets.
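The polarization states named above follow directly from the first three Stokes components; as a generic worked example (not tied to the IRMA data), the intensity, degree of linear polarization, and angle of the polarization plane can be computed as follows.

# Intensity, degree of linear polarization (DoLP) and angle of polarization
# (AoP) from the first three Stokes components S0, S1, S2 (generic example).
import numpy as np

def polarization_states(S0, S1, S2, eps=1e-12):
    intensity = S0
    dolp = np.sqrt(S1**2 + S2**2) / (S0 + eps)       # fraction of linear polarization
    aop = 0.5 * np.arctan2(S2, S1)                   # radians
    return intensity, dolp, aop

S0, S1, S2 = 1.0, 0.25, 0.10
I, p, theta = polarization_states(S0, S1, S2)
print(f"I={I:.2f}  DoLP={p:.3f}  AoP={np.degrees(theta):.1f} deg")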
This paper deals with the automatic evaluation of segmentation algorithms; the application framework is automatic target recognition, in the specific case of infrared images of military vehicles. The approach consists in approximating the edges with generic B-spline functions; since the problem stated this way is too general, we use a spline template that has to be matched with the approximation through some distance minimization. The difficult points of the problem are the indexing of the edges (with respect to the spline parameter sequence), the design of the spline itself, as it must fit some specific requirements, and the choice of a distance that is robust against noise and minor shape modifications. We show that noticeable improvements are obtained by indexing edge points according to their projection onto a model built from available a priori information. We finally explain how this spline model will be used to assess the edge detection step in an automatic vehicle recognition task.
The recognition of bridges in long-range infrared images presents a number of problems due to the complexity of the background, high noise interference, the small size of the target, and the low contrast between the bridge and its surrounding water. To overcome these barriers, we have developed a new knowledge-based recognition algorithm. It first detects candidate bridge sub-regions and then focuses on them. Different credits are assigned according to the degree to which the sub-regions match our pre-built framework, so false objects are excluded and the real target is eventually found. The experimental results demonstrate that our localized method is consistently superior to the traditional global algorithms adopted by most previous researchers.
This paper develops a framework for predicting IR images of a target, in a partially observed thermal state, using known geometry and past IR images. The thermal states of the target are represented via scalar temperature fields. The prediction task becomes that of estimating the unobserved parts of the field, using the observed parts and the past patterns. The estimation is performed using regression models for relating the temperature variables, at different points on the target's surface, across different thermal states. A linear regression model is applied and some preliminary experimental results are presented using a laboratory target and a hand-held IR camera. Extensions to piecewise-linear and nonlinear models are proposed.
Automatic target recognition (ATR) systems typically include a front-end detector that searches the entire image and isolates those regions which potentially contain targets. These regions are further analyzed by more complex recognition algorithms. A good detector will find most of the targets with relatively few false alarms, at reasonable computational cost. We propose a set of neural-based target detectors consisting of an eigenspace transformation followed by a simple multilayer perceptron (MLP). These detectors may operate on either single-band or multi-band forward-looking infrared (FLIR) input images. The eigenspace transformation is first derived from a training set via principal component analysis (PCA). This transformation layer is needed to reduce the dimensionality of the input images, while retaining those features that are critical to the detection task. A small number of the resulting projection values is then fed to the MLP, which determines the likelihood that a given pixel location is the center of a target. Experiments were conducted using a set of several hundred real FLIR images. The results indicate that dual-band input images substantially improve the performance of the detectors.
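A minimal sketch of the eigenspace-plus-MLP pipeline described above, using scikit-learn on synthetic image chips (the chip size, number of principal components, and network size are assumptions, not the paper's settings): PCA learned from training chips reduces dimensionality, and a small multilayer perceptron maps the projection values to a target/clutter decision.

# Sketch of an eigenspace (PCA) transformation followed by a small MLP,
# trained on synthetic image chips (illustrative settings only).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)
chip = 16 * 16                                  # flattened chip size (assumed)
n = 400
clutter = rng.normal(0.0, 1.0, (n, chip))
targets = rng.normal(0.0, 1.0, (n, chip)) + np.linspace(0, 2, chip)  # crude "structure"
X = np.vstack([clutter, targets])
y = np.r_[np.zeros(n), np.ones(n)]

model = make_pipeline(
    PCA(n_components=10),                        # eigenspace projection
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
)
model.fit(X, y)
print("training accuracy:", model.score(X, y))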
Entropy-based divergence measures have shown promising results in many areas of engineering and image processing. In this paper, a new generalized divergence measure, the Jensen-Renyi divergence, is proposed. Some of its properties, such as convexity and its upper bound, are derived. Based on the Jensen-Renyi divergence, we propose a new approach to the problem of ISAR (Inverse Synthetic Aperture Radar) image registration, where the goal is to estimate the target motion during the imaging time. Our approach applies the Jensen-Renyi divergence to measure the statistical dependence between consecutive ISAR image frames, which would be maximal if the images are geometrically aligned. Simulation results demonstrate that the proposed method is efficient and effective.
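For concreteness, one common form of the Jensen-Renyi divergence between distributions p_1, ..., p_n with weights w_i is JR_alpha = H_alpha(sum_i w_i p_i) - sum_i w_i H_alpha(p_i), where H_alpha is the Renyi entropy; the sketch below (a generic illustration, not the paper's registration code) evaluates it for two intensity histograms.

# Jensen-Renyi divergence between intensity histograms (generic sketch).
import numpy as np

def renyi_entropy(p, alpha):
    p = p[p > 0]
    if np.isclose(alpha, 1.0):                  # Shannon limit
        return -np.sum(p * np.log(p))
    return np.log(np.sum(p**alpha)) / (1.0 - alpha)

def jensen_renyi(dists, weights, alpha=0.5):
    dists = [np.asarray(d, dtype=float) / np.sum(d) for d in dists]
    weights = np.asarray(weights, dtype=float)
    mixture = sum(w * d for w, d in zip(weights, dists))
    return renyi_entropy(mixture, alpha) - sum(
        w * renyi_entropy(d, alpha) for w, d in zip(weights, dists))

rng = np.random.default_rng(4)
frame1 = rng.normal(0.0, 1.0, 10000)
frame2 = rng.normal(0.5, 1.0, 10000)            # "misaligned" frame
bins = np.linspace(-4, 4, 33)
h1, _ = np.histogram(frame1, bins=bins)
h2, _ = np.histogram(frame2, bins=bins)
print(jensen_renyi([h1, h2], [0.5, 0.5], alpha=0.5))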
Support Vector Machines (SVMs) are an emerging machine learning technique that has found widespread application in various areas during the past four years. The success of SVMs is mainly due to a number of attractive features, including a) applicability to the processing of high dimensional data, b) ability to achieve a global optimum, and c) the ability to deal with nonlinear data. One potential application for SVMs is High Range Resolution (HRR) radar signatures, typically used for HRR-based Automatic Target Recognition (ATR). HRR signatures are problematic for many traditional ATR algorithms because of the unique characteristics of HRR signatures. For example, HRR signatures are generally high dimensional, linearly inseparable, and extremely sensitive to aspect changes. In this paper we demonstrate that SVMs are a promising alternative in dealing with the challenges of HRR signatures. The studies presented in this paper represent an initial attempt at applying SVMs to HRR data. The most straightforward application of SVMs to HRR-based ATR is to use SVMs as classifiers. We experimentally compare the performance of SVM-based classifiers with several conventional classifiers, such as k-Nearest-Neighbor (kNN) classifiers and Artificial Neural Network (ANN) classifiers. Experimental results suggest that SVM classifiers possess a number of advantages. For example, a) applying SVM classifiers to HRR data requires little prior knowledge of the target data, b) SVM classifiers require much less computation than kNN classifiers during testing, and c) the structure of a trained SVM classifier can reveal a number of important properties of the target data.
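A minimal sketch of the classifier comparison described above, using scikit-learn on stand-in data (the synthetic profiles below are placeholders for real HRR signatures, and the kernel and parameter choices are assumptions): an SVM and a k-nearest-neighbor classifier are trained on the same high-dimensional samples and compared on held-out data.

# Sketch comparing an SVM classifier with a kNN classifier on stand-in
# high-dimensional "profiles" (placeholders for real HRR signatures).
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
dim, n = 128, 300                                  # profile length, samples per class
class_a = rng.normal(0.0, 1.0, (n, dim)) + np.sin(np.linspace(0, 6, dim))
class_b = rng.normal(0.0, 1.0, (n, dim)) + np.cos(np.linspace(0, 6, dim))
X = np.vstack([class_a, class_b])
y = np.r_[np.zeros(n), np.ones(n)]
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

svm = SVC(kernel="rbf", C=10.0, gamma="scale").fit(Xtr, ytr)
knn = KNeighborsClassifier(n_neighbors=5).fit(Xtr, ytr)
print("SVM accuracy:", svm.score(Xte, yte))
print("kNN accuracy:", knn.score(Xte, yte))
print("support vectors per class:", svm.n_support_)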
Multi-look adaptive weighting (MAW) is an adaptive beamforming method for improving target high range resolution (HRR) signatures for automatic target recognition (ATR) systems. The primary goal in developing MAW is to improve the probability of correct classification (Pcc) and reduce the probability of false classification (Pfc) in ATR systems. An additional objective, driven by operational considerations, is to reduce the radar resources required to achieve a desired Pcc and Pfc. We have shown in previous HRR ATR studies on ground military targets that a significant classifier performance gain can be obtained if speckle noise and scintillation in HRR profiles are reduced through noncoherent averaging of multiple independent coherent processing intervals (CPIs) that are separated by small changes in azimuth angle. Given the advantage of using multi-look CPIs for HRR ATR performance, we have designed MAW specifically to take advantage of the multiple independent CPIs in forming the HRR profiles. From a radar resource perspective, for an HRR ATR system to be operationally useful, the system must operate at a signal-to-noise ratio (SNR) in the range of 20-25 dB. In this paper, we discuss the theoretical foundation underlying MAW and present corresponding MAW-processed HRR ATR results at 20-25 dB SNR, compared against other image processing techniques such as the weighted fast Fourier transform (FFT) and high definition vector imaging (HDVI). These results are based upon HRR profiles formed from synthetic aperture radar (SAR) images of targets taken from the high-quality Moving and Stationary Target Acquisition and Recognition (MSTAR) data set. We also discuss the impact these image processing techniques have on HRR ATR performance in terms of radar resources.
A chi-squared model for the radar cross section (RCS) variability of High Range Resolution (HRR) measurements is validated using compact range data from the U.S. Army National Ground Intelligence Center (NGIC). It is shown that targets can be represented by a mean template and by a variance template, or in this case, an effective number of degrees of freedom for the chi-squared distribution. The analysis also includes comparison of the measured tails of the RCS distribution with those predicted by the chi-squared distribution. The likelihood classifier is obtained, and a Monte Carlo performance model is developed to validate the statistical model at the level of ATR performance.
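As a worked sketch of the kind of likelihood classifier such a model implies (a generic construction under the stated chi-squared assumption, not the authors' code): if a range bin has mean template value mu and nu effective degrees of freedom, its RCS can be modeled as (mu/nu) times a chi-squared variable with nu degrees of freedom, i.e. a Gamma(nu/2, 2*mu/nu) variable, and a measured profile is assigned to the template with the highest summed log-likelihood.

# Generic likelihood classifier for RCS profiles under a chi-squared model:
# each range bin with mean template value mu and nu degrees of freedom is
# modeled as (mu/nu) * chi2(nu), i.e. Gamma(shape=nu/2, scale=2*mu/nu).
import numpy as np
from scipy.stats import gamma

def log_likelihood(profile, mean_template, dof):
    shape = dof / 2.0
    scale = 2.0 * mean_template / dof
    return np.sum(gamma.logpdf(profile, a=shape, scale=scale))

def classify(profile, templates, dof=4.0):
    scores = {name: log_likelihood(profile, mu, dof) for name, mu in templates.items()}
    return max(scores, key=scores.get), scores

templates = {
    "target_A": np.array([1.0, 3.0, 2.0, 5.0, 1.5]),
    "target_B": np.array([2.5, 1.0, 4.0, 1.0, 3.0]),
}
nu = 4.0
truth = templates["target_A"]
measured = gamma.rvs(a=nu/2, scale=2*truth/nu, random_state=0)   # simulated profile
print(classify(measured, templates, dof=nu)[0])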
A discrimination SNR for predicting classification performance is developed as an analogy to the radar equation that is used to predict detection performance. It assumes a statistical model for the target radar cross section (RCS) and that the resulting likelihood classifier is employed. The relationship between the probability of classification errors and the dB value of the discrimination SNR is obtained. A specific form for the likelihood classifier and the discrimination SNR is developed assuming that the variability of the target RCS is described by a chi-squared distribution. The form of this chi-squared-based classifier is novel and significantly different from the more common Gaussian-based mean-square-error classifier. It is shown that the discrimination SNR has an intuitive interpretation in terms of the number of radar samples, the average contrast between targets, and the contrast noise. The use of this tool is illustrated using compact range High Range Resolution (HRR) Doppler measurements from the U.S. Army National Ground Intelligence Center (NGIC). The sensitivity of ATR performance to radar parameters is quantified using the discrimination SNR, with gains measured in meaningful dB units.
Acquisition of full-polarimetric millimeter-wave, or microwave, moving-target signature sets sufficient for developing ATR algorithms has proven to be costly and difficult to achieve operationally. Thorough investigations involving moving targets are often hindered by the lack of rigorously consistent signature data for a sufficient number of targets across the requisite viewing angles, articulations, and environmental conditions. Under the support of DARPA's TRUMPETS and AMSTE programs, in conjunction with the US Army National Ground Intelligence Center, X-band far-field turntable signature data have been acquired on 1/16th-scale models of the Bradley and BTR-70 vehicles specifically constructed for moving-target investigations, using ERADS' 160 GHz fully polarimetric compact range. The tracks/wheels of the scale models were translated incrementally as the radar's transmit frequency was stepped across a 10.5 GHz bandwidth. By acquiring a full frequency sweep at each track/wheel position with appropriate translation resolution, HRR RCS profiles of Doppler-shifted body/track components were generated. HRR profiles of the equivalent stationary vehicle were also generated for analysis, using the vehicle's HRR profiles for any given track position.
Ultra-wideband radar systems are well suited for extracting signature information useful for target recognition purposes. An ultra-wideband radar system emits either an extremely short pulse (impulse) or a frequency-modulated signal. The frequency content of the emitted signals is designed to match the size and kind of typical targets and environments. We investigate the backscattered echoes from selected targets that are extracted by a stepped-frequency continuous wave (SFCW) radar system playing the role of a ground penetrating radar (GPR). The targets are metal and non-metal objects buried in dry sand. The SFCW radar transmits 55 different frequencies from 300 to 3,000 MHz in steps of 50 MHz. The duration of each frequency is about 100 µs, which means that each transmitted waveform has an extremely narrow band. The in-phase (I) and quadrature-phase (Q) sampled signals give both the amplitude and the phase of the signal returned from the target. As a result, a complex-valued line spectrum of the target is obtained that can be used to synthesize real-valued repetitive waveforms via the inverse Fourier transform. We analyze the synthesized backscattered echoes from each target in the joint time-frequency domain using a pseudo-Wigner distribution (PWD). A classification method that we developed previously using the fuzzy C-means clustering technique is then used to reduce the number and kind of features in the derived target signatures. Using a template for each member of the class, the classifier decides the membership of a given target based on the best fit of the templates, measured by a cost function. We also address the problem of how to select suitable waveforms for the templates used by the classification algorithm.
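The step from the measured I/Q line spectrum to a synthetic time-domain echo is an inverse Fourier transform; the sketch below shows that step for a simulated two-reflector spectrum (the frequency plan matches the abstract, but the reflector ranges are arbitrary, and the pseudo-Wigner analysis and fuzzy C-means classifier are not reproduced).

# Synthesize a range/time profile from a stepped-frequency (SFCW) complex
# line spectrum via the inverse FFT (simulated two-reflector example).
import numpy as np
from scipy.signal import find_peaks

c = 3e8
freqs = np.arange(300e6, 3000e6 + 1, 50e6)           # 55 stepped frequencies
reflectors = [(1.0, 1.2), (0.6, 1.8)]                # (amplitude, range in m)

# Complex I/Q line spectrum: sum of round-trip phase terms exp(-j*4*pi*f*r/c).
spectrum = np.zeros(len(freqs), dtype=complex)
for amp, r in reflectors:
    spectrum += amp * np.exp(-1j * 4 * np.pi * freqs * r / c)

# Zero-padded inverse FFT gives the synthetic backscattered echo vs. range.
n_fft = 512
profile = np.abs(np.fft.ifft(spectrum, n=n_fft))
range_axis = np.arange(n_fft) * c / (2 * 50e6 * n_fft)   # 50 MHz frequency step

peaks, _ = find_peaks(profile, height=0.3 * profile.max())
print("recovered reflector ranges (m):", np.round(range_axis[peaks], 2))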
This paper examines the use of a nonlinear dimensionality reduction scheme for feature extraction applied to ISAR images of armored targets. The features are then used in a nearest-neighbor classifier to evaluate their utility in achieving classification performance that is robust to changes in the exterior detail of vehicles (for example, open or closed hatches, storage boxes, etc.). In addition to robustness, the classifier should generalize and correctly classify an example of a class that was not present in the training process (for example, if the training process represents the Main Battle Tank class with a T72 and a Chieftain, a successful classification is desired when the system is presented with a Challenger). The proportion of the original data structure retained by the dimension-reducing transformation is calculated through the use of a loss function. The structure-preserving properties of a nonlinear projection using Radial Basis Functions are compared with those of a linear projection obtained from Principal Components Analysis. The data used are ISAR images of armored vehicles gathered under a range of vehicle configurations, allowing tests of both robustness and generality.
We present a technique to perform nonlinear 3D pattern recognition, and we analyze the performance of Fourier-plane nonlinear filters in terms of signal-to-noise ratio (SNR). Using in-line digital holography, the complex amplitude distribution generated by a 3D object at an arbitrary plane located in the Fresnel diffraction region is recorded by phase-shifting interferometry. Information about the 3D object's shape, location, and orientation is contained in the digital hologram, which allows us to perform 3D pattern-recognition techniques using nonlinear correlation filters. Using a Karhunen-Loeve series expansion of the noise process, we then obtain a range of nonlinearities for which the output SNR of the filter remains stable with respect to variations in the input noise bandwidth. This is shown both by analytical estimates of the SNR for nonlinear filters and by experimental simulations.
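One common family of Fourier-plane nonlinear filters is the k-th law filter, in which the Fourier magnitudes of scene and reference are raised to a power k while their phases are kept (k = 1 gives the linear matched filter, k = 0 the phase-only filter); the sketch below applies that generic form to 2-D arrays as a stand-in for the complex holographic data used in the paper.

# k-th law nonlinear correlation in the Fourier plane (2-D stand-in example;
# k = 1 is the linear matched filter, k = 0 the phase-only filter).
import numpy as np

def kth_law_correlation(scene, reference, k=0.3):
    S = np.fft.fft2(scene)
    R = np.fft.fft2(reference, s=scene.shape)        # zero-pad reference
    Sk = np.abs(S)**k * np.exp(1j * np.angle(S))     # nonlinear magnitude, kept phase
    Rk = np.abs(R)**k * np.exp(1j * np.angle(R))
    return np.abs(np.fft.ifft2(Sk * np.conj(Rk)))

rng = np.random.default_rng(7)
scene = rng.normal(0.0, 0.2, (128, 128))
target = np.zeros((16, 16))
target[4:12, 4:12] = 1.0
scene[60:76, 40:56] += target                        # embed the target
corr = kth_law_correlation(scene, target, k=0.3)
print("correlation peak at", np.unravel_index(np.argmax(corr), corr.shape))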
The problem of tracking a group of targets is considered in this paper. We present an overview of an investigation into this problem that first uses the covariance matrices of the targets' velocity state vectors to establish target grouping, and then exploits concepts from game theory, in particular leader-follower techniques, and from graph theory to represent and establish relationships that influence the tracking of objects belonging to a group formation.
Tracking ground moving targets in a cluttered environment is a challenging problem in airborne surveillance. In this paper, we present a novel approach for classifying targets by exploiting the formation patterns of their subtarget motion estimates. The formation patterns of a target's moving parts are represented in terms of dynamic random graphs, and the classification problem is reduced to one of graph identification. Target and subtarget motions are estimated by estimating the probability density functions of their joint state vectors, which can be done using a number of methods.
An FPR system under development for the Naval Air Warfare Center, China Lake, CA, funded under an SBIR Phase II contract, is an automatic target recognizer and tracker candidate for Navy fast-reaction, subsonic and supersonic, stand-off weapons. The FPR will autonomously detect, identify, correlate, and track complex surface ship and land-based targets in hostile, high-clutter environments in real time. The novel FPR system is proven technology that uses an electronic implementation analogous to an optical correlator, where the Fourier transform of the incoming image is compared against known target images stored as matched-filter templates. FPR demonstrations show that unambiguous target identification is achievable in a ninety-five percent fog obscuration for over ninety percent of the target images tested. The FPR technology employs an acoustic dispersive delay line (DDL) to achieve ultra-fast image correlations in 90 microseconds, or 11,000 correlations per second. The massively scalable FPR design is capable of achieving processing speeds an order of magnitude faster using available ASIC technology. Key benefits of the FPR are dramatically reduced power, size, weight, and cost with increased durability, robustness, and performance, which makes the FPR ideal for onboard missile applications.
This project is aimed at analyzing EO/IR images to provide automatic target detection/recognition/identification (ATR/D/I) of militarily relevant land targets. An increase in performance was accomplished using a biomimetic intelligence system running on low-cost, commercially available processing chips. Biomimetic intelligence has demonstrated advanced capabilities in hand-printed character recognition, real-time detection/identification of multiple faces in full 3D perspectives in cluttered environments, classification of ground-based military vehicles from SAR, and real-time ATR/D/I of ground-based military vehicles from EO/IR/HRR data in cluttered environments. The investigation applied these tools to real data sets and examined parameters such as the minimum resolution for target recognition and the effects of target size, rotation, line-of-sight changes, contrast, partial obscuration, and background clutter. The results demonstrated a real-time ATR/D/I capability against a subset of militarily relevant land targets operating in a realistic scenario. Typical results on the initial EO/IR data indicate probabilities of correct classification of resolved targets greater than 95 percent.
In recent years the Optical Sciences Division of the Naval Research Laboratory (NRL) has been involved in the development of real-time hyperspectral detection, cueing, target location, and target designation capabilities. Under the Dark HORSE program it was demonstrated that a hyperspectral sensor could be used for the autonomous, real-time detection of airborne and military ground targets. This work has culminated in WAR HORSE, an autonomous real-time visible hyperspectral target detection system that has been configured for use on a Predator Unmanned Air Vehicle (UAV). The sensor system provides Predator with the ability to detect manmade objects in areas of natural background. The system consists of a visible hyperspectral imaging sensor, a real-time signal processor, a high-resolution visible line-scan camera, an interface and control software application, and a data storage medium. The system is coupled to an on-board GPS/INS to provide target geo-location information, and relevant data are transmitted to a ground station using line-of-sight down-link capabilities. This paper provides an overview of the WAR HORSE sensor system hardware components and their integration aboard a Predator UAV. In addition, the results of a recently completed demonstration aboard the Predator UAV are provided. This demonstration represents the first autonomous real-time hyperspectral target detection system to be flown aboard a Predator UAV.
Support vector machines are classification algorithms based on quadratic programming that have been found to give excellent classification results on problems such as discriminating targets from backgrounds. A key capability of these algorithms is that they do not require a preprocessing step to determine feature vectors, yet preprocessing is still an important step in the classification process. We discuss the effects of preprocessing feature data on the support vectors and on the classification results of support vector machines. We first give a short introduction to support vector machines, and several methods for preprocessing the data before they are sent to the support vector machine are discussed. The algorithm is then applied to a set of second-order stochastic textures defined by their covariance structure, and the effect on the classification rate is determined.
The implementation of computational systems to perform intensive operations often involves balancing the performance specification, system throughput, and available system resources. For problems of automatic target recognition (ATR), these three quantities of interest are the probability of classification error, the rate at which regions of interest are processed, and the computational power of the underlying hardware. An understanding of the inter-relationships between these factors can be an aid in making informed choices while exploring competing design possibilities. To model these relationships we have combined characterizations of ATR performance, which yield probability of classification error as a function of target model complexity, with analytical models of computational performance, which yield throughput as a function of target model complexity. Together, these constitute a parametric curve that is parameterized by target model complexity for any given recognition problem and hardware implementation. We demonstrate this approach on the problem of ATR from synthetic aperture radar imagery using a subset of the publicly released MSTAR dataset. We use this approach to characterize the achievable classification rate as a function of required throughput for various hardware configurations.
Performance prediction of computer vision algorithms is of increasing interest whenever robustness to illumination variations, shadows, and different weather conditions has to be ensured. The statistical model presented in this contribution predicts algorithm performance in the presence of noise, image clutter, and perturbations, and therefore provides an algorithm-specific measure of the underlying image quality. For the prediction of detection performance, logistic regression with covariates defined by the properties of the vehicle signatures is used. This approach provides an estimate of the probability of a single vehicle signature being detected by a given detection algorithm. To describe the relationship between background clutter and the false alarm rate of the algorithm, a severity measure of the image background is presented. After construction of the algorithm model, the probability of a vehicle signature being detected and the false alarm rate are estimated on new data. The model is evaluated and compared to the true algorithm performance.
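A minimal sketch of the regression step described above (the covariates and data below are synthetic placeholders, not the actual signature properties or imagery): logistic regression fitted to per-signature covariates yields an estimated probability that a given vehicle signature is detected by the algorithm.

# Sketch: logistic regression predicting the probability that a vehicle
# signature is detected, from per-signature covariates (synthetic placeholders
# such as contrast and size standing in for the real signature properties).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(8)
n = 500
contrast = rng.uniform(0.0, 1.0, n)
size_px = rng.uniform(10.0, 200.0, n)
X = np.column_stack([contrast, size_px])

# Simulated truth: detection probability grows with contrast and size.
logit = -4.0 + 5.0 * contrast + 0.02 * size_px
detected = rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit))

model = LogisticRegression().fit(X, detected)
new_sig = np.array([[0.6, 120.0]])
print("predicted detection probability:", model.predict_proba(new_sig)[0, 1])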
We report results demonstrating that an infrared (IR) target classifier trained on synthetic images of targets and tested on real images can perform as well as a classifier trained on real images alone. We also demonstrate that the combination of the real- and synthetic-image databases can be used to train a classifier whose performance exceeds that of classifiers trained on either database alone. After creating a large database of 80,000 synthetic images, two subset databases of 7,000 and 8,000 images were selected and used to train and test a classifier against two comparably sized, sequestered databases of real images. Synthetic-image selection was accomplished using classifiers trained on real images from the sequestered real-image databases; images were chosen if both the target and the target aspect were correctly identified. The results suggest that subsets of synthetic images can be chosen to selectively train target classifiers for specific locations and operational scenarios, and that it should be possible to train classifiers on synthetic images that outperform classifiers trained on real images alone.
This paper focuses on demonstrating the complexity of the optimization process for the sequential algorithm, a multi-stage algorithm in which each stage uses fewer bands than the previous stages. Specifically, this paper describes the process used to obtain the optimal confidence level and the class separation parameter to quantify hyperspectral detection performance using the sequential algorithm with Chebyshev's inequality test. This paper also presents the computational complexity involved in reaching the optimum confidence level and the recommended methodology for lessening the computational burden. The detection performance for different spatial resolutions is presented and compared with the ARES baseline performance using all spectral bands. The Forest Radiance I database collected with the HYDICE hyperspectral sensor is utilized. Scenarios include targets in the open, with footprints of 1 m, 2 m, and 4 m, and different times of day. The total area coverage and the number of targets used in this evaluation are approximately 10 km2 and 108, respectively. The description of the database and sensor parameters can be found.
A constant false alarm rate algorithm has been developed for use in multi-band mine detection. While it is often difficult to predict the spectral signatures of targets, the shape of the target may be known. This test exploits geometric target features and spectral differences between the target and the surrounding area. The algorithm is derived from a general statistical model of the data, which allows it to adapt to changing backgrounds and variable signatures.
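The paper's test is derived from a specific multi-band statistical model; purely as an illustration of the constant-false-alarm-rate principle, the sketch below implements a standard one-dimensional cell-averaging CFAR on synthetic data, in which the detection threshold scales with a local background estimate so that the false alarm rate stays roughly constant as the background changes.

```python
# Generic one-dimensional cell-averaging CFAR sketch (not the multi-band,
# shape-exploiting test of the paper): the threshold for each cell is a
# scaled estimate of the local background taken from reference cells around
# a guard band, so detection adapts to changing backgrounds.
import numpy as np

def ca_cfar(x, num_ref=16, num_guard=4, scale=4.0):
    n = len(x)
    detections = np.zeros(n, dtype=bool)
    for i in range(num_ref + num_guard, n - num_ref - num_guard):
        left = x[i - num_guard - num_ref : i - num_guard]
        right = x[i + num_guard + 1 : i + num_guard + 1 + num_ref]
        noise_level = np.mean(np.concatenate([left, right]))
        detections[i] = x[i] > scale * noise_level
    return detections

signal = np.random.default_rng(2).exponential(size=500)
signal[250] += 20.0                      # injected point target
print(np.flatnonzero(ca_cfar(signal)))   # indices declared as detections
```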
Correlation filters are attractive for SAR automatic target recognition (ATR) because of their distortion tolerance. Recently, a new filter called the extended maximum average correlation height (EMACH) filter was shown to exhibit a low false alarm rate while providing good distortion tolerance. The trade-off between distortion tolerance and clutter rejection is achieved in the EMACH filter by selecting a parameter β. The performance of this filter was examined using a simulated SAR database. In this paper, we develop a new filter called the eigen EMACH filter, based on decomposing the EMACH filter using eigen-analysis. We show that the eigen EMACH filter has better generalization ability than the EMACH filter, exhibits consistent performance over a wide range of β values, and provides a better representation of the desired class while retaining the clutter rejection capability of the original EMACH filter. We use the MSTAR databases to test its performance.
In this paper, we describe a shape space based approach for invariant object representation and recognition. In this approach, an object and all its similarity transformed versions are identified with a single point in a high-dimensional manifold called the shape space. Object recognition is achieved by measuring the geodesic distance between an observed object and a model in the shape space. This approach produced promising results in 2D object recognition experiments: it is invariant to similarity transformations and is relatively insensitive to noise and occlusion. Potentially, it can also be used for 3D object recognition.
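As an illustration of the shape-space idea (not necessarily the paper's exact manifold or metric), the sketch below computes a Kendall-style similarity-invariant distance between two 2-D point configurations: translation and scale are removed to form pre-shapes, and rotation is factored out by the modulus of a complex inner product.

```python
# Minimal sketch of a shape-space distance for 2-D point sets (a common
# Kendall-shape-space construction; the paper's exact manifold and metric may
# differ).  Each configuration is centered and scaled to unit norm, and
# rotation is factored out, giving a distance invariant to similarity
# transformations.
import numpy as np

def preshape(points):
    z = points[:, 0] + 1j * points[:, 1]   # complex representation of 2-D points
    z = z - z.mean()                       # remove translation
    return z / np.linalg.norm(z)           # remove scale

def shape_distance(p1, p2):
    z1, z2 = preshape(p1), preshape(p2)
    # |<z1, z2>| is invariant to rotation of either configuration
    return np.arccos(np.clip(abs(np.vdot(z1, z2)), 0.0, 1.0))

square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
rotated = square @ np.array([[0, -1], [1, 0]])       # same shape, rotated 90 deg
print(shape_distance(square, rotated))               # ~0: same point in shape space
```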
Two common image processing problems are determining the location of an object using a template when the size and rotation of the true target are unknown, and classifying an object into one of a library of objects, again using a template-based matching technique. When employing a maximum likelihood approach to these problems, complications occur due to local maxima on the likelihood surface. In previous work, we demonstrated a technique for object localization which employs a library of templates, starting from a smooth approximation and adding detail until the exact template is reached. Successively estimating the geometric parameters (i.e., size and rotation) using these templates achieves the accuracy of the exact template while remaining within a well-behaved 'bowl' in the search space, which allows standard maximization techniques to be used. In this work, we show how this technique can be extended to solve the classification problem using a multiple-template library. We introduce a steering parameter which, at every scale, allows us to compute a template as a linear combination of templates in the library. The algorithm begins the template matching using a smooth blob which is the smooth approximation common to all templates in the library. As the location and geometric parameter estimates are improved and detail is added, the smooth template is 'steered' towards the most likely template in the library, and thus classification is achieved.
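A minimal sketch of the steering idea follows, with a placeholder softmax weighting rule standing in for the paper's estimator: the matching template is a weighted linear combination of library templates, and the weights sharpen toward one class as the stages progress.

```python
# Sketch of a "steered" template: a weighted combination of library templates
# whose weights sharpen toward the most likely class.  The softmax weighting
# rule and temperature schedule are illustrative placeholders, not the
# estimator used in the paper.
import numpy as np

def steered_template(library, weights):
    # library: (K, H, W) stack of class templates; weights: (K,) simplex vector
    return np.tensordot(weights, library, axes=1)

def update_weights(library, image_patch, temperature):
    scores = np.array([np.sum(t * image_patch) for t in library])   # correlation scores
    scaled = (scores - scores.max()) / temperature                  # stabilized softmax
    w = np.exp(scaled)
    return w / w.sum()

rng = np.random.default_rng(3)
library = rng.normal(size=(4, 16, 16))
patch = library[2] + 0.1 * rng.normal(size=(16, 16))    # noisy instance of class 2

weights = np.full(4, 0.25)                  # start near the common smooth blob
for temperature in (10.0, 3.0, 1.0):        # sharpen the weights as stages progress
    weights = update_weights(library, patch, temperature)

final_template = steered_template(library, weights)     # effectively the class-2 template
print(weights.argmax(), weights.round(3))
```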
We have recently proposed an original approach for the statistical segmentation of an object, based on active contours. In this paper, we compare the performance of several Hausdorff distances as dissimilarity measures for silhouette discrimination. For this purpose, we apply to the silhouettes of a dataset six kinds of perturbations that can occur with an active contour technique, and we compute the correct-discrimination rate versus the rejection rate for each distance. We also propose a simple method to accelerate the computation of the Hausdorff distances.
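For reference, the standard symmetric Hausdorff distance between two silhouettes represented as point sets can be computed as below; the modified and partial variants compared in the paper, and the acceleration method, are not shown.

```python
# Minimal sketch of the symmetric Hausdorff distance between two silhouettes
# represented as point sets, using SciPy's directed Hausdorff routine.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(a, b):
    return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])

theta = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
circle = np.c_[np.cos(theta), np.sin(theta)]
ellipse = np.c_[1.2 * np.cos(theta), 0.8 * np.sin(theta)]
print(hausdorff(circle, ellipse))    # dissimilarity between the two contours
```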
Recognizing targets in IR images has been a long-standing and challenging problem. In this paper we outline a combination of detection methods based on optimized feature discrimination and correlation-based classification techniques that demonstrate an improved ability to separate targets from clutter. Preliminary results are shown on a small subset of data obtained from NVESD to illustrate the possible performance gains.
The problem of estimating aircraft pose information from monocular image data is considered using a Fourier descriptor based algorithm. The dependence of pose estimation accuracy on image resolution and aspect angle is investigated through simulations using sets of synthetic aircraft images. Further evaluation shows that good pose estimation accuracy can be obtained in real-world image sequences.
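A minimal sketch of Fourier-descriptor extraction for a closed 2-D contour is shown below; how such descriptors are mapped to aircraft pose estimates in the paper is not reproduced, and the toy elliptical outline is a placeholder.

```python
# Minimal sketch of Fourier descriptors for a closed 2-D contour: the boundary
# is treated as a complex sequence, its DFT is taken, and the low-order
# coefficient magnitudes (normalized by the first harmonic) form a compact,
# translation/scale/rotation-insensitive descriptor.
import numpy as np

def fourier_descriptors(contour, num_coeffs=10):
    z = contour[:, 0] + 1j * contour[:, 1]
    coeffs = np.fft.fft(z - z.mean())          # remove translation; DC becomes ~0
    coeffs = coeffs[1:num_coeffs + 1]          # keep low-order harmonics
    return np.abs(coeffs) / np.abs(coeffs[0])  # magnitudes, scale-normalized

theta = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
outline = np.c_[2.0 * np.cos(theta), np.sin(theta)]    # toy elliptical silhouette
print(fourier_descriptors(outline).round(3))
```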
A Bayesian classification algorithm is presented for discriminating buried land mines from buried and surface clutter in Ground Penetrating Radar (GPR) signals. The algorithm is based on multivariate normal (MVN) clustering, where feature vectors are used to identify populations (clusters) of mines and clutter objects. The features are extracted from two-dimensional images created from ground penetrating radar scans. MVN clustering is used to determine the number of clusters in the data and to create probability density models for target and clutter populations, producing the MVN clustering classifier (MVNCC). The Bayesian Information Criterion (BIC) is used to evaluate each model and determine the number of clusters in the data. An extension of the MVNCC allows the model to adapt to local clutter distributions by treating each of the MVN cluster components as a Poisson process and adaptively estimating the intensity parameters. The algorithm is developed using data collected by the Mine Hunter/Killer Close-In Detector (MH/K CID) at prepared mine lanes. The Mine Hunter/Killer is a prototype mine-detecting and -neutralizing vehicle developed for the U.S. Army to clear roads of anti-tank mines.
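As a rough stand-in for the MVN clustering and BIC-based model selection described above, the sketch below uses scikit-learn's GaussianMixture on synthetic feature vectors; the paper's actual GPR features and clustering machinery differ.

```python
# Minimal sketch of MVN clustering with BIC model selection: fit Gaussian
# mixtures with different numbers of components to feature vectors and keep
# the model with the lowest BIC.  The synthetic features are placeholders for
# the GPR-derived features used in the paper.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
features = np.vstack([rng.normal(0.0, 1.0, size=(100, 4)),      # clutter-like features
                      rng.normal(4.0, 0.5, size=(40, 4))])      # mine-like features

models = [GaussianMixture(n_components=k, covariance_type='full',
                          random_state=0).fit(features)
          for k in range(1, 6)]
bics = [m.bic(features) for m in models]
best = models[int(np.argmin(bics))]
print('selected number of clusters:', best.n_components)
```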
In many identification and target recognition applications, the incoming signal will have properties that render it amenable to analysis or processing in the Fourier domain. In such applications, however, it is usually essential that the identification or target recognition be performed in real time. An important constraint upon real-time processing in the Fourier domain is the time taken to perform the Discrete Fourier Transform (DFT). Ideally, a new Fourier transform should be obtained after the arrival of every new data point. However, the Fast Fourier Transform (FFT) algorithm requires on the order of N log2 N operations, where N is the length of the transform, and this usually makes calculation of the transform for every new data point computationally prohibitive. In this paper, we develop an algorithm to update the existing DFT to represent the new data series that results when a new signal point is received. Updating the DFT in this way reduces the computational order by a factor of log2 N. The algorithm can be modified to work in the presence of data window functions. This is a considerable advantage, because windowing is often necessary to reduce edge effects that occur because the implicit periodicity of the Fourier transform is not exhibited by the real-world signal. Versions are developed in this paper for use with the boxcar window and with the split triangular, Hanning, Hamming, and Blackman windows. A generalization of these results to 2D is also presented.
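For the rectangular (boxcar) window case, the recursive update can be sketched as follows: when the oldest sample leaves the length-N window and a new sample arrives, every DFT bin is updated in O(1), i.e. O(N) per new point rather than O(N log N) for a fresh FFT. The windowed variants developed in the paper require additional terms and are not shown.

```python
# Sliding-DFT update for a rectangular window: X_new[k] = (X[k] - x_old + x_new) * e^{j2πk/N}.
import numpy as np

def sliding_dft_update(X, x_old, x_new, N):
    k = np.arange(N)
    return (X - x_old + x_new) * np.exp(2j * np.pi * k / N)

rng = np.random.default_rng(5)
N = 64
x = rng.normal(size=N + 1)

X = np.fft.fft(x[:N])                              # DFT of the initial window
X_updated = sliding_dft_update(X, x[0], x[N], N)   # window slides by one sample
print(np.allclose(X_updated, np.fft.fft(x[1:N + 1])))   # True: matches a fresh DFT
```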
An algorithm is developed in the companion paper to update the existing DFT to represent the new data series that results when a new signal point is received. Updating the DFT in this way uses less computation than directly evaluating the DFT with the FFT algorithm, reducing the computational order by a factor of log2 N. The algorithm works in the presence of data window functions, including the rectangular, split triangular, Hanning, Hamming, and Blackman windows. In this paper, a hardware implementation of this algorithm using FPGA technology is outlined. Unlike traditional fully customized VLSI circuits, FPGAs represent a technical breakthrough in the corresponding industry: an FPGA implements thousands of gates of logic in a single IC chip and can be programmed by users at their site in a few seconds or less, depending on the type of device used. The risk is low and the development time is short. These advantages have made FPGAs very popular for rapid prototyping of algorithms in digital communication, digital signal processing, and image processing. Our paper addresses the related issues of implementation, using a hardware description language in the development of the design and the subsequent downloading onto the programmable hardware chip.
Two modifications of the genetic algorithm (GA) are proposed that employ gradient analysis of the fitness function and are integrated with the main genetic procedure. A combination of a relative weighted error factor and an adaptive mutation pool size accelerates convergence of the iterative process and indicates when the global optimum solution is found. Local gradient correction of the initial pool during iterations refines the search procedure. Computational experiments show that both modifications can increase the efficiency of the GA when applied to an image registration problem.
Over the years, imaging laser radar systems have been developed for military and civilian applications, among them the collection of 3D data for terrain modeling and object recognition. One part of the object recognition process is to estimate the size and orientation of the object. This paper concerns a vehicle size and orientation estimation process based on scanning laser radar data. Methods for estimating the length and width of vehicles are proposed. The work is based on the assumption that, viewed from the top, most vehicles are approximately rectangular in outline; thus, we have a rectangle-fitting problem. The first step in the process is sorting the data into lists containing object data and data from the ground closest to the object. Then a rectangle with minimal area is estimated based on object data only. We propose an algorithm for estimating the minimum-area rectangle containing the convex hull of the object data, from which estimates of the object's length and width can be retrieved. This first rectangle estimate is then improved using least squares methods based on both object and ground data; both linear and nonlinear least squares methods are described. The improved estimates of the length and width are less biased than the initial estimates. The methods are applied to both simulated and real laser radar data. The use of the minimum rectangle estimator to retrieve initial parameters for fitting more complex shapes is discussed.
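A minimal sketch of the first step is given below. It relies on the fact that a minimum-area enclosing rectangle has one side collinear with an edge of the convex hull, so it suffices to try each hull-edge direction; the least-squares refinement using ground data is omitted, and the toy footprint is a placeholder for real laser radar returns.

```python
# Sketch: minimum-area rectangle enclosing the convex hull of top-view object
# points, by testing each hull-edge orientation.
import numpy as np
from scipy.spatial import ConvexHull

def min_area_rectangle(points):
    hull_pts = points[ConvexHull(points).vertices]
    best = (np.inf, None, None)              # (area, length, width)
    for i in range(len(hull_pts)):
        edge = hull_pts[(i + 1) % len(hull_pts)] - hull_pts[i]
        angle = np.arctan2(edge[1], edge[0])
        c, s = np.cos(-angle), np.sin(-angle)
        rotated = hull_pts @ np.array([[c, -s], [s, c]]).T   # align edge with x-axis
        extent = rotated.max(axis=0) - rotated.min(axis=0)
        area = extent[0] * extent[1]
        if area < best[0]:
            best = (area, max(extent), min(extent))
    return best                              # area, length, width estimates

rng = np.random.default_rng(6)
# Toy "vehicle" footprint: points sampled from a rotated 6 m x 2.5 m rectangle
pts = rng.uniform([-3.0, -1.25], [3.0, 1.25], size=(400, 2))
theta = np.deg2rad(30.0)
pts = pts @ np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]]).T
print(min_area_rectangle(pts))
```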
In this paper, an approach for fast ship detection in infrared (IR) images based on a multi-resolution attention mechanism is proposed. To realize real-time image analysis, an attention mechanism is indispensable for focusing computational resources only on regions or information relevant to the task at hand. This paper discusses the sampling model, the generation of index information for area-of-interest (AOI) searching, the determination of the next fixation point, and target detection. The variance of neighboring nodes in the periphery is used to form a saliency map of the image: a node with higher saliency is more likely to correspond to the engine or other hot parts of a ship, while a straight line just below it can confirm the ship hypothesis. The paper introduces the sampling model and then discusses 'index region' detection and the subsequent saccade and analysis process. Finally, experimental results for the detection of ships of different sizes in infrared images are presented; these demonstrate that our approach can find ship targets effectively. Comparisons of the performance of our approach with those of some other approaches are also presented.
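As a toy illustration of variance-driven saliency, one ingredient of the attention mechanism described above, the sketch below computes a local-variance map and picks the most salient location as a fixation candidate; the multi-resolution (foveal/peripheral) sampling model and the ship-confirmation logic are not included, and the synthetic frame is a placeholder.

```python
# Minimal sketch of a variance-based saliency map: local variance over a small
# neighborhood highlights candidate areas of interest (e.g. the borders of a
# hot region in an IR frame).
import numpy as np
from scipy.ndimage import uniform_filter

def local_variance(image, size=9):
    mean = uniform_filter(image.astype(float), size)
    mean_sq = uniform_filter(image.astype(float) ** 2, size)
    return mean_sq - mean ** 2        # E[x^2] - E[x]^2 over each neighborhood

rng = np.random.default_rng(7)
frame = rng.normal(10.0, 1.0, size=(128, 128))      # toy IR background
frame[60:68, 40:80] += 8.0                          # hot, elongated "ship" region

saliency = local_variance(frame)
peak = np.unravel_index(np.argmax(saliency), saliency.shape)
print('first fixation candidate at', peak)
```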
Interaural head-related transfer functions are used to process speech signatures prior to neural net-based recognition. Data representing the head-related transfer function of a dummy head have been collected at MIT and made available on the Internet. These data are used to pre-process vowel signatures to mimic the effects of the human ear on speech perception. Signatures representing various vowels of the English language are then presented, for recognition purposes, to a multi-layer perceptron trained using the back-propagation algorithm. The focus of this paper is to assess the effects of the human interaural system on vowel recognition performance, particularly when using a classification system that mimics the human brain, such as a neural net.
Algorithms for tumor detection in digital mammography are developed and tested using a large database of normal and abnormal images. These algorithms build on previous work in digital mammography and involve two basic schemes: feature extraction, using image re-mapping for histogram modification, wavelet decomposition, and estimation of pertinent statistical parameters; and presentation of these features, in different formats, to a neural net-based detection system.
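A hedged sketch of the feature-extraction side is shown below, combining a crude histogram re-mapping, a 2-D wavelet decomposition (via PyWavelets), and per-subband statistics into a feature vector; the specific re-mapping, wavelet, statistics, and neural network used in the paper may differ, and the random region is a placeholder for a mammogram patch.

```python
# Sketch: histogram re-mapping + 2-D wavelet decomposition + subband statistics
# as a feature vector for a downstream neural-net detector (not shown).
import numpy as np
import pywt

def extract_features(region, wavelet='db2', levels=2):
    # crude histogram equalization as the re-mapping step
    ranks = np.argsort(np.argsort(region.ravel()))
    remapped = ranks.reshape(region.shape) / ranks.size

    coeffs = pywt.wavedec2(remapped, wavelet, level=levels)
    feats = []
    for band in coeffs[1:]:                 # detail subbands at each level
        for sub in band:
            feats += [sub.mean(), sub.std()]
    return np.array(feats)

region = np.random.default_rng(8).normal(size=(64, 64))   # placeholder image patch
print(extract_features(region).shape)
```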
Information Theory and Automatic Target Recognition
Many classification problems involve image or other high-dimensional data, and the classifiers must be designed from training data. The design and analysis of such systems, parameterized by unknown functions and based on a method of sieves to regularize the function estimates, is described. The test statistic is assumed to be the ideal test statistic with estimated functions substituted for the truth. The test statistic is decomposed into approximation error and estimation error components, providing analytical tools for determining the optimal sieve size.
Our work focuses on automated recognition of targets obscured by clutter. Modeling clutter as a filtered marked Poisson point process, an MMSE pose estimator of rigid objects in obscuring clutter is introduced. We also build asymptotic approximations for Bayesian posterior distributions based on the Fisher information. We relate the Fisher information to the Hilbert-Schmidt pose estimator, whose expected error is shown to be a lower bound on the error incurred by any other pose estimator.
Concepts from information theory have recently found favor in both the mainstream computer vision community and the military automatic target recognition community. In the computer vision literature, the principles of minimax entropy learning theory have been used to generate rich probabilistic models of texture and shape. In addition, the method of types and large deviation theory have permitted the difficulty of various texture and shape recognition tasks to be characterized by 'order parameters' that determine how fundamentally vexing a task is, independent of the particular algorithm used. These information-theoretic techniques have been demonstrated on traditional visual imagery in applications such as simulating cheetah skin textures and finding roads in aerial imagery. We discuss their application to problems in the specific domain of automatic target recognition using infrared imagery. We also review recent theoretical and algorithmic developments which permit learning minimax entropy texture models for infrared textures in reasonable timeframes.