A common approach to detecting targets in laser radar (LADAR) 3-dimensional x, y, and z imagery is to first estimate the ground plane. Once the ground plane is identified, the regions of interest (ROIs) are segmented based on height above that plane. The ROIs can then be classified based on their shape statistics (length, width, height, moments, etc.). In this paper, we present an empirical comparison of three different ground plane estimators. The first estimates the ground plane based on global constraints (a least median squares fit to the entire image). The second two are based on progressively more local constraints: a least median squares fit to each row and column of the image, and a local histogram analysis of the re-projected range data. These algorithms are embedded in a larger system that first computes the target height above the ground plane and then recognizes the targets based on properties within the target region. The evaluation is performed using 98 LADAR images containing eight different targets and structured clutter (trees). Performance is measured in terms of percentages of correct detections and false alarms.
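As a rough illustration of the global variant described above, the following Python sketch fits a ground plane with a least-median-of-squares style random-sampling search and then segments ROI points by height above that plane. It is a minimal sketch, not the authors' implementation; the trial count, plane parameterization, and height threshold are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): global ground-plane fit to a LADAR
# point cloud via a least-median-of-squares style random-sampling search,
# followed by height-above-plane segmentation of candidate target points.
import numpy as np

def fit_ground_plane(points, n_trials=500, rng=None):
    """points: (N, 3) array of x, y, z returns. Returns (a, b, c) with z = a*x + b*y + c."""
    rng = rng or np.random.default_rng(0)
    best_plane, best_median = None, np.inf
    for _ in range(n_trials):
        idx = rng.choice(len(points), size=3, replace=False)
        p = points[idx]
        A = np.c_[p[:, 0], p[:, 1], np.ones(3)]
        try:
            coeffs = np.linalg.solve(A, p[:, 2])        # exact plane through 3 sampled points
        except np.linalg.LinAlgError:
            continue
        residuals = points[:, 2] - (points[:, :2] @ coeffs[:2] + coeffs[2])
        med = np.median(residuals ** 2)                 # least-median-of-squares criterion
        if med < best_median:
            best_median, best_plane = med, coeffs
    return best_plane

def segment_rois(points, plane, min_height=0.5):
    """Keep points sufficiently far above the estimated ground plane (assumed threshold)."""
    heights = points[:, 2] - (points[:, :2] @ plane[:2] + plane[2])
    return points[heights > min_height]
```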
Laser vibrometry sensors measure minute surface motion collinear with the sensor's line of sight. If the vibrometry sensor has a high enough sampling rate, an accurate estimate of the surface vibration is measured. For vehicles with running engines, an automatic target recognition algorithm can use these measurements to produce identification estimates. The level of identification possible is a function of the distinctness of the vibration signature. This signature depends on many factors, such as engine type and vehicle weight. In this paper, we present results of using data mining techniques to assess the identification potential of vibrometry data. Our technique starts with unlabeled vibrometry measurements taken from a variety of vehicles. Then an unsupervised clustering algorithm is run on features extracted from this data. The final step is to analyze the resulting clusters and determine whether physical vehicle characteristics can be mapped onto the clusters.
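The abstract does not name the clustering algorithm or the features, so the sketch below uses k-means over simple spectral-peak features purely as a stand-in for the unsupervised step.

```python
# Illustrative stand-in only: k-means clustering of assumed spectral-peak
# features extracted from unlabeled vibrometry traces.
import numpy as np
from sklearn.cluster import KMeans

def spectral_features(signal, fs, n_peaks=5):
    """Return the frequencies of the strongest spectral peaks of one vibrometry trace."""
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    top = np.argsort(spectrum)[-n_peaks:]
    return np.sort(freqs[top])

def cluster_measurements(signals, fs, n_clusters=4):
    """Cluster the traces; cluster labels are then compared against vehicle attributes."""
    X = np.array([spectral_features(s, fs) for s in signals])
    return KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)
```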
3D target recognition is of significant interest because representing the object in 3D space could essentially provide a solution to the pose variation and self-occlusion problems that are major challenges in 2D pattern recognition. Correlation filters have been used in a variety of 2D pattern matching applications, and many correlation filter designs have been developed to handle problems such as rotations. Correlation filters also offer other benefits such as shift-invariance, graceful degradation, and closed-form solutions. Extending correlation filters to 3D is therefore a natural way to handle 3D pattern recognition problems. In this paper, we propose a 3D correlation filter design method based on cylindrical circular harmonic functions (CCHF) and use LADAR imagery to illustrate the good performance of CCHF filters.
LADAR imaging is unique in its potential to accurately measure the 3D surface geometry of targets. We exploit this 3D geometry to perform automatic target recognition on targets in the domain of military and civilian ground vehicles. Here we present a robust model based 3D LADAR ATR system which efficiently searches through target hypothesis space by reasoning hierarchically from vehicle parts up to identification of a whole vehicle with specific pose and articulation state. The LADAR data consists of one or more 3D point clouds generated by laser returns from ground vehicles viewed from multiple sensor locations. The key to this approach is an automated 3D registration process to precisely align and match multiple data views to model based predictions of observed LADAR data. We accomplish this registration using robust 3D surface alignment techniques which we have also used successfully in 3D medical image analysis applications. The registration routine seeks to minimize a robust 3D surface distance metric to recover the best six-degree-of-freedom pose and fit. We process the observed LADAR data by first extracting salient parts, matching these parts to model based predictions and hierarchically constructing and testing increasingly detailed hypotheses about the identity of the observed target. This cycle of prediction, extraction, and matching efficiently partitions the target hypothesis space based on the distinctive anatomy of the target models and achieves effective recognition by progressing logically from a target's constituent parts up to its complete pose and articulation state.
The problem of seamless scene integration from multiple 3-dimensional views of a location for surveillance or recognition purposes is one that continues to receive much interest. This technique holds the promise of increased ability to detect concealed targets, as well as better visualization of the scene itself. The process of creating an integrated scene 'model' from multiple range images taken at different views of the scene consists of several basic steps: (1) Matching of scene points across views, (2) Registration of the multiple views to a common reference frame, and (3) Integration of the multiple views into a complete 3D representation (such as a mesh or voxel space). We propose using a technique known as spin-map correlation to compute the initial scene point correspondences between views. This technique has the advantage of being able to perform the registration with minimal knowledge of viewing geometry or viewer location - the only requirement is that there is overlap between views. Registration is performed using the correspondences generated from spin-map matching to seed an Iterative Closest Point (ICP) algorithm. The ICP algorithm grows the list of correspondences and estimates the rigid transformation between the multiple views. Following registration of the disparate views, the surface is represented probabilistically in a voxel space that is then polygonised into a triangular facet model using the well-known marching cubes algorithm. We demonstrate this procedure using LADAR range images of an armored vehicle of interest.
We describe a method of acquiring ground vehicles in cluttered environments by making use of three-dimensional geometrical features. Our approach exploits a wide spectrum of shape and structure attributes to achieve very high performance target detection and scene interpretation capabilities. The system we present begins with extraction of terrain from an observed scene, and proceeds to cluster the remaining points into macroscopic objects. These objects are then subjected to a series of tests that use textural and structural measures at multiple scales to discriminate targets from natural and manmade clutter. We present experimental results on a new set of synthetic three-dimensional data, demonstrating excellent target detection and false alarm suppression performance.
In target recognition applications of discriminant or classification analysis, each 'feature' is the result of a convolution of an image with a filter, which may be derived from a feature vector. It is important to use relatively few features. We analyze an optimal reduced-rank classifier for the two-class case, assuming each population is Gaussian with zero mean, so that the classes differ only through their covariance matrices Σ1 and Σ2. The following matrix is considered: Λ = (Σ1 + Σ2)^(-1/2) Σ1 (Σ1 + Σ2)^(-1/2). We show that the k eigenvectors of this matrix whose eigenvalues are most different from 1/2 offer the best rank-k approximation to the maximum likelihood classifier. The matrix Λ and its eigenvectors were introduced by Fukunaga and Koontz; hence this analysis gives a new interpretation of the well known Fukunaga-Koontz transform. The optimality promised by this method holds if the two populations are exactly Gaussian with the same means. To check the applicability of this approach to real data, an experiment was performed in which several 'modern' classifiers were applied to infrared ATR data. In these experiments, a reduced-rank classifier (Tuned Basis Functions) outperformed the others. The competitive performance of the optimal reduced-rank quadratic classifier suggests that, at least for classification purposes, the imagery data behaves in a nearly Gaussian fashion.
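A minimal sketch of the construction above: form Λ from the two class covariances and keep the k eigenvectors whose eigenvalues lie farthest from 1/2. The use of SciPy's fractional matrix power is an implementation convenience, not part of the original analysis.

```python
# Sketch of the reduced-rank basis described above (assumed zero-mean Gaussian
# classes): Lambda = (S1+S2)^(-1/2) S1 (S1+S2)^(-1/2), keeping the k eigenvectors
# whose eigenvalues are farthest from 1/2.
import numpy as np
from scipy.linalg import fractional_matrix_power

def fukunaga_koontz_basis(X1, X2, k):
    """X1, X2: (n_samples, n_features) zero-mean samples from the two classes."""
    S1 = np.cov(X1, rowvar=False, bias=True)
    S2 = np.cov(X2, rowvar=False, bias=True)
    W = np.real(fractional_matrix_power(S1 + S2, -0.5))   # (S1+S2)^(-1/2)
    Lam = W @ S1 @ W
    vals, vecs = np.linalg.eigh(Lam)                       # eigenvalues lie in [0, 1]
    order = np.argsort(np.abs(vals - 0.5))[::-1]           # most different from 1/2 first
    return vecs[:, order[:k]], vals[order[:k]]
```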
Correlation-based filters (e.g., MACE, MACH) have been widely employed for automatic target acquisition. In general, a bank of filters is developed wherein each filter is trained to respond to a particular range of conditions (such as aspect angle). Individual filter outputs are used to determine the best match between objects in a scene and the training information. However, it is not uncommon for discrete clutter objects to correlate well with an individual filter, resulting in an unacceptable false alarm rate (FAR). It is the authors' hypothesis that although a clutter event may correlate well with an individual filter, there are discernible differences in the way clutter and targets correlate across the bank of filters. In this paper, the authors investigate a connectionist approach that combines the individual filter outputs in a non-linear manner for improved performance. Particular attention is given to designing the correlation filter constraints in conjunction with the combination approach to optimize performance.
Reliance on Automated Target Recognition (ATR) technology is essential to the future success of Intelligence, Surveillance, and Reconnaissance (ISR) missions. Although benefits may be realized through ATR processing of a single data source, fusion of information across multiple images and multiple sensors promises significant performance gains. A major challenge, as ATR fusion technologies mature, is the establishment of sound methods for evaluating ATR performance in the context of data fusion. The Deputy Under Secretary of Defense for Science and Technology (DUSD/S&T), as part of their ongoing ATR Program, has sponsored an effort to develop and demonstrate methods for evaluating ATR algorithms that utilize multiple data sources, i.e., fusion-based ATR. This paper presents results from this program, focusing on the target detection and cueing aspect of the problem. The first step in assessing target detection performance is to relate the ground truth to the ATR decisions. Once the ATR decisions have been mapped to ground truth, the second step in the evaluation is to characterize ATR performance. A common approach is to vary the confidence threshold of the ATR and compute the Probability of Detection (PD) and the False Alarm Rate (FAR) associated with each threshold. Varying the threshold, therefore, produces an empirical performance curve relating detection performance to false alarms. Various statistical methods have been developed, largely in the medical imaging literature, to model this curve so that statistical inferences are possible. One approach, based on signal detection theory, generalizes the Receiver Operating Characteristic (ROC) curve. Under this approach, the Free Response Operating Characteristic (FROC) curve models performance for search problems. The FROC model is appropriate when multiple detections are possible and the number of false alarms is unconstrained. The parameterization of the FROC model provides a natural method for characterizing both the operational environment and the ability of the ATR algorithm to detect targets. One parameter of the FROC model indicates the complexity of the clutter by characterizing the propensity for false alarms. The second parameter quantifies the separability between clutter and targets. Thus, the FROC model provides a framework for modeling and predicting ATR performance in multiple environments. This paper presents the FROC model for single sensor data and generalizes the model to handle the fusion case.
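A minimal sketch of the threshold sweep described above, assuming the ATR detections have already been mapped to ground truth. Reporting false alarms per image is one common FAR convention and is an assumption here, not taken from the paper.

```python
# Sketch: sweep the ATR confidence threshold to produce the empirical (FAR, PD)
# curve that a FROC-type model is subsequently fit to.
import numpy as np

def empirical_pd_far(confidences, is_true_target, n_targets, n_images):
    """confidences: score of every ATR detection; is_true_target: bool per detection."""
    confidences = np.asarray(confidences, dtype=float)
    is_true_target = np.asarray(is_true_target, dtype=bool)
    curve = []
    for thr in np.sort(np.unique(confidences))[::-1]:
        kept = confidences >= thr
        pd = np.count_nonzero(kept & is_true_target) / n_targets
        far = np.count_nonzero(kept & ~is_true_target) / n_images   # false alarms per image
        curve.append((far, pd))
    return curve
```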
Every year, large volumes of imagery are collected for the sole purpose of evaluating Automatic Target Recognition (ATR) algorithms. However, this data cannot be used without adequate truthing information for each image. Truthing information typically consists of the types and locations of the targets present in the imagery. Specifying this information for a large number of images is tedious, time consuming, and error prone. In this paper, we present a complete truthing system we call the Scoring, Truthing, And Registration Toolkit (START). The first component is registration, which involves aligning heterogeneous and homogenous sensor images of the same scene to a common reference frame. Once that reference frame has been determined, the second component, truthing, is used to specify target identity, position, orientation, and other scene characteristics. The final component, scoring, is used to assess the performance of a given algorithm as compared to the specified truth. The scoring module allows statistical comparisons to assess algorithm sensitivity to specific operating conditions (e.g., sensitive to object occlusion).
This paper analyzes the performance of ATR algorithms in clutter. The variability of target type and pose is accommodated by introducing a deformable template for every target type, with low-dimensional groups of geometric transformations representing position and pose. Signature variation of targets is taken into account by expanding deformable templates into robust deformable templates generated from the template and a linear combination of PCA elements spanning signature intensities. Detection and classification performance is characterized using ROC analysis. Asymptotic expressions for the probabilities of recognition errors are derived, yielding asymptotic error rates. The results indicate that the asymptotic error probabilities depend upon a parameter that characterizes the separation between the true target and the most similar but incorrect one. It is shown that the derived asymptotic expressions closely predict the performance of detection and identification of targets occluded by natural clutter.
In this study, an asymptotic performance analysis for target detection and identification through Bayesian hypothesis testing in infrared images is presented. The problem is posed using probabilistic representations within a Bayesian pattern-theoretic framework. The infrared clutter is modelled as a second-order random field. The targets are represented as rigid CAD models, and their infinite variety of pose is modelled as transformations on the templates. For the template matching in hypothesis testing, a metric distance based on empirical covariance is used. The asymptotic performance of the ATR algorithm under this metric and under the Euclidean metric is compared. The receiver operating characteristic (ROC) curves indicate that using the empirical covariance metric improves the performance significantly. These curves are also compared with curves based on analytical expressions; the analytical results predict the experimental results quite well.
NASA has been considering the use of Ka-band for deep space missions, primarily for downlink telemetry applications. At such high frequencies, although the link is expected to improve by a factor of four, the current Deep Space Network (DSN) antennas and transmitters become less efficient due to higher equipment noise figures and antenna surface errors. Furthermore, weather effects at Ka-band frequencies dominate the degradations in link performance and tracking accuracy. At lower frequencies, such as X-band, conventional CONSCAN or monopulse tracking techniques can be used without much complexity; at Ka-band, however, tracking a spacecraft in deep space presents additional challenges. The objective of this paper is to provide a survey of neural network trends as applied to the tracking of spacecraft in deep space at Ka-band under various weather conditions, and to examine the trade-off between tracking accuracy and communication link performance.
This paper is about automatic target detection (ATD) in unmanned aerial vehicle (UAV) imagery. Extracting reliable features under all conditions from a 2D projection of a target in UAV imagery is a difficult problem. However, since the target size information is usually invariant to the image formation process, we propose an algorithm for automatically estimating the size of a 3D target by using its 2D projection. The size information in turn becomes an important feature to be used in a knowledge-driven, multi-resolution-based algorithm for automatically detecting targets in UAV imagery. Experimental results show that our proposed ATD algorithm provides outstanding detection performance, while significantly reducing the false alarm rate and the computational complexity.
We propose adaptive anomaly detectors that find materials whose spectral characteristics are substantially different from those of the neighboring materials. The target spectral vectors are assumed to have different statistical characteristics from the background vectors. In order to detect anomalies, we use a dual rectangular window that separates the local area into two regions: the inner window region (IWR) and the outer window region (OWR). The statistical spectral differences between the IWR and OWR are exploited by generating subspace projection vectors onto which the IWR and OWR vectors are projected. Anomalies are detected if the projection separation between the IWR and OWR vectors is greater than a predefined threshold. Four different methods are used to produce the subspace projection vectors. The four proposed anomaly detectors have been applied to HYDICE (HYperspectral Digital Imagery Collection Experiment) images, and the detection performance of each method has been evaluated.
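The four projection-vector methods are not spelled out in the abstract, so the sketch below uses a simple mean-difference projection as one illustrative choice within the dual-window scheme described above.

```python
# Illustrative sketch of the dual rectangular window test: project IWR and OWR
# spectra onto a projection vector (here, their mean difference) and flag an
# anomaly when the normalized projection separation exceeds a threshold.
import numpy as np

def dual_window_anomaly(cube, inner=3, outer=9, threshold=2.0):
    """cube: (rows, cols, bands) hyperspectral image. Returns a boolean anomaly map."""
    rows, cols, bands = cube.shape
    half_in, half_out = inner // 2, outer // 2
    detections = np.zeros((rows, cols), dtype=bool)
    for r in range(half_out, rows - half_out):
        for c in range(half_out, cols - half_out):
            block = cube[r - half_out:r + half_out + 1, c - half_out:c + half_out + 1]
            mask = np.ones(block.shape[:2], dtype=bool)
            mask[half_out - half_in:half_out + half_in + 1,
                 half_out - half_in:half_out + half_in + 1] = False
            iwr = cube[r - half_in:r + half_in + 1,
                       c - half_in:c + half_in + 1].reshape(-1, bands)
            owr = block[mask]                              # OWR spectra around the IWR
            w = iwr.mean(axis=0) - owr.mean(axis=0)        # mean-difference projection vector
            w /= np.linalg.norm(w) + 1e-12
            sep = np.abs(iwr @ w - (owr @ w).mean()).mean()
            scale = (owr @ w).std() + 1e-12
            detections[r, c] = sep / scale > threshold
    return detections
```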
Test results of automatic target recognition of IR images are described. The methods are mainly based on template matching techniques; Synthetic Discriminant Function (SDF) filters are adopted to increase the robustness of the method and to reduce the computational load. Extensive tests are performed with a number of different scenarios and image noise levels. Dedicated refinements and operative adjustments to traditional approaches are implemented and described. The work originated with digitally generated target databases and proceeds with real IR images.
In the last ten years, many approaches have been proposed to address automatic target detection (ATD) in hyperspectral sensor imagery (HSI). Conspicuously missing from that list is a relatively unknown approach to time series analysis, called higher order zero-crossings (HOC). In this paper, we investigate HOC sequences and their application to target detection in HSI. HOC sequences are based on a surprisingly fruitful connection between filter banks and signal zero-crossings. They are generated from the application of a bank of filters to a finite signal or time series having zero mean. The application of each filter from the bank changes the signal oscillation pattern and alters the zero-crossing count. Accordingly, the application of each member filter gives rise to a zero-crossing count. We consider the oscillation pattern changes, or variations thereof, as a function of frequency modulation (FM) that may be intrinsic to hyperspectral signatures and that apparently has never been exploited in the target community. Investigating FM functions in signatures naturally led us to investigate the existence of intrinsic AM (amplitude modulation) as well. Preliminary results indicate that intrinsic AM-FM characteristics of objects' hyperspectral signatures may be useful for target detection.
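A small sketch of how an HOC sequence can be generated, using repeated first differences as the filter bank; that choice is a standard one in the HOC literature and is assumed here rather than taken from the paper.

```python
# Sketch of a higher-order crossings (HOC) sequence: apply a bank of filters
# (here, repeated first differences) to a zero-mean signature and count the
# zero-crossings after each filtering stage.
import numpy as np

def zero_crossings(x):
    """Number of sign changes in a 1-D sequence."""
    signs = np.sign(x)
    signs[signs == 0] = 1
    return int(np.count_nonzero(np.diff(signs)))

def hoc_sequence(signature, order=10):
    """signature: 1-D spectrum or time series; returns HOC counts D_1..D_order."""
    x = np.asarray(signature, dtype=float)
    x = x - x.mean()                      # HOC theory assumes a zero-mean series
    counts = []
    for _ in range(order):
        counts.append(zero_crossings(x))
        x = np.diff(x)                    # next filter in the difference-operator bank
    return counts
```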
It is well known that the overall performance of an automatic imaging target recognition system is strongly affected by the detection technique used. Recently, the wavelet and matching pursuit methods have been merged into an excellent methodology for detecting targets in a sequence of infrared images with a high detection rate and low false alarms. The wavelet transform is used as a detector of the regions of interest, which may include false alarms, while the matching pursuit uses the known target's features to reduce the clutter (false alarms) from the wavelet output. Only non-orthogonal matching pursuit has been used for this purpose because its orthogonal version is more computationally expensive. This prevents the exploitation of orthogonal matching pursuit, which can model the image with fewer terms and thereby significantly facilitate target extraction and clutter reduction. In this paper, we introduce the use of the fast orthogonal search method, which is an orthogonal modeling technique, instead of matching pursuit for small infrared imaging target detection. The fast orthogonal search performs the orthogonalization process in a more efficient way, so its computational time is much less than that of the original orthogonal matching pursuit. Moreover, the fast orthogonal search provides a precise extraction of the target's model parameters that may be used for tracking purposes.
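As a hedged illustration of greedy orthogonal modeling in this spirit (written in a plain, unoptimized form, not the authors' fast orthogonal search implementation), the sketch below refits all selected dictionary terms by least squares at every step.

```python
# Sketch of greedy orthogonal modeling: at each step, pick the dictionary atom
# most correlated with the residual, then refit all chosen atoms orthogonally.
import numpy as np

def greedy_orthogonal_model(signal, dictionary, n_terms):
    """dictionary: (n_atoms, n_samples) candidate terms; returns selected indices and coefficients."""
    selected = []
    residual = np.asarray(signal, dtype=float)
    coeffs = np.zeros(0)
    for _ in range(n_terms):
        scores = np.abs(dictionary @ residual)            # correlation with current residual
        scores[selected] = -np.inf                        # do not reselect an atom
        selected.append(int(np.argmax(scores)))
        A = dictionary[selected].T                        # (n_samples, n_selected)
        coeffs, *_ = np.linalg.lstsq(A, signal, rcond=None)
        residual = signal - A @ coeffs                    # orthogonal refit of all chosen terms
    return selected, coeffs
```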
The paper presents a novel algorithm based on Independent Component Analysis (ICA) for the detection of small targets present in hyperspectral images. Compared to previous approaches, the algorithm provides two significant improvements. First, an important speedup is obtained by preprocessing the data through spectral screening. Spectral screening is a technique that measures the similarity between pixel vectors by calculating the angle between them. For a certain threshold α, a set of pixel vectors is selected such that the angle between any two of them is larger than α and the angle between any of the pixel vectors not selected and at least one selected vector is smaller than α. In addition to significantly reducing the size of the data, spectral screening reduces the influence of dominating features. The second improvement is the modification of the Infomax algorithm such that the number of components that are produced is lower than the number of initial observations. This change eliminates the need for feature reduction through PCA, and leads to increased accuracy of the results. Results obtained by applying the new algorithm on data from the hyperspectral digital imagery collection experiment (HYDICE) show that, compared with previous ICA based target detection algorithms developed by the authors, the novel approach has an increased efficiency, at the same time achieving a considerable speedup. The experiments confirm the efficiency of ICA as an attractive tool for hyperspectral data processing.
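A minimal sketch of the spectral screening rule as stated above: keep a pixel vector only if its angle to every previously kept vector exceeds α, so that every discarded vector is within α of at least one kept vector.

```python
# Sketch of spectral screening: greedy angle-threshold selection of pixel vectors.
import numpy as np

def spectral_screen(pixels, alpha_deg=5.0):
    """pixels: (N, bands) spectra. Returns indices of the screened subset."""
    cos_thr = np.cos(np.deg2rad(alpha_deg))
    unit = pixels / (np.linalg.norm(pixels, axis=1, keepdims=True) + 1e-12)
    kept = []
    for i, v in enumerate(unit):
        # keep v only if its angle to every kept vector is larger than alpha
        if all(np.dot(v, unit[j]) < cos_thr for j in kept):
            kept.append(i)
    return kept
```

The screened subset, rather than the full image, is then passed to the ICA stage, which is where the reported speedup comes from.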
Rather than emitting pulses, passive radar systems rely on illuminators of opportunity, such as TV and FM radio, to illuminate potential targets. These systems are particularly attractive since they allow receivers to operate without emitting energy, rendering them covert. Many existing passive radar systems estimate the locations and velocities of targets. This paper focuses on adding an automatic target recognition (ATR) component to such systems. Our approach to ATR compares the Radar Cross Section (RCS) of targets detected by a passive radar system to the simulated RCS of known targets. To make the comparison as accurate as possible, the received signal model accounts for aircraft position and orientation, propagation losses, and antenna gain patterns. The estimated positions become inputs for an algorithm that uses a coordinated flight model to compute probable aircraft orientation angles. The Fast Illinois Solver Code (FISC) simulates the RCS of several potential target classes as they execute the estimated maneuvers. The RCS is then scaled by the Advanced Refractive Effects Prediction System (AREPS) code to account for propagation losses that occur as functions of altitude and range. The Numerical Electromagnetic Code (NEC2) computes the antenna gain pattern, so that the RCS can be further scaled. The Rician model compares the RCS of the illuminated aircraft with those of the potential targets. This comparison results in target identification.
Ultra-wideband (UWB) ground penetrating radar (GPR) systems are useful for extracting and displaying information for target recognition purposes. The frequency content of the projected signals is designed to match the size and type of prospective targets and environments. The soil medium is generally dispersive and, if moist, dissipative as well. Hence, target signatures, whether in the time, frequency, or joint time-frequency domains, will substantially depend on the target's burial depth and on the soil's moisture content. To be useful for target recognition purposes, the signatures of a given target must be known for several typical burial depths and soil moisture contents. These signatures are then used as templates in the classification process. In an attempt at reducing the number of needed templates, we focus here on the propagation of the pulses in the dissipative soil medium. Disregarding for the moment the scattering interaction with the target, we examine the distortion of the emitted interrogating pulses as they propagate through the soil and are backscattered to the receiver. We simulated such returned target echoes earlier for several burial depths using a Method-of-Moments (MoM) code. They could then all be translated to equivalent echoes from the target at some selected standardized depth and soil moisture, and vice versa. A sufficiently accurate signal processing method for depth conversion could be employed to reduce the number of templates required for the correct classification of subsurface targets with a GPR.
This paper examines a technique for detection of anti-personnel mines at varied, unknown depths. The method attempted in this study is based on a subtractive fuzzy logic algorithm. A comparison of the false alarm rate, the detection rate, and the error rate is used to test the performance of the detection scheme in cases where the mine depth is both known and unknown. The effect of a priori knowledge of the data on the performance of the detection scheme is observed, as well as the effect of the SNR level used to train the fuzzy logic detector. The algorithm is tested using real GPR data representing a nonmetallic anti-personnel mine and other objects such as a stone, a brick, or a metallic sphere.
Hyperspectral (HS) data contains spectral response information that provides detailed descriptions of an object. These new sensor data are useful in automatic target recognition applications. However, such high-dimensional data introduces problems due to the curse of dimensionality, the need to reduce the number of features (λ responses) used to accommodate realistic small training set sizes, and the need to employ discriminatory features and still achieve good generalization (comparable training and test set performance). HS sensors produce high-dimensional data; this is characterized by a training set size (Ni) per class that is less than the number of input features (NF). A new high-dimensional generalized discriminant (HDGD) feature extraction algorithm and a new modified branch and bound (MBB) feature selection algorithm are described and compared to other feature reduction methods for two HS target detection applications (mine and vehicle detection). Both space and spectral parameters are adapted. A new blob-coloring hit-miss transform is introduced.
In this paper, an adaptive target detection algorithm for FLIR imagery is proposed that is based on measuring differences between structural information within a target and its surrounding background. At each pixel in the image, a dual window is opened where the inner window (inner image vector) represents a possible target signature and the outer window (consisting of a number of outer image vectors) represents the surrounding scene. These image vectors are preprocessed by two directional highpass filters to obtain the corresponding image edge vectors. The target detection problem is formulated as a statistical hypothesis testing problem by mapping these image edge vectors into two transformations, P1 and P2, via the Eigenspace Separation Transform (EST) and Principal Component Analysis (PCA). The first transformation P1 is a function of the inner image edge vector. The second transformation P2 is a function of both the inner and outer image edge vectors. Under hypothesis H1 (target), the difference between the two functions is small; under hypothesis H0 (clutter), the difference is large. Results of testing the proposed target detection algorithm on two large FLIR image databases are presented.
The next generation of infrared imaging trackers and seekers will allow for the implementation of smarter tracking algorithms, able to keep a positive lock on a targeted aircraft in the presence of countermeasures. Pattern recognition algorithms will be able to select targets based on features extracted from all possible target images. Artificial neural networks provide an important class of such algorithms. In particular, probabilistic neural networks perform almost as well as optimal Bayesian classifiers, by approximating the probability density functions of the features of the objects. Furthermore, these neural networks generate an output that indicates the confidence they have in their answer. We have evaluated the possibility of integrating such neural networks in an infrared imaging seeker emulator devised by the Defense Research and Development Establishment at Valcartier. We describe the characteristics extracted from the images and define translation-invariant features from them. We give a basis for the selection of which features to use as input to the neural network. We build the network and test it on real data. Results are shown, which indicate a remarkable efficiency of over 98% correct recognition. For most of the images on which the neural network makes mistakes, even a human expert would probably have been mistaken. We also build a reduced version of this network, with 82% fewer neurons and only 0.6% less precision. Such a neural network could well be used in a real-time system because its computing time on a normal PC gives a rate of over 5,300 patterns per second.
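A minimal probabilistic neural network sketch in the spirit described above: Parzen-window class densities with a Gaussian kernel, returning a class label plus a normalized confidence. The kernel width and the feature vectors are placeholders, not the paper's values.

```python
# Sketch of a probabilistic neural network (PNN) decision: Parzen-window class
# densities with a Gaussian kernel; the confidence is the normalized posterior.
import numpy as np

def pnn_classify(x, train_features, train_labels, sigma=0.5):
    """train_features: (N, d); train_labels: (N,) ints. Returns (predicted class, confidence)."""
    scores = {}
    for cls in np.unique(train_labels):
        diffs = train_features[train_labels == cls] - x
        kernel = np.exp(-np.sum(diffs ** 2, axis=1) / (2.0 * sigma ** 2))
        scores[cls] = kernel.mean()                 # Parzen estimate of p(x | class)
    total = sum(scores.values()) + 1e-12
    best = max(scores, key=scores.get)
    return best, scores[best] / total               # confidence in the winning class
```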
Hyperspectral imagery is characterized by its wealth of spectral information, which makes it ideal for spectral classification. High spectral resolution comes at the cost of spatial resolution, however, making spatial classification difficult. As part of a thrust to develop a more optimal approach that uses both spatial and spectral information, we examine how high spectral resolution can be used to enhance spatial pattern recognition. We focus on targets made up of fewer than about five pixels, which thus have little shape or orientation information in individual bands. We then use an "Adaptive Spectral Unmixing" (ASU) operator on the hyperspectral data to estimate sub-pixel abundances as accurately as possible. Noting that vehicles of interest are often symmetric shapes, we demonstrate that geometric moments can be useful tools for rotationally-invariant shape discrimination of small targets. We use a pattern-matching strategy for spatial pattern recognition, and use the moments to guide our search of potential target templates. This approach avoids the under-constrained problem of trying to distill source shape characteristics, in all of their possible variations, from the abundance space. We describe the software testing package used, and present the results of preliminary tests on hyperspectral data.
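A small sketch of rotation-invariant geometric moments computed from a target chip (for example, a sub-pixel abundance map from the unmixing step); the two invariants shown are the standard first two Hu combinations, used here as representative examples rather than the paper's specific moment set.

```python
# Sketch: normalized central moments and the first two Hu rotation invariants
# of a small 2-D abundance (or intensity) chip.
import numpy as np

def central_moment(abund, p, q):
    rows, cols = np.indices(abund.shape)
    m00 = abund.sum()
    xc, yc = (cols * abund).sum() / m00, (rows * abund).sum() / m00
    return (((cols - xc) ** p) * ((rows - yc) ** q) * abund).sum()

def rotation_invariants(abund):
    """abund: 2-D chip; returns (phi1, phi2), invariant to in-plane rotation."""
    m00 = abund.sum()
    eta = lambda p, q: central_moment(abund, p, q) / m00 ** (1 + (p + q) / 2.0)
    phi1 = eta(2, 0) + eta(0, 2)
    phi2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4.0 * eta(1, 1) ** 2
    return phi1, phi2
```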
In this paper, a new 1-D hybrid Automatic Target Recognition (ATR) algorithm is developed for sequential High Range Resolution (HRR) radar signatures. The proposed hybrid algorithm combines Eigen-Template based Matched Filtering (ETMF) and Hidden Markov modeling (HMM) techniques to achieve superior HRR-ATR performance. In the proposed hybrid approach, each HRR test profile is first scored by ETMF which is then followed by independent HMM scoring. The first ETMF scoring step produces a limited number of "most likely" models that are target and aspect dependent. These reduced number of models are then used for improved HMM scoring in the second step. Finally, the individual scores of ETMF and HMM are combined using Maximal Ratio Combining to render a classification decision. Classification results are presented for the MSTAR data set via ROC curves.
In this paper, I report new results from an ongoing study addressing the classification of radar targets when both phase and amplitude are used under diverse polarization ellipse orientation and ellipticity angles. Rayleigh quotient, Bhattacharyya, divergence, Kolmogorov, Matusita, and Kullback-Leibler distances, the Bayesian probability of error, the divergence-based probability of error, the Bhattacharyya-based probability of error, and receiver operating characteristic curves are derived for both amplitude-only and joint amplitude-and-phase representations of real synthetic aperture radar target signatures. It is shown that target separations and classification performance are consistently orders of magnitude better for the joint amplitude-and-phase case than for the traditionally used amplitude-only signatures.
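As a worked example of one separability measure listed above, the sketch below computes the Bhattacharyya distance between two Gaussian classes; this distance also bounds the Bayes error via P_e ≤ sqrt(P1 P2) · exp(-B).

```python
# Sketch: Bhattacharyya distance between two Gaussian class models
# N(mu1, cov1) and N(mu2, cov2).
import numpy as np

def bhattacharyya_gaussian(mu1, cov1, mu2, cov2):
    cov = 0.5 * (cov1 + cov2)
    diff = (mu1 - mu2).reshape(-1, 1)
    term1 = 0.125 * float(diff.T @ np.linalg.solve(cov, diff))      # mean-separation term
    term2 = 0.5 * np.log(np.linalg.det(cov) /
                         np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))  # covariance term
    return term1 + term2
```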
In automatic target recognition applications, an important task is to obtain a denoised signal of the object. In this paper, the reconstruction of 1D deterministic signals (for example, range profiles) corrupted by random signal shift and additive white Gaussian noise using the 2D bispectrum is considered. Combined bispectrum-filtering techniques based on smoothing the noisy bispectrum estimates with 2D linear and nonlinear filters are proposed. It is shown that bispectrum estimates obtained by the conventional direct bispectrum estimator are corrupted by fluctuation errors and are biased. The performance of the proposed bispectrum-based signal reconstruction methods is analyzed using two conventional criteria: the fluctuation variance and the bias of the reconstructed signal. The numerical simulation results show that 2D filtering of the real and imaginary components of the noisy bispectrum estimates is the most efficient in the sense of minimum MS errors.
In our earlier work, a Two-Pass motion estimation Algorithm (TPA) was developed to estimate a motion field for two adjacent frames in an image sequence, where contextual constraints are handled by several Markov Random Fields (MRFs) and the maximum a posteriori (MAP) configuration is taken to be the resulting motion field. Currently, in the disciplines of digital libraries and video processing, the extraction and representation of visual objects are of utmost interest. Instead of estimating the motion field, in this paper we focus on segmenting out visual objects based on spatial and temporal properties present in two contiguous frames under the MRF-MAP-MFT scheme. To achieve object segmentation within the framework of EM optimization, a novel concept, the "motion boundary field", is introduced, which can turn off interactions between different object regions and at the same time remove spurious object boundaries. Furthermore, in light of the generally smooth and slow velocities between two contiguous frames, we found that, when calculating matching blocks, assigning different weights to different locations can result in better object segmentation.
Detection of small, faint, and/or obscured targets in a sequence of noisy images is not trivial. In this case, target and background (texture) features are generally indistinguishable in the original image domain. So, the image has to be transformed to a domain in which those features are separable. The wavelet transform has been shown to be an excellent methodology for this purpose: image segmentation can be performed by exploiting its multi-scale analysis capability. This paper reviews general wavelet-based methods for image target detection. Although the paper reviews the most recent wavelet-based target detection methods available in the open literature, it focuses on illustrating the different ways of using wavelet coefficients as a tool for target-background separation. Furthermore, this paper aims to offer a quick look at the many approaches, to highlight the authors' most recent developments in this field, and to serve as background for new advances.
Support Vector Machines (SVMs) have generated excitement and interest in the pattern recognition community due to their generalization, performance, and ability to operate in high dimensional feature spaces. Although SVMs are generated without the use of user-specified models, required hyperparameters, such as Gaussian kernel width, are usually user-specified and/or experimentally derived. This effort presents an alternative approach for the selection of the Gaussian kernel width via analysis of the distributional characteristics of the training data projected on the 'trained' SVM (margin values). The efficacy of a particular kernel width can be visually determined via one-dimensional density estimate plots of the training data margin values. Projecting the data onto the SVM hyperplane allows the one-dimensional analysis of the data from the viewpoint of the 'trained' SVM. The effect of kernel parameter selection on class-conditional margin distributions is demonstrated in the one-dimensional projection subspace, and a criterion for unsupervised optimization of kernel width is discussed. Empirical results are given for two classification problems: the 'toy' checkerboard problem and a high dimensional classification problem using simulated High-Resolution Radar (HRR) targets projected into a wavelet packet feature space.
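A minimal sketch of the diagnostic described above: train an RBF SVM at a candidate kernel width, project the training data onto the learned decision function to obtain margin values, and form class-conditional density estimates of those values. The scikit-learn and SciPy APIs are used for convenience; this is not the original implementation, and it assumes a two-class problem.

```python
# Sketch: class-conditional densities of training-set margin values for a
# candidate RBF kernel width (gamma), used to judge that width visually.
import numpy as np
from sklearn.svm import SVC
from scipy.stats import gaussian_kde

def margin_value_densities(X, y, gamma):
    """Returns a dict of per-class kernel density estimates over the margin values."""
    svm = SVC(kernel="rbf", gamma=gamma, C=1.0).fit(X, y)
    margins = svm.decision_function(X)        # margin value of each training sample
    return {cls: gaussian_kde(margins[y == cls]) for cls in np.unique(y)}

# Usage idea: evaluate several gammas and prefer one whose class-conditional
# margin densities are well separated yet not collapsed onto +/-1.
```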
It has been reported that underwater target models, spheres and cylinders, can be detected and classified in background acoustic noise. In this paper, the author presents his recent finding that underwater targets are detectable in acoustic background noise in open waters. Using a resonance detection technique, the G-Transform, the noise background of a number of AUTEC sample data files with mammal clicks was analyzed. From the noise backgrounds in these data files, a number of possible target signatures were observed. This suggests that real underwater targets may be detected and classified passively in background noise.
In this paper, the author presents recent findings from applying a passive broadband bionic sonar technique to the same data files with marine mammal "clicks". Using a resonance detection technique, a number of data files with mammal clicks were analyzed. From these data files, many unique mammal "click" signatures were observed. These results seem to indicate that individual marine mammals can be classified and possibly identified.
The current generation of correlation systems attempting to provide a Single Integrated Picture (SIP) have concentrated on improving quality from the situational awareness (SA) and tracking perspective with limited success, while not addressing the combat identification (CID) issue at all. Furthermore, decision time has lengthened, not decreased, as more and more sensor data are made available to the commanders, much of which is video in origin. Many efforts are underway to build a network of sensors, including the Army's Future Combat System (FCS), the Air Force Multi-mission Command and Control Aircraft (MC2A), Network-Centric Collaborative Targeting (NCCT), and the follow-on to the Navy's Cooperative Engagement Capability (CEC). Each of these programs has the potential to increase the precision of the targeting data with successful correlation algorithms while eliminating dual track reports, but none have combined or will combine disparate sensor data into a cohesive target with a high confidence of identification. In this paper, we address an architecture that solves the track correlation problem using frequency plane pattern recognition techniques that can also provide a CID capability. We also discuss statistical considerations and performance issues.
This paper presents an aircraft recognition system, which addresses realistic concerns resulting from the imaging process and the environment surrounding the aircraft. The system employs a bottom up approach, where recognition begins by extracting low level features (e.g., lines), which are subsequently combined into more complex sets of line groupings representing parts of an aircraft, as viewed from a generic viewpoint. Knowledge about aircraft is represented in the form of whole/part shape description and the connectedness property, and is embedded in production rules, which primarily aim at finding instances of the aircraft parts in the image and checking the connectedness property between the parts. The system has demonstrated robustness against occlusion, shadows, excessive background clutter and many forms of image degradation.
Grammars have been used for the formal specification of programming languages, and there are a number of commercial products which now use grammars. However, these have tended to be focused mainly on flow control type applications. In this paper, we consider the potential use of picture grammars and inductive logic programming in generic image understanding applications, such as object recognition. A number of issues are considered, such as what type of grammar needs to be used, how to construct the grammar with its associated attributes, difficulties encountered with parsing grammars followed by issues of automatically learning grammars using a genetic algorithm. The concept of inductive logic programming is then introduced as a method that can overcome some of the earlier difficulties.
A new approach for automatic battle tank recognition and segmentation has been developed. This paper presents the design and implementation of this new algorithm. The main ideas, approaches, limitations, and possible extensions for future work are also discussed. The approach consists of three phases. In the first phase (foreground and background separation), the foreground targets are discriminated from the background based on feature data such as gray (or color) levels and statistical data such as the gray level distribution and histogram. In the second phase (preliminary individual target recognition), each individual target is detected by a region growing algorithm, and each possible target is reconstructed. In the third phase, the targets are recognized by syntactic analysis. The syntactic analysis extracts all basic components of a tank and determines the relative relationships among the components based on analysis of the waveform of the boundary distance function from the centroid. The experiments show very satisfactory results.
Smoke detection and monitoring is required for the implementation of advanced forest fire fighting strategies and for the validation of smoke dispersion models. The latter involves the measurement of smoke column properties. The method proposed in this paper is based on the application of computer-based image processing techniques to visual images taken from fire-spread tests. The method presented involves the application of wavelets and optical flow for fire smoke detection and monitoring. A set of experimental results is reported in the paper, demonstrating the promise of the presented system.