Group velocity is typically defined as the derivative of the dispersion relation. However, we show that this definition is not appropriate in general and is applicable only to wave equations with constant coefficients. We generalize the concept of group velocity and show that, in general, one also has a group acceleration. Explicit expressions are derived and examples are given.
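For reference, a minimal mathematical sketch in the standard ray-theory setting; the generalized form below is our reconstruction, not taken from the abstract. For a constant-coefficient wave equation with dispersion relation \(\omega = \Omega(k)\), the group velocity is

    v_g = \frac{d\Omega}{dk}.

If the coefficients vary in space, \(\omega = \Omega(k, x)\), the ray equations

    \dot{x} = \frac{\partial\Omega}{\partial k}, \qquad \dot{k} = -\frac{\partial\Omega}{\partial x}

make the group velocity evolve along each ray, with group acceleration

    \ddot{x} = \Omega_{kk}\,\dot{k} + \Omega_{kx}\,\dot{x} = -\,\Omega_{kk}\,\Omega_{x} + \Omega_{kx}\,\Omega_{k},

which vanishes exactly when \(\Omega\) does not depend on \(x\), i.e., in the constant-coefficient case.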
The capability of ultra-wideband (UWB) radar systems to extract and display signature information useful for target recognition purposes has already been demonstrated. The frequency content of the transmitted signals is designed to match the size and kind of prospective targets and environments. Low frequencies are required for deep penetration into the ground, and high frequencies for detailed target information. Such conflicting requirements cannot always be satisfied. The complex permittivity of a soil varies substantially with its moisture content. Dry soils have a relative permittivity close to that of most dielectric mines, with low contrast and detection difficulties as consequences. Moist soils have a high complex-valued dielectric constant, which may prevent sufficient penetration of the high frequencies. Moreover, the moisture content of the soil and the target burial depth distort the returned echo and hence also the target signature. In the present work we investigate the backscattered radar echoes of a metal target and a dielectric target under illumination by the waveform from an aboveground radar when they are buried at a few representative depths in Yuma soil of a few different moisture contents. These echoes are simulated by the Method of Moments (MoM) and then used to determine the targets' signatures as generated by a signal-adaptive time-frequency distribution. These time-frequency distributions can then be used as templates for actual target classification purposes using measured data.
This paper considers the problem of classifying radar targets when both phase and amplitude are used, over diverse polarization-ellipse orientation and ellipticity angles. Rayleigh quotients and Bhattacharyya distances are derived for synthetic aperture radar target signatures, and it is shown that target separations are orders of magnitude larger in this case than in the traditionally used amplitude-only case.
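As a concrete illustration, here is a minimal sketch of the two separability measures named above, assuming Gaussian class models; stacking the real and imaginary parts of complex signatures into one feature vector (an assumption of this sketch) is one way phase information enters the class statistics.

    import numpy as np

    def bhattacharyya(mu1, S1, mu2, S2):
        """Bhattacharyya distance between two Gaussian classes."""
        S = 0.5 * (S1 + S2)
        d = mu1 - mu2
        term1 = 0.125 * d @ np.linalg.solve(S, d)
        term2 = 0.5 * np.log(np.linalg.det(S) /
                             np.sqrt(np.linalg.det(S1) * np.linalg.det(S2)))
        return term1 + term2

    def rayleigh_quotient(w, mu1, S1, mu2, S2):
        """Fisher-style separation of the two classes along direction w."""
        between = (w @ (mu1 - mu2)) ** 2
        within = w @ (S1 + S2) @ w
        return between / within

    # Toy usage with synthetic feature vectors (real/imag parts stacked).
    rng = np.random.default_rng(0)
    X1 = rng.normal(0.0, 1.0, (100, 4))
    X2 = rng.normal(0.5, 1.0, (100, 4))
    mu1, S1 = X1.mean(0), np.cov(X1.T)
    mu2, S2 = X2.mean(0), np.cov(X2.T)
    print(bhattacharyya(mu1, S1, mu2, S2))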
This paper presents an algorithm for automatic segmentation of small vehicle targets in MSTAR images. The segmenter is based on a histogram-threshold technique, detects both target vehicles and their shadows, and consists of three parts. First, the main component of the pre-processing part is a morphological closing filter, which reduces the intensity of speckle in the images. Second, the segmenter performs a histogram-threshold operation; this part is built around the EFC-based model-selection algorithm, used to estimate the image histogram with a mixture of normal densities, and a new method to compute thresholds. In this paper, we introduce a new linear method for computing multi-level thresholds from a mixture of normal densities. Third, a post-processing operation removes any small detected artefacts other than targets of interest.
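The EFC-based model-selection step is specific to the paper, so the sketch below substitutes an ordinary EM-fitted Gaussian mixture (via scikit-learn) and one plausible multi-level threshold rule, placing each threshold at the equal-posterior crossing between adjacent components; both substitutions are our assumptions, not the paper's method.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def multilevel_thresholds(pixels, n_components=3):
        """Fit a mixture of normals to the gray-level distribution and place
        one threshold between each pair of adjacent component means."""
        gmm = GaussianMixture(n_components=n_components, random_state=0)
        gmm.fit(pixels.reshape(-1, 1))
        order = np.argsort(gmm.means_.ravel())
        means = np.sort(gmm.means_.ravel())
        grid = np.linspace(pixels.min(), pixels.max(), 1024)
        post = gmm.predict_proba(grid.reshape(-1, 1))[:, order]
        thresholds = []
        for a in range(n_components - 1):
            # equal-posterior crossing between adjacent components
            mask = (grid > means[a]) & (grid < means[a + 1])
            idx = np.argmin(np.abs(post[mask, a] - post[mask, a + 1]))
            thresholds.append(grid[mask][idx])
        return thresholds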
Numerous feature detectors have been defined for detecting military vehicles in natural scenes. These features can be computed for a given image chip containing a known target and used to train a classifier. This classifier can then be used to assign a label to an unlabeled image chip. The performance of the classifier depends on the quality of the feature set used. In this paper, we first describe a set of features commonly used by the Automatic Target Recognition (ATR) community. We then analyze feature performance on a vehicle identification task in laser radar (LADAR) imagery. Our features are computed over both the range and reflectance channels. In addition, we perform feature subset selection using two different methods and compare the results. The goal of this analysis is to determine which subset of features to choose in order to optimize performance in LADAR Autonomous Target Acquisition (ATA).
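The two subset-selection methods compared in the paper are not identified in the abstract; as a placeholder, the following sketch implements greedy forward selection scored by cross-validated accuracy of a nearest-neighbor classifier (both choices are assumptions).

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier

    def forward_select(X, y, max_features):
        """Greedy forward feature-subset selection: at each step, add the
        feature that most improves cross-validated accuracy."""
        chosen, remaining = [], list(range(X.shape[1]))
        while remaining and len(chosen) < max_features:
            scores = [(cross_val_score(KNeighborsClassifier(),
                                       X[:, chosen + [f]], y, cv=5).mean(), f)
                      for f in remaining]
            best_score, best_f = max(scores)
            chosen.append(best_f)
            remaining.remove(best_f)
        return chosen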
Recent work in the area of laser radar automatic target recognition of stationary mobile targets has utilized synthetic range imagery to derive correlation filters or templates. This paper examines some of the parameters that need to be considered when generating the synthetic data. In addition to the standard parameters such as range, target aspect angle, and sensor elevation, parameters such as sensor velocity, scan time, and target position in the scene can affect how well the synthetic data matches real-world data. The investigation of these parameters is conducted with synthetic data produced by the Infrared Modeling and Analysis simulation package. Some of the synthetic imagery is then compared with measured data.
Automatic Target Recognition (ATR) algorithm performance is sensitive to variability in the observed target signature. Algorithms are developed and tested under a specific set of operating conditions and then are often required to perform well under very different conditions (referred to as Extended Operating Conditions, or EOCs). The stability of the target signature as the operating conditions change dictates the success or failure of the recognition algorithm. Laser vibrometry is a promising sensor modality for vehicle identification because target signatures tend to remain stable under a variety of EOCs. A micro-Doppler vibrometry sensor measures surface deflection at a very high frequency, enabling the surface vibrations of a vehicle to be sensed from afar. Vehicle identification is possible since most vehicles with running engines have a unique vibration signature defined by the engine type. In this paper, we present an ATR algorithm that operates on data collected from a set of accelerometers. These contact accelerometers were placed at a variety of locations on three target vehicles to emulate an ideal laser vibrometer. We discuss a set of features that are useful for discriminating the three target categories, and we present classification results based on these features.
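As an illustration of the kind of features such a classifier might use (the paper's actual feature set is not specified in the abstract), the sketch below extracts the dominant vibration frequency and the relative energy in its first few harmonics from an accelerometer time series.

    import numpy as np
    from scipy.signal import welch

    def vibration_features(x, fs, n_harmonics=4):
        """Illustrative engine-vibration features: the fundamental frequency
        (strongest spectral peak) and the fraction of total power near each
        of its first few harmonics."""
        f, pxx = welch(x, fs=fs, nperseg=4096)
        fund = f[np.argmax(pxx)]
        feats = [fund]
        for h in range(2, n_harmonics + 2):
            band = (f > h * fund * 0.95) & (f < h * fund * 1.05)
            feats.append(pxx[band].sum() / pxx.sum())
        return np.array(feats)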
Separable filters, because they are specified separately in each dimension, require less memory and present opportunities for faster computation. Mahalanobis and Kumar presented a method for deriving separable correlation filters, but the filters were required to satisfy a restrictive assumption and were thus not fully optimized. In this work, we present a general procedure for deriving separable versions of any correlation filter using the singular value decomposition (SVD), and prove that it is optimal for separable filters based on the Maximum Average Correlation Height (MACH) criterion. Further, we show that additional separable components may be used to improve the performance of the filter, with only a linear increase in computational and memory requirements. MSTAR data is used to demonstrate the effects on the sharpness of correlation peaks and on locational precision as the number of separable components is varied.
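A minimal sketch of the SVD construction: truncating the SVD of a 2-D filter to r terms gives the best r-term separable approximation in the Frobenius sense, and each rank-1 term is applied as one row pass plus one column pass, so a full 2-D correlation is replaced by 2r 1-D correlations.

    import numpy as np
    from scipy.signal import correlate, correlate2d

    def separable_terms(h, r):
        """Split a 2-D filter h into r rank-1 (separable) components via SVD."""
        U, s, Vt = np.linalg.svd(h)
        return [(np.sqrt(s[i]) * U[:, i], np.sqrt(s[i]) * Vt[i, :])
                for i in range(r)]

    def separable_correlate(img, terms):
        """Correlate with the rank-r approximation: r horizontal 1-D passes
        plus r vertical 1-D passes."""
        out = np.zeros_like(img, dtype=float)
        for col, row in terms:
            tmp = correlate(img, row[None, :], mode='same')   # horizontal pass
            out += correlate(tmp, col[:, None], mode='same')  # vertical pass
        return out

    # The rank-r result converges to the full 2-D correlation as r grows.
    rng = np.random.default_rng(1)
    img, h = rng.normal(size=(64, 64)), rng.normal(size=(9, 9))
    full = correlate2d(img, h, mode='same')
    approx = separable_correlate(img, separable_terms(h, r=5))

As r approaches the rank of h, the approximation becomes exact, which matches the paper's observation that extra separable components improve performance at only a linear cost.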
The performance of infrared (IR) target identification classifiers, trained on randomly selected subsets of target chips taken from larger databases of either synthetic or measured data, is shown to improve rapidly with increasing subset size. This increase continues until the new data no longer provides additional information, or the classifier cannot handle the information, at which point classifier performance levels off. It is also shown that subsets of data selected with advance knowledge can significantly outperform randomly selected sets, suggesting that classifier training sets must be carefully selected if optimal performance is desired. Performance is also shown to depend on the quality of the data used to train the classifier. Thus, while increasing training-set size generally improves classifier performance, the level to which performance can be raised depends on the similarity between the training data and the testing data. In fact, if the data to be added to a given training set is unlike the testing data, performance will often not improve and may even diminish: having too much data can reduce performance as much as having too little. Our results again demonstrate that an IR target-identification classifier, trained on synthetic images of targets and tested on measured images, can perform as well as a classifier trained on measured images alone. We also demonstrate that the combination of the measured and synthetic image databases can be used to train a classifier whose performance exceeds that of classifiers trained on either database alone. The results suggest that it may be possible to select data subsets from image databases that optimize target-classifier performance for specific locations and operational scenarios.
I promote an alternative philosophy for the design of infrared-target-detection algorithms. This philosophy focuses on first finding and eliminating natural clutter from a scene, and then finding and preserving candidate targets in that scene. The reverse approach is the most commonly adopted one in the infrared ATR (automatic target recognition) community. This alternative is appealing because it should significantly reduce the amount of out-of-context information to be processed by a classifier. I show how to apply sensor domain knowledge, common sense, and multivariate regression to the problem of infrared target detection. A proof-of-principle experiment and its results are discussed.
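The abstract does not spell out how the multivariate regression is used; one plausible reading, purely our assumption, is to regress each pixel on its spatial neighborhood, treat the prediction as natural clutter, and keep the residual that the background model cannot explain.

    import numpy as np

    def clutter_residual(img, radius=2):
        """Predict each pixel from its neighborhood via least squares; the
        residual suppresses well-modeled natural clutter and preserves
        pixels the background model cannot explain (candidate targets)."""
        H, W = img.shape
        offs = [(dy, dx) for dy in range(-radius, radius + 1)
                         for dx in range(-radius, radius + 1)
                         if (dy, dx) != (0, 0)]
        core = img[radius:H - radius, radius:W - radius].ravel()
        A = np.stack([img[radius + dy:H - radius + dy,
                          radius + dx:W - radius + dx].ravel()
                      for dy, dx in offs], axis=1)
        w, *_ = np.linalg.lstsq(A, core, rcond=None)
        return (core - A @ w).reshape(H - 2 * radius, W - 2 * radius)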
Hierarchical Target Model Analysis (HTMA) is an automatic pattern matching process for categorizing tactical targets. Stored target model information is re-projected into the image space using the sensor camera model state vector. The analysis is carried out in image gradient angle space for greater flexibility and reduced processing. Re-sampling the gradient angle space allows the classification process to work at a wider variety of target ranges. The target model database is built from an assortment of both target operating and background environmental conditions. Incremental classification is possible by applying the matching strategy at increasing target resolution levels, whether self-induced or induced by range closure. The first application of this process has been to thermal imagery; it can easily be extended to other image domains.
In this paper, we investigate several fusion techniques for designing a composite classifier to improve the performance (probability of correct classification) of FLIR ATR. The motivation behind the fusion of ATR algorithms is that if each contributing technique in a fusion algorithm (composite classifier) emphasizes learning at least some features of the targets that are not learned by the other contributing techniques, the fusion may improve the overall probability of correct classification of the composite classifier. In this research, we use four ATR algorithms for fusion, and we employ averaged-Bayes-classifier, committee-of-experts, stacked-generalization, winner-takes-all, and ranking-based fusion techniques to design the composite classifiers. The experimental results show an improvement of more than 5% over the best individual performance.
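A minimal sketch of three of the fusion rules named above, operating on per-classifier posterior matrices; the interfaces and the Borda-style ranking rule are our assumptions about the details.

    import numpy as np

    def averaged_bayes(posteriors):
        """posteriors: list of (n_samples, n_classes) arrays, one per classifier."""
        return np.mean(posteriors, axis=0).argmax(axis=1)

    def winner_takes_all(posteriors):
        """Trust, per sample, the single most confident classifier."""
        P = np.stack(posteriors)                 # (n_clf, n_samples, n_classes)
        best_clf = P.max(axis=2).argmax(axis=0)  # most confident classifier
        return P[best_clf, np.arange(P.shape[1])].argmax(axis=1)

    def ranking_fusion(posteriors):
        """Borda-style fusion: sum class ranks across classifiers."""
        ranks = [p.argsort(axis=1).argsort(axis=1) for p in posteriors]
        return np.sum(ranks, axis=0).argmax(axis=1)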
Early in almost every engineering project, a decision must be made about tools: should I buy off-the-shelf tools, or should I develop my own? Either choice can involve significant cost and risk. Off-the-shelf tools may be readily available, but they can be expensive to purchase and to maintain licenses for, and they may not be flexible enough to satisfy all project requirements. On the other hand, developing new tools permits great flexibility, but it can be time- (and budget-) consuming, and the end product still may not work as intended. Open source software has the advantages of both approaches without many of the pitfalls. This paper examines the concept of open source software, including its history, unique culture, and informal yet closely followed conventions. These characteristics influence the quality and quantity of software available, and ultimately its suitability for serious ATR development work. We give an example where Python, an open source scripting language, and OpenEV, a viewing and analysis tool for geospatial data, have been incorporated into ATR performance evaluation projects. While this case highlights the successful use of open source tools, we also offer important insight into risks associated with this approach.
A cooperation between the European Aeronautic Defence and Space Company AG (EADS) and the Bavarian Ministry of the Interior was started at the beginning of 2001 to develop an application of automatic target recognition technology for police helicopter missions. The Bavarian Police Air Support Unit is the main support partner and first user. Bavarian police helicopters are equipped with a modern infrared system (FLIR), especially for night missions. EADS has extensive knowledge in the area of sensor image exploitation and automatic target recognition (ATR); the technology was originally developed for military aircraft reconnaissance missions. The same software kernel is used for a flight prototype integrated in a police helicopter. The integration concept presented in this paper is set up so as not to interfere with the existing FLIR system in any way. The flight prototype, which is described in detail, consists of standard commercial off-the-shelf (COTS) hardware components and has the main functionality of detecting pre-selected object classes. The flight prototype enables comprehensive in-flight testing of the automatic target recognition application. The test procedure and first results of the flight tests are explained with selected examples. The cooperation will continue in order to further enhance the operational effectiveness of the Bavarian police helicopters.
Hyperspectral imagery (HSI), a passive infrared imaging technique that creates images of fine resolution across the spectrum, is currently being considered for Army tactical applications. An important tactical application of infrared (IR) hyperspectral imagery is the detection of low-contrast targets, including targets that may employ camouflage, concealment and deception (CCD) techniques [1,2]. Spectral reflectivity characteristics were previously used by Kwon et al. [3] for efficient segmentation between different materials such as painted metal, vegetation and soil in visible to near-IR bands in the range of 0.46-1.0 microns. We are currently investigating HSI in which the wavelength spans 7.5-13.7 microns. The energy in this range of wavelengths is almost entirely emitted rather than reflected; therefore, the gray level of a pixel is a function of the temperature and emissivity of the object. This is beneficial since light level and reflection need not be considered in the segmentation. We present results of a step-wise segmentation analysis of the long-wave infrared (LWIR) hyperspectrum utilizing various classifier architectures applied to full-band, broad-band and narrow-band features derived from the Spatially Enhanced Broadband Array Spectrograph System (SEBASS) database. Stepwise segmentation demonstrates some of the difficulties in the multi-class case. These results give an indication of the added capability that hyperspectral imagery and the associated algorithms will bring to bear on the target acquisition problem.
An approach to automatic target cueing (ATC) in hyperspectral images, referred to as K-means reclustering, is introduced. The objective is to extract spatial clusters of spectrally related pixels having specified and distinctive spatial characteristics. K-means reclustering has three steps: spectral cluster initialization, spectral clustering and spatial re-clustering, plus an optional dimensionality reduction step. It provides an alternative to classical ATC algorithms based on anomaly detection, in which pixels are classified as anomalies or background clutter. K-means reclustering is used to cue targets of various sizes in AVIRIS imagery. Statistical performance and computational complexity are evaluated experimentally as a function of the designated number of spectral classes (K) and the initially specified spectral cluster centers.
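A minimal sketch of the three steps as described: spectral clustering of pixels with K-means, then spatial re-clustering via connected components, keeping clusters whose pixel counts match the specified spatial characteristics. The size test here stands in for whatever spatial criteria the paper actually uses.

    import numpy as np
    from scipy import ndimage
    from sklearn.cluster import KMeans

    def kmeans_recluster(cube, K, min_pix, max_pix):
        """cube: (rows, cols, bands) hyperspectral image. Cluster pixels
        spectrally, then re-cluster each spectral class spatially and keep
        connected components with target-like pixel counts."""
        rows, cols, bands = cube.shape
        labels = KMeans(n_clusters=K, n_init=10, random_state=0).fit_predict(
            cube.reshape(-1, bands)).reshape(rows, cols)
        cues = np.zeros((rows, cols), dtype=bool)
        for k in range(K):
            comp, n = ndimage.label(labels == k)   # spatial connected components
            sizes = np.bincount(comp.ravel())
            for c in range(1, n + 1):
                if min_pix <= sizes[c] <= max_pix:  # distinctive spatial size
                    cues |= (comp == c)
        return cues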
This paper presents an algorithm based on Independent Component Analysis (ICA) for the detection of small targets in hyperspectral images. ICA is a multivariate data analysis method that attempts to produce statistically independent components; it is based on fourth-order statistics. Small, man-made targets in a natural background can be seen as anomalies in the image scene and correspond to independent components in the ICA model. The algorithm described here starts by preprocessing the hyperspectral data through centering and sphering, thus eliminating the first- and second-order statistics. It then separates the features present in the image using an ICA-based algorithm that performs a gradient-descent minimization of the mutual information between frames. The resulting frames are ranked according to their kurtosis (defined as the normalized fourth-order moment of the sample distribution); frames with high kurtosis values indicate the presence of small man-made targets. Thresholding the frames using zero detection in their histograms further identifies the targets. The effectiveness of the method has been studied on data from the Hyperspectral Digital Imagery Collection Experiment (HYDICE). Preliminary results show that small targets present in the image are separated from the background into different frames and that information pertaining to them is concentrated in these frames. Frame selection using kurtosis, followed by thresholding, leads to automated identification of the targets. The experiments show that the method provides a promising new approach for target detection.
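A runnable sketch of the pipeline, with one substitution: FastICA stands in for the paper's gradient-descent minimization of mutual information (centering and sphering are performed internally by FastICA). The kurtosis ranking follows the abstract.

    import numpy as np
    from scipy.stats import kurtosis
    from sklearn.decomposition import FastICA

    def ica_target_frames(cube, n_components=10):
        """cube: (rows, cols, bands). Separate independent component frames
        and rank them by kurtosis, high-kurtosis frames first."""
        rows, cols, bands = cube.shape
        X = cube.reshape(-1, bands)
        ica = FastICA(n_components=n_components, whiten='unit-variance',
                      random_state=0)
        S = ica.fit_transform(X)                 # (pixels, components)
        frames = S.T.reshape(n_components, rows, cols)
        k = kurtosis(S, axis=0)                  # excess (normalized 4th-order) kurtosis
        order = np.argsort(k)[::-1]              # high kurtosis first
        return frames[order], k[order]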
Composite correlation filters (also known as synthetic discriminant function, or SDF, filters) are attractive for automatic target recognition (ATR) due to their built-in shift-invariance and their potential for trading off distortion tolerance against discrimination. Although the recognition performance of many advanced correlation filters is attractive, their computational complexity can be daunting, particularly for ATR applications where target detection and identification must be achieved in limited time. In this paper, we discuss some methods to reduce the complexity of designing correlation filters, of performing the cross-correlation, and of processing the resulting correlation outputs.
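One standard complexity reduction of the kind discussed, shown as a sketch: performing the cross-correlation in the frequency domain, where the filter spectrum can be precomputed once per filter and reused for every frame. Note this computes circular correlation; zero-padding handles boundary effects in practice.

    import numpy as np

    def fft_correlate(image, filt):
        """Cross-correlation via FFT: O(N log N) per image instead of one
        O(M) inner product per pixel for an M-pixel filter."""
        F_img = np.fft.fft2(image)
        F_flt = np.fft.fft2(filt, s=image.shape)   # zero-padded to image size
        plane = np.real(np.fft.ifft2(F_img * np.conj(F_flt)))
        return plane                                # peak location -> target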
Boosting has emerged as a popular combination technique for refining weak classifiers. Pioneered by Freund and Schapire, numerous variations of the AdaBoost algorithm have emerged, such as Breiman's arc-fs algorithm. The central theme of these methods is the generation of an ensemble of weak learners using modified versions of the original training set, with emphasis placed on the more difficult instances. The validation stage then aggregates results from each element of the ensemble using some predetermined rule. In this paper the wavelet-decomposition-based codebook classifier proposed by Chan et al. is used as the learning algorithm. Starting with the whole training set, modifications to the training set are made at each iteration by re-sampling the original training data with replacement. The weights used in the re-sampling are determined using different algorithms, including AdaBoost and arc-fs. The accuracies of the generated ensembles are then determined using various combination techniques such as simple voting and weighted sums. Boosting improves upon the two classifier methods (K-means and LVQ) by exploiting their inherent codebook nature.
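A minimal sketch of the boosting-by-resampling loop described above, with AdaBoost weight updates; the base learner is left abstract here, whereas the paper plugs in the wavelet codebook classifier (binary labels and the exact update form are assumptions of this sketch).

    import numpy as np

    def boost_by_resampling(X, y, base_fit, base_predict, rounds=10):
        """AdaBoost-style ensemble built by re-sampling the training set with
        replacement according to instance weights (hard examples emphasized).
        base_fit(X, y) -> model; base_predict(model, X) -> labels in {0, 1}."""
        n = len(y)
        w = np.full(n, 1.0 / n)
        models, alphas = [], []
        rng = np.random.default_rng(0)
        for _ in range(rounds):
            idx = rng.choice(n, size=n, replace=True, p=w)
            model = base_fit(X[idx], y[idx])
            miss = (base_predict(model, X) != y)
            err = np.clip(w[miss].sum(), 1e-10, 1 - 1e-10)
            alpha = 0.5 * np.log((1 - err) / err)
            w *= np.exp(alpha * np.where(miss, 1.0, -1.0))  # up-weight misses
            w /= w.sum()
            models.append(model)
            alphas.append(alpha)
        return models, np.array(alphas)

At prediction time the ensemble output is the alpha-weighted vote of the individual models, one of the combination rules the paper compares.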
In this paper, we present an intelligent image compression system whereby regions of interest (ROI) and background information are coded independently of each other. We apply less compression (more bits) to regions of interest (targets), and more compression (fewer bits) to background data. This methodology preserves relevant features of targets for further analysis, and preserves the background only to the extent of providing contextual information. The resulting system dramatically reduces the bandwidth/storage requirements of the digital imagery, while preserving the target-specific utility of the imagery.
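A toy block-DCT sketch of the idea, assuming a binary ROI mask: blocks touching the ROI get a fine quantizer (more bits after entropy coding), background blocks get a coarse one. The actual coder used in the paper is not specified in the abstract.

    import numpy as np
    from scipy.fft import dctn, idctn

    def roi_compress(img, roi_mask, q_target=4.0, q_background=32.0, block=8):
        """Quantize block-DCT coefficients finely inside the ROI and coarsely
        outside it; an entropy coder would follow in a real system."""
        out = img.astype(float).copy()   # edge remainders kept as-is
        H, W = img.shape
        for i in range(0, H - H % block, block):
            for j in range(0, W - W % block, block):
                q = q_target if roi_mask[i:i+block, j:j+block].any() \
                    else q_background
                coeffs = dctn(img[i:i+block, j:j+block], norm='ortho')
                coeffs = np.round(coeffs / q) * q    # coarse q -> fewer bits
                out[i:i+block, j:j+block] = idctn(coeffs, norm='ortho')
        return out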
One trend in modern object recognition from images is the use of multiple features and sensors that are combined for the object recognition task. To get better classification results, the features used for classification should be physically 'orthogonal'. To be independent of the kind of features and of their combination method, it is necessary to represent each feature in a unified measure that defines the quality of the feature in the examined image. The measure must be unified, because only such measures can be combined into a meaningful global result. This paper presents a method that normalizes different kinds of local features. A probabilistic approach is used to provide the unified measure. To map the feature information to a probabilistic interpretation, a generalized function model is used; it is largely independent of the type of application. Two examples of the presented method are shown: the first uses the Chamfer distance to measure edge features, the second a gray-value correlation coefficient.
Kernel-based Feature Extraction (KFE) is an emerging nonlinear discriminant feature extraction technique. In many classification scenarios, using KFE allows the dimensionality of raw data to be reduced while class separability is preserved or even improved. KFE offers better performance than alternative linear algorithms because it employs nonlinear discriminating information among the classes. In this paper, we explore the potential application of KFE to radar signatures, as might be used for Automatic Target Recognition (ATR). Radar signatures can be problematic for many traditional ATR algorithms because of their unique characteristics: for example, some unprocessed radar signatures are high dimensional, linearly inseparable, and extremely sensitive to aspect changes. Applying KFE to High Range Resolution (HRR) radar signatures, we observe that it is quite effective on HRR data in terms of preserving or improving separability while reducing the dimensionality of the original data. Furthermore, our experiments indicate how many extracted features are needed for HRR radar signatures.
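The specific KFE criterion is not given in the abstract; as a representative kernel-based extractor, the sketch below implements RBF-kernel PCA, which reduces dimensionality through nonlinear features (a discriminant variant would additionally use class labels).

    import numpy as np

    def kernel_pca_features(X, n_features, gamma=0.1):
        """Kernel PCA with an RBF kernel: project training samples onto the
        leading nonlinear components in feature space."""
        sq = np.sum(X**2, axis=1)
        K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
        n = len(X)
        J = np.eye(n) - np.ones((n, n)) / n
        Kc = J @ K @ J                           # center in feature space
        vals, vecs = np.linalg.eigh(Kc)
        idx = np.argsort(vals)[::-1][:n_features]
        # projections of the training points onto the top components
        return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 1e-12))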
This paper on adaptive image segmentation and classification describes research on statistical pattern recognition combined with methods of object recognition by geometric matching of model and image structures. In addition, aspects of sensor fusion for airborne application systems such as terminal missile guidance are considered, using image sequences of multispectral data from real sensor systems and from computer simulations. The main aspect of the adaptive classification is the support of model-based structural image analysis by detection of image segments representing specific objects, e.g. forests, rivers and urban areas. The classifier, based on textural features, is automatically adapted to changes of the textural signatures during target approach by interpreting the segmentation results of each frame of the image sequence.
Modified versions of the basic genetic operations in an evolutionary algorithm (reproduction, crossover and mutation) are proposed for the 2D grayscale image registration problem. Two modifications of the reproduction phase are the deletion of clones and of genes with the same or similar parameter values, and a local correction of the reproduction pool. Local correction is implemented as two consecutive stages: random search and local refinement. The RC-crossover is introduced, which takes advantage of the best genes of the population while avoiding a direct replacement of the worse parameter values with their better counterparts. Mutation with memory aims to explore all poorly represented areas of the search space in order to eliminate the possibility of overlooking a better (or the best) solution. Computational experiments show that the proposed modifications can improve the convergence of the evolutionary procedure when applied to the 2D grayscale image registration problem.
In military applications it is of utmost importance to detect and identify targets in infrared (IR) sequences. Usually the targets of interest are small and moving with great velocity. In addition, IR sequences are extremely noisy due to systemic noise incurred by the sensing instrument and noise from the environment. In this paper, we develop a method that can effectively detect and identify targets of interest in noisy IR sequences by combining temporal, spectral, and spatial bandpass filtering. First, bandpass filtering in the spectral domain is conducted to remove noise, especially the systemic kind. Next, candidate target locations are declared through bandpass filtering of each temporal pixel process; by taking advantage of the fact that the targets of interest are fast-moving, the background and random noise are largely removed. The estimated targets of interest in each IR frame are further refined by post-processing in the spatial domain. The final targets of interest are then declared after a consistency check across adjacent IR frames using an adaptive Hough transform. Experimental results based on the proposed method suggest desirable performance.
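A sketch of the temporal-bandpass stage alone, under assumed cut-off frequencies tied to expected target motion: each pixel's temporal process is band-pass filtered so static background and slow drift are suppressed, and high-energy pixels are declared candidates. The filter order, threshold, and band edges are all illustrative choices.

    import numpy as np
    from scipy.signal import butter, filtfilt

    def temporal_bandpass_detect(seq, fps, f_lo, f_hi, thresh=5.0):
        """seq: (frames, rows, cols) IR sequence. Band-pass each pixel's
        temporal process; fast-moving targets pass, static background and
        slow drift do not. Needs enough frames for zero-phase filtering."""
        b, a = butter(3, [f_lo / (fps / 2), f_hi / (fps / 2)], btype='band')
        filtered = filtfilt(b, a, seq, axis=0)
        energy = filtered.std(axis=0)            # per-pixel band energy
        score = (energy - energy.mean()) / energy.std()
        return score > thresh                    # candidate target mask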
Typically, when one tackles an Automatic Target Detection (ATD) problem in image data, an assumption is made that the sample of target pixels is statistically different from the sample of background pixels in an immediate neighborhood of the target. Algorithms are then devised to recognize groups of (or individual) outlier pixels as indicating a possible target to be further processed by an Automatic Target Recognition (ATR) algorithm. In this paper, we present a novel approach for image enhancement that raises the intensity of outlier pixels while suppressing background pixels; simple thresholding of the enhanced image thus becomes a powerful ATD algorithm. The approach is not a pixel-level algorithm, as it is derived and implemented in the frequency domain. This also implies that, since the algorithm is not specifically intensity-based, low-SNR targets can be significantly enhanced whenever their frequency-domain characteristics are outliers compared to those of the background. Full performance statistics over a large, clutter-rich IR dataset are presented and compared with other ATD algorithms.
Despite the success of wavelet decompositions in other areas of statistical signal and image processing, current wavelet-based image models are inadequate for modeling patterns in images, due to the presence of unknown transformations (e.g., translation, rotation, location of lighting source) inherent in most pattern observations. In this paper we introduce a hierarchical wavelet-based framework for modeling patterns in digital images. This framework takes advantage of the efficient image representations afforded by wavelets, while accounting for unknown translation and rotation. Given a trained model, we can use this framework to synthesize pattern observations. If the model parameters are unknown, we can infer them from labeled training data using TEMPLAR (Template Learning from Atomic Representations), a novel template learning algorithm with linear complexity. TEMPLAR employs minimum description length (MDL) complexity regularization to learn a template with a sparse representation in the wavelet domain. We discuss several applications, including template learning, pattern classification, and image registration.
The signal-adaptive target detection algorithm developed by Crosby and Riley uses target geometry to discern anomalies in local backgrounds; detection is not restricted to specific target signatures. The robustness of the algorithm is limited by an increased false-alarm potential. The base algorithm is extended here to eliminate one common source of false alarms in a littoral environment: glint reflected from the surface of the water. The spectral and spatial transience of glint prevents straightforward characterization and complicates exclusion. However, the statistical basis of the detection algorithm and its inherent computations allow glint to be discerned and its influence removed.
Fourier telescopy is a version of laser imaging radar in which the object is illuminated with interfering laser beams generated by an array of laser transmitters, and the object's image is formed from the energy of the field scattered by the object. We investigate the resolution, the speckle contrast, and the number of speckles in the Fourier-telescopic image as functions of the dimensions and configurations of the transmitting and receiving apertures, both in general and with regard to the known GLINT design. Integral (correlation) and local measures of the relationship between the Fourier-telescopic image and the true object image are proposed; these measures determine the Fourier-telescopic image quality. We show that the image quality is good enough if the sizes of the receiving and transmitting apertures are approximately equal and the number of speckles in the images of the object details is sufficiently large. The integral measure can be used for reliable object recognition, and the local measure for calculating the accuracy of estimates of the parameters of object details. A compact, symmetrical layout of transmitting and receiving apertures for Fourier telescopy through a strongly inhomogeneous atmosphere is presented. This is a united aperture containing a receiving aperture, composed of uniform receiving sections with spacing between them less than half their size, and a transmitting aperture with two equal, mutually orthogonal linear arrays of laser transmitters arranged within the receiving-aperture area among the receiving sections. It is shown that each linear array can be situated anywhere within the receiving aperture. The proposed layout ensures reconstruction of an undistorted, high-quality image of an object at a distance of approximately 40,000 km, with a resolution of about 0.4 m along mutually perpendicular directions in the image plane.
Corner detection is an essential feature extraction step in many image understanding applications, including aerial image analysis and manufactured part inspection. Available corner detectors require the user to set critical manual thresholds, degrade under significant noise levels, or introduce high computational complexity. We present a nonlinear corner detection algorithm that requires neither prior image information nor any threshold to be set by the user. It provides 100% correct corner detection and fewer than one false-positive corner per image when the contrast-to-noise ratio of the image is 6 or more under Gaussian white noise.
In November of 2000, the Deputy Under Secretary of Defense for Science and Technology Sensor Systems (DUSD (S&T/SS)) chartered the ATR Working Group (ATRWG) to develop guidelines for sanctioned Problem Sets. Such Problem Sets are intended for the development and testing of ATR algorithms and contain comprehensive documentation of the data in them. A Problem Set provides a consistent basis for examining ATR performance and growth. Problem Sets will, in general, serve multiple purposes. First, they will enable informed decisions by government agencies sponsoring ATR development and transition: Problem Sets standardize the testing and evaluation process, resulting in consistent assessment of ATR performance. Second, they will measure and guide ATR development progress within this standardized framework. Finally, they quantify the state of the art for the community. Problem Sets provide clearly defined operating-condition coverage, which encourages ATR developers to consider these critical challenges and allows evaluators to assess over them. Thus the widely distributed development and self-test portions, along with a disciplined methodology documented within the Problem Set, permit ATR developers to address critical issues and describe their accomplishments, while the sequestered portion permits government assessment of the state of the art and of transition readiness. This paper discusses the elements of an ATR Problem Set as a package of data and information that presents a standardized ATR challenge relevant to one or more scenarios. The package includes training and test data containing targets and clutter, truth information, required experiments, and a standardized analytical methodology to assess performance.
Needless to say, large, high-contrast targets are more easily recognized than small, low-contrast ones. But at what size and contrast does recognition begin? The question is important when specifying the resolution and contrast requirements for a new imaging system, or when assessing the range of an existing system beyond which worsening resolution and contrast ruin serviceable performance. The question is addressed here, in a general way, under the assumption that recognition depends on the agent's ability to draw a line (extract an edge) around distinctive target features. If the target is small, moreover, with its recognizable facets occupying few pixels, and if line rendering suffers mainly from noise or image speckle, then neither human nor complex automatic target recognition systems have a clear advantage, one over the other, or over more tractable, statistically optimized pattern recognition algorithms. Thus a theory of optimal linear edge detection is proposed here as a plausible model for estimating the recognition limits of both human and automatic agents, making it possible to estimate when the line-rendering process, and hence recognition, fails due to insufficient contrast for small targets. The method is used to estimate the shadow-background contrasts needed for the recognition of sea mines in sidescan sonar images.
Researchers at the Canada Centre for Remote Sensing of Natural Resources Canada are exploring the use of remotely sensed imagery to assist Search and Rescue in Canada. Studies have been examining the use of Synthetic Aperture Radar for the detection of crashed aircraft. Promising results have been obtained with techniques for detection of dihedrals in interferometric and polarimetric data. With further development in technologies and techniques, and improved coverage of the Canadian landmass by future spaceborne systems such as RADARSAT-2, it is expected that it will be possible to assist in Search and Rescue for land targets.
The specular nature of radar imagery causes problems for ATR, since small changes to the configuration of targets can result in significant changes to the resulting target signature. This adds to the challenge of constructing a classifier that is both robust to changes in target configuration and capable of generalizing to previously unseen targets. Here, we describe the application of a nonlinear Radial Basis Function (RBF) transformation to perform feature extraction on millimeter-wave (MMW) imagery of target vehicles. The extracted features were used as inputs to a nearest-neighbor classifier to obtain measures of classification performance. The feature extraction stage was trained with a loss function that quantified the amount of data structure preserved in the transformation to feature space. In this paper we describe a supervised extension to the loss function, explore the value of the supervised training process over the unsupervised approach, and compare with results obtained using a supervised linear technique, Linear Discriminant Analysis (LDA). The data used were Inverse Synthetic Aperture Radar (ISAR) images of armored vehicles gathered at 94 GHz and categorized as Armored Personnel Carrier, Main Battle Tank or Air Defense Unit. We find that the form of supervision used in this work is an advantage when the number of features used for classification is low, leading to the conclusion that the supervision allows information useful for discrimination between classes to be distilled into fewer features. When only one example of each class is used for training, the LDA results are comparable to the RBF results; however, when an additional example is added per class, the RBF results are significantly better than those from LDA. Thus, the RBF technique seems better able to make use of the extra knowledge available to the system about variability between different examples of the same class.
Detection of nonmetallic antipersonnel mines buried at unknown depths is the focus of this study. We compare the performance of two possible detection schemes: the first is based on wavelet decomposition, and the second relies on signal extraction using blind source separation techniques. Detection performance is measured in terms of the probability of false alarm and the probability of detection, and the impact of mine-depth knowledge on detector performance is examined. The data utilized are the Ground Penetrating Radar (GPR) data provided by the Demining Technology Center (DeTeC); only B-scan data is used in the experimental phase of this study.
Correlation engines have been evolving since the introduction of radar. In modern sensor fusion architectures, correlation and gridlock filtering are required to produce common, continuous, and unambiguous tracks of all objects in the surveillance area. The objective is to provide a unified picture of the theatre or area of interest to battlefield decision makers, ultimately enabling them to make better inferences for future action and to eliminate fratricide by reducing ambiguities. Here, correlation refers to association, which in this context is track-to-track association. A related process, gridlock filtering or gridlocking, refers to the reduction of navigation errors and sensor misalignment errors so that one sensor's track data can be accurately transformed into another sensor's coordinate system. As platforms gain multiple sensors, the correlation and gridlocking of tracks become significantly more difficult. Much of the existing correlation technology revolves around various interpretations of the generalized Bayesian decision rule: choose the action that minimizes conditional risk. One implementation of this principle equates the risk-minimization statement to the comparison of ratios of a priori probability distributions to thresholds; the binary decision problem phrased in terms of likelihood ratios is also known as the Neyman-Pearson hypothesis test. Under another restatement of the principle, for a symmetric loss function, risk minimization leads to a decision that maximizes the a posteriori probability distribution. Even for deterministic decision rules, situations can arise in correlation where there are ambiguities; for these situations, a common remedy is a sparse assignment technique such as the Munkres or JVC algorithm. Furthermore, associated tracks may be combined in the hope of reducing the positional uncertainty of a target or object identified by an existing track, using the information from several fused/correlated tracks. Gridlocking is typically accomplished with some type of least-squares algorithm, such as the Kalman filtering technique, which attempts to locate the best bias-error-vector estimate from a set of correlated/fused track pairs. Here, we introduce a new approach to this longstanding problem by adapting many familiar concepts from pattern recognition, ones certainly well known in target recognition applications. Furthermore, we show how this technique lends itself to specialized processing, such as that available through an optical or hybrid correlator.
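As a concrete illustration of the association step described above, here is a sketch that gates track pairs with a chi-square test on the Mahalanobis distance and then solves the resulting assignment problem; scipy's Hungarian solver stands in for the Munkres/JVC algorithms named in the abstract.

    import numpy as np
    from scipy.optimize import linear_sum_assignment
    from scipy.stats import chi2

    def associate_tracks(x_a, P_a, x_b, P_b, p_gate=0.99):
        """Track-to-track association. x_a: (n_a, d) track states from sensor
        A; P_a: (n_a, d, d) covariances (likewise x_b, P_b for sensor B).
        Pairs failing the chi-square gate get a blocking cost."""
        gate = chi2.ppf(p_gate, df=x_a.shape[1])
        cost = np.full((len(x_a), len(x_b)), 1e6)
        for i, (xa, Pa) in enumerate(zip(x_a, P_a)):
            for j, (xb, Pb) in enumerate(zip(x_b, P_b)):
                d = xa - xb
                m2 = d @ np.linalg.solve(Pa + Pb, d)   # squared Mahalanobis
                if m2 <= gate:
                    cost[i, j] = m2
            # gated-out pairs keep the blocking cost
        rows, cols = linear_sum_assignment(cost)
        return [(i, j) for i, j in zip(rows, cols) if cost[i, j] < 1e6]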