In this paper we formalize a theory for indicators designed to focus ISAR imagery of non-cooperative targets. These indicators are variations of the Fisher information and entropy measures, and can operate either in the spatial-frequency domain or in the spatial domain. This freedom of choice is advantageous because the target's representation in either domain has phase and magnitude components that can be efficiently exploited to resolve and focus the target's primary elements, which are displayed as a radar cross section (RCS) distribution. We propose a phase correction algorithm based on parametric models of a target's temporal maneuvers. The approach is to quantify the phase non-linearities via a Fisher information or entropy measure that depends on motion parameter estimates. Optimizing these parameter estimates is an m-dimensional search problem that minimizes the focus quality indicator to within a prescribed tolerance for a given SNR. The coordinates of this minimum are then used to generate a phase correction factor that eliminates image blurring, thus providing better focusing for effective target recognition.
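The abstract does not give the indicators explicitly; the sketch below illustrates the entropy branch of such an autofocus, assuming a single quadratic phase-error coefficient as the motion parameter (all names and values are illustrative, not taken from the paper):

```python
import numpy as np

def image_entropy(img):
    """Shannon entropy of the normalized image intensity (a common focus indicator)."""
    p = np.abs(img) ** 2
    p = p / p.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def focus_by_entropy(phase_history, alphas):
    """Grid-search a 1-D motion parameter for the phase correction that
    minimizes image entropy. phase_history: 2-D complex array (pulses x range
    bins); alphas: candidate quadratic-phase coefficients."""
    n_pulses = phase_history.shape[0]
    t = np.linspace(-0.5, 0.5, n_pulses)[:, None]
    best_alpha, best_entropy = None, np.inf
    for a in alphas:
        correction = np.exp(-1j * a * t ** 2)        # hypothesized phase-error model
        image = np.fft.fftshift(np.fft.fft(phase_history * correction, axis=0), axes=0)
        h = image_entropy(image)
        if h < best_entropy:
            best_alpha, best_entropy = a, h
    return best_alpha, best_entropy

# toy example: a scatterer blurred by a quadratic phase error across pulses
rng = np.random.default_rng(0)
n, m = 128, 64
t = np.linspace(-0.5, 0.5, n)[:, None]
true_alpha = 40.0
data = np.exp(1j * true_alpha * t ** 2) * np.ones((n, m))
data += 0.1 * (rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m)))
alpha_hat, _ = focus_by_entropy(data, np.linspace(0, 80, 161))
print("estimated quadratic coefficient:", alpha_hat)
```

In the paper's formulation the search is m-dimensional; the same minimization applies with a vector of motion coefficients in place of the single scalar scanned here.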
Several methods have been proposed for motion parameter estimation and motion compensation in SAR/ISAR images. Some of the methods work using the information from a prominent scatterer and may require that it be well isolated. In order to extract part of an image for selective motion compensation, a filtering process must take place and in this paper, we propose to filter the range profile information. We apply and extend some concepts and techniques from time-frequency analysis to range profile formation and processing. In particular, we use time-varying filtering to accomplish our goal of selecting and separately processing a component of the image information that represents an object with different motion than other parts of the image. We also consider the option of using a superresolution method that enhances the resolution of the short-time Fourier transform to improve the accuracy of the filtering process. The system that is considered for simulations is a stepped-frequency ISAR. Even though our application is motion compensation, this paper also serves to apply and improve time-frequency processing techniques for use in SAR/ISAR imaging.
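As an illustration of the time-varying filtering idea, the following sketch masks a region of the short-time Fourier transform of a slow-time signal and reconstructs only the selected component; the signals and passband are toy stand-ins, not the paper's stepped-frequency data:

```python
import numpy as np
from scipy.signal import stft, istft

# Illustrative signal: two components with different Doppler behavior (a
# stationary tone and a chirp), standing in for image parts with different motion.
fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)
stationary = np.exp(2j * np.pi * 100 * t)                  # component to keep
moving = np.exp(2j * np.pi * (200 * t + 100 * t ** 2))     # component to remove
x = stationary + moving

# Short-time Fourier transform (two-sided, since the data are complex).
f, tau, Z = stft(x, fs=fs, nperseg=128, return_onesided=False)

# Time-varying filter: keep only a narrow band around +100 Hz in every time slice.
mask = (np.abs(f - 100.0) < 20.0)[:, None]
Z_filtered = Z * mask

# Reconstruct the selected component from the masked transform.
_, x_sel = istft(Z_filtered, fs=fs, nperseg=128, input_onesided=False)
residual = np.mean(np.abs(x_sel[:len(t)] - stationary) ** 2) / np.mean(np.abs(stationary) ** 2)
print("relative residual power after filtering:", round(float(residual), 4))
```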
The problem of target classification using synthetic aperture radar (SAR) polarizations is considered from a Bayesian decision point of view. This problem is analogous to the multi-sensor problem. We investigate the optimum design of a data fusion structure given that each classifier makes a target classification decision for each polarimetric channel. Though the optimal structure is difficult to implement without complete statistical information, we show that significant performance gains can be made even without a perfect model. First, we analyze the problem from an optimal classification point of view using a simple classification problem and outline the relationship between classification and fusion. Then, we demonstrate the performance improvement by fusing the decisions from a Gram-Schmidt image classifier for each polarization.
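A minimal sketch of per-channel decision fusion, assuming conditionally independent channels with known detection and false-alarm probabilities (a Chair-Varshney-style log-likelihood-ratio rule; all values are illustrative):

```python
import numpy as np

def fuse_decisions(decisions, pd, pfa, prior_target=0.5):
    """Fuse binary decisions u_k (1 = 'target') from K per-channel classifiers.
    Assuming conditional independence, the optimal rule compares the sum of
    per-channel log-likelihood ratios against a threshold set by the priors.
    pd[k], pfa[k]: detection / false-alarm probabilities of classifier k."""
    decisions = np.asarray(decisions)
    pd, pfa = np.asarray(pd), np.asarray(pfa)
    llr = np.where(decisions == 1,
                   np.log(pd / pfa),              # classifier said "target"
                   np.log((1 - pd) / (1 - pfa)))  # classifier said "clutter"
    threshold = np.log((1 - prior_target) / prior_target)
    return int(llr.sum() > threshold)

# three polarimetric channels (e.g. HH, HV, VV) with different reliabilities
pd = [0.90, 0.75, 0.85]
pfa = [0.05, 0.20, 0.10]
print(fuse_decisions([1, 0, 1], pd, pfa))   # -> 1 (fused decision: target)
print(fuse_decisions([0, 1, 0], pd, pfa))   # -> 0 (fused decision: clutter)
```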
Wide area site models are useful for delineating regions of interest and assisting in tasks like monitoring and change detection. They are also useful in registering a newly acquired image to an existing one of the same site, or to a map. This paper presents an algorithm for building a 2D wide area site model from high resolution, single polarization synthetic aperture radar (SAR) data. A three stage algorithm, involving detection of bright pixels, statistical segmentation of the data into homogeneous regions, and labeling/validation of segmentation results, is used for this task. Constant false alarm rate (CFAR) detectors are used for detecting bright pixels. Under the assumption of a suitable model for the statistical distribution of single polarization intensity or complex data, maximum likelihood labeling is used for initial segmentation. Knowledge of the acquisition parameters and other geometric cues is used to refine the initial segmentation and to extract man-made objects like buildings, and their shadows, as well as roads, from these images. When data from multiple passes of the same site are available, site models yield feature points which can be used to register the different images. When complete information regarding the radar location, heading, and depression angle is available, the multiple views can be registered prior to site model construction, leading to improved performance. Site models are also useful for SAR data compression, where possible targets, man-made objects, and their neighborhoods are compressed losslessly and the background regions are compressed using lossy schemes.
This paper describes an ATR system based on gray scale morphology which has proven very effective in performing broad area search for targets of interest. Gray scale morphology is used to extract several distinctive sets of features which combine intensity and spatial information. Results of direct comparisons with other algorithms are presented. In a series of tests which were scored independently, the morphological approach has shown superior results. An automated training system based on a combination of genetic algorithms and classification and regression trees is described. Further performance gains are expected by allowing context-sensitive selection of parameter sets for the morphological processing. Context is acquired from the image using texture measures to identify the local clutter environment. The system is designed to be able to build new classifiers on the fly to match specific image-to-image variations.
This paper describes a new technique for the automatic detection of change within synthetic aperture radar (SAR) images produced from satellite data. The interpretation of this type of imagery is difficult due to the combined effect of speckle, low resolution and the complexity of the radar signatures. The change detection technique that has been developed overcomes these problems by automatically measuring the degree of change between two images. The principle behind the technique is that when satellite repeat orbits are at almost the same position in space, then unless the scene has changed, the speckle pattern in the image will be unchanged. Comparison of images therefore reveals real change, not change due to fluctuating speckle patterns. The degree of change between two SAR images was measured by using the coherence function. Coherence has been studied for a variety of scene types: agricultural, forestry, domestic housing, small and large scale industrial complexes. Fuzzy set techniques, as well as direct threshold methods, have been applied to the coherence data to determine places where change has occurred. The method has been validated using local information on building changes due to construction or demolition.
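The core computation can be sketched as a windowed complex coherence estimate between two co-registered single-look complex images, with a simple threshold standing in for the fuzzy-set post-processing (toy data, illustrative threshold):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def coherence(slc1, slc2, win=5):
    """Windowed coherence magnitude between two co-registered single-look
    complex images: values near 1 mean the speckle pattern is unchanged,
    values near 0 mean decorrelation, i.e. likely change."""
    cross = slc1 * np.conj(slc2)
    num = uniform_filter(cross.real, win) + 1j * uniform_filter(cross.imag, win)
    den = np.sqrt(uniform_filter(np.abs(slc1) ** 2, win) *
                  uniform_filter(np.abs(slc2) ** 2, win))
    return np.abs(num) / np.maximum(den, 1e-12)

rng = np.random.default_rng(1)
shape = (128, 128)
speckle = (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)
img1 = speckle
img2 = speckle.copy()
img2[40:60, 40:60] = (rng.standard_normal((20, 20)) +                 # changed patch:
                      1j * rng.standard_normal((20, 20))) / np.sqrt(2)  # new speckle

gamma = coherence(img1, img2)
change_mask = gamma < 0.5          # simple threshold; the paper also uses fuzzy sets
print("changed fraction of scene:", round(float(change_mask.mean()), 3))
```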
In this paper, we present an adaptive FIR filtering approach, which is referred to as the APES (amplitude and phase estimation of a sinusoid) algorithm, for interferometric SAR imaging. We apply the APES algorithm on the data obtained from two vertically displaced apertures of a SAR system to obtain the complex amplitude and the phase difference estimates, which are proportional to the radar cross section and the height of the scatterer, respectively, at the frequencies of interest. We also demonstrate how the APES algorithm can be applied to data matrices with large dimensions without incurring high computational overheads. We compare the APES algorithm with other FIR filtering approaches including the Capon and FFT methods. We show via both numerical and experimental examples that the adaptive FIR filtering approaches such as Capon and APES can yield more accurate spectral estimates with much lower sidelobes and narrower spectral peaks than the FFT method. We show that although the APES algorithm yields somewhat wider spectral peaks than the Capon method, the former gives more accurate overall spectral estimates and SAR images than the latter and the FFT method.
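The APES filter design itself is not reproduced here; the sketch below shows the simpler Capon estimator next to the FFT periodogram on a 1-D example, to illustrate the adaptive-FIR-filter-bank idea behind the comparison (window length and diagonal loading are illustrative choices):

```python
import numpy as np

def capon_spectrum(x, m, freqs):
    """1-D Capon (MVDR) spectral estimate using filters of length m.
    x: complex data snapshot; freqs: normalized frequencies in [0, 1)."""
    n = len(x)
    # sample covariance from overlapping length-m snapshots
    snaps = np.array([x[i:i + m] for i in range(n - m + 1)]).T   # m x L
    R = snaps @ snaps.conj().T / snaps.shape[1]
    R += 1e-6 * np.trace(R).real / m * np.eye(m)                  # diagonal loading
    Rinv = np.linalg.inv(R)
    spec = []
    for f in freqs:
        a = np.exp(2j * np.pi * f * np.arange(m))                 # steering vector
        spec.append(1.0 / np.real(a.conj() @ Rinv @ a))
    return np.array(spec)

rng = np.random.default_rng(2)
n = 64
t = np.arange(n)
x = np.exp(2j * np.pi * 0.20 * t) + 0.5 * np.exp(2j * np.pi * 0.23 * t)
x += 0.05 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

freqs = np.linspace(0, 0.5, 501)
fft_spec = np.abs(np.array([np.sum(x * np.exp(-2j * np.pi * f * t)) for f in freqs]) / n) ** 2
capon = capon_spectrum(x, m=16, freqs=freqs)
print("FFT peak at f   =", freqs[np.argmax(fft_spec)])
print("Capon peak at f =", freqs[np.argmax(capon)])
```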
In a typical interferometric synthetic aperture radar (IFSAR) system employed for terrain elevation mapping, terrain height is estimated from phase difference data obtained from two phase centers separated spatially in the cross-track direction. In this paper we show how the judicious design of a three phase center IFSAR renders phase unwrapping, i.e., the process of estimating true continuous phases from principal values of phase, a much simpler process than that inherent in traditional algorithms. With three phase centers, one IFSAR baseline can be chosen to be relatively small so that all of the scene's terrain relief causes less than one cycle of phase difference. This allows computation of a coarse height map without use of any form of phase unwrapping. The cycle number ambiguities in the phase data derived from the other baseline, chosen to be relatively large, can then be resolved by reference to the heights computed from the small baseline data. This basic concept of combining phase data from one small and one large baseline to accomplish phase unwrapping has been previously employed in other interferometric problems. The new algorithm is shown to possess a certain form of immunity to corrupted interferometric phase data that is not inherent in traditional 2D path-following phase unwrappers. This is because path-following algorithms must estimate, either implicitly or explicitly, those portions of the IFSAR fringe data where discontinuities in phase occur. Such discontinuities typically arise from noisy phase measurements derived from low radar return areas of the SAR imagery. When wrong estimates are made as to where these phase discontinuities occur, errors in the unwrapped phase values can appear due to the resulting erroneous unwrapping paths. This implies that entire regions of the scene can be reconstructed with incorrect terrain heights. By contrast, since the new method estimates the continuous phase at each point in the image by a straightforward combination of only the measured phases from the small and large baselines, phase estimation errors are confined to that point. We derive quantitative expressions for the new algorithm that relate the probability of selecting the wrong phase cycle to parameters of the interferometer. We then demonstrate that use of median filtering can very effectively mitigate those cycle errors that do occur. By use of computer simulations, we show how the new algorithm is used to robustly construct terrain elevation maps.
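A minimal sketch of the two-baseline idea: the unambiguous small-baseline height resolves the cycle number of the large-baseline phase pixel by pixel, with no path following (the phase-to-height sensitivities below are illustrative, not the paper's):

```python
import numpy as np

def resolve_cycles(phi_small, phi_large, k_small, k_large):
    """Two-baseline phase unwrapping sketch.
    phi_small, phi_large: wrapped interferometric phases in (-pi, pi].
    k_small, k_large: phase-to-height sensitivities (rad per meter); the small
    baseline is chosen so that |k_small * height| < pi over the whole scene."""
    h_coarse = phi_small / k_small                        # unambiguous but noisy height
    phi_pred = k_large * h_coarse                         # predicted large-baseline phase
    n_cycles = np.round((phi_pred - phi_large) / (2 * np.pi))
    phi_unwrapped = phi_large + 2 * np.pi * n_cycles      # per-pixel, no path following
    return phi_unwrapped / k_large                        # fine height estimate

rng = np.random.default_rng(3)
height = 120.0 * rng.random(10)                           # terrain heights in meters
k_small, k_large = 0.02, 0.45                             # rad/m (illustrative values)
wrap = lambda p: (p + np.pi) % (2 * np.pi) - np.pi
phi_s = wrap(k_small * height + 0.02 * rng.standard_normal(10))
phi_l = wrap(k_large * height + 0.02 * rng.standard_normal(10))
print("estimated:", np.round(resolve_cycles(phi_s, phi_l, k_small, k_large), 2))
print("true     :", np.round(height, 2))
```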
The GeoSAR (geographic synthetic aperture radar) program is a Defense Advanced Research Projects Agency (DARPA) sponsored program organized in cooperation with the Jet Propulsion Laboratory (JPL) and the California Department of Conservation. Some aspects of the program have been in existence for almost two years. The technical goal of the program has been the development of rapid-mapping radar technologies, and its principal challenge is now the development of a capability for terrain mapping under foliage. In this paper, we discuss validation of current technology and examine the utility of data products currently produced by the Environmental Research Institute of Michigan's (ERIM) IFSARE, JPL's TOPSAR, and JPL's AIRSAR. We find that ERIM's X-band IFSARE system produces elevation maps to better than 2-m accuracy. Based on this, we determine that TOPSAR elevation maps are accurate to at least 5 m. We also demonstrate the utility of JPL's AIRSAR L-band radar polarimetry for terrain classification.
Since ISAR (inverse synthetic aperture radar) can convey information which may not be obtainable by other imaging means, research on applying ISAR to battlefield awareness has been intensive. One highly desirable application of ISAR imagery is to reconstruct 3D ground truth, which provides depth information to enhance target recognition and tracking. This paper proposes a stereo vision approach to reconstruct 3D ground truth using ISAR imagery. The proposed approach includes three steps: multiscale feature extraction, stereo matching, and surface interpolation. The multiscale feature extraction is accomplished using a wavelet edge detector, which can smooth the signal and reduce noise at different levels. Stereo matching is implemented using an inverse filtering method, which provides a sparse disparity map of 3D depth information from the extracted features. The surface interpolation takes the sparse data generated from stereo matching and interpolates it to dense surface data as the final output. Issues regarding where and how the stereo techniques used for ISAR differ from those used for video images are addressed. Initial tests show encouraging results. Future research directions and potential commercial applications are also discussed.
This paper considers 3D target feature extraction via an interferometric synthetic aperture radar (IFSAR). Since IFSAR itself is a relatively new technology, a self-contained detailed derivation of the data model is presented. A set of sufficient parameter identifiability conditions of the data model and the Cramer-Rao bounds (CRBs) of the parameter estimates are also derived. Four existing 2D feature extraction methods are extended to estimate the 3D parameters of the target scatterers. A new non-linear least squares (NLS) parameter estimation method is also derived to extract the target features. Finally, numerical examples are presented to compare the performance of the presented methods with each other and with the corresponding CRBs. We show with numerical examples that among the three non-parametric methods, Capon has the best resolution. The parametric methods (MUSIC and NLS) can have much better resolution and provide much more accurate parameter estimates than the non-parametric methods. We also show that between the two parametric methods, NLS can be faster and provide much better parameter estimates than MUSIC.
The P-3 ultra-wide band (UWB) UHF SAR system penetrates foliage and has interesting applications including detecting foliage-obscured man-made objects and providing information about the terrain underlying the foliage. The UWB SAR system has collected a variety of forest data and some foliage penetration examples will be presented. An interesting application of the sensor is two-pass interferometry where the underlying terrain topology is mapped. This is in contrast to higher frequency interferometric SAR (IFSAR) which maps the tops of the tree canopy. A comparison of UHF and X-band IFSAR imagery will be presented.
To quantify the ability of a synthetic aperture radar (SAR) system using an automatic target recognition (ATR) system to detect targets obscured by foliage, an ultra-wideband, UHF-band, polarimetric SAR was constructed by ERIM under ARPA funding and installed on a Navy P-3 aircraft controlled by the Naval Air Warfare Center. The system was implemented as an upgrade to the existing X-, L-, and C-band SAR system already on this aircraft. A series of experiments funded by ARPA and Wright Laboratory were undertaken in 1995 to investigate foliage penetration (FOPEN). In this paper, the data and ground truth collected and their utility for investigations of FOPEN phenomenology and ATR algorithms will be presented. These data are being placed into a database for distribution to ATR algorithm developers. The characteristics of the P-3 UWB SAR will be discussed. The image formation technique used will be presented, along with the RFI suppression techniques used. Of particular interest will be the technique used for the required motion compensation. Results from recent investigations using the P-3 UWB SAR data will be discussed.
The US Army Research Laboratory (ARL), working with the University of Maryland Department of Electrical Engineering, recently developed a novel method for efficient recognition of resonances in imagery from ARL's ultra-wideband (UWB) SAR instrumentation system, currently being used in foliage- and ground-penetration studies. The recognition technique uses linear transforms (Fourier, wavelets, etc.) to provide a basis for the design of spectrally matched filters. Implementation of the technique is very straightforward: an expectation of the target ringdown is projected onto a transform basis set, yielding a set of spectral coefficients (the 'spectral template'). UWB SAR image data are projected onto the same basis set, yielding a second vector of coefficients (the 'spectral image'). A simple correlation coefficient is generated from the two vectors, providing a measure of co-linearity of the spectral template and the spectral image: higher correlation values indicate greater co-linearity. Exceeding a correlation threshold results in a target declaration. The technique is also computationally fast: a single 32-megabyte bipolar SAR image can be processed in less than five minutes. Initial spectral-correlation efforts focused on canonical targets and the results have been widely reported. Current studies are focusing on tactical targets, such as CUCVs. Early results on CUCVs have shown that a single resonance-based template can be used effectively in the recognition of tactical targets. Ongoing studies have demonstrated a substantial reduction in the false-alarm rate over results reported previously. These results, as well as improvements in the recognition-processing stage, are reported in this paper.
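A minimal sketch of the spectral-correlation step, using a Fourier basis and a toy damped-resonance template (all signals and lengths are illustrative):

```python
import numpy as np

def spectral_correlation(template_ringdown, image_chip, n_coeffs=32):
    """Project both the expected target ringdown and an image chip onto the
    same transform basis (here, the Fourier basis) and compute the correlation
    coefficient of the resulting spectral coefficient vectors."""
    a = np.abs(np.fft.rfft(template_ringdown, n=2 * n_coeffs))[:n_coeffs]
    b = np.abs(np.fft.rfft(image_chip, n=2 * n_coeffs))[:n_coeffs]
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float(np.mean(a * b))      # in [-1, 1]; high values -> co-linear spectra

rng = np.random.default_rng(4)
t = np.arange(256)
ringdown = np.exp(-t / 60.0) * np.cos(2 * np.pi * 0.07 * t)   # damped-resonance model
target_chip = ringdown + 0.3 * rng.standard_normal(256)
clutter_chip = rng.standard_normal(256)
print("target  :", round(spectral_correlation(ringdown, target_chip), 3))
print("clutter :", round(spectral_correlation(ringdown, clutter_chip), 3))
```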
Automated target detection and cueing (ATD/C) capabilities are being developed at Loral for the Radar Detection of Concealed Time Critical Targets (RADCON) contract with Wright Laboratory (WL). The ATD/C algorithms use calibrated, fully polarimetric UHF band synthetic-aperture radar data collected by the ERIM/NAWC P-3 radar. A brief overview of data collected for RADCON algorithm development and testing is presented. An outline of the development and evaluation of discriminants used in the context of a Bayesian Neural Network (BNN) detector algorithm is described. The BNN algorithm was demonstrated under a previous WL concealed target detection ATD/C program. These algorithms will be hosted on a near real-time COTS parallel processor as part of a FOPEN airborne system.
A physics-based approach to VHF/UHF foliage penetrating (FOPEN) SAR automatic target detection and recognition (ATD/R) uses signatures constructed directly from physical scattering predictions in a sequence of matched filter banks that span the complex angle-frequency-polarimetric observation space. While the matched filter can be made efficient using FFTs, the primary means of achieving overall algorithm efficiency is through the sequential testing. Physics-based sequential testing achieves efficiency by matching the modeling complexity in each step to its increasing level of recognition, while using parameter estimates from previous steps to limit the number of signatures under test. This keeps the total number of signatures tested from increasing geometrically as the ATR search dimensionality is increased to achieve higher levels of recognition. Since each step in the sequence is performed on a smaller portion of surviving data, most of the computation usually lies in the initial detection of man-made objects in forest clutter. In this paper, computational efficiency is compared between various architectures which differ in their use of a matched filter image formation screener. This screener may save up to 16 dB in detection processing requirements and 3 dB in image formation processing requirements.
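The following toy sketch illustrates the sequential-screening idea only: a cheap energy screener discards most pixels before a full matched-filter bank is applied to the survivors (the signature set, thresholds, and scene are illustrative, not the paper's physics-based signatures):

```python
import numpy as np

rng = np.random.default_rng(5)

# Bank of illustrative signatures (unit-norm complex templates).
n_sig, dim = 64, 128
signatures = rng.standard_normal((n_sig, dim)) + 1j * rng.standard_normal((n_sig, dim))
signatures /= np.linalg.norm(signatures, axis=1, keepdims=True)

# Scene "pixels": mostly clutter, a few containing one of the signatures.
n_pix = 5000
data = 0.7 * (rng.standard_normal((n_pix, dim)) + 1j * rng.standard_normal((n_pix, dim)))
true_idx = rng.choice(n_pix, 20, replace=False)
data[true_idx] += 12.0 * signatures[rng.integers(0, n_sig, 20)]

# Stage 1: cheap screener (energy detector) -- most pixels are rejected here.
energy = np.sum(np.abs(data) ** 2, axis=1)
survivors = np.where(energy > np.percentile(energy, 99))[0]

# Stage 2: full matched-filter bank, but only on the surviving pixels.
scores = np.abs(data[survivors] @ signatures.conj().T)   # |matched-filter outputs|
best = scores.max(axis=1)
detections = survivors[best > 8.0]
print("survivors after screening:", len(survivors), "of", n_pix)
print("targets found:", len(set(detections) & set(true_idx)), "of", len(true_idx))
```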
MIT Lincoln Laboratory has developed a complete, end-to-end, automatic target detection/recognition (ATD/R) system for synthetic aperture radar (SAR) data. A data-adaptive approach has been developed to enhance SAR image resolution based on super-resolution techniques; this approach is called high-definition imaging. This paper quantifies the improvement in ATR performance from enhanced resolution SAR imagery in the Lincoln Laboratory ATD/R system.
Recent developments in composite correlation filter methods have improved the recognition and classification of an object over a range of image distortions. These correlation filters can be used for automatic target cueing or recognition in images. These new filter methods can be optimized for different correlation criteria in order to improve the recognition capability of the filter. In this paper we present results of designing these distortion-tolerant correlation filters with simulated SAR imagery and testing with real and simulated SAR targets.
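The paper's specific filter designs are not reproduced here; as one simple member of the composite-filter family, the sketch below builds an equal-correlation-peak synthetic discriminant function from a few distorted training views (toy imagery, illustrative distortions):

```python
import numpy as np

def sdf_filter(train_images, peaks=None):
    """Equal-correlation-peak synthetic discriminant function (SDF).
    train_images: list of same-size 2-D arrays (distorted views of the target).
    Returns a filter whose inner product with each training view equals the
    prescribed peak value (1 by default)."""
    X = np.stack([im.ravel() for im in train_images], axis=1)      # pixels x views
    u = np.ones(X.shape[1]) if peaks is None else np.asarray(peaks)
    h = X @ np.linalg.solve(X.conj().T @ X, u)                     # h = X (X^H X)^-1 u
    return h.reshape(train_images[0].shape)

rng = np.random.default_rng(6)
base = np.zeros((32, 32)); base[12:20, 10:22] = 1.0                # toy "target"
views = [np.roll(base, s, axis=1) * (1 + 0.1 * k)                  # crude distortions
         for k, s in enumerate([-2, 0, 2])]
h = sdf_filter(views)

test_target = np.roll(base, 1, axis=1) + 0.05 * rng.standard_normal((32, 32))
test_clutter = 0.3 * rng.standard_normal((32, 32))
print("target response :", round(float(np.sum(h * test_target)), 3))
print("clutter response:", round(float(np.sum(h * test_clutter)), 3))
```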
Scene matching algorithms are developed to locate a smaller scene in a much larger scene to within a few pixels' accuracy, taking into consideration the distortions due to rotations, scale variations and seasonal changes, using radar images. These images are preprocessed and segmented to extract the features of natural and man-made structures present. Correlation methods and feature-based methods are investigated for scene matching, and results are discussed.
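A minimal sketch of the correlation branch: normalized cross-correlation of a small chip against a larger scene (toy data; rotation and scale handling are omitted):

```python
import numpy as np
from scipy.signal import correlate2d

def match_scene(large, small):
    """Locate a small scene inside a larger one by normalized cross-correlation.
    Returns the (row, col) of the best match and the correlation surface."""
    small_z = small - small.mean()
    num = correlate2d(large, small_z, mode='valid')
    # local mean and energy of the large scene under the template footprint
    ones = np.ones_like(small)
    local_sum = correlate2d(large, ones, mode='valid')
    local_sq = correlate2d(large ** 2, ones, mode='valid')
    local_var = local_sq - local_sum ** 2 / small.size
    ncc = num / np.sqrt(np.maximum(local_var, 1e-12) * np.sum(small_z ** 2))
    return np.unravel_index(np.argmax(ncc), ncc.shape), ncc

rng = np.random.default_rng(7)
large = rng.standard_normal((200, 200))
r0, c0 = 60, 110
small = large[r0:r0 + 32, c0:c0 + 32] + 0.2 * rng.standard_normal((32, 32))  # noisy chip
loc, _ = match_scene(large, small)
print("true:", (r0, c0), "estimated:", loc)
```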
We consider use of eigenvector feature inputs to our feature space trajectory (FST) neural net classifier for SAR data with 3D aspect distortions. We consider its use for classification and pose estimation and rejection of clutter. Prior and new MINACE distortion-invariant and shift- invariant filter work to locate the position of objects in regions of interest is reviewed. Test results on a number of SAR databases are included to show the robustness of the algorithm. New results include techniques to determine: the number of eigenvectors per class to retain, the number and order of final features to use, if the training set size is adequate, and if the training and test sets are compatible.
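A minimal sketch of the eigenvector-feature step, assuming per-class principal components are computed from training chips and used as feature projections (toy chips; the FST classifier itself is not shown):

```python
import numpy as np

def class_eigenvectors(chips, k):
    """Return the top-k eigenvectors (principal components) of a class's
    training chips, to be used as feature projections for a classifier."""
    X = np.stack([c.ravel() for c in chips])          # samples x pixels
    X = X - X.mean(axis=0)
    # eigenvectors of the sample covariance via SVD of the centered data matrix
    _, s, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:k], s[:k] ** 2 / (len(chips) - 1)      # components, explained variance

def project(chip, components, mean):
    return components @ (chip.ravel() - mean)

rng = np.random.default_rng(8)
# toy "class": a bright blob whose position jitters with aspect
chips = []
for _ in range(40):
    img = 0.1 * rng.standard_normal((24, 24))
    r, c = rng.integers(8, 14, 2)
    img[r:r + 4, c:c + 4] += 1.0
    chips.append(img)

comps, var = class_eigenvectors(chips, k=5)
mean_chip = np.mean([c.ravel() for c in chips], axis=0)
print("explained variance of first 5 eigenvectors:", np.round(var, 2))
print("feature vector for one chip:", np.round(project(chips[0], comps, mean_chip), 2))
```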
This paper discusses the development and construction of a new system for robust, obscured object recognition by means of partial evidence reconstruction from object restricted measures. This new approach employs a partial evidence accrual approach to form both an object identity metric and an object pose estimate. The partial evidence information is obtained by applying several instances of the authors' linear signal decomposition/direction of arrival (LSD/DOA) pose estimation technique. LSD/DOA is a means for estimating object pose for possibly articulated objects with multiple degrees of pose freedom that avoids the use of search mechanisms and template matching. Each instance of application of the LSD/DOA system results in a pose estimate and match metric aimed at recognition of a portion of a desired target. Each such partial object recognizer is formed in such a way as to be exposed to no clutter input when positioned over the target component of interest when no obscuration of the target is present. This work was motivated by the fact that pose estimation in the LSD/DOA method is primarily degraded in practice by the presence of background clutter in the pose estimation filter's region of support. By exploiting several independent pose estimators based upon LSD/DOA's reciprocal basis set filters constructed for overlapping sub-regions of the object, we can construct a pose estimate that is independent of clutter in the unobscured case, and robust with respect to obscuration. Results presented here include receiver operating characteristic curves for SAR targets embedded in clutter with and without partial obscuration.
Recognizing a target in SAR images is an important, yet challenging application of model-based vision. This paper describes a model-based SAR recognition system based on invariant histograms and deformable template matching techniques. An invariant histogram is a histogram of invariant values defined by geometric features such as points and lines in SAR images. Although a few invariants are sufficient to recognize a target, we histogram all invariant values given by all possible target feature pairs. This redundant representation enables robust recognition under the severe occlusions typical of SAR recognition scenarios. Multi-step deformable template matching examines the existence of an object by superimposing templates over a potential energy field generated from images or primitive features. It determines the template configuration which has the minimum deformation and the best alignment of the template with features. The deformability of the template absorbs the instability of SAR features. We have implemented the system and evaluated the system performance using hybrid SAR images, generated from synthetic model signatures and real SAR background signatures.
The Wright Laboratory/DARPA MSTAR program is developing an innovative model-based vision approach to SAR ATR. Central to that approach is the predict-extract-match-search subsystem that implements a 'hypothesis and test' approach to target recognition. The search module directs the search through hypothesis space and seeks to minimize the computational effort of that search. This paper presents the key concepts and issues behind the design of the MSTAR search module.
The moving and stationary target recognition (MSTAR) model-based automatic target recognition (ATR) system utilizes a paradigm which matches features extracted from an unknown SAR target signature against predictions of those features generated from models of the sensing process and candidate target geometries. The candidate target geometry yielding the best match between predicted and extracted features defines the identity of the unknown target. MSTAR will extend the current model-based ATR state-of-the-art in a number of significant directions. These include: use of Bayesian techniques for evidence accrual, reasoning over target subparts, coarse-to-fine hypothesis search strategies, and explicit reasoning over target articulation, configuration, occlusion, and lay-over. These advances also imply significant technical challenges, particularly for the MSTAR feature prediction module (MPM). In addition to accurate electromagnetics, the MPM must provide traceback between input target geometry and output features, on-line target geometry manipulation, target subpart feature prediction, explicit models for local scene effects, and generation of sensitivity and uncertainty measures for the predicted features. This paper describes the MPM design which is being developed to satisfy these requirements. The overall module structure is presented, along with the specific design elements focused on MSTAR requirements. Particular attention is paid to design elements that enable on-line prediction of features within the time constraints mandated by model-driven ATR. Finally, the current status, development schedule, and further extensions in the module design are described.
In many ATR implementations, the treatment of peaks, shadows, and regions is handled very differently in terms of the extraction process, and in terms of the attributes of those features that are used for discrimination. An alternative approach is to derive a generalized filter for each feature that transforms a SAR image into a likelihood image, in which the value at each pixel reflects the likelihood that the corresponding feature is present. The image relational graph (IRG) is an efficient and useful method by which the image can be segmented and from which features can be extracted. In this paper, we describe IRG construction and processing techniques for segmentation and feature extraction.
Applying model-based vision techniques to SAR data is particularly challenging because of the inherent difficulty in generating accurate predictions of an electromagnetic signature and the variation of observed signatures with small changes in sensing conditions, imaging geometry, and object characteristics. In order to cope with these difficulties we are developing a robust feature matching module to be part of the moving and stationary target acquisition and recognition model-based automatic target recognition system. The goals of this matching module are: (1) generate correspondences between predicted features and features extracted from a SAR image, (2) evaluate the match based on the degree of uncertainty of the features and their degree of match, (3) refine the target position/orientation/articulation based on the feature correspondences, and (4) analyze residual mis-matches for cueing scene interpretations of unexplained image features. We are developing a probabilistic optimization matching approach based on (1) a Bayesian evaluation metric and (2) the dynamic solution of the best correspondences during the search of pose space. The system is designed to support a wide range of features (points, regions, and other composite features) in a wide range of situations, such as obscuration, attenuation, layover, and variable target articulations and configurations. Initial test results in these types of situations are presented.
This paper describes an approach to simultaneous estimation of target category and pose from SAR imagery using a database of target model distance transforms organized in a hierarchical tree structure. Distance transforms are shown to provide a convenient method for distortion-based model matching without requiring specific feature associations. The technique provides an approach for categorizing targets under adverse conditions including partial obscuration and interference. We show that construction of a target hierarchy using clustering techniques can lead to a tree searching strategy that prunes the tree during the search and is guaranteed to locate the best-matching target models. We also provide empirical results using synthetic target model images produced by Xpatch and show how performance is affected by signature contamination.
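A minimal sketch of distance-transform (chamfer-style) matching, which scores a model edge template against image edges without explicit feature associations (toy edge maps; the hierarchical tree search is not shown):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def chamfer_score(model_edges, image_edges):
    """Distance-transform (chamfer) match: average distance from each model
    edge pixel to the nearest image edge pixel. Lower is a better match; no
    explicit feature correspondences are required."""
    dist_to_image_edges = distance_transform_edt(~image_edges)
    return dist_to_image_edges[model_edges].mean()

rng = np.random.default_rng(9)
image_edges = np.zeros((64, 64), dtype=bool)
image_edges[20, 10:40] = True            # an L-shaped "target outline"
image_edges[20:45, 40] = True
# add a few spurious edge pixels (interference / partial contamination)
image_edges[rng.integers(0, 64, 15), rng.integers(0, 64, 15)] = True

good_model = np.zeros_like(image_edges); good_model[21, 11:41] = True; good_model[21:46, 41] = True
bad_model = np.zeros_like(image_edges);  bad_model[50, 5:35] = True

print("good model score:", round(chamfer_score(good_model, image_edges), 2))
print("bad model score :", round(chamfer_score(bad_model, image_edges), 2))
```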
Using a point scatterer assumption, high-frequency SAR phase histories can be modeled as a sum of 2D complex exponentials in additive noise. This paper summarizes our SAR signal modeling experience using the XPatch simulated scattering data. We apply several 2D parametric estimation techniques including 2D TLS-Prony, MEMP, 2D IQML, and 2D CLEAN to estimate the complex exponential model parameters. From the estimation results, we discuss the engineering trade-offs among memory requirement, computation requirement, and estimation accuracy.
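A minimal sketch of the signal model: a sum of 2-D complex exponentials in noise, with a zero-padded 2-D periodogram peak as the simplest baseline estimator (the parametric estimators named in the abstract are not reproduced here):

```python
import numpy as np

def simulate_phase_history(freqs, amps, n=64, m=64, snr_db=20, rng=None):
    """Sum of 2-D complex exponentials in white noise -- the point-scatterer
    phase-history model. freqs: list of (fx, fy) normalized frequencies."""
    rng = rng or np.random.default_rng(0)
    kx, ky = np.meshgrid(np.arange(n), np.arange(m), indexing='ij')
    data = sum(a * np.exp(2j * np.pi * (fx * kx + fy * ky))
               for a, (fx, fy) in zip(amps, freqs))
    sig_pow = np.mean(np.abs(data) ** 2)
    noise = (rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))) / np.sqrt(2)
    return data + np.sqrt(sig_pow / 10 ** (snr_db / 10)) * noise

data = simulate_phase_history([(0.12, 0.30), (0.35, 0.08)], [1.0, 0.6])
# baseline (non-parametric) estimate: peak of the zero-padded 2-D periodogram
spec = np.abs(np.fft.fft2(data, s=(512, 512))) ** 2
idx = np.unravel_index(np.argmax(spec), spec.shape)
print("strongest scatterer at normalized freqs:", idx[0] / 512, idx[1] / 512)
```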
Automatic classification of targets in synthetic aperture radar (SAR) imagery is performed using topographic features. Targets are segmented from wide area imagery using a constant false alarm rate (CFAR) detector. Individual target areas are classified using the topographic primal sketch, which assigns each pixel a label that is invariant under monotonic gray tone transformations. A local surface fit is used to estimate the underlying function at each target pixel. Pixels are classified based on the zero crossings of the first directional derivatives and the extrema of the second directional derivatives. These topographic labels, along with the quantitative values of the second directional derivative extrema and the gradient, are used in target matching schemes. Multiple matching schemes are investigated, including correlation and graph matching schemes that incorporate distance between features as well as similarity measures. Cost functions are tailored to the topographic features inherent in SAR imagery. Trade-offs between the different matching schemes are addressed with respect to robustness and computational complexity. Classification is performed using one-foot and one-meter imagery obtained from XPATCH simulations and the MSTAR synthetic dataset.
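A minimal sketch of the surface-fit-and-label step, assuming a least-squares quadratic facet fit in a local window with labels derived from the gradient and the Hessian eigenvalues (the labels and thresholds are illustrative simplifications of the topographic primal sketch):

```python
import numpy as np

def facet_fit(window):
    """Least-squares fit of f(r,c) = a0 + a1*r + a2*c + a3*r^2 + a4*r*c + a5*c^2
    to a square window centered at (0, 0); returns gradient and Hessian there."""
    k = window.shape[0] // 2
    r, c = np.meshgrid(np.arange(-k, k + 1), np.arange(-k, k + 1), indexing='ij')
    A = np.stack([np.ones_like(r), r, c, r**2, r*c, c**2], axis=-1).reshape(-1, 6).astype(float)
    coef, *_ = np.linalg.lstsq(A, window.ravel(), rcond=None)
    grad = np.array([coef[1], coef[2]])
    hess = np.array([[2 * coef[3], coef[4]], [coef[4], 2 * coef[5]]])
    return grad, hess

def topographic_label(window, eps=1e-3):
    grad, hess = facet_fit(window)
    evals = np.linalg.eigvalsh(hess)
    if np.linalg.norm(grad) < eps:
        if np.all(evals < -eps):  return "peak"
        if np.all(evals > eps):   return "pit"
        if evals[0] < -eps < eps < evals[1]:  return "saddle"
        return "flat"
    return "ridge/slope"

y, x = np.meshgrid(np.arange(-3, 4), np.arange(-3, 4), indexing='ij')
print(topographic_label(np.exp(-(x**2 + y**2) / 4.0)))   # -> peak
print(topographic_label(0.0 * x))                         # -> flat
```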
The classification of high range resolution radar returns using multiscale features is considered. Because of characteristics unique to radar signals, such as clutter and sensitivity to viewing angle change, classifiers using features extracted from a single scale do not meet the requirements of non-cooperative target identification (NCTI). We present a hierarchical ARMA model for modeling high range resolution radar signals at multiple scales and apply it to an NCTI database containing 5000 test samples and 5000 training samples. We first show that the radar signal at a coarse scale follows an ARMA process if it follows an ARMA model at a finer scale. The model parameters at different scales are easily computed from the parameters at another scale. Therefore, the hierarchical model allows us to compute spectral features at the coarse scale without adding much computational burden. The multiscale spectral features at five scales are computed using the hierarchical modeling approach, and are classified by a minimum distance classifier. The multiscale classifier is applied to both poorly aligned data and better aligned data. For both data sets, about 95 percent of the radar returns were correctly classified, showing that the multiscale classifier is robust to misalignment.
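The hierarchical ARMA recursion itself is not reproduced here; the sketch below conveys the multiscale-feature idea by fitting AR coefficients at several decimated scales and classifying with a minimum-distance rule (toy profiles, illustrative model order):

```python
import numpy as np

def ar_coeffs(x, order=4):
    """Least-squares AR(p) fit: x[n] ~ sum_k a[k] * x[n-k]."""
    X = np.column_stack([x[order - k - 1:len(x) - k - 1] for k in range(order)])
    y = x[order:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a

def multiscale_features(profile, scales=(1, 2, 4), order=4):
    """Concatenate AR spectral-model coefficients computed at several scales
    (coarser scales obtained here simply by decimation with pre-averaging)."""
    feats = []
    for s in scales:
        coarse = profile[:len(profile) // s * s].reshape(-1, s).mean(axis=1)
        feats.append(ar_coeffs(coarse, order))
    return np.concatenate(feats)

rng = np.random.default_rng(10)
def make_return(f):   # toy range profile: damped oscillation plus noise
    n = np.arange(256)
    return np.cos(2 * np.pi * f * n) * np.exp(-n / 200) + 0.2 * rng.standard_normal(256)

train = {"classA": multiscale_features(make_return(0.05)),
         "classB": multiscale_features(make_return(0.12))}
test = multiscale_features(make_return(0.05))
label = min(train, key=lambda k: np.linalg.norm(test - train[k]))   # minimum distance
print("classified as:", label)
```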
Polarimetric diversity can be exploited in synthetic aperture radar (SAR) for enhanced target detection and target description. Detection statistics and target features can be computed from either polarimetric imagery or parametric processing of SAR phase histories. We adopt an M-ary Bayes classification approach and derive Bayes-optimal decision rules for detection and description of scattering centers. Scattering centers are modeled as one of M canonical geometric types with unknown amplitude, phase and orientation angle; clutter is modeled as a spherically invariant random vector. For the Bayes-optimal decision rules, we provide a simple geometric interpretation and an efficient computational implementation. Moreover, we characterize the certainty of decisions by deriving an approximate a posteriori probability.
This paper applies a recently-developed neural clustering scheme, called 'probabilistic winner-take-all (PWTA)', to image segmentation. Experimental results are presented. These results show that the PWTA clustering scheme significantly outperforms the popular k-means algorithm when both are utilized to segment a synthetic-aperture-radar image representing ship targets in an open-ocean scene.
Future wide area surveillance systems such as the Tier II+ and Tier III- unmanned aerial vehicles (UAVs) will be gathering vast amounts of high resolution SAR data for transmission to ground stations and subsequent analysis by image interpreters to provide critical and timely information to field commanders. This extremely high data rate presents two problems. First, the wide bandwidth data link channels which would be needed to transmit this imagery to a ground station are both expensive and difficult to obtain. Second, the volume of data which is generated by the system will quickly saturate any human-based analysis system without some degree of computer assistance. The ARPA sponsored clipping service program seeks to apply automatic target recognition (ATR) technology to perform 'intelligent' data compression on this imagery in a way which will provide a product on the ground that preserves essential information for further processing either by the military analyst or by a ground-based ATR system. An ATR system on board the UAV would examine the imagery data stream in real time, determining regions of interest. Imagery from those regions would be transmitted to the ground in a manner which preserves most or all of the information contained in the original image. The remainder of the imagery would be transmitted to the ground with lesser fidelity. This paper presents a system analysis deriving the operational requirements for the clipping service system and examines candidate architectures.
In this paper a compression algorithm is developed to compress SAR imagery at very low bit rates. A new vector quantization (VQ) technique called the predictive residual vector quantizer (PRVQ) is presented for encoding the SAR imagery. A variable-rate VQ scheme called the entropy-constrained PRVQ (EC-PRVQ), which is designed by imposing a constraint on the output entropy of the PRVQ, is also presented. Experimental results are presented for both PRVQ and EC-PRVQ at high compression ratios. The encoded images are also compared with those of a wavelet-based coder.
The Gabor transform is a combined spatial-spectral transform that provides local spatial-frequency and orientation analyses in overlapping image neighborhoods. This paper describes a system for compressing detected SAR images based on the Gabor transform. The effects of different quantizers on subjective and computed measures of image quality are examined. We compare scalar, vector, and trellis-coded quantizers. Because the Gabor transform is non-orthogonal, conventional bit allocation methods which are optimal for orthogonal transforms are suboptimal for the Gabor transform. We compare bit allocation methods based on the distortion-rate function and alternative methods based on the spatial-frequency characteristics of the human visual system (HVS). Trellis-coded quantizers with HVS-based bit allocators yield the best performance.
In this paper we discuss the performance of a new wavelet-based embedded compression algorithm on synthetic aperture radar (SAR) image data. This new algorithm uses index coding on the indices of the discrete wavelet transform of the image data and provides an embedded code to successively approximate it. Results on compressing still images, medical images, as well as seismic traces indicate that the new algorithm performs quite competitively with other image compression algorithms. Its evaluation for SAR image compression is presented in this paper. One advantage of the new algorithm presented here is that the compressed data is encoded in such a way as to facilitate processing in the compressed wavelet domain, which is a significant aspect considering the rate at which SAR data is collected and the desire to process the data in 'near real time'.
The two-parameter constant false alarm rate (CFAR) detector defines a local area where the shape and scale of the stencil are predetermined by physical considerations alone (target size), which may cause suboptimal performance of the detector in SAR imagery. In this paper, we propose a new CFAR stencil based on a family of gamma kernels which provides the ability to adapt the scale and shape of the stencil to achieve the minimum false alarm rate. The new detector is called the gamma CFAR detector. The simulation results show that the gamma CFAR detector outperforms the two-parameter CFAR detector in high-resolution, 1 ft. by 1 ft., fully polarimetric SAR imagery processed by the polarimetric whitening filter.
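For reference, a minimal two-parameter CFAR with a fixed hollow stencil, the baseline against which the gamma CFAR is compared (window sizes and threshold are illustrative):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def two_parameter_cfar(intensity, guard=8, stencil=16, k=5.0):
    """Two-parameter CFAR: declare a detection where the pixel exceeds the
    local background mean by k local standard deviations. The background
    statistics are estimated over a hollow stencil (big window minus a guard
    window around the test pixel); the stencil geometry is fixed in advance."""
    big_n = (2 * stencil + 1) ** 2
    grd_n = (2 * guard + 1) ** 2
    sum_big = uniform_filter(intensity, 2 * stencil + 1) * big_n
    sum_grd = uniform_filter(intensity, 2 * guard + 1) * grd_n
    sumsq_big = uniform_filter(intensity ** 2, 2 * stencil + 1) * big_n
    sumsq_grd = uniform_filter(intensity ** 2, 2 * guard + 1) * grd_n
    n = big_n - grd_n
    mean = (sum_big - sum_grd) / n
    var = (sumsq_big - sumsq_grd) / n - mean ** 2
    return (intensity - mean) / np.sqrt(np.maximum(var, 1e-12)) > k

rng = np.random.default_rng(11)
clutter = rng.exponential(1.0, (256, 256))          # speckle-like intensity clutter
scene = clutter.copy()
scene[100:103, 150:153] += 25.0                     # small bright target
det = two_parameter_cfar(scene)
print("detections:", int(det.sum()), " target hit:", bool(det[100:103, 150:153].any()))
```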
Accurate topographic data over the global land and ice masses have numerous applications in areas of geology, geophysics, hydrophysics and polar ice research. It is well established that interferometric synthetic aperture radar (InSAR) is a method which may provide a means of estimating global topography with high spatial resolution and height accuracy. One implementation approach, that of utilizing a single SAR system in a nearly repeating orbit, is attractive not only for cost and complexity reasons but also in that it permits inference of changes in the surface over the orbit repeat cycle. This paper analyzes the InSAR spatial geometry model and gives a phase error model. The paper also discusses the characteristics of the InSAR echo signal. Finally, the general procedure of InSAR imaging is outlined.
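A minimal sketch of the phase-to-height relation underlying the geometry model, using assumed (illustrative) system parameters:

```python
import numpy as np

# Illustrative InSAR geometry parameters (assumed values, not from the paper)
wavelength = 0.056        # m (C-band)
slant_range = 850e3       # m
incidence = np.deg2rad(35)
b_perp = 150.0            # perpendicular baseline, m
p = 2                     # 2 for repeat-pass (two-way path difference)

# Height sensitivity: dphi/dh = 2*pi*p*b_perp / (lambda * R * sin(theta))
k_h = 2 * np.pi * p * b_perp / (wavelength * slant_range * np.sin(incidence))
height_of_ambiguity = 2 * np.pi / k_h       # height change per 2*pi fringe
print("phase sensitivity (rad per meter):", round(k_h, 4))
print("height of ambiguity (m):", round(height_of_ambiguity, 1))

# Converting an unwrapped, flat-earth-corrected phase map to heights:
unwrapped_phase = np.array([0.0, 1.5, 3.1, 6.3])   # toy values, radians
print("heights (m):", np.round(unwrapped_phase / k_h, 1))
```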
Hypothesis-and-test (HAT) is the backbone of the model based paradigm. When the size and complexity of hypothesis space is non-trivial, iterative hypothesis refinement may be required before a satisfactory solution is achieved. In these cases the success of a model-based paradigm hinges on its ability to demonstrate strong convergence properties. When attempting to evaluate such iterative performance, it is useful to think of HAT as a two component process. The forward component maps hypothesis state variables to predicted features, ranging from pixel gray scale values to high level geometric properties of shape. The inverse component maps differences between predicted and observed features to changes in hypothesis state. Coarse-to-fine search strategies attempt to reduce the search domain by first applying features which behave smoothly over large regions in hypothesis space. Once the state of the search domain has been reduced, finer, more discriminating features are used. A feature characterization methodology which relates directly to the feature behavior irrespective of the particular predict-extract-match (PEM) algorithms employed would be implementation independent and hence, most general. Unfortunately, our ability to observe features is directly dependent on a feature extraction paradigm; our ability to accurately hypothesize underlying target state is dependent on a feature prediction paradigm; and our ability to compare between predicted and extracted features is dependent on a feature matching paradigm. Hence, we have little alternative but to adopt a methodology to characterize features in the context of a specific PEM configuration. This paper presents such a methodology using a novel lattice matcher paradigm. This new approach matches regions in hypothesis space against an extraction and produces a match surface instead of a single score or likelihood. This surface is then used to compute hypothesis state modifications. The extent over hypothesis space where the lattice matcher provides good hypothesis refinement is used to determine where in a search sequence a feature is best used. Characterization results are presented for peak, target and shadow features in simulated synthetic aperture radar data.
One key advantage of the model-based approach for automatic target recognition (ATR) is the wide range of targets and acquisition scenarios that can be accommodated without algorithm re-training. This accrues from the use of predictive models which can be adjusted to hypothesized scenarios on-line. Approaches which rely on measured signature exemplars as the source of reference data for signature matching are constrained to those scenarios represented in the reference data base. The moving and stationary target recognition (MSTAR) program will advance the state-of-the-art in model-based ATR by developing, evaluating, and testing algorithm performance against a set of extended operating conditions (EOCs) designed to reflect real-world battlefield scenarios. In addition to full 360 deg target aspect coverage over a range of depression angles, the EOCs include variations in squint angle, target articulation and configurations, obscuration due to occlusion and/or layover, and intra-class target variability. These conditions can have a profound impact on the nature of the target signature, necessitating the development of explicit prediction and reasoning algorithms to provide robust target recognition. This paper provides a tutorial description of the impact of the MSTAR EOCs on SAR target signatures. A brief background discussion of the SAR imaging process is presented first. This is followed by a description of the impact of each EOC category on the target signature along with synthetic imagery examples to illustrate this impact.
We analyze a class of Bayesian, binary hypothesis-testing problems relevant to the classification of targets in the presence of pose uncertainty. When hypothesis H1 is true, we observe one of N1 possible complex-valued signal vectors, immersed in additive, white complex Gaussian noise; when hypothesis H2 occurs, we observe one of N2 other possible signal vectors, again immersed in noise. Given prior probabilities for H1 and H2, and also prior conditional probabilities for the presence of each of the signal vectors, the problem is to determine both a decision rule that minimizes the error probability and the associated minimal error probability. The optimal decision rule here is well known to be a likelihood ratio test having a straightforward analytical form; however, the performance of this optimal test is intractable analytically, and thus approximations are required to calculate the probability of error. We devise an approximation based on the observation that both the numerator and denominator of the likelihood ratio test statistic consist of sums of lognormal random variables. Previous work has shown that such sums are well approximated as themselves having a lognormal distribution; we exploit this fact to obtain a simple, approximate error probability expression. For a specific problem, we then compare the resulting error probability numbers with ones obtained via Monte Carlo simulation, demonstrating good agreement between the two methods.
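A minimal Monte Carlo sketch of the test being approximated: the likelihood ratio between two finite mixtures of signal vectors in complex Gaussian noise, with the error probability estimated by simulation (dimensions, signal sets, and noise level are illustrative):

```python
import numpy as np

rng = np.random.default_rng(12)
dim, n1, n2, sigma = 16, 8, 8, 3.0

# Candidate signal vectors under each hypothesis (e.g. different target poses).
S1 = rng.standard_normal((n1, dim)) + 1j * rng.standard_normal((n1, dim))
S2 = rng.standard_normal((n2, dim)) + 1j * rng.standard_normal((n2, dim))

def log_mixture_likelihood(y, S):
    """log of (1/N) * sum_i exp(-||y - s_i||^2 / sigma^2) for complex Gaussian
    noise (constant factors cancel in the likelihood ratio)."""
    d2 = np.sum(np.abs(y - S) ** 2, axis=1)
    m = -d2.min() / sigma ** 2
    return m + np.log(np.mean(np.exp(-d2 / sigma ** 2 - m)))   # stable log-sum-exp

def simulate_error(trials=20000):
    errors = 0
    for _ in range(trials):
        h1 = rng.random() < 0.5                                  # equal priors
        s = (S1 if h1 else S2)[rng.integers(0, n1 if h1 else n2)]
        y = s + sigma / np.sqrt(2) * (rng.standard_normal(dim) + 1j * rng.standard_normal(dim))
        decide_h1 = log_mixture_likelihood(y, S1) > log_mixture_likelihood(y, S2)
        errors += decide_h1 != h1
    return errors / trials

print("Monte Carlo error probability:", simulate_error())
```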