We investigate passive radar imaging of aircraft using reflected TV signals. Such passive multistatic "radar" has been developed to detect and track aircraft with good accuracy. The additional capability of image formation would help to identify targets. The Fourier-space sampling provided by passive radar is nonuniform. For a given aircraft flight path, different receiver locations give rise to different sampling patterns. We simulate multistatic radar returns using the Fast Illinois Solver Code (FISC) and show that a good sampling pattern can be used to form a recognizable target image using direct Fourier reconstruction, whereas a bad sampling pattern can make it impossible to form a useful image. In the Gaithersburg, MD area, a well-chosen receiver location using 21 or fewer channels provides enough Fourier-space coverage to form a useful aircraft image.
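As a rough illustration of direct Fourier reconstruction from the nonuniform Fourier-space samples described above, the sketch below grids scattered wavenumber-domain samples onto a regular grid with nearest-neighbor binning and applies an inverse FFT. All names, the gridding choice, and the toy point-target data are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch: direct Fourier reconstruction from nonuniform samples.
import numpy as np

def direct_fourier_image(kx, ky, samples, n=256, k_max=1.0):
    """Grid scattered (kx, ky) samples onto an n x n grid and invert with an FFT."""
    grid = np.zeros((n, n), dtype=complex)
    counts = np.zeros((n, n))
    # Map continuous wavenumbers to grid indices (nearest-neighbor gridding).
    ix = np.clip(((kx / k_max) * 0.5 + 0.5) * (n - 1), 0, n - 1).astype(int)
    iy = np.clip(((ky / k_max) * 0.5 + 0.5) * (n - 1), 0, n - 1).astype(int)
    np.add.at(grid, (iy, ix), samples)
    np.add.at(counts, (iy, ix), 1)
    grid[counts > 0] /= counts[counts > 0]          # average samples falling in the same bin
    image = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(grid)))
    return np.abs(image)

# Toy usage: an irregular sampling pattern observing a single point scatterer.
rng = np.random.default_rng(0)
kx = rng.uniform(-1, 1, 5000)
ky = rng.uniform(-1, 1, 5000)
samples = np.exp(-2j * np.pi * (kx * 0.2 + ky * -0.1))   # point target at (0.2, -0.1)
img = direct_fourier_image(kx, ky, samples)
```

A sparse or strongly clustered sampling pattern leaves many empty grid bins, which is what produces the heavy artifacts the abstract associates with "bad" receiver locations.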
Exact SAR inversion for a linear aperture may be obtained using fast transform techniques. Alternatively, time-domain backprojection may be used, which can also handle general curved apertures. In the past, however, backprojection has seldom been used because of its heavy computational burden. We show in this paper that the backprojection method can be formulated as an exact recursive method based on factorization of the aperture. By sampling the backprojected data in local polar coordinates, the number of operations is drastically reduced and can be made to approach that of fast transform algorithms.
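For context, a minimal sketch of plain (non-factorized) time-domain backprojection is given below; the recursive aperture-factorization speed-up that is the subject of the abstract is not shown. Geometry conventions, array names, and the assumption of range-compressed input are all illustrative.

```python
# Hypothetical sketch: brute-force time-domain backprojection over a curved aperture.
import numpy as np

def backproject(rc_pulses, platform_xyz, range_axis, grid_x, grid_y, wavelength):
    """rc_pulses: (n_pulses, n_range) complex range-compressed pulses.
    platform_xyz: (n_pulses, 3) antenna positions; range_axis: range samples (m)."""
    image = np.zeros((grid_y.size, grid_x.size), dtype=complex)
    gx, gy = np.meshgrid(grid_x, grid_y)
    for p in range(rc_pulses.shape[0]):
        dx = gx - platform_xyz[p, 0]
        dy = gy - platform_xyz[p, 1]
        dz = -platform_xyz[p, 2]
        r = np.sqrt(dx**2 + dy**2 + dz**2)            # range from this pulse to every pixel
        sample = np.interp(r, range_axis, rc_pulses[p].real) \
               + 1j * np.interp(r, range_axis, rc_pulses[p].imag)
        image += sample * np.exp(4j * np.pi * r / wavelength)   # re-phase and coherently sum
    return image
```

The cost is O(n_pulses × n_pixels); the factorized recursion described in the abstract reduces this toward the cost of transform-based inversion.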
The frequency scaling approach is a new spotlight SAR image formation algorithm. It performs the range cell migration correction for dechirped raw data precisely and without interpolation by using a novel frequency scaling operation, while the residual video phase is corrected simultaneously. The computational requirements are lower than those of other spotlight SAR image formation approaches such as the polar format algorithm and the range migration algorithm. In this paper, the frequency scaling algorithm is applied to process high-squint spotlight data. The new squint illumination geometry is defined and some modifications to the basic algorithm are presented. Point-target simulations at squint angles of up to 45 deg are carried out to show the validity of the algorithm.
In this paper we study the performance of two existing autofocus algorithms in a difficult SAR scenario. One algorithm is the well-known phase gradient autofocus (PGA) algorithm and the other is the more recent AUTOCLEAN. The latter was introduced particularly with ISAR autofocus of a small target in mind and has been shown to outperform the PGA when range misalignment is present. This was expected because AUTOCLEAN, unlike PGA, has a built-in ability to compensate for range misalignment. In most available studies of these autofocus algorithms, spatially variant phase errors are absent or insignificant. The data used here are far-field SAR data collected over a large range of aspect angles. The target area is large, so significant motion through resolution cells (MTRC) occurs due to target scene rotation. The polar format algorithm (PFA) is applied prior to autofocus to handle MTRC and compensate for off-track platform motion. However, the platform motion measurements used in the PFA are not precise enough to compensate for the off-track motion, and phase errors corrupting the data remain after the PFA. These phase errors are spatially variant because of the large target scene, which violates the models underlying the autofocus algorithms above; this is in contrast to the previously mentioned studies. We show that the performance of both autofocus algorithms deteriorates considerably in the presence of spatially variant phase errors, but in different ways, since the averaging of the phase error estimates is done differently in the two algorithms. Based on our numerical study of the two autofocus methods, we rank them with respect to their sensitivity to spatially variant phase errors.
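For readers unfamiliar with PGA, the sketch below shows one iteration of the standard recipe (center-shift, window, phase-gradient estimate, correction) on a complex image with range bins in rows and azimuth in columns. The fixed window width and variable names are assumptions; AUTOCLEAN and the spatially variant error handling discussed in the abstract are not shown.

```python
# Hypothetical sketch: one iteration of phase gradient autofocus (PGA).
import numpy as np

def pga_iteration(img, window=32):
    """img: complex SAR image (range x azimuth). Returns corrected image and phase estimate."""
    n_rng, n_az = img.shape
    # 1) Circularly shift the brightest scatterer in each range bin to the center column.
    centered = np.empty_like(img)
    for r in range(n_rng):
        centered[r] = np.roll(img[r], n_az // 2 - int(np.argmax(np.abs(img[r]))))
    # 2) Window around the center to isolate the dominant scatterers.
    win = np.zeros(n_az)
    win[n_az // 2 - window // 2 : n_az // 2 + window // 2] = 1.0
    # 3) Go to the aperture (phase-history) domain and estimate the phase-error gradient.
    G = np.fft.ifft(centered * win, axis=1)
    dphi = np.angle(np.sum(np.conj(G[:, :-1]) * G[:, 1:], axis=0))
    phi = np.concatenate(([0.0], np.cumsum(dphi)))
    phi -= np.polyval(np.polyfit(np.arange(n_az), phi, 1), np.arange(n_az))  # drop linear trend
    # 4) Remove the estimated error from the original data and re-form the image.
    corrected = np.fft.fft(np.fft.ifft(img, axis=1) * np.exp(-1j * phi), axis=1)
    return corrected, phi
```

The gradient estimate averages over all range bins, which is exactly the step that breaks down when the phase error varies across the scene.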
Most antenna beamforming is done with phase and amplitude weighting (either physically or computationally) on the returns from multiple antennas. This approach is based on the assumption that the signals involved are narrowband continuous-wave (cw) sources and, in fact, the antenna pattern is usually verified on a test range with a cw test source. When the antenna array has a limited number of elements spaced across many wavelengths, the grating lobes generated are typically large, resulting in 'ghost' returns and reduced target-to-clutter ratios. In contrast, we are working with radars that either use a short real-time pulse with a few megahertz of bandwidth, or use an equivalent step-frequency or chirp source of appropriate bandwidth that is applied and transformed into an equivalent pulse through signal processing. In this paper, we present a time-domain backprojection image-formation approach and apodization technique that is more typically used in synthetic aperture radar. We present simulation results showing the sidelobe performance of widely spaced sparse antenna arrays.
The Amplitude and Phase EStimation (APES) approach to amplitude spectrum estimation has been receiving considerable attention recently. We develop an extension of APES for the spectral estimation of gapped (incomplete) data and apply it to synthetic aperture radar (SAR) imaging with angular diversity. It has recently been shown that APES minimizes a certain least-squares criterion with respect to the estimate of the spectrum. Our new algorithm, called gapped-data APES, is based on minimizing this criterion with respect to the missing data as well. Numerical results are presented to demonstrate the effectiveness of the proposed algorithm and its applicability to SAR imaging with angular diversity.
State-of-the-art, extremely versatile fine-resolution Synthetic Aperture Radar (SAR) systems allow very fine-resolution, accurate images to be formed over a wide range of imaging geometries (squint angles and depression angles). This capability in turn allows the fusion of multiple views of targets and scenes into very accurate 3-dimensional renderings of those same scenes and targets. With proper imaging geometry selections, relative height accuracy within a scene can easily be on the order of the resolution of the original SAR images, thereby rivaling even the finest IFSARs still on the drawing board, and without the height ambiguities typically associated with large-baseline IFSARs. Absolute accuracy is typically limited to the accuracy of SAR flight path knowledge, bounded typically by GPS performance. This paper presents the relationship of height accuracy to imaging geometry (flight path) selection, and illustrates conditions for optimum height estimates. Furthermore, height accuracy is related to 3-D position accuracy and precision over a variety of imaging geometries. Claims of height precision on the order of resolution are validated with experimental results, also presented, using multiple aspects of a target scene collected from a high-performance single-phase-center SAR.
The improved quality of InSAR data suggests utilizing such data for the analysis of urban areas. However, the phase information from which the height data are calculated is often severely disturbed, depending on the signal-to-noise ratio. As a consequence, irregular height jumps occur even inside flat objects. In this paper we report on investigations to stabilize and improve InSAR height data. After preprocessing, a segmentation is carried out in the intensity and the height data. Inside the extracted segments the height data are smoothed, using the related intensity or coherence values as weights. For every segment the weighted average height is calculated. Preliminary hypotheses for buildings are identified by a significant height above the surrounding ground. In a post-processing step, the intermediate results are analyzed and corrected for possible over- and under-segmentation. Adjacent objects with similar heights are merged and objects including shadow areas are split. The shadow areas are detected by structural image analysis in a production net environment exploiting collateral information such as sensor position and depression angle. The derived 3D information may be used for visualization or map-update tasks. A test site including the airport of Frankfurt (Main) was chosen. For visualization purposes, a 3D view of the smoothed height data is shown. The results are compared to a map and the differences are depicted and discussed.
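The segment-wise smoothing step described above amounts to replacing the heights inside each segment with a weighted mean, with coherence or intensity as the weight. A minimal sketch, with array names assumed for illustration:

```python
# Hypothetical sketch: per-segment weighted averaging of InSAR heights.
import numpy as np

def smooth_heights_per_segment(height, weight, labels):
    """height, weight, labels: 2-D arrays of equal shape; labels holds integer segment ids."""
    smoothed = height.copy()
    for seg_id in np.unique(labels):
        mask = labels == seg_id
        w = weight[mask]
        if w.sum() > 0:
            # weighted mean height assigned to every pixel of the segment
            smoothed[mask] = np.sum(height[mask] * w) / w.sum()
    return smoothed
```

Pixels with high coherence thus dominate the segment height, suppressing the irregular jumps caused by low-SNR phase.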
First-generation image compression methods using block-based DCT or wavelet transforms compressed all image blocks with a uniform compression ratio. Consequently, any regions of special interest were degraded along with the remainder of the image. Second-generation image compression methods apply object-based compression techniques in which each object is first segmented and then encoded separately. Content-based compression further improves on object-based compression by applying image understanding techniques. First, each object is recognized or classified, and then different objects are compressed at different compression rates according to their priorities. Regions with higher priorities (such as objects of interest) receive more encoding bits than less important regions, such as the background. The major difference between a content-based compression algorithm and conventional block-based or object-based compression algorithms is that content-based compression replaces the quantizer with a more sophisticated classifier. In this paper we describe a technique in which the image is first segmented into regions by texture and color. These regions are then classified and merged into different objects by means of a classifier based on their color, texture, and shape features. Each object is then transformed by either the DCT or wavelets. The resulting coefficients are encoded to an accuracy that minimizes recognition error and satisfies alternative requirements. We employ the Chernoff bound to compute the cost function of the recognition error. Compared to conventional image compression methods, our results show that content-based compression is able to achieve more efficient image coding by suppressing the background while leaving the objects of interest virtually intact.
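The core mechanism — spending more bits on high-priority regions — can be illustrated by quantizing DCT blocks with a region-dependent step size. The step values, block size, and priority mask below are assumptions for illustration; the paper's classifier-driven bit allocation and Chernoff-bound cost are not reproduced.

```python
# Hypothetical sketch: region-priority DCT quantization for content-based coding.
import numpy as np
from scipy.fft import dctn, idctn

def quantize_block(block, step):
    """DCT-transform an 8x8 block and quantize with a uniform step size."""
    return np.round(dctn(block, norm='ortho') / step)

def dequantize_block(qcoeffs, step):
    return idctn(qcoeffs * step, norm='ortho')

def content_based_code(image, priority_mask, fine_step=2.0, coarse_step=32.0):
    """priority_mask: boolean array, True where a block covers an object of interest."""
    h, w = image.shape
    out = np.zeros_like(image, dtype=float)
    for i in range(0, h - 7, 8):
        for j in range(0, w - 7, 8):
            step = fine_step if priority_mask[i:i+8, j:j+8].any() else coarse_step
            q = quantize_block(image[i:i+8, j:j+8], step)       # few bits for background blocks
            out[i:i+8, j:j+8] = dequantize_block(q, step)
    return out
```

Background blocks quantized with the coarse step compress heavily, while object blocks remain nearly intact, mirroring the behavior reported in the abstract.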
In this paper, a coding theoretic approach is presented for the unsupervised segmentation of SAR images. The approach implements Rissanen's concept of Minimum Description Length (MDL) for estimating piecewise homogeneous regions. Our image model is a Gaussian random field whose mean and variance functions are piecewise constant across the image. The model is intended to capture variations in both mean value (intensity) and variance (texture). We adopt a multiresolution/progressive encoding approach to this segmentation problem and use MDL to penalize overly complex segmentations. We develop two different approaches both of which achieve fast unsupervised segmentation. One algorithm is based on an adaptive (greedy) rectangular recursive partitioning scheme. The second algorithm is based on an optimally-pruned wedgelet-decorated dyadic partition. We present simulation results on SAR data to illustrate the performance obtained with these segmentation techniques.
Development of phase history calibration techniques is important for improving Synthetic Aperture Radar (SAR) scene modeling capabilities. Image data of complex scene settings is used for clutter database construction, and the resulting databases are used in conjunction with synthetic radar predictions of complex targets to predict synthetic SAR imagery. The current method of trihedral calibration is typically performed after image formation, using a ratiometric technique, which is highly dependent on calibration target position and orientation and ground truth accuracy. As part of a recent SAR research data collection, measurements were made on a calibration-grade, 6-meter diameter top hat in both a homogeneous scene and with controlled obscuration and layover conditions. This paper will discuss phase-history calibration target design and scenario design to support obscuration and layover studies.
ATR using HRR signatures has recently gained a lot of attention. A number of classification methods have been proposed using different target descriptions. The traditionally used classifier, which relies on the mean square error between magnitude-only range profiles and templates, suffers from problems with interfering scatterers. Several attempts have been made to improve the MSE classifier, both during the template formation process and in the matching. We have recently presented a method that matches complex HRR signatures to target descriptions based on scattering centers. This method handles the unknown phases of the centers and thus overcomes the problem of interference between scatterers. In this paper we compare our method with a number of other methods that use magnitude-only range profiles, including mean templates, eigen-templates, and the specular and diffuse scattering models.
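The baseline MSE classifier referred to above can be sketched as follows: each magnitude-only profile is normalized and compared against stored class templates, with a circular shift search standing in for range alignment. The normalization and alignment choices are assumptions, not the paper's exact procedure.

```python
# Hypothetical sketch: MSE matching of magnitude-only HRR range profiles to templates.
import numpy as np

def mse_classify(profile, templates):
    """profile: (n_range,) magnitude profile. templates: dict class -> (n_range,) template."""
    best_cls, best_err = None, np.inf
    p = profile / np.linalg.norm(profile)
    for cls, t in templates.items():
        t = t / np.linalg.norm(t)
        # circular range-shift search to handle alignment errors between profile and template
        err = min(np.mean((np.roll(p, s) - t) ** 2) for s in range(len(p)))
        if err < best_err:
            best_cls, best_err = cls, err
    return best_cls, best_err
```

Because only magnitudes are compared, two scatterers falling in the same range cell can interfere destructively and distort the profile, which is the weakness the scattering-center method addresses.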
This paper describes a methodology for using high-range-resolution ground-moving-target-indicator (HRRGMTI) profiles to aid the association process in an automated tracker. The tracker uses a Kalman filter to estimate the state of targets. These state estimates are used in addition to HRRGMTI measurements to perform data association. The HRRGMTI profiles used to develop and test the system are simulated using the MSTAR dataset of synthetic aperture radar (SAR) imagery. The performance of the system was estimated using a computer simulation that modeled the radars, ground vehicles, and creation of HRRGMTI profiles. Performance relative to a tracker that uses only kinematic information to perform data association is computed and presented.
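The kinematic core of such a tracker is a Kalman filter over a constant-velocity target state. A minimal predict/update sketch is shown below; the state layout, noise levels, and the omission of the HRR-aided association score are all assumptions for illustration.

```python
# Hypothetical sketch: one constant-velocity Kalman filter step for a ground target.
import numpy as np

def kalman_step(x, P, z, dt=1.0, q=0.1, r=25.0):
    """x: [px, vx, py, vy] state; P: 4x4 covariance; z: [px, py] position measurement."""
    F = np.array([[1, dt, 0, 0], [0, 1, 0, 0], [0, 0, 1, dt], [0, 0, 0, 1]], float)
    H = np.array([[1, 0, 0, 0], [0, 0, 1, 0]], float)
    Q = q * np.eye(4)                     # process noise (maneuver uncertainty)
    R = r * np.eye(2)                     # measurement noise
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update with the associated GMTI measurement
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P
```

In the paper's system, the innovation-based kinematic gate would be augmented by a similarity score between the measured HRR profile and the profile expected for each track before association is decided.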
The present era of limited warfare demands that warfighters have the capability for timely acquisition and precision strikes against enemy ground targets with minimum collateral damage. As a result, automatic target recognition (ATR) and Feature Aided Tracking (FAT) of moving ground vehicles using High Range Resolution (HRR) radar has received increased interest in the community. HRR radar is an excellent sensor for potentially identifying moving targets under all-weather, day/night, long-standoff conditions. This paper presents preliminary results of a Veridian Engineering Internal Research and Development effort to determine the feasibility of using invariant HRR signature features to assist a FAT algorithm. The presented method of invariant analysis makes use of Lie mathematics to determine geometric and system invariants contained within an Object/Image (O/I) relationship. The fundamental O/I relationship expresses a geometric relationship (constraint) between a 3-D object (scattering center) and its image (a 1-D HRR profile). The HRR radar sensor model is defined, and then the O/I relationship for invariant features is derived. Although constructing invariants is not a trivial task, once an invariant is determined, it is computationally simple to implement into a FAT algorithm.
The Air Force Research Laboratory's Sensors Directorate is developing the next step in target recognition, a continuous identification capability. This capability consists of a wide range of algorithms, sensor modes, and technologies that work in concert toward the overall goal of identifying moving as well as stationary targets. Three major pieces of this emerging capability are stationary identification, correlation of this information with tracks, and the moving target recognition technologies. Although a brief discussion of each of these pieces is provided in this paper, the concentration is on the algorithms and technologies for moving target recognition that exploit the High Range Resolution (HRR) radar mode, and on how this complements Ground Moving Target Indication tracking and Synthetic Aperture Radar Automatic Target Recognition. This paper expands on the overall continuous ID vision, work performed under several efforts, new sources of HRR data, and how this data will push the state of the art.
Interest in signature phenomenology studies has increased steadily in the past several years. These studies are intended to support analysis of the advantages of increasing radar bandwidths for both stationary and moving target applications. Because measured data is often limited, it is common to augment it with stepped-frequency data from turn-table collections, compact range collections, or data generated synthetically with products like XPatch. Airborne systems frequently employ linear frequency modulated (LFM) radars. The corresponding frequency response of the target is typically obtained with a de-chirp on receive process. The effects of this processing are well understood for point-source scatterers; however, the effects have not been thoroughly analyzed for more general types of scatterers. Little attention has been paid to the effects of de-chirp on receive processing for moving targets. Previously the authors derived the relationship between stepped-frequency and LFM radar measurements for general stationary objects. Distortions are seldom apparent in images produced by these radars, but can be important for wideband measurements, particularly when resonant scattering mechanisms exist. In this paper the authors extend those results to the moving target case.
SAR processing is optimized for motionless scenes. Moving objects cause artifacts such as blurring or azimuth displacement in the case of parallel or radial velocity components, respectively. With along-track interferometry (SAR-MTI), even very slow radial velocities can be measured from the phase differences. Unfortunately, the phase information is often severely disturbed when the signal-to-noise ratio is insufficient. In this paper we report on investigations to stabilize and improve the SAR-MTI velocity data. Reliability is enhanced by a combined exploitation of phase and intensity. After speckle filtering, a binary mask is generated from the intensity data to fade out regions with an insufficient signal-to-noise ratio, such as regions with a low backscattering coefficient. In the next step, for every point in the intensity image the radial velocity is calculated from the phase difference of the two channels. This image of velocities is masked with the binary mask derived from the intensity image. A region growing process is initiated in the velocity image to identify connected regions with similar velocity. This process yields first hints of moving objects. The approach was applied to images with slowly moving cargo ships inside and near locks. The cargo ships are segmented and described by a simple model. Only cargo ships with a minimum velocity that match the longitudinal and transversal extension features of the model are considered in further processing.
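A minimal sketch of the phase-to-velocity step and the SNR masking is given below. The phase-to-velocity factor assumes an effective time lag tau between the two along-track channels; the exact factor depends on the antenna configuration, so it should be read as an assumption rather than the authors' calibration.

```python
# Hypothetical sketch: radial-velocity map from along-track interferometric phase.
import numpy as np

def ati_velocity(ch1, ch2, wavelength, tau, snr_mask):
    """ch1, ch2: coregistered complex images of the two along-track channels.
    tau: effective time lag between the channels (s); snr_mask: True where data are reliable."""
    interferogram = ch1 * np.conj(ch2)
    dphi = np.angle(interferogram)                      # interferometric phase difference
    v_radial = wavelength * dphi / (4.0 * np.pi * tau)  # radial velocity estimate
    v_radial = v_radial.astype(float)
    v_radial[~snr_mask] = np.nan                        # fade out low-SNR regions
    return v_radial
```

Region growing on the masked velocity map then groups pixels of similar velocity into candidate moving objects, as described in the abstract.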
Inverse synthetic aperture radar (ISAR) imaging is usually applied to small aircraft at long range, where the coherent integration angle is small and the target's wavenumber-spectrum support region can be regarded as a rectangle. In that case the Range-Doppler (RD) or Range-Instantaneous-Doppler (RID) algorithm is employed for image reconstruction after translational motion compensation (TMC), which includes envelope alignment (such as envelope correlation or minimum entropy algorithms) and autofocus (such as single-PPP, multiple-PPP, PGA, or weighted least-squares algorithms). Migration through resolution cells (MTRC) is normally not considered after TMC; in fact, scatterers away from the target center usually undergo MTRC if the target is large. In this paper, we first align and focus the high-resolution radar target echoes with respect to the target center, and then apply a time-scale transform in the target's wavenumber domain, namely Soumekh's 'keystone' interpolation, to compensate MTRC (which can also be realized rapidly by DFT-IFFT or SFT-IFFT in the azimuth direction). After range compression (range IFFT), the image of a steadily flying target is obtained simply by azimuth compression (FFT in the azimuth direction); for a maneuvering target, time-frequency analysis must be applied to every range cell, and existing instantaneous imaging algorithms (such as the joint time-frequency distribution and Radon-Wigner algorithms) remain effective for obtaining RID images. The paper presents the ISAR imaging algorithm flow for obtaining images of large, steadily flying and maneuvering targets from raw data, and both simulated and real data show that the flow is effective.
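A rough sketch of the steadily-flying-target branch of this flow is shown below: keystone rescaling of slow time in the range-frequency domain, followed by range IFFT and azimuth FFT. The axis conventions, the use of linear interpolation, and the data layout are assumptions; the maneuvering-target branch (time-frequency analysis per range cell) is not shown.

```python
# Hypothetical sketch: keystone interpolation followed by Range-Doppler formation.
import numpy as np

def keystone_rd_image(phase_history, f_c, f_r):
    """phase_history: (n_freq, n_pulses) complex data in range frequency x slow time.
    f_c: carrier frequency (Hz); f_r: (n_freq,) range-frequency offsets (Hz)."""
    n_freq, n_pulses = phase_history.shape
    tm = np.arange(n_pulses, dtype=float)
    rescaled = np.empty_like(phase_history)
    for i in range(n_freq):
        # keystone: resample slow time by the factor f_c / (f_c + f_r[i]) at each range frequency
        tm_new = tm * f_c / (f_c + f_r[i])
        rescaled[i] = np.interp(tm_new, tm, phase_history[i].real) \
                    + 1j * np.interp(tm_new, tm, phase_history[i].imag)
    range_profiles = np.fft.ifft(rescaled, axis=0)                            # range compression
    image = np.fft.fftshift(np.fft.fft(range_profiles, axis=1), axes=1)       # azimuth compression
    return np.abs(image)
```

The rescaling removes the linear coupling between range frequency and slow time that causes MTRC, so that a simple azimuth FFT then focuses scatterers far from the target center.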
Synthetic Aperture Radar (SAR) images are very useful for target recognition as they can avoid some of the shortcomings of optical cameras and infrared imagers. However, due to noise and clutter in the environment, targets are hard to locate without preprocessing and target isolation algorithms. Here we propose an algorithm for enhancing target recognition. The algorithm consists of two steps. First, median filtering is performed to eliminate some speckle. Although the median filter is simple, its performance is comparable to a method in the literature. Second, a method known as Sliding Quadrant, developed by Intelligent Automation, Inc., is used to locate the potential targets in the SAR images. Our method achieves much better target isolation than a well-known method in the literature.
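The first step is plain median filtering, sketched below with scipy; the kernel size is an assumption, and the proprietary Sliding Quadrant step is not reproduced.

```python
# Hypothetical sketch: median-filter preprocessing of a magnitude SAR image.
import numpy as np
from scipy.ndimage import median_filter

def despeckle(sar_magnitude, size=3):
    """Apply a size x size median filter to suppress isolated speckle spikes."""
    return median_filter(sar_magnitude, size=size)

# Toy usage: a flat image with a single speckle-like outlier.
img = np.ones((64, 64))
img[10, 10] = 50.0
clean = despeckle(img, size=3)       # the outlier is replaced by its neighborhood median
```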
The Army Research Laboratory has investigated various phenomenology-based approaches for improving the detection of targets in wide-angle, ultra-wideband foliage penetration synthetic aperture radar (SAR) data. The approach presented here exploits the aspect-dependent reflectivity of vehicles, by filtering the SAR image data to obtain sub-aperture images from the original full-aperture radar image. These images represent the images of the target as seen by the sub-aperture SAR from two different locations (squint angles). We present a straightforward approach to extending an existing collection of features for a quadratic polynomial discriminator with features calculated from these two, lower-resolution sub-aperture SAR data images. We describe a method for generating the modified features and assess their potential contribution to improved probability of detection.
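The sub-aperture images described above can be formed by splitting the azimuth spectrum of the full-aperture image into two halves and inverting each separately. The sketch below assumes azimuth maps to the second FFT axis; both that convention and the even split are illustrative assumptions.

```python
# Hypothetical sketch: two sub-aperture (sub-squint) images from one full-aperture image.
import numpy as np

def subaperture_images(full_image):
    """full_image: complex SAR image (range x azimuth). Returns two lower-resolution sub-images."""
    spectrum = np.fft.fftshift(np.fft.fft(full_image, axis=1), axes=1)   # azimuth spectrum
    n_az = spectrum.shape[1]
    half = n_az // 2
    subs = []
    for sl in (slice(0, half), slice(half, n_az)):
        band = np.zeros_like(spectrum)
        band[:, sl] = spectrum[:, sl]                 # keep one half of the synthetic aperture
        subs.append(np.fft.ifft(np.fft.ifftshift(band, axes=1), axis=1))
    return subs                                       # two views at different effective squint angles
```

Features computed from the two sub-images can then be appended to the full-aperture feature vector used by the quadratic polynomial discriminator.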
Many common clustering algorithms, such as the fuzzy C-means and the classical k-means clustering algorithms, proceed without making any assumptions about the form of the detector that will use the parameters that they determine. We compare the performance of a radial basis function (RBF) network with parameters that are determined using a modified fuzzy clustering procedure to that of an RBF network with parameters that are determined using a least-mean-square- error (classical) clustering procedure. As part of the fuzzy clustering procedure, we assume a particular functional form for the fuzzy membership function. We train and test both of the networks on simulated data and present performance results in the form of receiver operating characteristic curves.
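For reference, the standard fuzzy C-means updates that such a procedure builds on are sketched below. The paper assumes a particular membership functional form that is not reproduced here; this is the textbook FCM with fuzzifier m, given as an assumed stand-in.

```python
# Hypothetical sketch: standard fuzzy C-means clustering to obtain RBF center candidates.
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, n_iter=100, seed=0):
    """X: (n_samples, n_features). Returns (centers, memberships)."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)                     # memberships sum to 1 per sample
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]    # weighted cluster centers
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # membership update: u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
        U = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1)), axis=2)
    return centers, U
```

The resulting centers (and, if desired, the membership spreads) can seed the RBF network that the paper compares against its least-mean-square-error counterpart.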
A number of spectral feature computations for purposes of discriminating military targets from clutter are currently under investigation within the Radar Branch at the Air Force Research Laboratory. Results from a comparative performance analysis of these features are reported. The development and analysis of spectral phase computations are of particular interest since, for some 'hard clutter' environments, the use of amplitude-based discriminants does not generate a sufficiently low false alarm rate. These phase computations are based on the analysis of the Fourier phase function, analysis of the phase spectral density, and analysis of the bispectrum. Additional spectral features, such as features based on angular diversity, are also included within the scope of this investigation. The data for this investigation is comprised of some SAR images and image chips that were collected and generated under the DARPA/Air Force Moving and Stationary Target Acquisition and Recognition (MSTAR) Program.
Automatic target recognition (ATR) and feature-aided tracking (FAT) algorithms that use one-dimensional (1-D) high range resolution (HRR) profiles require unique or distinguishable target features. This paper explores the use of Xpatch-extracted scattering centers to generate synthetic moving ground target signatures. The goal is to develop a real-time prediction capability for generating moving ground target signatures to facilitate the determination of unique and distinguishable target features. The repository of moving ground target signatures is extremely limited in target variation, target articulation, and aspect and illumination angle coverage. The development of a real-time moving target signature capability that provides first-order moving target signatures will facilitate the development of features and their analysis. The proposed moving target signature simulation is described in detail, including both the strengths and weaknesses of using a scattering center approach for the generation of moving target signatures.
With the continuing interest in ATR, there is a need for high-resolution fully polarimetric data on tactical targets at all radar bands. Here we describe a newly developed system for acquiring W-band data with 1/16 scale models. The NGIC sponsored ERADS project capability for obtaining fully polarimetric ISAR imagery now extends from X to W band.
In this study the polarization scattering matrices (PSM) of individual scatterers from a complex tactical ground target were measured as a function of look angle. Due to the potential value of PSMs in studies of automatic target recognition, a fully polarimetric, 3D spot-scanning radar modeling system was developed at 1.56 THz to study the W-band scattering feature behavior from 1/16th scale models of targets. Scattering centers are isolated and coherently measured to determine the PSMs. Scatterers of varying complexity from a tactical target were measured and analyzed, including well-defined fundamental odd and even bounce scatterers that maintain the exact normalized PSM with varied look angle, scatterers with varying cross- and co-pol terms, and combination scatterers. Maps defining the behavior of the position and PSM activity over varying look angle are likely to be unique to each target and could possibly represent exploitable features for ATR.
Synthetic aperture radar (SAR) is an essential sensor for military surface surveillance because of its unique ability to operate day or night through weather, smoke, and dust. Of particular importance is the problem of automatic target recognition (ATR), which aims to identify targets of military significance within radar images. The Defence Evaluation and Research Agency of the UK has a substantial program of research into ATR algorithms for SAR battlefield surveillance. This covers both feature-based techniques, which are the subject of this paper, and model-based techniques. Feature-based ATR discriminates between target classes on the basis of the values taken by certain target features. The conventional approach is to select the best features for a particular task from a large set of features which have been pre-defined on the basis of physical intuition. A simple feature might be target area, whilst a more sophisticated feature might be some measure of fractal dimension. ATR performance will be influenced by the type of features used and by the accuracy with which the statistical behavior of these features can be estimated despite limited examples of target realizations. This paper also addresses the problem of feature choice by introducing a method for adaptive feature design which automatically recognizes the information not already contained in the existing feature set and develops a feature to represent this missing information. These ideas are illustrated by application to synthetic aperture radar images of vehicles.
Polarimetric synthetic aperture radar (POLSAR) provides additional information about the scatterers and clutter in a scene over that of single-band SAR. A fully polarimetric sensor contains four imaging channels that, when properly calibrated, can indicate the type of scatterers present. For example, it is possible to discriminate between trihedral-, dihedral-, and dipole-like scatterers. The orientation of the scatterers can also be extracted. Based upon this additional information, hypotheses can be generated about the objects in the scene that are richer than those generated from single-band data. Combinations of transmission and reception with antennas that ideally represent orthogonal, balanced polarimetric states generate the four channels of the POLSAR system. In practice, the antenna elements are not perfect; crosstalk and imbalances exist between them, so that calibration is necessary. This paper addresses the calibration of POLSAR data and introduces some new approaches to this problem. These include a novel gradient descent algorithm for crosstalk removal and the application of a rotating dihedral to the calibration of a sensor with receiver characteristics that are transmit-state dependent. The sensitivity of the Cloude polarimetric decomposition to varying amounts of crosstalk and imbalance in an imperfectly calibrated data set is also discussed.
The problem of finding the stored template that is closest to a given input pattern is a typical problem in vector quantization (VQ) encoding and nearest neighbor (NN) pattern classification. This paper presents a new Triangle Inequality Nearest Neighbor Search (TINNS) algorithm that significantly reduces the number of distance calculations. This algorithm is appropriate in applications for which the computational cost of a distance calculation is relatively high. Automatic Target Recognition (ATR) is one such application. The new algorithm achieves improved performance by guiding the order in which templates are tested, and by using inequality constraints to prune the search space. We compare TINNS with another competing approach, as well as exhaustive search, and show that there is an appropriate application domain for each algorithm. Results are given for three applications: VQ for image compression, NN search over random templates, and target recognition in synthetic aperture radar image data.
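The pruning idea can be sketched as follows: distances from every template to a fixed reference point are precomputed, and a template is skipped whenever the triangle-inequality lower bound already exceeds the best distance found so far. The choice of reference point and the simple ordering heuristic below are assumptions; the published TINNS ordering strategy is not reproduced.

```python
# Hypothetical sketch: triangle-inequality pruning of a nearest-neighbor template search.
import numpy as np

def build_index(templates):
    """templates: (n_templates, dim) array. Precompute distances to a reference template."""
    ref = templates[0]                                   # reference point (illustrative choice)
    d_ref = np.linalg.norm(templates - ref, axis=1)
    order = np.argsort(d_ref)                            # test templates in order of d_ref
    return ref, d_ref, order

def nn_search(query, templates, index):
    ref, d_ref, order = index
    dq_ref = np.linalg.norm(query - ref)
    best_idx, best_dist = -1, np.inf
    for i in order:
        # triangle inequality: d(query, t_i) >= |d(query, ref) - d(t_i, ref)|
        if abs(dq_ref - d_ref[i]) >= best_dist:
            continue                                     # pruned without a full distance computation
        d = np.linalg.norm(query - templates[i])
        if d < best_dist:
            best_idx, best_dist = i, d
    return best_idx, best_dist
```

The savings grow with the cost of a single distance evaluation, which is why the approach pays off for large ATR templates.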
We investigate the complexity of template-based ATR algorithms using SAR imagery as an example. Performance measures (such as Pid) of such algorithms typically improve with an increasing number of stored reference templates. This presumes, of course, that the training templates contain an adequate statistical sampling of the range of observed or test templates. The tradeoff of improved performance is that computational complexity and the expense of algorithm development and training template generation (synthetic and/or experimental) increase as well. Therefore, for practical implementations it is useful to characterize ATR problem complexity and to identify strategies to mitigate it. We adopt for this problem a complexity metric defined simply as the size of the minimal subset of stored templates, drawn from an available training population, that yields a specified Pid. Straightforward enumeration and testing of all possible template sets leads to a combinatorial explosion. Here we consider template selection strategies that are far more practical and apply these to a SAR- and template-based target identification problem. Our database of training templates consists of targets viewed at 3-degree increments in pose (azimuth). The template selection methods we investigate include uniform sampling, sequential forward search (also known as greedy selection), and adaptive floating search. The numerical results demonstrate that the complexity metric increases with intrinsic problem difficulty, and that template sets selected using the greedy method significantly outperform uniformly sampled template sets of the same size. The adaptive method, which is far more computationally expensive, selects template sets that outperform those selected by the greedy technique, but the small reduction in template set size was not significant for the specific examples considered here.
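The sequential forward (greedy) strategy named above is sketched below: at each step the candidate template whose addition most improves identification performance on held-out data is kept, until a target Pid is reached. The evaluate() callback standing in for the Pid measurement is an assumption about the experimental setup.

```python
# Hypothetical sketch: greedy (sequential forward) template subset selection.
def greedy_select(candidate_ids, evaluate, target_pid):
    """candidate_ids: iterable of template indices.
    evaluate(subset) -> Pid achieved on a validation set with that subset of templates."""
    selected = []
    remaining = list(candidate_ids)
    current_pid = 0.0
    while remaining and current_pid < target_pid:
        best_id, best_pid = None, current_pid
        for cid in remaining:
            pid = evaluate(selected + [cid])
            if pid > best_pid:
                best_id, best_pid = cid, pid
        if best_id is None:            # no remaining candidate improves performance
            break
        selected.append(best_id)
        remaining.remove(best_id)
        current_pid = best_pid
    return selected, current_pid
```

The adaptive floating search adds backtracking steps (removing previously selected templates when that helps), which is what makes it more expensive than this greedy pass.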
Pattern recognition of ISAR images of small ships is considered. These images represent a formidable new distortion-invariant pattern recognition problem. Variations to the standard image formation steps were used, and the preprocessing employed is noted. It includes new techniques to produce data with a horizontal deckline and properly upward superstructure. New algorithms to determine whether an input image is useful are developed; these were necessary for such data. Initial recognition results using new distortion-invariant filters are presented.
The focus of this paper is optimizing recognition models for Synthetic Aperture Radar signatures of vehicles to improve the performance of a recognition algorithm under the extended operating conditions of target articulation, occlusion and configuration variants. The recognition models are based on quasi-invariant local features, scattering center locations and magnitudes. The approach determines the similarities and differences among the various vehicle models. Methods to penalize similar features or reward dissimilar features are used to increase the distinguishability of the recognition model instances. Extensive experimental recognition results are presented in terms of confusion matrices and receiver operating characteristic curves to show the improvements in recognition performance for MSTAR vehicle targets with articulation, configuration variants and occlusion.
This paper describes confidence interval (CI) estimators (CIEs) for the metrics used to assess sensor exploitation algorithm (or ATR) performance. For the discrete distributions, small sample sizes, and extreme outcomes encountered within ATR testing, the commonly used CIEs have limited accuracy. This paper makes available CIEs that are accurate over all conditions of interest to the ATR community. The approach is to search for CIs using an integration of the Bayesian posterior (IBP) to measure alpha (the chance of the CI not containing the true value). The CIEs provided include proportion estimates based on Binomial distributions and rate estimates based on Poisson distributions. One- or two-sided CIs may be selected. For two-sided CIEs, either minimal length, balanced tail probabilities, or balanced width may be selected. The CIEs' accuracies are reported based on a Monte Carlo validated integration of the posterior probability distribution and compared to the Normal approximation and 'exact' (Clopper-Pearson) methods. While the IBP methods are accurate throughout, the conventional methods may realize alphas with substantial error (up to 50%). This translates to 10 to 15% error in the CI widths or to requiring 10 to 15% more samples for a given confidence level.
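To make the comparison concrete, the sketch below computes a Clopper-Pearson ('exact') proportion interval alongside an interval derived from a Beta posterior, here taken as equal-tailed with a uniform prior. The uniform prior and the equal-tailed choice are assumptions; the paper also considers minimal-length and balanced-width variants and a different alpha measurement.

```python
# Hypothetical sketch: Clopper-Pearson vs. Bayesian-posterior proportion intervals.
from scipy.stats import beta

def clopper_pearson(k, n, alpha=0.05):
    """'Exact' two-sided CI for a proportion from k successes in n trials."""
    lo = 0.0 if k == 0 else beta.ppf(alpha / 2, k, n - k + 1)
    hi = 1.0 if k == n else beta.ppf(1 - alpha / 2, k + 1, n - k)
    return lo, hi

def bayes_equal_tail(k, n, alpha=0.05):
    """Equal-tailed credible interval from the Beta(k+1, n-k+1) posterior (uniform prior)."""
    post = beta(k + 1, n - k + 1)
    return post.ppf(alpha / 2), post.ppf(1 - alpha / 2)

# Example of a small-sample, extreme outcome: 18 correct IDs out of 20 trials.
print(clopper_pearson(18, 20), bayes_equal_tail(18, 20))
```

With small n and outcomes near 0 or 1, these intervals (and the Normal approximation) diverge noticeably, which is the regime the paper targets.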
To ensure that the best possible image segmentation algorithm transitions into a real world recognition system, the segmentation algorithm must be properly evaluated. A novel approach is introduced for evaluating image segmentation algorithms. Part of the approach is to use a system of multiple measures. Using intra-algorithmic comparisons, three measures are tested on a small suite of segmented image test cases. The results from using three measures on the test suite demonstrate significant differences in the ability of the measures to distinguish segmentation quality. Another part of the novel approach is the use of an inter-algorithmic comparison which is shown to assist the evaluation of segmentation algorithms where exact edge truth is unknown such as is the case with Synthetic Aperture Radar (SAR) imagery. Results are demonstrated by using segmentation on a SAR target chip provided by the Moving and Stationary Target Acquisition and Recognition (MSTAR) program.
Synthetic Aperture Radar (SAR) sensors are being developed with better resolution to improve target identification, but this improvement has a significant cost. Furthermore, higher resolution corresponds to more pixels per image and, consequently, more data to process. Here, the effect of resolution on a many-class target identification problem is determined using high-resolution SAR data with artificially reduced resolution, a Mean-Squared Error (MSE) criterion, and template matching. It is found that each increase in resolution by a factor of two increases the average MSE between a target and possible confusers by five to ten percent. Interpolating SAR images in the spatial domain to obtain artificially higher-resolution images results in an average MSE that is actually much worse than that of the original SAR images. Increasing resolution significantly improves target identification performance, while interpolating low-resolution images degrades target identification performance.
Because of the large number of SAR images the Air Force generates and the dwindling number of available human analysts, automated methods must be developed. A key step towards automated SAR image analysis is image segmentation. There are many segmentation algorithms, but they have not been tested on a common set of images, and there are no standard test methods. This paper evaluates four SAR image segmentation algorithms by running them on a common set of data and objectively comparing them to each other and to human segmentations. This objective comparison uses a multi-measure approach with a set of master segmentations as ground truth. The measure results are compared to a Human Threshold, which defines the performance of human segmentors compared to the master segmentations. Also, methods that use the multi-measures to determine the best algorithm are developed. These methods show that of the four algorithms, Statistical Curve Evolution produces the best segmentations; however, none of the algorithms are superior to human segmentations. Thus, with the Human Threshold and Statistical Curve Evolution as benchmarks, this paper establishes a new and practical framework for testing SAR image segmentation algorithms.
Many of the approaches to automatic target recognition (ATR) for synthetic aperture radar (SAR) images that have been proposed in the literature fall into one of two broad classes, those based on prediction of images from models (CAD or otherwise) of the targets and those based on templates describing typical received images which are often estimated from sample data. Systems utilizing model-based prediction typically synthesize an expected SAR image given some target class and pose and then search for the combination of class and pose which maximizes some match metric between the synthesized and observed images. This approach has the advantage of being robust with respect to target pose and articulation not previously encountered but does require detailed models of the targets of interest. On the other hand, template-based systems typically do not require detailed target models but instead store expected images for a range of targets and poses based on previous observations (training data) and then search for the template which most closely represents the observed image. We consider the design and use of probabilistic models for targets developed from training data which do not require CAD models of the targets but which can be used in a hypothesize-and-predict manner similar to other model-based approaches. The construction of such models requires the extraction from training data of functions which characterize the target radar cross section in terms of target class, pose, articulation, and other sources of variability. We demonstrate this approach using a conditionally Gaussian model for SAR image data and under that model develop the tools required to determine target models and to use those models to solve inference problems from an image of an unknown target. The conditionally Gaussian model is applied in a target-centered reference frame resulting in a probabilistic model on the surface of the target. The model is segmented based on the information content in regions of the target space. Modeling radar power variability and target positional uncertainty results in improved accuracy. Performance results are presented for both target classification and orientation estimation using the publicly available MSTAR dataset.
Synthetic Aperture Radar (SAR) sensors have many advantages over electro-optical (EO) sensors for target recognition applications, such as range-independent resolution and superior poor-weather performance. However, the relative unavailability of SAR data to the basic research community has retarded analysis of the fundamental invariant properties of SAR sensors relative to the extensive invariant literature for EO, and in particular photographic, sensors. Prior work reported at this conference has developed the theory of SAR invariants based on the radar scattering center concept and provided several examples of invariant configurations of SAR scatterers from measured and synthetic SAR image data. This paper shows that invariant scattering configurations can be extracted from predicted 3D scatterer data and used to predict invariant features in measured SAR image data.
Parametric approaches to problems of inference from observed data often rely on assumed probabilistic models for the data which may be based on knowledge of the physics of the data acquisition. Given a rich enough collection of sample data, the validity of those assumed models can be assessed in a statistical hypothesis testing framework using any of a number of goodness-of-fit tests developed over the last hundred years for this purpose. Such assessments can be used both to compare alternate models for observed data and to help determine the conditions under which a given model breaks down. We apply three such methods, the χ² test of Karl Pearson, Kolmogorov's goodness-of-fit test, and the D'Agostino-Pearson test for normality, to quantify how well the data fit various models for synthetic aperture radar (SAR) images. The results of these tests are used to compare a conditionally Gaussian model for complex-valued SAR pixel values, a conditionally log-normal model for SAR pixel magnitudes, and a conditionally normal model for SAR pixel quarter-power values. Sample data for these tests are drawn from the publicly released MSTAR dataset.
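The three tests named above are all available in scipy, as sketched below for a normality hypothesis on quarter-power pixel values. The synthetic sample standing in for MSTAR clutter pixels, the bin count, and the use of fitted moments are assumptions for illustration.

```python
# Hypothetical sketch: chi-square, Kolmogorov, and D'Agostino-Pearson goodness-of-fit tests.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
quarter_power = rng.normal(loc=1.0, scale=0.2, size=500)   # stand-in for SAR quarter-power pixels

# Kolmogorov goodness-of-fit test against a normal distribution with fitted moments.
ks_stat, ks_p = stats.kstest(quarter_power, 'norm',
                             args=(quarter_power.mean(), quarter_power.std(ddof=1)))

# D'Agostino-Pearson omnibus normality test (combines skewness and kurtosis).
dp_stat, dp_p = stats.normaltest(quarter_power)

# Pearson chi-square test on binned counts against expected normal bin probabilities.
edges = np.quantile(quarter_power, np.linspace(0, 1, 11))   # 10 roughly equal-count bins
observed, _ = np.histogram(quarter_power, bins=edges)
cdf = stats.norm(quarter_power.mean(), quarter_power.std(ddof=1)).cdf(edges)
expected = len(quarter_power) * np.diff(cdf)
expected *= observed.sum() / expected.sum()                  # match totals before testing
chi2_stat, chi2_p = stats.chisquare(observed, expected, ddof=2)  # ddof=2 for the two fitted params

print(ks_p, dp_p, chi2_p)
```

Running the same battery on magnitude and complex-valued pixel models (after the appropriate transformations) is what allows the paper to rank the competing SAR pixel models.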
The primary contribution of this paper is to demonstrate the application of signature manifold methods on the MSTAR data. Three manifold estimation methods (FIR, FFT, and Kalman smoothing) are compared to a baseline algorithm, MSE. The preliminary results show the manifold methods perform just as well as the baseline algorithm and have the potential for increased performance. In addition, both GLRT and Bayes hypothesis test algorithms are demonstrated for all of the manifold estimation algorithms.
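The GLRT and Bayes tests mentioned above differ in how they handle the unknown parameters of each hypothesis (here, the position along a signature manifold): the GLRT maximizes the likelihood over them, while the Bayes test averages over them under a prior. The sketch below uses generic notation of my own, not the paper's, to show the distinction for a single hypothesis pair.

```python
# Minimal contrast between a GLRT and a Bayes hypothesis test for a feature
# vector x, with unknown parameters searched over coarse grids (assumption).
import numpy as np

def glrt(x, loglik_h1, loglik_h0, theta_grid1, theta_grid0, threshold):
    """Generalized likelihood ratio test: maximize each likelihood over its
    unknown parameters, then compare the log-ratio with a threshold."""
    l1 = max(loglik_h1(x, th) for th in theta_grid1)
    l0 = max(loglik_h0(x, th) for th in theta_grid0)
    return (l1 - l0) > threshold        # True -> decide H1

def bayes(x, loglik_h1, loglik_h0, theta_grid1, theta_grid0,
          prior1=0.5, prior0=0.5):
    """Bayes test: average (rather than maximize) the likelihood over the
    unknown parameters, assuming a uniform prior over each grid."""
    m1 = np.mean([np.exp(loglik_h1(x, th)) for th in theta_grid1])
    m0 = np.mean([np.exp(loglik_h0(x, th)) for th in theta_grid0])
    return prior1 * m1 > prior0 * m0    # True -> decide H1
```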
Automatic target recognition (ATR) is difficult in general, but especially with radar. However, the problem can be greatly simplified by using the 3-D reconstruction techniques presented at SPIE [Stuff] over the previous two years. Instead of matching seemingly random signals in 1-D or 2-D, one matches scattering centers in 3-D. The method tracks scattering centers through an image collection sequence of the kind that would typically be used for SAR image formation. A major difference is that this approach naturally accommodates object motion (in fact, the more the object moves, the better), and the resulting 'image' is a 3-D set of scattering centers. We reconstruct scattering centers directly from synthetic data to build a database, in anticipation of comparing the relative separability of these reconstructed scattering centers against more traditional approaches to ATR.
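Tracking scattering centers through a collection sequence requires, at minimum, associating the centers extracted from one look with those from the next. The sketch below shows one plausible piece of that bookkeeping, nearest-neighbor association with a distance gate; the data layout and the gate value are assumptions, not the paper's method.

```python
# Illustrative association of 3-D scattering centers between consecutive
# looks by gated nearest-neighbor matching.
import numpy as np
from scipy.spatial import cKDTree

def associate(prev_pts, new_pts, max_dist=0.5):
    """prev_pts: (N, 3), new_pts: (M, 3) arrays of scattering-center
    positions (meters, assumed).  Returns (prev_index, new_index) matches."""
    tree = cKDTree(new_pts)
    dists, idx = tree.query(prev_pts, k=1)
    return [(i, j) for i, (d, j) in enumerate(zip(dists, idx)) if d <= max_dist]
```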
This paper develops a high-level theoretical framework that quantitatively describes the potential ability of a synthetic aperture or similar imaging radar to classify discrete military targets. Communications information theory is used to calculate the information conveyed by the image of a target from the values of image pixels relative to the non-deterministic fluctuations of those values. The classification problem being addressed is scoped by defining a set of target classes and calculating the degree of deterministic variability present within each class. The probability of correct classification is determined by setting the information conveyed by the image against the scope of the classification problem to be solved. The theory is validated against simulated target classification experiments. It is then shown how the theory may be applied at a detailed level to a specific target classification algorithm, and at a high level for algorithm-independent performance prediction.
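As a back-of-envelope illustration of the kind of information budget this framework implies (not the paper's exact formulation), one can credit each pixel with roughly 0.5·log2(1 + SNR) bits when its deterministic value stands against Gaussian fluctuations, and ask whether the total exceeds the log2 of the number of hypotheses the classifier must distinguish. All numbers below are assumed.

```python
# Toy information budget: bits conveyed by a target chip versus bits needed
# to resolve the classification problem.  Every value here is an assumption.
import numpy as np

n_pixels  = 40 * 40        # target chip size (assumed)
snr       = 2.0            # per-pixel signal-to-fluctuation power ratio (assumed)
n_classes = 10
n_poses   = 72             # e.g., 5-degree pose bins (assumed)

bits_conveyed = n_pixels * 0.5 * np.log2(1.0 + snr)
bits_required = np.log2(n_classes * n_poses)
print(f"conveyed ~{bits_conveyed:.0f} bits, task needs {bits_required:.1f} bits")
```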
In recent years, synthetic aperture radars (SARs) have been used to detect man-made targets and to distinguish them from naturally occurring background. This paper continues the development of a fundamental, physics-based approach to assessing the performance of SAR-based automatic target recognition (ATR) systems. A major thrust of this effort is to quantify the performance advantages that accrue when the recognition processor exploits the detailed signatures of the target's component reflectors, e.g., their specularity and polarization properties. Its purpose is to assess models developed from electromagnetic scattering theory. New lower and upper bounds on the probability of correct classification (PCC) are developed for targets composed of a constellation of geometrically simple reflectors. The performance gap between a conventional full-resolution processor and an optimal whitening-filter processor is discussed.
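The paper's bounds are physics-based and specific to reflector constellations, but the generic flavor of such calculations can be shown with the standard union bound for M equally likely hypotheses whose mean signatures sit in white Gaussian fluctuation; this yields a lower bound on PCC. The sketch below is an illustration of that generic technique, not the bounds derived in the paper.

```python
# Union-bound lower bound on PCC for M equally likely Gaussian hypotheses
# with mean signatures in the rows of S and white fluctuation variance sigma2.
import numpy as np
from scipy.stats import norm

def pcc_union_lower_bound(S, sigma2):
    """Pe_i <= sum_{j != i} Q(d_ij / 2), with d_ij = ||s_i - s_j|| / sigma,
    so PCC >= mean_i max(0, 1 - sum_j Q(d_ij / 2))."""
    M = S.shape[0]
    pcc = []
    for i in range(M):
        pe = 0.0
        for j in range(M):
            if j == i:
                continue
            d = np.linalg.norm(S[i] - S[j]) / np.sqrt(sigma2)
            pe += norm.sf(d / 2.0)              # Q-function
        pcc.append(max(0.0, 1.0 - pe))
    return float(np.mean(pcc))
```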
Higher-level decisions for AiTR (aided target recognition) networks have so far been made in our community in an ad hoc fashion. Higher-level decisions in this context do not involve target recognition performance per se, but other inherent output measures of performance, e.g., expected response time or the long-term electronic memory required to achieve a tolerable level of image loss. Those measures usually require knowledge of the steady-state stochastic behavior of the entire network, which in practice is mathematically intractable. Decisions requiring those and similar output measures will become very important as AiTR networks are permanently deployed to the field. To address this concern, I propose to model AiTR systems as open stochastic-process networks and to conduct Monte Carlo simulations based on this model to estimate steady-state performance. To illustrate the method, I modeled a familiar operational scenario and an existing baseline AiTR system as proposed. Details of the stochastic model and the corresponding Monte Carlo simulation results are discussed in the paper.
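A toy version of the Monte Carlo approach described here is a single-node open network: image chips arrive as a Poisson stream, are processed with exponential service times, and the steady-state response time is estimated from a long run. The rates and scenario below are assumptions for illustration, not the baseline AiTR system in the paper.

```python
# Toy Monte Carlo estimate of steady-state response time for one AiTR
# processing node fed by a Poisson arrival stream (all rates assumed).
import random

def simulate(arrival_rate=0.8, service_rate=1.0, n_jobs=200_000, seed=0):
    rng = random.Random(seed)
    t_arrival, server_free, total_resp = 0.0, 0.0, 0.0
    for _ in range(n_jobs):
        t_arrival += rng.expovariate(arrival_rate)   # next image chip arrives
        start = max(t_arrival, server_free)          # waits if the node is busy
        server_free = start + rng.expovariate(service_rate)
        total_resp += server_free - t_arrival        # queueing + processing time
    return total_resp / n_jobs                       # mean response time

print(simulate())   # should approach the M/M/1 value 1/(mu - lambda) = 5.0
```

A full network model chains several such nodes (detection, recognition, storage) and tracks memory backlog as well as response time; the simulation structure stays the same.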
We explore a statistical view of radar imaging in which target reflectances are realizations of an underlying random process. For diffuse targets, this process is zero-mean complex Gaussian. The data consists of a realization of this process, observed through a linear transformation, and corrupted by additive noise. Image formation corresponds to estimating the elements of a diagonal covariance matrix. In general, maximum-likelihood estimates of these parameters cannot be computed in closed form. Snyder, O'Sullivan, and Miller proposed an expectation-maximization algorithm for computing these estimates iteratively. Straightforward implementations of the algorithm involve multiplication and inversion operations on extremely large matrices, which makes them computationally prohibitive. We present an implementation which exploits Strassen's recursive strategy for matrix multiplication and inversion, which may make the algorithm feasible for image sizes of interest in high-resolution radar applications.
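Strassen's strategy replaces one n x n matrix product with seven products of half-size blocks, recursively, which is what makes the large matrix operations in the iteration tractable. The sketch below shows the multiplication half of that strategy (the inversion recursion is analogous but omitted); it assumes square matrices whose size is a power of two and is not the authors' implementation.

```python
# Minimal Strassen recursive matrix multiplication (n a power of two assumed).
import numpy as np

def strassen(A, B, cutoff=64):
    n = A.shape[0]
    if n <= cutoff:                     # fall back to ordinary multiplication
        return A @ B
    k = n // 2
    A11, A12, A21, A22 = A[:k, :k], A[:k, k:], A[k:, :k], A[k:, k:]
    B11, B12, B21, B22 = B[:k, :k], B[:k, k:], B[k:, :k], B[k:, k:]
    M1 = strassen(A11 + A22, B11 + B22, cutoff)   # 7 half-size products
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    C = np.empty_like(A)
    C[:k, :k] = M1 + M4 - M5 + M7
    C[:k, k:] = M3 + M5
    C[k:, :k] = M2 + M4
    C[k:, k:] = M1 - M2 + M3 + M6
    return C
```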