We define local space and wave number quantities and apply them to wave propagation in dispersive media. An exact expression is obtained for the spreading of a propagating pulse. We show that while the pulse must eventually spread to infinity, it can contract for certain times. An exact criterion is given in terms of the covariance of the pulse. An exactly calculable example is presented and used to verify and illustrate the results. We also discuss the behavior of a transient at a given position. We propose that the local quantities instantaneous frequency and group delay, together with their respective local standard deviations, are good characterizers and classifiers. We present recent preliminary experimental results of Loughlin, Groutage, and Rohrbaugh showing the importance of instantaneous frequency and bandwidth for the characterization of transients.
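As a companion to the abstract above, the following is a minimal sketch (not the authors' derivation) of how instantaneous frequency and a weighted spread about it can be estimated from the analytic signal of a transient; the test chirp, sampling rate, and weighting are illustrative assumptions.

```python
# Minimal sketch: instantaneous frequency and its weighted spread from the
# analytic signal of a transient (illustrative parameters, not the paper's).
import numpy as np
from scipy.signal import hilbert

fs = 1000.0                                     # assumed sampling rate (Hz)
t = np.arange(0, 1.0, 1.0 / fs)
x = np.cos(2 * np.pi * (50 * t + 40 * t**2))    # linear-FM test transient

z = hilbert(x)                                  # analytic signal
amp = np.abs(z)
phase = np.unwrap(np.angle(z))
inst_freq = np.diff(phase) / (2 * np.pi) * fs   # instantaneous frequency (Hz)

# Energy-weighted mean of the instantaneous frequency and its spread
w = amp[:-1] ** 2
mean_f = np.sum(w * inst_freq) / np.sum(w)
spread = np.sqrt(np.sum(w * (inst_freq - mean_f) ** 2) / np.sum(w))
print(f"mean frequency ~ {mean_f:.1f} Hz, spread ~ {spread:.1f} Hz")
```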
We study the scattering interaction of dolphin-emitted acoustic pulses ('clicks') with various elastic shells located underwater, in front of the animal, at a large test site in Kaneohe Bay, Hawaii. A carefully instrumented analog-to-digital system continuously captured the emitted clicks and also the returned, backscattered echoes. Using standard conditioning techniques and food reinforcers, the dolphin is taught to push an underwater paddle when the 'correct' target -- the one he has been trained to identify -- is presented to him. He communicates his consistently correct identifying choices to us in this manner. By means of several time-frequency distributions (TFD) of the Wigner type, or Cohen class, we examine echoes returned by three types of cylindrical shells. The time-frequency distributions we compare in this survey are the pseudo-Wigner distribution (PWD), the Choi-Williams distribution (CWD), the adaptive spectrogram (AS), the cone-shaped distribution (CSD), the Gabor spectrogram (GS), and the spectrogram (SPEC). To be satisfactory for target identification purposes, a time-frequency representation of the echoes should display a sufficient amount of distinguishing features, and still be robust enough to suppress the interference of noise contained in the received signals. Both these properties of a time-frequency distribution depend on the distribution's capability of concentrating the features in time and frequency and of handling cross-term interference. With some time-frequency distributions there is a trade-off between the concentration of features and the suppression of cross-term interference. The results of our investigation serve the twofold purpose of (1) advancing the understanding of the remarkable target identification capability of dolphins, and (2) assisting in assessing the possibility of identifying submerged targets using active sonar and a classifier based on target signatures in the combined time-frequency domain.
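For readers unfamiliar with the PWD named above, the following is a minimal sketch of a windowed (pseudo-) Wigner distribution applied to a chirp-like click; the window, signal, and sampling rate are assumptions, not the study's processing chain.

```python
# Minimal sketch of a pseudo-Wigner distribution (assumed parameters).
import numpy as np

def pseudo_wigner(x, win_len=65):
    """Pseudo-Wigner distribution of a 1-D (preferably analytic) signal."""
    x = np.asarray(x, dtype=complex)
    n_samp = len(x)
    half = win_len // 2
    window = np.hanning(win_len)
    xpad = np.pad(x, half)                         # zero-pad the edges
    pwd = np.empty((n_samp, win_len))
    for n in range(n_samp):
        c = n + half                               # index into padded signal
        # instantaneous autocorrelation x(n+m) * conj(x(n-m)), m = -half..half
        seg = xpad[c - half:c + half + 1]
        kernel = window * seg * np.conj(seg[::-1])
        pwd[n] = np.real(np.fft.fftshift(np.fft.fft(kernel)))
    return pwd                                     # rows: time, cols: frequency

# Example: a chirp-like 'click' analysed with the PWD
fs = 100e3
t = np.arange(0, 2e-3, 1 / fs)
click = np.exp(1j * 2 * np.pi * (20e3 * t + 5e6 * t**2)) * np.hanning(len(t))
tfd = pseudo_wigner(click)
```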
We study backscattered echoes from selected targets collected with an impulse radar system playing the role of a ground penetrating radar (GPR). The targets are metal and non-metal objects buried in dry sand at a selected depth. These echoes are studied in the joint time-frequency domain using a pseudo-Wigner distribution (PWD), which, in particular, makes it possible to analyze how each of a target's signature features evolves in time. These distributions are viewed as the target signatures, and they are then used as templates for target classification. To be useful for target identification purposes, a signature representation should display a 'sufficient' amount of distinguishing features, yet be robust enough to suppress the interference of noise contained in the received signals. Multiple scattering between a target and the ground surface is another obstacle to successful target recognition that time-frequency distributions could counteract by unveiling the time progression of the returned target information. A classification method based on a fuzzy cluster estimation technique (the fuzzy C-means algorithm) is then used to reduce the number and kind of features in the templates. We put the classification algorithm to the test against validation data taken from an additional set of returned echoes; the same targets are used, but they are illuminated with the GPR antennas at different positions. Class membership of a target is then decided using a simple metric. The results of our investigation serve to assess the possibility of identifying subsurface targets using a GPR.
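The fuzzy C-means algorithm named above is standard; the following is a minimal sketch of its iteration applied to generic feature vectors (the feature construction and metric in the paper are not reproduced).

```python
# Minimal sketch of fuzzy C-means clustering (assumed feature vectors).
import numpy as np

def fuzzy_c_means(X, n_clusters, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """X: (n_samples, n_features). Returns (centers, membership matrix U)."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], n_clusters))
    U /= U.sum(axis=1, keepdims=True)               # memberships sum to 1
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        dist = np.fmax(dist, 1e-12)                  # avoid division by zero
        inv = dist ** (-2.0 / (m - 1.0))
        U_new = inv / inv.sum(axis=1, keepdims=True)
        if np.max(np.abs(U_new - U)) < tol:
            return centers, U_new
        U = U_new
    return centers, U

# Class membership of a validation echo: nearest cluster centre (Euclidean).
def classify(feature_vec, centers):
    return int(np.argmin(np.linalg.norm(centers - feature_vec, axis=1)))
```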
This paper revisits the application of CFAR and morphological techniques to the problem of FLIR ATR. For many years, both morphology and CFAR approaches have been researched and tested in various automatic target recognition applications. However, detecting targets accurately and efficiently with minimal false alarms continues to be a problem. The morphology-based algorithm introduced in this paper employs closing and opening operations in parallel and subtracts the output from the original image to remove clutter that is larger than the target. The CFAR detector algorithm extracts targets by adaptively thresholding the input image at levels proportional to the local background statistics. The advantages and drawbacks of each technique are discussed, as well as the performance results on multiple databases. Experimental evaluations indicate that both algorithms perform well, even for low-contrast targets and highly cluttered environments. Both algorithms demonstrate improvement over a previously reported morphological multistage technique.
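A minimal sketch of the two front ends described above follows; the structuring-element size, guard/training dimensions, and threshold multiplier are assumptions, not the paper's settings.

```python
# Minimal sketch: morphological clutter prefilter and cell-averaging CFAR.
import numpy as np
from scipy.ndimage import grey_opening, grey_closing, uniform_filter

def morphological_prefilter(img, struct_size=15):
    """Suppress clutter larger than the target by subtracting an
    opened/closed background estimate from the original image."""
    background = 0.5 * (grey_opening(img, size=struct_size) +
                        grey_closing(img, size=struct_size))
    return img - background

def cfar_detect(img, guard=4, train=12, k=3.0):
    """Flag pixels exceeding local mean + k * local std, where the statistics
    come from a training ring that excludes a guard area around each pixel."""
    big = 2 * (guard + train) + 1
    small = 2 * guard + 1
    sum_big = uniform_filter(img, big) * big**2
    sum_small = uniform_filter(img, small) * small**2
    sumsq_big = uniform_filter(img**2, big) * big**2
    sumsq_small = uniform_filter(img**2, small) * small**2
    n = big**2 - small**2
    mean = (sum_big - sum_small) / n
    var = (sumsq_big - sumsq_small) / n - mean**2
    return img > mean + k * np.sqrt(np.fmax(var, 0.0))

# Usage sketch: detections = cfar_detect(morphological_prefilter(ir_frame))
```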
ATR algorithm performance is directly related to the signature and clutter on which it operates. This paper describes a practical signal-to-clutter measure, computed directly from the target signature, that is used to compare the relative detection and recognition difficulty of data sets. Results are shown on a variety of data sets.
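As an illustration only, here is one plausible signature-based signal-to-clutter ratio; the paper's exact measure is not reproduced, and the mask-based contrast definition below is an assumption.

```python
# Minimal sketch: one possible target-signature-based signal-to-clutter ratio.
import numpy as np

def signal_to_clutter_db(image, target_mask):
    """Contrast of the target signature against clutter spread, in dB.
    target_mask is a boolean array marking the target signature pixels."""
    target = image[target_mask]
    clutter = image[~target_mask]
    scr = np.abs(target.mean() - clutter.mean()) / (clutter.std() + 1e-12)
    return 20.0 * np.log10(scr + 1e-12)
```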
VIGILANTE consists of two major components: (1) the viewing image/gimballed instrumentation laboratory (VIGIL) -- advanced infrared, visible, and ultraviolet sensors with appropriate optics and camera electronics; and (2) the analog neural three-dimensional processing experiment (ANTE) -- a massively parallel, neural network-based, high-speed processor. The powerful combination of VIGIL and ANTE will provide real-time target recognition/tracking capability suitable for Ballistic Missile Defense Organization (BMDO) applications as well as a host of other civil and military uses. In this paper, we describe VIGILANTE and its application to typical automatic target recognition (ATR) problems (e.g., aircraft/missile detection, classification, and tracking). This includes a discussion of the VIGILANTE architecture, with its unusual blend of experimental 3D electronic circuitry and custom-designed and commercial parallel processing components, as well as VIGILANTE's ability to handle a wide variety of algorithms that make extensive use of convolutions and neural networks. The paper also presents examples and numerical results.
Identification of airborne or ground targets using high resolution radar (HRR) range-profiles is a notoriously difficult problem, due in large part to the extreme variability of the range-profile for small changes in target aspect angle. In this paper, we address the problem of joint tracking and recognition of a target using a sequence of HRR range-profiles within a likelihood-based framework. The likelihood function for the scene configuration combines a dynamics-based prior on the sequence of target orientations with a likelihood for range-profiles given the target orientation. The recognition system performs joint inference on the target type parameter and the sequence of target orientations at the observation times. The primary issue with respect to successful recognition is modeling of the HRR data. The use of either deterministic or stochastic models for the range-profiles is possible within our framework. A deterministic model and a conditionally Gaussian model for the range-profile are introduced, and the likelihood functions under each model for varying orientations and target types are compared. Fundamental limits on the performance of orientation estimators using HRR data are described in terms of a Hilbert-Schmidt bound on the estimation error. These bounds are computed to provide a comparison of the performance achievable using each model for the HRR data and for HRR sensors operating in different frequency bands.
We present a new method for object recognition based on the trilinearity theorem, a theoretical result from projective geometry obtained by Shashua. The trilinearity theorem relates, in a trilinear form, the pixel coordinates of a given object point visible in three images of an object under varying pose. In a preprocessing stage, the object of interest in every image is segmented from its background, and the background is removed. For such segmented images, our system achieves a high correct classification rate. Known objects are represented in the system by a database of images which show the objects as seen from several different viewing directions. In order to utilize the trilinearity theorem for the classification of an input image, it is necessary to construct several triples of closely matching image points -- one point in the input image and one each in two database images of a single object. The triple generation is accomplished by means of Gabor feature vector matching for selected feature points in the images. Using techniques from robust regression, the parameters in the trilinear forms are then determined, and a reprojection of feature points onto one of the three views is performed. The magnitude of the resulting match error then determines whether all three images show the same object, and hence whether recognition of the object in the input image is achieved.
A biologically inspired neural network (BINN) system for IR/LADAR object recognition is presented in this paper. The BINN system uses a local spatial frequency-based method to locate potential targets in a scene image. The potential targets are separated from the background using a modified CORT-X boundary segmentation method, and target classification is carried out by a multilayer perceptron based on the local spatial frequency features extracted from both IR and LADAR images. Because of the local spatial frequency features, the CORT-X boundary segmentation method, and the rich training sets used, the BINN system is insensitive to target background, brightness, contrast level, contrast reversal, and geometry relative to the sensor. The BINN system has been successfully tested on hundreds of pairs of real IR/LADAR images that contain multiple examples of military vehicles of different sizes and brightness/range in various background scenes and orientations.
Artificial neural network (ANN) algorithms are applicable in a variety of roles for image processing in infrared search and track (IRST) systems. Achieving a high throughput is a key objective in developing ANNs for processing large numbers of pixels at high frame rates. Previous work has investigated the use of a neural core supported by configurable logic to achieve a versatile technology applicable to a variety of systems. The implementation of multi-layer perceptron (MLP) ANNs, using field programmable gate array (FPGA) technology to ensure upgradability and reconfigurability, is the focus of this research. Approximations to the MLP algorithms are needed to ensure that a high throughput can be achieved with a sufficiently low gate count.
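To illustrate the kind of approximation mentioned above, here is a minimal sketch (not the authors' FPGA design) of a piecewise-linear activation and fixed-point weights that keep MLP arithmetic simple enough for compact hardware.

```python
# Minimal sketch: hardware-friendly MLP approximations (assumed choices).
import numpy as np

def pl_sigmoid(x):
    """Piecewise-linear approximation of the logistic sigmoid."""
    return np.clip(0.25 * x + 0.5, 0.0, 1.0)

def quantize(w, frac_bits=8):
    """Round values to a fixed-point grid with frac_bits fraction bits."""
    scale = 2 ** frac_bits
    return np.round(w * scale) / scale

def mlp_layer(x, W, b, frac_bits=8):
    return pl_sigmoid(x @ quantize(W, frac_bits) + quantize(b, frac_bits))

# Rough check of the activation approximation against the exact sigmoid:
x = np.linspace(-6, 6, 1001)
err = np.max(np.abs(pl_sigmoid(x) - 1.0 / (1.0 + np.exp(-x))))
print(f"max |pl_sigmoid - sigmoid| ~ {err:.3f}")
```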
To give a real-time adaptive self-organizing capability to an automatic target recognition (ATR) system while suppressing over-clustering, modified adaptive resonance theory (mART) neural networks are proposed that include the vigilance test method of the self-organizing map (SOM) and the real-time adaptive clustering algorithm of ART. These neural networks effectively cluster arbitrary feature maps that are largely invariant to two-dimensional distortion, so as to address the three-dimensional distortion problem. For extracting features that are invariant to two-dimensional distortion, five alternative methods are tested in this paper. To demonstrate the performance of the proposed neural networks, experiments are carried out with a database composed of 9 fighters and 5 tanks. Under the condition that the system occupies the same amount of memory, mART produces a recognition rate 19% higher than that of the SOM neural networks. Consequently, the proposed approaches are shown to contribute greatly to realizing a three-dimensional distortion-invariant target recognition system.
Solution procedures for the traveling salesman problem (TSP), i.e., the problem of finding the minimum Hamiltonian circuit in a network of cities, can be divided into two categories: exact methods and approximate (or heuristic) methods. Since the TSP is an NP-hard problem, good heuristic approaches are of interest. Neural network heuristic solutions of the TSP were initiated by Hopfield and Tank. One such heuristic, the elastic net method, is illustrated as follows: an imaginary rubber band is placed at the centroid of the distribution of n cities; then some finite number m (m greater than n) of points (nodes) on this rubber band change their positions according to the dynamics of the method, and eventually they describe a tour around the cities. We analyze the dynamics and stability of the elastic net algorithm. We show that if a unique node converges to a city, then the synaptic strength between them approaches one. We then generalize to the case where more than one node converges to a city. Furthermore, a typical application that could make use of the elastic net method (e.g., multi-target tracking) is pointed out for later studies. In order to verify the proof of concept and the associated theorems, computer simulations were conducted for a reasonable number of cities.
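The following is a minimal sketch of the classical elastic net update described above (Durbin-Willshaw style); the annealing schedule and gain parameters are assumptions, not the paper's analysis.

```python
# Minimal sketch of the elastic net heuristic for the TSP (assumed parameters).
import numpy as np

def elastic_net_tsp(cities, m_factor=2.5, alpha=0.2, beta=2.0,
                    k0=0.2, k_decay=0.99, n_iter=2000):
    n = len(cities)
    m = int(m_factor * n)                         # more nodes than cities
    centroid = cities.mean(axis=0)
    theta = np.linspace(0, 2 * np.pi, m, endpoint=False)
    y = centroid + 0.1 * np.column_stack([np.cos(theta), np.sin(theta)])
    k = k0
    for _ in range(n_iter):
        # w[i, j]: normalised attraction of city i to node j
        d2 = ((cities[:, None, :] - y[None, :, :]) ** 2).sum(axis=2)
        w = np.exp(-d2 / (2 * k * k))
        w /= w.sum(axis=1, keepdims=True)
        pull = (w[:, :, None] * (cities[:, None, :] - y[None, :, :])).sum(axis=0)
        tension = np.roll(y, 1, axis=0) - 2 * y + np.roll(y, -1, axis=0)
        y += alpha * pull + beta * k * tension    # city pull + ring elasticity
        k = max(k * k_decay, 0.01)                # anneal the scale parameter
    return y                                      # node ring approximating a tour

cities = np.random.default_rng(1).random((30, 2))
tour_nodes = elastic_net_tsp(cities)
```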
The general area of signal and image processing that focuses upon the detection and identification of military targets is known as automatic target recognition. This paper compares the impact of alternative wavelet processing techniques upon the performance of neural networks being used for target detection. In particular, the use of a filter whose coefficients are a linear combination of wavelet coefficients gave rise to an energy distribution in which targets were more detectable with fewer false alarms than when the same targets were sought in images whose data dimensionality was reduced using a conventional wavelet.
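As a rough illustration of the filtering idea above, the sketch below convolves an image with a filter built as a linear combination of 2-D Haar wavelet kernels and thresholds the resulting energy; the kernels, weights, and threshold rule are assumptions, not the paper's filter.

```python
# Minimal sketch: detection with a filter formed from a linear combination
# of wavelet kernels (assumed Haar building blocks and weights).
import numpy as np
from scipy.signal import convolve2d

# 2-D Haar wavelet kernels (horizontal, vertical, diagonal detail)
h = np.array([[1, -1], [1, -1]]) / 2.0
v = np.array([[1, 1], [-1, -1]]) / 2.0
d = np.array([[1, -1], [-1, 1]]) / 2.0

def combined_wavelet_filter(image, weights=(0.5, 0.3, 0.2)):
    """Convolve the image with a weighted combination of wavelet kernels
    and return the pointwise energy of the response."""
    kernel = weights[0] * h + weights[1] * v + weights[2] * d
    response = convolve2d(image, kernel, mode="same", boundary="symm")
    return response ** 2

def detect(image, k=4.0):
    energy = combined_wavelet_filter(image)
    return energy > energy.mean() + k * energy.std()
```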
The NASA Search and Rescue Mission at Goddard Space Flight Center (GSFC) is carrying out a technology development project intended to complement the COSPAS-SARSAT satellite-based distress alerting and locating system. That system is based on emergency radio beacons and cannot function when beacons fail to operate. The beaconless search and rescue concept utilizes an airborne or spaceborne remote sensing instrument, such as a synthetic aperture radar (SAR), to aid in searching for downed aircraft in remote regions when no beacon is present. Compared with conventional visual search, a radar-based system would be capable of dramatically improving crash site detection due to its wide area coverage and foliage penetration. Moreover, the performance of this system is unaffected by weather conditions and ambient light level, and it therefore offers the quick response time that is vital to the survival of crash victims. The Search and Rescue Mission has conducted a series of field experiments using the Jet Propulsion Laboratory's airborne SAR system (AIRSAR), which has demonstrated the technical feasibility of using SAR. The SAR data processing software (SARDPS) developed at GSFC is used to produce high-quality SAR images for post-processing and analysis. Currently, various elements of an operational system are being investigated, including a SAR designed specifically to meet search and rescue needs, real-time or near-real-time on-board SAR processing, and processing algorithms for advanced automatic crash site detection, image georectification, and map registration.
The background or clutter in SAR imagery has a significant stochastic component. It is often desirable to be able to rapidly characterize the clutter distribution and/or classify the background type based on the clutter distribution. We model the stochastic clutter as a piecewise stationary random field. The individual stationary subregions of homogeneity in the field can then be characterized by marginal density functions. This level of characterization is often sufficient for determination of clutter type on a local basis. We present a technique for the simultaneous characterization of the subregions of a random field based on semiparametric density estimation on the entire random field. This technique is based on a borrowed strength methodology that allows the use of observations from potentially dissimilar subregions to improve local density estimation and hence random process characterization. This approach is demonstrated on a set of NASA/JPL AIRSAR images, including an example of clutter-dependent crash site detection.
Polarimetric NASA/JPL AIRSAR data are processed down to three complex quantities corresponding to the HH, HV, and VV polarizations when reciprocity is assumed. Phase angles are referenced to the HH phase angle, resulting in five independent quantities. In the NASA SAR Search and Rescue Mission it is important to know how light-plane crash sites may stand out from the background in either the original five-dimensional space or in some derived feature space. Parallel coordinate plots are an important tool for performing exploratory data analysis in higher-dimensional spaces. In this paper we employ this technique both to analyze different clutter/background types and to perform detailed analysis of particular crash sites.
Synthetic aperture radar (SAR) is uniquely suited to help solve the search and rescue problem since it can be utilized day or night and through both dense fog and thick cloud cover. This paper describes the search and rescue data processing system (SARDPS) developed at Goddard Space Flight Center. SARDPS was developed for the Search and Rescue Mission Office in order to conduct research, development, and technology demonstration of SAR to quickly locate small aircraft which have crashed in remote areas. In order to effectively apply SAR to the detection of crashed aircraft, several technical challenges needed to be overcome. These include full-resolution SAR image formation using low-frequency radar appropriate for foliage penetration, the application of autofocusing for SAR motion compensation in the processing system, and the development of sophisticated candidate crash site detection algorithms. In addition, the need to dispatch rescue teams to specific locations requires precise SAR image georectification and map registration techniques. The final end-to-end processing system allows raw SAR phase history data to be quickly converted to georeferenced map/image products with candidate crash site locations identified.
This paper describes the rudiments of a design and implementation approach that will produce low-cost, quick-turnaround airborne synthetic aperture radar (SAR) systems, including designs for remotely piloted vehicles (RPVs). The concept is based on strict adherence to a discipline of simplicity: in defining the application boundary, in the corresponding design that follows, in extending this core of simplicity through the build and test cycle, and in continuing this theme when system modifications and upgrades are considered. As this paper points out, the tenets for low-cost development of SAR systems are not new; indeed, several such developments validate the guidelines advocated here. The crux of this end-to-end development simplicity is to minimize the functions assigned to the on-board radar system, transferring them to less expensive ground-based information processing assets that perform motion compensation, image signal processing, and target identification/classification. This limits the breadth of applications of the airborne system, but in many cases it is an acceptable compromise.
The problem of automatically locating targets in synthetic aperture radar (SAR) imagery has traditionally been addressed in the image or intensity domain. A fully polarimetric SAR provides additional information that can be used in connection with SAR processing for speckle reduction, without any degradation of SAR resolution, and in automatic target detection for classifying the source of a particular scattering signature in an image. The polarimetric information can also aid in locating targets that lack a strong intensity return. This paper presents enhanced imagery and automatic target detection results using data collected under the NASA Search and Rescue Mission Office at Goddard Space Flight Center (GSFC) by the NASA/JPL AirSAR radar.
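For orientation, here is a minimal sketch of standard combinations of the three polarimetric channels (span and Pauli-type images) that are commonly formed before detection; these are not necessarily the combinations used in the paper.

```python
# Minimal sketch: span and Pauli-type images from calibrated HH, HV, VV data.
import numpy as np

def polarimetric_images(hh, hv, vv):
    """hh, hv, vv: complex 2-D arrays (reciprocity assumed, HV == VH)."""
    span = np.abs(hh) ** 2 + 2 * np.abs(hv) ** 2 + np.abs(vv) ** 2
    odd_bounce = np.abs(hh + vv) ** 2 / 2.0      # surface / trihedral-like
    even_bounce = np.abs(hh - vv) ** 2 / 2.0     # dihedral-like
    volume = 2 * np.abs(hv) ** 2                 # cross-pol (volume-like)
    return span, odd_bounce, even_bounce, volume
```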
Full polarization data gathered by the NASA/JPL AIRSAR P-band radar in Gilmore Creek, Alaska (1993), Half Moon Bay, California (1994), and Bishop, California (1995) are imaged using full polarization displays. Full polarization symbol displays of reflector signatures and of simulated small aircraft crash sites are presented. In addition, ideal polarimetric reflector responses are compared to measured reflector signatures extracted from ADTS MMW SAR scenes and AIRSAR scenes. It is demonstrated that full polarization displays can be used to quickly identify polarization calibration errors based on clutter responses alone.
Recent advances in the areas of phase history processing, interferometry, and radargrammetric adjustment have made possible extremely accurate information extraction from synthetic aperture radar (SAR) image pairs by means of interferometric techniques. The potential gain in accuracy is significant since measurements can theoretically be determined to within a fraction of a wavelength (subcentimeter accuracy) as opposed to a fraction of pixel distance (meter accuracy). One promising application of interferometric SAR (IFSAR) is the use of coherent change detection (CCD) over large areas to locate downed aircraft. This application poses an additional challenge since IFSAR must be processed at longer wavelengths to achieve foliage penetration. In this paper a combination of advanced techniques is described for using airborne SAR imagery to carry out this mission. Performance parameters are derived, and some examples are given from actual data.
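The coherence statistic underlying coherent change detection is standard; the sketch below estimates it over a sliding window for two co-registered complex images (window size is an assumption).

```python
# Minimal sketch: sample coherence between two registered complex SAR images.
import numpy as np
from scipy.ndimage import uniform_filter

def coherence(s1, s2, win=5):
    """Returns |gamma| in [0, 1]; low coherence flags change between passes."""
    cross = s1 * np.conj(s2)
    num = uniform_filter(np.real(cross), win) + \
          1j * uniform_filter(np.imag(cross), win)
    den = np.sqrt(uniform_filter(np.abs(s1) ** 2, win) *
                  uniform_filter(np.abs(s2) ** 2, win))
    return np.abs(num) / np.fmax(den, 1e-12)
```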
For the problem of finding downed aircraft in forested areas, the NASA Search and Rescue Project has developed a system based on using fully polarimetric L-Band and P-Band synthetic aperture radar (SAR). Because of the resolution of the sensor and the size of the target, single-look imagery must be used. The problem of focusing the SAR imagery is a difficult one, especially at P-Band. An approach has been implemented with considerable success, based on a variant of the phase gradient autofocus algorithm, which provides the higher-order-than-quadratic phase correction needed.
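For reference, here is a minimal sketch of one textbook phase gradient autofocus iteration; the project's variant and its windowing strategy are not reproduced, and the window width is an assumption.

```python
# Minimal sketch of one phase gradient autofocus (PGA) iteration.
import numpy as np

def pga_iteration(img, window=32):
    """img: complex single-look image, rows = range bins, cols = azimuth.
    Returns the image after one phase-error estimate/removal pass."""
    n_az = img.shape[1]
    # 1. Circularly shift the brightest scatterer of each range bin to center
    shifts = n_az // 2 - np.argmax(np.abs(img), axis=1)
    centered = np.array([np.roll(row, s) for row, s in zip(img, shifts)])
    # 2. Window around the center to isolate the scatterer responses
    w = np.zeros(n_az)
    lo, hi = n_az // 2 - window // 2, n_az // 2 + window // 2
    w[lo:hi] = 1.0
    g = np.fft.ifft(centered * w, axis=1)          # azimuth phase-history domain
    # 3. Estimate the phase-error gradient and integrate it
    dg = np.diff(g, axis=1)
    grad = np.angle(np.sum(np.conj(g[:, :-1]) * dg, axis=0))
    phase_err = np.concatenate([[0.0], np.cumsum(grad)])
    # 4. Remove the estimated error and re-form the image
    history = np.fft.ifft(img, axis=1) * np.exp(-1j * phase_err)
    return np.fft.fft(history, axis=1)
```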
The NASA-sponsored Search and Rescue Synthetic Aperture Radar (SAR) program seeks to use foliage-penetrating SAR to locate light-plane crashes in remote areas. In addition to the hardware and pattern recognition issues, data management is recognized as a significant part of the overall problem. A single NASA/JPL AIRSAR polarimetric image in the P, L, and C bands takes approximately 524 megabytes of storage. Algorithmic development efforts, as well as an eventual operational system, will likely require maintaining a large database of SAR imagery, as well as derived features and associated geographical information. The need for this much data is driven in large part by the complexity of the detection problem: a simple classification/detection algorithm does not currently seem feasible. Rather, a data-driven approach that can incorporate local background characteristics as well as geographical information seems to be called for. This in turn makes data management a key issue. This paper presents a comprehensive data management framework suitable for the SAR problem, as well as for other similar massive data set management problems.
The U.S. Army Research Laboratory (ARL) has been developing advanced acoustic array signal processing algorithms that use small-baseline arrays for detecting, direction finding, tracking, and classifying ground targets. In this paper, we discuss the wideband MUSIC algorithms and the real-time implementation of these algorithms in the ARL sensor testbed. Computational complexity issues and CPU platforms pertaining to the testbed are discussed. In addition, we present experimental results for multiple-target test runs showing the relative performance of the delay-sum and incoherent wideband MUSIC algorithms versus ground truth.
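The sketch below illustrates incoherent wideband MUSIC in its generic form: narrowband MUSIC per frequency bin with the spatial spectra averaged across bins. It assumes a uniform linear array and far-field plane waves; the ARL testbed geometry and implementation are not reproduced.

```python
# Minimal sketch: incoherent wideband MUSIC for bearing estimation.
import numpy as np

def incoherent_wideband_music(snapshots, freqs, sensor_pos, n_src, c=343.0,
                              angles_deg=np.linspace(-90, 90, 181)):
    """snapshots: list of (n_sensors, n_snap) complex arrays, one per bin.
    freqs: center frequency (Hz) of each bin. sensor_pos: 1-D positions (m)."""
    spectrum = np.zeros(len(angles_deg))
    for X, f in zip(snapshots, freqs):
        R = X @ X.conj().T / X.shape[1]                 # sample covariance
        w, V = np.linalg.eigh(R)                        # ascending eigenvalues
        En = V[:, :-n_src]                              # noise subspace
        for i, ang in enumerate(np.deg2rad(angles_deg)):
            a = np.exp(-2j * np.pi * f * sensor_pos * np.sin(ang) / c)
            denom = np.linalg.norm(En.conj().T @ a) ** 2
            spectrum[i] += 1.0 / max(denom, 1e-12)
    return angles_deg, spectrum / len(freqs)            # peaks ~ bearings
```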
Infrared scenes are modeled as consisting of two kinds of targets: flexible 2-D models for simple shapes and rigid 3-D faceted models for detailed targets. The flexible models permit rapid saccadic detection of targets and accommodate 'clutter' objects not present in the target library. The rigid model library contains specific vehicles or other objects we wish to discriminate. A likelihood model based on sensor statistics is combined with a prior distribution on possible scenes to form a posterior distribution for Bayesian inference. Nuisance parameters associated with the radiant intensities of the background and object facets are adaptively estimated as the inference proceeds. A general Metropolis-Hastings acceptance/rejection algorithm for sampling from the posterior distribution is proposed.
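The acceptance/rejection step named above is standard; a minimal sketch with a symmetric random-walk proposal follows (the paper's scene-space proposals and posterior are not reproduced).

```python
# Minimal sketch: Metropolis-Hastings sampling from an unnormalised posterior.
import numpy as np

def metropolis_hastings(log_post, x0, n_samples=5000, step=0.1, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    lp = log_post(x)
    samples = []
    for _ in range(n_samples):
        prop = x + step * rng.standard_normal(x.shape)   # symmetric proposal
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:          # accept/reject
            x, lp = prop, lp_prop
        samples.append(x.copy())
    return np.array(samples)

# Example: sampling a 2-D Gaussian posterior (illustrative only)
post_samples = metropolis_hastings(lambda v: -0.5 * np.sum(v**2), np.zeros(2))
```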
We propose an enabling algorithm for automatic target detection wherein the targets are highly similar and differ only in fine details. The translated and scaled target images are processed in the Mellin transform domain and subsequently detected using a learning vector quantization (LVQ) neural network.
In this paper, we present a complete system for the recognition and localization of a 3D model from a sequence of monocular images with known motion. The originality of this system is twofold. First, it uses a purely 3D approach, starting from the 3D reconstruction of the scene and ending by the 3D matching of the model. Second, unlike most monocular systems, we do not use token tracking to match successive images. Rather, subpixel contour matching is used to recover more precisely complete 3D contours, yielding a denser and higher level representation of the scene. The reconstructed contours are fused along successive images to further increase the localization precision and the robustness of the 3D reconstruction. Finally, corners are extracted from the 3D contours and used to generate hypotheses of the model position in a hypothesize-and-verify algorithm. This algorithm yields a robust recognition and precise localization of the model in the scene. Results are presented on infrared image sequences with different resolutions, demonstrating the precision of the localization as well as the robustness and the low computational complexity of the algorithms.
(U) Future successful ballistic missile booster intercepts will require advanced automatic target detection, tracking, classification, and identification (ATDCI) image processing techniques. Two such techniques are presented in this classified SECRET paper using the synthetic scene generator model (SSGM) in combination with the advanced systems (AVS) image processing package. Two challenging multispectral cases are treated: (1) missile hardbody occultation by the missile exhaust plume, and (2) variable plume/hardbody system (PHS) gradient intensities generated by missile tumbling due to exiting the sensible atmosphere. The target detection, tracking, and edge extraction methods selected for this study include morphological open-close operations within decision-level fusion for the obscuration case and pixel-level fusion for variable edge intensities. Other investigators have approached this issue using similar image processing techniques. The multispectral (2.69 - 2.95 micrometer SWIR; 4.17 - 4.2 and 4.35 - 4.50 micrometer MWIR; and 8.0 - 12.0 micrometer LWIR) target/background imagery includes SWIR/MWIR boost-phase track (with the occlusion problem) and LWIR aimpoint selection (with the tumbling problem). The two classified missile systems are: (1) a depressed-angle submarine launched ballistic missile (SLBM) and (2) a medium range ballistic missile (MRBM). The results indicate that for six-degree-of-freedom (6 DOF) hardbodies, ATDCI geometrical pattern reference libraries should be optimized to accommodate the extreme variable gradient geometries of tumbling midcourse targets. For boost-phase missile hardbody occultation by missile exhaust plumes, segmentation and feature extraction should be implemented in each bandpass before passing the data to the ATDCI classifier. This study demonstrates that although the plume/hardbody system edges were extracted, the geometry of the target edge often deviated from symmetry.
Geometric hashing provides a reliable and transformation-independent representation of a target. The characterization of a target object is obtained by establishing a vector basis relative to a number of interest points unique to the target. The number of basis points required is a function of the dimensionality of the environment in which the technique is being used. This basis is used to encode the other points in the object, constructing a highly general (transformation-independent) representation of the target. The representation is invariant under both affine and geometric transformations of the target interest points. Once a representation of the target has been constructed, a simple voting algorithm can be used to examine sets of interest points extracted from subsequent images in order to determine the possible presence and location of that target. Once an instance of the object has been located, further computation can be undertaken to determine its scale, orientation, and deformation due to changes in the viewpoint parameters. This information can be further analyzed to provide guidance. This paper discusses the complexity measures associated with task division and target image processing using geometric hashing. These measures are used to determine the areas that will most benefit from hardware assistance and possible parallelism. These issues are discussed in the context of an architecture design, and a high-speed (hardware-assisted) geometric hashing approach to target recognition is proposed.
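To make the table-building and voting steps concrete, the sketch below shows a 2-D similarity-invariant variant of geometric hashing; the paper considers more general affine bases, and the quantization step and point sets here are assumptions.

```python
# Minimal sketch: geometric hashing table construction and voting (2-D,
# similarity-invariant basis pairs).
import numpy as np
from collections import defaultdict
from itertools import permutations

def hash_key(coords, q=0.25):
    return tuple(np.round(coords / q).astype(int))

def build_table(models, q=0.25):
    """models: {name: (n_points, 2) array}. Returns the geometric hash table."""
    table = defaultdict(list)
    for name, pts in models.items():
        for i, j in permutations(range(len(pts)), 2):     # ordered basis pair
            origin, axis = pts[i], pts[j] - pts[i]
            scale = np.linalg.norm(axis)
            rot = np.array([[axis[0], axis[1]], [-axis[1], axis[0]]]) / scale
            for k, p in enumerate(pts):
                if k in (i, j):
                    continue
                coords = rot @ (p - origin) / scale        # basis-invariant coords
                table[hash_key(coords, q)].append((name, (i, j)))
    return table

def vote(table, scene_pts, q=0.25):
    """Tally votes for (model, basis) entries consistent with the scene points."""
    votes = defaultdict(int)
    for i, j in permutations(range(len(scene_pts)), 2):
        origin, axis = scene_pts[i], scene_pts[j] - scene_pts[i]
        scale = np.linalg.norm(axis)
        if scale < 1e-9:
            continue
        rot = np.array([[axis[0], axis[1]], [-axis[1], axis[0]]]) / scale
        for k, p in enumerate(scene_pts):
            if k in (i, j):
                continue
            for entry in table.get(hash_key(rot @ (p - origin) / scale, q), []):
                votes[entry] += 1
    return max(votes.items(), key=lambda kv: kv[1]) if votes else None
```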
The present paper addresses the problem of extracting features for classification purposes. A vector valued sample is to be classified to one of a number of classes with known distributions using the Bayes decision rule. The complexity of the classifier depends on the dimension of the vectors; thus it is of interest to keep this dimension as small as possible. One way to reduce the dimension is to apply a linear transformation on data. This transformation should be chosen so that no 'essential' information is lost. There are several suggestions on how this concept should be defined. We study a measure of class separability defined as the mean of all interclass Mahalanobis distances. The method to be presented, however, applies to all weighted quadratic distance measures. The validity of the proposed transformation is justified by applying the transformation to both Monte Carlo simulated data and to actual measured data. The measured data come from an impulse radar system with the purpose of classifying buried objects. The proposed transformation is shown to outperform the well known principal component analysis (PCA).
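The following is a minimal sketch of one standard construction for such a transformation under a common (pooled) covariance: whiten, then keep the dominant directions of the pairwise mean-difference scatter, which maximizes the mean interclass Mahalanobis distance retained by the projection. The paper's exact optimization and weighting are not reproduced.

```python
# Minimal sketch: linear dimension reduction preserving mean pairwise
# interclass Mahalanobis distance (pooled-covariance assumption).
import numpy as np
from itertools import combinations

def separability_transform(class_means, pooled_cov, out_dim):
    """class_means: list of (d,) mean vectors; pooled_cov: (d, d) common
    covariance. Returns an (out_dim, d) projection matrix."""
    # Scatter of all pairwise mean differences
    B = sum(np.outer(mi - mj, mi - mj)
            for mi, mj in combinations(class_means, 2))
    # Whiten by the pooled covariance, then keep dominant directions of B
    L = np.linalg.cholesky(pooled_cov)
    W = np.linalg.inv(L)                     # whitening: W Sigma W^T = I
    Bw = W @ B @ W.T
    vals, vecs = np.linalg.eigh(Bw)
    top = vecs[:, np.argsort(vals)[::-1][:out_dim]]
    return top.T @ W                         # rows span the reduced space

def mean_mahalanobis(class_means, cov):
    """Mean of all interclass Mahalanobis distances (squared)."""
    inv = np.linalg.inv(cov)
    pairs = list(combinations(class_means, 2))
    return np.mean([(mi - mj) @ inv @ (mi - mj) for mi, mj in pairs])
```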
Automatic target recognition (ATR) algorithms have offered the promise of recognizing items of military importance over the past 20 years. It is the experience of the authors that greater ATR success would be possible if the ATR were used to 'aid' the human operator instead of automatically 'directing' the operator. ATRs have failed not because of their probability of detection versus false alarm rate, but because of neglect of the human component. ATRs are designed to improve overall throughput by relieving the human operator of the need to perform repetitive tasks like scanning vast quantities of imagery for possible targets. ATRs are typically inserted prior to the operator and provide cues, which are then accepted or rejected. From our experience at three field exercises and a current operational deployment to the Bosnian theater, this is not the best way to obtain total system performance. The human operator makes decisions based on learning, history of past events, and surrounding contextual information. Losing these factors by providing imagery overlaid with symbolic cues on top of the original imagery actually increases the workload of the operator. This paper covers the lessons learned from the field demonstrations and the operational deployment. The reconnaissance and intelligence community's primary use of an ATR should be to establish prioritized cues of potential targets for an operator to 'pull' from and to be able to 'send' targets identified by the operator for a 'second opinion.' The Army and Air Force are modifying their exploitation workstations over the next 18 months to use ATRs that operate in this fashion. This will be the future architecture into which ATRs for the reconnaissance and intelligence community should integrate.
The next generation of weapons systems will benefit from an array of new technologies which, when integrated, will provide the capability of accurately selecting the correct target. For example, target image features can be extracted from high resolution satellite data and this information can be fused with feature positions obtained from a weapon's imaging sensor. This will allow automatic target recognition to be performed. Terrain aided localization using electro-optical sensing (TALEOS) is a robust method of enhancing the performance of an imaging system through the exploitation of other sources of information. The primary image processing technique used in TALEOS is model-matching. The objective of model-matching is to discover the 3D position and orientation of an object (the model) with respect to the sensor reference frame by performing a match with corresponding features. In TALEOS, the model is derived from remotely sensed data and contains information about potentially observable features which might be extracted from the image. Embedded in this extended model is information about specific targets, including their known or estimated position, and features which characterize them. The Sowerby Research Center terrain model facility was used to gather realistic imagery. The terrain model is a 300:1 scale model of a 25 square kilometer area of real terrain. An overhead gantry system carries a video camera over the model enabling a wide variety of flight scenarios to be simulated experimentally. By a combination of special paint schemes and video inversion, pictures of the terrain model can provide a realistic simulation of infrared imagery. An image database was simulated using an overhead view of the model as if seen from a 'satellite' or reconnaissance aircraft. This imagery was utilized to evaluate the performance of the TALEOS technique for comparison with theoretical results. TALEOS integrates the data from the image processing subsystem with data from a (modeled) inertial navigation system, using a Kalman filter, to generate the position of the sensor relative to the target. This paper describes TALEOS. The principles of these technologies are described and test results presented. Possible future developments are discussed.
A parallel algorithm of O(N) complexity, where N is the number of compared pixels, for image matching based on the dynamic programming principle is addressed. Its new correlation (cost) function evaluates image similarity very precisely. The running time of the proposed parallel algorithm on a bi-SPARC 20/60 workstation is provided.
Difficulties in performing density estimation in high-dimensional spaces require dimensionality reduction in many automatic target recognition applications. This paper presents a method using a genetic algorithm to determine projections that are near optimal according to an arbitrary criterion function. An example is presented that optimizes a discriminant criterion in the use of a quadratic classifier to detect regions of interest for locating SCUD missiles in unmanned aerial vehicle (UAV) imagery.
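A minimal sketch of the search idea follows: a simple real-coded genetic algorithm over projection matrices scored by a user-supplied criterion function. The operators, population size, and example criterion are assumptions, not the paper's design.

```python
# Minimal sketch: genetic search for a projection maximising a criterion.
import numpy as np

def ga_projection(criterion, in_dim, out_dim, pop=40, gens=100,
                  sigma=0.1, seed=0):
    """criterion: callable mapping an (out_dim, in_dim) matrix to a score."""
    rng = np.random.default_rng(seed)
    population = rng.standard_normal((pop, out_dim, in_dim))
    for _ in range(gens):
        scores = np.array([criterion(p) for p in population])
        order = np.argsort(scores)[::-1]
        parents = population[order[:pop // 2]]              # truncation selection
        children = []
        while len(children) < pop - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            mask = rng.random(a.shape) < 0.5                 # uniform crossover
            child = np.where(mask, a, b)
            child += sigma * rng.standard_normal(child.shape)  # mutation
            children.append(child)
        population = np.concatenate([parents, np.array(children)])
    scores = np.array([criterion(p) for p in population])
    return population[np.argmax(scores)]

# Example criterion (hypothetical): variance retained by projecting data X
# proj = ga_projection(lambda A: np.var(X @ A.T), X.shape[1], 3)
```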
In this paper we present a new method for automatic target/object classification using the optimum polarimetric radar signatures of the targets/objects of interest. The state of the art in radar target recognition is based either on the use of single polarimetric pairs or on the four preset pairs of orthogonal polarimetric signatures. Because of these limitations, polarimetric radar processing has been fruitful only in the area of target detection; it has not shown promise for improving target classification/recognition performance. The use of optimum polarimetric features for enhancing target recognition using synthetic aperture radar is explored in this paper. The polarization scattering matrix is used to derive target signatures at arbitrary transmit and receive polarizations (arbitrary polarization inclination angles and ellipticity angles). An optimization criterion that minimizes the within-class distance and maximizes the between-class distance is then used to derive optimum sets of polarimetric signatures, and arbitrary polarization attributes are extracted from sets of real fully polarimetric SAR imagery. The performance of the automatic target detection and recognition algorithms using optimum sets of polarimetric signatures is derived and compared with that obtained using the non-optimum signatures. The results show that noticeable improvements can be achieved by using the optimum over the non-optimum signatures.
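The synthesis of a response at arbitrary transmit/receive polarizations from the scattering matrix is standard; a minimal sketch follows (calibration and sign-convention details are omitted, and the example matrix is illustrative only).

```python
# Minimal sketch: polarization synthesis from the 2x2 scattering matrix.
import numpy as np

def jones(psi, chi):
    """Unit polarization vector for orientation angle psi and ellipticity
    angle chi (radians)."""
    return np.array([np.cos(psi) * np.cos(chi) - 1j * np.sin(psi) * np.sin(chi),
                     np.sin(psi) * np.cos(chi) + 1j * np.cos(psi) * np.sin(chi)])

def synthesized_power(S, psi_t, chi_t, psi_r, chi_r):
    """S: 2x2 complex scattering matrix [[HH, HV], [VH, VV]] for one pixel."""
    ht, hr = jones(psi_t, chi_t), jones(psi_r, chi_r)
    return np.abs(hr @ S @ ht) ** 2

# Example: co-polarized response at 45-degree linear polarization
S = np.array([[1.0, 0.1j], [0.1j, -0.8]])
p = synthesized_power(S, np.pi / 4, 0.0, np.pi / 4, 0.0)
```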
It is the nature of complex systems, composed of many interacting elements, that unanticipated phenomena develop. Computer simulation, in which the elements of a complex system are implemented as interacting software objects (actors), is an effective tool to study collective and emergent phenomena in complex systems. A new cognitive architecture is described for constructing simulation actors that can, like the intelligent elements they represent, adapt to unanticipated conditions. This cognitive architecture generates trial behaviors, estimates their fitness using an integral representation of the system, and has an internal apparatus for evolving a population of trial behaviors to changing environmental conditions. A specific simulation actor is developed to evaluate surveillance radar images of moving vehicles on battlefields. The vehicle cluster location, characterization and discrimination processes currently performed by intelligent human operators were implemented into a parameterized formation recognition process by using a newly developed family of 2D cluster filters. The mechanics of these cluster filters are described. Preliminary results are presented in which this GSM actor demonstrates the ability not only to recognize military formations under prescribed conditions, but to adapt its behavior to unanticipated conditions that develop in the complex simulated battlefield system.
This paper presents new results on the tracking of ballistic missile warheads using spatio-temporal wavelets. Here we focus our attention on handling more general classes of motion, such as acceleration. To accomplish this task, the spatio-temporal wavelet transform is adapted to the motion parameters on a frame-by-frame basis. Three different energy densities, associated with velocity, location, and size, have been defined to determine the motion parameters. We point out that maximizing these energy densities is equivalent to a minimum squared error estimation. Tracking results on synthetically generated image sequences demonstrate the capabilities of the proposed algorithm.
Target tracking research has been of interest to several different groups of researchers from different perspectives. Perhaps the development of greatest importance in the history of target tracking research is the recent architectural revolution in the algorithms and techniques used for target tracking, namely, the advent of neural networks and their application to nonlinear dynamical systems. It has already been established in the literature that the mathematical complexity of state-of-the-art tracking algorithms has gone far beyond the computational power of conventional digital processors. Since the introduction of Kalman filtering, several powerful mathematical tools have been added to target tracking techniques, e.g., probabilistic data association, correlation and gating, and evidential reasoning. All these methods have one thing in common: they track targets rather differently from the way nature does. It is hard to come up with a sound mathematical proof and verification of concept for the different parallel distributed architectures that seem appropriate for a general class of target tracking applications. However, the volume of contributions within the last decade applying various neural network architectures to different classes of target tracking scenarios cannot simply be ignored. Therefore, the objective of this paper is to classify and address the various neural network-based tracking algorithms that have been introduced from 1986 to the present and to discuss their common views as well as their differences in results and in architectures. We also address the role of mathematics in each of these algorithms and the extent to which conventional methods are used in conjunction with the neural network-based techniques.
It is known that human recognition performance is improved by time integration. We have investigated this phenomenon in image sequences. Object hypotheses are detected in the image sequence using object regions and motion, in analogy to human perception. A multiple-threshold segmentation and a wavelet-transform-based change detection process are used for detection. The detected segments are tracked over time. Our classification approach assumes that objects are recognized as connected entities. Therefore a structural object description derived from the image sequence was developed. This description contains the geometric relations and shape features of the individual object parts as well as the motion behavior. Normalized difference measures of the structural descriptions are derived for classification; the differences are determined and combined by a fuzzy approach. The results show that classification can be improved and stabilized by the object description derived from the image sequence.
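The structural descriptions and the exact fuzzy combination rule are not detailed in the abstract; the following minimal sketch, under those assumptions, shows one way normalized difference measures can be mapped to fuzzy similarities and combined for classification. Function names, the Gaussian membership, and the mean-combination rule are illustrative choices, not the paper's.

```python
import numpy as np

def normalized_difference(x, y, eps=1e-9):
    """Per-feature difference scaled to [0, 1]."""
    return np.abs(x - y) / (np.abs(x) + np.abs(y) + eps)

def fuzzy_similarity(diff, softness=0.2):
    """Map a normalized difference to a fuzzy membership in 'similar'."""
    return np.exp(-(diff / softness) ** 2)

def classify(features, prototypes):
    """Combine per-feature memberships (here by their mean) and pick the
    class prototype with the highest combined similarity."""
    scores = {label: fuzzy_similarity(normalized_difference(features, p)).mean()
              for label, p in prototypes.items()}
    return max(scores, key=scores.get), scores
```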
The acoustic response scattered by an object depends on its physical shape and structure and on its elastic properties. This paper addresses the exploitation of the information contained in the scattered waves, with the aim of providing a broader view of waveform analysis as applied to target detection and classification. The approach of this target recognition methodology is to reduce the multi-feature appearance obtained from time-frequency analysis of the received sensor signals into wave packets associated with the important scattering mechanisms. The decomposition method has a wide range of signal processing applications; the particular application considered here is sonar systems. In the case of a sonar target, mechanisms such as specular reflections, creeping waves, chalice waves, Bragg waves, and Bloch waves from different parts of the target, or scattering centers, can be incorporated into the target characterization. By applying pattern recognition logic, the present study can serve as a useful background for new sonar system development with advanced processing techniques and state-of-the-art computer hardware. Examples from simulation and laboratory measurements are used to test the robustness of the target's structural responses and scattering center distributions and to assess the capability of this target recognition scheme.
The use of normal modes in fluctuation-based processing for passive shallow water acoustic detection of submerged sources is investigated. Simulations using a Pekeris waveguide have been carried out. The results indicate that fluctuations of the total acoustic field are generally larger for near-surface noise sources than for deeper ones. When individual modes are considered, the fluctuations are quite large when all noise sources are located near the ocean surface, but they are reduced when even a weak noise source is present at a deeper location. Fluctuations are always high in the small region of source depth where the modal function is close to zero. The submerged source indicator introduced by Wagstaff exhibits trends similar to the standard deviation of the fluctuations, and accentuates those trends. Further research is suggested to determine the effects of using a finite number of hydrophones for modal projection. The effects of random ambient noise and of ambient noise sources distributed in a spatial continuum must also be investigated.
This paper presents the resonance, or resonant scattering, technique for detection and identification of underwater targets. According to resonance theory and the resonant scattering theory (RST), all underwater objects resonate at their natural frequencies when impinged upon by acoustic energy. These natural resonant frequencies appear as modulations in the frequency domain of the target echoes. Since these natural resonances appear as correlated, quasi-stationary signals originating from the underwater targets, the G-transform can efficiently be used to detect the presence of a target by detecting the presence of a stationary signal. Furthermore, since targets resonate at natural frequencies determined by their size, shape, and material composition, the G-transform of the echoes presents these resonances as a unique signature of each target. These unique signatures can then be used for target identification with trained neural networks.
Under illumination by a high resolution radar wave, a target can no longer be treated as a single point target; it must be regarded as an extended target composed of many scattering points. This paper studies the model and echo characteristics of millimeter-wave high resolution ground targets. Because of the relative motion between the target and the radar, the Doppler frequency carries important and useful information. An extended target is composed of many scattering points, and each, because of its different position and aspect angle, generates a different Doppler frequency. Each kind of target has a different geometry, and the number and positions of scattering points differ greatly among targets. An extended-target echo is therefore a complicated Doppler-modulated signal, and the distinctions among the Doppler-modulated echoes of different targets are large. This property is very useful for target recognition. First, the echo spectrum is computed by a Fourier transform. Second, the total spectral energy and four sub-band spectral energies are chosen as features. Finally, a BP neural network is used for target recognition and achieves a high recognition rate.
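A minimal sketch of the five spectral features described above (total spectral energy plus the energies of four sub-bands of the one-sided spectrum) is given below; the resulting vector would then feed the BP neural network classifier, which is omitted here. The function name and the normalization of the band energies by the total are illustrative assumptions.

```python
import numpy as np

def doppler_features(echo):
    """Total spectral energy plus the energies of four equal sub-bands of
    the one-sided power spectrum of an extended-target Doppler echo."""
    spectrum = np.abs(np.fft.rfft(np.asarray(echo, dtype=float))) ** 2
    total = spectrum.sum()
    band_energy = np.array([b.sum() for b in np.array_split(spectrum, 4)])
    # report band energies as fractions of the total (illustrative choice)
    return np.concatenate(([total], band_energy / (total + 1e-12)))
```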
In this paper, we first present a general electromagnetic scattering model for the Doppler-modulated features, i.e. the secondary modulation defined in the paper, resulting from rotor rotation, propeller blade rotation, and jet engine modulation (JEM). In addition, the airframe backscattering is computed by a dynamic facet modeling method for simulating the radar Doppler echoes. Processing of measured data shows the validity of the simulated radar echoes with the secondary modulation. A modified autocorrelation function is defined and used as the preliminary feature extraction of the secondary modulation, corresponding to spectral analysis in the frequency domain. Wavelet packet analysis then performs further feature extraction. Compared with 1D time-domain feature extraction, the 2D feature plane of the wavelet packet decomposition shows a higher correct recognition probability with a real-time fuzzy pattern comparator classifier.
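The modified autocorrelation is not specified in the abstract, so the sketch below substitutes a plain autocorrelation and then forms wavelet packet node energies as a feature set, assuming the PyWavelets package; the wavelet, decomposition level, and function name are hypothetical.

```python
import numpy as np
import pywt

def secondary_modulation_features(echo, wavelet='db4', level=4):
    """Autocorrelate the echo (a plain stand-in for the paper's modified
    autocorrelation), then take the energies of the wavelet packet nodes
    at one decomposition level as features for the classifier."""
    echo = np.asarray(echo, dtype=float)
    acf = np.correlate(echo, echo, mode='full')[len(echo) - 1:]
    wp = pywt.WaveletPacket(acf, wavelet=wavelet, maxlevel=level)
    nodes = wp.get_level(level, order='freq')
    return np.array([np.sum(np.asarray(n.data) ** 2) for n in nodes])
```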
An unbalanced training set lowers the classification rate of a neural network recognizer. To equalize the training set, two methods are proposed in this paper. The first controls the training parameters according to the properties of the training samples, i.e. it adjusts the learning rate with a fuzzy rule. The fuzzy rule is defined by the distribution of the training set and the importance of each class of samples. The classification rate can be improved in this way and fast convergence can be achieved. The second method of equalizing the training set reduces the over-represented samples by fuzzy clustering and increases the deficient samples by interpolation. A BP neural network is used as the recognizer. Computer simulations show that both methods are effective when the training data are unbalanced; the two approaches improve the classification rate of the neural network recognizer by equalizing the training set.
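As a hedged sketch of the second equalization method only, the code below shrinks an over-represented class to a fixed number of cluster centers (ordinary k-means standing in for the paper's fuzzy clustering) and grows a deficient class by interpolating between random pairs of its samples. The function name and sizes are illustrative.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def equalize_class(samples, target_size, rng=None):
    """Equalize one class: under-sample to `target_size` cluster centres,
    or over-sample by interpolating between random pairs of samples."""
    rng = rng or np.random.default_rng(0)
    samples = np.asarray(samples, dtype=float)
    n = len(samples)
    if n > target_size:                               # over-represented class
        centres, _ = kmeans2(samples, target_size, minit='++')
        return centres
    extra = []
    while n + len(extra) < target_size:               # deficient class
        i, j = rng.choice(n, 2, replace=False)
        a = rng.uniform()
        extra.append(a * samples[i] + (1 - a) * samples[j])
    return np.vstack([samples, np.array(extra)]) if extra else samples
```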
This paper describes a new detection approach for small moving objects in noisy image sequences. The algorithm consists of pre-detection and post-detection. The pre-detection stage uses a multiple-median filter and adaptive thresholding to suppress background clutter and enhance small targets: an estimate of the local clutter and noise is first formed and then subtracted from the primary image data to yield residuals that are potential targets, and adaptive thresholding finally turns the residual images into binary images. Post-detection is performed on the binary image sequences. The only a priori information required by the post-detection technique is the maximum velocity of objects in the image sequence. It uses the temporal continuity of the trajectories of moving targets to enhance the probability of detection and to reduce the probability of false alarm. Results on two-dimensional infrared image sequences are presented.
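A minimal sketch of the pre-detection stage, assuming a single median filter as the background estimator and a mean-plus-k-sigma rule as the adaptive threshold (the paper's multiple-median filter and exact threshold rule are not specified in the abstract; the function name and parameters are illustrative):

```python
import numpy as np
from scipy.ndimage import median_filter

def pre_detect(frame, bg_size=9, k=3.0):
    """Background estimate by median filtering, residual formation, and an
    adaptive (mean + k*std) threshold producing a binary candidate map."""
    frame = np.asarray(frame, dtype=float)
    background = median_filter(frame, size=bg_size)
    residual = frame - background                 # potential small targets
    thr = residual.mean() + k * residual.std()    # adaptive threshold
    return residual > thr
```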
With the increased availability of wideband radars, high range resolution makes target discrimination possible. In this paper, a new approach to target recognition is proposed and tested on backscattering returns of high range resolution (HRR) radar. The matrix pencil method is used to extract scattering centers from full-polarization, multi-frequency scattering returns. Feature vectors are constructed using the concept of a transient polarization response (TPR). The classification of feature vectors is performed by a multiresolution neural network, which provides properties superior to back-propagation neural networks. The algorithm is applied to the recognition of five scaled models of combat aircraft at different signal-to-noise ratios.
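For reference, a bare-bones, noise-free version of the matrix pencil pole estimation step is sketched below; the paper additionally uses SVD truncation and full-polarization data, which are omitted, and the function name and pencil parameter are illustrative.

```python
import numpy as np

def matrix_pencil_poles(x, n_poles, pencil=None):
    """Estimate the poles of x[n] = sum_k a_k * z_k**n by the matrix pencil
    method: build two shifted Hankel matrices and take the eigenvalues of
    pinv(Y0) @ Y1.  Minimal noise-free sketch of the extraction step."""
    x = np.asarray(x)
    N = len(x)
    L = pencil or N // 2                                  # pencil parameter
    Y = np.array([x[i:i + L + 1] for i in range(N - L)])  # Hankel data matrix
    Y0, Y1 = Y[:, :-1], Y[:, 1:]
    eigvals = np.linalg.eigvals(np.linalg.pinv(Y0) @ Y1)
    # keep the n_poles eigenvalues of largest magnitude (signal poles)
    return eigvals[np.argsort(-np.abs(eigvals))][:n_poles]
```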
Improving an optical target tracking system's ability to reject clutter and noise is still a subject deserving intensive research. This paper presents a new target tracking method that exploits the resemblance of subareas between frames of an image sequence. At a proper sampling rate, the target's projection on the focal plane of the optical/IR sensor changes only slightly from frame to frame. By correlating subareas of the images and searching for the location where the maximum correlation value occurs, the shift of the target position between frames can be estimated; this is how the traditional matched filter tracker (MFT) works. In the new method, a mapping domain is generated from which we locate the target's new position by simply selecting a maximum value. Although it requires more computing power from the tracker's processor, the new target tracking method is promising in that it is more robust in strong clutter and tolerates panning and rolling of the sensor (camera). A series of experiments was carried out to compare the performance of the new target tracking method with that of the MFT. The method of obtaining the mapping domain is described in detail, and experimental results demonstrate that, compared with the traditional MFT, the new tracker is more robust when the scene is unstable.
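The mapping domain construction is not given in the abstract; the sketch below only illustrates the baseline MFT-style correlation search described above, i.e. exhaustive normalized cross-correlation of a target template over a small search window around the previous position. Names and the search radius are illustrative.

```python
import numpy as np

def correlation_search(frame, template, center, search=8):
    """Slide a zero-mean, unit-variance template over a window around the
    previous target position; the offset with the highest normalized
    correlation score is the new position estimate (baseline MFT step)."""
    th, tw = template.shape
    t = (template - template.mean()) / (template.std() + 1e-9)
    best, best_score = center, -np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = center[0] + dy, center[1] + dx
            if y < 0 or x < 0:
                continue
            patch = frame[y:y + th, x:x + tw]
            if patch.shape != template.shape:
                continue
            p = (patch - patch.mean()) / (patch.std() + 1e-9)
            score = np.sum(p * t) / t.size
            if score > best_score:
                best, best_score = (y, x), score
    return best, best_score
```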
A real-time system has been developed for automatic target detection and tracking in infrared image sequences. The system comprises the algorithm software and a compact processor used for image processing. The algorithm includes spatial filtering, background cancellation, target detection, acquisition, and precise tracking. The leading features of the system are: (1) high speed in acquiring targets; (2) high detection probability; (3) sub-pixel tracking precision; and (4) the capability of working reliably and effectively against complicated backgrounds. In this paper, the theory and architecture of the system are described and test results for various situations are shown.
The propagation of laser polarization is one of the important properties in photonic signal and image processing technology and its applications. In this paper, an analysis of laser polarization propagating through a transmission system or interacting with a target is given. The analysis method is new, and is briefer and clearer than existing approaches. With this method, useful information about a target or object may be obtained, which can be used in detecting, recognizing, and tracking the target by means of real-time computation and control by a computer.
This paper describes the development of a computer vision system that hosts a suite of gazing algorithms for tracking-oriented recognition (GATOR). The goal of the GATOR system is to accomplish robust automatic target recognition and tracking (ATRT) tasks at a video rate of 30 Hz. The uniqueness of GATOR is that it employs multiple, tightly structured, advanced ATR algorithms to progressively increase the confidence level during recognition, enabling rugged performance and real-time processing in complicated battlefields. The biologically inspired GATOR system consists of three advanced image understanding algorithms: (1) a novel target wavelet filter to facilitate image registration, motion segmentation, and target tracking; (2) a morphological neural network (MNN) to provide target recognition and target-list updating; and (3) a fuzzy logic data fusion scheme to integrate recognition results from multiple frames of images. In GATOR, these algorithms are optimally integrated at different stages of recognition and tracking, which maximally exploits the strengths of each algorithm. Initial testing of the individual algorithms has demonstrated the potential of GATOR for battlefield applications.
In this paper, we determine the feasibility of radar clutter identification in the time-frequency domain. The goal is to compare the performance of time-frequency domain clutter recognition with other schemes that rely on the raw backscatter data. The time-frequency clutter features used for classification are extracted using the Wigner distribution of the analytical component of the returned radar signals. This approach is an attractive alternative to other clutter classification schemes because it makes use of the time-dependence of frequency domain signatures or the frequency dependence of time-domain scattering features. The disadvantage, however, is that time-frequency signatures are based on the magnitude squared of the received backscatter signal and are thus characterized by high noise variance. Another clutter classification scheme that relies on the higher order statistical features of radar clutter is presented. The performances of both clutter classification algorithms are compared in scenarios of additive white noise, colored noise, and alpha-stable noise.
Keywords: Time-frequency, clutter classification, Wigner-Ville, radar, cumulants.
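A minimal sketch of the feature-extraction front end described above, assuming a straightforward discrete Wigner-Ville distribution of the analytic signal (obtained via the Hilbert transform); no windowing or the paper's specific feature selection is included, and the function name is illustrative.

```python
import numpy as np
from scipy.signal import hilbert

def wigner_ville(x):
    """Discrete Wigner-Ville distribution of the analytic associate of a
    real signal: for each time n, form the instantaneous autocorrelation
    r[m] = z[n+m] * conj(z[n-m]) and Fourier-transform it over the lag m."""
    z = hilbert(np.asarray(x, dtype=float))
    N = len(z)
    W = np.zeros((N, N))
    for n in range(N):
        mmax = min(n, N - 1 - n)
        r = np.zeros(N, dtype=complex)
        for m in range(-mmax, mmax + 1):
            r[m % N] = z[n + m] * np.conj(z[n - m])
        W[n] = np.real(np.fft.fft(r))
    return W   # rows: time samples, columns: (doubled) frequency bins
```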
Angle of arrival estimation using beam-space MUSIC is implemented by nonuniformly subbanding the spatial spectrum of an antenna array with a wavelet transform. The resulting spatial spectrum is thus composed of beam-space elements with dyadically proportional beamwidths. Wavelet-based spatial filtering is compared with DFT-based filtering in the context of direction of arrival (DOA) estimation performance. Issues of sidelobe level and mainlobe resolution are addressed.
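For orientation, a minimal beam-space MUSIC sketch is given below for a uniform linear array with half-wavelength spacing, using a plain DFT beamforming matrix as in the comparison case; the paper's wavelet-derived subbands with dyadically proportional beamwidths would replace that matrix. Function name, array geometry, and parameters are assumptions.

```python
import numpy as np

def beamspace_music(snapshots, n_sources, n_beams, angles_deg):
    """Project element-space snapshots onto a few DFT beams, eigendecompose
    the beam-space covariance, and evaluate the MUSIC pseudospectrum over
    candidate arrival angles (uniform linear array, half-wavelength spacing)."""
    M = snapshots.shape[0]                             # number of elements
    B = np.fft.fft(np.eye(M), axis=0)[:n_beams].conj() / np.sqrt(M)
    y = B @ snapshots                                  # beam-space data
    R = y @ y.conj().T / y.shape[1]                    # beam-space covariance
    _, vecs = np.linalg.eigh(R)                        # ascending eigenvalues
    En = vecs[:, :n_beams - n_sources]                 # noise subspace
    spectrum = []
    for theta in np.deg2rad(angles_deg):
        a = np.exp(1j * np.pi * np.arange(M) * np.sin(theta))  # steering vector
        b = B @ a                                      # beam-space steering vector
        spectrum.append(1.0 / np.real(b.conj() @ En @ En.conj().T @ b))
    return np.array(spectrum)                          # peaks indicate DOAs
```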