Video image exploitation is an increasingly crucial component of battlefield surveillance systems. In order to address the present difficulties pertaining to video exploitation of tactical sensors, DARPA has developed the Airborne Video Surveillance (AVS) program. AVS will utilize Electro-Optical (EO) and Infrared (IR) video imagery similar to that available from current and future Unmanned Aerial Vehicle (UAV) systems. The AVS program will include the development, integration, and evaluation of technologies pertaining to precision video registration, multiple target surveillance, and automated activity monitoring into a system capable of real-time UAV video exploitation. When combined with existing EO and IR target recognition algorithms, AVS will provide the Warfighter with a comprehensive video battlespace awareness capability.
The U.S. Navy has been asked to provide insightful responses to questions regarding low- and high-resolution target discrimination and target classification capabilities for short- and medium-range ballistic missiles (SRBM/MRBM). Specific targets studied for this paper include foreign solid-booster exhaust plume and hardbody systems (PHS). Target gradient edge intensities were extracted for aimpoint selection and will be added to the pattern-referencing library database at NSWC Dahlgren Division. The results of this study indicate a growing requirement for advanced image processing on the focal plane array of a LEAP (lightweight exoatmospheric projectile)-type kinetic kill vehicle (KKV) in order to implement effective correlation-matching routines.
The VIGILANTE project is a planned vision system capable of tracking and recognizing targets in real time on a small airborne platform. The project consists of two parts: (1) the Viewing Imager/Gimballed Instrumentation Laboratory (VIGIL), an infrared and visible sensor platform with appropriate optics, and (2) the Analog Neural Three-dimensional processing Experiment (ANTE), a massively parallel, neural-network-based, high-speed processor. The VIGIL sensors are mounted on a helicopter. VIGIL consists of an optical bench containing a visible camera, a Long Wave Infrared (LWIR) camera, and a two-axis, gyro-stabilized gimbaled mirror. The helicopter is also equipped with a Global Positioning System (GPS) receiver and an Inertial Measurement Unit (IMU) for attitude and position determination, and two video links for ground-based image collection. Finally, a jet-powered, radio-controlled VIGILANTE Target Vehicle (VTV) has been manufactured and equipped with GPS. In the first stages of the project, the VIGIL system is mounted in a Hughes 500 helicopter and is used to acquire image sequences of the VTV for training and testing of the ANTE image recognition processor. Based on GPS and IMU input, the gimbal is pointed toward the VTV and acquires images. This paper describes the VIGIL system in detail. It discusses the overall approach for the first flight experiment, the results of the experiment, and the follow-on experiments that demonstrate real-time target recognition and tracking.
We report on a hierarchical design for extracting ship features and recognizing ships from SAR images, which will eventually feed a multisensor data-fusion system for airborne surveillance. The target is segmented from the image background using directional thresholding and region-merging processes. Ship end-points are then identified through a ship centerline detection performed with a Hough transform. A ship length estimate is calculated assuming that the ship heading and/or the cross-range resolution are known. A high-level ship classification identifies whether the target belongs to the Line (mainly combatant military ships) or Merchant ship category. Category discrimination is based on the radar scatterers' distribution in nine ship sections along the ship's range profile. A 3-layer neural network has been trained on simulated scatterer distributions and supervised by a rule-based expert system to perform this task. The NN smooths out the rules and the confidence levels on the category declaration. The Line ship type (Frigate, Destroyer, Cruiser, Battleship, Aircraft Carrier) is then estimated using a Bayes classifier based on the ship length. Classifier performance on simulated images is presented.
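The final stage described above, a Bayes classifier over the estimated ship length, can be sketched as follows. This is an illustration only: the class-conditional length distributions, their parameters, and the uniform priors are placeholder assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical class-conditional length statistics in metres (mean, std);
# the paper's actual distributions and priors are not given.
classes = {
    "Frigate":          (130.0, 10.0),
    "Destroyer":        (155.0, 12.0),
    "Cruiser":          (180.0, 15.0),
    "Battleship":       (250.0, 20.0),
    "Aircraft Carrier": (330.0, 25.0),
}

def classify_by_length(length_m, priors=None):
    """Bayes classifier on a single feature: the estimated ship length."""
    priors = priors or {c: 1.0 / len(classes) for c in classes}
    post = {}
    for c, (mu, sigma) in classes.items():
        # Gaussian class-conditional likelihood p(length | class)
        lik = np.exp(-0.5 * ((length_m - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
        post[c] = lik * priors[c]
    z = sum(post.values())
    return {c: p / z for c, p in post.items()}   # normalized posteriors

post = classify_by_length(160.0)
print(max(post, key=post.get))   # most probable Line ship type
```

A 160 m estimate lands nearest the assumed Destroyer mean, so that class receives the highest posterior under these placeholder numbers.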
Laser radar images generally differ from traditional contrast images in that they can be regarded as 3D geometric images. For this reason, methods for classifying man-made objects in laser radar images must be concerned not only with the 3D geometry of the objects but also with the uncertainties present in the images. These uncertainties are mainly due to deviations between registered and real positions, but also to missing data points. We demonstrate how the problem of image uncertainty can be overcome with a technique that is primarily qualitative. The objects are first extracted from the image, after which their edges are determined. In a final step, this representation is transformed into a formal qualitative representation used to classify the objects through a matching process in which the target objects are matched against objects in an object library. Finally, we describe a way to calculate a possibility value that indicates our belief in the result of the match for a given object.
For maximum likelihood and other parameter-based classifiers, it is wrong to assume that noise can be dealt with by removing the mean noise power from the combined signal-and-noise spectrum. Doing this takes no account of the variance of the noise power, leading to the assignment of very low probabilities to probable events and thus to misclassification. Instead, the effect of the new noise level on the parameters of the probability density function should be calculated, and these new parameters used in the probability calculations on the unadjusted signal-and-noise spectrum. Hence the effect of different noise levels may be robustly included in the classifier without the need to train the classifier at a number of different noise levels. This technique of adjusting the database of parameters is then compared to the standard method of manipulating the signal to be classified. This is done by comparing the noise-adjustment algorithms' performance when they are included in a maximum-likelihood, radar range-profile ship classifier with seven classes. The performances of these algorithms are evaluated as a function of range and signal-to-noise ratio. The parameter-adjustment technique is shown to yield much better performance than the traditional signal-adjustment method.
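The contrast between the two approaches can be sketched for a one-bin Gaussian model. All numbers here are made up for illustration; the point is only that subtracting the mean noise power while keeping the clean-signal PDF ignores the noise variance, so the true class is assigned systematically lower likelihoods than when the PDF parameters themselves are adjusted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative one-bin Gaussian model (all values are placeholders):
mu_sig, var_sig = 5.0, 0.5          # clean signal-power PDF of the true class
noise_mean, noise_var = 1.5, 1.0    # additive noise-power statistics

# Observations: signal plus noise, so the true variance is var_sig + noise_var
x = rng.normal(mu_sig + noise_mean, np.sqrt(var_sig + noise_var), 5000)

def loglik(x, mu, var):
    return -0.5 * (x - mu) ** 2 / var - 0.5 * np.log(2 * np.pi * var)

# Signal-adjustment: subtract mean noise power, score against the clean PDF.
# The noise *variance* is ignored, so probable events score too low.
ll_signal_adj = loglik(x - noise_mean, mu_sig, var_sig).mean()

# Parameter-adjustment: fold the noise statistics into the PDF parameters
# and score the raw, unadjusted observations.
ll_param_adj = loglik(x, mu_sig + noise_mean, var_sig + noise_var).mean()

print(ll_signal_adj, ll_param_adj)   # parameter adjustment scores higher
```

Because the signal-adjusted score uses a variance that is too small for the actual observations, its average log-likelihood for the true class is markedly lower, which is exactly the mechanism behind the misclassifications the abstract describes.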
Real-time curve tracking is an important topic in radar image processing, with applications to automated ionogram scaling and remote target tracking. Given a 2D gray-scale image containing only line segments and curves, we want to extract them. For straight-line tracking, the Hough transform can be applied directly. To extract an arbitrary curve, however, the Hough transform incurs much greater processing and data-storage costs. Exploiting the nature of the problem, this paper instead finds individual pieces of curves and then links pieces together to form complete curves. A systematic method is studied that comprises image fuzzification, fuzzy segmentation, sub-curve search, and genetic-algorithm linking. Fuzzification and fuzzy segmentation may be omitted if the original image is clean. Genetic algorithms are designed to link sub-curves in this system. In addition, a fuzzy neural network is proposed and implemented for tracking curves in sequential images.
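The straight-line case the abstract mentions can be sketched with a minimal Hough accumulator: each edge point votes for all (rho, theta) lines passing through it, and accumulator peaks are detected lines. This is a generic textbook sketch, not the paper's curve-linking system.

```python
import numpy as np

def hough_lines(points, n_theta=180, n_rho=200):
    """Vote edge points into a (rho, theta) accumulator; peaks are lines."""
    pts = np.asarray(points, dtype=float)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rho_max = np.hypot(*pts.max(axis=0)) + 1.0
    acc = np.zeros((n_rho, n_theta), dtype=int)
    for x, y in pts:
        rho = x * np.cos(thetas) + y * np.sin(thetas)   # rho = x cos(t) + y sin(t)
        idx = np.round((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
        acc[idx, np.arange(n_theta)] += 1
    r, t = np.unravel_index(acc.argmax(), acc.shape)
    rho = r / (n_rho - 1) * 2 * rho_max - rho_max
    return rho, thetas[t], acc[r, t]

# Points on the line y = x (theta = 3*pi/4, rho = 0) plus one outlier
pts = [(i, i) for i in range(20)] + [(3, 15)]
rho, theta, votes = hough_lines(pts)
print(rho, theta, votes)   # the peak collects the 20 collinear votes
```

Note the cost the abstract alludes to: extending this to arbitrary curves means adding an accumulator dimension per curve parameter, which is why the paper links sub-curves instead.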
A target recognition system is described that uses 3-D mathematical models to simulate radar images. The simulated radar images are created from the radar cross section (RCS) responses of the 3-D models and compared with measured target radar images. The 3-D models consist of several thousand facets, each smaller than the radar resolution. The RCS response of each facet in the models is calculated by a modified geometrical theory of diffraction (GTD) method using the radar frequency and the target aspect angle. The RCS response of each facet is then projected onto a 2-D plane according to the target aspect angle to create the final simulated radar image. The system is verified to be able to simulate even ship radar imagery, despite the difficulty posed by a ship's structural complexity. The recognition system was evaluated by comparing the simulated ship images created from the 3-D models with real ship images obtained by the airborne MITSUBISHI-SAR, which provides X-band 1 m resolution SAR and ISAR images, and the system proved to have a classification accuracy better than 90%.
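The projection step can be sketched in a toy form: rotate the facet centers to the given aspect angle and accumulate each facet's RCS into a 2-D range/cross-range grid. This sketch is a simplification under stated assumptions: geometry only, no GTD computation, incoherent summation, and a made-up "hull" of facets along a line.

```python
import numpy as np

def project_facets(centers, rcs, aspect_deg, n_range=64, n_cross=64, extent=50.0):
    """Accumulate facet RCS into a 2-D (range x cross-range) image for one
    aspect angle.  Toy geometry: rotate facet centres about z, then bin x, y."""
    a = np.radians(aspect_deg)
    rot = np.array([[np.cos(a), -np.sin(a), 0.0],
                    [np.sin(a),  np.cos(a), 0.0],
                    [0.0,        0.0,       1.0]])
    xyz = np.asarray(centers) @ rot.T
    img = np.zeros((n_range, n_cross))
    # Map rotated x -> range bin, rotated y -> cross-range bin
    ix = np.clip(((xyz[:, 0] + extent) / (2 * extent) * n_range).astype(int), 0, n_range - 1)
    iy = np.clip(((xyz[:, 1] + extent) / (2 * extent) * n_cross).astype(int), 0, n_cross - 1)
    np.add.at(img, (ix, iy), rcs)      # incoherent summation; phase is omitted
    return img

# A crude "hull": 300 facets along a 30 m line, uniform unit RCS
centers = np.stack([np.linspace(-15, 15, 300), np.zeros(300), np.zeros(300)], axis=1)
img = project_facets(centers, np.ones(300), aspect_deg=30.0)
print(img.sum())   # total RCS is conserved by the binning
```

In the real system each facet's RCS would come from the modified-GTD calculation at the given frequency and aspect before this projection is applied.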
The detection and recognition of targets obscured by natural or simulated arboreal cover can be facilitated by combining multiple images of the target taken from different viewpoints or look angles. This technique, called multi-look imaging (MLI), has three principal advantages: (1) obscured target features can be revealed in one or more of the views; (2) fusing the information about a given target feature contained in multiple views can yield apparent noise reduction and increased feature resolution; and (3) background features can be reduced via correlation and subtraction, thereby decreasing the target-to-clutter ratio. In Part 1 of this two-paper series, we discussed background and theory that support airborne MLI of ground targets. In this paper, we continue the theory development with an analysis of the geometric projection error inherent in target reconstruction. The effect of projection error derived from focal-plane quantization error during back-projection of a given look or view in model-based target reconstruction is of particular interest, owing to our current research emphasis on ray-projection-based reconstruction algorithms. Additional error sources discussed in this paper include uneven coverage of looks, which can yield space-variant uncertainty in the reconstruction of target features having low information content relative to other target or background features. Simulation results are based on a calibrated model of airborne MLI and are analyzed in terms of computational cost and the accuracy of target reconstruction achievable via stereophotogrammetric methods.
This paper discusses the use of Petri nets to model object matching between an image and a model under different 2D geometric transformations. Such matching finds applications in sensor-based robot control, flexible manufacturing systems, industrial inspection, etc. An approach for describing object structure is presented based on a topological structure relation called the Point-Line Relation Structure (PLRS). It is shown how Petri nets can model the matching process, and how an optimal or near-optimal match can be obtained by tracking the reachability graph of the net. Experimental results show that objects can be successfully identified and located under 2D transformations such as translation, rotation, and scale change, as well as under distortions due to partial occlusion.
Previous work by the authors has studied the problem of searching for projections to low-dimensional spaces that are optimal for classification based upon some training data set. In our earlier work, we relied upon separate training and test data to estimate and correct for overfitting. In this paper, we examine the problems introduced by using separate training and test data. We then propose a new approach based upon searching for projections that maximize the Mann-Whitney test statistic. We discuss efforts made toward calculating the exact distribution of the supremum of this statistic over the Grassmannian manifold of projections to R^1.
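The objective being maximized can be sketched directly: project both classes onto a candidate direction and compute the Mann-Whitney U statistic of the two projected samples. The search over the Grassmannian is omitted here; the data and the two candidate directions are illustrative.

```python
import numpy as np

def mann_whitney_u(x, y):
    """Mann-Whitney U: number of (x_i, y_j) pairs with x_i > y_j
    (ties counted as one half)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    diff = x[:, None] - y[None, :]
    return np.sum(diff > 0) + 0.5 * np.sum(diff == 0)

def projected_u(w, class0, class1):
    """U statistic after projecting both samples onto direction w."""
    w = w / np.linalg.norm(w)
    return mann_whitney_u(class1 @ w, class0 @ w)

rng = np.random.default_rng(1)
c0 = rng.normal(0.0, 1.0, size=(50, 5))
c1 = rng.normal(0.0, 1.0, size=(50, 5)) + np.array([2, 0, 0, 0, 0])

# The direction containing the class separation scores far closer to the
# maximum U = 50 * 50 than an uninformative direction does.
print(projected_u(np.array([1, 0, 0, 0, 0]), c0, c1))
print(projected_u(np.array([0, 1, 0, 0, 0]), c0, c1))
```

Since U/(n0*n1) is an estimate of the two-sample AUC, maximizing U over projection directions is a rank-based (and hence distribution-free) alternative to the parametric separability criteria of the earlier work.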
In response to a threat missile, an interceptor missile with a kinetic warhead (KW) is launched with the intention of intercepting and killing the lethal reentry vehicle (RV) in the exo-atmosphere before it reaches its target. Data from an IR sensor on-board the KW is to be used to discriminate the RV from the other pieces in the field of view. A time-delay neural network (TDNN) is proposed for discrimination. A TDNN was trained using simulated data, and tested using simulated and flight data. The flight data includes IR signatures for RVs, boosters, and thrust termination debris. The TDNN is able to distinguish RVs from other missile parts and debris. This paper describes the performance of a TDNN for discrimination in ballistic missile defense when tested using flight data.
The paper presents a model-evolution methodology for object recognition under dynamic perceptual conditions. The methodology consists of model application, model evolution, and reinforcement learning. Model application is an approach to recognizing objects within a sequence of images acquired under dynamic perceptual conditions; an RBF (radial basis function)-based classifier is applied to classify and segment objects within each image. Model evolution is concerned with the modification of models, which are created off-line or continue to be updated on-line, so that the models can adapt to the next incoming images. Model evolution is achieved with the help of reinforcement learning, which is activated to generate information for model evolution when models must be modified in response to perceived disparities between the models and reality. The methodology has been realized in an adaptive vision system consisting of three main subsystems: a model application system, a reinforcement learning system, and a model evolution system. These have been developed and integrated in a closed loop so that object models can evolve to recognize objects under variable perceptual conditions.
One of the persistent difficulties in applying image-processing algorithms, not just for automatic target recognition but also for associated image-processing and image-understanding tasks, is the optimal choice of algorithms and parameters. First we must select an algorithm to use, and second the actual parameters that algorithm requires. It is also the case that a chosen algorithm applied to a different image class yields results of totally different quality; here we consider three image classes, namely infrared linescan, dd5-Russian satellite, and SPOT imagery. We are now exploring the use of genetic algorithms for parameter and algorithm selection and show how this approach can successfully obtain results that in the past have tended to be obtained somewhat heuristically.
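The selection loop can be sketched as a toy genetic algorithm over a two-parameter configuration. The fitness function below is a stand-in with a known optimum (threshold 0.37, kernel size 5); in the real system it would run the candidate algorithm on an image of the given class and score the result, and the parameter names here are purely hypothetical.

```python
import random

random.seed(42)

def fitness(params):
    # Stand-in for an image-processing quality score; a real system would
    # run the configured algorithm on an image and measure the output.
    threshold, kernel = params
    return -(threshold - 0.37) ** 2 - 0.1 * (kernel - 5) ** 2

def evolve(pop_size=30, generations=40):
    pop = [(random.random(), random.randint(1, 15)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]            # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            t = (a[0] + b[0]) / 2                 # crossover: blend threshold
            k = random.choice((a[1], b[1]))       # crossover: pick kernel size
            if random.random() < 0.2:             # mutation
                t = min(1.0, max(0.0, t + random.gauss(0, 0.05)))
                k = min(15, max(1, k + random.choice((-1, 1))))
            children.append((t, k))
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(best)   # should approach threshold ~0.37, kernel size 5
```

Algorithm selection fits the same template by encoding an algorithm identifier as one more gene, so a single evolutionary run searches the joint algorithm-and-parameter space.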
A radar target recognition system based on genetic minimization algorithms is proposed in this study. A pool of binary-encoded known-target responses is bred so that its fitness with respect to the signature of an unknown target at an unknown azimuth position is maximized. Once the pool has stabilized, the identity of the unknown target is determined as the most dominant bit string in the pool. The performance of the proposed target recognition system, measured as error rate versus signal-to-noise ratio, is discussed.
Adaptive techniques for multi-target tracking have primarily been based on prior assumptions about the target and its background distribution. Statistical distribution theory, on the other hand, demands more complex mathematical modeling, which turns out to be computationally intensive as well. It is hard to deny the role of distribution theory and probabilistic approaches in Multi-Target Tracking (MTT), particularly over the last two decades. However, despite the strength of statistical techniques and Bayesian approaches, the number of sensor samples needed to accurately model today's highly dynamic targets and their complex maneuvering capabilities forces rather unrealistic assumptions about target dynamics. Practical target maneuvers with today's technology can be so short in duration that constant- and uniform-acceleration models held over several samples may easily result in loss of track; the target can effectively go undetected for many samples while making sharp turns. In recent years, there has been a paradigm shift toward fuzzy logic and neural-network techniques. The membership functions of a fuzzy controller and the nonlinear mapping capability of a trained neural network have made these two different technologies a viable combined system. The objective of this paper is to survey fuzzy-logic technology as applied to target tracking and to discuss its relation to neural networks when the two are combined.
The NASA Search and Rescue Mission originated to develop a space-based emergency beacon detection system. The Sarsat system, along with its Russian counterpart Cospas, has been highly successful and is credited with saving over 8,000 lives worldwide during its 16 years of operation. Now, new techniques are emerging that may make it possible to locate downed aircraft wreckage from space without the need for a functioning emergency beacon. This paper reviews existing space and airborne systems and discusses the potential for spaceborne application of recent advances in interferometric SAR, coherent change detection, real-time processing, and polarimetric ATR to the search and rescue problem.
Over the SAR2 program's seven-year history, a great deal of original research has been done in automatic target detection for identifying aircraft crash-site locations in synthetic aperture radar (SAR) imagery. The efforts have focused on using the polarimetric properties of the radar signal both to improve image quality and to distinguish crash sites from the natural background. A crash site's polarimetric 'signature' is expected to be present even in the absence of a strong intensity return. Several of these advanced methods are summarized and a methodology for their application is described. Several detection results are presented using data from the NASA/JPL AirSAR.
A key question in SAR-aided search is the relative utility of L-Band versus P-Band data. A study has been undertaken using target data collected by the NASA Search and Rescue Mission. Comparisons are made based on the ability to detect downed aircraft by use of several polarimetry-based automatic detection techniques developed by the NASA Search and Rescue Mission. Results obtained so far from this study are presented in the paper.
The principal purpose of the Beaconless Search and Rescue program at Goddard Space Flight Center (GSFC) is to utilize synthetic aperture radar (SAR) for the efficient and rapid location of recent small aircraft crashes. An additional side benefit might prove to be the detection and discovery of long lost or forgotten historic aircraft that have now become of immense value for museum display or among wealthy collectors. As the GSFC SAR2 program matures and its achievements in SAR target detection become more widely available, they will be of use to amateur and professional airplane hunters. We recommend that such ancillary benefits be kept in mind during the continued development and testing of such equipment, which would be of benefit to all future generations concerning the history of aviation. We welcome and encourage all participants to notify organizations such as ours of the discovery of any historic aircraft wreckage or intact abandoned old aircraft throughout the world.
The wavenumber shift is an important tool in multiple pass synthetic aperture radar interferometry. In addition to overcoming baseline decorrelation, it has proven to have additional benefits. Chief among these is the ability to filter out much of the decorrelated signal, leaving the coherent portion. In the presence of foliage induced temporal decorrelation, this corresponds to filtering out much of the foliage return while strengthening any coherent ground return. We will examine this and other benefits of the wavenumber shift within the context of the Search and Rescue SAR program. An example based on ERS 1/2 data is provided.
Time-critical search and rescue (S&R) operations often require the detection of small objects in a vast area. While an airborne search can cover the area, no operational instrument currently exists that can actually replace the human operator. By producing a spectral signature for each pixel in a spatial image, multi- and hyperspectral imaging (HSI) sensors provide a powerful capability for automated detection of subpixel-size objects that would otherwise remain unresolved in conventional imagery. This property of HSI naturally lends itself to S&R operations. A lost hiker or skier, a life raft adrift in the ocean, a downed pilot, or small-aircraft wreckage can be detected from relatively high altitude based on its unique spectral signature. Moreover, the spectral information obtained allows the search craft to operate at substantially reduced spatial resolution, thereby increasing scene coverage without a significant loss in detection sensitivity. The paper demonstrates the detection of objects as small as 1/10 of an image pixel from a sensor flying at over 6 km altitude. A subpixel object-detection algorithm using HSI, based on local image statistics without reliance on spectral libraries, is presented. The technique is amenable to fast signal processing, and the requisite hardware can be built from inexpensive off-the-shelf technology. This makes HSI a highly attractive tool for real-time, autonomous, instrument-based implementation. It can complement current visual-based S&R operations or emerging synthetic aperture radar sensors, which are much more expensive.
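A classic detector in the same spirit, using only scene statistics and no spectral library, is the RX anomaly detector; the sketch below is an illustration of that family, not the paper's algorithm, and uses a synthetic cube with a hypothetical target spectrum mixed into one pixel.

```python
import numpy as np

def rx_anomaly(cube):
    """RX anomaly detector: Mahalanobis distance of each pixel's spectrum
    from the scene mean and covariance (global statistics here; a sliding
    local window is the usual refinement)."""
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands)
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    inv = np.linalg.inv(cov + 1e-6 * np.eye(bands))   # regularized inverse
    d = X - mu
    scores = np.einsum("ij,jk,ik->i", d, inv, d)      # per-pixel Mahalanobis
    return scores.reshape(rows, cols)

rng = np.random.default_rng(0)
cube = rng.normal(size=(32, 32, 30))       # synthetic background clutter
target = np.full(30, 8.0)                  # hypothetical target spectrum
cube[5, 7] += 0.3 * target                 # 30% subpixel fill at pixel (5, 7)
scores = rx_anomaly(cube)
peak = np.unravel_index(scores.argmax(), scores.shape)
print(peak)   # location of the strongest anomaly
```

Because the detector scores each pixel against the local background distribution, a target occupying only a fraction of a pixel can still stand out whenever its mixed spectrum is a statistical outlier, which is the mechanism behind the subpixel detections reported above.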
Synthetic aperture radar (SAR) is uniquely suited to help solve the Search and Rescue problem since it can be utilized day or night and through dense fog or thick cloud cover. Other papers in this session, and in this session in 1997, describe the various SAR image-processing algorithms being developed and evaluated within the Search and Rescue Program. All of these approaches to using SAR data require substantial amounts of digital signal processing: for the SAR image formation, and possibly for the subsequent image processing. In recognition of the demanding processing that an operational Search and Rescue Data Processing System (SARDPS) will require, NASA/Goddard Space Flight Center and NASA/Stennis Space Center are conducting a technology demonstration utilizing SHARC multi-chip modules from Boeing to perform SAR image-formation processing.
The most important parameter in Search and Rescue is the time it takes to locate the downed aircraft and rescue the survivors. The resulting requirement for wide-area coverage, fine resolution, and day-night all-weather operation dictates the use of a SAR sensor. The time urgency dictates a real-time or near real-time SAR processor. This paper presents alternative real-time architectures and gives the results of feasibility studies of the enabling technologies, including new work by the authors in the area of SAR data compression.
In this paper we introduce the notion of a focused filter and discuss its application to the problem of target detection. A focused filter is a filter designed to give a maximum response to one pose of the target; this pose is called the focus of the filter. As the pose of the target deviates from the focus, the filter's response should exhibit a graceful (and controlled) degradation. When a test image is presented, the responses of all focused filters are collected into a vector, which exhibits a peak of the same shape as the one used in designing each focused filter. Preliminary simulation experiments show the robustness of this method.
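As an illustration only (the bar target, template construction, and pose grid below are hypothetical stand-ins, not the authors' design), a bank of focused filters can be sketched as a set of templates, each tuned to one pose, whose responses to a test image are collected into a vector:

```python
import numpy as np

def make_template(pose_deg, size=32):
    # hypothetical target: a bar through the image center at angle pose_deg
    y, x = np.mgrid[-size // 2:size // 2, -size // 2:size // 2]
    t = np.deg2rad(pose_deg)
    return (np.abs(x * np.cos(t) + y * np.sin(t)) < 2).astype(float)

poses = list(range(0, 180, 15))            # the focus of each filter
bank = [make_template(p) for p in poses]   # one focused filter per pose

test_image = make_template(45)             # target at one filter's focus
responses = np.array([np.sum(f * test_image) for f in bank])
best_pose = poses[int(np.argmax(responses))]
# responses fall off smoothly as a filter's focus moves away from the
# target's true pose, so the response vector peaks at the correct pose
```

Here the graceful degradation away from the focus comes simply from the shrinking overlap between rotated templates; the paper's filters are designed to control that fall-off explicitly.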
A new correlation filter design method is introduced that uses multiple circular harmonic functions (CHFs) and allows us to optimally trade off correlation plane performance while achieving desired correlation peak values in response to in-plane rotation of input images. We refer to such filters as optimal trade-off circular harmonic function (OTCHF) filters. The underlying filter design steps and illustrative numerical results are presented.
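The circular harmonic machinery behind such filters can be sketched in a few lines (a hypothetical one-ring example, not the OTCHF design procedure itself): sampling an image on a ring of constant radius and taking an FFT over angle yields circular harmonic coefficients, and an in-plane rotation of the input only shifts their phases, which is what lets a designer prescribe how the correlation peak behaves under rotation.

```python
import numpy as np

# sample a hypothetical image on one ring of constant radius
theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)
f = lambda th: 1.0 + 0.5 * np.cos(2 * th) + 0.25 * np.sin(3 * th)

c0 = np.fft.fft(f(theta))            # circular harmonic coefficients
alpha = np.pi / 5                    # in-plane rotation of the input
c1 = np.fft.fft(f(theta - alpha))    # coefficients of the rotated input

# rotation multiplies the k-th harmonic by exp(-1j * k * alpha): the
# magnitudes are rotation invariant, and a filter built from chosen
# harmonics can therefore shape the peak's response versus rotation
```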
Recent advances in the design of correlation filters have made them an important tool for automatic target recognition (ATR). In particular, Distance Classifier Correlation Filters (DCCFs) introduced by Mahalanobis et al. have proven attractive in ATR experiments. The success of DCCFs is mainly due to a methodology that is designed to provide good distortion tolerance and discrimination. Since DCCFs maximize 'distances' between classes, there is a natural tendency to misinterpret DCCFs as nothing more than another form of Fisher Linear Discriminant Functions (LDFs). The DCCF algorithm combines properties of a correlator with signal processing techniques and concepts to yield novel and efficient ways for recognizing targets in images. Although parallels exist between the DCCF algorithm and some well known concepts in pattern recognition such as the Fisher LDF, several important differences between the two should be clearly understood. Fundamentally, the Fisher LDF is a vector onto which feature vectors are projected, whereas DCCFs require quadratic distance calculations in a transformed space. Fisher LDFs are usually applied to features computed from an image, whereas DCCFs are designed to work with images. These and other important differences between the two approaches lead to very different end results. We will clearly identify these differences and also point out the apparent similarities that lead to potential confusion between the two algorithms. A subset of the public release MSTAR data will be used for illustrative comparisons. This clarifying presentation is essential for a better understanding of DCCFs and a fair comparison with other approaches.
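The structural difference called out above can be made concrete with a toy two-class example (synthetic data; the transform `A` below is a hypothetical stand-in for the DCCF transform, not Mahalanobis et al.'s actual filter design): the Fisher LDF reduces a sample to a single projection onto one vector, whereas a DCCF-style decision compares quadratic distances computed in a transformed space.

```python
import numpy as np

rng = np.random.default_rng(0)
X1 = rng.normal([2, 0], 0.3, size=(50, 2))   # class-1 samples
X2 = rng.normal([0, 2], 0.3, size=(50, 2))   # class-2 samples
m1, m2 = X1.mean(0), X2.mean(0)
Sw = np.cov(X1.T) + np.cov(X2.T)             # pooled within-class spread

# Fisher LDF: a single projection vector, classification by a scalar threshold
w = np.linalg.solve(Sw, m1 - m2)
def fisher(x):
    return 1 if x @ w > (m1 + m2) / 2 @ w else 2

# DCCF-style decision (illustrative analogue): quadratic distances to the
# class means, computed in a transformed space A rather than on a 1-D axis
A = np.linalg.cholesky(np.linalg.inv(Sw))    # hypothetical transform
def dccf_like(x):
    d1 = np.sum((A.T @ (x - m1)) ** 2)
    d2 = np.sum((A.T @ (x - m2)) ** 2)
    return 1 if d1 < d2 else 2
```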
We describe a general approach for the representation and recognition of 3D objects, as it applies to Automatic Target Recognition (ATR) tasks. The method is based on locally adaptive target segmentation, biologically motivated image processing and a novel view selection mechanism that develops 'visual filters' responsive to specific target classes to encode the complete viewing sphere with a small number of prototypical examples. The optimal set of visual filters is found via a cross-validation-like data reduction algorithm used to train banks of back propagation (BP) neural networks. Experimental results on synthetic and real-world imagery demonstrate the feasibility of our approach.
The image grand tour is a method for visualizing multispectral or multiple registered images. In many settings, several registered images of the same scene are collected. This most often happens when multispectral images are collected, but may happen in other settings as well. A multispectral image can be viewed as an image in which each pixel has a multivariate vector attached. The desired goal is to combine the multivariate vector into a single value which may be rendered in gray scale as an image. One way of exploring multivariate data has been by means of the grand tour. The grand tour in a conventional sense is a continuous space-filling path through the set of two-dimensional planes; data are then projected into these two-planes. Traditionally the data analyst views the grand tour until an interesting configuration of the data is seen. In our image grand tour, the grand tour is a continuous space-filling path through the set of one-planes, i.e., lines. The idea of the image grand tour is then to project the vector attached to each pixel into the one-dimensional space and render each as a gray-scale value. Thus we obtain a continuously changing gray-scale image of the multispectral scene. As with conventional data analysis, we watch the scene until an interesting configuration of the image is seen. In this talk we will discuss some of the theory associated with one-dimensional grand tours. We illustrate this talk with multispectral (six-band) images of a minefield, and show how the grand tour can create linear combinations of the multispectral images which specifically highlight mines in a minefield.
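A minimal one-dimensional tour frame can be sketched as follows (the six-band cube, the band-3 anomaly standing in for a mine, and the particular smooth path over unit vectors are all hypothetical illustrations):

```python
import numpy as np

rng = np.random.default_rng(1)
cube = rng.random((32, 32, 6))          # hypothetical 6-band registered image
cube[10:14, 10:14, 3] += 2.0            # band-3 anomaly standing in for a mine

def tour_frame(cube, t):
    # one-dimensional grand tour: a smoothly varying unit vector in band
    # space; any dense smooth path over unit vectors would serve here
    k = np.arange(cube.shape[-1])
    a = np.cos((k + 1) * t)             # hypothetical smooth path
    a /= np.linalg.norm(a)
    frame = cube @ a                    # project each pixel's band vector
    lo, hi = frame.min(), frame.max()
    return (frame - lo) / (hi - lo)     # rescale to [0, 1] gray levels
```

Watching `tour_frame(cube, t)` as `t` increases gives the continuously changing gray-scale image the abstract describes; the analyst stops at values of `t` where the linear combination makes the anomalies stand out.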
The detection and identification of regions of interest in spatial or temporal data is a common concern in automatic target recognition. One approach to region of interest identification involves the use of spatial scan statistics. A difficulty arises due to competing concerns: small scan windows are required for potentially small targets while larger scan windows are necessary to improve the accuracy of the detector. When the scan statistics are mixture model density estimates, a borrowed strength profile likelihood approach can be shown to be superior to conventional likelihood estimators.
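A basic spatial scan statistic with a fixed window can be sketched as follows (the Poisson background and window size are illustrative assumptions; the paper's mixture-model density estimates and borrowed-strength estimators are not reproduced here):

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(6)
field = rng.poisson(1.0, (40, 40)).astype(float)   # background counts
field[20:24, 20:24] += 5                            # dense region of interest

# scan statistic: the maximum window sum over all positions of a fixed
# window; small windows suit small targets, large windows reduce variance
window = np.ones((4, 4))
scan = ndimage.convolve(field, window, mode="constant")
peak = np.unravel_index(scan.argmax(), scan.shape)
```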
Statistical smoothing methods are useful for finding important and nonobvious structure in data. However, some of the features discovered in this way can be spurious sampling artifacts. The SiZer approach (based on studying statistical SIgnificance of ZERo crossings of smoothed estimates) to analyzing which visible features represent important underlying structure is discussed.
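A stripped-down, regression-flavored version of the idea can be sketched as follows (known noise level, a single bandwidth, and a crude pointwise test; this is a sketch of the zero-crossing significance idea, not the published SiZer methodology):

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0, 1, 200)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, x.size)  # noisy curve

def sizer_row(x, y, h, sigma=0.2):
    # one row of a SiZer map: +1 where the smoothed curve is significantly
    # increasing at bandwidth h, -1 where significantly decreasing, else 0
    flags = np.zeros(x.size, dtype=int)
    for i, x0 in enumerate(x):
        w = np.exp(-0.5 * ((x - x0) / h) ** 2)     # Gaussian kernel weights
        dw = (x - x0) / h ** 2 * w                 # d(weights)/dx0
        sw = w.sum()
        c = dw / sw - (dw.sum() / sw) * (w / sw)   # slope estimate = c @ y
        slope, se = c @ y, sigma * np.sqrt(c @ c)  # assumes sigma known
        if abs(slope) > 2 * se:
            flags[i] = 1 if slope > 0 else -1
    return flags
```

Features whose slope cannot be distinguished from zero at any bandwidth are the candidates for spurious sampling artifacts; the full SiZer method examines a whole family of bandwidths simultaneously with proper multiplicity adjustment.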
Classification of ships by their magnetic signatures is of great importance in the development of magnetic sea mines. This work concerns the use of a neural network classification system combined with the relevant-features method to solve this problem. Alternatively, we use genetic algorithm techniques to train the neural network. We compare both approaches in order to identify the best characteristics of each.
A new type of deformable model is presented that is able to combine some of the characteristics of both snakes and templates. It can be used to segment and recognize two dimensional objects when only vague prior knowledge about their shapes is available. A jump-diffusion process is used to fit the template to the image. The jumps allow the template to undergo abrupt discontinuous changes in shape and position and to decide among multiple target models. The diffusion process allows the template to perform continuous flowing deformations like a snake. A prior shape model is described that uses the local and global characteristics of each different target class. An efficient form for the image likelihood is given that extends to multiple attributes and multiple images. The jump transition kernel defines the probabilities of the template jumping to a new state. This is difficult to generate and sample in practice, though. To allow for this, a method is described in which a marginal transition kernel is generated by integrating over the continuous internal parameters for subsets of jumps. This makes the sampling problem much easier while still providing effective inferencing. The relation of this approach to active contours and region competition is discussed. It is shown that, with the appropriate choice of prior and likelihood, snakes can easily be modelled within the deterministic part of the diffusion process. The method is demonstrated with the detection of buildings and planes in infrared and optical images, and a comparison with an active contour is also given.
Three different linear transformations have been examined for their potential use as feature extractors for an automatic target recognition classifier. These transformations are based on a set of eigen targets, which are obtained through one of the following three methods: principal component analysis, the eigen separation transform, or the Fisher linear discriminant. From the sets of eigen targets obtained through each of the above methods, projection values of an input image are computed and fed to one or more multilayer perceptrons (MLPs) for training and testing purposes. With a fixed-structure MLP, each of the different eigen target sets is examined for its effect on the final recognition performance.
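The projection-value features feeding the MLPs can be sketched for the PCA case (random stand-in chips; under this scheme the eigen separation transform and Fisher variants would differ only in how the eigen targets themselves are computed):

```python
import numpy as np

rng = np.random.default_rng(4)
train = rng.random((100, 16 * 16))      # 100 hypothetical target chips
mean = train.mean(0)

# eigen targets via PCA: the top right-singular vectors of the centered
# training matrix are the leading eigenvectors of its covariance
U, s, Vt = np.linalg.svd(train - mean, full_matrices=False)
eigen_targets = Vt[:8]                   # keep the 8 leading eigen targets

def features(img):
    # projection values that would be fed to the MLP classifier
    return eigen_targets @ (np.ravel(img) - mean)
```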
This paper presents a morphological filtering technique that can be used for clutter suppression or target feature extraction. By using a special ring-shaped structuring element, the proposed morphological filter can either preserve or eliminate objects in sensor imagery based on size and shape characteristics. The ring-shaped structuring element makes the filter invariant to target rotation. Thus, detection performance is substantially improved for cases where the target width information is either inaccurate or insufficient for successful differentiation between targets and image clutter.
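A sketch of size-selective filtering with a ring-shaped structuring element follows (the ring radii, the synthetic image, and the use of plain binary opening are illustrative assumptions, not the paper's exact filter):

```python
import numpy as np
from scipy import ndimage

# ring-shaped structuring element: rotating a target does not change
# whether the ring fits inside it, so the filter is rotation invariant
yy, xx = np.mgrid[-4:5, -4:5]
ring = (np.hypot(yy, xx) >= 3) & (np.hypot(yy, xx) <= 4)

img = np.zeros((32, 32), dtype=bool)
ys, xs = np.mgrid[:32, :32]
img[(ys - 16) ** 2 + (xs - 16) ** 2 <= 64] = True  # target: disk of radius 8
img[4, 4] = True                                   # small clutter speck

# opening eliminates objects the ring cannot fit inside (the speck) while
# preserving objects large enough to contain it (the disk)
opened = ndimage.binary_opening(img, structure=ring)
```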
Our pattern-theoretic approach to automatic target recognition for infrared scenes combines structured and unstructured representations: rigid, 3-D faceted models for known targets of interest and flexible, simply connected shapes to accommodate the unknown 'clutterers' that the algorithm may encounter. The radiant intensities of both kinds of targets form nuisance variables which are incorporated into the parameter space. Statistical inference proceeds by simulating hypothesized scenes and comparing them to the collected data via a likelihood function. For a given target pose, we derive closed-form expressions for estimates of the thermodynamic variables via a weighted least-squares approximation. Since the number of objects in the scene, both rigid and flexible, is unknown and must be estimated, the parameter space is a union of subspaces of varying dimension. Without constraints on the model order, scene descriptions may become too complex. We apply Rissanen's minimum description length (MDL) principle, which offers a mathematical foundation for balancing the brevity of descriptions against their fidelity to the data. For continuous parameters, the description length involves the log-determinant of the empirical Fisher information matrix. The relationship of Rissanen's MDL to Schwarz's application of Laplace's method of integration and to the Cramer-Rao bound is discussed. Examples of likelihood surfaces and associated complexity penalties are given for synthetic tank data. In these experiments, the minimum description length approach correctly deduces the number of thermodynamic parameters.
This paper presents an empirical evaluation of a number of recently developed Automatic Target Recognition algorithms for Forward-Looking InfraRed (FLIR) imagery using a large database of real second-generation FLIR images. The algorithms evaluated are based on convolutional neural networks (CNN), principal component analysis (PCA), linear discriminant analysis (LDA), learning vector quantization (LVQ), and modular neural networks (MNN). Two model-based algorithms, using Hausdorff metric based matching and geometric hashing, are also evaluated. A hierarchical pose estimation system using CNN plus either PCA or LDA, developed by the authors, is also evaluated using the same data set.
The performance of many different ATR algorithms has been quantified for more than twenty years. These algorithms have been tested on data sets of significantly varying difficulty; however, data set difficulty has previously been characterized only coarsely, partitioned by target operational state and meteorological environment, and typically without mention of the correlation between the training and test sets. In this paper we quantify a signal-to-clutter measure (SCM) directly against ATR performance, specifically the typical probabilities of detection (Pd) and recognition (Pr) and the false alarm rate. This SCM provides a basis for knowing what these performance statistics actually mean, since a 'good' or 'bad' set of performance numbers, taken without quantified knowledge of the difficulty of the data, generally neither reflects the limitations or capabilities of the ATR algorithm(s) nor provides an especially relevant basis for comparison.
Previous research indicates that geometric hashing can provide a reliable and transformation independent representation of a target. The characterization of a target object is obtained by establishing a vector basis relative to a number of interest points unique to the target. The representation is invariant under both affine and geometric transformations of the target interest points. A previous paper discusses the complexity measures associated with task division and target image processing using geometric hashing. These measures were used to determine the areas which most benefit from implementation in hardware, and an architecture design for a high speed, hardware assisted, geometric hashing approach to target recognition was described. To confirm the performance of a parallel hardware approach to target recognition based on geometric hashing, a complete component level simulation of the proposed hardware has been developed using OBJECTSIM, an object oriented architecture modeling tool developed at La Trobe University for simulating concurrent systems. This paper describes the structure of the architecture and its simulation model in detail. Several hardware configurations are presented and their comparative performance evaluated for a suite of test images. The hardware performance specifications required for real time target recognition from video input are presented and discussed. The paper concludes that, on suitable images, real time target recognition can be achieved.
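The basis-relative, affine-invariant representation at the heart of geometric hashing can be sketched as follows (toy interest points and an arbitrary affine map for illustration, not the paper's hardware pipeline):

```python
import numpy as np

def affine_coords(points, i, j, k):
    # coordinates of all points in the affine basis with origin p_i and
    # axes (p_j - p_i), (p_k - p_i); these coordinates are invariant to
    # any invertible affine transform applied to the whole point set
    o = points[i]
    B = np.stack([points[j] - o, points[k] - o], axis=1)  # 2x2 basis
    return np.linalg.solve(B, (points - o).T).T

pts = np.array([[0., 0.], [4., 1.], [1., 3.], [2., 2.], [5., 4.]])
A = np.array([[1.2, -0.4], [0.3, 0.9]])    # arbitrary invertible linear part
moved = pts @ A.T + np.array([7., -2.])    # affine-transformed interest points
```

In geometric hashing these invariant coordinates are quantized and used as keys into a hash table built from the model's interest points, which is why recognition survives viewpoint change.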
This paper describes two applications of a model based target recognition approach that employs an efficient interpretation tree search algorithm for matching 3D model features to sensed features extracted from 2D imagery. The algorithm is tolerant to missing and incomplete features and makes optimal use of geometric constraints to greatly reduce search time while guaranteeing an optimal match. When a target is recognized, a minimum error estimate of its location is made. The algorithm requires that a rough estimate of the sensor view point be available prior to matching. Two applications of this algorithm are described, one for FLIR and one for SAR processing. For the FLIR application 3D models are constructed using linear space curves, quadratic space curves, and quadric surfaces. The features extracted from the imagery are straight line segments, representing the projection of linear space curves, and conic sections, representing the projection of quadratic space curves, and quadric surface limbs. For the SAR application the target models specify the 3D location of model scatterers, and the sensed features are the 2D locations of the returns detected in the SAR image. In the SAR application, the target was assumed to be mobile so special processing was necessary to handle the initial view point uncertainty. Experimental test results of these applications are described.
A new multi-stage technique is presented for segmentation of targets of interest in synthetic aperture radar (SAR) data. The method creates an initial coarse segmentation using a histogram-based approach that labels each pixel as foreground or background. The extents of targets of interest are then determined using a hierarchical clustering stage that utilizes a novel weighting of intensity and pixel position. Finally, each potential target's segmentation is improved using probabilistic relaxation labeling. The approach relaxes the typical region-based segmentation constraint that only contiguous pixels can compose a segment. The technique is useful both for target segmentation and as a pre-processing step to verify the fidelity of artificially generated data against real data.
The identification of ground control on photographs or images is usually carried out by a human operator, who uses natural skills to make interpretations. In Digital Photogrammetry, which uses techniques of digital image processing, extraction of ground control can be automated by using an approach based on relational matching and a heuristic that uses the analytical relation between straight features of object space and their homologues in image space. A built-in self-diagnosis is also used in this method. It is based on implementation of the data snooping statistical test in the process of spatial resection using the Iterated Extended Kalman Filtering (IEKF). The aim of this paper is to present the basic principles of the proposed approach and results based on real data.
This work investigates the effect of lossy, wavelet-based image compression on ATR performance using SAR imagery. Specifically, we use the Lincoln Laboratory-designed ATR discriminator to consider how ATR performance varies with compression ratio. We vary compression ratio using two variables: threshold value and quantization number. First, we consider the problem of optimizing performance through the choice of compression variables, given a selected feature set. Next, we consider the problem of selecting the feature set that optimizes performance, given a specified compression ratio. We find that when features are classified by type, their performance as a function of compression ratio can be generalized.
This paper presents a linear system approximation for automated analysis of passive, long-wave infrared (LWIR) imagery. The approach is based on the premise that for a time varying ambient temperature field, the ratio of object surface temperature to ambient temperature is independent of amplitude and is a function only of frequency. Thus, for any given material, it is possible to compute a complex transfer function in the frequency domain with real and imaginary parts that are indicative of the material type. Transfer functions for a finite set of ordered points on a hypothesized object create an invariant set for that object. This set of variates is then concatenated with another set of variates (obtained either from the same object or a different object) to form two random complex vectors. Statistical tests of affine independence between the two random vectors are facilitated by decomposing the generalized correlation matrix into canonical form and testing the hypothesis that the sample canonical correlations are all zero for a fixed probability of false alarm (PFA). In the case of joint Gaussian distributions, the statistical test is a maximum likelihood test. Results are presented using real images.
Dimensionality reduction is one way to reduce the computational load before analysis is attempted on massive high-dimensional data sets. It would be beneficial to have dimensionality reduction methods where the transformation can be updated recursively based on either known or partially identified data. This paper documents some of our recent work in dimensionality reduction that has applications to real-time automatic pattern recognition systems. Fisher's Linear Discriminant (FLD) is one method of reducing the dimensionality in pattern recognition applications where the covariances of each target group are the same. We develop two recursive versions of the FLD that are appropriate for the two-class case. The first is based on the assumption that it is known which class each new data point belongs to. This could be used with massive data sets where each observation is labeled with the true class and must be processed as it is obtained to build the classifiers. The other version recursively updates the FLD based on partially classified data. The FLD and other reduction methods such as principal component analysis offer global dimensionality reduction within the framework of linear algebra applied to covariance matrices. In this presentation, we describe local methods that use both mixture models and nearest neighbor calculations to construct local versions of these methods. These new versions for local dimensionality reduction provide increased classification accuracy in lower dimensions.
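The known-label recursive case can be sketched with running class means and scatter matrices (a Welford-style update; this is a sketch consistent with the description above, not necessarily the authors' exact recursion):

```python
import numpy as np

class RecursiveFLD:
    """Two-class Fisher direction, updated one labeled sample at a time."""

    def __init__(self, d):
        self.n = [0, 0]
        self.mean = [np.zeros(d), np.zeros(d)]
        self.scatter = [np.zeros((d, d)), np.zeros((d, d))]

    def update(self, x, label):
        # Welford-style recursive update of the class mean and scatter,
        # so each observation can be processed as it arrives and discarded
        self.n[label] += 1
        delta = x - self.mean[label]
        self.mean[label] += delta / self.n[label]
        self.scatter[label] += np.outer(delta, x - self.mean[label])

    def direction(self):
        Sw = self.scatter[0] + self.scatter[1]   # pooled within-class scatter
        return np.linalg.solve(Sw, self.mean[0] - self.mean[1])

rng = np.random.default_rng(3)
X0 = rng.normal(0.0, 1.0, (20, 2))   # hypothetical class-0 samples
X1 = rng.normal(2.0, 1.0, (20, 2))   # hypothetical class-1 samples
fld = RecursiveFLD(2)
for v in X0:
    fld.update(v, 0)
for v in X1:
    fld.update(v, 1)
```

The recursive result matches the batch FLD computed from all the data at once, which is the property that makes the streaming formulation useful for massive labeled data sets.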
In this paper we describe a hybrid method of using collected and computer generated signatures for Synthetic Aperture Radar (SAR) Automatic Target Recognition (ATR) algorithms. Currently, there is significant activity in developing both model-based (e.g. MSTAR) and collected template-based (e.g. STARLOS, SAIP) approaches. Each approach has significant strengths and weaknesses. The strength of the model-based approach is that it can finely sample the many competing hypotheses that describe the data under test. Collected template-based approaches have a strong sense of realism due to the fact that the templates were collected using actual deployed targets and SAR systems. A hybrid approach attempts to meld the strengths of both. We describe methodologies for determining which segments of a reference signature should be provided by collected data or models and how they should be combined. We also show that these new hybrid templates outperform either a model-only or collected template-only approach for target classification. ATR performance results are provided in the form of ROC curves. Also, some topics for future research are discussed.
Automated target recognition has benefited from cross- fertilization of development in related subdisciplines of image processing such as medical imaging. For example, the application of computerized tomography to synthetic aperture radar (SAR) imaging has produced 3-D reconstructions of ground targets on an experimental basis. In practice, by acquiring multiple views of a target (also called multi-look imaging -- MLI) that are subsequently merged mathematically, one can obtain reasonable approximations to higher-dimensional reconstructions of a target of interest. For example, multiple two-dimensional airborne images of ground objects can be merged via the Fourier transform (FT) to obtain one or more approximate three-dimensional object reconstructions. Additional methods of 3D model construction (e.g., from affine structure) present advantages of computational efficiency, but are sensitive to positioning errors. In this series of papers, analysis of MLI is presented that applies to various scenarios of nadir, near-nadir, or off-nadir viewing with a small or large number of narrow-or wide-angle views. A model of imaging through cover describes the visibility of a given target under various viewing conditions. The model can be perturbed to obtain theoretical and simulated predictions of target reconstruction error due to (1) geometric projection error, (2) focal-plane quantization error and camera noise, (3) possible sensor platform errors, and (4) coverage of looks. In this paper, an imaging model is presented that can facilitate prediction of limiting sensor geometry and view redundancy under various imaging constraints (e.g., target and cover geometry, available range of look angles, etc.). Study notation is a subset of image algebra, a rigorous, concise, computationally complete notation that unifies linear and nonlinear mathematics in the image domain. 
Image algebra was developed at the University of Florida over the past decade under the sponsorship of DARPA and the U.S. Air Force, and has been implemented on numerous sequential workstations and parallel processors. Hence, our algorithms are rigorous and widely portable.
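The Fourier-transform merging of looks described above rests on the projection-slice theorem. A minimal numerical check (the object and array sizes are illustrative, not taken from the paper):

```python
import numpy as np

# Projection-slice theorem: the 1-D FFT of an object's projection equals
# the central slice of the object's 2-D FFT taken along the same direction.
obj = np.zeros((64, 64))
obj[24:40, 28:36] = 1.0                 # a toy "target"

proj = obj.sum(axis=0)                  # one nadir "look": project along rows
slice_1d = np.fft.fft(proj)
central = np.fft.fft2(obj)[0, :]        # matching slice of the 2-D spectrum

match = np.allclose(slice_1d, central)  # the two agree to machine precision
```

Each additional look fills another slice of the object's frequency-space representation, which is why view coverage and look-angle range bound the achievable reconstruction quality.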
Our work focuses on pose estimation of ground-based targets viewed by multiple sensors, including forward-looking infrared (FLIR) systems and laser radar (LADAR) range imagers. Data from these two sensors are simulated using CAD models of the targets of interest in conjunction with Silicon Graphics workstations, the PRISM infrared simulation package, and the statistical model for LADAR described by Green and Shapiro. Using a Bayesian estimation framework, we quantitatively examine both pose-dependent variations in performance and the relative performance of the two sensors when their data are used separately or optimally fused. Using the Hilbert-Schmidt norm as an error metric, the minimum mean squared error (MMSE) estimator is reviewed and its mean squared error (MSE) performance analysis is presented. Results of simulations are presented and discussed.
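The Hilbert-Schmidt MMSE idea can be sketched for a planar rotation: average the rotation matrices under the posterior, then project the average back onto the rotation group. This is an illustrative reconstruction (the posterior and angle grid are invented), not the authors' implementation:

```python
import numpy as np

def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def hs_mmse_pose(thetas, posterior):
    """MMSE pose under the Hilbert-Schmidt (Frobenius) norm: the
    posterior mean of the rotation matrices, projected onto SO(2)."""
    M = np.tensordot(posterior, np.stack([rot(t) for t in thetas]), axes=1)
    U, _, Vt = np.linalg.svd(M)          # nearest rotation via SVD
    D = np.diag([1.0, np.linalg.det(U @ Vt)])
    R = U @ D @ Vt
    return np.arctan2(R[1, 0], R[0, 0])

# a sharply peaked (von Mises-like) posterior around 30 degrees
thetas = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
post = np.exp(20.0 * np.cos(thetas - np.deg2rad(30.0)))
post /= post.sum()
est = hs_mmse_pose(thetas, post)
```

Averaging matrices rather than angles avoids the wrap-around ambiguity of circular quantities, which is the motivation for the Hilbert-Schmidt metric.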
This paper investigates the use of Continuous Wavelet Transform (CWT) features for detection of targets in low-resolution FLIR imagery. We specifically use CWT features corresponding to the integration of target features over all relevant scales and orientations. These features are combined with nonlinear transformations (thresholding, enhancement, morphological operations). We compare our previous results using the Mexican hat wavelet with those obtained using two types of directional wavelets: the Morlet wavelet and the Cauchy wavelets. The algorithm was tested on the TRIM2 database.
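A single-scale Mexican hat response followed by a hard threshold gives the flavor of such a detector. Everything below (kernel size, sigma, the synthetic frame) is an illustrative assumption, not the paper's algorithm:

```python
import numpy as np

def mexican_hat_2d(size=15, sigma=2.0):
    """2-D Mexican hat kernel (isotropic, LoG-shaped)."""
    ax = np.arange(size) - size // 2
    x, y = np.meshgrid(ax, ax)
    r2 = (x**2 + y**2) / (2.0 * sigma**2)
    return (1.0 - r2) * np.exp(-r2)

def detect(frame, sigma=2.0, size=15, thresh=0.5):
    """Circular convolution with the wavelet, then a hard threshold."""
    kpad = np.zeros_like(frame)
    kpad[:size, :size] = mexican_hat_2d(size, sigma)
    kpad = np.roll(kpad, (-(size // 2), -(size // 2)), axis=(0, 1))
    resp = np.fft.ifft2(np.fft.fft2(frame) * np.fft.fft2(kpad)).real
    return resp / resp.max() > thresh

# synthetic low-resolution frame: one warm blob on sensor noise
rng = np.random.default_rng(0)
frame = 0.1 * rng.standard_normal((64, 64))
yy, xx = np.mgrid[:64, :64]
frame += np.exp(-((yy - 40.0) ** 2 + (xx - 20.0) ** 2) / (2.0 * 3.0**2))
mask = detect(frame)
```

The paper's approach additionally integrates responses over scales and, for the directional Morlet and Cauchy wavelets, over orientations before thresholding.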
In previous work we developed a methodology for texture recognition and synthesis that estimates and exploits the dependencies across scale that occur within images. In this paper we discuss the application of this technique to synthetic aperture radar (SAR) vehicle classification. Our approach measures characteristic cross-scale dependencies in training imagery; targets are recognized when these characteristic dependencies are detected. We present classification results over a large public database containing SAR images of vehicles, and compare performance to the Wright-Patterson baseline classifier. These preliminary experiments indicate that the approach has sufficient discriminating power to perform target detection and classification in SAR.
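One simple way to measure a cross-scale dependency is to correlate detail magnitudes between adjacent levels of an image pyramid. The decomposition and feature below are a toy stand-in; the authors' actual estimator is not specified in the abstract:

```python
import numpy as np

def coarse_level(img):
    """One crude pyramid level: 2x2-mean approximation plus a detail band."""
    a = 0.25 * (img[0::2, 0::2] + img[0::2, 1::2]
                + img[1::2, 0::2] + img[1::2, 1::2])
    d = img[0::2, 0::2] - a
    return a, d

def cross_scale_feature(img, levels=3):
    """Correlation of detail magnitudes between adjacent scales."""
    feats = []
    a, d = coarse_level(img)
    for _ in range(levels - 1):
        a2, d2 = coarse_level(a)
        child = np.abs(d)[0::2, 0::2]    # align children to the parent grid
        feats.append(np.corrcoef(child.ravel(), np.abs(d2).ravel())[0, 1])
        a, d = a2, d2
    return np.array(feats)

rng = np.random.default_rng(0)
feats = cross_scale_feature(rng.standard_normal((64, 64)))
```

White noise yields near-zero cross-scale correlation, while man-made targets with persistent edges tend to produce characteristic nonzero dependencies, which is what the classifier keys on.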
The concepts underlying two common classifiers used in multi-class decision environments, namely the Bayes classifier and the piecewise linear classifier, are combined in this study to define a piecewise quadratic classifier. The result is decision surfaces that are complex combinations of the traditional quadratic surfaces defined by the Bayes classifier under Gaussian assumptions, yet remain applicable in environments where such Gaussian assumptions may not hold. The paper describes the methodology in detail, along with the specifics of the learning and classification algorithms. Experimental results based on standard data sets available in the literature and on the Internet are included to illustrate the benefits and limitations of the methodology.
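A piecewise quadratic classifier can be sketched as a Gaussian-per-piece model: each class owns several Gaussian "pieces," each contributing a quadratic discriminant, and a sample takes the label of its best-scoring piece. The pieces and data below are invented for illustration:

```python
import numpy as np

def quad_score(x, mean, cov, prior):
    """Gaussian log-density: a quadratic function of x, so each piece
    contributes one quadratic decision surface."""
    d = x - mean
    _, logdet = np.linalg.slogdet(cov)
    return np.log(prior) - 0.5 * (logdet + d @ np.linalg.inv(cov) @ d)

def classify(x, pieces):
    """pieces: {label: [(mean, cov, prior), ...]}; pick the best piece."""
    best, label = -np.inf, None
    for lab, comps in pieces.items():
        for m, c, p in comps:
            s = quad_score(x, m, c, p)
            if s > best:
                best, label = s, lab
    return label

# class A is bimodal (two pieces), class B unimodal: a case where a
# single-Gaussian Bayes classifier would be a poor fit
I = np.eye(2)
pieces = {
    "A": [(np.array([-3.0, 0.0]), I, 0.25), (np.array([3.0, 0.0]), I, 0.25)],
    "B": [(np.array([0.0, 0.0]), I, 0.5)],
}
label_a = classify(np.array([2.8, 0.1]), pieces)   # near A's right mode
label_b = classify(np.array([0.2, -0.1]), pieces)  # near B's mean
```

Because each class may own several quadratic pieces, the overall decision boundary is a union of quadratic surfaces, covering non-Gaussian (e.g., multimodal) class distributions.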
We show that any two densities can be related by an operator transformation. We use this method to obtain approximate densities when only partial information is known, and apply it to a propagating pulse to obtain an approximate intensity at an arbitrary time.
Many waves exhibit characteristics that depend on time and/or frequency. For example, the frequency components of a pulse propagating in a dispersive medium travel at different velocities. This kind of dependency gives rise to the need for joint time-frequency analysis and for methods of describing the local temporal and spectral nature of waves, particularly pulses and transients. Useful concepts from time-frequency theory for this description are the average frequency at each time of a wave and the spread about that average. These quantities are obtained as conditional moments of a time-frequency density (TFD). We explore the conditional variances of some common TFDs and determine when they are positive (which is not always the case). We also investigate local characteristics of a wave in terms of these conditional moments, and show through experimental results that they robustly characterize the time-frequency behavior of transients and pulses.
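The conditional moments are easy to compute once a TFD is in hand. The sketch below uses a spectrogram (a positive TFD, so its conditional variance is guaranteed nonnegative, unlike bilinear TFDs such as the Wigner distribution) on an invented linear chirp; all parameters are illustrative:

```python
import numpy as np

fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)
x = np.cos(2.0 * np.pi * (50.0 * t + 100.0 * t**2))   # chirp, 50 -> 250 Hz

# spectrogram as a simple positive TFD: Hann-windowed, hopped FFT frames
nper, step = 128, 32
win = np.hanning(nper)
frames = np.array([x[i:i + nper] * win
                   for i in range(0, len(x) - nper + 1, step)])
S = np.abs(np.fft.rfft(frames, axis=1)) ** 2
f = np.fft.rfftfreq(nper, 1.0 / fs)

P = S / S.sum(axis=1, keepdims=True)                  # P(f | t): rows sum to 1
mean_f = (P * f).sum(axis=1)                          # average frequency at each time
var_f = (P * (f - mean_f[:, None]) ** 2).sum(axis=1)  # spread about that average
```

For the chirp, `mean_f` tracks the rising instantaneous frequency, directly illustrating the "average frequency at each time" concept from the abstract.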
Identification is one of the main goals in processing echoes returned by radar-illuminated targets. The conventional way to process radar echoes is to analyze them in the frequency domain or in the time domain. A third, more recent and seemingly more advantageous way is to use schemes in the combined time-frequency domain. We show the effect of using a pseudo-Wigner distribution (PWD) to process bistatically scattered echoes from a few simple targets illuminated by short, ultra-wideband pulses that an impulse radar could be designed to send and receive. The targets are spheres: either a perfectly conducting sphere or the same sphere coated with a thin dielectric layer. Two different hypothetical materials are specified: a non-magnetic lossy dielectric and a dielectric that also has magnetic losses. For each target, two different polarizations are selected and the corresponding bistatic radar cross-section (RCS) is displayed as a function of frequency and bistatic angle. Analyzing the scattered waveform when the designed short pulse is incident on each target, for a few selected bistatic angles, shows how, and by how much, bistatically scattered pulses are affected by a target coating. Although the coating reduces a target's bistatic RCS, it can introduce additional resonance features in the target's signature at low frequencies, where the coating is not effective; these features could be used for target recognition purposes.
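A discrete pseudo-Wigner distribution is the Wigner-Ville distribution with a lag window. The sketch below (window length, test tone, and the FFT-based analytic signal are all illustrative choices, not the paper's processing chain) shows the core computation:

```python
import numpy as np

def analytic(x):
    """Analytic signal via the FFT (suppresses negative frequencies)."""
    N = len(x)
    H = np.zeros(N)
    H[0], H[N // 2] = 1.0, 1.0           # N assumed even
    H[1:N // 2] = 2.0
    return np.fft.ifft(np.fft.fft(x) * H)

def pseudo_wigner(x, win_len=63):
    """Pseudo-Wigner distribution: Wigner-Ville with a lag window h."""
    z = analytic(x)
    N = len(z)
    L = win_len // 2
    h = np.hanning(win_len)
    W = np.zeros((win_len, N))
    for n in range(N):
        kern = np.zeros(win_len, dtype=complex)
        for i, m in enumerate(range(-L, L + 1)):
            if 0 <= n + m < N and 0 <= n - m < N:
                # windowed instantaneous autocorrelation z[n+m] z*[n-m] h[m]
                kern[m % win_len] = z[n + m] * np.conj(z[n - m]) * h[i]
        W[:, n] = np.fft.fft(kern).real  # Hermitian in lag -> real spectrum
    return W

fs = 1000.0
t = np.arange(256) / fs
W = pseudo_wigner(np.cos(2.0 * np.pi * 100.0 * t))   # a 100 Hz test tone
peak_freq = np.argmax(W[:, 128]) * fs / (2 * 63)     # WVD bins span fs/2
```

The lag window trades some frequency resolution for suppression of the cross-terms that make the unwindowed Wigner-Ville distribution hard to read on multicomponent echoes.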
We study the backscattered echoes from selected targets recorded by an impulse radar system serving as a ground-penetrating radar (GPR). The targets are metal and nonmetal objects buried at a selected depth in dry sand in an indoor sandbox. The recorded time-series data are analyzed in the joint time-frequency domain using a pseudo-Wigner distribution (PWD). These distributions, with their extracted features in the two-dimensional time-frequency domain, are viewed as the target signatures. To be useful for target identification, a signature representation should display a sufficient number of distinguishing features, yet be robust enough to suppress the interference of noise contained in the received signals. Multiple scattering between a target and the ground surface is another obstacle to successful target recognition, one that time-frequency distributions can counteract by unveiling the time progression of the returned target information. We have previously demonstrated the merits of the PWD relative to various competing time-frequency distributions, in particular its capability of extracting a target's time-frequency signature when the target is buried at different depths. We have also used a classification method, developed from the fuzzy C-means clustering technique, to reduce the number and kind of features in the PWD signature templates. This is accomplished by converting the PWD signature into a point-cluster representation and then reducing the cluster to a smaller number of cluster centers. The classification method has been further developed by associating a weight with each point in the cluster representation. We put the classification algorithm to the test against validation data taken from an additional set of returned echoes; the same targets are used, but they are buried at a different location in the sand. Class membership of a target is then decided using a simple metric.
The results of our investigation serve to assess the feasibility of identifying subsurface targets using a GPR.
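The reduction of a signature's point cluster to a few weighted centers can be sketched with a plain fuzzy C-means loop. The data (two synthetic blobs standing in for PWD signature points), seeding, and iteration count are illustrative assumptions:

```python
import numpy as np

def fuzzy_cmeans(X, c, m=2.0, iters=50):
    """Reduce a point cluster X (n, d) to c weighted cluster centers."""
    # naive deterministic seeding (evenly spaced data points); a
    # k-means++-style initialization would be used in practice
    centers = X[np.linspace(0, len(X) - 1, c).astype(int)].astype(float)
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / d ** (2.0 / (m - 1.0))   # fuzzy memberships
        U /= U.sum(axis=1, keepdims=True)
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
    return centers, U.sum(axis=0)          # centers and per-center weights

# two separated blobs standing in for points of a PWD signature
rng = np.random.default_rng(1)
X = np.vstack([rng.normal([0.0, 0.0], 0.2, (40, 2)),
               rng.normal([5.0, 5.0], 0.2, (40, 2))])
centers, wts = fuzzy_cmeans(X, c=2)
```

Templates of a few weighted centers are far cheaper to match against validation echoes than full PWD images, which is the motivation the abstract gives for the reduction.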
Holographic methods for observing stationary and moving objects located behind a non-transparent screen, inside non-transparent structures, or in opaque (turbulent) water are discussed. In the first case, microwave holography is used: in particular, a stationary microwave facility or a holographic video camera with a frame rate of 20 Hz. Defects inside various large concrete construction elements and protection lids were found, and an image of a live dog behind a non-transparent wall was recorded. In the second case, ultrasound radiation was used: in particular, water-surface disturbances caused by the radiation pressure of ultrasound waves passing across underwater objects were recorded. Two methods were developed, one based on the refraction and diffraction of a laser beam by water-surface disturbances, and another based on the Talbot effect. It is shown that the first method can be used for detection of dynamic underwater objects crossing the sound beam. The Talbot-effect method permits, besides detection, the formation of images of these objects. Examples are shown of the detection of live fish in opaque water and the formation of images of small (approximately 1.5 cm²) motionless objects.
Signal recovery algorithms utilizing bispectral slices were implemented to extract CDMA (code division multiple access) signals from residual noise. It is assumed that any jamming or undesirable interference has been mitigated, using signal-separation techniques or space-time processing algorithms, before the bispectral slices are applied. This paper proposes a methodology for removing the remaining residual contamination using bispectral signal recovery techniques. More specifically, we use phase recovery algorithms to recover a binary CDMA sequence embedded in noise, compare three signal recovery algorithms, and discuss the utility of these techniques in CDMA communication.
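One classic phase recovery scheme from a bispectral slice is the recursion phi(k+1) = phi(1) + phi(k) - beta(k), where beta is the phase of the slice B(1, k). The noiseless check below is illustrative; the paper's three algorithms are not specified in the abstract:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 64
x = rng.standard_normal(N)
X = np.fft.fft(x)
phi = np.angle(X)

# phase of the bispectral slice B(1, k) = X[1] X[k] X*(1 + k):
# beta(k) = phi(1) + phi(k) - phi(1 + k)
beta = np.angle(X[1] * X[1:N // 2] * np.conj(X[2:N // 2 + 1]))

# recursion: phi(k + 1) = phi(1) + phi(k) - beta(k). The bispectrum cannot
# see phi(1) itself (a pure time shift), so we anchor with the true value.
rec = np.zeros(N // 2 + 1)
rec[0], rec[1] = phi[0], phi[1]
for k in range(1, N // 2):
    rec[k + 1] = rec[1] + rec[k] - beta[k - 1]

# recovered phase matches the true phase modulo 2*pi
err = np.abs(np.angle(np.exp(1j * (rec - phi[:N // 2 + 1]))))
```

In the noisy CDMA setting, the bispectral phases would be estimated by averaging over many signal segments before running such a recursion, exploiting the bispectrum's suppression of additive Gaussian noise.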