Deep learning classifiers, particularly Convolutional Neural Networks (CNNs), have proven highly effective for SAR automatic target recognition (ATR). Despite this progress, correctly classifying target objects from their speckled SAR imagery remains a challenge. In this paper, we address this technical challenge by implementing a two-step Hybrid Stacked Denoising Auto-Encoder (HSDAE) that serves as a holistic denoiser and classifier model. Since no comprehensive real or synthetic SAR dataset of aerial vehicles is publicly available, we employed the IRIS Electromagnetic modeling and simulation system to generate the required synthetic noisy SAR images from an array of physics-based CAD test models placed in different operating environments. Our generated test dataset contains synthetically generated SAR images of more than 300 aerial and ground vehicles. These images are systematically scanned from various azimuth and elevation angles, from different ranges, and in different operating environments, and are regarded as the ground-truth radiation backscattering reflectivity maps of the test objects. These images are then modulated with appropriate additive and multiplicative noise to form speckled SAR images. Using a partial collection of ground-truth test vehicle images along with their corresponding speckled SAR images, we train a two-step pipeline: a denoising auto-encoder followed by a CNN model that classifies the vehicles. In the first step, a denoising operation is performed and the test objects' features, such as shape, size, and orientation, are recovered from any given speckled SAR input image. The output of this denoising step is then passed to a CNN classifier for object recognition and classification. We present the architecture of the HSDAE and its variants and compare their performance. Our results indicate that the proposed HSDAE achieves higher accuracy and repeatability for recognizing and classifying target objects under different operating conditions.
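The abstract does not specify the HSDAE layer configuration, so the following is only a minimal sketch of the two-step denoise-then-classify idea in PyTorch; the layer sizes, activation choices, and class count are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class DenoisingAutoEncoder(nn.Module):
    """Step 1: recover a clean reflectivity map from a speckled SAR chip."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

class HSDAE(nn.Module):
    """Step 2: classify the denoised image with a small CNN head."""
    def __init__(self, num_classes=10):  # assumed class count
        super().__init__()
        self.denoiser = DenoisingAutoEncoder()
        self.classifier = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.LazyLinear(num_classes),
        )

    def forward(self, speckled):
        clean = self.denoiser(speckled)
        return self.classifier(clean), clean
```

In such a setup, training would couple a reconstruction loss on the denoiser output (against the ground-truth reflectivity map) with a cross-entropy loss on the class labels.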
Synthetic Aperture Radar (SAR) technology offers an innovative remote sensing capability for surveillance applications. However, Automatic Target Recognition (ATR) of aerial and ground vehicles from SAR data requires large-scale imagery of the target objects of interest (TOIs) from many perspective viewing angles, which is rarely available publicly. Such large datasets are instrumental both for the initial training of deep learning classifiers and for improved transfer learning. In this paper, we address this shortcoming by introducing the IRIS Electromagnetic (EM) modeling and simulation system for virtual staging and automatic generation of realistic synthetic (i.e., simulated) multi-perspective SAR imagery of test vehicles for the purpose of training ATR classifiers. We first prepared a collection of 250 physics-based CAD models of different aerial and ground vehicles. A four-step process was then implemented. In the first step, an optimized multi-path ray-tracing technique was developed to obtain the synthetic EM radiation backscattering reflectivity patterns of the test objects. In the second step, we furnished the synthetically generated SAR images with different backgrounds (e.g., ground, grass, and asphalt) by employing appropriate noise modulation transfer functions. In the third step, we introduced a method for projecting directional object shadows from eight different perspective viewings. In the final step, the surface regions producing high-strength radiation backscattering were highlighted to further enhance the realism of the synthetically generated SAR images. To verify the validity and dependability of the proposed approach, we compared our simulated SAR imagery against a number of comparable military and commercial vehicles from the MSTAR dataset.
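As an illustration of the speckle-modulation idea behind step two, the sketch below multiplies a clean reflectivity map by gamma-distributed speckle, a standard multiplicative model for L-look SAR intensity. The background level and look count are assumptions for the example, not IRIS's actual noise modulation transfer functions.

```python
import numpy as np

def add_speckle(reflectivity, looks=4, rng=None):
    """Modulate a clean intensity image with multiplicative speckle.

    For an L-look intensity image, speckle is commonly modeled as a
    gamma random variable with shape L and mean 1.
    """
    rng = rng or np.random.default_rng()
    speckle = rng.gamma(shape=looks, scale=1.0 / looks, size=reflectivity.shape)
    return reflectivity * speckle

# Example: composite a toy target chip over an assumed constant-backscatter
# background (e.g., grass) before speckling.
chip = np.zeros((128, 128))
chip[48:80, 40:88] = 1.0   # toy target reflectivity
background = 0.15          # assumed mean backscatter of the terrain
speckled = add_speckle(chip + background)
```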
Identification and tracking of dynamic 3D objects from Synthetic Aperture Radar (SAR) and Infrared (IR) thermal imaging in the presence of significant clutter and occlusion is a highly challenging task. In this paper, we present an approach for 3D object recognition and tracking based on multi-modality (e.g., SAR and IR) imagery signatures, and discuss a multi-scale scheme for extracting salient keypoint descriptors from multi-modality imagery of 3D objects. Next, we describe how to cluster local salient keypoints and model them as signature surface patch features suitable for object detection and recognition. During the supervised training phase, multiple views of each test model are presented to the system, a set of multi-scale invariant surface features is extracted from each model, and these features are registered as the object class's signature exemplar. The features are employed during the online recognition phase to generate recognition hypotheses. When an object of interest is verified and recognized, its attributes are annotated semantically. The coded semantic annotations are then efficiently presented to a Hidden Markov Model (HMM) for spatiotemporal object state discovery and tracking. Through this process, corresponding features of the same objects across sequential multi-modality imagery are associated and tracked over time. The proposed algorithm was tested using the IRIS simulation model, for which two test scenarios were constructed: one for activity recognition of ground-based vehicles and one for classification of Unmanned Aerial Vehicles (UAVs). In both scenarios, synthetic SAR and IR imagery generated by the IRIS simulation model was used to train and test the newly developed algorithms. Experimental results show that our algorithms offer significant efficiency and effectiveness.
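To make the HMM stage concrete, here is a toy Viterbi decoder: given per-frame semantic annotations as emission symbols, it recovers the most likely sequence of hidden object states. The states, symbols, and probabilities below are illustrative placeholders, not the paper's trained model.

```python
import numpy as np

states = ["stationary", "moving", "turning"]        # assumed hidden states
symbols = {"low_doppler": 0, "high_doppler": 1}     # assumed annotation codes

start = np.array([0.6, 0.3, 0.1])
trans = np.array([[0.8, 0.15, 0.05],
                  [0.1, 0.7,  0.2],
                  [0.2, 0.4,  0.4]])
emit = np.array([[0.9, 0.1],                        # P(symbol | state)
                 [0.2, 0.8],
                 [0.3, 0.7]])

def viterbi(obs):
    """Return the most likely hidden-state path for an observation sequence."""
    n, T = len(states), len(obs)
    logp = np.log(start) + np.log(emit[:, obs[0]])
    back = np.zeros((T, n), dtype=int)
    for t in range(1, T):
        scores = logp[:, None] + np.log(trans)      # n x n transition scores
        back[t] = scores.argmax(axis=0)
        logp = scores.max(axis=0) + np.log(emit[:, obs[t]])
    path = [int(logp.argmax())]
    for t in range(T - 1, 0, -1):                   # backtrack
        path.append(back[t, path[-1]])
    return [states[i] for i in reversed(path)]

obs = [symbols[s] for s in ["low_doppler", "high_doppler", "high_doppler"]]
print(viterbi(obs))                                 # e.g. a stationary-to-moving path
```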
Detection and recognition of 3D objects and their motion characteristics from Synthetic Aperture Radar (SAR) and Infrared (IR) thermal imaging in the presence of significant clutter and occlusion is a highly challenging task. In this paper, we present an efficient technique for generating static and dynamic synthetic SAR and IR imagery in cluttered virtual environments. These imagery datasets closely represent the view of the physical environment as it would be perceived by physical SAR and IR imaging systems, respectively. We present the IRIS simulation model for the efficient construction and modeling of cluttered virtual environments and discuss our techniques for low-poly 3D object surface patch generation. Furthermore, we present several test scenarios from which the synthetic SAR and IR imaging datasets are obtained and discuss the key control parameters impacting the performance of our synthetic multi-modality imaging systems. Lastly, we describe a method for multi-scale feature extraction from 3D objects based on synthetic SAR and IR imagery for a variety of ground-based and aerial test vehicles and demonstrate the efficiency and effectiveness of this approach in different test scenarios.
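A minimal sketch of the multi-scale feature extraction idea, assuming a single-channel synthetic SAR or IR chip: keypoints are detected on each level of a Gaussian pyramid. ORB stands in here for the paper's multi-scale invariant surface descriptors, which the abstract does not specify; the feature budget and level count are also assumptions.

```python
import cv2
import numpy as np

def multiscale_features(img: np.ndarray, levels: int = 3):
    """Return (level, keypoints, descriptors) for each pyramid level."""
    orb = cv2.ORB_create(nfeatures=200)
    results = []
    for level in range(levels):
        kps, desc = orb.detectAndCompute(img, None)
        results.append((level, kps, desc))
        img = cv2.pyrDown(img)   # halve resolution for the next scale
    return results

# Usage with a hypothetical chip file:
# img = cv2.imread("synthetic_sar_chip.png", cv2.IMREAD_GRAYSCALE)
# for level, kps, desc in multiscale_features(img):
#     print(level, len(kps))
```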