This PDF file contains the front matter associated with SPIE Proceedings Volume 12095, including the Title Page, Copyright information, Table of Contents, and Conference Committee listings.
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks. You are receiving this notice because your organization may not have SPIE eBooks access.*
*Shibboleth/Open Athens users: please sign in to access your institution's subscriptions.
To obtain this item, you may purchase the complete book in print or electronic format on SPIE.org.
Prediction, Sequential, and Multi-modal Processing I
Determining accurate location with Synthetic Aperture Radar (SAR) images is hindered not only by the limited resolution of the image but also by the non-rectangular nature of the process that produces the image, and by the presence of multiple returns from different-height objects at the same range. Three-dimensional SAR volume measurements can differentiate scatterers at different heights, but imaging methods are limited by the measurement scenario. Interpolation of images and volumes can normalize the collected data but can add inaccuracy to pixel and voxel impulse response (IPR) and localization estimates. Regions of best achievable accuracy define the imaging algorithm performance envelope. Ideal localization occurs at the estimation bound of modeled point scatterers, and is also indicated by pixel and voxel size or IPR main-lobe width. The performance envelope of SAR measurement scenarios, radar parameters, and imaging and estimation algorithms can refine system parameters for accurate localization. Use of analogous Circular and Linear SAR (CSAR and LSAR) scenarios enables joint trade-off of flight path along with imaging parameters. An ideal spherical construct underlies joint CSAR and LSAR, two- and three-dimensional SAR scenarios. This construct also ties CSAR and LSAR imaging to spherical ray-trace creation of phase history. For different scenarios defined through a spherical model of parameters, localization performance of the imaging envelope, IPR, and estimation accuracy are examined. Along with sample size and ideal chirp waveform sampling, the azimuth and elevation aperture is varied to include ideal and sub-sampling, and extension to achieve un-aliased images and volumes. Methods expanding the envelope are investigated.
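The link the abstract draws between IPR main-lobe width and localization can be illustrated with the standard first-order SAR resolution formulas. The sketch below uses textbook relations (slant-range resolution c/2B for an ideal chirp, small-angle cross-range resolution λ/2Δθ); the bandwidth, carrier frequency, and aperture width are illustrative assumptions, not values from the paper.

```python
# Sketch: first-order SAR resolution (IPR main-lobe width) estimates.
# Formulas are the standard textbook relations; all parameter values
# below are assumptions chosen for illustration only.
import math

C = 3e8  # speed of light, m/s

def range_resolution(bandwidth_hz: float) -> float:
    """Slant-range resolution of an ideal chirp waveform: c / (2B)."""
    return C / (2.0 * bandwidth_hz)

def cross_range_resolution(wavelength_m: float, aperture_rad: float) -> float:
    """Small-angle cross-range resolution over aperture width dtheta: lambda / (2 * dtheta)."""
    return wavelength_m / (2.0 * aperture_rad)

if __name__ == "__main__":
    B = 600e6                   # 600 MHz chirp bandwidth (assumed)
    lam = C / 10e9              # X-band carrier at 10 GHz (assumed)
    dtheta = math.radians(3.0)  # 3-degree azimuth aperture (assumed)
    print(f"range resolution:       {range_resolution(B):.3f} m")
    print(f"cross-range resolution: {cross_range_resolution(lam, dtheta):.3f} m")
```

Widening the angular aperture Δθ shrinks the cross-range IPR main lobe, which is the trade-off between flight path and imaging parameters the abstract describes.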
An algorithm for episodic processing of SAR data over wide angles is extended to accommodate larger image formats, beyond the image chips demonstrated previously. Episodic processing of SAR imagery over wide angles permits exploration of RF scattering behavior that is observable over extended apertures. These behaviors include sparsity, persistence, scintillation, and angular dependence. Experiments with larger measured scenes are presented, and we demonstrate the impact of varied angular width for the measured aperture data.
Prediction, Sequential, and Multi-modal Processing II
Commonly, data exploitation for single sensors utilizes two-dimensional (2D) imagery. To best combine information from multiple sensing modalities, each with its own fundamental differences, we use sensor fusion to leverage the strengths of each modality and mitigate its inherent weaknesses. Fusing multiple sensor modalities directly in 2D quickly becomes intractable, as each sensor has unique projection planes and resolution. In this work, we present and analyze a data-driven approach for fusing multiple modalities by extracting data representations for each sensor into three-dimensional (3D) space, supporting sensor fusion natively in a common frame of reference. Photogrammetry and computer vision methods for recovering point clouds from 2D electro-optical imagery, such as structure from motion and multi-view stereo, have shown promising results. Additionally, 3D data representations can be derived from interferometric synthetic aperture radar (IFSAR) and lidar sensors. We use point cloud representations for all three modalities, which allows us to leverage each sensing modality's individual strengths and weaknesses. Given our data-driven focus, we emphasize fusing the point cloud data in controlled scenarios with known parameters. We also conduct an error analysis for each sensor modality based upon sensor position, resolution, and noise.
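The core mechanic of fusing point clouds "natively in a common frame of reference" can be sketched as a rigid transform per sensor followed by concatenation. The poses and points below are made-up toy values, not the paper's data, and the yaw-only rotation is a simplification of a full 6-DOF sensor pose.

```python
# Sketch: placing per-sensor point clouds into one common 3D frame before
# fusion. Illustrative only: real sensor poses have full 3D rotations;
# here a yaw angle about z stands in for the pose.
import math

def rigid_transform(points, yaw_rad, translation):
    """Rotate each (x, y, z) point about the z-axis, then translate."""
    c, s = math.cos(yaw_rad), math.sin(yaw_rad)
    tx, ty, tz = translation
    return [(c * x - s * y + tx, s * x + c * y + ty, z + tz) for x, y, z in points]

def fuse(clouds_with_poses):
    """Concatenate clouds after mapping each into the common frame.

    clouds_with_poses: list of (points, yaw_rad, translation) per sensor.
    """
    fused = []
    for points, yaw, trans in clouds_with_poses:
        fused.extend(rigid_transform(points, yaw, trans))
    return fused
```

Once all modalities live in the same frame, per-modality error analysis (position, resolution, noise) reduces to comparing point sets in common coordinates.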
Electro-optical (EO) sensors and synthetic aperture radars (SAR) have long been used in automatic target recognition (ATR) classifiers. However, maintaining robust target ID performance with ATRs trained on data from a single EO or SAR sensor across a wide range of operating conditions is challenging due to the variability in the sun illumination and surface orientation of targets, respectively. Fusion of ATR classifiers trained on EO and SAR data can improve target ID performance. In this paper, we implement multiple fusion algorithms at the decision level and the feature level using classifiers trained on a standard convolutional neural network (CNN) architecture. Under favorable conditions – when the train and test data have the same operating conditions – many fusion algorithms offer significant gain at lower image resolution compared to a single sensor at higher resolution. However, as the operating conditions become more disjoint, only the most robust fusion techniques show significant performance gain. A feature-level fusion algorithm developed by researchers at Sandia National Laboratories and modified for use in this paper showed the most robustness to disjoint operating conditions. Moreover, this algorithm was robust to geometric angle variation and therefore could be used with sensors mounted on different platforms.
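Of the fusion levels the abstract compares, decision-level fusion is the simplest to illustrate: combine the per-class posteriors of the EO and SAR classifiers and pick the argmax. The weighted-average rule and the probability vectors below are a generic illustration, not the specific algorithms evaluated in the paper.

```python
# Sketch: decision-level fusion of an EO and a SAR classifier by weighted
# averaging of their class-posterior vectors. A generic illustration only;
# the paper evaluates multiple fusion rules at decision and feature level.
def fuse_decisions(eo_probs, sar_probs, w_eo=0.5):
    """Weighted average of two classifiers' per-class posteriors."""
    return [w_eo * p + (1.0 - w_eo) * q for p, q in zip(eo_probs, sar_probs)]

def predict(probs):
    """Index of the most probable class."""
    return max(range(len(probs)), key=lambda i: probs[i])
```

For example, an EO classifier favoring class 0 and a SAR classifier strongly favoring class 1 can yield a fused decision of class 1, which is how fusion can recover when one sensor's operating conditions degrade its confidence.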
Moving targets are often smeared in synthetic aperture radar (SAR) imagery beyond recognition. Recent analysis has yielded an ability to perform Arbitrary Rigid Object Motion Autofocus (AROMA) in order to perform automatic refocus of targets which exhibit arbitrary temporal profiles of target translation and rotation during the SAR collection interval. This investigation examines the efficacy of AROMA for targets with complex heading profiles amidst a background of measured Ku-band SAR imagery. It is found that AROMA is able to generate well-focused imagery that corresponds to the underlying true structure of the target scattering shapes.
We present some of the latest advancements in the development of the Interface Launcher (iLauncher), along with the application of this technology to the development of plugins that support distributed PyTorch deep learning workflows across a diversity of computing resources including Amazon Web Services (AWS) GovCloud, distributed clusters of heterogeneous nodes with multiple graphics processing units (GPUs) per node running the Slurm batch queuing software, and Department of Defense (DoD) high performance computing (HPC) supercomputers running the Portable Batch System (PBS) software. The iLauncher technology automates the submission of HPC jobs and provides a mechanism for rapidly prototyping web interfaces from the user’s desktop to powerful capabilities running on the HPC nodes. We describe the extension of previous work to show the development of the client-side plugin JavaScript Object Notation (JSON) description, the underlying server-side scripts for running distributed PyTorch deep learning models on various platforms with different queuing systems, and the recipes for the software along with all dependencies in an all-inclusive software packaging technology called a container. Finally, we show a representative use case running distributed PyTorch in a Jupyter Notebook through iLauncher on the various backend platforms along with some guidance on when each one may be beneficial for a range of scenarios based on models and data.
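The pattern the abstract describes (a client-side JSON plugin description bound to server-side launch scripts for different queuing systems) might look roughly like the sketch below. Every field name, file path, and value here is a hypothetical assumption made for illustration; the actual iLauncher plugin schema is defined by that project, not reproduced here.

```python
# Sketch: a hypothetical client-side plugin description of the kind the
# abstract attributes to iLauncher. ALL keys, paths, and values are
# assumptions for illustration; the real JSON schema belongs to iLauncher.
import json

plugin = {
    "name": "distributed_pytorch_demo",      # hypothetical plugin name
    "interface": [                           # widgets rendered client-side
        {"label": "Nodes", "type": "int", "default": 2},
        {"label": "GPUs per node", "type": "int", "default": 4},
    ],
    "backends": {                            # server-side launch scripts per scheduler
        "slurm": "scripts/launch_slurm.sh",
        "pbs": "scripts/launch_pbs.sh",
        "aws": "scripts/launch_aws.sh",
    },
    "container": "images/pytorch_distributed.sif",  # packaged dependencies
}

description = json.dumps(plugin, indent=2)
```

The point of the pattern is that one interface description can dispatch to Slurm, PBS, or AWS GovCloud backends without the user rewriting job submission scripts.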
Synthetic aperture radar is an all-weather sensor with many uses, including target recognition. We present our latest efforts to train a network on synthetic SAR imagery for good performance on measured images. We apply an eigenimage-based classification network to the SAMPLE dataset, a dataset of synthetic and measured SAR imagery. Eigenimages are extracted from the synthetic images, then used to encode both types of images. This encoding takes the form of a vector describing the weighted contribution of each eigenimage to a given image. This reduces the extraneous noise in the measured image and helps bridge the gap between the two domains. We train a variety of networks, including fully-connected, support vector machines, and logistic regression, on the weight vectors for synthetic images, then test on measured vectors. We present the results on the publicly available SAMPLE dataset.
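The encoding step described here, projecting a mean-removed image onto a set of eigenimages to obtain a weight vector, can be sketched directly. The 4-pixel "images" and hand-picked orthonormal eigenimages below are toy stand-ins; in the paper the eigenimages are extracted from the synthetic SAMPLE imagery.

```python
# Sketch: encoding an image as the weights of its projection onto a set of
# eigenimages (flattened to vectors). Toy 4-pixel images stand in for real
# SAR chips; real eigenimages would come from the synthetic training set.
def encode(image, eigenimages, mean):
    """Weight vector: dot product of the mean-removed image with each eigenimage."""
    centered = [p - m for p, m in zip(image, mean)]
    return [sum(c * e for c, e in zip(centered, eig)) for eig in eigenimages]
```

Because both synthetic and measured images are reduced to the same low-dimensional weight vectors, downstream classifiers (fully connected networks, SVMs, logistic regression) operate in a shared representation, which is how the encoding helps bridge the synthetic-to-measured gap.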
We propose methods to improve the use of synthetic data for characterizing SAR image exploitation in terms of various operating conditions (OCs). More specifically, we describe tools that simulate statistically relevant samples of SAR imagery and classify the resulting imagery via a baseline algorithm. The associated OC generation, user interfaces, databasing of OCs, classification, analysis, and visualization have been containerized and ported to run on the DoD Supercomputing Resource Centers. To demonstrate our workflow, we present four case studies with the quantized grayscale matching algorithm. The work described here provides a foundation to support future developments in multi-look and multi-sensor fusion.
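As one plausible, simplified reading of a quantize-then-compare matcher, the sketch below bins grayscale pixels into a few levels and scores a pair of images by the fraction of agreeing bins. This is a generic illustration of quantized grayscale matching, not the specific baseline algorithm used in the paper.

```python
# Sketch: a generic quantize-then-compare matcher. Simplified illustration
# only; NOT the paper's specific quantized grayscale matching algorithm.
def quantize(image, levels=4):
    """Map pixel values in [0, 1] onto integer bins 0..levels-1."""
    return [min(int(p * levels), levels - 1) for p in image]

def match_score(image_a, image_b, levels=4):
    """Fraction of pixel positions whose quantized gray levels agree."""
    qa, qb = quantize(image_a, levels), quantize(image_b, levels)
    return sum(a == b for a, b in zip(qa, qb)) / len(qa)
```

Sweeping such a matcher over simulated imagery generated under different operating conditions is the kind of containerized, batched experiment the workflow above is built to run at HPC scale.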
Many classifier problems are limited by the amount of data that is available to train the algorithm. This problem is exacerbated for deep learning models, which require large amounts of data. One approach to solving this problem is to use synthetic data to train the classifier; however, as is often the case, there are differences between synthetic and measured data that limit the performance of this approach. This effort baselines a fundamental approach to solving this problem by using a simple Siamese network to classify the target, with one twin trained with abundant synthetic SAR data and the other twin trained with limited measured SAR data. The network is trained using the standard cross-entropy cost function and is the functional equivalent of a single network jointly trained on measured and synthetic data. The performance of this approach is characterized as a function of the amount of measured data required to train the algorithm.
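The "functional equivalent" framing, one shared-weight model trained with cross-entropy over the pooled synthetic and measured samples, can be sketched with a toy model. A 2-D logistic regression stands in here for the Siamese CNN twins, and the six hand-made samples are assumptions for illustration, not SAR data.

```python
# Sketch: one shared-weight model trained jointly on abundant synthetic and
# scarce measured samples, the "functional equivalent" of the Siamese setup.
# A toy 2-D logistic regression stands in for the CNN twins.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_joint(synthetic, measured, lr=0.5, epochs=200):
    """SGD on binary cross-entropy over the pooled data; weights are shared."""
    w, b = [0.0, 0.0], 0.0
    data = synthetic + measured          # joint pool; one set of weights
    for _ in range(epochs):
        for (x1, x2), y in data:
            p = sigmoid(w[0] * x1 + w[1] * x2 + b)
            g = p - y                    # dL/dz for cross-entropy + sigmoid
            w[0] -= lr * g * x1
            w[1] -= lr * g * x2
            b -= lr * g
    return w, b
```

Varying the size of the `measured` pool while holding `synthetic` fixed mirrors the paper's characterization of performance versus the amount of measured training data.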
Due to their ability to capture images in a variety of environmental conditions, Synthetic Aperture Radars (SAR) are of particular interest in the automatic target recognition (ATR) domain. In order to develop SAR-ATR machine learning (ML) algorithms, a large sample set indicative of the underlying population must be used. This is an issue, since gathering SAR images, even for a single target, is an expensive and time-consuming process. Recently, a data set known as the SAMPLE data set, consisting of synthetic SAR samples, has been released. Ideally, these synthetic images can be used in place of real SAR samples. Unfortunately, training SAR-ATR ML algorithms with samples exclusively from the SAMPLE data set produces algorithms with poor performance on real SAR images. This paper is focused on creating new variants of cycle-consistent generative adversarial networks (CycleGAN) to produce a transformation function that maps a synthetic SAR image to a useful approximation of a real SAR image. By introducing a new feature correlation module to the CycleGAN architecture, we take the first steps in closing the gap between synthetic SAR images and measured SAR images.
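The cycle-consistency constraint at the heart of any CycleGAN-style translator can be shown in isolation: a forward map G (synthetic to measured-like) and a backward map F should return each image close to where it started. The toy vector "images" and linear G and F below are placeholders, and this sketch does not include the adversarial losses or the paper's new feature correlation module.

```python
# Sketch: the cycle-consistency term used by CycleGAN-style translators.
# G maps synthetic -> measured-like, F maps back; toy list-of-floats
# "images" and simple callables stand in for generator networks.
def cycle_consistency_loss(x, G, F):
    """Mean L1 distance between x and its round trip F(G(x))."""
    round_trip = F(G(x))
    return sum(abs(a - b) for a, b in zip(x, round_trip)) / len(x)
```

A perfectly inverted pair of maps gives zero loss; any mismatch between G and F is penalized, which is what keeps the learned synthetic-to-measured transformation from discarding target structure.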
We present a novel method for classification of Synthetic Aperture Radar (SAR) data by combining ideas from graph-based learning and neural network methods within an active learning framework. Graph-based methods in machine learning are based on a similarity graph constructed from the data. When the data consists of raw images composed of scenes, extraneous information can make the classification task more difficult. In recent years, neural network methods have been shown to provide a promising framework for extracting patterns from SAR images. These methods, however, require ample training data to avoid overfitting. At the same time, such training data are often unavailable for applications of interest, such as automatic target recognition (ATR) and SAR data. We use a Convolutional Neural Network Variational Autoencoder (CNNVAE) to embed SAR data into a feature space, and then construct a similarity graph from the embedded data and apply graph-based semi-supervised learning techniques. The CNNVAE feature embedding and graph construction requires no labeled data, which reduces overfitting and improves the generalization performance of graph learning at low label rates. Furthermore, the method easily incorporates a human-in-the-loop for active learning in the data-labeling process. We present promising results and compare them to other standard machine learning methods on the Moving and Stationary Target Acquisition and Recognition (MSTAR) dataset for ATR with small amounts of labeled data.
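The graph-learning half of the pipeline can be sketched as label propagation over a similarity graph built from embedded features: a Gaussian kernel weights pairs of points, labeled nodes are clamped, and unlabeled scores are iteratively averaged over neighbors. The scalar "CNNVAE embeddings" below are toy values (real embeddings are vectors), and this is plain label propagation rather than the paper's specific semi-supervised formulation.

```python
# Sketch: graph-based semi-supervised label propagation on embedded features.
# Toy 1-D "embeddings" stand in for CNNVAE feature vectors; this is generic
# label propagation, not the paper's exact graph-learning formulation.
import math

def propagate(features, labels, sigma=1.0, iters=100):
    """labels: dict index -> 0/1 for the few labeled points. Returns scores."""
    n = len(features)
    # Gaussian similarity graph over the embedded data (zero self-weight).
    W = [[math.exp(-((features[i] - features[j]) ** 2) / (2.0 * sigma ** 2))
          if i != j else 0.0 for j in range(n)] for i in range(n)]
    f = [labels.get(i, 0.5) for i in range(n)]
    for _ in range(iters):
        for i in range(n):
            if i not in labels:          # labeled nodes stay clamped
                f[i] = sum(W[i][j] * f[j] for j in range(n)) / sum(W[i])
    return f
```

Because only the propagation step consumes labels, the embedding and graph construction are fully unsupervised, and an active-learning loop can simply add newly labeled indices to `labels` and re-propagate.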