This PDF file contains the front matter associated with SPIE Proceedings Volume 12520, including the Title Page, Copyright information, Table of Contents, and Conference Committee listings.
Neural Radiance Fields (NeRF) is an emerging technique in the three-dimensional (3D) volumetric representation world due to its ability to learn 3D scenes from sparse two-dimensional (2D) imagery. However, current implementations focus on electro-optical (EO) representations because NeRF assumes that light traveling through the scene is absorbed, which is analogous to EO sensor operation. In this work we present a framework for utilizing synthetic aperture radar (SAR) imagery in standard NeRF implementations. Because the physical scattering properties in SAR imagery are markedly different from those in EO images, we adapt the EO-based transform inputs to equivalent SAR-based parameters. We demonstrate our results on a sample measured SAR dataset with two different 3D SAR reconstruction techniques and demonstrate isotropic scatterer extraction on our sample target.
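As a rough illustration of what "adapting the EO-based transform inputs" could involve, the sketch below maps a SAR look geometry (azimuth and grazing angle at a given standoff range) to a NeRF-style camera-to-world pose matrix. This is a minimal sketch under assumed conventions, not the authors' implementation; all names are illustrative.

```python
import numpy as np

def sar_pose(azimuth_deg, graze_deg, standoff):
    """Hypothetical mapping of a SAR look geometry to a NeRF-style 4x4
    camera-to-world pose, with the sensor looking at the scene origin.
    Assumes a grazing angle strictly between 0 and 90 degrees."""
    az, gr = np.radians(azimuth_deg), np.radians(graze_deg)
    # Sensor position on a sphere of radius `standoff` around the scene center.
    pos = standoff * np.array([np.cos(gr) * np.cos(az),
                               np.cos(gr) * np.sin(az),
                               np.sin(gr)])
    forward = -pos / np.linalg.norm(pos)                 # boresight toward origin
    right = np.cross(forward, np.array([0.0, 0.0, 1.0]))
    right /= np.linalg.norm(right)
    up = np.cross(right, forward)
    pose = np.eye(4)
    pose[:3, :3] = np.stack([right, up, -forward], axis=1)  # columns: right, up, back
    pose[:3, 3] = pos
    return pose
```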
We introduce a compressed sensing technique for leveraging prior electro-optic (EO) imagery to improve 3D synthetic aperture radar (SAR) imaging performance. Specifically, we build on existing iterative reconstruction algorithms by guiding the reconstruction process with a joint-sparsity regularization term that captures the complementary structural information shared between EO and SAR via a sparsifying transform in the 3D image domain. We demonstrate this approach using the wavelet transform, the non-uniform fast Fourier transform (NUFFT), and optimizers built on autograd, utilizing the 2004 AFRL Gotcha SAR dataset with complementary EO imagery collected from the 2013 Minor Area Motion Imagery (MAMI) collection and more recent (2016) satellite collections over the same area. Results indicate significant improvements in 2D and 3D imaging performance via incorporation of the cross-modality EO prior, which we attribute to the convex problem formulation.
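To make the joint-sparsity idea concrete, here is a minimal sketch of one plausible formulation: an ISTA-style iteration in which group soft-thresholding couples the wavelet coefficients of the SAR reconstruction to those of a coregistered EO prior. The operators `A`/`AH`, the `db4` wavelet, and all weights are illustrative stand-ins, not the paper's exact algorithm.

```python
import numpy as np
import pywt

def joint_soft_threshold(c_sar, c_eo, lam):
    """Group soft-thresholding across modalities: wavelet coefficients
    survive where the joint (SAR, EO) magnitude is large, which is how
    shared structure in the EO prior guides the SAR solution."""
    mag = np.sqrt(c_sar ** 2 + c_eo ** 2)
    scale = np.maximum(1.0 - lam / np.maximum(mag, 1e-12), 0.0)
    return c_sar * scale

def reconstruct(y, A, AH, eo_prior, lam=0.05, step=1.0, iters=50):
    """ISTA with a joint-sparsity wavelet regularizer. A/AH are the
    forward and adjoint imaging operators (e.g., a NUFFT pair); the
    reflectivity is treated as real-valued and the grid as dyadic for
    simplicity."""
    x = np.real(AH(y))
    c_eo, slices = pywt.coeffs_to_array(pywt.wavedecn(eo_prior, 'db4'))
    for _ in range(iters):
        x = np.real(x - step * AH(A(x) - y))          # gradient step on ||Ax - y||^2
        c_sar, slices = pywt.coeffs_to_array(pywt.wavedecn(x, 'db4'))
        c_sar = joint_soft_threshold(c_sar, c_eo, lam * step)
        x = pywt.waverecn(pywt.array_to_coeffs(c_sar, slices,
                                               output_format='wavedecn'), 'db4')
    return x
```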
The achievable performance of Synthetic Aperture Radar (SAR) localization and imaging is ultimately limited by that of noise-free phase history data acquisition and imaging; noise and motion errors (not investigated here) can only degrade this performance. The range of parameters corresponding to good performance constitutes an envelope of good performance. Point scatterer location accuracy, resolution, and focus (IPR, or impulse response) vary across the parameters of frequency, bandwidth, stand-off range, number of spatial (array) and temporal samples, and linear or circular flight path and its squint angle from broadside. Practical point source localization uses an iterative process to identify point source volume cell (voxel) centers and precisely locate these within the voxel by applying a damped exponential estimate of the ideal point source. Measurements between each dimension of a grid of point sources determine any distortion of the 3D image volume, and the IPR determines any change in focus over the grid points in the volume. IPR measurements are made by searching for the half-amplitude points of the main lobe of the point scatterer response in each dimension. The performance of SAR parameters is measured for a spherical array model of level circular flight paths and their linearized alternatives. The phase history data are modeled with both single and multiple direct rays reflecting from idealized point sources or corner reflectors as well as a spherical differential range model. Performance differences due to different combinations of phase history data collection and imaging techniques are cataloged.
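As a concrete illustration of the IPR measurement described above, the sketch below finds the half-amplitude (−6 dB) main-lobe width of a sampled 1D cut through a point-scatterer response, assuming the main lobe is well sampled; the linear interpolation between samples is an illustrative choice.

```python
import numpy as np

def half_amplitude_width(response, spacing):
    """Main-lobe width of a 1D point-response cut at the half-amplitude
    (-6 dB) level, with linear interpolation between samples for
    sub-cell precision. Assumes the main lobe is fully sampled."""
    mag = np.abs(np.asarray(response))
    p = int(np.argmax(mag))
    half = 0.5 * mag[p]
    r = p                                   # walk right to the first crossing
    while mag[r + 1] >= half:
        r += 1
    right = r + (mag[r] - half) / (mag[r] - mag[r + 1])
    l = p                                   # walk left to the first crossing
    while mag[l - 1] >= half:
        l -= 1
    left = l - (mag[l] - half) / (mag[l] - mag[l - 1])
    return (right - left) * spacing
```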
We present an exploration of collection geometries for producing three-dimensionally (3D) focused synthetic aperture radar (SAR) derived point clouds. We consider collection geometries that can be produced by a series of continuous curves, such as multiple flight paths of a fixed wing aircraft or multiple passes of a satellite orbiting the earth. As part of our analysis, we use sparse methods to reconstruct undersampled radar data. We use back-projection to focus the radar data into the spatial domain, onto a uniform volumetric grid. Additionally, we use a 3D resonance finding algorithm to extract scattering centers from volumetric radar data to produce 3D point clouds. Our analysis is based upon synthetic radar data produced using parameters derived from our laboratory’s indoor turntable inverse synthetic aperture radar (ISAR) system. A key point of our analysis is to determine how many repeat passes are required to achieve a given fidelity of an object’s 3D representation. Our analysis includes a comparison with interferometric methods, particularly with regard to fidelity and point cloud density. We use a digital model of a civilian pickup truck that has been validated for use in synthetic prediction, both as a full-size model in outdoor collects and as a reduced scale model measured indoors in our lab. Future research directions are also discussed.
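For reference, a minimal sketch of back-projection onto a uniform voxel grid, assuming range-compressed pulses and known platform positions; variable names and the phase convention are illustrative.

```python
import numpy as np

def backproject(pulses, positions, range_bins, voxels, fc):
    """Focus range-compressed pulses onto a uniform 3D voxel grid.
    pulses: (num_pulses, num_bins) complex range profiles
    positions: (num_pulses, 3) antenna phase-center positions
    range_bins: (num_bins,) one-way range axis of the profiles
    voxels: (num_voxels, 3) grid point coordinates
    fc: center frequency in Hz"""
    c = 299792458.0
    image = np.zeros(len(voxels), dtype=complex)
    for pulse, pos in zip(pulses, positions):
        r = np.linalg.norm(voxels - pos, axis=1)   # one-way range to each voxel
        sample = (np.interp(r, range_bins, pulse.real) +
                  1j * np.interp(r, range_bins, pulse.imag))
        image += sample * np.exp(1j * 4 * np.pi * fc / c * r)  # matched-phase correction
    return image
```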
The Interface Launcher (iLauncher) technology automates the submission of HPC jobs and provides a mechanism for rapidly prototyping web interfaces from the user’s desktop to powerful capabilities running on back-end high performance computing (HPC) resources, including Amazon Web Services (AWS) GovCloud, distributed clusters of heterogeneous nodes with multiple graphics processing units (GPUs) per node running the Slurm batch queuing software, and Department of Defense (DoD) supercomputers running the Portable Batch System (PBS) software. We present some of the latest advancements in iLauncher plugin development, particularly in the use of channels to make deployment of plugins easier for groups of users. We also describe our latest plugin for using PostgreSQL with the TimescaleDB and PostGIS extensions in a Singularity container, with the pgAdmin and Jupyter Notebook web interfaces, for use on these HPC resources.
Deep learning is a technology that has been applied to a host of problems in the decade since its introduction. Of particular interest for both defense and civil applications is automatic target recognition, which is a subset of visual detection and classification. However, these classification algorithms must be robust to out-of-library confusers and able to generalize across a variety of target types. In this paper, we augment the existing Synthetic and Measured Paired Labeled Experiment (SAMPLE) dataset of synthetic aperture radar images with the remainder of the public MSTAR dataset and define a set of experiments to encourage the development of traits beyond simple classification accuracy for target recognition algorithms.
While traditional Fourier methods of SAR imaging are well known and easy to implement, they have limitations in terms of quality, particularly with respect to speckle, scintillation, and sidelobe artifacts. Methods of SAR imaging that have shown promise include superresolution methods like the Minimum Variance Method (MVM) and the Multiple Signal Classification (MUSIC) algorithm; however, these algorithms are computationally intensive. Both require estimating and manipulating a correlation matrix, as well as computing the image spectrum through a quadratic form for each image pixel. This paper presents an efficient method for estimating the correlation matrix and shows how the structure of the correlation matrix can be exploited to efficiently compute the aforementioned superresolution methods.
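For context, this is the baseline MVM computation that such a method would accelerate: a sample correlation matrix estimated from subaperture snapshots, followed by one quadratic form per image pixel. This is a generic sketch; the paper's efficient estimator and structure-exploiting computation are not reproduced here.

```python
import numpy as np

def estimate_correlation(snapshots):
    """Sample correlation matrix from subaperture snapshots.
    snapshots: (num_snapshots, N) complex phase-history vectors."""
    return np.einsum('mi,mj->ij', snapshots, snapshots.conj()) / len(snapshots)

def mvm_image(R, steering):
    """Minimum Variance Method (Capon) spectrum: one quadratic form per
    pixel, P_k = 1 / (a_k^H R^{-1} a_k). steering: (num_pixels, N)."""
    N = R.shape[0]
    R = R + 1e-3 * np.trace(R).real / N * np.eye(N)   # diagonal loading for stability
    R_inv = np.linalg.inv(R)
    quad = np.einsum('pi,ij,pj->p', steering.conj(), R_inv, steering).real
    return 1.0 / quad
```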
This paper describes a new measurement campaign for SAR images. The data consist of images collected by the Swedish LORA system operating in the VHF band (19–90 MHz). Due to the low operating frequency, detecting targets concealed in a forest is possible. Thus, this paper aims to share with the community the results of utilizing new VHF-band SAR data that enables the development of new methods for target detection and other change detection. In particular, to show the applicability of the new dataset, a simple change detection method was applied to detect targets in a forest, resulting in 100% detection with no false alarms in a particular region of interest.
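The abstract does not spell out the change detection method; as an illustrative stand-in, a simple log-ratio detector over two coregistered amplitude images might look like the following.

```python
import numpy as np

def detect_changes(reference, surveillance, threshold_db=6.0):
    """Simple incoherent change detection between two coregistered SAR
    amplitude images: flag pixels whose log-amplitude ratio exceeds a
    threshold. A minimal stand-in for the method described above."""
    eps = 1e-12
    ratio_db = 20.0 * np.log10((np.abs(surveillance) + eps) /
                               (np.abs(reference) + eps))
    return ratio_db > threshold_db   # boolean change mask
```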
Recent growth in the tasking and collection of Synthetic Aperture Radar (SAR) imagery, in particular commercial satellite availability, provides new opportunities for wide area change monitoring. Classic applications of change detection in SAR compare individual pixels, but as higher resolution imagery has become widely available, there is an opportunity to leverage structural image content to produce more informative change identification. Deep learning techniques encompass the state of the art in identifying structure in imagery but are notoriously data-hungry. A recent body of research has grown around a technique called self-supervised representation learning (SSRL) to help minimize the need for handmade labels. We build on this research and train a model for use in SAR change detection. We leverage an SSRL approach known as contrastive learning, which encourages a deep learning model to identify salient image features through noise and other augmentations without an immediate need for hand-engineered labels. The representation learned through this process can then be applied to other supervised or unsupervised tasks, and we demonstrate the use of this learned embedding to identify change across SAR image pairs.
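A minimal sketch of the contrastive (NT-Xent) loss underlying this style of SSRL, where `z1[i]` and `z2[i]` are embeddings of two augmented views of the same SAR chip; this is the generic SimCLR-style loss, not necessarily the authors' exact variant.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss over a batch of embedding pairs: each
    view's positive is its counterpart; all other samples are negatives."""
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = z @ z.T / temperature               # cosine similarities
    n = len(z1)
    np.fill_diagonal(sim, -np.inf)            # exclude self-similarity
    targets = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), targets].mean()
```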
Active learning improves the performance of machine learning methods by judiciously selecting a limited number of unlabeled data points to query for labels, with the aim of maximally improving the underlying classifier’s performance. Recent gains have been made using sequential active learning for synthetic aperture radar (SAR) data.1 In each iteration, sequential active learning selects a query set of size one, while batch active learning selects a query set of multiple data points. While batch active learning methods exhibit greater efficiency, the challenge lies in maintaining model accuracy relative to sequential active learning methods. We developed a novel, two-part approach for batch active learning: Dijkstra’s Annulus Core-Set (DAC) for core-set generation and LocalMax for batch sampling. The batch active learning process that combines DAC and LocalMax achieves nearly identical accuracy to sequential active learning but is more efficient by a factor proportional to the batch size. As an application, a pipeline is built on transfer learning feature embedding, graph learning, DAC, and LocalMax to classify the FUSAR-Ship and OpenSARShip datasets. Our pipeline outperforms the state-of-the-art CNN-based methods.
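One illustrative reading of the LocalMax idea: choose the batch from points that are local maxima of an acquisition function on the similarity graph, so queries are both high-value and mutually well separated. The sketch below is an interpretation for illustration, not the authors' implementation.

```python
import numpy as np

def local_max_batch(acquisition, neighbors, batch_size):
    """Pick a batch of query points that are local maxima of the
    acquisition function on the similarity graph.
    acquisition: (n,) acquisition values
    neighbors: list of arrays, neighbors[i] = graph neighbors of node i"""
    is_local_max = np.array([
        all(acquisition[i] >= acquisition[j] for j in neighbors[i])
        for i in range(len(acquisition))
    ])
    candidates = np.where(is_local_max)[0]
    order = np.argsort(acquisition[candidates])[::-1]   # highest value first
    return candidates[order[:batch_size]]
```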
SAR imagery is used to conduct surveillance of maritime vessels. The signature of a moving ship causes azimuthal smearing in acquired images, which can be exploited by autofocus algorithms to conduct moving target detection (MTD). This study highlights the use of the Arbitrary Rigid Object Motion Autofocus (AROMA) algorithm within a sliding window to achieve autonomous detection of moving vessels within a given scene. AROMA is an extension of Phase Gradient Autofocus (PGA), an algorithm traditionally used in autofocus-based MTD. AROMA is a three-dimensional generalization of the 1D PGA, using a physical signal model of the target relative to the imaging radar to compensate for phase errors and generate refocused imagery as if the target were stationary. Comparing refocused imagery to the original unfocused images can reveal an increase in image sharpness sufficient to indicate the presence of a moving target. This approach samples patches within a SAR image by sliding a window across the scene in a raster pattern, testing for moving targets within each window using AROMA. Effective sliding window algorithms employ overlapping patches to ensure complete coverage of targets of interest. This often results in redundant target identifications, with the algorithm selecting multiple windows containing partial or complete imagery of the same target. A consolidation algorithm selects the single window with the maximum magnitude-sum value for each target’s scattering, thus eliminating repeated outputs. This study compares the detection and false alarm rates of the AROMA-based MTD approach to those of traditional methods.
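A hedged sketch of the sliding-window detection loop, with a generic normalized-sharpness metric and a placeholder `refocus()` standing in for AROMA; the window size, stride, and gain threshold are illustrative.

```python
import numpy as np

def sharpness(img):
    """Normalized image sharpness: sum of squared normalized intensities.
    Refocusing a moving target raises this metric relative to the
    smeared original."""
    p = np.abs(img) ** 2
    p = p / p.sum()
    return np.sum(p ** 2)

def sliding_window_mtd(scene, refocus, win=128, stride=64, gain_threshold=1.2):
    """Slide a window over the scene in a raster pattern, refocus each
    patch (refocus() stands in for AROMA here), and declare a detection
    where the sharpness gain exceeds a threshold."""
    detections = []
    rows, cols = scene.shape
    for r in range(0, rows - win + 1, stride):
        for c in range(0, cols - win + 1, stride):
            patch = scene[r:r + win, c:c + win]
            gain = sharpness(refocus(patch)) / sharpness(patch)
            if gain > gain_threshold:
                detections.append((r, c, gain))
    return detections
```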
Advancements in the tasking and collection capabilities of SAR providers have reduced the spatiotemporal constraints on SAR-based change detection. As data constraints are relaxed, pairwise SAR-based change detection algorithms are rapidly becoming insufficient for summarizing change activity. To address this, we present two multi-temporal change detection algorithms that categorize, filter, and reduce the changes detected in any number of repeat pass SAR scenes: (1) Object Permanence change detection (OPcd); and (2) Activity Exclusion change detection (AEcd). When compared to traditional pairwise change detection methods, OPcd and AEcd allow for rapid digestion and efficient visualization of changes.
Decision level fusion algorithms combine separate classification scores of a test sample to make a unified class declaration. The aim of decision level fusion is to achieve better classification performance by combining decisions rather than picking the single best-performing algorithm. Multi-modal fusion can be achieved by fusing the scores of deep learning models trained on different sensing modalities and tested on a target imaged by co-located sensors. Given differences in phenomenology, fusing EO sensors with SAR may boost performance when extended operating conditions are detrimental to one modality more than the other. The EO modalities discussed in this work (VNIR and MWIR) are sensitive to the time of day, while SAR is robust to it. Conversely, SAR returns of a target can vary greatly when aspect angle changes, while EO modalities are relatively robust. This work analyzes the effectiveness of decision level fusion algorithms on MWIR, VNIR, and SAR modalities given disparate times of day and collection aspects.
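The abstract does not specify which fusion rules are analyzed; as one common example, decision-level fusion by weighted averaging of per-modality softmax scores looks like this.

```python
import numpy as np

def fuse_decisions(score_list, weights=None):
    """Decision-level fusion by weighted averaging of per-modality
    posterior scores (one plausible rule among many, e.g. sum, product,
    or max). score_list: list of (num_classes,) softmax outputs from
    models trained on different modalities (e.g., VNIR, MWIR, SAR)."""
    scores = np.stack(score_list)
    if weights is None:
        weights = np.ones(len(scores)) / len(scores)
    fused = weights @ scores          # weighted sum across modalities
    return int(np.argmax(fused)), fused
```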
Synthetic Aperture Radar (SAR) imaging provides useful remote sensing capabilities because of its ability to image day-or-night and through clouds by using radar waves. However, understanding SAR vulnerabilities is important in developing data exploitation techniques that are resistant to “spoofing.” “Spoofing” is a type of attack where a virtual object is created in a SAR image by coherently adding the expected radar returns from a target into the radar returns from the background. This research explored the effects of spoofing on Convolutional Neural Network (CNN) models for vehicle classification from the SAMPLE V2 data set. CNN models trained on SAR images with real targets in the scene were not able to accurately generalize to images with virtual targets in the scene; however, a model trained on real data identified spoofed images with an accuracy over 95.0% based on the confidence value outputs and a known proportion of spoofed images. Furthermore, a specialized training methodology enabled a CNN model to classify images as real or spoofed with an accuracy of more than 99.9% and classify the vehicle type with an accuracy of over 99.5%. This research determined the effects of real and spoofed SAR images on CNN models and what methods could be leveraged to improve model performance.
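A simplified sketch of the confidence-based spoof flagging described above, assuming the decision reduces to thresholding the maximum softmax confidence; the fixed threshold here glosses over the paper's use of a known spoof proportion for calibration.

```python
import numpy as np

def flag_spoofed(softmax_outputs, confidence_threshold=0.9):
    """Flag likely-spoofed images by thresholding the model's maximum
    softmax confidence: a model trained only on real targets tends to
    be less confident on spoofed ones. Threshold is illustrative.
    softmax_outputs: (num_images, num_classes) array."""
    confidence = softmax_outputs.max(axis=1)     # (num_images,)
    return confidence < confidence_threshold     # True = suspected spoof
```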
Automatic target recognition with synthetic aperture radar (SAR) data is a challenging problem due to the complexity of the images and the difficulty in acquiring labels. Recent work1 used a convolutional variational autoencoder to extract relevant features prior to constructing a similarity graph in a graph-based active learning framework for SAR data. In this work we present two novel methods for classifying SAR data that use convolutional neural network (CNN) feature extraction together with techniques from graph-based semi-supervised learning in an end-to-end manner that can provide improved classification performance in the small labeled dataset regimes that are common in SAR ATR. First, we introduce Laplace Output Activation Neural Networks (LOAN Networks) as a way of directly optimizing feature embeddings for use with graph-based semi-supervised learning techniques. Next, we introduce Pseudo Label Propagation Neural Networks (PsLaPN Networks) as an inexpensive way to both boost the training signal and combat overconfidence and poor model calibration in neural networks. We present a novel derivation of simple formulas for the direct and efficient computation of derivatives of the outputs of graph-based algorithms like label propagation2 for use in the training of our networks. We test the proposed end-to-end networks for active learning on OpenSARShip, a SAR dataset, where both methods surpass the previous state-of-the-art.
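For context, a minimal sketch of the classic clamped label propagation iteration whose outputs (and, in the paper, their derivatives) enter the training loop; the authors' differentiable formulation is not reproduced here.

```python
import numpy as np

def label_propagation(W, labels, labeled_idx, num_classes, iters=100):
    """Graph label propagation: iteratively average one-hot label beliefs
    over graph neighbors, clamping the labeled nodes each step.
    W: (n, n) symmetric similarity (adjacency) matrix
    labels: (n,) integer labels, valid at labeled_idx only."""
    P = W / W.sum(axis=1, keepdims=True)         # row-stochastic transition matrix
    F = np.zeros((W.shape[0], num_classes))
    F[labeled_idx, labels[labeled_idx]] = 1.0    # clamp known labels
    clamp = F[labeled_idx].copy()
    for _ in range(iters):
        F = P @ F
        F[labeled_idx] = clamp
    return F.argmax(axis=1)
```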
This work investigates the application of compressed sensing algorithms to the problem of novel view synthesis in synthetic aperture radar (SAR). We demonstrate the ability to generate new images of a SAR target from a sparse set of looks at said target, and we show that this can be used as a data augmentation technique for deep learning-based automatic target recognition (ATR). The newly synthesized views can be used both to enlarge the original, sparse training set, and in transfer learning as a source dataset for initial training of the network. The success of the approach is quantified by measuring ATR performance on the MSTAR dataset.
Automatic target recognition with synthetic aperture radar (SAR) data is a challenging image classification problem due to the difficulty in acquiring the large labeled training sets required for conventional deep learning methods. Recent work1 addressed this problem by utilizing powerful tools in graph-based semi-supervised learning and active learning, and achieved state-of-the-art results on the Moving and Stationary Target Acquisition and Recognition (MSTAR) dataset with less labeled data compared to existing techniques. A key part of the previous work was the use of unsupervised deep learning, in particular a convolutional variational autoencoder, to embed the MSTAR images into a meaningful feature space prior to constructing the similarity graph. In this paper, we develop a contrastive SimCLR framework for feature extraction from MSTAR images by using data augmentations specific to SAR imagery. We show that our contrastive embedding results in improved performance over the variational autoencoder similarity graph method in automatic target recognition on the MSTAR dataset. We also perform a comparative study of the quality of the autoencoder and contrastive embeddings by training support vector machines (SVM) at various label rates, applying spectral clustering, and evaluating graph-cut energies, all of which show that the contrastive learning embedding is superior to the autoencoder embedding.
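The abstract does not enumerate the SAR-specific augmentations; a plausible illustrative pair is multiplicative speckle-like noise plus a random crop, sketched below.

```python
import numpy as np

def augment_sar(chip, rng, crop=54):
    """Two plausible SAR-specific augmentations for contrastive learning
    (illustrative; not necessarily the paper's exact augmentation set):
    multiplicative speckle-like noise and a random crop."""
    speckled = chip * rng.exponential(scale=1.0, size=chip.shape)  # speckle is multiplicative
    r = rng.integers(0, chip.shape[0] - crop + 1)
    c = rng.integers(0, chip.shape[1] - crop + 1)
    return speckled[r:r + crop, c:c + crop]

# Two independent calls give the two views fed to the contrastive loss:
# rng = np.random.default_rng(0)
# view_a, view_b = augment_sar(chip, rng), augment_sar(chip, rng)
```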
Synthetic Aperture Radar (SAR) automatic target recognition (ATR) is a key technique for SAR image analysis in military activities, and accurate SAR ATR can support command and decision-making. In this work, we propose a novel SAR ATR framework with a human in the loop. The framework consists of a Reinforcement Learning (RL) Agent followed by a GNN-based classifier. The RL Agent is capable of learning from human feedback to identify the region of target (RoT) in the SAR image. The RoT is then used to construct the input graph for the GNN classifier to perform target classification. By learning from human feedback, the RL Agent can focus on the RoT and filter out irrelevant and distracting signals in the input SAR images. We evaluate the proposed framework on the Moving and Stationary Target Acquisition and Recognition (MSTAR) dataset. The results show that incorporating human feedback can improve classification accuracy. By visualizing the results, we observe that the RL Agent can effectively reduce irrelevant SAR signals in the input SAR images after learning from human feedback.
Canada’s RADARSAT-2 (R-2) low-resolution ScanSAR Detection of Vessels, Wide swath, Far incidence angle (DVWF) mode was implemented for vessel detection over wide swaths. Between 2013 and 2020, DVWF imagery was used operationally by the Department of National Defence (DND) and exploited using a tool that detects clusters of bright pixels with vessel-like signatures in the SAR imagery. The detected objects were then associated with automatic identification system (AIS) messages. Not all vessels transmit AIS messages, and not all detected vessels that transmit AIS are associated; therefore, analysts observed non-associated objects in an attempt to validate detected vessels. The new work described here evaluates the use of a convolutional neural network (CNN) that detects and rejects false alarms to assist analysts with this validation. A CNN is first calibrated with 1,963 DVWF images containing 16,490 detected and AIS-associated vessels (referred to as SAR-detected and AIS-associated (SAR-AIS) vessels) and 5,654 known false alarms, mostly off the coasts of Canada. The CNN’s usability is then evaluated globally with 94,562 DVWF images containing 209,746 SAR-AIS vessels, 203,559 validated vessels, and 261,499 unknown objects, detected operationally and non-operationally. During calibration, the CNN classified SAR-AIS vessels and false alarms with 91.6% and 96.3% accuracy, respectively. When evaluating usability, the CNN correctly classified 93.7% of the 209,746 SAR-AIS vessels. It declared false alarms for 5.4% of the 203,559 validated vessels and 58.1% of the 261,499 unknown objects. These results suggest that a CNN designed to detect and reject false alarms could reduce the number of objects requiring validation by approximately 30%.
Attempts to use synthetic data to augment measured data for improved synthetic aperture radar (SAR) automatic target recognition (ATR) performance have been hampered by domain mismatch between datasets. Past work that leveraged synthetic data in a transfer learning framework was successful but primarily focused on transferring generic SAR features. Recently, SAMPLE, a paired synthetic and measured dataset, was introduced to the SAR community, enabling demonstration of good ATR performance using 100% synthetic data. In this work, we examine how to leverage synthetic data and measured data to boost ATR using transfer learning. The synthetic dataset corresponds to the MSTAR 15° dataset. We demonstrate that high quality synthetic data can enhance ATR performance even when substantial measured data is available, and that synthetic data can reduce measured data requirements by over 50% while maintaining classification accuracy.
Synthetic aperture radar (SAR) is an all-weather sensor with many uses, including target recognition. We present work on training a network on synthetic SAR imagery for good performance on measured images. Previous work applied PCA decomposition to a dataset of synthetic and measured SAR imagery for image recognition, with initially promising results. This work continues that line of research with kernel PCA using a number of kernels. These techniques are fit using synthetic SAR images, and the measured images are projected into the resulting space at test time. Networks are trained on the lower-dimensional vectors from the synthetic images and tested on measured images. Performing dimensionality reduction in this way has applications for increased speed of network training and evaluation and for reducing the difference between the synthetic and measured domains. We present results on the publicly available SAMPLE dataset.
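A minimal sketch of the fit-on-synthetic, project-measured workflow using scikit-learn's KernelPCA; the component count and kernel choice are illustrative.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

def fit_and_project(synthetic, measured, n_components=128, kernel='rbf'):
    """Fit kernel PCA on flattened synthetic SAR chips, then project the
    measured chips into the same low-dimensional space at test time.
    Any of several kernels (rbf, poly, cosine, ...) can be swapped in."""
    kpca = KernelPCA(n_components=n_components, kernel=kernel)
    syn_features = kpca.fit_transform(synthetic.reshape(len(synthetic), -1))
    meas_features = kpca.transform(measured.reshape(len(measured), -1))
    return syn_features, meas_features
```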
This paper addresses the problem of adequately training deep learning networks to be operational on measured Synthetic Aperture Radar (SAR) data when the quantity of measured data alone is insufficient. In particular, this is a study in transfer learning utilizing synthetically generated SAR data and measured SAR data to train a deep learning algorithm to classify military tactical vehicles. The present study is motivated by the sparsity of measured data for Air Force targets of interest. Specifically, this effort builds on an existing Convolutional Neural Network (CNN) architecture, i.e., “Understanding the Synthetic and Measured GAP from the CNN Classifier Perspective,” and aims to improve achievable performance by increasing the algorithm complexity and performing parameter analysis on MSTAR data.
Correctly classifying SAR imagery is a critical task in many applications; however, unseen samples may be difficult for machine learning models to classify. Challenges include sensitivity to slight changes in target alignment and backgrounds. To overcome this, learning algorithms require large amounts of data taken under a variety of conditions. Since collection of measured data is time-consuming, costly, and sometimes impossible, researchers utilize synthetic data to diversify existing datasets. Unfortunately, arbitrary classification algorithms do not accurately handle the domain shift from synthetic to measured images. To eliminate this shift, the community has used GAN-based image-to-image approaches to transform synthetic images to appear measured. Our experimentation indicates that some baseline approaches break down when measured data is scarce, failing to preserve labels and/or capture the full measured distribution. To alleviate this, we design a novel discriminator that separates content from style. Using this discriminator in a GAN, we force the generation procedure to preserve content while altering style. By preserving content, we avoid label-flipping and mode-seeking caused by biases in the available measured data. Through extensive experiments, we show how our method is able to predict the measured distribution for out-of-sample images.
Accurate classification of air-to-ground targets of interest is extremely important. Measured data is expensive and difficult to gather for training deep learning networks. By creating synthetic images that can train deep learning networks to classify measured images, the effort and money needed to train deep learning networks for target classification are greatly reduced. This effort addresses a key technical challenge associated with training a deep learning network by augmenting a limited set of measured data with synthetic Synthetic Aperture Radar (SAR) data to train a deep learning network to classify military tactical vehicles. To account for the differences between synthetic and measured SAR data, this effort performs extensive data augmentation using synthetic data to create target and background variability. The goal is to create variability in a physically realistic way so that high classification performance is achieved when training with synthetic data. In addition, architecture modifications are investigated to assess their contribution to performance.