Because measured synthetic aperture radar (SAR) training data is limited in availability, automatic target recognition (ATR) researchers have turned to synthetic SAR images. Unfortunately, training neural network classifiers on this synthetic data does not yield robust models. Assuming access to limited measured SAR data, we evaluate two natural transfer-learning approaches to this problem and show that neither leads to a successful solution. Motivated by the successes of contrastive, representation, and metric learning, we propose a novel graph-based pretraining approach to transfer knowledge from synthetic samples to real-world scenarios. We show that this approach is applicable to three different neural network architectures, yielding improvements over the baseline approach of 19.21%, 28.70%, and 8.27%, respectively. We also demonstrate that our method is robust to the choice of hyperparameters.
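The abstract does not spell out the pretraining objective, so the following is only a minimal sketch of one common graph-based contrastive formulation under an assumed setup: a bipartite label graph whose edges connect synthetic and measured embeddings of the same class, pulled together by an InfoNCE-style loss. All names (`graph_contrastive_loss`, `z_syn`, `z_meas`) are hypothetical illustrations, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def graph_contrastive_loss(z_syn, z_meas, labels_syn, labels_meas, tau=0.1):
    """Pull synthetic/measured embedding pairs that share a class label
    (the edges of a bipartite label graph) together; push apart the rest.

    z_syn:  (Ns, d) embeddings of synthetic samples
    z_meas: (Nm, d) embeddings of measured samples
    """
    z_syn = F.normalize(z_syn, dim=1)
    z_meas = F.normalize(z_meas, dim=1)
    sim = z_syn @ z_meas.t() / tau  # (Ns, Nm) scaled cosine similarities
    # Adjacency matrix: an edge wherever the class labels agree.
    adj = (labels_syn[:, None] == labels_meas[None, :]).float()
    log_prob = F.log_softmax(sim, dim=1)
    # Average log-probability over each synthetic node's positive neighbors;
    # clamp avoids division by zero for nodes with no measured positives.
    pos = (adj * log_prob).sum(dim=1) / adj.sum(dim=1).clamp(min=1)
    return -pos.mean()
```

In a pipeline of this shape, an encoder would be pretrained with this loss over the synthetic pool plus the limited measured samples, after which a classifier head is fine-tuned on the measured data.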
Correctly classifying SAR imagery is a critical task in many applications; however, unseen samples may be difficult for machine learning models to classify. Challenges include sensitivity to slight changes in target alignment and backgrounds. To overcome this, learning algorithms require large amounts of data taken under a variety of conditions. Since collecting measured data is time-consuming, costly, and sometimes impossible, researchers use synthetic data to diversify existing datasets. Unfortunately, off-the-shelf classification algorithms do not accurately handle the domain shift from synthetic to measured images. To eliminate this shift, the community has used GAN-based image-to-image translation to make synthetic images appear measured. Our experiments indicate that such baseline approaches fail when measured data is scarce: they do not preserve labels and/or do not capture the full measured distribution. To alleviate this, we design a novel discriminator that separates content from style. Using this discriminator in a GAN, we force the generation procedure to preserve content while altering style. By preserving content, we avoid the label flipping and mode seeking caused by biases in the available measured data. Through extensive experiments, we show that our method predicts the measured distribution for out-of-sample images.
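The abstract does not describe the discriminator architecture; purely as an illustrative assumption, the sketch below models "separating content from style" as a two-headed discriminator in the spirit of auxiliary-classifier GANs: a style head judges whether an image looks measured or translated, while a content head predicts the target class so label flips are penalized. The class name, channel widths, and single-channel input are hypothetical choices.

```python
import torch
import torch.nn as nn

class ContentStyleDiscriminator(nn.Module):
    """Two-headed discriminator: a 'style' head scores measured-vs-translated
    appearance, and a 'content' head predicts the target class, rewarding a
    generator that alters style while keeping class content intact."""

    def __init__(self, num_classes, ch=64):
        super().__init__()
        # Assumes single-channel (magnitude) SAR chips as input.
        self.backbone = nn.Sequential(
            nn.Conv2d(1, ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(ch, ch * 2, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(ch * 2, ch * 4, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.style_head = nn.Linear(ch * 4, 1)             # real vs. translated
        self.content_head = nn.Linear(ch * 4, num_classes) # class prediction

    def forward(self, x):
        h = self.backbone(x)
        return self.style_head(h), self.content_head(h)
```

Under this assumed design, the generator would be trained adversarially against the style head while also minimizing a classification loss on the content head, so translated images change appearance without changing their labels.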