Deep convolutional neural networks (CNNs) have proven successful at learning task-specific features that achieve state-of-the-art performance on many computer vision tasks. For object detection applications, the introduction of the region-based CNN (R-CNN) and its successors, Fast R-CNN and Faster R-CNN, has produced high accuracies with efficient run times. In Faster R-CNN, a region proposal network (RPN) shares convolutional layers between object proposal and detection with no loss in accuracy. However, these approaches are trained in a fully supervised manner, requiring large numbers of samples for each object class, with classes pre-determined by manual annotation. This dependence on large-scale supervision limits their utility for many real-world applications, including those involving difficult-to-detect, small, and sparse target objects in variable environments. Alternatively, exemplar learning is a paradigm for discovering visual similarities in an unsupervised fashion from potentially very small numbers of examples: surrogate classes or outliers are discovered via the inherent empirical characteristics of the objects themselves. In this work, we merge the strengths of CNN architectures with pre-processing steps borrowed from exemplar learning. We employ a semi-supervised approach that combines generically learned class-relatedness with CNN-based detectors. We train and test the approach on aerial imagery collected by unmanned aircraft systems (UAS) for challenging real-world, small object detection tasks.
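As a concrete illustration of the detector family the abstract describes, below is a minimal inference sketch, assuming PyTorch and torchvision (neither is named in the source); it is not the authors' pipeline, only a standard pretrained Faster R-CNN whose region proposal network shares backbone features with the detection head:

```python
# Minimal Faster R-CNN inference sketch (illustrative only; not the
# authors' code). Assumes PyTorch and torchvision are installed.
import torch
import torchvision

# Pretrained Faster R-CNN with a ResNet-50 FPN backbone; the RPN and the
# detection head both consume the same shared convolutional features.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

# Dummy 3-channel image; a real application would load a UAS frame here.
image = torch.rand(3, 480, 640)

with torch.no_grad():
    # The RPN proposes candidate regions, which the detection head then
    # classifies and refines into final boxes, labels, and scores.
    predictions = model([image])[0]

print(predictions["boxes"].shape)  # (N, 4) detected boxes
print(predictions["scores"][:5])   # top confidence scores
```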
Unmanned aircraft systems (UAS) have gained utility in the Navy for many purposes, including facility needs, security, and intelligence, surveillance, and reconnaissance (ISR). UAS surveys can be employed in place of personnel to reduce safety risks, but they generate large quantities of data that often require manual review. Research and development of automated methods to identify targets of interest in such imagery can provide multiple benefits, including increased efficiency, decreased cost, and potentially lives saved through identification of hazards or threats. This paper presents a methodology to efficiently and effectively identify cryptic target objects in UAS imagery. The approach involves flying and processing airborne imagery under low-light conditions to find low-profile objects (i.e., birds) in beach and desert-like environments. The object classification algorithms counter the low-light conditions and low-profile nature of the objects of interest using cascading models and a tailored deep convolutional neural network (CNN) architecture. The models identified and counted endangered birds (California least terns) and their nesting sites on beaches from UAS survey data, achieving negative/positive classification accuracies on candidate images upwards of 97% and a detection F1 score of 0.837.
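To make the cascading idea and the reported metric concrete, here is a minimal sketch with hypothetical stage functions (the paper's actual models are not reproduced here): a cheap first stage rejects most negatives before the expensive CNN runs, and the F1 score is the usual harmonic mean of precision and recall, f1 = 2PR / (P + R):

```python
# Toy two-stage cascade sketch (hypothetical stand-ins, not the paper's
# models), plus the standard F1 computation on illustrative labels.
import numpy as np
from sklearn.metrics import f1_score

def cheap_candidate_filter(patch):
    # Stage 1: fast brightness screen, standing in for inexpensive
    # candidate generation; most negative patches exit the cascade here.
    return patch.mean() > 0.5

def cnn_classifier(patch):
    # Stage 2 placeholder: the tailored CNN would classify survivors here.
    return int(patch.max() > 0.9)

def cascade_predict(patches):
    # Run the expensive classifier only on patches that pass the screen.
    return [cnn_classifier(p) if cheap_candidate_filter(p) else 0
            for p in patches]

patches = [np.random.rand(32, 32) for _ in range(4)]
print(cascade_predict(patches))

# Worked example of the metric only (values are illustrative, not from
# the paper): TP=3, FP=1, FN=1 gives P = R = 0.75, so f1 = 0.75.
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 1, 1, 0]
print(f1_score(y_true, y_pred))  # -> 0.75
```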