In the rapidly evolving landscape of military technology, demand for autonomous vehicles (AVs) is increasing in both the public and private sectors. These autonomous systems promise many benefits, including enhanced efficiency, safety, and flexibility. To meet this demand, the development of resilient, versatile autonomous vehicles is essential to the transport and reconnaissance market. The sensory perception of any autonomous vehicle is paramount to its ability to navigate and localize in its environment. Typically, the sensors used for localization and mapping include LIDAR, IMU, GPS, and radar; each has inherent weaknesses that must be accounted for in a robust system. This paper presents quantified results of simulated perturbations, artificial noise models, and other sensor challenges on autonomous vehicle platforms. The research aims to establish a foundation for robust autonomous systems, accounting for sensor limitations, environmental noise, and defense against malicious attacks.
Safety and robust operation of autonomous vehicles is a pertinent concern for both defense and civilian systems. From self-driving cars to autonomous Navy vessels and Army vehicles, malfunctions can have devastating consequences, including loss of life and infrastructure. Autonomous ground vehicles use a variety of sensors to image their environment: passive sensors such as RGB and thermal cameras, and active sensors such as RGBD, LIDAR, radar, and sonar. These sensors are used alone or fused to accomplish the basic mobile autonomy tasks: obstacle avoidance, localization, mapping, and, subsequently, path planning. In this paper, we provide a qualitative and quantitative analysis of the limitations of publicly available ROS mapping algorithms when depth sensors (LIDAR and RGBD) are degraded or obscured, e.g., by dust, heavy rain, snow, or other noise. Aspects investigated include the form of the degradation and its effect on autonomous operations. This work summarizes those limitations and lays a foundation for developing robust autonomy algorithms that are resilient to degraded or obscured sensors.
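As a concrete illustration of the kind of degradation studied in this line of work, the following sketch perturbs an idealized 2D LIDAR scan. The noise model is an assumption for illustration (additive Gaussian range noise plus random beam dropout), not the papers' exact perturbation; dropped beams return no range, loosely modeling obscuration by dust or rain.

```python
import numpy as np

def degrade_scan(ranges, noise_sigma=0.05, dropout_prob=0.1, rng=None):
    """Apply additive Gaussian range noise and random beam dropout.

    Dropped beams are set to np.inf (no return), mimicking absorption
    or scattering of the beam by dust, rain, or snow.
    """
    rng = np.random.default_rng(rng)
    noisy = ranges + rng.normal(0.0, noise_sigma, size=ranges.shape)
    dropped = rng.random(ranges.shape) < dropout_prob
    noisy[dropped] = np.inf  # no return for obscured beams
    return noisy

# Idealized 360-beam scan: a wall at 5 m in every direction.
scan = np.full(360, 5.0)
degraded = degrade_scan(scan, rng=0)
```

Feeding such degraded scans into a mapping pipeline in place of the clean ones is one simple way to probe how an algorithm behaves as dropout probability and noise level grow.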
Cybersecurity of autonomous vehicles is a pertinent concern for both defense and civilian systems. From self-driving cars to autonomous Navy vessels, malfunctions can have devastating consequences, including loss of life and infrastructure. Autonomous ground vehicles use a variety of sensors to image their environment: passive sensors such as RGB(-D) and thermal cameras, and active sensors such as LIDAR, radar, and sonar. These sensors are used alone or fused to accomplish the basic mobile autonomy tasks: obstacle avoidance, localization, mapping, and, subsequently, path planning. In this paper, we provide a qualitative and quantitative analysis of the effects of perturbed depth sensing, focusing on LIDAR, on navigation and path planning in the presence of obstacles. Aspects investigated include the complexity of the perturbation and its effect on autonomous operations. This work lays a foundation for developing robust autonomy algorithms that are secure against degraded or inoperable sensors.
Buried targets pose a serious threat to modern soldiers and civilians alike; detecting them from a safe standoff distance is thus an important step in their remediation. Many successful vehicle-based detection systems utilize forward-looking ground-penetrating radar (FLGPR) for buried target detection at a distance; however, FLGPR has an inherently low signal-to-clutter ratio (SCR), so its performance is limited. To address this limitation, suites of additional sensors have been added to some of these vehicle-based systems. In this work we utilize data from these various sensors to improve buried target classification accuracy. Specifically, we present features extracted from FLGPR, LIDAR, and thermal- and visible-spectrum camera data, then fuse the features using a kernel-based classifier. Our results indicate that fusing these multimodal features yields higher classification performance than utilizing data from the FLGPR alone. We also analyze each sensor's incremental improvement of classification accuracy by performing numerous experiments with different permutations of the sensors.
KEYWORDS: Principal component analysis, Sensors, Explosives, Ground penetrating radar, Detection and tracking algorithms, Radar, Land mines, Target detection, Signal to noise ratio
Explosive hazards are a deadly threat in modern conflicts; hence, detecting them before they cause injury or death is of paramount importance. One method of buried explosive hazard discovery relies on data collected from ground penetrating radar (GPR) sensors. Threat detection with downward looking GPR is challenging due to large returns from non-target objects and clutter. This leads to a large number of false alarms (FAs), and since the responses of clutter and targets can form very similar signatures, classifier design is not trivial. One approach to combat these issues uses robust principal component analysis (RPCA) to enhance target signatures while suppressing clutter and background responses, though there are many versions of RPCA. This work applies some of these RPCA techniques to GPR sensor data and evaluates their merit using the peak signal-to-clutter ratio (SCR) of the RPCA-processed B-scans. Experimental results on government furnished data show that while some of the RPCA methods yield similar results, there are indeed some methods that outperform others. Furthermore, we show that the computation time required by the different RPCA methods varies widely, and the selection of tuning parameters in the RPCA algorithms has a major effect on the peak SCR.
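The evaluation metric can be made concrete with a short sketch. The exact peak-SCR definition used on the government-furnished data is not reproduced here; the code below assumes the common form, i.e., the peak magnitude inside a designated target region divided by the RMS magnitude of the remaining (clutter) cells of the B-scan, expressed in dB.

```python
import numpy as np

def peak_scr_db(bscan, target_mask):
    """Peak signal-to-clutter ratio in dB.

    Peak magnitude inside the target region divided by the RMS
    magnitude of everything outside it (treated as clutter).
    """
    mag = np.abs(bscan)
    peak = mag[target_mask].max()
    clutter_rms = np.sqrt(np.mean(mag[~target_mask] ** 2))
    return 20.0 * np.log10(peak / clutter_rms)

# Toy B-scan: unit-magnitude clutter with one strong target response.
bscan = np.ones((64, 64))
bscan[32, 32] = 10.0
mask = np.zeros_like(bscan, dtype=bool)
mask[30:35, 30:35] = True
scr = peak_scr_db(bscan, mask)   # 20*log10(10/1) = 20 dB
```

Under this definition, a successful RPCA step should raise the peak SCR by concentrating target energy in the low-rank-removed (sparse) component while suppressing background and clutter.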
Buried explosive hazards are one of the many deadly threats facing our Soldiers; thus, the U.S. Army is interested in the detection and neutralization of these hazards. One method of buried target detection uses forward-looking ground-penetrating radar (FLGPR), which has grown in popularity due to its ability to detect buried targets at a standoff distance. FLGPR approaches often use machine learning techniques to improve detection accuracy. We investigate an approach to explosive hazard detection that exploits multi-instance features to discriminate between hazardous and non-hazardous returns in FLGPR data. One challenge this problem presents is the high number of clutter and non-target objects relative to the number of targets present. Our approach learns a bag-of-words model of the multi-instance signatures of potential targets and confuser objects in order to classify alarms as either targets or false alarms. We demonstrate our method on test data collected at a U.S. Army test site.
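A minimal sketch of the bag-of-words step is shown below, using a hand-fixed two-word codebook rather than one learned from FLGPR signatures (an assumption for illustration). Each instance descriptor in a bag is quantized to its nearest codeword, and the bag is represented by its normalized word histogram, which a downstream classifier can then label as target or false alarm.

```python
import numpy as np

def bow_histogram(descriptors, codebook):
    """Quantize each instance descriptor to its nearest codeword
    (Euclidean distance) and return the bag's normalized
    word-count histogram."""
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    words = d2.argmin(axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

codebook = np.array([[0.0, 0.0], [1.0, 1.0]])          # two codewords
bag = np.array([[0.1, 0.0], [0.9, 1.0], [1.1, 1.0]])   # three instances
hist = bow_histogram(bag, codebook)                    # -> [1/3, 2/3]
```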
Explosive hazards are among the most deadly threats in modern conflicts. The U.S. Army is interested in a reliable way to detect these hazards at range. A promising way of accomplishing this task is a forward-looking ground-penetrating radar (FLGPR) system. Recently, the Army has been testing a system that utilizes both L-band and X-band radar arrays on a vehicle-mounted platform. Using data from this system, we sought to improve the performance of a constant false-alarm-rate (CFAR) prescreener through the use of a deep belief network (DBN). DBNs have been shown to perform exceptionally well at generalized anomaly detection; they combine unsupervised pre-training with supervised fine-tuning to generate low-dimensional representations of high-dimensional input data. We take advantage of these two properties by training a DBN on the features of the CFAR prescreener's false alarms (FAs) and then using that DBN to separate FAs from true positives. By training the DBN on a combination of image features, we were able to significantly increase the probability of detection while maintaining a nominal number of false alarms per square meter. Our research shows that DBNs are a good candidate for improving detection rates in FLGPR systems.
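The prescreener is described only as "CFAR"; a minimal one-dimensional cell-averaging CFAR sketch (with assumed guard/training/threshold parameters) conveys the idea. A cell is flagged when its response exceeds a scaled average of the surrounding training cells, with guard cells adjacent to the cell under test excluded from the noise estimate.

```python
import numpy as np

def ca_cfar(signal, guard=2, train=8, scale=3.0):
    """Cell-averaging CFAR detector (1D).

    Flags a cell when its value exceeds `scale` times the mean of the
    `train` cells on each side, skipping `guard` cells next to it.
    Edge cells without a full window are left unflagged.
    """
    n = len(signal)
    detections = np.zeros(n, dtype=bool)
    for i in range(train + guard, n - train - guard):
        left = signal[i - guard - train : i - guard]
        right = signal[i + guard + 1 : i + guard + 1 + train]
        noise = np.mean(np.concatenate([left, right]))
        detections[i] = signal[i] > scale * noise
    return detections

# Flat background with a single strong return at index 50.
signal = np.ones(100)
signal[50] = 10.0
detections = ca_cfar(signal)
```

Alarms from a prescreener of this kind are what the DBN then re-scores, pruning the FAs while keeping the true positives.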
This paper explores the effectiveness of an anomaly detection algorithm for downward-looking ground penetrating radar (GPR) and electromagnetic induction (EMI) data. Threat detection with GPR is challenged by high responses to non-target/clutter objects, leading to a large number of false alarms (FAs); since target and clutter responses form very similar signatures, classifier design is not trivial. We suggest a method based on a Run Packing (RP) algorithm to fuse GPR and EMI data into a composite confidence map, improving detection as measured by the normalized area under the ROC curve (NAUC). We examine the value of a multiple kernel learning (MKL) support vector machine (SVM) classifier using image features such as histogram of oriented gradients (HOG), local binary patterns (LBP), and local statistics. Experimental results on government-furnished data show that our proposed fusion and classification methods improve the NAUC when compared with results from individual sensors and a single-kernel SVM classifier.
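To illustrate the flavor of multiple kernel learning without reproducing the paper's solver, the sketch below uses a fixed-weight convex combination of two RBF kernels (weights that an MKL method would instead learn from data) and, as a stand-in for the SVM, a simple kernel ridge classifier; the data and labels are a synthetic toy problem, not GPR/EMI features.

```python
import numpy as np

def combined_kernel(X1, X2, gammas, weights):
    """Convex combination of RBF kernels with different bandwidths.

    A single-kernel classifier must commit to one bandwidth; the
    combined kernel lets several contribute at once.
    """
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=-1)
    return sum(w * np.exp(-g * d2) for g, w in zip(gammas, weights))

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 2))
y = np.where(X[:, 0] * X[:, 1] > 0, 1.0, -1.0)   # toy XOR-style labels

K = combined_kernel(X, X, gammas=[0.5, 2.0], weights=[0.5, 0.5])
alpha = np.linalg.solve(K + 1e-3 * np.eye(len(X)), y)  # kernel ridge fit
acc = np.mean(np.sign(K @ alpha) == y)                 # training accuracy
```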
Explosive hazard detection and remediation is a pertinent area of interest for the U.S. Army. The Army has investigated, or is currently investigating, many detection methods, including ground-penetrating radar, thermal and visible-spectrum cameras, acoustic arrays, and laser vibrometers. Since standoff range is an important characteristic of sensor performance, forward-looking ground-penetrating radar has been investigated for some time. Recently, the Army has begun testing a forward-looking system that combines L-band and X-band radar arrays. Our work focuses on developing imaging and detection methods for this sensor-fused system. In this paper, we investigate approaches that fuse L-band and X-band radar for explosive hazard detection and false alarm rejection. We use multiple kernel learning with support vector machines as the classification method, with histogram of oriented gradients (HOG) and local statistics as the main feature descriptors. We also perform preliminary testing of a context-aware approach for detection. Results on government-furnished data show that our false alarm rejection method improves area under the ROC curve by up to 158%.