This PDF file contains the front matter associated with SPIE Proceedings Volume 11748, including the Title Page, Copyright information, and Table of Contents.
Autonomous Systems: Sensing and Control Techniques
One popular means of aerial localization and navigation in GPS-denied environments is visual terrain relative navigation, also called geo-registration. Terrain relative navigation registers sensed aerial camera imagery against georeferenced satellite maps to recover the geographic translation and rotation of the camera. One popular technique depends on matching feature descriptors. These features, however, are intolerant to major changes in perspective, lighting, vegetation, season, and other scene conditions, and in such cases produce excessive numbers of false matches. Alternatively, image correlation can register a sensed image to a reference image but is extremely intolerant to the perspective differences of six degree of freedom (6DOF) camera systems. This research explores the combination of corner detection and normalized cross correlation for aerial vehicles at different altitudes. New methods for using dynamic search windows within reference satellite imagery are explored to constrain the pose estimate and increase image matching accuracy. The algorithms are tested with both simulated aerial imagery and experimentally sensed imagery captured with rigidly mounted cameras, and are evaluated on successful match rate and pose estimation error relative to ground truth.
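As a rough illustration of the registration step described above, here is a minimal sketch assuming OpenCV, a grayscale sensed image already rotation/scale-aligned to the map, and a predicted position in map pixels. The function name, window size, and corner-count gate are illustrative choices, not details from the paper.

```python
import cv2

def register_to_reference(sensed_gray, reference_gray, pred_xy, window_radius=256):
    """Register a sensed aerial image against a georeferenced satellite map
    with normalized cross correlation, searching only a dynamic window
    around the predicted position to constrain the pose estimate."""
    # Gate on corner content first: low-texture frames give unreliable peaks.
    corners = cv2.goodFeaturesToTrack(sensed_gray, maxCorners=200,
                                      qualityLevel=0.01, minDistance=10)
    if corners is None or len(corners) < 20:
        return None  # not enough structure to trust the correlation

    # Crop the reference map to a window around the predicted position
    # (pred_xy = predicted top-left of the sensed image, in map pixels).
    x, y = pred_xy
    h, w = sensed_gray.shape[:2]
    x0, y0 = max(x - window_radius, 0), max(y - window_radius, 0)
    search = reference_gray[y0:y0 + h + 2 * window_radius,
                            x0:x0 + w + 2 * window_radius]

    # Normalized cross correlation of the sensed image over the window.
    scores = cv2.matchTemplate(search, sensed_gray, cv2.TM_CCOEFF_NORMED)
    _, peak, _, loc = cv2.minMaxLoc(scores)
    return (x0 + loc[0], y0 + loc[1]), peak  # match position and NCC score
```

Shrinking `window_radius` as pose confidence grows both speeds up the correlation and suppresses false peaks far from the predicted position.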
This paper explores a multimodal deep learning network based on SqueezeSeg. We extend the standard SqueezeSeg architecture to enable camera and lidar fusion; the sensor processing method is termed pixel-block point-cloud fusion. Using co-registered camera and lidar sensors, the input section of the proposed network creates a feature vector by extracting a block of RGB pixels for each point in the point cloud that falls within the camera's field of view. Essentially, each lidar point is paired with neighboring RGB data so the feature extractor has more meaningful information from the image. The pixel blocks add rich information about object color and texture from the camera data, enhancing overall performance, and the proposed pixel-block point-cloud fusion method yields better results than single-pixel fusion.
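A minimal sketch of the pixel-block pairing described above, assuming co-registered sensors and a caller-supplied projection function `project_fn` (hypothetical); the block size and feature layout are illustrative, not the paper's exact design.

```python
import numpy as np

def pixel_block_features(points_xyz, intensities, image, project_fn, block=5):
    """Pair each lidar point with a block of RGB pixels around its projection
    into the co-registered camera image (pixel-block point-cloud fusion).
    Points that project outside the camera's field of view are dropped."""
    half = block // 2
    h, w, _ = image.shape
    feats = []
    for point, intensity in zip(points_xyz, intensities):
        u, v = project_fn(point)           # lidar point -> image pixel
        u, v = int(round(u)), int(round(v))
        if not (half <= u < w - half and half <= v < h - half):
            continue                       # outside the camera field of view
        rgb_block = image[v - half:v + half + 1, u - half:u + half + 1, :]
        # Feature vector: xyz + intensity + flattened RGB neighborhood, giving
        # the network local color and texture context, not just a single pixel.
        feats.append(np.concatenate([point, [intensity], rgb_block.ravel()]))
    return np.asarray(feats)
```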
Most LiDAR point cloud processing techniques continue to process data for as long as data is available, as is typical of most imaging systems, especially visible-light cameras. We propose a computationally efficient solution in which data is only processed further if it has changed. Once points are received from the LiDAR hardware driver, a sensor-frame spatial event filter compares each previous point with the most recent point obtained at the same coordinate in the LiDAR's receptor array. The output of the event filter fills an array of events, or event map, accessible by a layer of neurons that can be implemented on a GPU. We compare the operations per point between this event-based solution and similar alternatives, and show that the event-based solution's efficiency can be better, depending on how much the scene is changing and how many post-processing steps are involved. Point cloud data collected from a LiDAR mounted on a vehicle driving on paved roads illustrates the concept.
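A minimal sketch of the spatial event filter, under the assumption that the receptor array is exposed as a 2D range image; the threshold value is illustrative.

```python
import numpy as np

def spatial_event_filter(prev_range, curr_range, threshold=0.05):
    """Emit an event only where the return at the same (row, col) coordinate
    of the LiDAR receptor array changed by more than `threshold` meters.
    The boolean event map is what downstream (GPU) layers consume."""
    return np.abs(curr_range - prev_range) > threshold

# Only changed returns flow to post-processing, which is where the savings
# come from when most of the scene is static:
#   events  = spatial_event_filter(prev_frame, curr_frame)
#   changed = curr_frame[events]
```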
This work targets the problem of drivable path detection in poor weather, including on snow-covered roads. A successful drivable path detection algorithm is vital for safe autonomous driving of passenger cars, and poor weather degrades vehicle perception systems, including cameras, radar, and laser ranging. We apply convolutional neural network (CNN) based multi-modal sensor fusion to path detection: a multi-stream encoder-decoder network that fuses camera, LiDAR, and radar data to overcome the asymmetrical degradation of the sensors by complementing their measurements. The model was trained and tested on a manually labeled subset of the DENSE dataset, and multiple metrics were used to assess its performance.
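The abstract does not give the network's exact architecture; the PyTorch sketch below shows only the general multi-stream encoder-decoder fusion pattern it describes, with layer sizes chosen for illustration.

```python
import torch
import torch.nn as nn

class MultiStreamFusionNet(nn.Module):
    """Minimal multi-stream encoder-decoder: one encoder per modality,
    feature maps concatenated at the bottleneck, and a shared decoder
    predicting a per-pixel drivable-path mask. Inputs are assumed to
    share the same spatial resolution."""
    def __init__(self, ch_cam=3, ch_lidar=1, ch_radar=1):
        super().__init__()
        def encoder(c_in):
            return nn.Sequential(
                nn.Conv2d(c_in, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        self.enc_cam = encoder(ch_cam)
        self.enc_lidar = encoder(ch_lidar)
        self.enc_radar = encoder(ch_radar)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(192, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 1, 4, stride=2, padding=1))

    def forward(self, cam, lidar, radar):
        # Each modality is encoded independently, so a degraded sensor only
        # corrupts its own stream before fusion at the bottleneck.
        z = torch.cat([self.enc_cam(cam), self.enc_lidar(lidar),
                       self.enc_radar(radar)], dim=1)
        return self.decoder(z)  # logits; apply sigmoid for a probability mask
```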
Adaptive cruise control (ACC), a common feature of autonomous vehicles, automatically adjusts the vehicle's speed to maintain a safe distance from the preceding vehicle and avoid a collision. The main challenge is to filter the sensor data accurately so the control system can make decisions quickly. This paper proposes a control method for ACC using an extended Kalman filter (EKF) and a proportional-integral-derivative (PID) controller, which estimates the acceleration or braking of the preceding vehicle and adjusts the speed of the following vehicle accordingly. The proposed control method is assessed under various PID parameters, with a genetic algorithm (GA) optimizing the ACC system against four loss metrics: (1) throttle loss, which accounts for fuel usage and is proportional to the throttle setting; (2) ride quality, which is penalized by excessive jerk (the first derivative of acceleration); (3) a distance penalty, which measures how far the actual gap is from the safe distance
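A hedged sketch of the two pieces a GA would tune and score, assuming a per-step gap error and logged throttle/acceleration traces; the loss weights and time step are illustrative, and the EKF stage is omitted.

```python
import numpy as np

def pid_step(gap_error, state, kp, ki, kd, dt=0.05):
    """One PID update on the gap error (actual gap minus safe distance);
    the output drives the throttle/brake command of the following vehicle.
    Initialize with state = {"i": 0.0, "e": 0.0}."""
    state["i"] += gap_error * dt                    # integral term
    derivative = (gap_error - state["e"]) / dt      # derivative term
    state["e"] = gap_error
    return kp * gap_error + ki * state["i"] + kd * derivative

def ga_fitness(throttle, accel, gap, safe_gap, dt=0.05):
    """Combined loss a GA can minimize when tuning (kp, ki, kd). The
    weights below are illustrative, not the paper's values."""
    jerk = np.diff(accel) / dt
    return (np.sum(np.abs(throttle)) * dt                 # (1) throttle loss
            + 0.1 * np.sum(jerk ** 2) * dt                # (2) ride quality (jerk)
            + 1.0 * np.sum((gap - safe_gap) ** 2) * dt)   # (3) distance penalty
```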
Autonomous Ground Vehicles: Joint Session with Volumes 11748 and 11758
We present results from the second round of data collection with an autonomous driving surrogate sensor pod mounted to the roof rails of a light-duty truck. Coupled to a custom-built data acquisition system, our team drives a series of predetermined routes during inclement winter weather events, mostly high-rate snowfall and blowing snow with low visibility. Since our year 1 effort we have augmented the sensor pod with longer-range, higher-resolution LiDARs and 360-degree azimuth cameras. Guest LiDARs in this round of testing focus on improved range and longer wavelengths. We also discuss data processing and lessons learned from the first year of the experiment.
We present results from testing a multi-modal sensor system (consisting of a camera, LiDAR, and positioning system) for real-time object detection and geolocation. The system's eventual purpose is to assess damage and detect foreign objects on a disrupted airfield surface to reestablish a minimum airfield operating surface. It uses an AI to detect objects and generate bounding boxes or segmentation masks in data acquired with a high-resolution area scan camera. It locates the detections in the local, sensor-centric coordinate system in real time using returns from a low-cost commercial LiDAR system. This is accomplished via an intrinsic camera calibration together with a 3D extrinsic calibration of the camera-LiDAR pair. A coordinate transform service uses data from a navigation system (comprising an inertial measurement unit and global positioning system) to transform the local coordinates of the detections obtained with the AI and calibrated sensor pair into earth-centered coordinates. The entire sensor system is mounted on a pan-tilt unit to achieve 360-degree perception. All data acquisition and computation are performed on a low SWaP-C system-on-module that includes an integrated GPU. Computer vision code runs in real time on the GPU and has been accelerated using CUDA. We have chosen Robot Operating System (ROS1 at present, porting to ROS2 in the near term) as the control framework for the system. All computer vision, motion, and transform services are configured as ROS nodes.
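A minimal sketch of the calibrated camera-LiDAR association step, assuming a 4x4 extrinsic transform (lidar to camera) and a 3x3 intrinsic matrix from the calibrations described above; variable names are illustrative.

```python
import numpy as np

def lidar_to_pixel(points_lidar, T_cam_lidar, K):
    """Project LiDAR points into the camera image using the extrinsic
    calibration T_cam_lidar (4x4) and intrinsic matrix K (3x3). This is
    the step that lets an image-space detection (bounding box or mask)
    pick up a 3D local position from the LiDAR returns inside it."""
    pts = np.c_[points_lidar, np.ones(len(points_lidar))]  # homogeneous coords
    cam = (T_cam_lidar @ pts.T)[:3]        # points in the camera frame
    in_front = cam[2] > 0                  # keep points ahead of the camera
    uv = K @ cam[:, in_front]
    uv = uv[:2] / uv[2]                    # perspective divide -> pixels
    return uv.T, cam[:, in_front].T        # pixel coords and 3D camera-frame points
```

Detections geolocated this way are then handed to the transform service, which chains on the navigation-system pose to produce earth-centered coordinates.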
Autonomous driving in off-road environments is challenging because the terrain has no definite structure. Assessment of terrain traversability is the main factor in deciding the autonomous driving capability of a ground vehicle. Traversability in off-road environments is defined as the drivable track on the trails for the different vehicles used in autonomous driving, and it is crucial for an autonomous ground vehicle (AGV) to avoid obstacles such as trees and boulders while traversing the trails. This research has three main objectives: a) collection of 2D camera data in off-road, unstructured environments; b) annotation of the 2D camera data according to a vehicle's ability to drive through the trails; and c) application of a semantic segmentation algorithm to the labeled dataset to predict the drivable trajectory for a given type of ground vehicle. Our models and labeled datasets will be publicly available.
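A minimal sketch of objective (c), assuming a torchvision DeepLabV3 backbone and a hypothetical three-class traversability label scheme; the paper's actual model and label set are not specified in the abstract.

```python
import torch
from torchvision.models.segmentation import deeplabv3_resnet50

# Hypothetical classes: drivable-by-any-vehicle, drivable-by-large-vehicle-only,
# non-drivable. The class scheme is an assumption for illustration.
model = deeplabv3_resnet50(num_classes=3)
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, labels):
    """One supervised step on annotated trail imagery: the network predicts
    a per-pixel traversability class for the given vehicle type."""
    optimizer.zero_grad()
    logits = model(images)["out"]        # (N, 3, H, W) per-pixel class scores
    loss = criterion(logits, labels)     # labels: (N, H, W) class indices
    loss.backward()
    optimizer.step()
    return loss.item()
```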
Autonomous Systems: Security Applications and Issues
This paper addresses one of the main obstacles overlooked when adding autonomy to a cooperative cluster of nodes working together to complete global objectives: the limited bandwidth of the constrained link must be shared among all nodes and becomes a pinch point for distributed autonomy. Intelligently managing the shared information and controlling how it is replicated across all nodes is vital. Lack of communication results in poor team performance, and intermittent connectivity and delayed intelligence sharing lead to the cooperative cluster performing no better, or even worse, than a single node as the team acts and reacts to misinformation. This paper describes how we solved the problems of constrained links and information distribution by using application-layer routing to perform QoS on a per-route basis and by creating a unique method of passing generic objects of information.
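The paper's routing design is not detailed in the abstract; the sketch below illustrates only the general idea of per-route QoS over a constrained link, using a priority queue and a per-cycle byte budget, with all names and fields hypothetical.

```python
import heapq
import itertools
import json

class ConstrainedLinkRouter:
    """Application-layer routing sketch: each route (topic) is assigned a
    priority, so high-value generic information objects cross the
    constrained link first under a fixed per-cycle budget."""
    def __init__(self):
        self.queue = []
        self.seq = itertools.count()   # tie-breaker keeps per-route FIFO order

    def enqueue(self, route, priority, obj):
        # Lower number = higher priority; obj is any serializable object.
        payload = json.dumps(obj)
        heapq.heappush(self.queue, (priority, next(self.seq), route, payload))

    def drain(self, byte_budget):
        """Send as many queued objects as the link budget allows this cycle."""
        sent = []
        while self.queue:
            priority, _, route, payload = self.queue[0]
            if len(payload) > byte_budget:
                break                   # highest-priority item no longer fits
            heapq.heappop(self.queue)
            byte_budget -= len(payload)
            sent.append((route, payload))
        return sent
```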
We present an overview of MUONS: the Michigan Tech Unstructured and Off-road Navigation Stack. MUONS is a ROS-based, point-cloud-driven navigation stack designed to enable traversal of complex terrain that may exceed vehicle kinematic limits. We originally developed MUONS for small to mid-size skid-steer autonomous ground vehicles with no suspension. In this work we examine how MUONS performs on a simulated full-scale vehicle with complex suspension elements. By comparing the performance of the full-scale and mid-size vehicles, we aim to identify the critical vehicle linkages that must be included in our simulation model, and to understand the changes required to adapt MUONS to full-scale Ackermann-steering autonomous ground vehicles with complex suspensions.
Proceedings Volume Autonomous Systems: Sensors, Processing, and Security for Vehicles and Infrastructure 2021, 117480H https://doi.org/10.1117/12.2585809
Distributed battle management of a group of autonomous agents, e.g., unmanned air systems (UAS), facing a highly capable adversary requires controlling a group of semi-isolated small teams operating under extensive uncertainty about the enemy situation and about the status and plans of some of the group's own agents. Barnstorm Research developed Trust, Refrain and Veto (TReVe), which allows agents to maintain higher situational awareness of teammates in a communications-denied environment. Barnstorm Research has shown in simulation that TReVe increases mission success when deployed onboard RAIDER/FACE-enabled UAS in a denied communications environment.
Autonomous platforms are becoming ubiquitous in society, including UAVs, Roombas, and self-driving cars. With the increased prevalence of autonomous platforms comes an increased threat of attacks against them, ranging from direct hacking to remotely take control of the platforms themselves [1] to manipulation or deception attacks such as spoofing or fooling sensor inputs [2, 3]. Ensuring autonomous systems are robust and resilient (R2) against these attacks is an important challenge to overcome if they are to be trusted and widely adopted. This paper addresses the need to quantitatively define robustness and resilience against manipulation and deception attacks, which are inherently harder to detect. We define a set of robust estimation metrics that are mathematically rigorous, applicable to multiple algorithm use cases, and easy to interpret. Since many of these functions are computed over time, the primary focus is on process-based metrics, which can be adapted over time by responding and reconfiguring at system runtime. This paper will: 1) provide background on previous work in this area, including adversarial machine learning, robotics control, and engineering design; 2) present the metrics and explain how they address our unique problem; 3) apply the metrics to three different autonomy applications: target tracking, autonomous control, and automatic target recognition; and 4) discuss additional caveats and potential areas for future work.
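The abstract does not define the metrics themselves. As one illustrative example of what a process-based robustness metric can look like (our construction, not the paper's), consider the ratio of time-averaged estimation error under attack to the nominal error:

```python
import numpy as np

def degradation_ratio(err_nominal, err_attacked):
    """Illustrative process-based metric: ratio of time-averaged estimation
    error under attack to the nominal error. Values near 1 indicate the
    estimator is robust to the attack; large values indicate the attack
    meaningfully degraded it. Not the paper's definition."""
    return np.mean(np.abs(err_attacked)) / max(np.mean(np.abs(err_nominal)), 1e-9)
```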
In general, there is a severe demand for, and shortage of, large, accurately labeled datasets to train supervised machine learning (ML) algorithms in domains like smart cars and unmanned aerial systems (UAS). This impacts a number of real-world problems, from standing up ML in niche domains to ML performance in and across different environments. Herein, we consider the task of efficiently (meaning with the least amount of human intervention possible) converting large UAS data collections over a shared geospatial area into accurately labeled training data. We take a human-in-the-loop (HITL) approach that couples active learning and self-supervised learning to efficiently label low-altitude UAS imagery, with the goal of training ML algorithms for underlying tasks like detection, localization, and tracking. Specifically, we propose an extension to our stream classification algorithm StreamSoNG based on human intervention, and we further extend StreamSoNG to rely on a second, initially more mature, but assumed incomplete, ML classifier. We use the Unreal Engine to simulate realistic ray-traced low-altitude UAS data and facilitate algorithmic performance analysis in a controlled fashion. While our results are preliminary, they suggest that this approach is a good trade-off between not overloading a human operator and circumventing fundamental stream classification algorithm limitations.
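A minimal sketch of one HITL round coupling the mature-but-incomplete classifier with a human labeling budget, assuming a scikit-learn-style `predict_proba` interface; the threshold, budget, and routing policy are illustrative, and the StreamSoNG clustering step is omitted.

```python
import numpy as np

def hitl_labeling_round(classifier, batch, threshold=0.6, budget=10):
    """One human-in-the-loop round: the mature (but assumed incomplete)
    classifier auto-labels what it is confident about; the least confident
    samples, up to a small per-round budget, are routed to the human.
    Everything else is left for the stream classifier to handle."""
    probs = classifier.predict_proba(batch)          # (N, C) class probabilities
    confidence = probs.max(axis=1)
    auto_idx = np.where(confidence >= threshold)[0]  # accept classifier labels
    low = np.where(confidence < threshold)[0]        # candidates for the human
    ask_idx = low[np.argsort(confidence[low])[:budget]]
    return auto_idx, probs[auto_idx].argmax(axis=1), ask_idx
```

Capping `budget` per round is what keeps the operator from being overloaded while the uncertainty ranking extracts the most value from each human label.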
Labsphere has created automated vicarious calibration sites using SPecular Array Radiometric Calibration (SPARC) mirror technology in the new Field Line-of-sight Automated Radiance Exposure (FLARE) network. A short introduction to FLARE and SPARC shows how arduous field ground calibrations can now be done remotely through FLARE nodes via an internet portal. Preliminary results on the system's absolute radiometric and spatial calibration capability were published in 2020, demonstrating validation and uncertainty against current methods of remote calibration, as well as spatial and geometric performance against edge and line targets. This paper describes FLARE's impact on the ongoing evaluation and maturation of automated analysis processes at all data processing levels for satellite and UAV imagers.