Coupling (semi-)autonomous drones with on-ground personnel can improve metrics such as mission safety, task effectiveness, and task completion time. For a drone to be an effective companion, however, it must make intelligent decisions in a partially observable, dynamic environment under uncertainty and multiple competing criteria. One simple example is deciding where and how to move. These continuous or waypoint-based decisions vary greatly from task to task: building a 3D map of an area, getting a minimum number of pixels on objects for automatic target detection, exploring the area around a search team, and so on. While each behavior could be implemented from scratch, we discuss a flexible and extensible framework that allows the specification of dynamic, controlled, and explainable behaviors via multi-criteria decision making (MCDM), an aggregation task, over different UFOMap voxel map layers. We currently employ layers such as drone position, time since a voxel was last observed, minimum distance to a voxel, and the exploration fringe; additional layers present the opportunity to create more complex and novel behaviors. Through testing with simulated flights, we demonstrate that such an approach is feasible for constructing useful semi-autonomous behaviors in the pursuit of human-robot teaming.
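The MCDM aggregation described above could be sketched as a weighted combination of normalized per-voxel criterion layers, from which a highest-utility voxel is selected as a candidate waypoint. This is a minimal illustrative sketch: the layer names, weighted-sum rule, and function names are assumptions for demonstration, not the paper's actual implementation.

```python
import numpy as np

def aggregate_layers(layers: dict, weights: dict) -> np.ndarray:
    """Combine per-voxel criterion layers into one utility map.

    Each layer is min-max normalized to [0, 1], then summed under
    the given per-layer weights (a simple weighted-sum MCDM rule).
    """
    utility = np.zeros_like(next(iter(layers.values())), dtype=float)
    for name, layer in layers.items():
        lo, hi = float(layer.min()), float(layer.max())
        if hi > lo:
            norm = (layer - lo) / (hi - lo)
        else:
            norm = np.zeros_like(layer, dtype=float)
        utility += weights.get(name, 0.0) * norm
    return utility

def best_voxel(utility: np.ndarray) -> tuple:
    """Index of the highest-utility voxel, usable as a waypoint target."""
    return np.unravel_index(int(np.argmax(utility)), utility.shape)

# Toy 3x3x3 grid with two hypothetical criterion layers.
rng = np.random.default_rng(0)
layers = {
    "time_since_seen": rng.random((3, 3, 3)),
    "exploration_fringe": rng.random((3, 3, 3)),
}
weights = {"time_since_seen": 0.4, "exploration_fringe": 0.6}
print(best_voxel(aggregate_layers(layers, weights)))
```

Changing the weights (or adding a new layer) changes the behavior without touching the selection logic, which is the extensibility property the framework aims for.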
In the physical universe, truth for computer vision (CV) is impractical, if not impossible, to obtain. As a result, the CV community has resorted to qualitative practices and sub-optimal quantitative measures. This is problematic because it limits our ability to train, evaluate, and ultimately understand algorithms such as single image depth estimation (SIDE) and structure from motion (SfM). How good are these algorithms, individually and relative to one another, and where do they break? Herein, we argue that while truth evades both the real and simulated (SIM) universes, a SIM CV gold standard can be achieved. We outline an extensible SIM framework and data collection workflow using Unreal Engine with the Robot Operating System (ROS) for three-dimensional mapping on low-altitude aerial vehicles. Furthermore, we discuss voxel-based mapping measures that compare algorithm output to a SIM gold standard. The proposed metrics are demonstrated by analyzing performance across changes in platform context. Ultimately, the current article is a step towards an improved process for comparing algorithms, evaluating their strengths and weaknesses, and automating algorithm design.
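A voxel-based comparison of algorithm output against a SIM gold standard can be illustrated with simple occupancy-overlap measures. The specific metrics below (intersection-over-union, precision, and recall over occupied voxels) and their names are assumptions chosen for illustration, not necessarily the measures the paper proposes.

```python
import numpy as np

def voxel_iou(pred: np.ndarray, gold: np.ndarray) -> float:
    """Intersection-over-union of occupied voxels (boolean grids)."""
    inter = np.logical_and(pred, gold).sum()
    union = np.logical_or(pred, gold).sum()
    return float(inter) / float(union) if union else 1.0

def voxel_precision_recall(pred: np.ndarray, gold: np.ndarray) -> tuple:
    """Precision/recall of predicted occupancy against gold occupancy."""
    tp = np.logical_and(pred, gold).sum()
    precision = float(tp) / float(pred.sum()) if pred.sum() else 0.0
    recall = float(tp) / float(gold.sum()) if gold.sum() else 0.0
    return precision, recall
```

Because both maps live on the same simulated grid, such measures avoid the registration ambiguity that makes real-world quantitative evaluation difficult.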