This PDF file contains the front matter associated with SPIE Proceedings Volume 8387, including the Title Page, Copyright information, Table of Contents, Introduction, and Conference Committee listing.
Multi-Robot Control: Joint Session with Conference 8405
The widespread adoption of aerial, ground and sea-borne unmanned systems (UMS) for national security applications
provides many advantages; however, effectively controlling large numbers of UMS in complex environments with
modest manpower is a significant challenge. A control architecture and associated control methods are under
development to allow a single user to control a team of multiple heterogeneous UMS as they conduct multi-faceted (i.e.
multi-objective) missions in real time. The control architecture is hierarchical, modular and layered and enables operator
interaction at each layer, ensuring the human operator is in close control of the unmanned team at all times. The
architecture and key data structures are introduced. Two approaches to distributed collaborative control of heterogeneous
unmanned systems are described, including an extension of homogeneous swarm control and a novel application of
distributed model predictive control. Initial results are presented, demonstrating heterogeneous UMS teams conducting
collaborative missions. Future work will focus on interacting with dynamic targets, integrating alternative control layers,
and enabling a deeper and more intimate level of real-time operator control.
One of the primary challenges facing the modern small-unit tactical team is the ability of the unit to safely and
effectively search, explore, clear and hold urbanized terrain that includes buildings, streets, and subterranean
dwellings. Buildings provide cover and concealment to an enemy and restrict the movement of forces while
diminishing their ability to engage the adversary. The use of robots has significant potential to reduce the risk to
tactical teams and dramatically force multiply the small unit's footprint. Despite advances in robotic mobility, sensing
capabilities, and human-robot interaction, the use of robots in room clearing operations remains nascent.
CHAMP is a software system in development that integrates with a team of robotic platforms to enable them to
coordinate with a human operator performing a search and pursuit task. In this way, the human operator can either give
control to the robots to search autonomously, or can retain control and direct the robots where needed. CHAMP's
autonomy is built upon a combination of adversarial pursuit algorithms and dynamic function allocation strategies that
maximize the team's resources. Multi-modal interaction with CHAMP is achieved using novel gesture-recognition
based capabilities to reduce the need for heads-down tele-operation. The CHAMP Coordination Algorithm addresses
dynamic and limited team sizes, generates a novel map of the area, and takes into account mission goals, user
preferences and team roles. In this paper we show results from preliminary simulated experiments and find that the
CHAMP system performs faster than traditional search and pursuit algorithms.
The Reconnaissance and Autonomy for Small Robots (RASR) team developed a system for the coordination of groups
of unmanned ground vehicles (UGVs) that can execute a variety of military relevant missions in dynamic urban
environments. Historically, UGV operations have been primarily performed via tele-operation, requiring at least one
dedicated operator per robot, and requiring substantial real-time bandwidth to accomplish those missions. Our team goal
was to develop a system that can provide long-term value to the war-fighter, utilizing MAGIC-2010 as a stepping stone.
To that end, we self-imposed a set of constraints that would force us to develop technology that could readily be used by
the military in the near term:
• Use a relevant (deployed) platform
• Use low-cost, reliable sensors
• Develop an expandable and modular control system with innovative software algorithms to minimize the
computing footprint required
• Minimize required communications bandwidth and handle communication losses
• Minimize additional power requirements to maximize battery life and mission duration
Multi-unmanned aerial vehicle (UAV) cooperative communication for visual navigation has recently attracted significant attention. Large amounts of visual information must be transmitted and processed among UAVs under real-time requirements, and UAV clusters are self-organizing, time-varying, and highly dynamic. Considering these conditions, we propose an adaptive information interactive mechanism (AIIM) for multi-UAV visual navigation. In this mechanism, the function modules for the UAV inter-communication interface are designed, a mobility-based link lifetime is established, and the information interactive protocol is presented. We thus combine the mobility of UAVs with the corresponding communication requirements to enable effective information interaction among UAVs. Task-oriented distributed control is adopted to improve collaboration flexibility in the multi-UAV visual navigation system. To obtain the necessary visual information in a timely manner, each UAV can cooperate with other relevant UAVs that satisfy certain conditions related to situation, task, or environment. Simulation results are presented to show the validity of the proposed mechanism in terms of end-to-end delay and link stability.
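The mobility-based link lifetime can be illustrated with a simple kinematic calculation: given the relative position and velocity of two UAVs and a nominal radio range, the residual lifetime of a link is the time until their separation exceeds that range. The sketch below is illustrative only; it assumes straight-line, constant-velocity motion and a hypothetical range parameter, and is not the AIIM protocol itself.

import numpy as np

def link_lifetime(p_i, v_i, p_j, v_j, comm_range):
    """Time until UAVs i and j drift out of communication range,
    assuming constant velocities (illustrative model only)."""
    dp = np.asarray(p_j, float) - np.asarray(p_i, float)   # relative position
    dv = np.asarray(v_j, float) - np.asarray(v_i, float)   # relative velocity
    # Solve ||dp + dv*t|| = comm_range for the largest non-negative root.
    a = dv @ dv
    b = 2.0 * (dp @ dv)
    c = dp @ dp - comm_range ** 2
    if c > 0:
        return 0.0                      # already out of range
    if a == 0:
        return np.inf                   # no relative motion: link persists
    t = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)   # exit time
    return max(t, 0.0)

# Example: two UAVs 50 m apart, separating at 5 m/s, with a 100 m radio range.
print(link_lifetime([0, 0, 100], [10, 0, 0], [50, 0, 100], [15, 0, 0], 100.0))   # 10.0 s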
Search and rescue (SAR) applications present a challenge to autonomous systems because of the environments in which they operate. A control technique for a heterogeneous multi-robot group is discussed. The proposed methodology is not fully autonomous; however, human operators are freed from most control tasks and can focus on perception tasks while robots execute a collaborative search and identification plan. Robotic control combines a centralized dispatch and learning system (which continuously refines the heuristics used for planning) with local autonomous task ordering (based on existing task priority, proximity, and local conditions). This technique was tested in an environment analogous to SAR from a control perspective.
Camouflaged robots and leave-behind surveillance sensors are desirable in intelligence, surveillance, and reconnaissance
operations to minimize the chances of detection by the enemy. Today's camouflaging techniques involve nets and
painted patterns that are fixed in color and geometry, limiting their use to specific environments; a fact illustrated by
numerous changes in military uniforms designed to fit the latest operating environment. Furthermore, nets are bulky and
can interfere with the operation or use of a robot or leave-behind sensor. A more effective technique is to automatically
adapt surface patterns and colors to match the environment, as is done by several species in nature. This can lead to the
development of new and more effective robotic behaviors in surveillance missions and stealth operations. This
biologically inspired adaptive camouflage can be achieved by a) sampling the environment with a camera, b) synthesizing a camouflage image, and c) reproducing it on color electronic paper (a thin, low-power reflective display) that is part of the outer enclosure surface of the robot or device. The focus of this paper is on the work performed for the first two steps of the process. Color camouflage synthesis is achieved via modifications to a gray-level texture-synthesis method that makes use of gray-level co-occurrence matrices. Statistical equality in color proportion is achieved with the use of conditional probability constraints.
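The gray-level co-occurrence matrices referenced above tabulate how often pairs of gray levels co-occur at a fixed pixel offset; the synthesis step then generates pixels that reproduce those pair statistics. A minimal numpy sketch of the co-occurrence computation, for a single non-negative offset and without the paper's color-proportion constraints, is given below.

import numpy as np

def glcm(image, levels=8, dr=0, dc=1):
    """Gray-level co-occurrence matrix for a single (dr, dc) offset, dr, dc >= 0.
    image: 2-D array of gray values in [0, 255]."""
    q = (image.astype(np.int64) * levels) // 256           # quantize to `levels` bins
    rows, cols = q.shape
    src = q[:rows - dr, :cols - dc]                         # reference pixels
    dst = q[dr:, dc:]                                       # offset neighbors
    m = np.zeros((levels, levels), dtype=np.int64)
    np.add.at(m, (src.ravel(), dst.ravel()), 1)             # count co-occurring pairs
    return m / m.sum()                                      # normalize to probabilities

# Example: pair statistics of a random 64x64 patch standing in for a camera sample.
patch = np.random.randint(0, 256, (64, 64))
print(glcm(patch).round(3))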
Efficient and accurate 3D mapping is desirable in disaster recovery as well as urban warfare situations. The
speed with which these maps can be generated is vital to provide situational awareness in these situations. A
team of mobile robots can work together to build maps more quickly. We present an algorithm by which a team
of mobile robots can merge 2D and 3D measurements to build a 3D map, together with experiments performed
at a military test facility.
We introduce a new framework, Model Transition Control (MTC), that models robot control problems as sets of linear
control regimes linked by nonlinear transitions, and a new learning algorithm, Dynamic Threshold Learning (DTL), that
learns the boundaries of these control regimes in real-time. We demonstrate that DTL can learn to prevent understeer and
oversteer while controlling a simulated high-speed vehicle. We also show that DTL can enable an iRobot PackBot to
avoid rollover in rough terrain and to actively shift its center-of-gravity to maintain balance when climbing obstacles. In
all cases, DTL is able to learn control regime boundaries in a few minutes, often with single-digit numbers of learning
trials.
This paper examines the testbed autonomy system and the software technologies developed or enhanced during the second year of the SOURCE ATO, and gives an overview of the Enhanced Experiment. Over the past year, the Safe Operations of Unmanned systems for Reconnaissance in Complex Environments (SOURCE) program continued to make enhancements to LADAR- and image-based perception, intelligence, control, and tactical behavior technologies, which are required for autonomous collaborative unmanned systems. The hardware and software technologies are installed on a TARDEC-developed testbed, the Autonomous Platform Demonstrator (APD).
Ultimately, Soldiers will conduct safe-operation testing scenarios in cluttered, dynamic environments using Autonomous Navigation System (ANS) perception and processing hardware and software. Soldier testing will take place during October 2012 at the Camp Lejeune MOUT facility in North Carolina.
In the field of military Unmanned Ground Vehicles (UGVs), units are adapting their concept of operations to focus on mission capabilities within populated cities and towns. These types of operations are referred to as MOUT (Military Operations on Urban Terrain). As more Soldiers seek to incorporate technology to enhance their mission capabilities, there is a growing need for UGV systems that can autonomously navigate urban terrain. Autonomous systems have the potential to increase Soldier safety by mitigating the risk of unnecessary enemy exposure during routine urban reconnaissance.
This paper presents the development effort and methodology through which the military has sought to increase mission capabilities by incorporating autonomy into manned/unmanned ground vehicles. The solution developed through the Safe Operations of Unmanned systems for Reconnaissance in Complex Environments (SOURCE) Army Technology Objective (ATO) has been tested and shown to navigate safely through complex urban environments. This paper also discusses the challenges the military has faced in developing the presented autonomous UGV.
This paper presents a hybrid approximate dynamic programming (ADP) method for a hybrid dynamic system (HDS) optimal control problem, which arises in many complex unmanned systems implemented with a hybrid architecture to handle discrete robot modes or complex environments. The HDS considered in this paper is characterized by a well-known three-layer hybrid framework, which includes a discrete event controller layer, a discrete-continuous interface layer, and a continuous state layer. The hybrid optimal control problem (HOCP) is to find the optimal discrete event decisions and the optimal continuous controls that minimize a deterministic scalar cost function of the system state and control over time. Due to the uncertainty of the environment and the complexity of the HOCP, the cost-to-go cannot be evaluated before the HDS explores the entire system state space; as a result, neither the continuous nor the discrete optimal control is available ahead of time. Therefore, ADP is adopted to learn the optimal control while the HDS explores the environment, taking advantage of ADP's online nature. Furthermore, ADP can mitigate the curse of dimensionality that other optimization methods, such as dynamic programming (DP) and Markov decision processes (MDPs), face due to the high dimensionality of the HOCP.
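For readers unfamiliar with the setting, a generic hybrid optimal control cost of the kind described above can be written, in one common form that may differ in detail from the paper's exact formulation, as

\min_{\{q_k\},\, u(\cdot)} \; J \;=\; \int_{t_0}^{t_f} L\bigl(x(t), u(t), q(t)\bigr)\, dt \;+\; \sum_k c\bigl(q_k, q_{k+1}\bigr)
\qquad \text{subject to} \qquad \dot{x}(t) = f_{q(t)}\bigl(x(t), u(t)\bigr),

where q(t) is the discrete event decision (mode), u(t) the continuous control, L the running cost over state and control, and c the cost of switching between modes. The ADP step then approximates the cost-to-go V(x, q) online as the HDS explores its state space, rather than evaluating it exhaustively in advance.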
Christopher Brune, Tanarat Dityam, Jonathan Girwar-Nath, Konstantinos Kanistras, Goncalo Martins, Allistair Moses, Ioannis Samonas, Joseph L. St. Amour, Matthew J. Rutherford, et al.
Hardware platforms for unmanned aerial and ground vehicles are becoming increasingly commoditized, leading
to low prices and high-quality equipment. This, in turn, is enabling the use of low-cost unmanned vehicles for
a broadening array of civilian and commercial applications. In this paper we consider a heterogeneous group
consisting of three ground vehicles and two aerial vehicles. Using this standard "team," we describe and analyze
four different civilian applications to which the team is well suited, and for which existing solutions are either
too costly or not effective. The applications are representative of a broad spectrum of applications in the areas
of customs and border protection, infrastructure surveillance, early fire detection, and public safety incident
response. For each application, we describe the overall team function, the application-specific sensor suite, the
data processing and communication requirements, and any ground / operator station requirements. The focus
is on solutions that require collaboration and cooperation between vehicles, and synthesis of the heterogeneous
sensor data they provide.
The consensus problem in multi-agent systems often assumes that all agents seeking agreement are equally trustworthy. For multi-agent military applications, however, particularly those that deal with sensor fusion or multi-robot formation control, this assumption may create the potential for compromised network security or poor cooperative performance.
As such, we present a trust-based solution for the discrete-time multi-agent consensus problem and prove its asymptotic
convergence in strongly connected digraphs. The novelty of the paper is a new trust algorithm called RoboTrust, which
is used to calculate trustworthiness in agents using observations and statistical inferences from various historical
perspectives. The performance of RoboTrust is evaluated within the trust-based consensus protocol under different
conditions of tolerance and confirmation.
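As background, a trust-weighted variant of the standard discrete-time consensus update can be written as

x_i(k+1) \;=\; x_i(k) \;+\; \epsilon \sum_{j \in N_i} \tau_{ij}(k)\, a_{ij}\, \bigl(x_j(k) - x_i(k)\bigr),

where a_{ij} are the edge weights of the strongly connected digraph, \epsilon is a step size, and \tau_{ij}(k) \in [0, 1] is the trust agent i assigns to agent j at step k; setting \tau_{ij} \equiv 1 recovers the classical protocol. This is a generic illustrative form: in the paper, the trust values are computed by RoboTrust from observations and statistical inference over historical windows, and the convergence proof applies to the specific protocol defined there.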
The last decade has seen a significant increase in intelligent safety devices on private automobiles. These devices
have both increased and augmented the situational awareness of the driver and in some cases provided automated
vehicle responses. To date almost all intelligent safety devices have relied on data directly perceived by the vehicle.
This constraint has a direct impact on the types of solutions available to the vehicle. In an effort to improve the
safety options available to a vehicle, numerous research laboratories and government agencies are investing time
and resources into connecting vehicles to each other and to infrastructure-based devices.
This work details several efforts in both the commercial vehicle and the private auto industries to increase vehicle
safety and driver situational awareness through vehicle-to-vehicle and vehicle-to-infrastructure communication. It
will specifically discuss intelligent behaviors being designed to automatically disable non-compliant vehicles, warn
tractor trailer vehicles of unsafe lane maneuvers such as lane changes, passing, and merging, and alert drivers to
non-line-of-sight emergencies.
We consider a navigation problem in a distributed, self-organized and coordinate-free Wireless Sensor and Actuator Network (WSAN). We first present navigation algorithms that are verified using simulation results. Considering more than one destination and multiple mobile Unmanned Ground Vehicles (UGVs), we introduce a distributed solution to the Multi-UGV, Multi-Destination navigation problem. The objective is to efficiently allocate UGVs to different destinations and carry out navigation in the network environment while minimizing total travel distance. The main contribution of this paper is to develop a solution that does not attempt to localize either the UGVs or the sensor and actuator nodes. Other than some connectivity assumptions about the communication graph, we assume that no prior information about the WSAN is available. The solution presented here is distributed, and the UGV navigation is based solely on feedback from neighboring sensor and actuator nodes. One special case discussed in the paper, the Single-UGV, Multi-Destination navigation problem, is essentially equivalent to the well-known and difficult Traveling Salesman Problem (TSP). Simulation results are presented that illustrate the navigation distance traveled through the network.
We also introduce an experimental testbed for the realization of coordinate-free and localization-free UGV
navigation. We use the Cricket platform as the sensor and actuator network and a Pioneer 3-DX robot as the
UGV. The experiments illustrate the UGV navigation in a coordinate-free WSAN environment where the UGV
successfully arrives at the assigned destinations.
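To make the TSP connection concrete: once distances between destinations are available in some form, such as hop counts through the network, the Single-UGV, Multi-Destination case can be approximated with a simple nearest-neighbor tour. The sketch below assumes a precomputed symmetric distance matrix; it only illustrates the problem structure and is not the paper's coordinate-free, feedback-driven navigation scheme.

import numpy as np

def nearest_neighbor_tour(dist, start=0):
    """Greedy visiting order over destinations given a symmetric distance matrix
    `dist` (e.g., hop counts through the WSAN). Returns (visit order, total cost)."""
    n = len(dist)
    unvisited = set(range(n)) - {start}
    tour, cost, current = [start], 0.0, start
    while unvisited:
        nxt = min(unvisited, key=lambda j: dist[current][j])   # closest remaining
        cost += dist[current][nxt]
        tour.append(nxt)
        unvisited.remove(nxt)
        current = nxt
    return tour, cost

# Example with four destinations and hypothetical hop-count distances.
d = np.array([[0, 3, 6, 5],
              [3, 0, 4, 7],
              [6, 4, 0, 2],
              [5, 7, 2, 0]])
print(nearest_neighbor_tour(d))   # ([0, 1, 2, 3], 9.0)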
Unmanned Aerial Vehicles (UAVs) are versatile aircraft with many applications, including the potential for use to detect
unintended electromagnetic emissions from electronic devices. A particular area of recent interest has been helicopter
unmanned aerial vehicles. Because of the nature of these helicopters' dynamics, high-performance controller design for
them presents a challenge. This paper introduces an optimal controller design via output feedback control for trajectory
tracking of a helicopter UAV using a neural network (NN). The output-feedback control system utilizes the backstepping
methodology, employing kinematic, virtual, and dynamic controllers and an observer. Optimal tracking is accomplished
with a single NN utilized for cost function approximation. The controller positions the helicopter, which is equipped
with an antenna, such that the antenna can detect unintended emissions. The overall closed-loop system stability with the
proposed controller is demonstrated by using Lyapunov analysis. Finally, results are provided to demonstrate the
effectiveness of the proposed control design for positioning the helicopter for unintended emissions detection.
Precedent has shown common controllers must strike a balance between the desire for an integrated user interface design
by human factors engineers and support of project-specific data requirements. A common user-interface requires the
project-specific data to conform to an internal representation, but project-specific customization is impeded by the
implicit rules introduced by the internal data representation. Space and Naval Warfare Systems Center Pacific (SSC
Pacific) developed the latest version of the Multi-robot Operator Control Unit (MOCU) to address interoperability,
standardization, and customization issues by using a modular, extensible, and flexible architecture built upon a shared-world model. MOCU version 3 provides an open and extensible operator-control interface that allows additional
functionality to be seamlessly added with software modules while providing the means to fully integrate the information
into a layered game-like user interface. MOCU's design allows it to completely decouple the human interface from the
core management modules, while still enabling modules to render overlapping regions of the screen without interference
or a priori knowledge of other display elements, thus allowing more flexibility in project-specific customization.
Current generation UGV control systems typically require operators to physically control a platform through
teleoperation, even for simple tasks such as travelling from one location to another. While vision-based control
technologies promise to significantly reduce the burden on UGV operators, most schemes rely on specialized sensing
hardware, such as LIDAR or stereo cameras, or require additional operator-worn equipment or markers to differentiate
the leader from nearby pedestrians. We present a system for robust leader-follower control of small UGVs using only a
single monocular camera, which is ubiquitous on mobile platforms. The system allows a user to control a mobile robot
by leading the way and issuing commands through arm/hand gestures, and differentiates between the leader and nearby
pedestrians. The software achieves this by integrating efficient algorithms for pedestrian detection, online appearance
learning, and kinematic tracking with a lightweight technique for camera-based gesture recognition.
Teleoperated vehicles are playing an increasingly important role in a variety of military functions. While advantageous
in many respects over their manned counterparts, these vehicles also pose unique challenges when it comes to safely
avoiding obstacles. Not only must operators cope with difficulties inherent to the manned driving task, but they must
also perform many of the same functions with a restricted field of view, limited depth perception, potentially disorienting
camera viewpoints, and significant time delays. In this work, a constraint-based method for enhancing operator
performance by seamlessly coordinating human and controller commands is presented. This method uses onboard
LIDAR sensing to identify environmental hazards, designs a collision-free path homotopy traversing that environment,
and coordinates the control commands of a driver and an onboard controller to ensure that the vehicle trajectory remains
within a safe homotopy. This system's performance is demonstrated via off-road teleoperation of a Kawasaki Mule in an
open field among obstacles. In these tests, the system safely avoids collisions and maintains vehicle stability even in the
presence of "routine" operator error, loss of operator attention, and complete loss of communications.
Nearly all explosive ordnance disposal robots in use today employ monoscopic standard-definition video cameras to
relay live imagery from the robot to the operator. With this approach, operators must rely on shadows and other
monoscopic depth cues in order to judge distances and object depths. Alternatively, they can contact an object with the
robot's manipulator to determine its position, but that approach carries with it the risk of detonation from unintentionally
disturbing the target or nearby objects.
We recently completed a study in which high-definition (HD) and stereoscopic video cameras were used in addition to
conventional standard-definition (SD) cameras in order to determine if higher resolutions and/or stereoscopic depth cues
improve operators' overall performance of various unmanned ground vehicle (UGV) tasks. We also studied the effect
that the different vision modes had on operator comfort. A total of six different head-aimed vision modes were used
including normal-separation HD stereo, SD stereo, "micro" (reduced separation) SD stereo, HD mono, and SD mono
(two types). In general, the study results support the expectation that higher resolution and stereoscopic vision aid UGV
teleoperation, but the degree of improvement was found to depend on the specific task being performed; certain tasks
derived notably more benefit from improved depth perception than others. This effort was sponsored by the Joint
Ground Robotics Enterprise under Robotics Technology Consortium Agreement #69-200902 T01. Technical management was provided by the U.S. Air Force Research Laboratory's Robotics Research and Development Group at Tyndall AFB, Florida.
As more Soldiers seek to utilize robots to enhance their mission capabilities, controls are needed which are intuitive,
portable, and adaptable to a wide range of mission tasks. Android™ and iOS™ devices have the potential to meet each
of these requirements as well as being based on readily available hardware. This paper will focus on some of the ways in
which an Android™ or iOS™ device could be used to control specific and varied robot mobility functions and payload
tools. Several small unmanned ground vehicle (SUGV) payload tools will have been investigated at Camp Pendleton during a user assessment and mission feasibility study for automatic remote tool changing. This group of payload tools will provide researchers with a basis for determining what types of control functions are needed to fully utilize SUGV robotic capabilities. Additionally, mobility functions using tablet devices have been used as part of the Safe Operation of Unmanned systems for Reconnaissance in Complex Environments Army Technology Objective (SOURCE ATO), which is investigating the safe operation of robotics.
Using Android™ and iOS™ hand-held devices is not a new concept in robot manipulation. However, the authors of this
paper hope to introduce some novel concepts that may serve to make the interaction between Soldier and machine more
fluid and intuitive. By creating a better user experience, Android™ and iOS™ devices could help to reduce training time,
enhance performance, and increase acceptance of robotics as valuable mission tools for Soldiers.
This paper presents a seamlessly controlled human multi-robot system composed of semi-autonomous ground and aerial robots for source localization tasks. The system combines augmented reality interface capabilities with a human supervisor's ability to control multiple robots. The role of this human multi-robot interface is to allow an operator to control groups of heterogeneous robots in real time in a collaborative manner. The system uses advanced path planning algorithms to ensure obstacles are avoided and to free the operator for higher-level tasks. Each robot knows the environment and obstacles and can automatically generate a collision-free path to any user-selected target. Sensor information from each individual robot is displayed directly on the robot in the video view. In addition, a sensor-data-fused AR view is displayed that helps users pinpoint source information and supports the goals of the mission. The paper presents a preliminary human factors evaluation of this system in which several interface conditions are tested for source detection tasks. Results show that the novel augmented reality multi-robot controls (Point-and-Go and Path Planning) reduced mission completion times compared to traditional joystick control for target detection missions. Usability tests and operator workload analysis are also reported.
This paper presents an analysis of large scale decentralized SLAM under a variety of experimental conditions
to illustrate design trade-offs relevant to multi-robot mapping in challenging environments. As a part of work
through the MAST CTA, the focus of these robot teams is on the use of small-scale robots with limited sensing,
communication and computational resources. To evaluate mapping algorithms with large numbers (50+) of
robots, we developed a simulation incorporating sensing of unlabeled landmarks, line-of-sight blocking obstacles,
and communication modeling. Scenarios are randomly generated with variable models for sensing, communication,
and robot behavior.
The underlying Decentralized Data Fusion (DDF) algorithm in these experiments enables robots to construct a
map of their surroundings by fusing local sensor measurements with condensed map information from neighboring
robots. Each robot maintains a cache of previously collected condensed maps from neighboring robots, and
actively distributes these maps throughout the network to ensure resilience to communication and node failures.
We bound the size of the robot neighborhoods to control the growth of the size of neighborhood maps.
We present the results of experiments conducted in these simulated scenarios under varying measurement
models and conditions while measuring mapping performance. We discuss the trade-offs between mapping
performance and scenario design, including robot teams separating and joining, multi-robot data association,
exploration bounding, and neighborhood sizes.
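A minimal sketch of the cache-and-redistribute bookkeeping described above is shown below. It assumes a hypothetical condensed-map message carrying an origin robot ID and a version counter, and omits the actual map fusion mathematics; it illustrates only how stale entries are replaced and how the cache is bounded before being gossiped onward for resilience to node and link failures.

class CondensedMapCache:
    """Per-robot cache of condensed maps keyed by origin robot ID (illustrative)."""
    def __init__(self, max_neighborhood=8):
        self.maps = {}                          # robot_id -> (version, condensed_map)
        self.max_neighborhood = max_neighborhood

    def receive(self, robot_id, version, condensed_map):
        """Store a neighbor's condensed map if it is newer than what we hold."""
        held = self.maps.get(robot_id)
        if held is None or version > held[0]:
            self.maps[robot_id] = (version, condensed_map)
            return True                         # new information worth re-broadcasting
        return False

    def outgoing(self):
        """Maps to gossip onward, bounded to limit neighborhood-map growth."""
        newest = sorted(self.maps.items(), key=lambda kv: kv[1][0], reverse=True)
        return newest[: self.max_neighborhood]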
A key component in the emerging localization and mapping paradigm is an appearance-based place recognition
algorithm that detects when a place has been revisited. This algorithm can run in the background at a low
frame rate and be used to signal a global geometric mapping algorithm when a loop is detected. An optimization
technique can then be used to correct the map by 'closing the loop'. This allows an autonomous unmanned ground
vehicle to improve localization and map accuracy and successfully navigate large environments. Image-based
place recognition techniques lack robustness to sensor orientation and varying lighting conditions. Additionally,
the quality of range estimates from monocular or stereo imagery can decrease the loop closure accuracy. Here,
we present a lidar-based place recognition system that is robust to these challenges. This probabilistic framework
learns a generative model of place appearance and determines whether a new observation comes from a new or
previously seen place. Highly descriptive features called the Variable Dimensional Local Shape Descriptors are
extracted from lidar range data to encode environment features. The range data processing has been implemented
on a graphics processing unit to optimize performance. The system runs in real-time on a military research
vehicle equipped with a highly accurate, 360 degree field of view lidar and can detect loops regardless of the
sensor orientation. Promising experimental results are presented for both rural and urban scenes in large outdoor
environments.
Direct-lift micro air vehicles have important applications in reconnaissance. In order to conduct persistent surveillance
in urban environments, it is essential that these systems can perform autonomous landing maneuvers on elevated surfaces
that provide high vantage points without the help of any external sensor and with a fully contained on-board software
solution. In this paper, we present a micro air vehicle that uses vision feedback from a single down looking camera to
navigate autonomously and detect an elevated landing platform as a surrogate for a roof top. Our method requires no
special preparation (labels or markers) of the landing location. Rather, leveraging the planar character of urban structure,
the landing platform detection system uses a planar homography decomposition to detect landing targets and produce
approach waypoints for autonomous landing. The vehicle control algorithm uses a Kalman filter based approach for pose
estimation to fuse visual SLAM (PTAM) position estimates with IMU data to correct for high latency SLAM inputs and
to increase the position estimate update rate in order to improve control stability. Scale recovery is achieved using inputs
from a sonar altimeter. In experimental runs, we demonstrate a real-time implementation running on-board a micro aerial
vehicle that is fully self-contained and independent from any external sensor information. With this method, the vehicle is
able to search autonomously for a landing location and perform precision landing maneuvers on the detected targets.
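The planar homography decomposition used for landing-platform detection relates two views of the same plane: in normalized camera coordinates, two images of a plane with unit normal n at distance d from the first camera satisfy, up to scale,

H \;\simeq\; R \;+\; \frac{1}{d}\, t\, n^{\top},

so decomposing the estimated H recovers the relative rotation R, the scaled translation t/d, and the plane normal n. Intuitively, a dominant plane whose recovered normal is near-vertical and which sits above the surrounding ground is a candidate elevated landing surface; the specific detection and waypoint-generation heuristics are those described in the paper, while the relation above is the standard decomposition it builds on.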
A prototype, wide-field, optical sense-and-avoid instrument was constructed from low-cost commercial off-the-shelf
components, and configured as a network of smart camera nodes. To detect small, general-aviation aircraft
in a timely manner, such a sensor must detect targets at a range of 5-10 km at an update rate of a few
Hz. This paper evaluates the flight-test performance of the "DragonflEYE" sensor as installed on a Bell 205 helicopter. Both the Bell 205 and the Bell 206 (intruder aircraft) were fully instrumented to record position and orientation. Emphasis was given to the critical case of head-on collisions at typical general aviation altitudes and airspeeds. Imagery from the DragonflEYE was stored for the offline assessment of performance. Methodologies for assessing the key figures of merit, such as the signal-to-noise ratio, the range at first detection (R0), and angular target size, were developed. Preliminary analysis indicated an airborne detection range of 6.7 km under
typical visual meteorological conditions, which significantly exceeded typical visual acquisition ranges under the
same conditions.
Towards the goal of fast, vision-based autonomous flight, localization, and map building to support local planning and
control in unstructured outdoor environments, we present a method for incrementally building a map of salient tree trunks
while simultaneously estimating the trajectory of a quadrotor flying through a forest. We make significant progress in
a class of visual perception methods that produce low-dimensional, geometric information that is ideal for planning and
navigation on aerial robots, while directing computational resources using motion saliency, which selects objects that are
important to navigation and planning. By low-dimensional geometric information, we mean coarse geometric primitives,
which for the purposes of motion planning and navigation are suitable proxies for real-world objects. Additionally, we
develop a method for summarizing past image measurements that avoids expensive computations on a history of images
while maintaining the key non-linearities that make full map and trajectory smoothing possible. We demonstrate results
with data from a small, commercially-available quad-rotor flying in a challenging, forested environment.
In this paper, we propose multiple-object detection and classification methods using a multi-channel light detection and ranging (LIDAR) device for an unmanned ground vehicle (UGV) in a dense forest. The natural terrain environment is characterized by trees and bushes, which are representative wild objects. To describe them on a navigation map, we must detect and classify objects with a single LIDAR sensor rather than multiple sensors on a UGV; this has advantages in cost, communication, and synchronization. The more closely the scanned data resemble the real object, the more recognizable it becomes, and a large amount of 3D point cloud data can closely capture an object's appearance. However, there is a trade-off between the amount of point cloud data and the acquisition time: collecting more points increases both measurement time and computation time.
Our approach to multiple-object detection and classification consists of three steps that start from the raw 3D point cloud data, where one scan composes one frame of point cloud data. First, the point cloud data are divided into two groups, ground points and non-ground points. Second, the non-ground points are separated into vegetation points and object points. Finally, the object points are arranged as a set of feasible objects with a clustering method. In our approach, random sample consensus (RANSAC), support vector machine (SVM) with principal component analysis (PCA), and fuzzy c-means (FCM) algorithms are applied in these respective steps to achieve multiple-object detection and classification. In a local statistical approach, the density of the point cloud in the processing area influences system performance. Our primary research goal is to achieve improved performance with a data set of one frame per scan for a UGV. Thus, we evaluate the proposed object detection and classification algorithm on single scan frames acquired from three LIDAR devices with different numbers of scan channels. The 1-channel and 4-channel LIDAR devices are commercial devices, the LMS-111 and LD-MRS. The 8-channel device, the KIDAR-B25, was developed in our previous research in collaboration with a Korean company. We verify our proposed method through three results: the object detection rate, the point classification rate, and the computing time. The results of the method are analyzed against hand-labeled data.
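The first step of the pipeline, separating ground from non-ground points, is commonly done with a RANSAC plane fit. The sketch below shows a minimal numpy version with hypothetical parameter values (iteration count, inlier threshold); the subsequent vegetation/object separation and FCM clustering stages of the paper are not reproduced here.

import numpy as np

def ransac_ground(points, n_iters=200, inlier_thresh=0.15, rng=np.random.default_rng(0)):
    """Split an (N, 3) LIDAR point cloud into ground / non-ground points
    by fitting a dominant plane with RANSAC."""
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]   # 3 random points
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                                 # degenerate (collinear) sample
            continue
        normal /= norm
        dist = np.abs((points - sample[0]) @ normal)    # point-to-plane distances
        inliers = dist < inlier_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return points[best_inliers], points[~best_inliers]  # ground, non-ground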
Unmanned systems have become a critical element of the Army's Force Structure for applications such as Explosive Ordnance Disposal (EOD). Systems currently fielded are typically tele-operated and, thus, impose significant cognitive
burden upon the operator. The Robotics CTA (RCTA), a collaborative research endeavor between the Army Research
Laboratory and a consortium of eight industrial and academic partners, is developing fundamental technology to enable a
new level of autonomous capability for future unmanned systems that can act as teammates to Soldiers making up a
small unit. The Alliance is focusing research in five key areas: a cognitively based world model, semantic perception,
learning, meta-cognition, and adaptive behaviors. Because current world model representations are relatively shallow,
metrically based, and support only brittle behaviors, the RCTA is creating a cognitive-to-metric world model that can
incorporate and utilize mission context. Current perceptual capabilities for unmanned systems are generally limited to a
small number of well defined objects or behaviors. The RCTA is raising perception to a semantic level that enables
understanding of relationships among objects and behaviors. To successfully team with small units, the command and
control of unmanned systems must move away from the current hardware controller paradigm to one of verbal and
gestural communication, implicit cues, and transparency of action between Soldier and robot. The RCTA is also
exploring adaptive behavior and mechanics that will permit manipulation of arbitrarily shaped objects, animal-like
mobility in complex environments, and conduct of military missions in dynamic tactical conditions. Efforts to
incorporate learning from the lowest levels of the architecture upwards are key to each of the above.
The creation of high degree of freedom dynamic mobile manipulation techniques and behaviors will allow robots to
accomplish difficult tasks in the field. We are investigating the use of the body and legs of legged robots to improve the
strength, velocity, and workspace of an integrated manipulator to accomplish dynamic manipulation. This is an
especially challenging task, as all of the degrees of freedom are active at all times, the dynamic forces generated are
high, and the legged system must maintain robust balance throughout the duration of the tasks. To accomplish this goal,
we are utilizing trajectory optimization techniques to generate feasible open-loop behaviors for our 28-DOF quadruped robot (BigDog) by planning the trajectories in a 13-dimensional space. Covariance Matrix Adaptation techniques are
utilized to optimize for several criteria such as payload capability and task completion speed while also obeying
constraints such as torque and velocity limits, kinematic limits, and center of pressure location. These open-loop
behaviors are then used to generate feed-forward terms, which are subsequently used online to improve tracking and
maintain low controller gains. Some initial results on one of our existing balancing quadruped robots with an additional human-arm-like manipulator are demonstrated on robot hardware, including dynamic lifting and throwing of heavy objects (16.5 kg cinder blocks), using motions that resemble those of a human athlete more than typical robotic motions. Increased
payload capacity is accomplished through coordinated body motion.
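As a concrete illustration of how a Covariance Matrix Adaptation search like the one described above might be set up, the sketch below optimizes a low-dimensional trajectory parameterization against a scalar cost combining task speed with soft penalties for constraint violations. It uses the open-source pycma package, and the parameter vector, simulator stand-in, and penalty weights are hypothetical placeholders rather than the authors' actual BigDog formulation.

import cma
import numpy as np

def simulate_behavior(params):
    """Placeholder for a dynamics simulation of the parameterized behavior.
    Returns (task duration, worst torque excess over limits, center-of-pressure margin)."""
    duration = 2.0 + float(np.linalg.norm(params - 1.0))      # toy surrogate
    torque_excess = float(np.abs(params).max() - 3.0)
    cop_margin = 0.1 - 0.01 * float(np.abs(params).sum())
    return duration, torque_excess, cop_margin

def behavior_cost(params):
    """Scalar objective: fast task completion with soft constraint penalties."""
    duration, torque_excess, cop_margin = simulate_behavior(np.asarray(params))
    return (duration
            + 10.0 * max(0.0, torque_excess)    # penalize exceeding torque limits
            + 10.0 * max(0.0, -cop_margin))     # penalize leaving the support region

es = cma.CMAEvolutionStrategy(np.zeros(13), 0.5)   # 13-dimensional trajectory parameters
while not es.stop():
    candidates = es.ask()                          # sample candidate parameter vectors
    es.tell(candidates, [behavior_cost(x) for x in candidates])
best_params = es.result.xbest                      # best open-loop behavior found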
For mobile robots, the essential units of actuation, computation, and sensing must be designed to fit within
the body of the robot. Additional capabilities will largely depend upon a given activity, and should be easily
reconfigurable to maximize the diversity of applications and experiments. To address this issue, we introduce a
modular architecture originally developed and tested in the design and implementation of the X-RHex hexapod
that allows the robot to operate as a mobile laboratory on legs. In the present paper we will introduce the
specification, design and very earliest operational data of Canid, an actively driven compliant-spined quadruped
whose completely different morphology and intended dynamical operating point are nevertheless built around
exactly the same "Lab on Legs" actuation, computation, and sensing infrastructure. We will also review, more briefly, a second RHex variation, the XRL platform, built using the same components.
We present an integrated architecture in which perception and cognition interact and provide information to each other
leading to improved performance in real-world situations. Our system integrates the Felzenszwalb et al. object-detection
algorithm with the ACT-R cognitive architecture. The targeted task is to predict and classify pedestrian behavior in a
checkpoint scenario, most specifically to discriminate between normal versus checkpoint-avoiding behavior. The
Felzenszwalb algorithm is a learning-based algorithm for detecting and localizing objects in images. ACT-R is a
cognitive architecture that has been successfully used to model human cognition with a high degree of fidelity on tasks
ranging from basic decision-making to the control of complex systems such as driving or air traffic control. The
Felzenszwalb algorithm detects pedestrians in the image and provides ACT-R with a set of features based primarily on their
locations. ACT-R uses its pattern-matching capabilities, specifically its partial-matching and blending mechanisms, to
track objects across multiple images and classify their behavior based on the sequence of observed features. ACT-R also
provides feedback to the Felzenszwalb algorithm in the form of expected object locations that allow the algorithm to
eliminate false-positives and improve its overall performance. This capability is an instance of the benefits pursued in
developing a richer interaction between bottom-up perceptual processes and top-down goal-directed cognition. We
trained the system on individual behaviors (only one person in the scene) and evaluated its performance across single
and multiple behavior sets.
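The feedback loop described above, in which expected object locations from the cognitive model prune detector false positives, can be illustrated with a simple gating step. The sketch below uses a hypothetical distance threshold and score floor; it illustrates the idea only and is not the ACT-R partial-matching mechanism itself.

import numpy as np

def gate_detections(detections, expected_locations, max_dist=40.0, min_score=0.3):
    """Keep detections that either score highly on their own or lie near a
    location the cognitive model expects a tracked pedestrian to occupy.
    detections: list of (x, y, score); expected_locations: list of (x, y)."""
    kept = []
    expected = np.asarray(expected_locations, dtype=float)
    for x, y, score in detections:
        near_expected = (len(expected) > 0 and
                         np.min(np.linalg.norm(expected - [x, y], axis=1)) < max_dist)
        if score >= min_score or near_expected:
            kept.append((x, y, score))
    return kept

# Example: a weak detection survives because it matches a predicted location;
# an isolated low-score detection is discarded as a likely false positive.
print(gate_detections([(100, 50, 0.2), (400, 60, 0.1)], [(105, 48)]))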
Semantic perception involves naming objects and features in the scene, understanding the relations between them, and
understanding the behaviors of agents, e.g., people, and their intent from sensor data. Semantic perception is a central
component of future UGVs to provide representations which 1) can be used for higher-level reasoning and tactical
behaviors, beyond the immediate needs of autonomous mobility, and 2) provide an intuitive description of the robot's
environment in terms of semantic elements that can be shared effectively with a human operator. In this paper, we
summarize the main approaches that we are investigating in the RCTA as initial steps toward the development of
perception systems for UGVs.
Creating robots that can help humans in a variety of tasks requires robust mobility and the ability to safely navigate
among moving obstacles. This paper presents an overview of recent research in the Robotics Collaborative Technology
Alliance (RCTA) that addresses many of the core requirements for robust mobility in human-populated environments.
Safe Interval Path Planning (SIPP) allows for very fast planning in dynamic environments when planning time-minimal trajectories. Generalized Safe Interval Path Planning extends this concept to trajectories that minimize arbitrary cost functions. Finally, a generalized PPCP algorithm is used to generate plans that reason about the uncertainty in the
predicted trajectories of moving obstacles and try to actively disambiguate the intentions of humans whenever necessary.
We show how these approaches consider moving obstacles and temporal constraints and produce high-fidelity paths.
Experiments in simulated environments show the performance of the algorithms under different controlled conditions,
and experiments on physical mobile robots interacting with humans show how the algorithms perform under the
uncertainties of the real world.
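For readers unfamiliar with SIPP, the central data structure is the per-cell safe interval: a maximal window of time during which a grid cell is free of predicted dynamic obstacles. The sketch below is a minimal illustration of that idea under assumed inputs (sorted, non-overlapping occupancy intervals); it is not the RCTA implementation.

```python
# Hedged sketch of the core SIPP data structure: per-cell "safe intervals",
# i.e. maximal time windows during which a cell is not occupied by any
# predicted dynamic obstacle. Illustration only.
import math

def safe_intervals(occupied, horizon=math.inf):
    """occupied: sorted, non-overlapping (start, end) times the cell is blocked.
    Returns the complementary list of safe (start, end) intervals."""
    intervals, t = [], 0.0
    for s, e in occupied:
        if s > t:
            intervals.append((t, s))
        t = max(t, e)
    if t < horizon:
        intervals.append((t, horizon))
    return intervals

def earliest_arrival(arrival_time, interval):
    """Earliest time the robot can wait-and-enter the cell within a safe interval,
    or None if the interval closes before it can get there."""
    s, e = interval
    t = max(arrival_time, s)
    return t if t < e else None

# Example: a pedestrian is predicted to occupy the cell during [2.0, 4.0) and [7.0, 8.0).
print(safe_intervals([(2.0, 4.0), (7.0, 8.0)], horizon=10.0))
# -> [(0.0, 2.0), (4.0, 7.0), (8.0, 10.0)]
```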
Current ground robots are largely employed via tele-operation and provide their operators with useful tools to extend
reach, improve sensing, and avoid dangers. To move from robots that are useful as tools to truly synergistic human-robot
teaming, however, will require not only greater technical capabilities among robots, but also a better understanding of the
ways in which the principles of teamwork can be applied from exclusively human teams to mixed teams of humans and
robots. In this respect, a core characteristic that enables successful human teams to coordinate shared tasks is their ability
to create, maintain, and act on a shared understanding of the world and the roles of the team and its members in it. The
team performance literature clearly points towards two important cornerstones for shared understanding of team
members: mental models and situation awareness. These constructs have been investigated as products of teams as well;
amongst teams, they are shared mental models and shared situation awareness. Consequently, we are studying how these
two constructs can be measured and instantiated in human-robot teams. In this paper, we report results from three related
efforts that are investigating process and performance outcomes for human-robot teams. Our investigations include: (a) how human mental models of tasks and teams change depending on whether a teammate is a human, a service animal, or an advanced
automated system; (b) how computer modeling can lead to mental models being instantiated and used in robots; (c) how
we can simulate the interactions between human and future robotic teammates on the basis of changes in shared mental
models and situation assessment.
Polaris Sensor Technologies (PST) has developed a stereo vision upgrade kit for TALON® robot systems comprised of a
replacement gripper camera and a replacement mast zoom camera on the robot, and a replacement display in the
Operator Control Unit (OCU). Harris Corporation has developed a haptic manipulation upgrade for TALON® robot
systems comprised of a replacement arm and gripper and an OCU that provides haptic (force) feedback. PST and Harris
have recently collaborated to integrate the 3D vision system with the haptic manipulation system. In multiple studies conducted at Fort Leonard Wood, Missouri, it has been shown that 3D vision and haptics provide more intuitive perception of
complicated scenery and improved robot arm control, allowing for improved mission performance and the potential for
reduced time on target. This paper discusses the potential benefits of these enhancements to robotic systems used for the
domestic homeland security mission.
The Autonomous Urban Reconnaissance Ingress System (AURIS™) addresses a significant limitation of current
military and first responder robotics technology: the inability of reconnaissance robots to open doors. Leveraging
user testing as a baseline, the program has derived the specifications necessary for military personnel to open doors with fielded UGVs (Unmanned Ground Vehicles) and evaluates the technology's impact on operational mission areas (duration, timing, and user patience) in developing a tactically relevant, safe, and effective system. Funding is
provided through the US ARMY Tank Automotive Research, Development and Engineering Center (TARDEC) and
the project represents a leap forward in perception, autonomy, robotic implements, and coordinated payload
operation in UGVs. This paper describes high level details of specification generation, status of the last phase of
development, an advanced view of the system autonomy capability, and a short look ahead towards the ongoing
work on this compelling and important technology.
State-of-the-art explosive ordnance disposal robots have not, in general, adopted recent advances in control technology and man-machine interfaces and lag many years behind academia. This paper describes the Haptics-based Immersive Telerobotic System project, which is investigating an immersive telepresence environment incorporating advanced vehicle control systems, augmented immersive sensory feedback, dynamic 3D visual information, and haptic feedback for explosive ordnance disposal operators. The project aim is to provide operators with a more sophisticated interface and expanded sensory input to perform the complex tasks required to defeat improvised explosive devices successfully. The introduction of haptics and immersive telepresence has the potential to shift the way telepresence systems work for explosive ordnance disposal tasks or, more widely, for first-responder scenarios involving remote unmanned ground vehicles.
We model the scenario between a robotic system and its operating environment as a
strategic game between two players. The problem is formulated as a game of timing, and we
treat disturbances in a worst-case manner, i.e., as if they were placed by an opponent acting
optimally. Game theory is a formal way to analyze the interactions among a group of rational
players who behave strategically. We believe that behavior in the presence of disturbances using
games of timing will reduce to optimal control when the disturbance is suppressed. In this paper
we create a model of phase space similar to the dolichobrachistochrone problem. We discretize
phase space to a simple grid where Player P is trying to reach a goal as fast as possible, i.e., with
minimum cost. Player E is trying to maximize this cost. To do this, E has a limited number of
"chips" to distribute on the grid. How should E distribute his resources and how should P
navigate the grid? Rather than treating disturbances as a random occurrence, we seek to treat
them as an opponent's optimal strategy.
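A brute-force version of the grid game described above can be written directly: E enumerates chip placements, P responds with a minimum-cost (Dijkstra) path, and E keeps the placement that maximizes P's best cost. The grid size, unit cell cost, and chip penalty below are hypothetical placeholders, not the paper's model.

```python
# Hedged sketch of the grid game: E places cost "chips", then P takes a
# minimum-cost path; E picks the placement that maximizes P's best cost.
import heapq, itertools

def min_cost_path(cost, start, goal):
    rows, cols = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    pq = [(dist[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return float("inf")

def best_chip_placement(rows, cols, n_chips, chip_cost, start, goal):
    cells = [(r, c) for r in range(rows) for c in range(cols) if (r, c) not in (start, goal)]
    best = None
    for chips in itertools.combinations(cells, n_chips):
        cost = [[1.0] * cols for _ in range(rows)]
        for r, c in chips:
            cost[r][c] += chip_cost
        val = min_cost_path(cost, start, goal)
        if best is None or val > best[0]:
            best = (val, chips)
    return best

print(best_chip_placement(3, 3, 2, 5.0, (0, 0), (2, 2)))
```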
Unmanned ground vehicles (UGVs) are well-suited to a variety of tasks that are dangerous or repetitive for
humans to perform. Despite recent advances, UGVs still suffer from reliability issues, and human operation
failures have been identified as one root cause of UGV system failure. However, most literature relevant to UGV
reliability does not address the effects of human errors or the user interface. Our previous work investigated
the issue of user situational awareness and sense of presence in the robot workspace by implementing a Mixed
Reality interface featuring a first-person video feed with an Augmented Reality overlay and a third-person Virtual
Reality display. The interface was evaluated in a series of user tests in which users manually controlled a UGV
with a manipulator arm using traditional input modalities including a computer mouse, keyboard and gamepad.
In this study, we learned that users found it challenging to mentally map commands from the manual inputs to
the robot arm behavior. Also, switching between control modalities seemed to add to the cognitive load during
teleoperation tasks. A master-slave style manual controller can provide an intuitive one-to-one mapping from user
input to robot pose, and has the potential both to improve operator situational awareness for teleoperation tasks and to decrease mission completion time. This paper describes the design and implementation of a teleoperated
UGV with a Mixed Reality visualization interface and a master-slave controller that is suitable for teleoperated
mobile manipulation tasks.
Autonomous exploration and mapping is a vital capability for future robotic systems expected to function in arbitrary
complex environments. In this paper, we describe an end-to-end robotic solution for remotely mapping buildings. In a typical mapping mission, an unmanned system is directed from a distance to enter an unknown building, sense the internal structure, and, barring additional tasks, build a 2-D map of the building while in situ. This map provides a useful and
intuitive representation of the environment for the remote operator. We have integrated a robust mapping and
exploration system utilizing laser range scanners and RGB-D cameras, and we demonstrate an exploration and metacognition
algorithm on a robotic platform. The algorithm allows the robot to safely navigate the building, explore the
interior, report significant features to the operator, and generate a consistent map - all while maintaining localization.
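As one concrete ingredient of such a system, frontier detection on a 2-D occupancy grid is a common way to decide where to explore next. The sketch below illustrates that standard technique only and is not the exploration and metacognition algorithm used in this work; the cell encoding (0 = free, 1 = occupied, -1 = unknown) is an assumption.

```python
# Hedged sketch of frontier detection on a 2-D occupancy grid.
# Grid values: 0 = free, 1 = occupied, -1 = unknown.
def frontier_cells(grid):
    """Return free cells that border at least one unknown cell."""
    rows, cols = len(grid), len(grid[0])
    frontiers = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != 0:
                continue
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == -1:
                    frontiers.append((r, c))
                    break
    return frontiers

grid = [
    [0, 0, -1],
    [0, 1, -1],
    [0, 0,  0],
]
print(frontier_cells(grid))  # [(0, 1), (2, 2)]
```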
Under the Urban Environment Exploration project, the Space and Naval Warfare Systems Center Pacific (SSC-PAC) is maturing technologies and sensor payloads that enable man-portable robots to operate autonomously within the challenging conditions of urban environments. Previously, SSC-PAC has demonstrated robotic capabilities to navigate and localize without GPS and to map the ground floors of buildings of various sizes.1 SSC-PAC has since extended those capabilities to localize and map multiple multi-story buildings within a specified area. To facilitate these capabilities, SSC-PAC developed technologies that enable the robot to detect stairs and stairwells, maintain localization across multiple environments (e.g., in a 3D world, on stairs, with/without GPS), visualize data in 3D, plan paths between any two points within the specified area, and avoid 3D obstacles. These technologies have been developed as independent behaviors under the Autonomous Capabilities Suite, a behavior
architecture, and demonstrated at a MOUT site at Camp Pendleton. This paper describes the perceptions and
behaviors used to produce these capabilities, as well as an example demonstration scenario.
Mobile sensor nodes are an ideal solution for efficiently collecting measurements for a variety of applications.
Mobile sensor nodes offer a particular advantage when measurements must be made in hazardous and/or adversarial
environments. When mobile sensor nodes must operate in hostile environments, it would be advantageous for them to
be able to avoid undesired interactions with hostile elements. It is also of interest for the mobile sensor node to maintain
low observability in order to avoid detection by hostile elements. Conventional path-planning strategies typically
attempt to plan a path by optimizing some performance metric. The problem with this approach in an adversarial
environment is that it may be relatively simple for a hostile element to anticipate the mobile sensor node's actions (i.e.
optimal paths are also often predictable paths). Such information could then be leveraged to exploit the mobile sensor
node. Furthermore, dynamic adversarial environments are typically characterized by high uncertainty and high complexity, which can make synthesizing paths featuring adequate performance very difficult. The goal of this work is to
develop a path-planner anchored in info-gap decision theory, capable of generating non-deterministic paths that satisfy
predetermined performance requirements in the face of uncertainty surrounding the actions of the hostile element(s)
and/or the environment. This type of path-planner will inherently make use of the time-tested security technique of
varying paths and changing routines while taking into account the current state estimate of the environment and the
uncertainties associated with it.
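The "vary the path, keep the performance" idea can be illustrated with a toy satisficing selector: keep every candidate path whose cost meets a stated requirement and pick one at random, so the choice is unpredictable while the performance bound still holds. This is a simplified stand-in for the info-gap formulation; the candidate routes and costs below are hypothetical.

```python
# Hedged illustration of satisficing path selection: return a random path
# that meets the performance requirement instead of the single cheapest one.
import random

def satisficing_choice(candidates, required_cost, rng=random):
    """candidates: list of (path, cost). Return a randomly chosen path whose cost
    does not exceed the requirement; fall back to the cheapest if none qualifies."""
    feasible = [p for p, c in candidates if c <= required_cost]
    if feasible:
        return rng.choice(feasible)
    return min(candidates, key=lambda pc: pc[1])[0]

candidates = [("ridge route", 12.0), ("river route", 10.5), ("forest route", 11.0)]
print(satisficing_choice(candidates, required_cost=12.0))  # varies from run to run
```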
Proper management of the available on-board battery power is critical for reliable UGV operations. In this paper,
we will focus on the task of area coverage - in which a UGV is required to move through an area and travel
within a certain distance of each point - with limited available energy. We compare coverage paths generated by
existing methods and generate optimal trajectories by using a novel cost function. Using an iRobot Packbot, we
present results showing differences in energy usage while following these trajectories.
We also compare the energy usage of the Packbot while traveling at different velocities. When efficiency is measured as the ratio of energy used to distance traveled, our results show that it is more efficient to travel at a faster constant velocity.
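The comparison metric used above reduces to a simple ratio. The sketch below illustrates it with made-up power numbers, not measurements from the Packbot: at constant velocity, energy per meter is average power divided by velocity, so a higher speed can win even if it draws more power.

```python
# Hedged sketch of the energy-per-distance comparison; power values are illustrative.
def energy_per_meter(avg_power_w, velocity_m_s):
    """Energy consumed per meter (J/m) at constant velocity: P * t / d = P / v."""
    return avg_power_w / velocity_m_s

# Hypothetical example: even if average power rises at higher speed, the shorter
# traversal time can yield a lower energy-per-distance ratio.
slow = energy_per_meter(avg_power_w=90.0, velocity_m_s=0.3)   # 300 J/m
fast = energy_per_meter(avg_power_w=140.0, velocity_m_s=0.7)  # 200 J/m
print(f"slow: {slow:.0f} J/m, fast: {fast:.0f} J/m")
```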
From transporting troops and weapons systems to supplying beans, bullets, and Band-Aids to front-line warfighters,
tactical wheeled vehicles serve as the materiel backbone anywhere there are boots on the ground. Drawing from the
U.S. Army's Tactical Wheeled Vehicle Strategy and the Marine Corps Vision & Strategy 2025 reports, one may
conclude that the services have modest expectations for the introduction of large unmanned ground systems into
operational roles in the next 15 years. However, the Department of Defense has already invested considerably in the
research and development of full-size UGVs, and commanders deployed in both Iraq and Afghanistan have advocated
the urgent fielding of early incarnations of this technology, believing it could make a difference on their battlefields
today.
For military UGVs to evolve from mere tactical advantages into strategic assets with developed doctrine, they must
become as trustworthy as a well-trained warfighter in performing their assigned task. Starting with the Marine Corps'
ongoing Cargo Unmanned Ground Vehicle program as a baseline, and informed by feedback from previously deployed
subject matter experts, this paper examines the gaps which presently exist in UGVs from a mission-capable perspective.
It then considers viable near-term technical solutions to meet today's functional requirements, as well as long-term
development strategies to enable truly robust performance. With future conflicts expected to be characterized by
increasingly complex operational environments and a broad spectrum of rapidly adapting threats, one of the largest
challenges for unmanned ground systems will be the ability to exhibit agility in unpredictable circumstances.
The importance of Unmanned Ground Vehicles (UGVs) in the Military's operations is continually increasing. All Military branches now rely on advanced robotic technologies to aid in their mission operations. The integration of these technologies has not only enhanced capabilities but has also increased personnel safety by generating larger standoff distances. Currently, most UGVs are deployed by an exposed, dismounted Warfighter because the Military possesses only a limited capability to do so remotely and can deploy only a single UGV.
This paper explains the conceptual development of a novel approach to remotely deploy and extract multiple robots from
a single host platform. The Robotic Deployment & Extraction System (ROBODEXS) is a result of our development
research to improve marsupial robotic deployment at safe standoff distances. The presented solution is modular and
scalable, having the ability to deploy anywhere from two to twenty robots from a single deployment mechanism. For
larger carrier platforms, multiple sets of ROBODEXS modules may be integrated for deployment and extraction of even
greater numbers of robots. Such a system allows mass deployment and extraction from a single manned/unmanned
vehicle, which is not currently possible with other deployment systems.
Mesh networks for robot teleoperation pose different challenges than those associated with traditional mesh networks.
Unmanned ground vehicles (UGVs) are mobile and operate in constantly changing and uncontrollable environments.
Building a mesh network to work well under these harsh conditions presents a unique challenge. The Manually
Deployed Communication Relay (MDCR) mesh networking system extends the range of and provides non-line-of-sight
(NLOS) communications for tactical and explosive ordnance disposal (EOD) robots currently in theater. It supports
multiple mesh nodes and robots acting as nodes, and it works with all Internet Protocol (IP)-based robotic systems. Under MDCR, the performance of different routing protocols and route-selection metrics was compared, resulting in a modified version of the Babel mesh networking protocol. This paper discusses this and other topics encountered during
development and testing of the MDCR system.
A crucially important aspect of mission-critical robotic operations is ensuring, as far as possible, that an autonomous system will be able to complete its task. In a project for the Defense Threat Reduction Agency (DTRA), we
are developing methods to provide such guidance, specifically for counter-Weapons of Mass Destruction (C-WMD)
missions. In this paper, we describe the scenarios under consideration, the performance measures and metrics being
developed, and an outline of the mechanisms for providing performance guarantees.
The U.S. Navy conducts thousands of Maritime Interdiction Operations (MIOs) every year around the globe. Navy
Visit, Board, Search, and Seizure (VBSS) teams regularly board suspect ships and perform search operations, often in
hostile environments. There is a need for a small tactical robot that can be deployed ahead of the team to provide
enhanced situational awareness in these boarding, breaching, and clearing operations. Space and Naval Warfare Systems
Center Pacific (SSC Pacific) performed a market survey, identified and obtained a number of throwable robots that may
be useful in these situations, and conducted user evaluations with Navy VBSS team members, taking each of these
robots through all applicable steps of the VBSS operation in realistic training environments. From these tests, we
verified the requirements and defined the key performance parameters for an MIO robot. This paper describes the tests
conducted and the identified characteristics of this robot.
A method is presented for determining the position and orientation of a vehicle from a single color video taken
from the hood of the vehicle, for the purpose of assisting its autonomous operation at very high speeds on
rural roads. An implicit perspective transformation allows estimation of the vehicle's orientation and cross-road
image features. From these, an adaptive road model is built and the horizontal position of the vehicle can be
estimated. This method makes very few assumptions about the structure of the road or the path of the vehicle.
In a realistic, simulated environment, good road model construction and vehicle position estimation are achieved
at frame rates suitable for real-time high speed driving.
This paper describes an enhanced fusion method for an Inertial Navigation System (INS) based on a 3-axis accelerometer, a 3-axis gyroscope, and a laser scanner. In GPS-denied environments such as indoors or dense forests, pure INS odometry can be used to estimate the trajectory of a human or robot. However, it has a critical implementation problem: drift errors in velocity, position, and heading angles. Commonly, the problem is addressed by fusing visual landmarks, a magnetometer, or radio beacons, but these methods are not robust across diverse environments: darkness, fog or sunlight, an unstable magnetic field, and environmental obstacles.
We propose to overcome the drift problem using an Iterative Closest Point (ICP) scan matching algorithm with a laser scanner. The system consists of three parts. The first is the INS, which estimates attitude, velocity, and position based on a 6-axis Inertial Measurement Unit (IMU) using both 'Heuristic Reduction of Gyro Drift' (HRGD) and 'Heuristic Reduction of Velocity Drift' (HRVD) methods. The second is a frame-to-frame ICP matching algorithm that estimates position and attitude from laser scan data. The third is an extended Kalman filter for fusing the multi-sensor data from the INS and the Laser Range Finder (LRF).
The proposed method is simple and robust in diverse environments, allowing the drift error to be reduced efficiently. We confirm the result by comparing experimental odometry with the ICP- and LRF-aided INS in a long corridor.
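The fusion step can be illustrated with a minimal planar EKF in which IMU-derived motion increments drive the prediction and the ICP scan-match pose serves as the measurement. The state [x, y, yaw], noise matrices, and numbers below are illustrative assumptions, not the paper's formulation or tuning (which also includes the HRGD/HRVD corrections).

```python
# Hedged sketch: EKF prediction from INS motion increments, update from an ICP pose.
import numpy as np

def ekf_predict(x, P, delta, Q):
    """delta = [dx, dy, dyaw] in the body frame, integrated from the IMU."""
    c, s = np.cos(x[2]), np.sin(x[2])
    x_pred = x + np.array([c * delta[0] - s * delta[1],
                           s * delta[0] + c * delta[1],
                           delta[2]])
    F = np.array([[1, 0, -s * delta[0] - c * delta[1]],
                  [0, 1,  c * delta[0] - s * delta[1]],
                  [0, 0, 1]])
    return x_pred, F @ P @ F.T + Q

def ekf_update(x, P, z, R):
    """z = [x, y, yaw] pose from ICP scan matching (measurement model H = I)."""
    H = np.eye(3)
    y = z - x
    y[2] = (y[2] + np.pi) % (2 * np.pi) - np.pi   # wrap heading residual
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_new = x + K @ y
    return x_new, (np.eye(3) - K @ H) @ P

x, P = np.zeros(3), np.eye(3) * 0.01
x, P = ekf_predict(x, P, delta=np.array([0.10, 0.0, 0.02]), Q=np.eye(3) * 1e-3)
x, P = ekf_update(x, P, z=np.array([0.09, 0.01, 0.018]), R=np.eye(3) * 1e-2)
print(x)
```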
This paper presents a pose estimation method based on the 1-Point RANSAC EKF (Extended Kalman Filter)
framework. The method fuses the depth data from a LIDAR and the visual data from a monocular camera
to estimate the pose of an Unmanned Ground Vehicle (UGV) in a GPS-denied environment. Its estimation framework continuously updates the vehicle's 6D pose state and temporary estimates of the extracted visual features'
3D positions. In contrast to the conventional EKF-SLAM (Simultaneous Localization And Mapping) frameworks,
the proposed method discards feature estimates from the extended state vector once they are no longer observed
for several steps. As a result, the extended state vector always maintains a reasonable size that is suitable for
online calculation. The fusion of laser and visual data is performed both in the feature initialization part of the
EKF-SLAM process and in the motion prediction stage. A RANSAC pose calculation procedure is devised to
produce a pose estimate for the motion model. The proposed method has been successfully tested on the Ford
campus's LIDAR-Vision dataset. The results are compared with the ground truth data of the dataset and the
estimation error is ~1.9% of the path length.
In this paper, we propose a statistical approach for analyzing the performance of uncertain systems.
By treating the uncertain parameters of systems as random variables, we formulate a wide class of
performance analysis problems as a general problem of quantifying the deviation of a random variable
from its mean value. New concentration inequalities are developed to make such quantification rigorous
and analytically simple. Application examples are given for demonstrating the power of our approach.
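As an example of the kind of bound in question, a classical concentration inequality (Hoeffding's) quantifies how far a sum of bounded, independent random parameters can stray from its mean; the paper's contribution is new inequalities of this type, which are not reproduced here.

```latex
% Classical illustration only (Hoeffding's inequality), not the paper's new bounds.
% For independent $X_1,\dots,X_n$ with $a_i \le X_i \le b_i$ and $S_n = \sum_i X_i$:
\[
  \Pr\bigl(\,|S_n - \mathbb{E}[S_n]| \ge t\,\bigr)
  \;\le\; 2\exp\!\left(\frac{-2t^2}{\sum_{i=1}^{n}(b_i - a_i)^2}\right),
  \qquad t > 0 .
\]
```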
Multi-robot systems may be capable of performing a wide variety of distributed, hazardous, and complex tasks that
are difficult or impossible for independent robots. Despite significant research accomplishments and potential
benefits for defense, space, and other application areas, much of the technology is still in early development. This
paper reviews influential taxonomies and trends in multi-robotics. A condensed model of multi-robot interaction is
presented. Potential near-term defense and space applications are categorized into two mission types: distributed and
complex. Conclusions are drawn concerning research opportunities toward multi-robot systems for the defense and
space domains.
We present a videometric method and system that implement terminal guidance for accurate Unmanned Aerial Vehicle (UAV) landing. In the videometric system, two calibrated cameras attached to the ground are used, and a calibration method requiring at least 5 control points is developed to calibrate the intrinsic and extrinsic parameters of the cameras. Cameras with 850 nm spectral filters are used to recognize an 850 nm LED target fixed on the UAV, which can highlight itself in images with complicated backgrounds. An NNLOG (normalized negative Laplacian of Gaussian) operator is developed for automatic target detection and tracking. Finally, the 3-D position of the UAV can be calculated with high accuracy and transferred to the control system to direct accurate UAV landing. The videometric system can operate at a rate of 50 Hz. Many real-flight and static-accuracy experiments demonstrate the correctness of the proposed method and indicate the reliability and robustness of the system. The static-accuracy results show that the deviation is less than 10 cm when the target is far from the cameras and less than 2 cm within a 100 m region. The real-flight results show that the deviation from DGPS is less than 20 cm. The system implemented in this paper won first prize in the AVIC Cup International UAV Innovation Grand Prix and was the only entry to achieve accurate UAV landing without GPS or DGPS.
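The geometric core of such a two-camera system is triangulation of the target from its pixel locations in both calibrated views. The sketch below uses standard linear (DLT) triangulation with made-up camera matrices and a synthetic target; it illustrates the principle only, not the calibration or NNLOG detection pipeline described in the paper.

```python
# Hedged sketch: linear (DLT) triangulation of a target seen by two calibrated cameras.
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """P1, P2: 3x4 camera projection matrices; uv1, uv2: (u, v) pixel detections.
    Returns the 3-D point that best satisfies both projections (least squares)."""
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Two toy cameras 1 m apart along x, both looking down +z, focal length 500 px.
K = np.array([[500.0, 0, 320.0], [0, 500.0, 240.0], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
point = np.array([0.2, 0.1, 10.0, 1.0])
uv1 = (P1 @ point)[:2] / (P1 @ point)[2]
uv2 = (P2 @ point)[:2] / (P2 @ point)[2]
print(triangulate(P1, P2, uv1, uv2))  # ~[0.2, 0.1, 10.0]
```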