This PDF file contains the front matter associated with SPIE Proceedings Volume 7692, including the title page, copyright information, table of contents, conference committee listing, and introduction (if any).
Self-Organizing, Collaborative, and Unmanned ISR Robots: Joint Session with Conference 7707
For decades, military and other national security agencies have been denied unfettered access to the National Airspace System (NAS) because their unmanned aircraft lack a highly reliable and effective collision avoidance capability.
The controlling agency, the Federal Aviation Administration, justifiably demands "no harm" to the safety of the
NAS. To overcome the constraints imposed on Unmanned Aircraft Systems (UAS) use of the NAS, a new, complex,
conformable collision avoidance system has been developed - one that will be effective in all flyable weather
conditions, overcoming the shortfalls of other sensing systems, including radar, lidar, acoustic, EO/IR, etc., while
meeting form factor and cost criteria suitable for Tier II UAS operations. The system also targets Tier I as an
ultimate goal, understanding that the operational limitations of the smallest UASs may require modification of the design
that is suitable for Tier II and higher. The All Weather Sense and Avoid System (AWSAS) takes into account the
FAA's plan to incorporate ADS-B (out) for all aircraft by 2020, and it is intended to make collision avoidance
capability available for UAS entry into the NAS as early as 2013. Once approved, UASs will be able to fly mission or training
flights in the NAS free of the constraints presently in place. Upon implementation this system will achieve collision
avoidance capability for UASs deployed for national security purposes and will allow expansion of UAS usage for
commercial or other civil purposes.
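As a rough illustration of the geometry such a system must evaluate (this sketch is not part of AWSAS; the state-vector layout and the alert thresholds are assumptions), ADS-B (out) position and velocity reports can feed a closest-point-of-approach check between two aircraft:

import math

def time_of_closest_approach(own, intruder):
    """Estimate time (s) and separation (m) at the closest point of
    approach from two ADS-B-style state vectors (x, y, vx, vy) in a
    local flat-earth frame. Illustrative only; the field layout and
    units are assumptions, not the AWSAS design."""
    rx, ry = intruder[0] - own[0], intruder[1] - own[1]   # relative position
    vx, vy = intruder[2] - own[2], intruder[3] - own[3]   # relative velocity
    v2 = vx * vx + vy * vy
    t = 0.0 if v2 == 0 else max(0.0, -(rx * vx + ry * vy) / v2)
    return t, math.hypot(rx + vx * t, ry + vy * t)

# Hypothetical alert rule: warn if the predicted miss distance is under
# 1 nmi (1852 m) within the next 60 s.
t_cpa, d_cpa = time_of_closest_approach((0, 0, 80, 0), (5000, 200, -70, 0))
if t_cpa < 60 and d_cpa < 1852:
    print("conflict predicted: t=%.1f s, miss=%.0f m" % (t_cpa, d_cpa))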
We present an on-the-move LIDAR-based object detection system for autonomous and semi-autonomous unmanned vehicle
systems. In this paper we make several contributions: (i) we describe an algorithm for real-time detection of objects
such as doors and stairs in indoor environments; (ii) we describe efficient data structures and algorithms for processing 3D
point clouds acquired by laser scanners in a streaming manner, which minimize memory copying and access. We show
qualitative results demonstrating the effectiveness of our approach on runs in an indoor office environment.
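The paper's exact data structures are not given in the abstract, so the following is only a minimal sketch of the streaming idea, assuming a fixed-capacity ring buffer so that new scans overwrite the oldest points in place, without per-scan reallocation or copying:

import numpy as np

class StreamingCloud:
    """Fixed-capacity ring buffer for 3D points: a sketch of streaming
    point-cloud storage that avoids growth and copying. Capacity and
    layout are illustrative, not the authors' implementation."""
    def __init__(self, capacity=200000):
        self.buf = np.empty((capacity, 3), dtype=np.float32)
        self.head = 0
        self.full = False

    def add_scan(self, points):
        n = len(points)
        idx = (self.head + np.arange(n)) % len(self.buf)
        self.buf[idx] = points                    # overwrite in place
        self.head = (self.head + n) % len(self.buf)
        self.full = self.full or self.head < n    # wrapped at least once

    def points(self):
        return self.buf if self.full else self.buf[:self.head]

cloud = StreamingCloud()
cloud.add_scan(np.random.rand(1000, 3).astype(np.float32))
print(cloud.points().shape)                       # (1000, 3)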
Unmanned air and ground vehicles are an integral part of military operations. However, using a robot goes beyond moving the platform from point A to point B. The operator responsible for the robots has a multitude of tasks to complete: route planning for the robot, monitoring the robot during the mission, monitoring and interpreting the sensor information received by the robot, and communicating that information to others. As a result, the addition of
robotics can be considered a burden on the operator if not integrated appropriately into the system. The goal of the US
Army Research Laboratory's Human Robotic Interaction (HRI) Program is to enable the Soldier to use robotic systems
in a way that increases performance, that is, to facilitate effective collaboration between unmanned systems and the
Soldier. The program uses multiple research approaches: modeling, simulation, laboratory experimentation, and field experimentation. Our basic and applied research in HRI includes supervisory control,
mounted and dismounted robotic control, and mitigation strategies for the HRI environment. This paper describes our
HRI program across these various domains and how our research is supporting both current and future military
operations.
Being able to understand and carry out spoken natural language instructions, even in limited domains, is extremely challenging for current robots. The difficulties are multifarious, ranging from problems with speech recognizers to
difficulties with parsing disfluent speech or resolving references based on perceptual or task-based knowledge.
In this paper, we present our initial efforts to address these problems with an integrated natural language understanding system, implemented in our DIARC architecture, on a robot that can reliably handle fairly unconstrained, ungrammatical, and incomplete spoken instructions in a limited domain.
Those applying autonomous technologies to military systems strive to enhance human-robot and robot-robot
performance. Beyond performance, the military must be concerned with local area security. Characterized as "secure
mobility", military systems must enable safe and effective terrain traversal concurrent with maintenance of situational
awareness (SA). One approach to interleaving these objectives is supervisory control, with popular options being shared
and traded control. Yet, with the scale and expense of military assets, common technical issues such as transition time
and safeguarding become critical; especially as they interact with Soldier capabilities. Study is required to enable
selection of control methods that optimize Soldier-system performance while safeguarding both individually. The current
report describes a study utilizing experimental military vehicles and simulation systems enabling teleoperation and
supervisory control. Automated triggering of SA demands was interspersed with a set of challenging driving maneuvers
in a 'teleoperation-like' context to examine the influence of supervisory control on Soldier-system performance. Results
indicated that direct application of supervisory control, while beneficial under particular demands, requires continued
development to be perceived by Soldiers as useful. Future efforts should more tightly couple the information exchanged
between the Soldier and system to overcome current challenges not addressed by standard control methods.
Teleoperation is the currently accepted method of control of military unmanned ground vehicles (UGVs) in the field.
Degraded communications affect the operator's tasks of driving, navigating, and maintaining UGV situation awareness.
A potential approach to address this challenge is to provide the UGV with local autonomy to generate driving commands
(translation and rotation rates). This paper describes an experiment and preliminary results comparing "point-and-go" supervisory control, in which the operator designates a goal point on the 2D driving display, to teleoperation as a function of communications degradation and terrain roughness. Three methods of visual supervisory control were tested (visual dead reckoning and two visual servoing methods) and compared to teleoperation.
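As a hedged illustration of how a designated goal point could be mapped to translation and rotation rates (a plain proportional heading controller with made-up gains, not the visual dead-reckoning or visual servoing methods actually tested):

import math

def point_and_go_cmd(goal_bearing_rad, goal_range_m,
                     v_max=1.5, k_turn=1.0, stop_radius=0.5):
    """Map a designated goal point (bearing, range) to driving commands
    (translation m/s, rotation rad/s). All constants are illustrative."""
    if goal_range_m < stop_radius:
        return 0.0, 0.0                                  # arrived
    w = k_turn * goal_bearing_rad                        # turn toward goal
    v = v_max * max(0.0, math.cos(goal_bearing_rad))     # slow when off-axis
    return v, w

print(point_and_go_cmd(math.radians(20), 10.0))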
Teams of heterogeneous robots with different dynamics or capabilities could perform a variety of tasks such as multipoint surveillance, cooperative transport, and exploration in hazardous environments. In this study, we work with heterogeneous teams of semi-autonomous ground and aerial robots for contaminant localization. We developed a human interface system that links every real robot to its virtual counterpart. A novel virtual interface, integrated with augmented reality, monitors the position and video-derived sensory information of the ground and aerial robots in a 3D virtual environment and improves user situational awareness. An operator can efficiently control the real robots using a Drag-to-Move method on their virtual counterparts, which enables collaborative control of groups of heterogeneous robots and allows more contaminant sources to be pursued simultaneously. An advanced feature of the virtual interface is guarded teleoperation, which can prevent operators from accidentally driving multiple robots into walls and other objects. Moreover, image guidance and tracking reduce operator workload.
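A minimal sketch of the guarded-teleoperation idea, assuming a simple range-based veto on the operator's forward command (the thresholds are illustrative, not the interface's actual values):

def guard(v_cmd, w_cmd, ranges, stop_dist=0.4, slow_dist=1.5):
    """Scale or veto the operator's forward command based on the nearest
    obstacle range (m) in the direction of travel."""
    nearest = min(ranges)
    if nearest < stop_dist:
        return 0.0, w_cmd                        # veto forward motion
    if nearest < slow_dist:                      # linear slow-down band
        scale = (nearest - stop_dist) / (slow_dist - stop_dist)
        return v_cmd * scale, w_cmd
    return v_cmd, w_cmd

print(guard(1.0, 0.2, [0.9, 2.3, 4.0]))          # (~0.45, 0.2)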
Tomorrow's military systems will require novel methods for assessing Soldier performance and situational awareness
(SA) in mobile operations involving mixed-initiative systems. Although new methods may augment Soldier assessments,
they may also reduce Soldier performance as a function of demand on workload, requiring concurrent performance of
mission and assessment tasks. The present paper describes a unique approach that supports assessment in environments
approximating the operational context within which future systems will be deployed. A complex distributed system was
required to emulate the operational environment. Separate computational and visualization systems provided an
environment representative of the military operational context, including a 3D urban environment with dynamic human
entities. Semi-autonomous driving was achieved with a simulated autonomous mobility system and SA was assessed
through digital reports. A military crew station mounted on a 6-DOF motion simulator was used to create the physical
environment. Cognitive state evaluation was enabled using physiological monitoring. Analyses indicated individual
differences in temporal and accuracy components when identifying key features of potential threats; i.e., comparing
Soldiers and insurgents with non-insurgent civilians. The assessment approach provided a natural, operationally-relevant
means of assessing needs of future secure mobility systems and detecting key factors affecting Soldier-system
performance as foci for future development.
The proliferation of intelligent systems in today's military demands increased focus on the optimization of human-robot
interactions. Traditional studies in this domain involve large-scale field tests that require humans to operate semiautomated
systems under varying conditions within military-relevant scenarios. However, provided that adequate
constraints are employed, modeling and simulation can be a cost-effective alternative and supplement. The current
presentation discusses a simulation effort that was executed in parallel with a field test with Soldiers operating military
vehicles in an environment that represented key elements of the true operational context. In this study, "constructive"
human operators were designed to represent average Soldiers executing supervisory control over an intelligent ground
system. The constructive Soldiers were simulated performing the same tasks as those performed by real Soldiers during
a directly analogous field test. Exercising the models in a high-fidelity virtual environment provided predictive results
that represented actual performance in certain aspects, such as situational awareness, but diverged in others. These
findings largely reflected the quality of modeling assumptions used to design behaviors and the quality of information
available on which to articulate principles of operation. Ultimately, predictive analyses partially supported expectations,
with deficiencies explicable via Soldier surveys, experimenter observations, and previously-identified knowledge gaps.
Maturing technologies and complex payloads coupled with a future objective to reduce the logistics burden of current
unmanned aerial systems (UAS) operations require a change to the 2-crew employment paradigm. Increased automation
and operator supervisory control of unmanned systems have been advocated to meet the objective of reducing the crew
requirements, while managing future technologies. Specifically, a delegation control employment strategy has resulted in
reduced workload and higher situation awareness for single operators controlling multiple unmanned systems in
empirical studies [1,2]. Delegation control is characterized by the operator's ability to call a single "play" that initiates
prescribed default actions for each vehicle and associated sensor related to a common mission goal. Based upon the
effectiveness of delegation control in simulation, the U.S. Army Aeroflightdynamics Directorate (AFDD) developed a
Delegation Control (DelCon) operator interface with voice recognition implementation for play selection, real-time play
modification, and play status with automation transparency to enable single operator control of multiple unmanned
systems in flight. AFDD successfully demonstrated delegation control in a Troops-in-Contact mission scenario at Ft.
Ord in 2009. This summary showcases the effort as a beneficial advance in single operator control of multiple UAS.
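A toy sketch of the play-calling pattern, with hypothetical play names, vehicles, and actions (DelCon layers voice recognition, real-time play modification, and status transparency on top of something like this dispatch):

PLAYBOOK = {
    "overwatch": {
        "uav1": {"route": "orbit_north", "sensor": "eo_wide"},
        "uav2": {"route": "orbit_south", "sensor": "ir_narrow"},
    },
    "route_recon": {
        "uav1": {"route": "leg_ahead", "sensor": "eo_narrow"},
        "uav2": {"route": "leg_flank", "sensor": "eo_wide"},
    },
}

def call_play(name, overrides=None):
    """One play fans out prescribed default actions to every vehicle and
    sensor tied to a common mission goal; overrides model real-time
    play modification."""
    tasking = {v: dict(cfg) for v, cfg in PLAYBOOK[name].items()}
    for vehicle, change in (overrides or {}).items():
        tasking[vehicle].update(change)
    return tasking

print(call_play("overwatch", {"uav2": {"sensor": "eo_wide"}}))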
The Robotics Collaborative Technology Alliances (RCTA) program, which ran from 2001 to 2009, was funded by the
U.S. Army Research Laboratory and managed by General Dynamics Robotic Systems. The alliance brought together a
team of government, industrial, and academic institutions to address research and development required to enable the
deployment of future military unmanned ground vehicle systems ranging in size from man-portables to ground combat
vehicles. Under RCTA, three technology areas critical to the development of future autonomous unmanned systems
were addressed: advanced perception, intelligent control architectures and tactical behaviors, and human-robot
interaction. The Jet Propulsion Laboratory (JPL) participated as a member for the entire program, working four tasks in
the advanced perception technology area: stereo improvements, terrain classification, pedestrian detection in dynamic
environments, and long range terrain classification. Under the stereo task, significant improvements were made to the
quality of stereo range data used as a front end to the other three tasks. Under the terrain classification task, a multi-cue
water detector was developed that fuses cues from color, texture, and stereo range data, and three standalone water
detectors were developed based on sky reflections, object reflections (such as trees), and color variation. In addition, a
multi-sensor mud detector was developed that fuses cues from color stereo and polarization sensors. Under the long
range terrain classification task, a classifier was implemented that uses unsupervised and self-supervised learning of
traversability to extend the classification of terrain over which the vehicle drives to the far-field. Under the pedestrian
detection task, stereo vision was used to identify regions-of-interest in an image, classify those regions based on shape,
and track detected pedestrians in three-dimensional world coordinates. To improve the detectability of partially
occluded pedestrians and reduce pedestrian false alarms, a vehicle detection algorithm was developed. This paper
summarizes JPL's stereo-vision based perception contributions to the RCTA program.
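As a schematic sketch of the multi-cue fusion idea, assuming per-pixel likelihood maps for each cue and illustrative weights (the abstract does not give JPL's actual fusion rule):

import numpy as np

def fuse_water_cues(color_cue, texture_cue, stereo_cue,
                    weights=(0.4, 0.3, 0.3), threshold=0.6):
    """Combine per-pixel water likelihoods from color, texture, and
    stereo range cues into a single detection mask."""
    score = (weights[0] * color_cue +
             weights[1] * texture_cue +
             weights[2] * stereo_cue)
    return score > threshold

c = np.array([[0.9, 0.1], [0.8, 0.2]])
t = np.array([[0.7, 0.2], [0.9, 0.1]])
s = np.array([[0.8, 0.3], [0.7, 0.4]])
print(fuse_water_cues(c, t, s))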
The Velodyne HDL-64E is a 64-laser 3D (360 x 26.8 degree) scanning LIDAR. It was designed to fill the perception
needs of DARPA Urban Challenge vehicles. As such, it was principally intended for ground use. This paper
presents the performance of the HDL-64E as it relates to the marine environment for unmanned surface vehicle
(USV) obstacle detection and avoidance. We describe the sensor's capacity for discerning relevant objects at sea,
both through subjective observations of the raw data and through a rudimentary automated obstacle detection
algorithm. We also discuss some of the complications that have arisen with the sensor.
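A sketch of what a rudimentary detector along these lines might do, assuming a known sea-surface height and a simple height gate (a real detector must also handle wave clutter and platform motion; values are illustrative):

import numpy as np

def detect_obstacles(points, sea_level=0.0, height_gate=0.5):
    """Keep LIDAR returns more than height_gate meters above the
    estimated sea surface; everything else is treated as water."""
    return points[points[:, 2] > sea_level + height_gate]

pts = np.array([[10.0, 2.0, 0.1],     # wave return: rejected
                [25.0, -4.0, 1.8],    # buoy-height return: kept
                [40.0, 1.0, 0.3]])
print(detect_obstacles(pts))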
Ironically, the final frontiers for the UAV (unmanned aerial vehicle) are the closest spaces at hand. There is an urgent
operational capability gap in the area of proximate reconnaissance, intelligence, surveillance, and target acquisition
(PRISTA) as well as close quarters combat (CQC). Needs for extremely close-range functionality in land, sea, and urban
theaters remain unfilled, largely due to the challenges presented by the maneuverability and silent operating floor
required to address these missions. The evolution of small, nimble and inexpensive VTOL UAV assets holds much
promise in terms of filling this gap. Just as UAVs have evolved from large manned aircraft, so have MAVs (Micro
Aerial Vehicles) evolved from UAVs. As unmanned aviation evolves into aerial robotics, NAV (Nano Aerial Vehicle)
research will become the next hotbed of unmanned aerial systems development as these systems continue to mature in
response to the need to find robotic replacements for humans in PRISTA, CQC, and many other hazardous duties.
We present a real-time pedestrian detection system that uses cues derived from structure and appearance classification. We discuss several novel ideas to achieve computational efficiency while improving on both detection and false-alarm rates: (i) at the front end of our system we employ stereo to detect pedestrians in 3D range maps and to classify surrounding structure such as buildings, trees, and poles in the scene. The structure classification efficiently labels a substantial amount of non-relevant image regions and guides the subsequent, computationally expensive processing to focus on relatively small image parts; (ii) we improve the appearance-based classifier, built on HoG descriptors, by performing template matching with 2D human shape contour fragments, which results in improved localization and accuracy; (iii) we train individual classifiers at several depth ranges, which allows us to account for appearance and 2D shape changes at variable distances in front of the camera. Our method is evaluated on publicly available datasets and is shown to match or exceed the performance of leading pedestrian detectors in terms of accuracy while achieving real-time computation (10 Hz), which makes it adequate for deployment in field robots and other navigation platforms.
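Idea (iii) amounts to a lookup from the stereo depth of a candidate region to a classifier trained for that band; a toy sketch with hypothetical bands and placeholder classifier names:

DEPTH_BANDS = [(0.0, 10.0, "near_clf"), (10.0, 25.0, "mid_clf"),
               (25.0, 60.0, "far_clf")]

def classifier_for_depth(depth_m):
    """Select the classifier trained for the depth band containing this
    region of interest; outside the trained range, skip detection."""
    for lo, hi, clf in DEPTH_BANDS:
        if lo <= depth_m < hi:
            return clf
    return None

print(classifier_for_depth(17.3))     # mid_clf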
The enhanced situational awareness via road sign recognition (ESARR) system provides vehicle position estimates in the
absence of GPS signal via automated processing of roadway fiducials (primarily directional road signs). Sign images are
detected and extracted from a vehicle-mounted camera system, then preprocessed and read via a custom optical character
recognition (OCR) system specifically designed to cope with low quality input imagery. Vehicle motion and 3D scene
geometry estimation enables efficient and robust sign detection with low false alarm rates. Multi-level text processing
coupled with GIS database validation enables effective interpretation even of extremely low resolution low contrast sign
images. In this paper, we report on ESARR development progress, including the design and architecture, image processing framework, localization methodologies, and results to date. Highlights of the real-time vehicle-based
directional road-sign detection and interpretation system will be described along with the challenges and progress in
overcoming them.
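A minimal sketch of the GIS-validation step, assuming fuzzy matching of noisy OCR output against the place names expected near the vehicle's estimated position (the matcher and threshold are illustrative, not the ESARR text pipeline):

import difflib

def validate_sign_text(ocr_text, gis_names, min_ratio=0.7):
    """Match noisy OCR output against GIS place names; return the best
    match or None if nothing is close enough."""
    best = difflib.get_close_matches(ocr_text.upper(),
                                     [n.upper() for n in gis_names],
                                     n=1, cutoff=min_ratio)
    return best[0] if best else None

print(validate_sign_text("GA1THERSBURG", ["Gaithersburg", "Rockville"]))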
This work represents the fifth in a series of studies on safe operations of unmanned ground vehicles in the proximity of
pedestrians. The U.S. Army Research Laboratory (ARL), the National Institute of Standards and Technology (NIST),
and the Robotics Collaborative Technology Alliance (RCTA) conducted the study on the campus of NIST in
Gaithersburg, MD in 2009, the final year of the RCTA.
The experiment assessed the performance of six RCTA algorithms in detecting and tracking moving pedestrians from
sensors mounted on a moving platform. Sensors include 2-D and 3-D LADAR, 2-D SICK, and stereovision. Algorithms
reported only detected human tracks. NIST ground truth methodology was used to assess the algorithm-reported
detections as to true positive, misclassification, or false positive as well as distance to first detection and elapsed tracking
time. A NIST-developed viewer facilitated real-time data checking and subsequent analysis. Factors of the study include
platform speed, pedestrian speed, and clutter density in the environment. Pedestrian motion was choreographed to ensure
similar perspective from the platform regardless of experimental conditions. Pedestrians were upright in the principal
study, but excursions examined group movement, nonlinear paths, occluded paths, and alternative postures.
We will present the findings of this study and benchmark detection and tracking for subsequent robotic research in this
program. We also address the impact of this work on pedestrian avoidance.
A Distributed Aperture Vision System for ground vehicles is described. An overview of the hardware including sensor
pod, processor, video compression, and displays is provided. This includes a discussion of the choice between an
integrated sensor pod and individually mounted sensors, open architecture design, and latency issues as well as flat panel
versus head mounted displays. This technology is applied to various ground vehicle scenarios, including closed-hatch
operations (operator in the vehicle), remote operator tele-operation, and supervised autonomy for multi-vehicle
unmanned convoys. In addition, remote vision for automatic perimeter surveillance using autonomous vehicles and
automatic detection algorithms is demonstrated.
In this paper, we report on the development of a 3D vision field upgrade kit for the TALON robot, consisting of a replacement flat-panel stereoscopic display and multiple stereo camera systems. An assessment of the system's use for
robotic driving, manipulation, and surveillance operations was conducted. The 3D vision system was integrated onto a
TALON IV Robot and Operator Control Unit (OCU) such that stock components could be electrically disconnected and
removed, and upgrade components coupled directly to the mounting and electrical connections. A replacement display,
replacement mast camera with zoom, auto-focus, and variable convergence, and a replacement gripper camera with fixed
focus and zoom comprise the upgrade kit. The stereo mast camera allows for improved driving and situational awareness
as well as scene survey. The stereo gripper camera allows for improved manipulation in typical TALON missions.
One of the principal challenges in autonomous navigation for mobile ground robots is collision avoidance, especially in
dynamic environments featuring both moving and non-moving (static) obstacles. Detecting and tracking moving objects
(such as vehicles and pedestrians) presents a particular challenge because all points in the scene are in motion relative to
a moving platform. We present a solution for detecting and robustly tracking moving objects from a moving platform.
We use a novel epipolar Hough transform to identify points in the scene which do not conform to the geometric
constraints of a static scene when viewed from a moving camera. These points can then be analyzed in three different
domains: image space, Hough space and world space, allowing redundant clustering and tracking of moving objects. We
use a particle filter to model uncertainty in the tracking process and a multiple-hypothesis tracker with lifecycle
management to maintain tracks through occlusions and stop-start conditions. The result is a set of detected objects whose
position and estimated trajectory are continuously updated for use by path planning and collision avoidance systems. We
present results from experiments using a mobile test robot with a forward looking stereo camera navigating among
multiple moving objects.
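For intuition, a static-scene feature must satisfy the epipolar constraint x2^T F x1 = 0 between two views from the moving camera; a sketch that flags violators using the standard Sampson distance (the epipolar Hough transform itself and the three-domain clustering are not reproduced here):

import numpy as np

def moving_point_mask(pts1, pts2, F, tol=2.0):
    """Flag feature matches that violate the static-scene epipolar
    constraint x2^T F x1 = 0, via the Sampson distance (pixels)."""
    x1 = np.column_stack([pts1, np.ones(len(pts1))])
    x2 = np.column_stack([pts2, np.ones(len(pts2))])
    Fx1, Ftx2 = F @ x1.T, F.T @ x2.T
    num = np.einsum('ij,ji->i', x2, Fx1) ** 2
    den = Fx1[0]**2 + Fx1[1]**2 + Ftx2[0]**2 + Ftx2[1]**2
    return num / den > tol**2        # True where the point appears to move

F = np.array([[0., 0., 0.], [0., 0., -1.], [0., 1., 0.]])  # pure x-translation
print(moving_point_mask(np.array([[100., 50.]]),
                        np.array([[120., 50.]]), F))       # [False]: static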
We have developed a framework, Cognitive Object Recognition System (CORS), inspired by
current neurocomputational models and psychophysical research in which multiple recognition
algorithms (shape based geometric primitives, 'geons,' and non-geometric feature-based
algorithms) are integrated to provide a comprehensive solution to object recognition and
landmarking. Objects are defined as a combination of geons, corresponding to their simple parts,
and the relations among the parts. However, those objects that are not easily decomposable into
geons, such as bushes and trees, are recognized by CORS using "feature-based" algorithms. The
unique interaction between these algorithms is a novel approach that combines the effectiveness of
both algorithms and takes us closer to a generalized approach to object recognition. CORS allows
recognition of objects through a larger range of poses using geometric primitives and performs
well under heavy occlusion - about 35% of the object surface is sufficient. Furthermore, geon
composition of an object allows image understanding and reasoning even with novel objects. With
reliable landmarking capability, the system improves vision-based robot navigation in GPS-denied
environments. Feasibility of the CORS system was demonstrated with real stereo images captured
from a Pioneer robot. The system can currently identify doors, door handles, staircases, trashcans
and other relevant landmarks in the indoor environment.
Robots and other unmanned systems will play many critical roles in support of a human presence on Mars, including
surveying candidate landing sites, locating ice and mineral resources, establishing power and other infrastructure,
performing construction tasks, and transporting equipment and supplies. Many of these systems will require much more
strength and power than exploration rovers. The presence of humans on Mars will permit proactive maintenance and
repair, and allow teleoperation and operator intervention, supporting multiple dynamic levels of autonomy, so the critical
challenges to the use of unmanned systems will occur before humans arrive on Mars. Nevertheless, installed
communications and navigation infrastructure should be able to support structured and/or repetitive operations (such as
excavation, drilling, or construction) within a "familiar" area with an acceptable level of remote operator intervention.
This paper discusses some of the factors involved in developing and deploying unmanned systems to make humans' time
on Mars safer and more productive, efficient, and enjoyable.
The ability to precisely emplace stand-alone payloads in hostile territory has long been on the wish list of US
warfighters. This type of activity is one of the main functions of special operation forces, often conducted at great
danger. Such risk can be mitigated by transitioning from manual placement of payloads to an automated placement mechanism, the Automatic Payload Deployment System (APDS). Based on the Automatically Deployed
Communication Relays (ADCR) system, which provides non-line-of-sight operation for unmanned ground vehicles by
automatically dropping radio relays when needed, the APDS takes this concept a step further and allows for the delivery
of a mixed variety of payloads. For example, payloads equipped with a camera and gas sensor in addition to a radio
repeater can be deployed in support of rescue operations for trapped miners. Battlefield applications may include
delivering food, ammunition, and medical supplies to the warfighter. Covert operations may require the unmanned
emplacement of a network of sensors for human-presence detection, before undertaking the mission. The APDS is well
suited for these tasks. Demonstrations have been conducted using an iRobot PackBot EOD in delivering a variety of
payloads, for which the performance and results will be discussed in this paper.
Autonomous small UGVs have the potential to greatly increase force multiplication capabilities for infantry units. In
order for these UGVs to be useful on the battlefield, they must be able to operate under all-weather conditions. For the
Daredevil Project, we have explored the use of ultra-wideband (UWB) radar, LIDAR, and stereo vision for all-weather
navigation capabilities. UWB radar provides the capability to see through rain, snow, smoke, and fog. LIDAR and
stereo vision provide greater accuracy and resolution in clear weather but have difficulty with precipitation and
obscurants. We investigate the ways in which the sensor data from UWB radar, LIDAR, and stereo vision can be
combined to provide improved performance over the use of a single sensor modality. Our research includes both
traditional sensor fusion, where data from multiple sensors is combined in a single representation, and behavior-based
sensor fusion, where the data from one sensor is used to activate and deactivate behaviors using other sensor modalities.
We use traditional sensor fusion to combine LIDAR and stereo vision for improved obstacle avoidance in clear air, and
we use behavior-based sensor fusion to select between radar-based and LIDAR/vision-based obstacle avoidance based
on current environmental conditions.
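A toy sketch of the behavior-based arbitration, with illustrative switching conditions (not the Daredevil Project's actual logic):

def select_avoidance_behavior(visibility_m, precipitation):
    """One sensor's assessment of the environment activates or
    deactivates obstacle-avoidance behaviors built on other modalities."""
    if precipitation or visibility_m < 50.0:
        return "uwb_radar_avoidance"       # sees through rain, fog, smoke
    return "lidar_stereo_avoidance"        # higher resolution in clear air

print(select_avoidance_behavior(visibility_m=20.0, precipitation=False))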
Man portable robots have been fielded extensively on the battlefield to enhance mission effectiveness of soldiers in
dangerous conditions. The robots that have been deployed to date have been teleoperated. The development of assistive
behaviors for these robots has the potential to alleviate the cognitive load placed on the robot operator. While full
autonomy is the eventual goal, a range of assistive capabilities such as obstacle detection, obstacle avoidance, and waypoint navigation can be fielded sooner in a stand-alone fashion. These capabilities increase the level of autonomy on the
robots so that the workload on the soldier can be reduced.
The focus of this paper is on the design and execution of a series of scientifically rigorous experiments to quantitatively
assess operator performance when operating a robot equipped with some of these assistive behaviors. The experiments
helped to determine a baseline for teleoperation and to evaluate the benefit of Obstacle Detection and Obstacle
Avoidance (OD/OA) vs. teleoperation and OD/OA with Open Space Planning (OSP) vs. teleoperation. The results of
these experiments are presented and analyzed in the paper.
Tunnels are a challenging environment for radio communications. In this paper we consider the use of autonomous
mobile radio nodes (AMRs) to provide wireless tethering between a base station and a leader in a tunnel exploration
scenario. Using a realistic, experimentally-derived underground radio signal propagation model and a tethering
algorithm for AMR motion control based on a consensus variable protocol, we present experimental results involving a
tele-operated leader with one or two followers. Using radio signal strength measurements, the followers autonomously
space themselves so as to achieve equal radio distance between each entity in the chain from the base to the leader.
Results show the feasibility of our ideas.
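A minimal sketch of the consensus-variable spacing idea, assuming each follower measures received signal strength (dB) toward its two chain neighbors (the gain and sign convention are illustrative, not the paper's protocol):

def follower_step(rssi_toward_base, rssi_toward_leader, gain=0.05):
    """Displacement command (m) along the tunnel that nudges this AMR
    toward its weaker link, equalizing radio distance along the chain."""
    return gain * (rssi_toward_base - rssi_toward_leader)

# Hypothetical readings: base side at -60 dB, leader side at -75 dB,
# so the follower moves 0.75 m toward the leader.
print(follower_step(-60.0, -75.0))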
This paper introduces a concept for integrating manned and Unmanned Aircraft Systems (UASs) into a highly functional team through the design and implementation of 3-D distributed formation/flight control algorithms with the
goal to act as wingmen for a manned aircraft. This method is designed to minimize user input for team control,
dynamically modify formations as required, utilize standard operating formations to reduce pilot resistance to
integration, and support splinter groups for surveillance and/or as safeguards between potential threats and manned
vehicles. The proposed work coordinates UAS members by utilizing artificial potential functions whose values are
based on the state of the unmanned and manned assets including the desired formation, obstacles, task assignments, and
perceived intentions. The overall unmanned team geometry is controlled using weighted potential fields. Individual
UAS utilize fuzzy logic controllers for stability and navigation as well as a fuzzy reasoning engine for flight path
intention prediction. Approaches are demonstrated in simulation using the commercial simulator X-Plane and
controllers designed in Matlab/Simulink. Experiments include trail and right echelon formations as well as splinter group
surveillance.
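A hedged sketch of the potential-field computation for one team member, using the classic attractive/repulsive form with illustrative gains (the paper's weighted fields over formation, obstacles, tasking, and predicted intent are richer than this):

import numpy as np

def formation_force(pos, slot, obstacles, k_att=1.0, k_rep=5.0, r0=20.0):
    """Attraction toward the assigned formation slot plus short-range
    repulsion from obstacles (classic Khatib potential form)."""
    force = k_att * (slot - pos)                  # pull toward the slot
    for obs in obstacles:
        d_vec = pos - obs
        d = np.linalg.norm(d_vec)
        if 1e-6 < d < r0:                         # inside influence radius
            force += k_rep * (1.0 / d - 1.0 / r0) * d_vec / d**3
    return force

print(formation_force(np.zeros(3), np.array([50.0, 0.0, 10.0]),
                      [np.array([5.0, 1.0, 0.0])]))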
For unmanned systems, it is desirable to have some sort of fault tolerant ability in order to accomplish the mission.
Therefore, in this paper, the fault tolerant control of a formation of nonholonomic mobile robots in the presence of unknown faults is undertaken. Initially, a kinematic/torque leader-follower formation control law is developed for the
robots under the assumption of normal operation, and the stability of the formation is verified using Lyapunov theory.
Subsequently, the control law for the formation is modified by incorporating an additional term, and this new control law
compensates for the effects of the faults. Moreover, the faults could be incipient or abrupt in nature. The additional term
used in the modified control law is a function of the unknown fault dynamics which are recovered using the online
learning capabilities of online approximators. Additionally, asymptotic convergence of the FDA scheme and the
formation errors in the presence of faults is shown using Lyapunov theory. Finally, numerical results are provided to
verify the theoretical conjectures.
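In outline, such a modification commonly takes the following adaptive form (generic notation and a standard update law, offered as an assumption rather than the paper's exact equations):

\[ u = u_{n} - \hat{f}(x;\hat{\theta}), \qquad \hat{f}(x;\hat{\theta}) = \hat{\theta}^{\mathsf{T}}\varphi(x), \qquad \dot{\hat{\theta}} = \Gamma\,\varphi(x)\,e^{\mathsf{T}} - \kappa\,\Gamma\,\hat{\theta}, \]

where e is the formation tracking error, \varphi(x) the online approximator's basis, \Gamma > 0 an adaptation gain, and \kappa > 0 a damping term for robustness.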
Presented here is a motion planning scheme for enabling a quadrotor to serve as an autonomous communications
relay in indoor/GPS-denied environments. Using antenna selection diversity, the quadrotor is able to optimize its
location in the communication chain so as to maximize the link throughput. Measurements of the communications
field drive a gradient descent algorithm that moves the quadrotor to an optimal location while avoiding obstacles,
all without the use of positioning data.
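A sketch of the gradient-following step, assuming the link quality can be sampled at small probe displacements (standing in for antenna selection diversity); all constants are illustrative, and obstacle avoidance is assumed to run as a separate layer:

def gradient_step(measure, pos, step=0.3, probe=0.15):
    """Estimate the local link-quality gradient by finite differences
    and move one step uphill, without any positioning data beyond
    relative displacements."""
    gx = measure(pos[0] + probe, pos[1]) - measure(pos[0] - probe, pos[1])
    gy = measure(pos[0], pos[1] + probe) - measure(pos[0], pos[1] - probe)
    norm = max((gx * gx + gy * gy) ** 0.5, 1e-9)
    return pos[0] + step * gx / norm, pos[1] + step * gy / norm

# Hypothetical throughput field peaking at (3, 4):
field = lambda x, y: -((x - 3.0) ** 2 + (y - 4.0) ** 2)
print(gradient_step(field, (0.0, 0.0)))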
This paper first presents an overall view of dynamical decision-making in teams, both cooperative and competitive. Strategies for team decision problems, including optimal control, zero-sum two-player games (H-infinity control), and so on, are normally solved off-line by solving associated matrix equations such as the Riccati equation.
However, using that approach, players cannot change their objectives online in real time without calling for a
completely new off-line solution for the new strategies. Therefore, in this paper we give a method for learning
optimal team strategies online in real time as team dynamical play unfolds. In the linear quadratic regulator case, for
instance, the method learns the Riccati equation solution online without ever solving the Riccati equation. This
allows for truly dynamical team decisions where objective functions can change in real time and the system dynamics can be time-varying.
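For the LQR case, one widely used form of such online learning (sketched here in generic notation as an assumption, not necessarily the paper's exact algorithm) is policy iteration on measured trajectory data:

\[ x(t)^{\mathsf{T}} P_i\, x(t) = \int_{t}^{t+T} x^{\mathsf{T}}\!\left(Q + K_i^{\mathsf{T}} R K_i\right) x \, d\tau + x(t+T)^{\mathsf{T}} P_i\, x(t+T), \qquad K_{i+1} = R^{-1} B^{\mathsf{T}} P_i, \]

where each P_i is identified from trajectory data by least squares, so the iteration converges toward the Riccati solution without ever forming the Riccati equation, and Q and R may change online.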
Hopping robots provide the possibility of breaking the link between the size of a ground vehicle and the largest obstacle that it can overcome. For more than a decade, DARPA and Sandia National Laboratories have been developing small-scale hopping robot technology, first as part of purely hopping platforms and, more recently, as part of platforms that are capable of both
wheeled and hopping locomotion. In this paper we introduce the Urban Hopper robot and summarize its capabilities. The advantages of hopping for overcoming certain obstacles are discussed. Several configurations of the Urban Hopper are described, as are intelligent
capabilities of the system. Key challenges are discussed.
The All-Terrain Biped (ATB) robot is an unmanned ground vehicle with arms, legs and wheels designed to drive, crawl,
walk and manipulate objects for inspection and explosive ordnance disposal tasks. This paper summarizes on-going
development of the ATB platform. Control technology for semi-autonomous legged mobility and dual-arm dexterity is
described as well as preliminary simulation and hardware test results. Performance goals include driving on flat terrain,
crawling on steep terrain, walking on stairs, opening doors and grasping objects. Anticipated benefits of the adaptive
mobility and dexterity of the ATB platform include increased robot agility and autonomy for EOD operations, reduced
operator workload and reduced operator training and skill requirements.
Many infantry operations in urban environments, such as building clearing, are extremely dangerous and difficult and
often result in high casualty rates. Despite the fast pace of technological progress in many other areas, the tactics and
technology deployed for many of these dangerous urban operations have not changed much in the last 50 years. While
robots have been extremely useful for improvised explosive device (IED) detonation, under-vehicle inspection,
surveillance, and cave exploration, there is still no fieldable robot that can operate effectively in cluttered streets and
inside buildings.
Developing a fieldable robot that can maneuver in complex urban environments is challenging due to narrow corridors,
stairs, rubble, doors and cluttered doorways, and other obstacles. Typical wheeled and tracked robots have trouble
getting through most of these obstacles. A bipedal humanoid is ideally shaped for many of these obstacles because its
legs are long and skinny. It therefore has the potential to step over large barriers, gaps, rocks, and steps, yet squeeze through narrow passageways and doorways. By being able to walk with one foot directly in front of the
other, humanoids also have the potential to walk over narrow "balance beam" style objects and can cross a narrow row
of stepping stones.
We describe some recent advances in humanoid robots, particularly recovery from disturbances such as pushes, and walking over rough terrain. Our disturbance recovery algorithms are based on the concept of Capture Points. An N-Step
Capture Point is a point on the ground to which a legged robot can step in order to stop in N steps. The N-Step
Capture Region is the set of all N-Step Capture Points. In order to walk without falling, a legged robot must step
somewhere in the intersection between an N-Step Capture Region and the available footholds on the ground. We present
results of push recovery using Capture Points on our humanoid robot M2V2.
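For intuition, under the common linear inverted pendulum approximation (a standard simplification, not the full M2V2 model), the instantaneous capture point lies at

\[ x_{\mathrm{cp}} = x + \dot{x}\,\sqrt{z_0 / g}, \]

where x and \dot{x} are the center-of-mass position and velocity, z_0 the pendulum height, and g gravity; N-Step Capture Regions generalize this by allowing intermediate steps.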
Here we present a real-time algorithm for on-board SLAM (simultaneous localization and mapping) of a quadrotor
using a laser range finder. Based on successfully implemented techniques for ground robots, we developed
an algorithm that merges a new scan into the global map without any iteration. This introduces some inaccuracy into the global map, which leads to error propagation during the robot's mission. An optimization algorithm that reduces this inaccuracy is therefore essential. Within this optimization, lines with the same orientation that overlap in one of the two coordinates of the 2D plane are merged if their distance is below a certain threshold value.
Because orthogonal SLAM reduces the computing power required for the SLAM calculation, real-time SLAM running on a microcontroller becomes possible. Thanks to its small weight and low electric power consumption, this controller can be mounted on an industrial quadrotor, making autonomous operation in an unknown indoor environment possible.
In this paper we also show the validation of the presented SLAM algorithm: the first step is an offline implementation in Matlab, and the second is the online validation of our algorithm on the industrial AR100B quadrotor from the AirRobot company.
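A toy sketch of the merge rule, assuming axis-aligned wall segments stored as (axis, offset, lo, hi), consistent with the orthogonal-walls assumption (the representation and threshold are illustrative):

def merge_lines(a, b, gap=0.1):
    """Merge two wall segments with the same orientation if they overlap
    in the free coordinate and their lateral separation is under gap (m)."""
    axis_a, off_a, lo_a, hi_a = a
    axis_b, off_b, lo_b, hi_b = b
    overlaps = lo_a <= hi_b and lo_b <= hi_a        # 1D interval overlap
    if axis_a == axis_b and overlaps and abs(off_a - off_b) < gap:
        return (axis_a, 0.5 * (off_a + off_b),      # average the offset
                min(lo_a, lo_b), max(hi_a, hi_b))
    return None

# Two collinear wall pieces along x at y of about 2.0:
print(merge_lines(('x', 2.00, 0.0, 1.5), ('x', 2.05, 1.2, 3.0)))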
Legged robots have tremendous mobility, but they can also be very inefficient. These inefficiencies can be due to suboptimal control schemes, among other things. If your goal is to get from point A to point B in the least amount of time, your control scheme will differ from the one you would use to get there with the least amount of energy. In this paper, we seek a balance between these extremes by looking at both
efficiency and speed. We model a walking robot as a rimless wheel, and, using Pontryagin's Maximum
Principle (PMP), we find an "on-off" control for the model, and describe the switching curve between
these control extremes.
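In outline (generic notation offered as an assumption, not the paper's exact statement): for control-affine dynamics \dot{x} = f(x) + g(x)u with u \in [0, u_{\max}] and a cost that is linear in u, PMP minimizes the Hamiltonian pointwise, giving a bang-bang ("on-off") law:

\[ H = \ell(x,u) + \lambda^{\mathsf{T}}\left(f(x) + g(x)\,u\right), \qquad u^{*}(t) = \begin{cases} u_{\max}, & \sigma(t) < 0, \\ 0, & \sigma(t) > 0, \end{cases} \qquad \sigma = \partial H/\partial u, \]

and the switching curve described in the paper is the locus in state space where the switching function \sigma changes sign.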
The R-Gator is an unmanned ground vehicle built on the John Deere 6x4 M-Gator utility vehicle chassis. The vehicle is
capable of operating in urban and off-road terrain and has a large payload capacity to carry supplies, the wounded, or a marsupial
robot. The R-Gator has 6 modes of operation: manual driving, teleoperation, waypoint, direction drive, playback and
silent sentry. In direction drive the user specifies a direction for the robot. It will continue in that direction, avoiding
obstacles, until given a new direction. Playback allows previously recorded paths, from any other mode including
manual, to be played back and repeated. Silent sentry allows the engine to be turned off remotely while cameras,
computers and comms remain powered by batteries. In this mode the vehicle stays quiet and stationary, collecting
valuable surveillance information. The user interface consists of a wearable computer, monocle and standard video game
controller. All functions of the R-Gator can be controlled by the handheld game controller, using at most 2 button
presses. This easy-to-use interface allows even untrained users to control the vehicle. This paper details the systems
developed for the R-Gator, focusing on the novel user interface and the obstacle detection system, which supports
safeguarded teleoperation as well as full autonomous operation in off-road terrain. The design for a new 4-wheel,
independent suspension chassis version of the R-Gator is also presented.
Until recently, the Army Future Combat System (FCS) was the future of Army ground robotics hallmarked by system
of systems interoperability for manned and unmanned platforms. New missions, threats, and realities have caused the
Army to restructure the Army Future Combat System, but still require unmanned systems interoperability without the
FCS system-of-systems interoperability architecture. The result is that the Army materiel developer has no overarching
unmanned ground vehicle (UGV) interoperability standards in place equal to the Army unmanned aircraft system
(UAS) community. This paper will offer a "Life After the FCS" vision for an Army family of common ground robotics
and payload standards with proposed IEEE, STANAG, SAE, and other standards to potentially achieve common
ground robotics interoperability to support the Army and Army Maneuver Support Center of Excellence (MSCoE)
Chemical, Engineer, and Military Police mission needs.
The Multi-Agent Tactical Sentry Unmanned Ground Vehicle, developed at Defence R&D Canada - Suffield, has
been in service with the Canadian Forces for five years. This tele-operated wheeled vehicle provides a capability
for point detection of chemical, biological, radiological, and nuclear agents. Based on user experience, it is
obvious that a manipulator capability would greatly enhance the vehicle's utility and increase its mobility in
urban terrain. This paper details the technical components of the manipulator development and describes a number of trials
undertaken to perform tasks with a manipulator arm such as picking up objects, opening vehicle and building
doors, recording video, and creating 3D models of the environment. The lessons learned from these trials will
guide further development of the technology.
The Robotics Rodeo held from 31 August to 3 September 2009 at Fort Hood, Texas, had three stated goals: educate key
decision makers and align the robotics industry; educate Soldiers and developers; and perform a live market survey of
the current state of technologies to encourage the development of robotic systems to support operational needs.
Both events that comprised the Robotics Rodeo, the Extravaganza and the robotic technology observation, demonstration
and discussion (RTOD2), addressed these stated goals. The Extravaganza was designed to foster interaction between the
vendors and the visitors, who included the media, Soldiers, others in the robotics industry, and key decision makers. The
RTOD2 allowed the vendors a more private and focused interaction with the subject-matter-expert teams; this was the
forum for the vendors to demonstrate robotic systems that supported the III Corps operational needs statements focused
on route clearance, convoy operations, persistent stare, and the robotic wingman.
While the goals of the Rodeo were achieved, the underlying success of the event is the development of a new business
model focused on collapsing the current model to get technologies into the hands of our warfighters more quickly. This
new model combines the real-time data collection from the Rodeo, the Warfighter Needs from TRADOC, the emerging
requirements from our current engagements, and assistance from industry partners to develop a future Army strategy for
the rapid fielding of unmanned systems technologies.
Dismounted soldiers are clearly at the centre of modern asymmetric conflicts and unmanned systems of the future will
play important roles in their support. Moreover, the nature of modern asymmetric conflicts requires dismounted soldiers
to operate in urban environments with challenges of communication and limited situational awareness. To improve the
situational awareness of dismounted soldiers in complex urban environments, Defence R&D Canada - Suffield (DRDC
Suffield) envisions Unmanned Air Vehicle (UAV) rotorcraft and Unmanned Ground Vehicles (UGV) cooperating in the
battlespace. The capabilities provided to the UAV rotorcraft will include high-speed maneuvers through urban terrain,
over-the-horizon and loss-of-communications operations, and/or low-altitude over-watch of dismounted units. This
information is shared with both the dismounted soldiers and the UGV. The man-sized, man-mobile UGV operates in close
support of dismounted soldiers to provide a payload-carrying capacity. Possible payloads include chemical, biological,
radiological and nuclear (CBRN) detection, intelligence, surveillance and reconnaissance (ISR), weapons, and supplies.
These unmanned systems are intended to increase situational awareness in urban environments and can be used to call
upon nearby forces to react swiftly, providing acquired information to concentrate impact where required.
Multi-robotics places a potentially large number of independent robotic agents in a given situation where they may
interact cooperatively toward a common task. However, when each of these robotic agents is essentially identical, the
overall scope of their mission is limited. By fractionating their capabilities with varying degrees of specialization in a
hierarchical fashion, the mission capability can be greatly expanded. Test prototype examples are discussed of a large
carrier robotic vehicle containing several smaller specialized robotic vehicles, all teleoperated, which can interact
cooperatively, both sequentially and in parallel toward a common task.
In multi-agent scenarios, there can be a disparity in the quality of position estimation amongst the various agents. Here,
we consider the case of two agents - a leader and a follower - following the same path, in which the follower has a significantly
better estimate of position and heading. This may be applicable to many situations, such as a robotic "mule"
following a soldier. Another example is that of a convoy, in which only one vehicle (not necessarily the leading one) is
instrumented with precision navigation instruments while all other vehicles use lower-precision instruments. We present
an algorithm, called Follower-derived Heading Correction (FDHC), which substantially improves estimates of the
leader's heading and, subsequently, position. Specifically, FDHC produces a very accurate estimate of the heading errors
caused by slowly changing error sources (e.g., drift in gyros) in the leader's navigation system and corrects those
errors.
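The abstract does not give FDHC's internals, but the core idea, a follower with better navigation estimating the
leader's slowly varying heading error, can be sketched as a simple low-pass bias filter. All names and the filter
form below are assumptions for illustration, not the authors' actual algorithm:

    import math

    def wrap(a):
        """Wrap an angle to (-pi, pi]."""
        return math.atan2(math.sin(a), math.cos(a))

    class HeadingBiasEstimator:
        """When the follower reaches a point the leader logged earlier, the
        difference between the leader's recorded heading and the follower's
        (more accurate) heading there is treated as the leader's bias.
        Gyro drift changes slowly, so a first-order filter suffices."""
        def __init__(self, alpha=0.05):
            self.alpha = alpha  # small gain -> smooth tracking of slow drift
            self.bias = 0.0

        def update(self, leader_heading, follower_heading):
            innovation = wrap(leader_heading - follower_heading)
            self.bias = wrap(self.bias + self.alpha * wrap(innovation - self.bias))
            return self.bias

        def correct(self, leader_heading):
            """Corrected leader heading, usable to re-integrate its position."""
            return wrap(leader_heading - self.bias)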
The Robotic Mounted Detection System (RMDS) is a government program to enable robotic control of a Husky route
clearance vehicle with a mine detection sensor payload. The goal is for the operator to control the Husky and mine
detection sensor from another vehicle. This program will provide the user with standard tele-operation control of the
vehicle as well as semi-autonomous modes including cruise control, precision waypoint navigation with operator error
correction and a visual mode allowing the operator to enter waypoints in the current video feed. The use of autonomy
will be tailored to give the operator maximum control of the robotic vehicle's path while minimizing the effort required
to maintain the desired route. Autonomous alterations of the path would conflict with the goal of route clearance, so
waypoint navigation will allow the operator to supply offsets to counteract location errors. While following a waypoint
path, the Husky will be capable of controlling its speed to maintain an operator specified distance from the control
vehicle. Obstacle avoidance will be limited to protecting the mine detection sensor, leaving any decision to leave the
path up to the operator. Video will be the primary navigational sensor feed to the operator, who will use an augmented
steering wheel controller and computer display to control the Husky. A LADAR system will be used to detect obstacles
that could damage the mine sensor and to maintain the optimal sensor orientation while the vehicle is moving. Practical
issues and lessons learned during integration will be presented.
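Two of the behaviors described above, operator-supplied offsets that counteract localization error and speed control
that holds a standoff distance from the control vehicle, reduce to a few lines. The sketch below is illustrative only;
the gains, limits, and names are assumptions:

    import math

    def offset_waypoint(wp, path_heading, lateral_offset):
        """Shift a waypoint perpendicular to the path by an operator-supplied
        offset (m), so the operator corrects location error without the
        autonomy re-planning the route to be cleared."""
        x, y = wp
        return (x - lateral_offset * math.sin(path_heading),
                y + lateral_offset * math.cos(path_heading))

    def speed_command(standoff_desired, standoff_measured,
                      v_nominal, k_p=0.5, v_max=5.0):
        """Proportional cruise control: hold an operator-specified distance
        from the control vehicle while following the waypoint path."""
        error = standoff_measured - standoff_desired
        return max(0.0, min(v_max, v_nominal + k_p * error))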
This paper presents a pose estimation method based on a 3D camera - the SwissRanger SR4000. The proposed method
estimates the camera's ego-motion by using the intensity and range data produced by the camera. It detects SIFT (Scale-
Invariant Feature Transform) features in one intensity image and matches them to those in the next intensity image. The
resulting 3D data point pairs are used to compute the least-squares rotation and translation, from which the
attitude and position changes between the two image frames are determined. The method uses feature descriptors to
perform feature matching. It works well with large image motion between two frames, without the need for a spatial
correlation search. Due to the SR4000's consistent accuracy in depth measurement, the proposed method may achieve a
better pose estimation accuracy than a stereovision-based approach. Another advantage of the proposed method is that
the range data of the SR4000 is complete and therefore can be used for obstacle avoidance/negotiation. This makes it
possible to navigate a mobile robot by using a single perception sensor. In this paper, we will validate the idea of the
pose estimation method and characterize the method's pose estimation performance.
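The least-squares rotation and translation from matched 3D point pairs is a standard absolute-orientation
(Kabsch/Horn) computation. A minimal sketch, assuming the SIFT matches have already been converted to 3D pairs and
outliers rejected (e.g., with RANSAC):

    import numpy as np

    def rigid_transform(P, Q):
        """Least-squares R, t with Q ~ R @ P + t, from matched 3D points.
        P, Q: (N, 3) arrays of corresponding points from two frames."""
        cp, cq = P.mean(axis=0), Q.mean(axis=0)
        H = (P - cp).T @ (Q - cq)                 # 3x3 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # proper rotation, det = +1
        t = cq - R @ cp
        return R, t

The rotation gives the attitude change between frames and the translation the position change, which are then chained
over successive frames to track the camera's pose.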
Including current Micro Electro Mechanical Systems (MEMS) in small arms ammunition requires reducing the radial and
lateral accelerations imposed by current designs. Research at Louisiana Tech's Institute for Micromanufacturing
into equipping small arms with MEMS technology has led to the development of a new type of small arms system. This
ammunition is able to accelerate outside of its barrel, thereby decreasing the required acceleration for a specified
maximum velocity. Additionally, the design of this ammunition eliminates the lateral accelerations typically required to
stabilize current small arms ammunition, and permits the inclusion of non-metallic barrels and other components. A
review of the current design and performance standards of this ammunition is presented, along with the current MEMS
technology being tested for inclusion into this ammunition. A review of new armament systems, capabilities, and
applications as a result of these advances is also presented.
This paper describes the initial results of an investigation into building unmanned aerial vehicles (UAVs) with
pressurized structures-based (PSB) technologies. The UAV is constructed in such a way that a considerable percentage
of its weight is supported by or composed of inflatable structures containing air or helium. PSB technologies can be
employed in any number of UAV designs. The goals of this research are to ascertain the feasibility of UAV construction
using PSB technology and to find methods and designs employing PSB technology that increase vehicle performance for
missions of interest to the military.
To make robotic grasping accessible to all roboticists, Energid Technologies is developing a Graphical User Interface
(GUI) tool and algorithms embodied in a reusable software toolkit to quickly and easily create grasps. The method is
generic and works with all types of robotic hands, manipulators, and mobile platforms. Vision, position control, force
control, and collision avoidance algorithms are integrated naturally into the process, and successful grasp parameters are
stored in a database for later real-time application. This article describes how the grasps are created in the Energid
system using convenient human interfaces, novel ways to constrain the robotic hand, and real-time simulation of the
grasping process. Special emphasis is given to the integration of force control with the grasp scripting process. The force
control system accommodates a variety of established algorithms and allows new user-defined algorithms, which can
apply to many types of force/torque sensors. Special emphasis is also given to vision-based tracking, with the vision
system providing object identification and automatic selection of an appropriate grasp from the database. The vision
system also provides 3D tracking to guide the grasp process. Simulation and hardware study results are presented based
on the Schunk SDH hand and LWA arm.
This paper introduces a new approach for precision indoor tracking of tele-operated robots, called "Heuristics-Enhanced
Dead-reckoning" (HEDR). HEDR does not rely on GPS, or external references; it uses odometry and a low-cost MEMS-based
gyro. Our method corrects heading errors incurred by the high drift rate of the gyro by exploiting the structured
nature of most indoor environments, but without having to directly measure features of the environment. The only
operator feedback offered by most tele-operated robots is the view from a low-to-the-ground onboard camera. Live video
lets the operator observe the robot's immediate surroundings, but does not establish the orientation or whereabouts of the
robot in its environment. Mentally keeping track of the robot's trajectory is difficult, and operators easily become
disoriented. Our goal is to provide the tele-operator with a map view of the robot's current location and heading, as well
as its previous trajectory, similar to the information provided by an automotive GPS navigation system. This frees tele-operators
to focus on controlling the robot and achieving other mission goals, and provides the precise location of the
robot if it becomes disabled and needs to be recovered.
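The abstract does not spell out the heuristic, but a well-known way to exploit the structured nature of indoor
environments without measuring them is to assume that straight-line driving usually follows one of a building's
orthogonal directions. A sketch under that assumption (the gain and exact form are illustrative, not necessarily the
paper's formulation):

    import math

    def hedr_correct(heading, is_driving_straight, gain=0.02):
        """While odometry says the robot is driving straight, nudge the
        gyro-integrated heading toward the nearest multiple of 90 degrees,
        since most indoor corridors and walls are rectilinear. The small
        gain means curved or oblique driving does little harm."""
        if not is_driving_straight:
            return heading
        quarter = math.pi / 2
        nearest = round(heading / quarter) * quarter
        error = math.atan2(math.sin(nearest - heading),
                           math.cos(nearest - heading))
        return heading + gain * error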
This paper addresses the issue of generating a panoramic view and a panoramic depth map using only a single camera. The
proposed approach first estimates the egomotion of the camera. Based on this information, a particle filter approximates
the 3D structure of the scene; 3D scene points are thus modeled probabilistically. These points are accumulated in a
cylindrical coordinate system. The probabilistic representation of 3D points is used to handle the problem of visualizing
occluding and occluded scene points in a noisy environment, yielding a stable data visualization. This approach can be
easily extended to calibrated multi-camera applications (even with non-overlapping fields of view).
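The cylindrical accumulation step can be pictured as follows: each probabilistic 3D point is converted to
(azimuth, height, radius), where azimuth and height index a panorama pixel and radius is its depth, and points are
fused by their particle weights. A minimal sketch with assumed names:

    import math

    def to_cylindrical(point, center=(0.0, 0.0, 0.0)):
        """Map a reconstructed 3D point into the cylindrical panorama frame."""
        x, y, z = (p - c for p, c in zip(point, center))
        azimuth = math.atan2(y, x)   # -pi..pi -> panorama column
        radius = math.hypot(x, y)    # -> depth value for that pixel
        return azimuth, z, radius

    def accumulate(depth, weight, row, col, radius, w):
        """Fuse one weighted point into the running per-pixel depth estimate;
        confident (high-weight) points dominate, which stabilizes the
        rendering of occluding vs. occluded points under noise."""
        total = weight[row][col] + w
        depth[row][col] = (depth[row][col] * weight[row][col] + radius * w) / total
        weight[row][col] = total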
Experimental and analytical results from uncalibrated stereo imagery are presented for path planning and navigation. An
Army Research, Development and Engineering Command micro-size UAV was outfitted with two commercial cameras
and flown over varied landscapes. Polaris Sensor Technologies processed the data post-flight with an image
correspondence algorithm of its own design. Stereo disparity (depth) was computed despite a quick assembly, image
blur, intensity saturation, noise, and barrel distortion; no camera calibration occurred. Disparity maps were computed at
a processing rate of approximately 5 seconds per frame to improve perception. Disparity edges (treeline to ground, voids,
and plateaus) were successfully observed and confirmed to be properly identified. Despite the success in localizing these
disparity edges, sensitivity to saturated pixels, lens distortion, and defocus was strong enough to overwhelm more subtle
features, such as the contours of the trees, which this algorithm should otherwise be able to extract. These factors are
being addressed. The stereo data is displayed on a flat panel 3D display well suited for a human machine interface in
field applications. Future work will entail extraction of intelligence from acquired data and the overlay of such data on
the 3D image as displayed.
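Polaris's correspondence algorithm is proprietary, but the kind of computation involved, finding for each pixel the
horizontal shift that best matches the other image, can be illustrated with naive sum-of-absolute-differences block
matching (far slower and less robust than a production algorithm):

    import numpy as np

    def disparity_map(left, right, max_disp=64, block=9):
        """Naive SAD block matching over roughly row-aligned stereo pairs.
        left, right: 2D grayscale arrays of equal shape."""
        h, w = left.shape
        half = block // 2
        disp = np.zeros((h, w), dtype=np.float32)
        for y in range(half, h - half):
            for x in range(half + max_disp, w - half):
                patch = left[y-half:y+half+1, x-half:x+half+1].astype(np.int32)
                costs = [np.abs(patch - right[y-half:y+half+1,
                                              x-d-half:x-d+half+1].astype(np.int32)).sum()
                         for d in range(max_disp)]
                disp[y, x] = float(np.argmin(costs))  # best horizontal shift
        return disp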
This paper considers the exploitation of energy harvesting technologies for teams of Autonomous Vehicles (AVs).
Traditionally, the optimisation of information-gathering tasks, such as searching for and tracking new objects, and
platform-level power management are integrated only at the mission-management level. In order to truly exploit new
energy harvesting technologies which are emerging in both the commercial and military domains (for example the
'EATR' robot and next-generation solar panels), the sensor management and power management processes must be
directly coupled. This paper presents a novel non-myopic sensor management framework which addresses this issue
through the use of a predictive platform energy model. Energy harvesting opportunities are modelled using a dynamic
spatial-temporal energy map and sensor and platform actions are optimised according to global team utility. The
framework allows the assessment of a variety of different energy harvesting technologies and perceptive tasks. In this
paper, two representative scenarios are used to parameterise the model with specific efficiency and energy abundance
figures. Simulation results indicate that the integration of intelligent power management with traditional sensor
management processes can significantly increase operational endurance and, in some cases, simultaneously improve
surveillance or tracking performance. Furthermore, the framework is used to assess the potential impact of energy
harvesting technologies at various efficiency levels. This provides important insight into the potential benefits that
intelligent power management can offer in relation to improving system performance and reducing the dependency on
fossil fuels and logistical support.
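As a toy version of coupling sensor and power management (a greedy one-step stand-in for the paper's non-myopic
optimiser; every name and attribute below is an assumption), candidate actions can be scored jointly on information
utility and the energy state predicted from a spatio-temporal harvesting map:

    def plan_step(platform, actions, energy_map, horizon=10, trade_off=0.1):
        """Choose the action maximizing information gain plus a bonus for
        the energy left after 'horizon' steps, predicted from a map of
        harvesting opportunities indexed by cell and time."""
        def score(action):
            energy = platform.energy
            for t, cell in enumerate(action.trajectory[:horizon]):
                energy += energy_map.harvest(cell, t) - action.power_draw
                if energy <= 0.0:
                    return float("-inf")  # would strand the platform
            return action.information_gain + trade_off * energy
        return max(actions, key=score)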
There has been increasing interest during the last several years in the development of unmanned vehicles. A large
number of such vehicles are soon going to play a major role in defense and security in a battlefield environment. The
objective of the present paper is to ascertain the overall reliability of a large number of unmanned vehicles in the
battlefield. The problem is broken into two parts: collaboration and coordination of the unmanned vehicle network.
Collaboration is the communication between a set of unmanned vehicles that are likely to move as a group.
Coordination is the movement of a group of unmanned vehicles from a source node to a destination node, taking into
account the obstacles and difficulties along the path. This paper utilizes existing, well-known techniques from the
literature for finding node and terminal reliabilities, which can in turn be used to obtain the system reliability of
the unmanned vehicle network. Fuzzy rules based on past experience are suggested for the implementation. A simulation
of a ground vehicle network covering node, branch, and terminal reliability is given. It is hoped that the technique
proposed here will prove useful in developing future approaches for ascertaining the overall
reliability of unmanned ground networks.
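Terminal reliability of such a network is commonly estimated by Monte Carlo simulation: sample which nodes and links
survive, then test whether the source still reaches the destination. A minimal sketch (the data structures and names
are illustrative):

    import random

    def terminal_reliability(nodes, edges, src, dst, trials=100_000):
        """nodes: {id: p_up}; edges: {(a, b): p_up}. Returns the estimated
        probability that src can still communicate with dst."""
        hits = 0
        for _ in range(trials):
            up = {n for n, p in nodes.items() if random.random() < p}
            if src not in up or dst not in up:
                continue
            links = [(a, b) for (a, b), p in edges.items()
                     if a in up and b in up and random.random() < p]
            seen, frontier = {src}, [src]
            while frontier:                      # search over surviving links
                cur = frontier.pop()
                for a, b in links:
                    nxt = b if a == cur else a if b == cur else None
                    if nxt is not None and nxt not in seen:
                        seen.add(nxt)
                        frontier.append(nxt)
            hits += dst in seen
        return hits / trials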
A multi-channel, agile, computationally enhanced camera based on the PANOPTES architecture is
presented. Details of camera operational concepts are outlined. Preliminary image acquisition results
and an example of super-resolution enhancement of captured data are given.
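Super-resolution from multiple sub-pixel-shifted captures can be illustrated with classic shift-and-add (a
simplification; the PANOPTES computational enhancement is more sophisticated, and the names here are assumptions):

    import numpy as np

    def shift_and_add(frames, shifts, scale=2):
        """frames: list of HxW arrays; shifts: per-frame sub-pixel (dy, dx).
        Each low-resolution frame fills the high-resolution grid phase
        corresponding to its shift; averaging fills gaps and denoises."""
        h, w = frames[0].shape
        acc = np.zeros((h * scale, w * scale))
        cnt = np.zeros_like(acc)
        for frame, (dy, dx) in zip(frames, shifts):
            oy = int(round(dy * scale)) % scale
            ox = int(round(dx * scale)) % scale
            acc[oy::scale, ox::scale] += frame
            cnt[oy::scale, ox::scale] += 1
        return acc / np.maximum(cnt, 1)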