This paper examines future requirements for military robotic systems and translates those requirements into the technologies that must evolve to support mission operations. The effort capitalizes on the Integrated Concept Team (U.S. Army Training and Doctrine Command) requirements database and applies a technology assessment database to formulate a technology roadmap in support of extended long-range planning.
This paper discusses nearly-autonomous operation of unmanned ground vehicles and mobile unattended ground sensors.
Sensors and computational capability have not reached the point of enabling small robots to navigate autonomously in unconstrained outdoor environments at tactically useful speeds. This problem is greatly reduced, however, if a soldier can lead the robot through terrain that he knows it can traverse. An application of this concept is a small pack-mule robot that follows a foot soldier over outdoor terrain. The soldier would be responsible for avoiding situations beyond the robot's limitations when they are encountered. Having learned the route, the robot could autonomously retrace the path carrying supplies and munitions, greatly reducing the soldier's workload under normal conditions. This paper presents a description of a developmental robot sensor system that uses low-cost commercial 3D vision and inertial sensors to address this application. The robot moves at fast walking speed and requires only short-range perception to accomplish its task. 3D-feature information is recorded on a composite route map that the robot uses to negotiate its local environment and retrace the path taught by the soldier leader.
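The teach-and-retrace loop described above can be sketched as a waypoint recorder plus a simple carrot-following lookup. The `RouteMap` class, capture spacing, and lookahead distance below are hypothetical simplifications for illustration; the actual system records 3D-feature information rather than bare positions.

```python
import math

class RouteMap:
    """Minimal teach-and-retrace sketch: record poses while following the
    leader, then replay them as waypoints during autonomous retrace."""

    def __init__(self, capture_spacing=0.5):
        self.waypoints = []
        self.capture_spacing = capture_spacing  # metres between recorded points

    def teach(self, x, y):
        """Record a pose if it is far enough from the last waypoint."""
        if not self.waypoints or math.dist(self.waypoints[-1], (x, y)) >= self.capture_spacing:
            self.waypoints.append((x, y))

    def next_goal(self, x, y, lookahead=1.0):
        """During retrace, steer toward the first taught waypoint beyond
        the lookahead distance from the current position."""
        for wp in self.waypoints:
            if math.dist(wp, (x, y)) > lookahead:
                return wp
        return self.waypoints[-1] if self.waypoints else None
```

As the robot advances, earlier waypoints fall inside the lookahead circle and the goal naturally moves along the taught route.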
The ability of an Unmanned Ground Vehicle (UGV) to successfully move about in its environment is enabled by the synergistic combination of perception, control, and platform (mobility and utility). Vast effort is being expended on the former technologies, but little demonstrable evidence has been produced to indicate that the latter (mobility/utility) has been considered as an integral part of the UGV's system-level capability, a concept commonly referred to as intrinsic mobility. While past work described the rationale for hybrid locomotion, this paper aims to demonstrate that integrating intrinsic mobility into a UGV system's mobility element, or 'vehicle,' will be a key contributor to the degree of autonomy that the system can achieve. This paper provides compelling evidence that 1) intrinsic mobility improvements provided by hybrid locomotion configurations offer the best generic mobility, 2) strict attention must be placed on the optimization of both utility (inherent vehicle capabilities) and mobility, and 3) the establishment of measures of performance for unmanned vehicle mobility is an unmet and latent need.
A six-wheeled autonomous omni-directional vehicle (ODV) called T3 has been developed at Utah State University's (USU) Center for Self-Organizing and Intelligent Systems (CSOIS). This paper focuses on T3's ability to climb stairs using its unique configuration of 6 independently driven and steered wheels and active suspension height control. The ability of T3, or any similar vehicle, to climb stairs is greatly dependent on the chassis orientation relative to the stairs. Stability criteria are developed for any vehicle dimensions and orientation, on any staircase. All possible yaw and pitch angles on various staircases are evaluated to find vehicle orientations that will allow T3 to climb with the largest margin of stability. Different controller types are investigated for controlling vertical wheel movement with the objective of keeping all wheels in contact with the stairs, providing smooth load transfer between loaded and unloaded wheels, and maintaining optimum chassis pitch and roll angles. A controller is presented that uses feedback from wheel loading, vertical wheel position, and chassis orientation sensors. The implementation of the controller is described, and T3's stair climbing performance is presented and evaluated.
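The orientation search described above can be illustrated with a toy static-stability model: resolve the stair slope into chassis pitch and roll as a function of yaw, and grid-search the yaw that maximizes the smaller of the two tip-over margins. The small-angle model and the dimensions used below are illustrative assumptions, not T3's actual geometry or stability criteria.

```python
import math

def stability_margin(yaw_deg, slope_deg, half_wheelbase, half_track, cg_height):
    """Static tip-over margin (degrees) for a given chassis yaw on a stair
    of the given slope. Toy model: pitch/roll are the slope resolved
    along/across the chassis; the margin is the distance of each from
    its tip-over angle."""
    yaw, slope = math.radians(yaw_deg), math.radians(slope_deg)
    pitch = slope * math.cos(yaw)          # slope felt along the chassis
    roll = slope * math.sin(yaw)           # slope felt across the chassis
    tip_pitch = math.atan(half_wheelbase / cg_height)
    tip_roll = math.atan(half_track / cg_height)
    return math.degrees(min(tip_pitch - pitch, tip_roll - roll))

def best_yaw(slope_deg, half_wheelbase, half_track, cg_height):
    """Grid-search the yaw angle (0..90 degrees) maximizing the margin."""
    return max(range(0, 91), key=lambda y: stability_margin(
        y, slope_deg, half_wheelbase, half_track, cg_height))
```

For a chassis that is longer than it is wide, the best yaw is typically an intermediate angle that balances the pitch margin against the roll margin, which is consistent with the paper's observation that climbing ability depends strongly on orientation.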
This paper describes a method of acquiring behaviorist-based reactive control strategies for an autonomous skid-steer robot operating in an unknown environment. First, a detailed interactive simulation of the robot (including simplified vehicle kinematics, sensors, and a randomly generated environment) is developed with the capability of a human driver supplying all control actions. We then introduce a new modular neural-fuzzy system called the Threshold Fuzzy System (TFS). A TFS has two unique features that distinguish it from traditional fuzzy logic and neural network systems: (1) the rulebase of a TFS, called a Behaviorist Fuzzy Rulebase (BFR), contains only single-antecedent, single-consequence rules, and (2) a highly structured adaptive node network, called a Rule Dominance Network (RDN), is added to the fuzzy logic inference engine. Each rule in the BFR is a direct mapping of an input sensor to a system output. Connection nodes in the RDN occur where rules in the BFR conflict. The nodes of the RDN contain functions that are used to suppress the output of other conflicting rules in the BFR. Supervised training, using error backpropagation, is used to find the optimal parameters of the dominance functions. The usefulness of the TFS approach becomes evident when examining an autonomous vehicle system (AVS). In this paper, a TFS controller is developed for a skid-steer AVS. Several hundred simulations are conducted, and results for the AVS with a traditional fuzzy controller and with a TFS controller are compared.
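The BFR/RDN idea can be sketched with a hypothetical two-rule system: each rule maps one sensor directly to a steering output, and a single dominance node lets the obstacle-avoidance rule suppress the goal-seeking rule in proportion to its own firing strength. The rules, sensors, and fixed dominance weight below are invented for illustration; in the paper the dominance functions are learned by backpropagation.

```python
def rule(sensor_value, gain):
    """Single-antecedent, single-consequence rule: firing strength is a
    saturating function of one sensor; the consequence is a fixed gain."""
    strength = max(0.0, min(1.0, sensor_value))
    return strength, gain

def tfs_output(obstacle_left, goal_left, dominance=0.9):
    """Hypothetical two-rule BFR with one RDN node: the avoidance rule
    suppresses the conflicting goal rule via the dominance weight."""
    s_avoid, a_avoid = rule(obstacle_left, -1.0)   # obstacle left -> steer right
    s_goal, a_goal = rule(goal_left, +1.0)         # goal left -> steer left
    s_goal *= (1.0 - dominance * s_avoid)          # RDN suppression node
    total = s_avoid + s_goal
    return (s_avoid * a_avoid + s_goal * a_goal) / total if total else 0.0
```

With both rules fully fired, the suppressed goal rule barely moves the weighted output, so the robot steers away from the obstacle rather than averaging the two conflicting commands.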
While clearly necessary, geometric information is not sufficient to ensure successful navigation in outdoor environments. Many barriers to navigation cannot be represented in a three-dimensional geometric model alone. Barriers such as soft ground, snow, mud, loose sand, compliant vegetation, debris hidden in vegetation, and annoyances such as small ruts and washboard effects do not appear in geometric representations. The difficulty of offline specification and the changing nature of terrain characteristics require that solutions be capable of learning without prior information and able to adapt as environmental conditions change. This paper discusses the ongoing and proposed work of the Learned Trafficability Models (LTMs) program at the Defence Research Establishment Suffield (DRES) of the Canadian Department of National Defence.
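The requirement to learn without prior information and adapt as conditions change can be sketched with a minimal online model: for each terrain class, track the ratio of achieved to commanded speed with an exponential moving average, seeded by the first observation. This is an illustrative stand-in, not the DRES LTM implementation.

```python
class TrafficabilityModel:
    """Online trafficability model learned purely from experience: each
    terrain class's expected speed ratio is an exponential moving
    average of observed traversals (minimal sketch)."""

    def __init__(self, alpha=0.3):
        self.alpha = alpha   # adaptation rate: higher tracks change faster
        self.speed = {}      # terrain class -> learned speed ratio (0..1)

    def observe(self, terrain, achieved, commanded):
        ratio = achieved / commanded
        old = self.speed.get(terrain, ratio)   # first observation seeds the estimate
        self.speed[terrain] = old + self.alpha * (ratio - old)

    def cost(self, terrain):
        """Traversal cost as inverse expected speed; unknown terrain is
        optimistically assumed fully trafficable."""
        return 1.0 / self.speed.get(terrain, 1.0)
```

The adaptation rate trades responsiveness to changing conditions (e.g. ground softening after rain) against noise rejection.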
The design problems in the development of a spherical mobile robot are discussed in this paper. These problems include dynamics and design of the propulsion mechanism, motion planning and control problems, actuator selection and sensor placement, design and fabrication of the exo-skeleton, and other issues related to power management and computing. Each of these problems is discussed briefly and presented in relation to the spherical mobile robot currently under development at Michigan State University.
Tracked mobile robots in the 20 kg size class are under development for applications in urban reconnaissance. For efficient deployment, it is desirable for teams of robots to be able to automatically execute leader/follower behaviors, with one or more followers tracking the path taken by a leader. The key challenges to enabling such a capability are (1) to develop sensor packages for such small robots that can accurately determine the path of the leader and (2) to develop path-following algorithms for the subsequent robots. To date, we have integrated gyros, accelerometers, compass/inclinometers, odometry, and differential GPS into an effective sensing package for a small urban robot. This paper describes the sensor package, sensor processing algorithm, and path tracking algorithm we have developed for the leader/follower problem in small robots and shows the results of performance characterization of the system. We also document pragmatic lessons learned about design, construction, and electromagnetic interference issues particular to the performance of state sensors on small robots.
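One standard way to fuse the gyro and compass named in the sensing package is a complementary filter: integrate the smooth but drift-prone gyro rate, and continuously pull the estimate toward the absolute but noisy compass with a small gain. This is a generic fusion sketch with illustrative gains and rates, not the paper's actual processing algorithm.

```python
def complementary_filter(gyro_rates, compass_headings, dt, k=0.02):
    """Fuse gyro rate (deg/s) with compass heading (deg): dead-reckon
    with the gyro each step, then correct slow drift toward the compass
    by gain k. Returns the heading estimate at every step."""
    heading = compass_headings[0]              # initialize from the absolute sensor
    estimates = []
    for rate, compass in zip(gyro_rates, compass_headings):
        heading += rate * dt                   # integrate the gyro
        heading += k * (compass - heading)     # bounded correction from compass
        estimates.append(heading)
    return estimates
```

A biased gyro integrated alone drifts without bound; the filter's steady-state error is instead bounded at roughly bias times dt over k, which is the essential benefit of combining the two sensors.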
Small air and ground physical agents (robots) will be ubiquitous on the battlefield of the 21st century, principally to lower the exposure to harm of our ground forces in urban and open terrain scenarios. Teams of small collaborating physical agents conducting tasks such as Reconnaissance, Surveillance, and Target Acquisition (RSTA); intelligence; chemical and biological agent detection; logistics; decoy; sentry; and communications relay will have advanced sensors, communications, and mobility characteristics. It is anticipated that there will be many levels of individual and team collaboration between the soldier and robot, robot to robot, and robot to mother ship. This paper presents applications and infrastructure components that illustrate each of these levels. As an example, consider the application where a team of twenty small robots must rapidly explore and define a building complex. Local interactions and decisions require peer-to-peer collaboration. Global direction and information fusion warrant central team control provided by a mother ship. The mother ship must effectively deliver/retrieve, service, and control these robots as well as fuse the information gathered by these highly mobile robot teams. Any level of collaboration requires robust communications, specifically a mobile ad hoc network. The application of fixed ground sensors and mobile robots is also included in this paper. This paper discusses ongoing research at the U.S. Army Research Laboratory that supports the development of multi-robot collaboration. This research includes battlefield visualization, intelligent software agents, adaptive communications, sensor and information fusion, and multi-modal human-computer interaction.
This paper describes the development of an articulated link for two mobile, articulated vehicles. The goal of the project is to show enhanced mobility of coupled platforms in an environment with barriers and obstacles. To achieve enhanced mobility, actuated linkages are used to couple the vehicles. The actuated linkages are high-strength, hydraulically coupled, two degree-of-freedom devices with a universal coupler mounted on the end. When one platform is mated to a companion platform, the couplers lock, and the pair of vehicles can operate as a single unit. The robotic platform is based on a modified All Terrain Vehicle (ATV), which affords a robust platform with a minimum of cost. The vehicles are controlled using a wireless controller, and can be operated individually in a teleoperative mode, or can be joined and operated as a single unit. The method for link-up is semi-autonomous, meaning that within a certain distance, the two robots can align themselves and move together, locking the couplers on the end of the articulated link. Vehicle position is determined using standard cameras mounted on the platform that use machine vision algorithms to quickly find the volumetric position of the couplers.
This paper is concerned with the difficult and interesting problem of real-time planning and efficient motion control of rigid multi-robot systems that must dynamically avoid obstacles and/or other robots as these move. Most techniques described so far in the literature deal with the simpler problem of generating (generally off-line) a path among stationary obstacles. For practical purposes, however, obstacles are not always static, and most interesting environments are in general not known precisely and/or are time varying. Motivated by the fact that robots capable of maneuvering among moving obstacles will be capable of accomplishing a much larger and more versatile class of tasks, and as a result of our current and past research investigations over the years, we present an innovative approach and tackle the problem from a different angle. The proposed method follows upon our previous work, which employs a notion of complex potential field representation, and involves the use of state variables that preserve the Neumann boundary condition of such complex potential fields while allowing mobile robots to autonomously and dynamically avoid each other as well as other moving obstacles. At the heart of the technique is the exploitation of the powerful and fundamental tool of conformal mapping to derive the path solution for obstacles of arbitrary shapes. The advantage this novel approach offers over traditional formulations is its handling of both static and arbitrarily moving obstacles/multi-robots. To the author's best knowledge, the proposed technique is the only approach in the literature for real-time mobile robot motion control and obstacle avoidance that not only guarantees the reaching of the robot's goal under some conditions but also focuses on the means by which both the obstacles' positions and orientations can elegantly and efficiently be dealt with when these continuously change with time.
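The replanning loop that makes a potential-field method handle moving obstacles can be shown with a far simpler stand-in for the paper's complex-potential/conformal-mapping formulation: recompute an attractive/repulsive gradient step at every tick, so the obstacle is free to move between calls. The potential form and coefficients below are generic textbook choices, not the authors' method.

```python
import math

def potential_step(robot, goal, obstacle, step=0.1, k_rep=0.5):
    """One gradient step on a simple attractive/repulsive potential.
    Called every control tick with the obstacle's current position, so
    moving obstacles are handled by continuous replanning."""
    rx, ry = robot
    ax, ay = goal[0] - rx, goal[1] - ry                 # attraction toward goal
    ox, oy = rx - obstacle[0], ry - obstacle[1]
    d = math.hypot(ox, oy) or 1e-9
    rep = k_rep / d**2                                  # repulsion grows near obstacle
    fx, fy = ax + rep * ox / d, ay + rep * oy / d
    n = math.hypot(fx, fy) or 1e-9
    return rx + step * fx / n, ry + step * fy / n
```

Unlike this sketch, the conformal-mapping formulation in the paper is constructed to avoid the local-minimum traps that simple additive potentials are known to suffer from.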
In this paper, we discuss technical issues regarding supervised tactical mobility modeling and control of multi-agent robotic vehicles operating in an unstructured environment. An integrated Supervisory Mobility Controller (SMC) has been developed that accommodates cooperative deployment of robotic vehicles in different modes of operation. Tactical behaviors of the mobile robots are initially modeled, tested, and validated in a simulation environment under the FMCell robotic software and then implemented in the SMC. The controller is used to develop tactical mobility schemes for a group of six small-scale intelligent autonomous robots equipped with a variety of basic navigational sensors. We describe the functional and modular architecture of the Supervisory Mobility Controller and present some of our strategies for separating supervisory functions according to their complexity, precedence, and intelligence. Furthermore, we discuss intelligent schemes for supervised maneuverability control of our cooperative robotic vehicles. Some examples demonstrating practical applications of the newly developed techniques for military UGV applications are presented.
An important need in multi-robot systems is the development of mechanisms that enable robot teams to autonomously generate cooperative behaviors. This paper first briefly presents the Cooperative Multi-robot Observation of Multiple Moving Targets (CMOMMT) application as a rich domain for studying the issues of multi-robot learning of new behaviors. We discuss the results of our hand-generated algorithm for CMOMMT, and then describe our research in generating multi-robot learning techniques for the CMOMMT application, comparing the results to the hand-generated solutions. Our results show that, while the learning approach performs better than random, naive approaches, much room for improvement remains before it matches the results of the hand-generated approach. The ultimate goal of this research is to develop techniques for multi-robot learning and adaptation that will generalize to cooperative robot applications in many domains, thus facilitating the practical use of multi-robot teams in a wide variety of real-world applications.
This paper describes how decentralized control theory can be used to control multiple cooperative robotic vehicles. Models of cooperation are discussed and related to the input/output reachability and structural observability and controllability of the entire system. Whereas decentralized control research in the past has concentrated on using decentralized controllers to partition complex physically interconnected systems, this work uses decentralized methods to connect otherwise independent non-touching robotic vehicles so that they behave in a stable, coordinated fashion. A vector Liapunov method is used to prove stability of a single example: the controlled motion of multiple vehicles along a line. The results of this stability analysis have been implemented in two applications: a robotic perimeter surveillance system and a self-healing minefield.
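The vehicles-along-a-line example can be illustrated with a decentralized spacing law in which each controller sees only its immediate neighbours: interior vehicles move toward the midpoint of their neighbours, and the trailing vehicle holds a fixed gap. The simple gain and gap values are illustrative; the paper establishes stability with vector Liapunov methods rather than this toy analysis.

```python
def spacing_step(positions, gap=1.0, gain=0.3):
    """One decentralized control step for vehicles on a line. Vehicle 0
    is the stationary leader; interior vehicles servo to the midpoint
    of their two neighbours; the last vehicle holds `gap` behind its
    predecessor. Each update uses only locally observable state."""
    new = list(positions)
    for i in range(1, len(positions) - 1):
        target = (positions[i - 1] + positions[i + 1]) / 2.0
        new[i] += gain * (target - positions[i])
    new[-1] += gain * (positions[-2] + gap - positions[-1])
    return new
```

Iterating this law drives the column to equal spacing even though no controller knows the global formation, which is the essence of coordinating otherwise independent vehicles through decentralized feedback.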
Sandia National Laboratories has developed a squad of robotic vehicles as a test-bed for investigating cooperative control strategies. The squad consists of eight RATLER vehicles and a command station. The RATLERs are medium-sized all-electric vehicles containing a PC104 stack for computation, control, and sensing. Three separate RF channels are used for communications: one for video, one for command and control, and one for differential GPS corrections. Using DGPS and IR proximity sensors, the vehicles are capable of autonomously traversing fairly rough terrain. The control station is a PC running Windows NT. A GUI has been developed that allows a single operator to task and monitor all eight vehicles. To date, the following mission capabilities have been demonstrated: 1. Way-Point Navigation, 2. Formation Following, 3. Perimeter Surveillance, 4. Surround and Diversion, and 5. DGPS Leap Frog. This paper describes the system and briefly outlines each mission capability. The DGPS Leap Frog capability is discussed in more detail. This capability is unique in that it demonstrates how cooperation allows the vehicles to accurately navigate beyond the RF communication range. One vehicle stops and uses its corrected GPS position to re-initialize its receiver, becoming the DGPS correction station for the other vehicles. Error in position accumulates each time a new vehicle takes over the DGPS duties. The accumulation in error is accurately modeled as a random walk phenomenon. This paper demonstrates how useful accuracy can be maintained beyond a single station's RF communication range.
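The random walk error model for DGPS Leap Frog can be sketched with a short Monte-Carlo: each handoff adds an independent zero-mean re-initialization error to the chain, so the RMS position error grows as the square root of the number of handoffs. The per-handoff error magnitude below is an assumed value, not a measured one.

```python
import random

def leapfrog_error(n_handoffs, sigma=0.05, trials=2000, seed=1):
    """RMS accumulated position error after n_handoffs leap-frog
    handoffs, each adding an independent zero-mean Gaussian error of
    standard deviation sigma (random walk model)."""
    rng = random.Random(seed)
    sq_sum = 0.0
    for _ in range(trials):
        err = sum(rng.gauss(0.0, sigma) for _ in range(n_handoffs))
        sq_sum += err * err
    return (sq_sum / trials) ** 0.5
```

Because error grows only as the square root of the handoff count, useful accuracy persists for many hops, which is why the leap-frog scheme extends navigation well beyond a single correction station's range.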
Micro-robots may soon be available for deployment by the thousands. Consequently, controlling and coordinating a force this large to accomplish a prescribed task is of great interest. This paper describes a flexible architecture for deploying thousands of autonomous robots simultaneously. The robots' behavior is based on a subsumption architecture in which individual behaviors are prioritized with respect to all others. The primary behavior explored in this paper is group formation behavior drawn from the work in social potential fields applications conducted by Reif and Wang, and Dudehoeffer and Jones. While many papers have examined the application of social potential fields in a simulation environment, this paper describes the implementation of this behavior in a collective of small robots.
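A social potential field of the kind drawn from Reif and Wang pairs long-range attraction with short-range repulsion so that a collective settles into a formation with a preferred inter-robot spacing. The inverse-power form and unit coefficients below are illustrative choices, not the parameters used in the robots described here.

```python
def social_force(d, c_att=1.0, c_rep=1.0):
    """Pairwise social potential field force at separation d: positive
    values pull the pair together (attraction dominates at long range),
    negative values push them apart (repulsion dominates up close)."""
    return c_att / d**2 - c_rep / d**3

def equilibrium_distance(c_att=1.0, c_rep=1.0):
    """Spacing where the pairwise force vanishes: c_att/d^2 = c_rep/d^3,
    giving d = c_rep / c_att."""
    return c_rep / c_att
```

Summing this force over all neighbours gives each robot its formation behavior; in a subsumption architecture, higher-priority behaviors (e.g. obstacle avoidance) would simply override this output when they fire.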
Perhaps the most basic barrier to the widespread deployment of remote manipulators is that they are very difficult to use. Remote manual operations are fatiguing and tedious, while fully autonomous systems are seldom able to function in changing and unstructured environments. An alternative approach to these extremes is to exploit computer control while leaving the operator in the loop to take advantage of the operator's perceptual and decision-making capabilities. This paper describes research that is enabling gradual introduction of computer control and decision making into operator-supervised robotic manipulation systems.
Significant progress has been made with regard to bringing vehicle intelligence (VI) technologies into passenger, commercial, and military ground vehicles. Very few of these technologies, however, directly impact vehicle control systems; and although the Automated Highway System (AHS) portion of the U.S. Department of Transportation's (DOT's) Intelligent Transportation Systems (ITS) Program successfully demonstrated fully automated driving in August 1997, most of the ITS technologies developed to date have focused on driver warning/information systems. The U.S. Department of Defense's (DOD's) Army Vehicle Intelligence Program (AVIP) is capitalizing on the lessons learned from DOT's ITS Program and will push the envelope for selected technologies, including issues of vehicle control. As VI impinges more heavily on vehicle control, it will be beneficial to consider more closely the relationship between VI and robotics. Because a significant amount of data related to the driver, vehicle, and driving environment is already captured and managed by on-board VI systems, a rich database of information is available that would be of value for automating (or roboticizing) driver/driving functions and tasks. This paper discusses some state-of-the-art VI technologies and suggests how greater benefits could be achieved by examining the relationship between VI and robotics.
The commercial industries of agriculture, mining, construction, and material handling employ a wide variety of mobile machines, including tractors, combines, Load-Haul-Dump vehicles, trucks, paving machines, fork trucks, and many more. Automation of these vehicles promises to improve productivity, reduce operational costs, and increase safety. Since the vehicles typically operate in difficult environments, under all weather conditions, and in the presence of people and other obstacles, reliable automation faces severe technical challenges. Furthermore, the viable technology solutions are constrained by cost considerations. Fortunately, due to the limited application domain, repetitive nature, and the utility of partial automation for most tasks, robotics technologies can have a profound impact on industrial vehicles. In this paper, we describe a technical approach developed at Carnegie Mellon University for automating mobile machines in several applications, including mass excavation, mining, and agriculture. The approach is introduced via case studies, and the results are presented.
Previous research has produced the T-series of omni-directional (ODV) robots, which are characterized by their use of smart wheel technology. In this paper we describe the design, implementation, and performance of the first use of ODV technology in a complete robotic system for a practical, real-world application. The system discussed is called ODIS, short for Omni-Directional Inspection System. ODIS is a man-portable mobile robotic system that can be used for autonomous or semi-autonomous inspection under vehicles in a parking area. The ODIS system can be deployed to travel through a parking area, systematically determining when a vehicle is in a parking stall and then carrying out a sweep under the vehicle, while sending streaming video back to a control station. ODIS uses three ODV wheels designed with a belt-driven steering mechanism to facilitate the low profile needed to fit underneath most vehicles. Its vetronics capabilities include eight different processors and a sensor array that includes a range-finding laser, sonar and IR sensors, and a color video camera. The ODIS planning and control architecture is characterized by a unique coupling between the vehicle-level path-tracking control system and a novel sensor-based feedback system for intelligent behavior generation. Real-life examples of ODIS's performance show the effectiveness of the system.
An effective navigation planner must have knowledge not only of the effects its actions will have, but also of the effect that the environment will have on its actions (e.g., the UGV may travel more slowly over rough terrain). This is needed because the shortest path to the goal is not always the most efficient when the rate of travel over the terrain is considered. To address this issue, we have developed an approach called ERA, which uses regression tree induction to learn action models that predict the effect terrain conditions will have on a UGV's navigation actions. The action models support a high-level mission planner that finds efficient navigation plans consisting of way-points through which the UGV should travel. We present results from our experiments in a simulated environment and on an RWI ATRV-Jr robot. The studies evaluate the performance of ERA in different mission scenarios with different amounts of sensor and actuator noise. Advantages of our approach include the ability to automatically learn action models, generate efficient high-level navigation plans that take terrain conditions into account, and transfer learned knowledge to other missions.
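The kind of model regression tree induction produces can be shown with its simplest instance, a one-split regression stump fit to hypothetical (terrain roughness, achieved speed) samples. This sketches the model class ERA learns, not ERA's actual induction algorithm or features.

```python
def fit_stump(samples):
    """Fit a one-split regression stump (the simplest regression tree)
    to (feature, target) pairs: try every split threshold, predict the
    mean target on each side, and keep the split with the smallest
    summed squared error. Returns the fitted predictor."""
    xs = sorted(set(x for x, _ in samples))
    best = None
    for t in xs[1:]:
        left = [y for x, y in samples if x < t]
        right = [y for x, y in samples if x >= t]
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        sse = sum((y - ml) ** 2 for y in left) + sum((y - mr) ** 2 for y in right)
        if best is None or sse < best[0]:
            best = (sse, t, ml, mr)
    _, t, ml, mr = best
    return lambda x: ml if x < t else mr
```

A planner can then cost candidate way-point legs by dividing leg length by the predicted speed for that leg's terrain, so a longer smooth route can beat a shorter rough one.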
The problem of path tracking is critical to autonomous vehicle navigation. This problem has been approached using multiple feedback sensors that measure both the vehicle heading and position, and also using cable-guided systems measuring lateral error. This paper describes a path tracking controller that uses only the vehicle position as the single feedback value while providing a spatially similar response to disturbances independent of vehicle velocity. The results obtained on vehicles using this tracking controller are comparable to those obtained with systems measuring both vehicle position and heading. The controller has the additional benefits of being simple to implement, dealing effectively and predictably with system non-linearities, and providing a good measure of insensitivity to system variations. The controller described in this paper was developed for a two-hundred-horsepower farm tractor with Ackerman steering and has since been successfully used on a smaller tractor. The paper includes the derivation of the vehicle model, the development of the controller, and a discussion of system stability.
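The controller derivation itself is in the paper; as a neighboring illustration of why geometry-based position feedback gives a spatially similar response at any speed, here is a pure-pursuit sketch (not the paper's controller; the lookahead distance, speed, and simulation setup are arbitrary):

```python
import math

def pure_pursuit_curvature(pos, heading, goal):
    """Curvature command from a lookahead goal point: it depends only on
    geometry, so the path shape of the response is speed-independent."""
    dx, dy = goal[0] - pos[0], goal[1] - pos[1]
    # Transform the goal point into the vehicle frame.
    lx = math.cos(-heading) * dx - math.sin(-heading) * dy
    ly = math.sin(-heading) * dx + math.cos(-heading) * dy
    return 2.0 * ly / (lx * lx + ly * ly)  # standard pure-pursuit 2*y/L^2

# Bicycle-model simulation converging onto the path y = 0 from a 1 m offset.
x, y, th = 0.0, 1.0, 0.0
for _ in range(200):
    k = pure_pursuit_curvature((x, y), th, (x + 3.0, 0.0))  # 3 m lookahead
    v, dt = 1.0, 0.1
    th += v * k * dt
    x += v * math.cos(th) * dt
    y += v * math.sin(th) * dt
```

Linearizing this loop gives a damping ratio of about 0.7 with a settling length set by the lookahead distance, so the lateral error decays over the same *distance* traveled regardless of velocity.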
We describe potential military robotics applications for the heavy vehicle automation and driver assistance research that has been conducted at the California Partners for Advanced Transit and Highways (PATH). Specifically, we summarize the state of vehicle automation research at PATH, beginning with a short description of automated platoon operations with eight light-duty passenger vehicles. Then we focus on automation of a Class 8 Freightliner Model FLD 125 tractor with 45-ft trailer, and a lateral driver assist installed in a 10-wheel International snowplow. We also discuss full automation plans for a Kodiak 4000-ton/hour rotary snowblower, two 40-ft New Flyer buses, one 60-ft New Flyer articulated bus, and three Freightliner Century tractor-trailer combinations. We discuss benefits for civilian applications: congestion relief, driver safety, and fuel economy/emissions reductions. We then follow with a discussion of the benefits from potential military spin-ons, which include, as dual-use applications, driver safety and fuel economy/emissions. We end by discussing the additional military benefit in the conduct of tactical resupply operations, where vehicles of similar weight class and performance to those tested by PATH can be used in automated convoys, with savings in manpower and survivability in addition to improved mission operations.
A more highly integrated, electro-optical sensor suite using Laser Illuminated Viewing and Ranging (LIVAR) techniques is being developed under the Army Advanced Concept Technology-II (ACT-II) program for enhanced man-portable target surveillance and identification. The ManPortable LIVAR system currently in development employs a wide array of sensor technologies that provide the foot-bound soldier and UGV significant advantages and capabilities in lightweight, fieldable target location, ranging, and imaging systems. The unit incorporates a wide field-of-view, 5° x 3°, uncooled LWIR passive sensor for primary target location. Laser range finding and active illumination are done with a triggered, flash-lamp-pumped, eyesafe micro-laser operating in the 1.5 micron region, used in conjunction with a range-gated, electron-bombarded CCD digital camera to then image the target objective in a narrower, 0.3°, field of view. Target range is acquired using the integrated LRF, and a target position is calculated using data from other onboard devices providing GPS coordinates, tilt, bank, and corrected magnetic azimuth. Range gate timing and coordinated receiver optics focus control allow target imaging operations to be optimized. The onboard control electronics provide power-efficient system operation for extended field use periods from the internal, rechargeable battery packs. Image data storage, transmission, and processing capabilities are also being incorporated to provide the best all-around support for the electronic battlefield in this type of system. The paper will describe flash laser illumination technology, EBCCD camera technology with the flash laser detection system, and image resolution improvement through frame averaging.
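The range-gating arithmetic behind this scheme is simple: the receiver gate opens after the round-trip time of flight to the near edge of the range window and stays open for the window's round-trip depth (the 1.5 km target range and 30 m gate depth below are illustrative, not system specifications):

```python
C = 299_792_458.0  # speed of light in m/s

def gate_timing(range_m, depth_m):
    """Range-gated imaging: open the receiver only for light returning
    from the chosen range window, rejecting backscatter at other ranges."""
    delay_s = 2.0 * range_m / C   # round-trip time to the gate's near edge
    width_s = 2.0 * depth_m / C   # gate stays open over the window depth
    return delay_s, width_s

delay, width = gate_timing(1500.0, 30.0)  # 1.5 km target, 30 m deep gate
```

For this example the gate delay is about 10 microseconds and the gate width about 200 nanoseconds, which is why range gating so effectively suppresses atmospheric backscatter from outside the window.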
Sophisticated robotic platforms with diverse sensor suites are quickly replacing the eyes and ears of soldiers on the complex battlefield. The Army Research Laboratory (ARL) in Adelphi, Maryland has developed a robot-based acoustic detection system that will detect an impulsive noise event, such as a sniper's weapon firing or door slam, and activate a pan-tilt to orient a visible and infrared camera toward the detected sound. Once the cameras are cued to the target, onboard image processing can then track the target and/or transmit the imagery to a remote operator for navigation, situational awareness, and target detection. Such a vehicle can provide reconnaissance, surveillance, and target acquisition for soldiers, law enforcement, and rescue personnel, and remove these people from hazardous environments. ARL's primary robotic platforms contain 16-in. diameter, eight-element acoustic arrays. Additionally, a 9-in. array is being developed in support of DARPA's Tactical Mobile Robot program. The robots have been tested in both urban and open terrain. The current acoustic processing algorithm has been optimized to detect the muzzle blast from a sniper's weapon, and reject many interfering noise sources such as wind gusts, generators, and self-noise. However, other detection algorithms for speech and vehicle detection/tracking are being developed for implementation on this and smaller robotic platforms. The collaboration between two robots, both with known positions and orientations, can provide useful triangulation information for more precise localization of the acoustic events. These robots can be mobile sensor nodes in a larger, more expansive, sensor network that may include stationary ground sensors, UAVs, and other command and control assets. This report will document the performance of the robot's acoustic localization, describe the algorithm, and outline future work.
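The two-robot triangulation mentioned above reduces to intersecting two bearing rays from known positions; a sketch with invented geometry (the robot layout and source position are illustrative):

```python
import math

def triangulate(p1, b1, p2, b2):
    """Intersect two bearing rays (world-frame angles in radians) from
    robots at known positions p1 and p2 to locate an acoustic source."""
    # Ray i: p_i + t_i * (cos b_i, sin b_i); solve the 2x2 linear system
    # t1*d1 - t2*d2 = p2 - p1 by Cramer's rule.
    d1 = (math.cos(b1), math.sin(b1))
    d2 = (math.cos(b2), math.sin(b2))
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    if abs(det) < 1e-9:
        return None  # near-parallel bearings: geometry too weak to localize
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t1 = (rx * (-d2[1]) - (-d2[0]) * ry) / det
    return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])

# Source at (5, 5); robots at the origin and at (10, 0) (illustrative layout).
src = triangulate((0.0, 0.0), math.atan2(5, 5), (10.0, 0.0), math.atan2(5, -5))
```

The `None` branch reflects the practical caveat in the abstract: two bearings only localize well when the robots' baseline gives the rays a healthy intersection angle.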
An autonomous vehicle driving in a densely vegetated environment needs to be able to discriminate between obstacles (such as rocks) and penetrable vegetation (such as tall grass). We propose a technique for terrain cover classification based on the statistical analysis of the range data produced by a single-axis laser rangefinder (ladar). We first present theoretical models for the range distribution in the presence of homogeneously distributed grass and of obstacles partially occluded by grass. We then validate our results with real-world cases, and propose a simple algorithm to robustly discriminate between vegetation and obstacles based on the local statistical analysis of the range data.
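A minimal version of such a local-statistics discriminator (the threshold and range samples are invented, not the paper's models) exploits the fact that ladar returns from penetrable grass scatter over depth while a solid surface returns a tight cluster:

```python
import statistics

def classify_window(ranges, std_threshold=0.15):
    """Label a window of ladar returns: grass lets rays penetrate to
    varying depths (high spread); an obstacle returns a tight cluster."""
    return "grass" if statistics.pstdev(ranges) > std_threshold else "obstacle"

grass = [4.1, 4.6, 3.9, 5.0, 4.4, 4.8]       # returns from varying depths
rock = [4.20, 4.22, 4.19, 4.21, 4.20, 4.23]  # tight cluster off a surface
```

A real implementation would derive the threshold from the theoretical range distributions the paper develops rather than fixing it by hand.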
Small, low-cost, low-power infrared imaging sensors are a relatively recent innovation, employing the most advanced MEMS processing techniques, integrated circuit design, optical materials, and focal plane array packaging. We review the rationale behind the development of low-cost, small IR cameras, discuss several of the medium-performance applications for these sensors via a modeling analysis, discuss the goals and status of our applied research programs in uncooled focal plane array technology, and discuss the future of uncooled focal plane arrays.
Mobile robot terrain perception is needed to provide local terrain data to assess route trafficability. Important terrain features include positive obstacles (e.g., rocks, walls and posts), negative obstacles (e.g., down-steps, gaps and holes), steep slopes (lateral and longitudinal), and broken, rough or porous obstacles (e.g., brush and rubble). Terrain perception is needed to locate these terrain features and assess their geometric properties. Stereo vision (stereoscopy) is a widely-used technique for mobile robot terrain perception. Stereo vision is effective for locating positive obstacles. It has not proven effective at locating and assessing negative features, slope in regions of relatively uniform appearance, or highly textured features. This paper explores an approach to enhance and complement stereo vision terrain perception by using simple stereo lighting (photometric stereo). Shadows created by vertical-offset lighting are an effective cue to locate negative terrain features. Horizontal-offset illumination enhances cue features in the scene for stereo vision processing. Stereo illumination creates depth cues that cannot be exploited by traditional horizontal-offset stereo vision systems, but can be exploited by trinocular or vertically-offset stereo cameras. Multiple light sources and cameras enable shape-from-shading (photoclinometry) methods to overcome traditional limitations to generating 3D range and slope maps for natural terrain. Issues in applying these methods to broken, rough and porous obstacles are identified, but not examined in detail.
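Classical Lambertian photometric stereo with three known light directions recovers the surface normal from three intensity measurements at a pixel; a sketch with an illustrative light layout (not the paper's rig, and albedo is folded into the solved vector):

```python
def solve_normal(lights, intensities):
    """Lambertian photometric stereo: with three known light directions,
    I_i = rho * (L_i . n); solve the 3x3 system, then normalize."""
    # 3x3 linear solve by Cramer's rule (no external dependencies).
    (a, b, c), (d, e, f), (g, h, i) = lights
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    I1, I2, I3 = intensities
    nx = (I1 * (e * i - f * h) - b * (I2 * i - f * I3) + c * (I2 * h - e * I3)) / det
    ny = (a * (I2 * i - f * I3) - I1 * (d * i - f * g) + c * (d * I3 - I2 * g)) / det
    nz = (a * (e * I3 - I2 * h) - b * (d * I3 - I2 * g) + I1 * (d * h - e * g)) / det
    norm = (nx * nx + ny * ny + nz * nz) ** 0.5
    return (nx / norm, ny / norm, nz / norm)

# Flat ground (normal straight up) lit from three offset directions.
lights = [(1.0, 0.0, 1.0), (0.0, 1.0, 1.0), (-1.0, -1.0, 1.0)]
# Intensities for true n = (0, 0, 1) with unit albedo: I_i = L_i . n.
normal = solve_normal(lights, [1.0, 1.0, 1.0])
```

Applied per pixel over a terrain image, the recovered normals give exactly the slope map the abstract says traditional stereo struggles to produce in uniform-appearance regions.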
A vision system capable of extracting features from a semi-structured environment for vehicle guidance is described in this paper. The system is primarily used for road following via the detection of mud tracks in a tropical environment. The scene captured by a CCD colour camera is digitised into 24-bit colour images with a resolution of 320x240 pixels. Partitioning of the scene into road and non-road areas is based on the results of a colour image segmentation algorithm applied to these images. The RGB colour images from the camera are converted to HSI format. Training samples of road and non-road features of the terrain to be explored, stored in a database, are used to classify blocks of pixels using only the hue information content of the images. A Bayesian classifier in conjunction with a smooth thresholding function is used for the segmentation algorithm on a per-block basis. This approach results in the recognition of traversable areas, particularly non-metalled roads. Experimental results have shown that the algorithm is invariant to shadow conditions, i.e. roads were detected under varying light conditions. Due to the soil conditions of the test sites, small puddles of water on the mud tracks are also classified as driveable areas. The system outputs a one-bit 2-D map of the image every 200 ms. Field results of the proposed approach have shown favourable responses for real-time implementation on an autonomous ground vehicle.
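A per-block hue classifier of this kind can be sketched as follows (the Gaussian hue models and pixel values are invented stand-ins for the trained road/non-road database, equal priors are assumed, and the smooth thresholding function is omitted):

```python
import colorsys
import math

def hue(r, g, b):
    """Hue in [0, 1) from 8-bit RGB (the HSI hue channel, here via HSV)."""
    return colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)[0]

def gaussian(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def classify_block(pixels, road=(0.08, 0.03), non_road=(0.30, 0.06)):
    """Per-block Bayes classifier on mean hue; the (mu, sigma) pairs stand
    in for trained road / non-road hue models. Returns True for road."""
    h = sum(hue(*p) for p in pixels) / len(pixels)
    return gaussian(h, *road) >= gaussian(h, *non_road)

mud = [(150, 100, 60)] * 4        # brownish block (illustrative mud track)
grass_blk = [(60, 150, 40)] * 4   # greenish block (illustrative vegetation)
```

Because hue is largely independent of intensity, a classifier of this form keeps working across shadowed and sunlit patches, which matches the shadow-invariance the abstract reports.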
The objective of the Joint Robotics Program (JRP) is to research, develop, acquire, and field unmanned ground vehicle systems for the United States Armed Forces. The program is structured to field first-generation systems, mature promising technologies, and then upgrade capabilities by means of an evolutionary strategy. In the near term, acquisition programs emphasize teleoperation over diverse terrain, more autonomous functioning for structured environments, and extensive opportunities for users to operate Unmanned Ground Vehicles (UGVs). With regard to far-term efforts, the JRP in cooperation with the Army sponsors a tech-base program (Demo III) for autonomous mobility in unstructured environments. Last October, the Demo III program held a highly successful demonstration of autonomous mobility at Fort Knox, Kentucky. Prototypical countermine systems in Bosnia and user experimentation with reconnaissance UGVs continue to engender requirements in other mission areas. The overall progress of the JRP is reflected by the fact that the Army's concept for its Future Combat Systems program involves considerable use of unmanned ground systems. The author will update the conference on the considerable progress of the JRP, as well as other Department of Defense ground robotics programs.
As a study in late 1999, and subsequently as an official program in early 2000, the Army and DARPA agreed to jointly fund a research effort, which became known as Future Combat Systems (FCS), to fast-track the introduction of new capability to our ground forces, which were increasingly being asked to respond rapidly to hostile situations around the world. The Chief of Staff of the Army clearly articulated this crucial need in his thrust to make the light forces more lethal and the heavy forces more deployable. The DARPA/Army agreement consisted of two parts: a) a program to have industry-led teams develop concepts for how the FCS would be composed and integrated, and b) a series of technology development programs expected to generate potentially radical improvements in the overall effectiveness of the FCS concept. Two of these technology development programs were directly related to ground vehicle robotics, and evolved by the end of 2000 into the Unmanned Ground Combat Vehicle (UGCV) and Perception for Off-road Robotics (PerceptOR) programs. These programs and their underlying assumptions are described in this paper.
The U.S. Army Tank-Automotive Research, Development, and Engineering Center (TARDEC) recently opened a 5000-square-foot robotics laboratory known as the TARDEC Robotics Laboratory. The focus of the lab is on robotics research, both basic and applied, in the area of robot mobility. Mobility is the key problem for lightweight robotic systems, and the TARDEC Robotics Lab will develop innovative ways to deal with mobility issues. The lab will also test and evaluate robotic systems in all aspects of mobility and control. The lab has the highest concentration of senior researchers at TARDEC, and is committed to maintaining in-house research talent so that new combat concepts using robots can be evaluated effectively by the Army. This paper serves as an introduction to the lab: its missions, goals, capabilities, and programs.
In the past year, the U.S. Army has committed to a paradigm shift in the way future ground military operations will be conducted. Increased emphasis upon the deployability of future forces has focused efforts towards reducing the weight, volume, and logistics requirements for proposed tactical systems. Extensive use of unmanned systems offers a potential means to achieve these goals without reducing the lethality or survivability of this future ground combat force. To support this vision, the U.S. Army has embarked upon a concerted effort to develop the required technology and demonstrate its maturity, with the goal of incorporating this technology into the Future Combat Systems and the Objective Force.
One approach to measuring the performance of intelligent systems is to develop standardized or reproducible tests. These tests may be in a simulated environment or in a physical test course. The National Institute of Standards and Technology has developed a test course for evaluating the performance of mobile autonomous robots operating in an urban search and rescue mission. The test course is designed to simulate a collapsed building structure at various levels of fidelity. The course will be used in robotic competitions, such as the American Association for Artificial Intelligence (AAAI) Mobile Robot Competition and the RoboCup Rescue. Designed to be repeatable and highly reconfigurable, the test course challenges a robot's cognitive capabilities such as perception, knowledge representation, planning, autonomy and collaboration. The goal of the test course is to help define useful performance metrics for autonomous mobile robots which, if widely accepted, could accelerate development of advanced robotic capabilities by promoting the re-use of algorithms and system components. The course may also serve as a prototype for further development of performance testing environments which enable robot developers and purchasers to objectively evaluate robots for a particular application. In this paper we discuss performance metrics for autonomous mobile robots, the use of representative urban search and rescue scenarios as a challenge domain, and the design criteria for the test course.
One of the goals of the U.S. Army's Demo III robotics program is to develop individual and group behaviors that allow the robot to contribute to battlefield missions such as reconnaissance. Since experimental time on the actual robotic vehicle, referred to as the experimental unmanned ground vehicle (XUV), is divided between many organizations, it is essential that we develop a simulation tool that will allow us to develop and test behaviors in simulation before porting them to the actual vehicle. In this work, we describe a behavior development tool that incorporates robotic planning algorithms developed by the National Institute of Standards and Technology (NIST) in the Modular Semi-Automated Forces (ModSAF) battlefield simulation tool. By combining the NIST planning algorithms with ModSAF, we can exercise the actual vehicle planning algorithms in a dynamic battlefield environment with a variety of entities and conditions to evaluate the behaviors we develop.
We present an intelligent sensor consisting of two CCDs with different fields of view that share the same motion; they can be controlled jointly or independently about their horizontal, vertical, and rotational axes, and are connected in a closed loop to image processing resources. The goal of such a sensor is to be a testbed for image processing algorithms in real conditions. It illustrates the active perception paradigm and is used for autonomous navigation and target detection/tracking missions. Such a sensor has to meet many requirements: it is designed to be easily mounted on a standard tracked or wheeled military vehicle operating in off-road conditions. Due to the rather wide range of missions UGVs may be involved in and to the computing cost of image processing, its computing resources have to be reprogrammable, powerful enough to meet real-time constraints, modular at the software level as well as at the hardware level, and able to communicate with other systems. First, the paper details the mechanical, electronic, and software design of the whole sensor. Then, we explain its operation, the constraints due to its parallel processing architecture, the image processing algorithms that have been implemented for it, and their current uses and performance. Finally, we describe experiments conducted on tracked and wheeled vehicles and conclude on the future development and use of this sensor for unmanned ground vehicles.
In this paper, we introduce an autonomous map-building technique for mobile robots, based on combinatorial maps. Existing representations of the environment traditionally fall into two distinct categories: metric or topological. Topological approaches are usually well-adapted to global planning and navigation tasks. However, metric maps are easier to read for a human operator and they are better suited to precise robot positioning. Among them, we can distinguish feature-based and area-based maps. Our model enables us to combine the orthogonal strengths of these various representations in a rather compact and efficient way, using an algebraic tool named combinatorial map. We propose a global framework to deal with topological and geometric uncertainties, and a whole strategy for the autonomous generation of 2D combinatorial maps of the environment. The main innovation lies in the way local free space is fused into the global model in order to correct both the position and the topology of obstacles. We extend the notion of discrete and regular occupancy grid to any kind of polygonal subdivision, with cells of variable shapes and dimensions. To conclude, we describe experiments conducted with a real-world robot moving about within a well-structured indoor environment.
In this paper, we provide an overview of our on-going work using spatial relations for mobile robot navigation. Using the histogram of forces, we show how linguistic expressions can be generated to describe a qualitative view of the robot with respect to its environment. The linguistic expressions provide a symbolic link between the robot and a human user, thus facilitating two-way, human-like communication. In this paper, we present two ways in which spatial relations can be used for robot navigation. First, egocentric spatial relations provide a robot-centered view of the environment (e.g., there is an object on the left). Navigation can be described in terms of spatial relations (e.g., move forward while there is an object on the left, then turn right), such that a complete navigation task is generated as a sequence of navigation states with corresponding behaviors. Second, spatial relations can be used to analyze maps and facilitate their use in communicating navigation tasks. For example, the user can draw an approximate map on a PDA and then draw the desired robot trajectory also on the PDA, relative to the map. Spatial relations can then be used to convert the relative trajectory to a corresponding navigation behavior sequence. Examples are included using a comparable scene from both a robot environment and a PDA-sketched trajectory showing the corresponding generated linguistic spatial expressions.
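A toy version of an egocentric spatial relation (a crude stand-in for histogram-of-forces scoring; the angular sectors are arbitrary) maps an object's bearing in the robot frame to a linguistic label:

```python
import math

def egocentric_relation(robot_pose, obj):
    """Map an object position to a coarse linguistic relation in the
    robot's frame; robot_pose is (x, y, heading) in world coordinates."""
    x, y, heading = robot_pose
    ang = math.atan2(obj[1] - y, obj[0] - x) - heading
    ang = math.atan2(math.sin(ang), math.cos(ang))  # wrap to (-pi, pi]
    if abs(ang) <= math.pi / 4:
        return "in front"
    if abs(ang) >= 3 * math.pi / 4:
        return "behind"
    return "on the left" if ang > 0 else "on the right"

rel = egocentric_relation((0.0, 0.0, 0.0), (1.0, 3.0))
```

A navigation state such as "move forward while there is an object on the left" then reduces to checking whether any sensed object currently maps to that label.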
One of NASA's goals for the Mars Rover missions of 2003 and 2005 is to have a distributed team of mission scientists. Since these scientists are not experts on rover mobility, we have developed the Rover Obstacle Visualizer and Navigability Expert (ROVANE). ROVANE is a combined obstacle detection and path planning software suite, to assist in distributed mission planning. ROVANE uses terrain data, in the form of panoramic stereo images captured by the rover, to detect obstacles in the rover's vicinity. These obstacles are combined into a traversability map which is used to provide path planning assistance for mission scientists. A corresponding visual representation is also generated, allowing human operators to easily identify hazardous regions and to understand ROVANE's path selection. Since the terrain data often contains uncertain regions, the ROVANE obstacle detector generates a probability distribution describing the likely cost of a given obstacle or region. ROVANE then allows the user to plan for best-, worst-, and intermediate-case scenarios. ROVANE thus allows non-experts to examine scenarios and plan missions which have a high probability of success. ROVANE is capable of stand-alone operation, but it is designed to work with JPL's Web Interface for Telescience, an Internet-based tool for collaborative command sequence generation.
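Planning for best-, worst-, and intermediate-case scenarios can be illustrated by scoring candidate routes at different percentiles of their cell-cost distributions (the cells, cost samples, and routes below are invented, not ROVANE data):

```python
def percentile(samples, p):
    """p-th percentile by nearest rank (a stand-in for ROVANE's
    probability distribution over obstacle/region cost)."""
    s = sorted(samples)
    return s[min(len(s) - 1, int(p / 100 * len(s)))]

def route_cost(route, cell_costs, p):
    """Score a route for a chosen scenario: sum each traversed cell's
    p-th percentile cost."""
    return sum(percentile(cell_costs[c], p) for c in route)

# Sampled traversal costs per cell; 'uncertain' is cheap on average but
# occasionally very expensive (e.g., a possibly-hazardous shadowed region).
cells = {"flat": [1, 1, 1, 1], "uncertain": [1, 1, 1, 20], "rocky": [4, 4, 5, 5]}
risky, safe = ["flat", "uncertain"], ["flat", "rocky"]

median_pick = min((risky, safe), key=lambda r: route_cost(r, cells, 50))
worst_pick = min((risky, safe), key=lambda r: route_cost(r, cells, 95))
```

The median-case planner prefers the cheap-on-average route through the uncertain region, while the worst-case planner pays for the reliably moderate rocky detour: exactly the scenario trade-off the tool exposes to mission scientists.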
The U.S. Army Aviation and Missile Command (AMCOM), Research, Development and Engineering Center (AMRDEC), in support of the Unmanned Ground Vehicles/Systems Joint Project Office (UGV/S JPO), has developed, tested, and demonstrated the feasibility of using expendable, thin-buffered fiber optic cable for tele-operation of unmanned ground systems. Complete Non-Line-of-Sight (NLOS), high-bandwidth, expendable fiber optic cable payout systems have been designed, leveraging other Army programs, and integrated on several ground vehicles. A number of tests have been conducted to prove the viability of fiber optics for UGV datalink applications. These successful tests led to the initiation of the development of miniature fiber optic dispensers for small UGVs. Based on the outcome of the Engineer Urban Robot Concept Experimentation Program (URBOT CEP) conducted at Ft. Leonard Wood, MO in 1999, which focused on the feasibility, capability, efficiency, and operational effectiveness of small robots for reconnaissance of bunkers, subterranean sewers and tunnels, and Military Operations in Urban Terrain (MOUT), a design concept was formulated for small robot tethered communications. Miniature fiber optic dispensers have now been fabricated and tested. This paper will present a brief history of the technology transfer and development associated with fiber optic datalinks for unmanned ground vehicles and will focus on the recent research and development of miniaturized deployment systems for small robot applications.
It is accepted that the ability to learn and adapt is key to prosperity and survival in both individuals and societies. The same is true of populations of robots: those robots within a population that are able to learn will outperform, survive longer than, and perhaps exploit their non-learning co-workers. This paper describes ongoing results from Communal Learning in the Cognitive Colonies Project (CMU/Robotics and DRES), funded jointly by DARPA ITO Software for Distributed Robotics and DRDC-DRES. We discuss how communal learning fits into the free-market architecture for distributed control, and present techniques for representing experiences, learned behaviors, maps, and computational resources as commodities within the market economy. Once represented as commodities, these pass through the market cycle of speculating, acting, receiving profits or sustaining losses, and then learning; this allows successful control strategies to emerge and the individuals who discovered them to become established as successful. The paper discusses learning to predict costs and make better deals, learning transition confidences, learning causes of death, learning with robot sacrifice, and learning model parameters.
Autonomous agent path planning is a central problem in machine learning and artificial intelligence. Reactive execution is often used to provide the best decision for the agent's reactions. Although the problem is important in stationary environments, most interesting environments are time-varying. This paper builds on our previous work, which combined the potential field model with reinforcement learning to solve the stationary path-planning problem. In this work we address the case of a dynamic environment, in which the motion of the obstacles introduces problems and challenges that the algorithm proposed in this paper addresses.
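The combination the abstract refers to can be sketched in miniature. The following is an assumption-laden toy, not the authors' algorithm: a tabular Q-learner on a 10×10 grid whose step cost is shaped by a hand-built potential field (attractive pull toward the goal, repulsive push from obstacles), with one obstacle that oscillates each time step to stand in for the dynamic environment. The constants `K_ATT`, `K_REP` and the obstacle motion are invented for illustration.

```python
import math
import random

GOAL = (9, 9)
K_ATT, K_REP = 1.0, 4.0
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]

def potential(pos, obstacles):
    """Attractive pull toward the goal plus repulsive push from each obstacle."""
    att = K_ATT * math.dist(pos, GOAL)
    rep = sum(K_REP / (math.dist(pos, o) + 0.1) for o in obstacles)
    return att + rep

def step_policy(pos, obstacles, q, eps=0.1):
    """Epsilon-greedy over Q-values; unvisited actions default to the
    potential of the state they lead to, so the field guides exploration."""
    if random.random() < eps:
        return random.choice(ACTIONS)
    return min(ACTIONS, key=lambda a: q.get(
        (pos, a),
        potential((pos[0] + a[0], pos[1] + a[1]), obstacles)))

def train(episodes=200, alpha=0.5, gamma=0.9):
    q = {}
    for _ in range(episodes):
        pos, obs = (0, 0), [(4, 4)]
        for t in range(100):
            a = step_policy(pos, obs, q)
            nxt = (min(max(pos[0] + a[0], 0), 9), min(max(pos[1] + a[1], 0), 9))
            obs = [(4 + (t % 2), 4)]          # moving obstacle: oscillates each step
            # cost = step penalty + potential-field shaping term
            cost = 1 + potential(nxt, obs) - potential(pos, obs)
            best_next = min(q.get((nxt, b), 0.0) for b in ACTIONS)
            q[(pos, a)] = (1 - alpha) * q.get((pos, a), 0.0) + alpha * (cost + gamma * best_next)
            pos = nxt
            if pos == GOAL:
                break
    return q
```

The shaping term rewards motion "downhill" in the field while the learned Q-values let the agent correct for the field's local minima, which is the usual motivation for pairing the two techniques.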
In the past, work at the Universitat der Bundeswehr Muenchen (UBM) has focused on autonomous road vehicles. During the last four years the Expectation-based Multi-focal Saccadic Vision (EMS-Vision) system has been developed and implemented. EMS-Vision is the third-generation dynamic vision system following the 4-D approach. The explicit representation of the system's own capabilities, combined with a complex control and information flow, allows the implementation of decision units for goal-oriented activation of locomotion and perception. Thanks to this general approach, and in contrast to former UBM systems that were specially designed and optimized for certain limited scenarios and domains (e.g., road following on Autobahnen), the EMS-Vision system can handle complex driving missions spanning multiple domains. It has been realized on a decentralized parallel hardware structure, built exclusively of commercial off-the-shelf components, in both UBM test vehicles, VaMoRs and VaMP. Results from an autonomously performed mission on the UBM campus are discussed.