Unmanned ground vehicle (UGV) technology can be used in a number of ways to assist in counter-terrorism activities. In addition to the conventional uses of tele-operated robots for unexploded ordnance handling and disposal, water cannons, and other crowd-control devices, robots can also be employed for a host of terrorism deterrence and detection applications. In previous research, USU developed a completely autonomous prototype robot (ODIS) for performing under-vehicle inspections in parking areas. Testing of this prototype and discussions with the user community indicated that neither the technology nor the users are ready for complete autonomy. In this paper we present a robotic system based on ODIS that balances the users' desire/need for tele-operation with a limited level of autonomy that enhances the performance of the robot. The system can be used by both civilian law enforcement and military police to replace the traditional mirror-on-a-stick method of looking under cars for bombs and contraband.
In the wake of the World Trade Center tragedy of September 11, 2001, robots developed for the Defense Advanced Research Projects Agency's Tactical Mobile Robot program were used under the direction of CRASAR, the Center for Robot-Assisted Search and Rescue, to provide technical support to the relief effort. The TMRs (Tactical Mobile Robots) were used to search the disaster scene for casualties, locate victims, and assess building integrity. During the effort the robots were presented with unprecedented obstacles and challenges. This paper outlines lessons learned at the WTC (World Trade Center) disaster and provides information for the development of more capable search-and-rescue robots.
This paper presents a novel teleparamedic robot concept based on high practicality and economy. This new UGV (Unmanned Ground Vehicle) features haptic-feedback-based driving and teleparamedic robotic operation based on true 3-D visualization. The robotic operations include soldier evacuation and two basic FAM (First Aid Measure) modes.
Small air and ground physical agents (robots) will be ubiquitous on the battlefield of the 21st century, principally to lower the exposure to harm of our ground forces in urban and open terrain. Teams of small collaborating physical agents conducting tasks such as Reconnaissance, Surveillance, and Target Acquisition (RSTA), intelligence, chemical and biological agent detection, logistics, decoy, sentry, and communications relay will have advanced sensors, communications, and mobility characteristics. The Army Research Laboratory (ARL) is conducting research in sensor fusion, communications, and processing on small mobile robotic platforms, principally in the urban environment, in support of Military Operations in Urban Terrain (MOUT). This paper discusses on-going research at ARL that supports the development of multi-robot collaboration. Commercial ATRV-2 and Urban Robot platforms are being utilized along with advanced battlefield visualization and other tools to effectively command and control teams of collaborating physical agents and to present the gathered information in a manner that is useful to the commander. The software architecture and the modular packaging designs are the focus of the paper, which also considers mother-ship concepts. Additionally, work conducted with PM Soldier Systems to integrate robotic platforms (Robot Warrior) with the Land Warrior (LW) ensemble to create a Scout Warrior will be discussed.
We have successfully demonstrated a simple, wide field-of-view, foveated imaging system utilizing a liquid crystal spatial light modulator (SLM). The SLM was used to correct the off-axis aberrations that otherwise limited the useful field-of-view (FOV) of our system. Our system mimics the operation of the human eye by creating an image with variable spatial resolution and could be made significantly smaller and more compact than a conventional wide FOV system. It may be useful in applications such as surveillance, remote navigation of unmanned vehicles, and target acquisition and tracking, or any application where size, weight, or data transmission bandwidth is critical.
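The variable-resolution idea can be illustrated independently of the optics. The sketch below (our own illustration, not the authors' SLM-based system, with an arbitrary fovea size) foveates a 2-D intensity grid in software: pixels near the center keep full resolution, while the periphery is reduced to block averages.

```python
def foveate(image, fovea_radius=2):
    """Crude software foveation of a 2-D intensity grid: pixels near the
    center keep full resolution; the periphery is replaced by 2x2 block
    averages. A stand-in illustration, not the SLM-based optical system."""
    h, w = len(image), len(image[0])
    cy, cx = h // 2, w // 2
    out = [row[:] for row in image]
    for y in range(0, h - 1, 2):
        for x in range(0, w - 1, 2):
            if abs(y - cy) <= fovea_radius and abs(x - cx) <= fovea_radius:
                continue            # inside the fovea: leave at full resolution
            avg = (image[y][x] + image[y][x + 1]
                   + image[y + 1][x] + image[y + 1][x + 1]) / 4.0
            for dy in (0, 1):
                for dx in (0, 1):
                    out[y + dy][x + dx] = avg
    return out
```

The bandwidth saving comes from the same trade the eye makes: only the region of interest carries fine detail, so the peripheral blocks compress or transmit at a fraction of the full-resolution cost.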
In the context of the European Mars Rover development, prototype vehicles for wheeled, tracked, and tumbling locomotion have been implemented and tested. As a spin-off, the versatile tracked and wheeled MERLIN (Mobile Experimental Rover for Locomotion and Intelligent Navigation) rovers for outdoor applications were developed. This paper addresses the navigation sensors and the control system, which support the locomotion devices in dealing with rough terrain, including autonomous obstacle detection and avoidance strategies. In addition, an advanced telematics infrastructure for remote sensor data acquisition and tele-operation has been realized for these vehicles.
Maintaining a solid radio communication link between a mobile robot entering a building and an external base station is a well-recognized problem. Modern digital radios, while affording high bandwidth and Internet-protocol-based automatic routing capabilities, tend to operate on line-of-sight links. The communication link degrades quickly as a robot penetrates deeper into the interior of a building. This project investigates the use of mobile autonomous communication relay nodes to extend the effective range of a mobile robot exploring a complex interior environment. Each relay node is a small mobile slave robot equipped with sonar, ladar, and an 802.11b radio repeater. For demonstration purposes, four Pioneer 2-DX robots are used as autonomous mobile relays, with SSC-San Diego's ROBART III acting as the lead robot. The relay robots follow the lead robot into a building and are automatically deployed at various locations to maintain a networked communication link back to the remote operator. With their on-board external sensors, they also act as rearguards to secure areas already explored by the lead robot. As the lead robot advances and RF shortcuts are detected, relay nodes that become unnecessary will be reclaimed and reused, all transparently to the operator. This project takes advantage of recent research results from several DARPA-funded tasks at various institutions in the areas of robotic simulation, ad hoc wireless networking, route planning, and navigation. This paper describes the progress of the first six months of the project.
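The deployment rule can be reduced to a toy form. The sketch below is our own simplification, not the project's actual criterion: it models the route as 1-D positions along a corridor and uses a fixed radio range as a crude stand-in for real RSSI and line-of-sight link assessment, parking a relay at the last in-range point whenever the lead robot is about to outrun its link.

```python
def plan_relay_drops(path, radio_range=30.0):
    """Given 1-D positions along the lead robot's route (building entrance
    at path[0]), return the positions where a relay robot should be parked
    so that every hop stays within radio_range. The fixed range is a crude
    stand-in for the project's actual RSSI / line-of-sight link assessment."""
    drops = []
    anchor = path[0]                # base station at the entrance
    prev = path[0]
    for p in path[1:]:
        if p - anchor > radio_range:
            drops.append(prev)      # park a relay at the last in-range point
            anchor = prev
        prev = p
    return drops
```

For a 100 m corridor sampled every 10 m with a 30 m range, relays land at 30, 60, and 90 m; reclaiming a node when an RF shortcut appears would simply remove a drop whose neighbors can reach each other directly.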
While operations by land vehicles in open terrain currently constitute the primary military mode of operation, it is expected that an increasing number of conflicts will occur in urban settings. Urban robots must operate under mobility, communication, perception, and control conditions far more demanding than their open-terrain counterparts. The Defense Research Establishment Suffield (DRES) has been tasked to develop robots, unmanned vehicles, and support systems to aid the Canadian Forces in urban operations. In preparation for this role, DRES personnel were invited to participate in operation Urban Ram, a large urban war game held on the grounds of CFB Griesbach in Edmonton. This paper presents the lessons learned at Urban Ram regarding the roles robots could fulfill and the challenges of urban environments that must be overcome. Also presented are robotic concepts inspired by Urban Ram; specifically discussed is High Utility Robotics (HUR), which combines geometric shape shifting with function morphing to provide the general-purpose, high-mobility, broad-application robots required for urban environments.
A hybrid platform for an Unmanned Ground Vehicle (UGV), one with legs and wheels, was initially considered to yield a design that possessed a high degree of intrinsic mobility. Integrating a high level of mobility reduces the UGV's perception and computational requirements for successful semi-autonomous or autonomous terrain negotiation. An investigation into the dynamic capabilities of the hybrid design revealed a wide range of otherwise impossible behaviors. The widened scope of maneuvers enabled the simulated robot to negotiate higher obstacles, clear larger ditches, and generally improved its rough-terrain mobility. A scalability study was also undertaken to predict the dynamic potential of various platform sizes and to aid in the selection of design specifications such as motor torque-speed curves. The hybrid design of the platform (legs with active wheels) proved invaluable in achieving these dynamic behaviors and revealed that the leg-wheel design was as fundamental to dynamic capabilities as it was to intrinsic mobility.
Hydraulic steering control systems are widely used on a variety of vehicles. This paper presents the application of a proportional-integral-derivative (PID) controller with a non-linear compensation algorithm to control an electrohydraulic (E/H) steering system on an agricultural vehicle. In this controller, the non-linear compensation algorithm was used to compensate for the deadband and saturation of the electrohydraulic steering system, and the PID loop was used to minimize the tracking error in steering control. The controller was tested on a Deere 8200 tractor driving on both cement pavement and farm fields. The test results indicate that the nonlinearity-compensated PID controller performed well in agricultural vehicle steering control.
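The structure of such a controller can be sketched as a standard PID loop whose output passes through a deadband/saturation compensator before reaching the valve. This is a minimal illustration with made-up gains, deadband, and limit values, not the paper's identified E/H parameters.

```python
def compensate(u, deadband=0.15, u_max=1.0):
    """Map a normalized controller command past the valve deadband and clamp
    at saturation. The deadband and limit values are illustrative, not the
    paper's identified values."""
    if u == 0.0:
        return 0.0
    sign = 1.0 if u > 0 else -1.0
    return sign * min(deadband + abs(u) * (u_max - deadband), u_max)

class PID:
    """Textbook discrete PID loop producing the raw steering command."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

The point of the compensator is that even a tiny nonzero PID command is shifted past the valve's dead zone, so small steering corrections actually move the wheels instead of being swallowed by the hydraulics.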
Interest by the Armed Forces in Unmanned Ground Vehicles (UGVs) is at its highest point ever. The concepts, prototypes, and user appraisals that the Joint Robotics Program (JRP) has generated over the last decade are finally coming to fruition as all four Services are including UGVs in their transformation planning. The mission of the congressionally-directed JRP is to research, develop, acquire and field UGV systems for the United States Armed Forces. The program is structured to field first generation systems, mature promising technologies and then upgrade capabilities by means of an evolutionary strategy. In the near term, acquisition programs emphasize teleoperation over diverse terrain, more autonomous functioning for structured environments, and extensive opportunities for users to operate UGVs. In the far term, the JRP and the Army jointly sponsor a tech base effort (Demo III and Intelligent Mobility programs) for autonomous mobility in unstructured environments. Last October, the Demo III program held a highly successful demonstration of autonomous mobility at Fort Indiantown Gap, PA. This demonstration featured several Experimental Unmanned Vehicles (XUVs) operating semi-autonomously and cooperatively over challenging and diverse terrain. Additionally, the JRP has initiated a Joint Architecture for Unmanned Ground Systems (JAUGS) to enable component interoperability and technology insertion. The author will update the conference on recent activities by the JRP, as well as other related Department of Defense ground robotics programs.
The Defense Advanced Research Projects Agency (DARPA) and the US Army (ASAALT) have jointly funded several FCS research initiatives in ground robotics. The Unmanned Ground Combat Vehicle (UGCV) and Perception for Off-Road Mobility (PerceptOR) programs are the major elements of this joint ground robotic effort. These programs were initiated in fiscal year 2001 and have progressed through their first phase. The UGCV program, now in Phase IB, has downselected from 11 concept designs to 4. Phase IB focuses on detailed design of teams' concepts in anticipation of the prototype construction Phase II and initial vehicle roll-out near the end of the 2002 calendar year. This paper highlights program findings to date as a result of the initial phase, and illustrates plans for Phase II prototype testing. The PerceptOR program, currently in Phase II, has completed its Phase I, which involved development of a perception system for operation on a commercial All Terrain Vehicle. This paper describes the effort of the first phase, and outlines the plans for vehicle testing in Phases II and III.
Robotics is one of the fundamental enabling technologies required to meet the U.S. Army's vision to be a strategically responsive force capable of domination across the entire spectrum of conflict. The Future Combat Systems (FCS) program is poised to be the first component of this force to utilize robotic or unmanned systems. The U.S. Army Tank Automotive Research, Development & Engineering Center (TARDEC), in partnership with the U.S. Army Research Laboratory (ARL), has initiated an effort to develop a near-term robotics capability for FCS entitled the Robotic Follower Advanced Technology Demonstration program.
The U.S. Army has committed to a paradigm shift in the way future ground military operations will be conducted. It envisions highly mobile, lethal, and survivable forces that seamlessly combine manned and unmanned elements. To support this vision, the U.S. Army Research Laboratory, together with an alliance of government, industrial and academic organizations, has embarked upon a concerted research program focusing upon development of the technologies required for autonomous ground mobility by unmanned systems. This paper will discuss technical activities of the past year and research directions for the future.
The Unmanned Ground Vehicles/Systems Joint Project Office (UGV/S JPO) is developing and fielding a variety of tactical robotic systems for the Army and Marine Corps. The Standardized Robotic System (SRS) provides a family of common components that can be installed in existing military vehicles to allow unmanned operation of the vehicle and its payloads. The Robotic Combat Support System (RCSS) will be a medium-sized unmanned system with interchangeable attachments, allowing a remote operator to perform a variety of engineering tasks. The Gladiator Program is a USMC initiative for a small- to medium-sized, highly mobile UGV to conduct scout/surveillance missions and to carry various lethal and non-lethal payloads. Acquisition plans for these programs require preplanned evolutionary block upgrades to add operational capability as new technology becomes available. This paper discusses technical and performance issues that must be resolved and the enabling technologies needed for near-term block upgrades of these first-generation robotic systems. Additionally, two Joint Robotics Program (JRP) initiatives, Robotic Acquisition through Virtual Environments and Networked Simulations (RAVENS) and Joint Architecture for Unmanned Ground Systems (JAUGS), will be discussed. RAVENS and JAUGS will be used to efficiently evaluate and integrate new technologies to be incorporated in system upgrades.
This paper describes our experience implementing navigation behavior for two different autonomous multi-robot systems using two very different approaches. We describe the problems encountered, their solutions, and the extensions necessary to support planning for multiple robots in our application domains. We conclude that there are many applications of path planning that would be well served by the introduction of a little domain-specific intelligent behavior as a substitute for brute-force path planning over unnecessarily large configuration spaces.
Unmanned ground vehicles (UGVs) traversing open terrain require the capability of identifying non-geometric barriers or impediments to navigation, such as soft soil, fine sand, mud, snow, compliant vegetation, washboard, and ruts. Given the ever-changing nature of these terrain characteristics, for a UGV to be able to consistently navigate such barriers, it must have the ability to learn from and to adapt to changes in these environmental conditions. As part of ongoing research co-operation with the Defense Research Establishment Suffield (DRES), Scientific Instrumentation Ltd. (SIL) has developed a Terrain Simulator that allows for the investigation of terrain perception and of learning techniques.
In earlier research, the Center for Self-Organizing and Intelligent Systems (CSOIS) at Utah State University (USU) was funded by the US Army Tank-Automotive and Armaments Command's (TACOM) Intelligent Mobility Program to develop and demonstrate enhanced mobility concepts for unmanned ground vehicles (UGVs). One of the several outgrowths of this work has been the development of a grammar-based approach to intelligent behavior generation for commanding autonomous robotic vehicles. In this paper we describe the use of this grammar for enabling autonomous behaviors. A supervisory task controller (STC) sequences high-level action commands (taken from the grammar) to be executed by the robot. It takes as input a set of goals and a partial (static) map of the environment and produces, from the grammar, a flexible script (or sequence) of the high-level commands that are to be executed by the robot. The sequence is derived by a planning function that uses a graph-based heuristic search (the A* algorithm). Each action command has specific exit conditions that are evaluated by the STC following each task completion or interruption (in the case of disturbances or new operator requests). Depending on the system's state at task completion or interruption (including updated environmental and robot sensor information), the STC invokes a reactive response. This can include sequencing the pending tasks or initiating a re-planning event, if necessary. Though applicable to a wide variety of autonomous robots, an application of this approach is demonstrated via simulations of ODIS, an omni-directional inspection system developed for security applications.
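The planning core, a graph-based heuristic search with A* over high-level commands, can be sketched generically. The command graph below is a toy with hypothetical command names and costs, not the actual ODIS grammar; with a zero heuristic (admissible but uninformative) the search degrades to Dijkstra's algorithm.

```python
import heapq

def a_star(start, goal, neighbors, heuristic):
    """Generic A* search; neighbors(n) yields (next_node, step_cost) pairs."""
    frontier = [(heuristic(start), 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for nxt, cost in neighbors(node):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + heuristic(nxt), g2, nxt, path + [nxt]))
    return None  # no sequence of commands reaches the goal

# Toy command graph; the command names are hypothetical, not the ODIS grammar.
graph = {
    "start":    [("approach", 1), ("scan", 4)],
    "approach": [("scan", 1)],
    "scan":     [("inspect", 1)],
    "inspect":  [],
}
plan = a_star("start", "inspect", lambda n: graph[n], lambda n: 0)
```

The STC's re-planning step would simply re-run such a search from the current state whenever a task's exit conditions report a disturbance or a new operator request.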
Situational Awareness (SA) is a critical component of effective autonomous vehicles, reducing operator workload and allowing an operator to command multiple vehicles or simultaneously perform other tasks. Our Scene Estimation & Situational Awareness Mapping Engine (SESAME) provides SA for mobile robots in semi-structured scenes, such as parking lots and city streets. SESAME autonomously builds volumetric models for scene analysis. For example, a SESAME-equipped robot can build a low-resolution 3-D model of a row of cars, then approach a specific car and build a high-resolution model from a few stereo snapshots. The model can be used onboard to determine the type of car and locate its license plate, or the model can be segmented out and sent back to an operator who can view it from different viewpoints. As new views of the scene are obtained, the model is updated and changes are tracked (such as cars arriving or departing). Since the robot's position must be accurately known, SESAME also has automated techniques for determining the position and orientation of the camera (and hence, robot) with respect to existing maps. This paper presents an overview of the SESAME architecture and algorithms, including our model generation algorithm.
In this paper, we present a methodology to assess the results of image processing algorithms for unstructured road edge detection. We aim at performing a quantitative, comparative, and repeatable evaluation of numerous algorithms over a reference image database in order to direct our future developments in navigation algorithms for military unmanned vehicles. The main scope of this paper is the constitution of this database and the definition of the assessment metrics.
Intelligent vehicles are beginning to appear on the market, but so far their sensing and warning functions only work on the open road. Functions such as run-off-road warning or adaptive cruise control are designed for the uncluttered environments of open highways. We are working on the much more difficult problem of sensing and driver interfaces for driving in urban areas. We need to sense cars and pedestrians and curbs and fire plugs and bicycles and lamp posts; we need to predict the paths of our own vehicle and of other moving objects; and we need to decide when to issue alerts or warnings to both the driver of our own vehicle and (potentially) to nearby pedestrians. No single sensor is currently able to detect and track all relevant objects. We are working with radar, ladar, stereo vision, and a novel light-stripe range sensor. We have installed a subset of these sensors on a city bus, driving through the streets of Pittsburgh on its normal runs. We are using different kinds of data fusion for different subsets of sensors, plus a coordinating framework for mapping objects at an abstract level.
Most robotic platforms have, up to this point, been designed with emphasis placed on improving mobility technologies, with minimal emphasis placed on payloads and mission execution. Using a top-down approach, Mesa Associates, Inc. identified specific UGV mission applications and structured its MATILDA platform around these applications to derive vehicle mobility and motion control requirements. Specific applications identified for the MATILDA platform include target surveillance, explosive device neutralization, material pickup and transport, weapon transport and firing, and law enforcement. Current performance results, lessons learned, technical hurdles, and future applications are examined.
Autonomous systems have not become mainstream because of shortcomings in the sensing of the environment and in the intelligence required to react to it appropriately. Another hindrance to their maturity is the lack of a standard. Great innovations have been made in isolated applications for both sensing and intelligence, but it has been difficult to leverage them in other systems. Autonomous Solutions Inc. has built a development platform based on the JPO JAUGS standard, enabling rapid development of compliant hardware and software modules.
We describe the methodology, tools, and technologies for designing and implementing communication and control systems for networked automated or driver-assist vehicles. In addressing design, we discuss enabling methodologies and our suite of enabling computational tools for formal modeling, simulation, and implementation. We illustrate our description with design, development, and implementation work we have performed for Automated Highway Systems, Autonomous Underwater Vehicles, Mobile Offshore Base, Unmanned Air Vehicles, and Cooperative Adaptive Cruise Control. We conclude with the assertion, borne from our experience, that ground vehicle systems with any degree of automated operation could benefit from the type of integrated development process that we describe.
A fiber-optic gyroscope (FOG)-aided GPS fusion system was developed for positioning an off-road vehicle; the system consists of a six-axis inertial measurement unit (IMU) and a Garmin global positioning system (GPS). An observation-based Kalman filter was designed to integrate the readings from both sensors so that the noise in the GPS signal was smoothed out, the redundant information was fused, and a high update rate of output signals was obtained. The drift error of the FOG was also compensated. By using this system, a low-cost GPS can replace an expensive GPS of higher accuracy. Measurement and fusion results showed that the positioning error of the vehicle estimated using this fusion system was greatly reduced from that of a GPS-only system. At a vehicle speed of about 1.34 m/s, the mean bias along the East axis of the fusion system was 0.48 m compared to the GPS mean bias of 1.28 m, and the mean bias along the North axis was reduced to 0.32 m from 1.48 m. The update frequency of the fusion system was increased to 9 Hz from the 1 Hz of the GPS. A prototype system was installed on a sprayer for vehicle positioning measurement.
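The filtering idea can be reduced to a minimal 1-D sketch: dead-reckon with an IMU-derived velocity at a high rate, and correct with a GPS position whenever a fix arrives. The noise variances, the 10:1 rate ratio, and the 1-D model below are illustrative assumptions; the paper's filter works on full vehicle position with FOG heading compensation.

```python
def fuse(gps_fixes, imu_velocity, dt=0.1, gps_every=10, q=0.01, r=1.0):
    """1-D position estimate: dead-reckon with an IMU-derived velocity at
    every step, correct with a GPS position fix whenever one arrives.
    Process/measurement variances q and r are illustrative values."""
    x, p = gps_fixes[0], r          # initialize at the first fix
    estimates, fix_idx = [], 1
    for k in range(1, gps_every * (len(gps_fixes) - 1) + 1):
        x += imu_velocity * dt      # predict: integrate velocity
        p += q                      # uncertainty grows between fixes
        if k % gps_every == 0:      # a GPS fix is due
            z = gps_fixes[fix_idx]
            fix_idx += 1
            gain = p / (p + r)      # Kalman gain
            x += gain * (z - x)     # correct toward the fix
            p *= 1 - gain
        estimates.append(x)
    return estimates

# Vehicle moving at about 1.34 m/s; 1 Hz GPS fixes with noise.
track = fuse(gps_fixes=[0.0, 1.4, 2.6, 4.1], imu_velocity=1.34)
```

The output runs at the dead-reckoning rate rather than the GPS rate, which is the same mechanism by which the paper's system raises the update frequency from 1 Hz to 9 Hz.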
Many mobile robots use Polaroid ultrasonic sensors for obstacle avoidance. This paper describes the experimental characterization of these sensors using a unique, fully automated testbed system. Using this testbed, we gathered large data sets of 5,000-16,000 data points per experiment, in a repeatable fashion and without human supervision. The experimental characterization reported here focused on comparing the beamwidth of a single sonar with that of a dual-sonar phased array. For the single sonar we found that flat walls trigger echo signals at angles up to +/- 42 degrees, well beyond the traditionally assumed beamwidth of +/- 15 degrees. We determined that these echoes result from the secondary and tertiary lobes of the well-known multi-lobed propagation pattern of Polaroid ultrasonic sensors. In contrast, the dual-sonar phased array triggered echo signals only within beamwidths of 4-6 degrees. The results in this paper were obtained for two test targets: a specular surface and a cylindrical object.
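The narrowing effect of the dual-sonar phased array can be illustrated with the classical two-element array factor; the spacing and wavelength values below are assumed for illustration, not the testbed's actual geometry, and a real sensor multiplies this factor by the element's own multi-lobed pattern:

```python
import math

def array_factor(theta_deg, spacing_m, wavelength_m):
    """Normalized far-field pattern of two in-phase point sources:
    |cos(pi * d / lambda * sin(theta))|."""
    th = math.radians(theta_deg)
    return abs(math.cos(math.pi * spacing_m / wavelength_m * math.sin(th)))

def half_beamwidth_deg(spacing_m, wavelength_m, level=0.5):
    """First angle (searched in 0.1 degree steps) at which the pattern
    drops below `level` -- the half-power half-beamwidth for level=0.5."""
    for tenth in range(1, 900):
        if array_factor(tenth / 10.0, spacing_m, wavelength_m) < level:
            return tenth / 10.0
    return 90.0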
High-speed unmanned ground vehicles have important potential applications, including reconnaissance, material transport, and planetary exploration. During high-speed operation, it is important for a vehicle to sense changing terrain conditions, and modify its control strategies to ensure aggressive, yet safe, operation. In this paper, a framework for terrain characterization and identification is briefly described, composed of 1) vision-based classification of upcoming terrain, 2) terrain parameter identification via wheel-terrain interaction analysis, and 3) terrain classification based on auditory wheel-terrain contact signatures. The parameter identification algorithm is presented in detail. The algorithm derives from simplified forms of classical terramechanics equations. An on-line estimator is developed to allow rapid identification of critical terrain parameters. Simulation and experimental results show that the terrain estimation algorithm can accurately and efficiently identify key terrain parameters for sand.
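The parameter identification step rests on classical terramechanics relations such as the Mohr-Coulomb shear equation tau = c + sigma * tan(phi). As an illustration of the fitting idea only (the paper's on-line estimator is more elaborate), a batch least-squares sketch:

```python
import math
import numpy as np

def fit_mohr_coulomb(sigma, tau):
    """Least-squares fit of the Mohr-Coulomb relation
    tau = c + sigma * tan(phi) to normal/shear stress pairs.
    Returns (cohesion c, internal friction angle phi in degrees)."""
    sigma = np.asarray(sigma, dtype=float)
    tau = np.asarray(tau, dtype=float)
    A = np.column_stack([np.ones_like(sigma), sigma])  # [1, sigma] design
    (c, tan_phi), *_ = np.linalg.lstsq(A, tau, rcond=None)
    return float(c), math.degrees(math.atan(tan_phi))
```

Cohesion and friction angle are exactly the "key terrain parameters" whose rapid identification the abstract reports for sand; an on-line version would update this fit recursively as new wheel-terrain stress estimates arrive.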
This paper describes an approach to estimating, in real time, the degree to which an articulated robotic vehicle is undergoing wheel slip and/or sinkage in soft terrain. Robotic vehicles generally have hazard avoidance sensors that measure the shape of the sensible surface, and these can be used to predict the articulation pose the vehicle will assume as it moves over the surface. An articulated vehicle (one with three or more wheels on each side) can directly measure the shape of the load-bearing surface by combining inclination and articulation sensing. Delays between the expected and actual articulations can be explained by wheel slippage, while differences between them can be explained by sinkage below the sensed surface. If one assumes that successive wheels on each side follow the same profile as the front wheel (sinking the same amount, if any, into the soil), then it is possible to estimate sinkage and slippage separately. A Maximum-A-Posteriori estimation procedure formalizing this heuristic approach is developed and simulated, and the results are presented and discussed.
This paper investigates the kinetic behavior of a planetary rover, with attention to tire-soil traction mechanics and articulated body dynamics, and thereby studies its control when the rover travels over natural rough terrain. Experiments are carried out with a rover testbed to observe the physical behavior of soils and to model the traction mechanics, using the tire slip ratio as a state variable. The relationship between the load-traction factor and the slip ratio is modeled theoretically and then verified by experiments, and specific parameters characterizing the soil are identified. A dynamic simulation model is developed that accounts for the characteristics of the wheel actuators, the mechanics of tire-soil traction, and the articulated body dynamics of the suspension mechanism. Simulation results are compared with the corresponding experimental data and verified to represent the physical behavior of the rover. Finally, a control method is proposed and tested. The proposed method keeps the slip ratio small and limits excessive tire force, so that the rover can traverse obstacles without digging into the soil or becoming stuck.
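The slip ratio used as the state variable has the standard driving definition s = (r*omega - v) / (r*omega). A minimal sketch of that definition together with a generic proportional slip limiter (an illustrative rule in the spirit of the proposed control, not the authors' method):

```python
def slip_ratio(wheel_radius, omega, v):
    """Driving slip ratio s = (r*omega - v) / (r*omega): zero when wheel
    rim speed matches ground speed, one when spinning in place."""
    rim_speed = wheel_radius * omega
    return 0.0 if rim_speed == 0 else (rim_speed - v) / rim_speed

def limit_torque(torque_cmd, s, s_max=0.2, gain=5.0):
    """Scale the commanded torque down proportionally once slip exceeds
    s_max, never below zero. s_max and gain are illustrative values."""
    if s <= s_max:
        return torque_cmd
    return max(0.0, torque_cmd * (1.0 - gain * (s - s_max)))
```

Keeping s small keeps the wheel on the rising part of the load-traction curve, which is why limiting torque when slip grows prevents the digging-in behavior the abstract describes.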
Proprioception is the sense of body position and movement that supports the control of many automatic motor functions such as posture and locomotion. This concept, normally relegated to the fields of neural physiology and kinesiology, is now being applied in unmanned mobile robotics. This paper examines the development of proprioceptive behaviors for controlling an unmanned ground vehicle. First, we discuss the field of behavioral control of mobile robots. Next, we discuss proprioception and the development of proprioceptive sensors. We then focus on the development of a unique neural-fuzzy architecture used to incorporate the control behaviors driven directly by the proprioceptive sensors. Finally, we present a simulation experiment in which a simple multi-sensor robot, utilizing both external and proprioceptive sensors, is given the task of navigating unknown terrain to a known target position. Results of the mobile robot using this fusion methodology are discussed.
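As a toy illustration of fuzzy behavior blending driven by a proprioceptive signal (the membership breakpoints and speed commands below are invented, and the paper's neural-fuzzy architecture is far richer):

```python
def tri_membership(x, a, b, c):
    """Triangular fuzzy membership: 0 outside (a, c), peaking at 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuse_behaviors(slip):
    """Blend a goal-seeking speed with a cautious speed according to the
    fuzzy degree of wheel slip (a proprioceptive signal)."""
    low = tri_membership(slip, -0.4, 0.0, 0.4)   # "slip is low"
    high = tri_membership(slip, 0.2, 0.6, 1.0)   # "slip is high"
    seek_speed, cautious_speed = 1.0, 0.3        # normalized commands
    total = low + high
    if total == 0.0:
        return cautious_speed                    # default outside coverage
    return (low * seek_speed + high * cautious_speed) / total
```

The defuzzified output varies smoothly between the two behaviors as the proprioceptive signal changes, which is the basic mechanism the architecture uses to arbitrate among control behaviors.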
This paper deals with optimizing the locomotion performance of vehicles used for planetary exploration. The design of an innovative reconfigurable mini-rover is presented. A control process that optimizes stability and global traction performance is then developed. A method to identify the wheel-ground mechanical contact properties in situ is proposed and used to determine an optimal traction torque. Experimental and simulation results show that rover stability is significantly enhanced by the proposed control method.
4D/RCS consists of a multi-layered multi-resolutional hierarchy of computational nodes each containing elements of sensory processing (SP), world modeling (WM), value judgment (VJ), and behavior generation (BG). At the lower levels, these elements generate goal-seeking reactive behavior. At higher levels, they enable goal-defining deliberative behavior. At low levels, range in space and time is short and resolution is high. At high levels, distance and time are long and resolution is low. This enables high-precision fast-action response over short intervals of time and space at low levels, while long-range plans and abstract concepts are being formulated over broad regions of time and space at high levels. 4D/RCS closes feedback loops at every level. SP processes focus attention (i.e., window regions of space or time), group (i.e., segment regions into entities), compute entity attributes, estimate entity state, and assign entities to classes at every level. WM processes maintain a rich and dynamic database of knowledge about the world in the form of images, maps, entities, events, and relationships at every level. Other WM processes use that knowledge to generate estimates and predictions that support perception, reasoning, and planning at every level. 4D/RCS was developed for the Army Research Laboratory Demo III program. To date, only the lower levels of the 4D/RCS architecture have been fully implemented, but the results have been extremely positive. It seems clear that the theoretical basis of 4D/RCS is sound and the architecture is capable of being extended to support much higher levels of performance.
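The scaling of range and resolution across levels can be sketched as a chain of planner nodes, each decomposing its goal into subgoal chunks sized to the shorter horizon of the level below. The node names and horizons below are invented for illustration, and real 4D/RCS nodes also contain the SP, WM, and VJ elements described above:

```python
class RcsNode:
    """Skeleton of one computational node in a 4D/RCS-style hierarchy:
    higher levels plan over longer horizons and delegate downward."""
    def __init__(self, name, horizon_s, child=None):
        self.name = name
        self.horizon_s = horizon_s
        self.child = child          # next level down, None at the bottom

    def plan(self, waypoints):
        """Decompose a goal (a waypoint list) into bottom-level commands,
        chunking by the ratio of this horizon to the child's horizon."""
        if self.child is None:
            return [(self.name, w) for w in waypoints]
        chunk = max(1, round(self.horizon_s / self.child.horizon_s))
        cmds = []
        for i in range(0, len(waypoints), chunk):
            cmds.extend(self.child.plan(waypoints[i:i + chunk]))
        return cmds
```

Because each level replans only its own chunk, the bottom level can react over short intervals while upper levels hold longer-range plans, mirroring the fast-action/long-range split the architecture is built around.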
As part of the Army's Demo III project, a sensor-based system has been developed to identify roads and to enable a mobile robot to drive along them. A ladar sensor, which produces range images, and a color camera are used in conjunction to locate the road surface and its boundaries. Sensing is used to constantly update an internal world model of the road surface. The world model is used to predict the future position of the road and to focus the attention of the sensors on the relevant regions in their respective images. The world model also determines the most suitable algorithm for locating and tracking road features in the images based on the current task and sensing information. The planner uses information from the world model to determine the best path for the vehicle along the road. Several different algorithms have been developed and tested on a diverse set of road sequences. The road types include some paved roads with lanes, but most of the sequences are of unpaved roads, including dirt and gravel roads. The algorithms compute various features of the road images, including smoothness in the world model map and in the range domain, and color features and texture in the color domain. Performance in road detection and tracking is described, and examples of the system in action are shown.
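As a minimal illustration of one color-domain cue only (not the Demo III implementation, which also fuses ladar range smoothness and texture), pixels can be labeled by their distance to a road-color model:

```python
def classify_road_pixels(image, road_color, threshold):
    """Label each RGB pixel 1 (road-like) if its Euclidean distance to the
    current road-color model is below the threshold, else 0. `image` is a
    list of rows of (r, g, b) tuples; `road_color` would in practice be
    updated from previously confirmed road regions."""
    mask = []
    for row in image:
        mask.append([
            1 if sum((p[i] - road_color[i]) ** 2 for i in range(3)) ** 0.5
            < threshold else 0
            for p in row])
    return mask
```

A tracker can then fit road edges to the boundary of the labeled region and feed the result back to the world model, which in turn windows the next frame's search region.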
A novel method for communicating commands in hierarchical planning control is presented. This new method can be used to guarantee that plans created will be optimal within and across the planning graphs throughout the hierarchy. Boundary conditions are specified so that optimality is guaranteed. Specific hierarchical planning examples are given for a group of off-road autonomous ground vehicles.
In this paper, we describe a value-driven graph search technique that is capable of generating a rich variety of single- and multiple-vehicle behaviors. The generation of behaviors depends on cost and benefit computations that may involve terrain characteristics, line of sight to enemy positions, and the cost, benefit, and risk of traveling on roads. Depending on mission priorities and cost values, real-time planners can autonomously build appropriate behaviors on the fly, including road following, cross-country movement, stealthy movement, formation keeping, and bounding overwatch. This system follows NIST's 4D/RCS architecture, and a discussion of the world model, value judgment, and behavior generation components is provided. In addition, techniques for collapsing a multidimensional model space into a cost space and planning graph constraints are discussed. The work described in this paper has been performed under the Army Research Laboratory's Robotics Demo III program.
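The cost-and-benefit planning idea can be sketched with a Dijkstra search whose edge cost function folds mission factors into a single value; the grid, costs, and weighting below are invented for illustration:

```python
import heapq

def value_search(neighbors, cost, start, goal):
    """Dijkstra search over an implicit graph. `cost(u, v)` is where the
    value judgment lives: a weighted sum of factors such as terrain
    difficulty, exposure risk, and road benefit."""
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v in neighbors(u):
            nd = d + cost(u, v)
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [], goal                 # walk predecessors back to start
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1], dist[goal]
```

Changing the weights inside `cost` (e.g., heavily penalizing cells with line of sight to a threat) changes the emergent behavior from direct road following to stealthy cross-country movement without touching the search itself.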
This paper describes a world model that combines a variety of sensed inputs and a priori information and is used to generate on-road and off-road autonomous driving behaviors. The system is designed in accordance with the principles of the 4D/RCS architecture. The world model is hierarchical, with the resolution and scope at each level designed to minimize computational resource requirements and to support planning functions for that level of the control hierarchy. The sensory processing system that populates the world model fuses inputs from multiple sensors and extracts feature information, such as terrain elevation, cover, road edges, and obstacles. Feature information from digital maps, such as road networks, elevation, and hydrology, is also incorporated into this rich world model. The various features are maintained in different layers that are registered together to provide maximum flexibility in generation of vehicle plans depending on mission requirements. The paper includes discussion of how the maps are built and how the objects and features of the world are represented. Functions for maintaining the world model are discussed. The world model described herein is being developed for the Army Research Laboratory's Demo III Autonomous Scout Vehicle experiment.
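The layered, co-registered map idea can be sketched as feature grids that share one origin and resolution, so a single world coordinate indexes every layer at once. This is a toy structure with invented layer names and API, not the Demo III world model:

```python
class LayeredWorldModel:
    """Toy layered map: all feature grids share one origin and cell size,
    so one world (x, y) point addresses the same cell in every layer."""
    def __init__(self, origin, res, shape):
        self.origin = origin          # (x0, y0) of cell (0, 0), meters
        self.res = res                # cell size, meters
        self.shape = shape            # (rows, cols)
        self.layers = {}

    def add_layer(self, name, fill=0.0):
        rows, cols = self.shape
        self.layers[name] = [[fill] * cols for _ in range(rows)]

    def to_cell(self, x, y):
        """World coordinates to (row, col) in every registered layer."""
        x0, y0 = self.origin
        return int((y - y0) / self.res), int((x - x0) / self.res)

    def query(self, x, y):
        """All registered feature values at one world point."""
        r, c = self.to_cell(x, y)
        return {name: grid[r][c] for name, grid in self.layers.items()}
```

Because registration is shared, a planner can weight any combination of layers (elevation, obstacles, road network, hydrology) per mission without resampling, which is the flexibility the abstract emphasizes.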
In support of the Army vision for increased mobility, survivability, and lethality, we are investigating the use of ultra-wideband (UWB) radar technology to enhance unmanned ground vehicle missions. The ability of UWB radar to detect foliage-concealed objects could provide an important obstacle avoidance capability for robotic vehicles, improving their speed and maneuverability and consequently increasing the survivability of U.S. forces. This technology would address challenges that particularly confront robotic vehicles, such as large rocks hidden in tall grass and voids such as ditches and bodies of water. This paper describes electromagnetic model predictions of the radar cross section of potential robotic vehicle obstacles. These model predictions will be used to guide the data collection scenarios for the Army Research Laboratory (ARL) ultra-wideband BoomSAR system. Using a combination of the models, simulations, and BoomSAR data, we investigate the operating parameters (imaging angles, frequencies, bandwidth, etc.) for an obstacle-avoidance UWB radar.