The unmanned ground combat vehicle (UGCV) design developed by the SAIC team on the DARPA UGCV Program is summarized in this paper. This UGCV design provides exceptional performance against all of the program metrics and incorporates key attributes essential for high-performance robotic combat vehicles, including protection against 7.62 mm threats, C-130 and CH-47 transportability, and the ability to accept several relevant weapons payloads as well as advanced sensors and perception algorithms evolving from the PerceptOR program. The UGCV design incorporates a combination of technologies and design features, carefully selected through detailed trade studies, that provides optimum performance against mobility, payload, and endurance goals without sacrificing transportability, survivability, or life cycle cost. The design was optimized to maximize performance against all Category I metrics, and in each case its performance was validated with detailed simulations indicating that the vehicle exceeds those metrics. Mobility metrics were analyzed using high-fidelity VisualNastran vehicle models that incorporate the suspension control algorithms and controller cycle times; DADS/Easy5 3-D models and ADAMS simulations were also used to validate vehicle dynamics and control algorithms during obstacle negotiation.
One of the main tall poles that must be overcome to develop a fully autonomous vehicle is the computer's inability to understand its surrounding environment to the level required for the intended task. The military mission scenario requires a robot to interact with a complex, unstructured, dynamic environment (see the companion paper, "A High Fidelity Multi-Sensor Scene Understanding System for Autonomous Navigation"). The Mobile Autonomous Robot Software Self Composing Adaptive Programming Environment (MarsScape) perception research addresses three aspects of the problem: sensor system design, processing architectures, and algorithm enhancements. A prototype perception system has been demonstrated on robotic High Mobility Multipurpose Wheeled Vehicle and All Terrain Vehicle testbeds. This paper addresses the tall pole of processing requirements and the performance improvements obtained with the selected MarsScape processing architecture. The processor chosen is the Motorola AltiVec G4 PowerPC (PPC) (1998 Motorola, Inc.), a highly parallel commercial Single Instruction Multiple Data (SIMD) processor. Both derived perception benchmarks and actual perception subsystem code will be evaluated and compared against previous Demo II Semi-Autonomous Surrogate Vehicle processing architectures, along with desktop personal computers (PCs). Performance gains are highlighted with progress to date, and lessons learned and future directions are described.
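As a concrete illustration of the kind of kernel such a SIMD-versus-SISD comparison exercises, the sketch below shows a simple 3x3 smoothing pass in portable C. It is a hypothetical stand-in for the MarsScape benchmark code, not taken from it; on the AltiVec G4 the inner accumulation would be rewritten with vector instructions to process 16 eight-bit pixels at a time.

```c
/*
 * Hypothetical perception-style benchmark kernel (not the actual
 * MarsScape benchmark suite): a 3x3 box-filter smoothing pass of the
 * kind commonly timed when comparing SISD and SIMD architectures.
 */
#include <stdint.h>
#include <stddef.h>

void box3x3(const uint8_t *src, uint8_t *dst, size_t w, size_t h)
{
    for (size_t y = 1; y + 1 < h; y++) {
        for (size_t x = 1; x + 1 < w; x++) {
            unsigned sum = 0;
            /* Accumulate the 3x3 neighborhood around (x, y). */
            for (int dy = -1; dy <= 1; dy++)
                for (int dx = -1; dx <= 1; dx++)
                    sum += src[(y + dy) * w + (x + dx)];
            dst[y * w + x] = (uint8_t)(sum / 9);
        }
    }
}
```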
In order for an autonomous robot to “appropriately” navigate through a complex environment, it must have an in-depth understanding of its immediate surroundings. Appropriate navigation implies that the robot will avoid collision or contact with hazards, will not be falsely rerouted around traversable terrain by false hazard detections, and will exploit the terrain to maximize its concealment. Appropriate autonomous navigation requires the ability to detect and localize critical features in the environment, such as rocks, trees, ditches, holes, bushes, and water. These features have a wide range of characteristics, and multiple sensing phenomenologies are required to detect them all. Once data are acquired from these multiple phenomenologies, a mechanism is required to combine and analyze all of these disparate sources of information into one composite interpretation. In this paper we discuss the Demo III multi-sensor system for autonomous mobility and the “operator-trained” fusion system, O-NAV (Object NAVigation), that is used to build a labeled three-dimensional model of the environment immediately surrounding the robot vehicle so it can appropriately interact with its surroundings.
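The sketch below illustrates, with assumed labels and a naive evidence-accumulation rule, the kind of labeled grid representation such a fusion step produces; it is not the O-NAV data structure or its operator-trained fusion rule, both of which are described in the paper.

```c
/*
 * Illustrative labeled-grid cell (not the O-NAV representation): each
 * cell accumulates per-class evidence from different sensors, and the
 * composite label is simply the class with the most evidence.
 */
enum label { UNKNOWN = 0, GROUND, ROCK, TREE, BUSH, WATER, DITCH, NUM_LABELS };

struct cell {
    float evidence[NUM_LABELS]; /* accumulated per-class evidence        */
    float elevation;            /* smoothed height estimate for the cell */
};

/* Fold one sensor report (label, confidence, measured height) into a cell. */
void fuse_report(struct cell *c, enum label lbl, float conf, float z)
{
    c->evidence[lbl] += conf;                      /* accumulate evidence */
    c->elevation = 0.8f * c->elevation + 0.2f * z; /* naive height blend  */
}

/* Composite label for a cell: the class with the largest accumulated evidence. */
enum label classify(const struct cell *c)
{
    enum label best = UNKNOWN;
    for (int k = 1; k < NUM_LABELS; k++)
        if (c->evidence[k] > c->evidence[best])
            best = (enum label)k;
    return best;
}
```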
One of the principal roles of the Demo III Experimental Unmanned Ground Vehicle will be as a forward scout performing Reconnaissance, Surveillance, and Target Acquisition (RSTA) operations. This paper will present the elements of the preliminary design process for satisfying the rigorous Demo III ATR requirements, including military vehicle detection at a maximum range of 6 km and dismounted soldier detection at 2 km. The constituent design issues include sensor selection, sensor suite mounting and stabilization, processing architecture, and algorithm selection. In the context of this selection and design process, the lessons learned from previous Unmanned Ground Vehicle RSTA efforts will be introduced, the contractual and subsystem-derived requirements will be presented, and the interface issues between the RSTA subsystem and the navigation, mission execution, and communication subsystems will be discussed.
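A rough pixels-on-target calculation of the kind that drives such a sensor trade is sketched below; the instantaneous field of view (IFOV) and target dimensions used here are illustrative assumptions, not Demo III sensor parameters or requirements.

```c
/*
 * Back-of-the-envelope sensor sizing: how many pixels span a target's
 * critical dimension at the required range?  All numbers below are
 * assumptions for illustration only.
 */
#include <stdio.h>

int main(void)
{
    const double ifov_rad   = 0.10e-3; /* assumed 0.10 mrad per pixel       */
    const double vehicle_m  = 2.3;     /* assumed vehicle critical dimension */
    const double dismount_m = 0.5;     /* assumed dismount critical dimension */

    /* Pixels across the critical dimension at the stated ranges. */
    double px_vehicle  = vehicle_m  / (6000.0 * ifov_rad);
    double px_dismount = dismount_m / (2000.0 * ifov_rad);

    printf("vehicle at 6 km:  %.1f pixels across\n", px_vehicle);
    printf("dismount at 2 km: %.1f pixels across\n", px_dismount);
    /* Johnson-style criteria then dictate how many pixels are needed for
       detection versus recognition, driving the sensor and optics choice. */
    return 0;
}
```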
This paper will summarize the Autonomous Mobility system for the Demo III program. The autonomous mobility system involves issues in algorithms, sensors, and processing architectures. We describe some of the history and the general philosophies that guided us toward the design described in this paper.
This paper will provide a summary of the methodology, metrics, analysis, and trade study efforts for the preliminary design of the Vetronics Processing Architecture (PA) system based on the Demo III Experimental Unmanned Ground Vehicle (XUV) program requirements. We will document and describe both the provided and the analytically derived system requirements expressed in the proposal. Our experience with previous mobility and Reconnaissance, Surveillance, and Target Acquisition (RSTA) systems designed and implemented for the Demo II Semi-Autonomous Surrogate Vehicle and the Mobile Detection, Assessment and Response System will be used to describe lessons learned as applied to the XUV PA architecture, including single board computers, card cage buses, real-time and non-real-time processors, card-cage-to-card-cage communications, and imaging and radar pre-processor selection. We have selected an initial architecture methodology.
A goal of the Surrogate Semi-Autonomous Vehicle (SSV) program is to have multiple vehicles navigate autonomously and cooperatively with other vehicles. This paper describes the process and tools used in porting unmanned ground vehicle (UGV)/SSV autonomous mobility and target recognition algorithms from a SISD (single instruction, single data) processor architecture (a Sun SPARC workstation running C/UNIX) to a MIMD (multiple instruction, multiple data) parallel processor architecture (the Paragon, a parallel set of i860 processors running C/UNIX). It discusses the gains in performance and the pitfalls of such a venture. It also examines the merits of this processor architecture and programming paradigm, based on this conceptual prototyping effort, for meeting the final SSV demonstration requirements.
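A minimal sketch of the row-band decomposition commonly used when moving image-processing code from a single workstation to a multi-node machine such as the Paragon is shown below; it illustrates only the partitioning arithmetic, with the message-passing calls omitted, and is not taken from the SSV port itself.

```c
/*
 * Hypothetical illustration of data decomposition for a MIMD port: the
 * image is split into horizontal bands, one band per node.  Only the
 * band arithmetic is shown; node-to-node communication is omitted.
 */
#include <stddef.h>

struct band { size_t row_start; size_t row_end; /* half-open range [start, end) */ };

/* Compute the band of rows assigned to 'node' out of 'num_nodes',
 * spreading any remainder rows across the first few nodes. */
struct band assign_band(size_t rows, size_t node, size_t num_nodes)
{
    size_t base = rows / num_nodes;
    size_t rem  = rows % num_nodes;
    struct band b;
    b.row_start = node * base + (node < rem ? node : rem);
    b.row_end   = b.row_start + base + (node < rem ? 1 : 0);
    return b;
}
```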
This paper presents an analysis of stopping distances for an unmanned ground vehicle achievable with selected ladar and stereo video sensors. Based on a stop-to-avoid response to detected obstacles, current passive stereo technology and existing ladars provide equivalent safe driving speeds. Only a proposed high-resolution ladar can detect small (8-inch) obstacles far enough ahead to allow driving speeds in excess of 10 miles per hour. The stopping distance analysis relates safe vehicle velocity to obstacle and sensor pixel sizes.
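The stopping-distance relation underlying such an analysis can be sketched as follows; the reaction latency, deceleration, angular pixel size, and pixels-on-obstacle threshold below are illustrative assumptions rather than values from the paper. A finer angular pixel size, such as that of the proposed high-resolution ladar, extends the detection range and therefore the safe speed.

```c
/*
 * Sketch of a stop-to-avoid speed limit: the obstacle must be detected
 * (enough pixels across it) at a range that covers reaction distance
 * plus braking distance.  All parameter values are assumptions.
 */
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double t_react = 1.0;    /* assumed sensing + processing latency, s */
    const double decel   = 2.5;    /* assumed braking deceleration, m/s^2     */
    const double obst_m  = 0.20;   /* roughly an 8-inch obstacle, m           */
    const double ifov    = 3.0e-3; /* assumed 3 mrad per pixel                */
    const double pix_req = 6.0;    /* assumed pixels needed on the obstacle   */

    /* Range at which the obstacle spans 'pix_req' pixels. */
    double r_detect = obst_m / (pix_req * ifov);

    /* Largest v satisfying v*t_react + v^2/(2*decel) <= r_detect. */
    double v = -decel * t_react +
               sqrt(decel * decel * t_react * t_react + 2.0 * decel * r_detect);

    printf("detection range: %.1f m, safe speed: %.1f m/s (%.1f mph)\n",
           r_detect, v, v * 2.23694);
    return 0;
}
```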
A goal of the Surrogate Semi-Autonomous Vehicle (SSV) program is to have multiple vehicles navigate autonomously and cooperatively with other vehicles. In this paper we address the steps taken to develop the global navigation system (GNS) for the SSV. We also discuss GNS components, specifications, and requirements for meeting SSV system needs, along with the development process, results, lessons learned, and remaining issues. We selected a low-cost solution because no integrated global positioning system/inertial navigation system (GPS/INS) was available at the time of selection.
A goal of the Surrogate Semi-Autonomous Vehicle (SSV) program is to have multiple vehicles navigate autonomously as well as cooperatively with other vehicles. In this paper we address the steps taken to develop the vehicle hardware. The vehicle selected is the military High Mobility Multipurpose Wheeled Vehicle (HMMWV), and the image bus selected can handle greater than 30 MB per second of image data. The laser scanner we will be using has a look-ahead distance of greater than 30 m. We have also started work on the vibration isolation system and on ways to correct image sensor position instability.
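A back-of-the-envelope throughput check of the kind behind the 30 MB per second image-bus figure is sketched below; the frame size, pixel depth, frame rate, and camera count are assumptions for illustration, not the SSV camera specifications.

```c
/*
 * Rough image-bus bandwidth check.  All camera parameters below are
 * assumed values for illustration only.
 */
#include <stdio.h>

int main(void)
{
    const double width  = 640.0; /* pixels per line (assumed)   */
    const double height = 480.0; /* lines per frame (assumed)   */
    const double bytes  = 1.0;   /* bytes per pixel (assumed)   */
    const double fps    = 30.0;  /* frames per second (assumed) */
    const int    cams   = 2;     /* stereo pair (assumed)       */

    double mb_per_s = width * height * bytes * fps * cams / 1.0e6;
    /* About 18 MB/s for these assumptions, leaving headroom on a
       bus rated at greater than 30 MB/s. */
    printf("required image bus throughput: %.1f MB/s\n", mb_per_s);
    return 0;
}
```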