To reduce the number of traffic accidents and to increase driver comfort, interest in designing driver assistance systems has grown in recent years. The principal problems are caused by having a moving observer (ego motion) in predominantly natural surroundings. In this paper we present a flexible architecture for a driver assistance system. The architecture can be subdivided into four parts: the object-related analysis, the knowledge base, the behavior-based scene interpretation, and the behavior planning unit. The object-related analysis is fed with data by the sensors (e.g., vision, radar). The sensor data are preprocessed (flexible sensor fusion) and evaluated (saliency map) in a search for object-related information (positions, types of objects, etc.). The knowledge base is represented by static and dynamic knowledge. It consists of a set of rules (e.g., traffic rules, physical laws) and additional information (e.g., GPS, lane information), and it is used implicitly by algorithms throughout the system. The scene interpretation combines the information extracted by the object-related analysis and inspects it for contradictions. It is strongly connected to the behavior planning, using only the information needed for the actual task. In the scene interpretation, consistent representations (e.g., a bird's-eye view) are organized and interpreted, and a scene analysis is performed. The results of the scene interpretation are used for decision making in behavior planning, which is controlled by the actual task. The influence of behavior planning on the behavior of the guided vehicle is limited to advisories, as no mechanical control (e.g., control of the steering angle) was implemented. An Intelligent Cruise Control (ICC) is shown as a spin-off of this architecture.
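As a rough illustration of how the four parts might interact, the following sketch wires them together as Python classes. All class names, method signatures, and the example rule are hypothetical illustrations of the dataflow, not the authors' implementation.

```python
# Minimal sketch of the four-part architecture described above.
# Every name and value here is an invented placeholder.

class ObjectAnalysis:
    """Fuses raw sensor data and extracts object-level information."""
    def process(self, sensor_frames):
        # e.g., fuse vision and radar, compute a saliency map, and
        # return detected objects with positions and types
        return [{"type": "car", "position": (12.0, 3.5)}]

class KnowledgeBase:
    """Static rules (traffic laws, physics) plus dynamic data (GPS, lanes)."""
    rules = {"min_gap_s": 2.0}

class SceneInterpretation:
    """Builds a consistent representation (e.g., bird's-eye view), checks it
    for contradictions, and keeps only task-relevant information."""
    def interpret(self, objects, kb):
        return {"lead_vehicle_gap_s": 1.4, "consistent": True}

class BehaviorPlanning:
    """Turns the interpreted scene into advice (no mechanical control)."""
    def plan(self, scene, kb):
        if scene["lead_vehicle_gap_s"] < kb.rules["min_gap_s"]:
            return "advise: increase following distance"
        return "advise: maintain speed"
```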
In automated license plate recognition systems, many reading errors are caused by inadequate character segmentation. In particular, character segmentation becomes difficult when the acquired vehicle images are seriously degraded. In this paper, we use computer vision techniques and propose a recognition-based segmentation method coupled with template matching and a neural network. This enhances the accuracy of a recognition system that aims to automatically read German license plates. Algorithmic improvements for a projection-based segmentation are described here. In the preprocessing stage, plate tilt detection and position refinement methods are developed to prepare the data for later processing. For separating touching characters, a discrimination function is presented based on a differential analysis of character contour distance. A conditionally recursive segmentation with recognition feedback is developed for effectively splitting touching characters and merging broken characters. We have implemented our algorithms in the intelligent camera system and obtained an improvement in the recognition rate. Currently, experiments conducted under greatly varying illumination conditions show an average recognition rate of 92%. Further improvement of the system is continuously undertaken for data sets acquired under different environmental conditions.
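The projection-based segmentation that the paper improves upon can be sketched minimally: split a binarized plate wherever the vertical ink profile drops to zero. The function name, the 0/1 image convention, and the min_width heuristic below are illustrative; the paper's tilt correction and recursive recognition feedback are not shown.

```python
import numpy as np

def segment_characters(binary_plate, min_width=3):
    """Split a binarized plate image (numpy array, ink = 1) into character
    regions using the vertical projection profile; zero columns mark gaps."""
    profile = binary_plate.sum(axis=0)           # ink count per column
    in_char, start, regions = False, 0, []
    for x, v in enumerate(profile):
        if v > 0 and not in_char:
            in_char, start = True, x             # entering a character
        elif v == 0 and in_char:
            in_char = False                      # leaving a character
            if x - start >= min_width:           # discard specks
                regions.append((start, x))
    if in_char:
        regions.append((start, len(profile)))
    return regions                               # list of (x_start, x_end)
```

Touching characters produce one over-wide region and broken characters two narrow ones, which is exactly what the paper's recursive split/merge with recognition feedback addresses.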
The Autosense II (ASII) vehicle detection and classification sensor has proven very successful at shape-based vehicle classification using rule-based algorithms. With the current class structure, the ASII achieved 98.5% accuracy on a random 10,000-vehicle test. Using the National Academy of Sciences' (NAS) vehicle database, which was collected under the IVHS-6 IDEA program, the ASII achieved 96.5% accuracy on a 50,000-vehicle database spanning a range of weather and traffic conditions.
This paper describes a robust car identification system for surveillance of parking lot entrances that runs completely on a stand-alone, low-cost intelligent camera. To meet accuracy and speed requirements, hierarchical classifiers and coarse-to-fine search techniques are applied at each recognition stage, from localization and segmentation to classification. The paper gives an overview of the applied image processing techniques and focuses in particular on the character classification part. Character recognition is based on a convolutional neural network, which proved to generalize better than a fully connected multi-layer perceptron.
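For readers unfamiliar with the approach, a small convolutional character classifier in the spirit of the paper might look like the PyTorch sketch below. The topology, input size (32x32), and class count (36 alphanumerics) are assumptions; the abstract does not specify them.

```python
import torch.nn as nn

class CharNet(nn.Module):
    """Hypothetical small CNN for single-character images (1 x 32 x 32)."""
    def __init__(self, n_classes=36):            # e.g., A-Z plus 0-9
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 5), nn.ReLU(), nn.MaxPool2d(2),   # 32 -> 28 -> 14
            nn.Conv2d(8, 16, 5), nn.ReLU(), nn.MaxPool2d(2),  # 14 -> 10 -> 5
        )
        self.classifier = nn.Linear(16 * 5 * 5, n_classes)

    def forward(self, x):                         # x: (N, 1, 32, 32)
        return self.classifier(self.features(x).flatten(1))
```

The convolutional layers share weights across image positions, which is the usual explanation for why such networks generalize better than a fully connected perceptron of comparable size.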
This paper presents an incident management simulation model for evaluating the performance of incident management strategies currently used and considered for future use. Five incident types are considered. Their severity degrees are assumed to be related to incident characteristics such as the number of vehicles involved, the number of injuries, and fatalities. The relationship between severity degree and incident properties is derived from data given in Ozbay et al. The same data source is used to estimate probabilistic quantities such as incident occurrence rates and clearance times. Detection and verification time, dispatch time, and clearance time can all be reduced through effective incident management strategies. The study presented in this paper focuses on reducing dispatch time through various deployment strategies for the Emergency Response Team (ERT). The ARENA simulation development package, which employs the Siman simulation language, is used to model and examine the effects of each incident management strategy.
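A toy version of such an event-driven model, sketched in Python rather than ARENA/Siman, illustrates how response delay depends on the number of deployed ERTs. All rates and durations below are invented placeholders, not values from Ozbay et al.

```python
import random

def simulate(horizon_h=1000.0, n_teams=2, rate_per_h=0.5):
    """Toy event-driven model: incidents arrive as a Poisson process and
    are served FCFS by the earliest-free ERT; service time is a random
    dispatch + clearance duration. Returns the mean wait before service."""
    random.seed(0)
    busy_until = [0.0] * n_teams                 # when each team frees up
    t, waits = 0.0, []
    while True:
        t += random.expovariate(rate_per_h)      # next incident time (h)
        if t > horizon_h:
            break
        i = min(range(n_teams), key=lambda k: busy_until[k])
        start = max(t, busy_until[i])            # wait if all teams busy
        waits.append(start - t)
        service = random.expovariate(1 / 0.3) + random.expovariate(1 / 1.0)
        busy_until[i] = start + service          # dispatch + clearance
    return sum(waits) / len(waits) if waits else 0.0

print(simulate(n_teams=1), simulate(n_teams=3))  # more teams, shorter waits
```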
This paper discusses the integration of intelligent systems and the use of sensor fusion within a Multi-Level Fusion Architecture (MUFA) designed for controlling the navigation of a tele-commanded Autonomous Guided Vehicle (AGV). The AGV can move autonomously within any office environment, following instructions issued by client stations connected to the Internet and reacting appropriately to different situations found in the real world. The modules which integrate the MUFA architecture are discussed, and special emphasis is given to the role played by the intelligent obstacle avoidance procedure. The AGV's detailed trajectory is first defined by a rule-based PFIELD algorithm from sub-goals established by a global trajectory planner. However, when an unexpected obstacle is detected by the neural network which performs the fusion of information produced by the vision system and sonar sensors, the obstacle avoidance procedure uses a special set of rules to redefine the AGV trajectory. The architecture of the neural network used for performing the sensor fusion function and the adopted set of rules are discussed. In addition, results of some simulation experiments demonstrate the ability of the system to define a new global trajectory when unexpected blocked regions are detected.
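A generic potential-field step of the kind a PFIELD-style planner computes is sketched below. The gains, influence radius d0, and step size are illustrative; the authors' actual rule set and neural fusion stage are not reproduced here.

```python
import numpy as np

def pfield_step(pos, goal, obstacles, k_att=1.0, k_rep=100.0, d0=2.0):
    """One step of a classic attractive/repulsive potential field:
    pull toward the current sub-goal, push away from nearby obstacles."""
    force = k_att * (goal - pos)                       # attractive term
    for obs in obstacles:
        d = np.linalg.norm(pos - obs)
        if 1e-6 < d < d0:                              # repel inside radius d0
            force += k_rep * (1/d - 1/d0) / d**3 * (pos - obs)
    step = force / (np.linalg.norm(force) + 1e-9)      # unit direction
    return pos + 0.1 * step                            # fixed step length
```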
A new task has recently been initiated at the Jet Propulsion Laboratory (JPL) to design, fabricate, and test an inflatable rover that can be used for various planetary applications, including operation on the Earth's moon, on Mars, on Saturn's moon Titan, and on Jupiter's moon Europa. The primary application is operation on Mars and, as such, the prototype model in development has large inflatable wheels (1.5-m diameter) that can traverse over 99 percent of the Martian surface, which is believed to be populated by rocks smaller than 0.5 m in diameter. The 20-kg prototype requires 18 W to travel at 2 km/hr on Earth, and could be capable of traveling at 30 km/hr on Mars with about 100 W of power. The bench-model unit has been tested with a simple 'joystick'-type radio control system as well as with a commercially available color-tracking camera system.
In response to ultra-high maneuverability vehicle requirements, Utah State University (USU) has developed an autonomous vehicle with unique mobility and maneuverability capabilities. This paper describes the mechanical design of the USU T2 Omni-Directional Vehicle (ODV). The T2 vehicle is a second-generation ODV and weighs 1350 lb, with six independently driven and steered wheel assemblies. The six-wheel independent steering system is capable of infinite rotation, presenting a unique solution to enhanced vehicle mobility requirements. The mechanical design of the wheel drive motors and the performance characteristics of the drive system are detailed. The steering and suspension system is discussed, and the design issues associated with these systems are detailed. The vehicle system architecture, vetronics architecture, control system architecture, and kinematic-based control law development are described.
Planning paths for omni-directional vehicles (ODVs) can be computationally infeasible because of the large space of possible paths. This paper presents an approach that avoids this problem through the use of abstraction: the possible maneuvers of the ODV are characterized as a grammar of parameterized mobility behaviors, and the terrain is described as a covering of object-oriented functional terrain features. The terrain features contain knowledge of how best to create mobility paths (sequences of mobility behaviors) through the object and around obstacles. Given an approximate map of the environment, the approach constructs a graph of mobility paths that link the location of the vehicle with the goals. Which of these paths the vehicle actually follows is determined by an A* search through the graph. The effectiveness of the strategy is demonstrated in actual tests with a real robotic vehicle.
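The search itself is textbook A*; a minimal version over such a graph of mobility paths might read as follows (the graph encoding and the heuristic h are assumptions).

```python
import heapq

def a_star(graph, start, goal, h):
    """A* over a graph of mobility paths: graph[n] yields (neighbor, cost)
    pairs and h(n) is an admissible estimate of remaining cost."""
    frontier = [(h(start), 0.0, start, [start])]     # (f, g, node, path)
    best_g = {}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in best_g and best_g[node] <= g:
            continue                                  # already expanded cheaper
        best_g[node] = g
        for nbr, cost in graph[node]:
            heapq.heappush(frontier,
                           (g + cost + h(nbr), g + cost, nbr, path + [nbr]))
    return None                                       # goal unreachable
```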
The two most popular methods used by current researchers for tackling the problem of autonomous mobile robot map building and navigation are (1) model-based methods, which construct 3D models of the environment, and (2) view-based, or topological, methods, in which the environment is represented in some non-topographical manner which the robot can typically relate directly to its sensors. Model-based methods have been shown to be good for local navigation, yet unreliable and computationally intensive for map building. View-based methods have been shown to be good for map building, but poor at local navigation.
This paper presents the results of recent research which yields a solution utilising the benefits of each of these two methods to overcome the inadequacies of the other. The result is a new approach to map building and path planning, one which is robust and extensible. In particular, the solution is largely unaffected by the size or complexity of the environment being learnt. It thus forms a key milestone in the evolution of AMR autonomy. It is shown that an AMR can successfully use this method to navigate around an unknown environment. Learning is accomplished by segmenting the environment into discrete 'locations' based upon visual similarity. Navigation is achieved by storing minimal distance-to-visual-feature information at all visually distinct locations. No self-consistent geometric map is stored, yet the geometric information stored can be used for navigation between visually distinct locations and for local navigation tasks such as obstacle avoidance.
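The segmentation into visually distinct locations can be illustrated with a minimal matching loop: compare the current view's signature against stored locations and open a new node when nothing matches. The cosine similarity, the threshold, and the signature representation are all illustrative, not the paper's method.

```python
import numpy as np

def update_topological_map(signature, locations, threshold=0.9):
    """Assign the current view signature (e.g., a feature histogram) to an
    existing 'location' if similar enough, else create a new location."""
    def similarity(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
    for loc_id, stored in enumerate(locations):
        if similarity(signature, stored) >= threshold:
            return loc_id                        # revisiting a known place
    locations.append(signature)                  # visually distinct: new node
    return len(locations) - 1
```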
Ideally, an algorithm used for either self-localization or pose estimation would be both efficient and robust. Many researchers have based their techniques on the absolute orientation research of B. K. P. Horn. As will be shown in this paper, while Horn's method performs well under additive Gaussian noise of large variance, mismatches and outliers have a more profound effect. In this paper, the authors develop a new closed-form solution to the absolute orientation problem, featuring techniques specifically designed to increase robustness during the critical rotation determination stage. We also include a comparative analysis of the various strengths and weaknesses of both Horn's method and the new techniques.
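For reference, the standard closed-form least-squares solution is shown below in its SVD (Kabsch) form, which yields the same rotation as Horn's quaternion method for matched point sets. As a least-squares estimator it weights every correspondence equally, which is exactly why mismatches and outliers degrade it.

```python
import numpy as np

def absolute_orientation(P, Q):
    """Closed-form rotation R and translation t minimizing ||R p + t - q||^2
    over matched point sets P, Q of shape (N, 3)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                    # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t                                  # each q is approx. R @ p + t
```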
A common aspect of any security projection for the 21st century is the need to function effectively in complex environments. Cluttered terrain, dense urban areas, and subterranean structures all present a daunting set of operational challenges for tactical units within both the DoD and a host of other government agencies. The dynamic nature of urban activity and the prominence of human discord in such areas also create a very complex and often frustrating mix of physical, emotional, and psychological constraints that can prevent accomplishment of even the simplest tasks and mission sets. This paper provides an overview of DARPA's Tactical Mobile Robotics (TMR) program and its effort to develop new technologies that will address some of the most difficult and technically challenging aspects of tactical operations in complex terrain. It stops short of an exclusive focus on urban operations in order to fully exploit the potential for semi-autonomous robotic platforms to revolutionize operations across the entire spectrum of tactical activity. The paper outlines the advantages enjoyed through the employment of Man Portable Robotic Systems (MPRS) in denied areas, and calls for a redirection of unmanned systems development toward them. A review of ground robotic platforms is provided first, followed by a discussion of operational voids and corresponding technical challenges. A very brief outline of the TMR program structure is presented next in an effort to provide incremental updates on progress sought and made. The paper concludes with a call for continued support of the MPRS concept and for innovative research to assist the TMR program in addressing its many technical challenges.
We present a behavior-based technique for incremental on-line mapping and autonomous navigation for mobile robots, specifically geared for time-critical indoor exploration tasks. The experimental platform is a Pioneer AT mobile robot endowed with seven sonar transducers, a drift-stabilized gyro, a compass, and a pan-tilt color camera. While the thrust of our work is the autonomous generation of real-time topological maps of the environment, both metric and topological descriptions of the environment are created in real time, each preserving its unique representational power and ease of use. We also present initial results on multi-robot cooperative topological mapping. The building blocks of the topological map are corridors, junctions, and open/closed doors, augmented with absolute heading and metric information. Since the robot does not begin with an a priori map, all environmental features have to be evaluated at run time to ensure safe navigation and efficient exploration. Our enhanced dead-reckoning algorithm is backed up by the cyclic nature of indoor environments, which provides additional hints for self-localization corrections. In addition, domain knowledge (such as perpendicular hallways) is used to actively correct maps as they are built on-line. All navigation, exploration, map building, and self-localization capabilities are implemented as tightly coupled behaviors run by the onboard CPU.
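A minimal data structure for such a topological map, with heading and metric annotations on each node, might look like the following. The field names and the Node/graph representation are hypothetical illustrations, not the system's actual code.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    kind: str                  # "corridor" | "junction" | "door"
    heading_deg: float         # absolute compass heading when entered
    length_m: float = 0.0      # metric annotation from dead reckoning
    edges: list = field(default_factory=list)   # indices of adjacent nodes

def add_node(graph, kind, heading, length=0.0, link_to=None):
    """Append a map element and link it bidirectionally to its predecessor."""
    graph.append(Node(kind, heading, length))
    new_id = len(graph) - 1
    if link_to is not None:
        graph[link_to].edges.append(new_id)
        graph[new_id].edges.append(link_to)
    return new_id
```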
In this paper, we describe experiments on semi-autonomous control of a small urban robot. Three driving modes allow semi-autonomous control of the robot through video imagery or by using partial maps of the environment. Performance is analyzed in terms of maximum speed, terrain roughness, environmental conditions, and ease of control. We concentrate the discussion on a driving mode based on visual servoing. In this mode, a template designated in an image is tracked as the robot moves toward the destination designated by the operator. Particular attention is given to the robustness of the tracking with respect to template selection, computational resources, occlusions, and rough motion. The discussion of algorithm performance is based on experiments conducted at Ft. Sam Houston, TX, on July 5-9, 1999. In addition to the driving modes themselves, the performance and practicality of an omnidirectional imaging sensor are discussed. In particular, we discuss the typical imaging artifacts due to ambient lighting.
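A generic form of such template tracking, using normalized cross-correlation within a search window around the last known position, is sketched below. The window size and scoring are illustrative, and a low best score is one crude way an occlusion might be flagged; this is not the fielded algorithm.

```python
import numpy as np

def track_template(frame, template, prev_xy, search=20):
    """Locate the operator-designated template in a new greyscale frame by
    normalized cross-correlation near the previous position (x, y)."""
    th, tw = template.shape
    t = (template - template.mean()) / (template.std() + 1e-9)
    best, best_xy = -np.inf, prev_xy
    x0, y0 = prev_xy
    for y in range(max(0, y0 - search), min(frame.shape[0] - th, y0 + search)):
        for x in range(max(0, x0 - search), min(frame.shape[1] - tw, x0 + search)):
            w = frame[y:y+th, x:x+tw]
            wn = (w - w.mean()) / (w.std() + 1e-9)
            score = float((t * wn).mean())       # NCC score in [-1, 1]
            if score > best:
                best, best_xy = score, (x, y)
    return best_xy, best                         # low score: possible occlusion
```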
Georgia Tech, as part of DARPA's Tactical Mobile Robotics (TMR) program, is developing a wide range of mission specification capabilities for the urban warfighter. These include a range of easily configurable, mission-specific robot behaviors suitable for various battlefield and special forces scenarios; communications planning and configuration capabilities for small teams of robots acting in a coordinated manner; interactive graphical visual programming environments for mission specification; and real-time analysis tools and methods for mission execution verification. This paper provides an overview of the approach being taken by the Georgia Tech/Honeywell team and presents a range of preliminary results for a variety of missions, both in simulation and on actual robots.
With the increased use of specialized robots within heterogeneous robotic teams, as envisioned by DARPA's Tactical Mobile Robotics (TMR) program, the task of dynamically assigning work to individual robots becomes more complex and more critical to mission success. The team must be able to perform all essential aspects of the mission, deal with dynamic and complex environments, and detect, identify, and compensate for failures and losses within the robotic team. Our analysis of targeted military missions has identified single-robot roles and collaborative (heterobotic) roles for the TMR robots. We define a role as a set of activities or behaviors that accomplish a single, militarily relevant goal. We present the use of these roles to: 1) identify mobility and other requirements for the individual robotic platforms; 2) rate various robots' efficiency and efficacy for each role; and 3) identify which roles can be performed simultaneously and which cannot. We present a role-based algorithm for tasking heterogeneous robotic teams, and a mechanism for retasking the team when assets are gained or lost.
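One simple form a role-based tasking algorithm could take is a greedy pass over roles in priority order, assigning each to the highest-rated surviving robot. The data shapes and the rating function below are assumptions; retasking after an asset is gained or lost would simply rerun the assignment over the updated robot set.

```python
def assign_roles(roles, robots, rating):
    """Greedy role-based tasking: fill the highest-priority roles first,
    each with the available robot rated most effective for it.
    roles: [{"name": str, "priority": int}]; rating(robot, role_name) -> float."""
    available = set(robots)
    assignment = {}
    for role in sorted(roles, key=lambda r: r["priority"], reverse=True):
        candidates = [r for r in available if rating(r, role["name"]) > 0]
        if not candidates:
            continue                       # role unfilled: flag for retasking
        best = max(candidates, key=lambda r: rating(r, role["name"]))
        assignment[role["name"]] = best
        available.remove(best)
    return assignment
```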
In order for an autonomous robot to "appropriately" navigate through a complex environment, it must have an in-depth understanding of its immediate surroundings. Appropriate navigation implies that the robot will avoid collision or contact with hazards, will not be falsely rerouted around traversable terrain due to false hazard detections, and will exploit the terrain to maximize its concealment. Appropriate autonomous navigation requires the ability to detect and localize critical features in the environment, such as rocks, trees, ditches, holes, bushes, and water. Environmental features have a wide range of characteristics, and multiple sensing phenomenologies are required to detect them all. Once the data are acquired from these multiple phenomenologies, a mechanism is required to combine and analyze all of these disparate sources of information into one composite interpretation. In this paper we discuss the Demo III multi-sensor system for autonomous mobility and the "operator-trained" fusion system called O-NAV (Object NAVigation), which is used to build a labeled three-dimensional model of the immediate environment surrounding the robot vehicle so it can appropriately interact with its surroundings.
The deployment of mobile robots in some harsh environments (e.g., deep mines) has the potential to offer increased safety and higher productivity. These applications often involve multiple cooperating mobile robots. One of the main challenges in deploying multiple mobile robots in such applications is to ensure coordinated navigation: the mobile robots must cooperate to avoid collisions and deadlocks when navigating in the work site. The lack of a mathematical formalism for modeling the coordinated navigation of multiple robots makes it difficult to verify the avoidance of collisions and deadlocks through simulation. This work proposes the use of a Petri net based discrete event formalism to model the coordinated navigation of multiple mobile robots. The detection of collision and deadlock situations is demonstrated by simulating the discrete event model. The utility of the proposed scheme has been demonstrated by using the discrete event model of the robots' movements to control the navigation of three mobile platforms in a 3D graphics environment for coordinated navigation in the work site.
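A minimal Petri net interpreter makes the idea concrete: places hold tokens, and a transition fires only when all its input places are marked. In the illustrative net below (invented for this sketch, not taken from the paper), a shared corridor is a mutex place, so two robots cannot occupy it at once, and a deadlock would show up as a reachable marking with no enabled transitions.

```python
def enabled(transitions, marking):
    """Transitions whose input places all hold enough tokens."""
    return [t for t, (pre, _) in transitions.items()
            if all(marking[p] >= n for p, n in pre.items())]

def fire(t, transitions, marking):
    """Consume input tokens, produce output tokens."""
    pre, post = transitions[t]
    for p, n in pre.items():
        marking[p] -= n
    for p, n in post.items():
        marking[p] = marking.get(p, 0) + n

# Two robots sharing one corridor; the 'corridor' token enforces mutual
# exclusion, so the second 'enter' waits until the first 'leave' fires.
transitions = {
    "r1_enter": ({"r1_wait": 1, "corridor": 1}, {"r1_in": 1}),
    "r1_leave": ({"r1_in": 1}, {"corridor": 1, "r1_done": 1}),
    "r2_enter": ({"r2_wait": 1, "corridor": 1}, {"r2_in": 1}),
    "r2_leave": ({"r2_in": 1}, {"corridor": 1, "r2_done": 1}),
}
marking = {"r1_wait": 1, "r2_wait": 1, "corridor": 1,
           "r1_in": 0, "r2_in": 0, "r1_done": 0, "r2_done": 0}
while (ts := enabled(transitions, marking)):
    fire(ts[0], transitions, marking)            # naive firing policy
```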
The Horridge template model [1] is a biologically inspired motion detection model which is simple and computationally efficient. This model has been successfully implemented on several micro-sensor VLSI chips [2-6] using greyscale pixels. The template model detects moving edges based on the difference in brightness between the edge of the moving object and the background in the spatial and temporal domains. This paper introduces an extension of the template model that incorporates both luminance and colour templates in two spatial dimensions. The luminance-chrominance colour model is used to generate both luminance and colour chrominance templates, which can detect moving edges based on changes in luminance as well as the colour contrast associated with a moving edge. A prototype of this extended model has been implemented from off-the-shelf components. Results from this prototype show that luminance templates alone are unable to resolve a group of objects moving at the same relative speed with respect to the sensor. This paper also shows that colour templates can facilitate the detection of colour boundaries. The combination of both luminance and colour templates allows more motion information to be extracted, which leads to a more robust system.
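Loosely, the template idea can be sketched in one spatial dimension: quantize each photoreceptor's intensity into binary states, then form, for every adjacent pixel pair, a small template of states at two successive instants; particular templates then correspond to an edge moving left or right. The mean-based quantization below is a crude stand-in for the hardware thresholding.

```python
import numpy as np

def templates(frame_prev, frame_curr):
    """1-D sketch: binarize each photoreceptor row into bright/dark states,
    then pair adjacent pixels' states at two successive instants. Each
    4-tuple (prev_i, prev_i+1, curr_i, curr_i+1) is a 'template'; a lookup
    table (not shown) maps specific templates to left/right edge motion."""
    q = lambda row: (np.asarray(row) > np.asarray(row).mean()).astype(int)
    a, b = q(frame_prev), q(frame_curr)
    return list(zip(a[:-1], a[1:], b[:-1], b[1:]))
```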
Nap-of-the-Earth (NOE) flight and the development of advanced Unmanned Air Vehicles (UAVs) require further integration of environmental sensors and onboard processing equipment. Flying insects feature a panoramic eye which relies on optical flow for guidance and obstacle avoidance. Furthermore, flies exhibit remarkable neural fusion of visual, inertial, and aerodynamic senses. We describe the development of a miniature electrically powered UAV featuring a neuromorphic electronic eye based on biological motion detection. The system is tethered within the laboratory on a custom-built whirling-arm test bed. The signals of the 20-photoreceptor onboard eye are processed by 19 custom Elementary Motion Detection (EMD) circuits which are derived from those of the fly. Visual, inertial, and tachometer signals from the aircraft are scanned by a PC data acquisition board. Flight commands are output via the parallel port to a microcontroller interfacing with a standard radio-control model transmitter. Vision-based flight path trajectories and landings are simulated. This UAV project sits at the intersection of neurobiology, robotics, and aerospace. Its purpose is to test biologically inspired sensory-motor concepts within a challenging environment and to provide principles and technology to assist Micro Air Vehicle (MAV) indoor operations.
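The correlation structure of such an EMD, in its Hassenstein-Reichardt form (which fly-derived circuits approximate), can be sketched as follows; the low-pass coefficient standing in for the delay filter is an arbitrary choice.

```python
import numpy as np

def emd_outputs(signals, alpha=0.7):
    """Reichardt-style EMD array: correlate each photoreceptor's delayed
    (low-pass-filtered) signal with its neighbour's undelayed signal and
    subtract the mirror product. signals: (T, N) photoreceptor time series;
    returns (T, N-1) outputs whose sign encodes motion direction."""
    T, N = signals.shape
    delayed = np.zeros_like(signals, dtype=float)
    out = np.zeros((T, N - 1))
    for t in range(1, T):
        # first-order low-pass acts as the delay element
        delayed[t] = alpha * delayed[t - 1] + (1 - alpha) * signals[t]
        out[t] = (delayed[t, :-1] * signals[t, 1:]
                  - signals[t, :-1] * delayed[t, 1:])
    return out
```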
A novel collision avoidance sensor with possible application in satellite navigation is presented. Passive radiometric detection of colliding objects is used, offering the advantage that it does not interfere with satellite communication or guidance systems. Operation in the millimetre-wave band allows the possibility of full-scale integration of the front-end detection circuitry with the back-end signal processing, and the use of insect vision models in the processing leads to reduced circuit complexity. Such a compact sensing system could be ideal for integration into the structure of nanosatellites [1], very small satellites weighing less than 10 kg. These next-generation satellites will operate in clusters, so detection and avoidance of neighbouring satellites is vital to the success of such configurations. This paper discusses the design and structure of our mm-wave collision avoidance sensor [2] and predicts its performance [3] for the orbital environment. The effects of strong radiation sources and the dynamics of satellite heating and motion are explored. Methods and techniques for obtaining this information are discussed.
One approach to the design of intelligent autonomous robots is through evolutionary computation. In this approach, the robot's behavior is evolved through a process of simulated evolution, applying the Darwinian principles of survival of the fittest and inheritance with variation to the development of the robot's control programs. In previous studies, we illustrated this approach on problems of learning individual behaviors for autonomous mobile robots. Our previous work focused on tasks which were reasonably complex but which required only a single behavior. In order to scale this approach to more realistic scenarios, we now consider methods for evolving complex sets of tasks. Our approach has been to extend the basic evolutionary learning method to encompass co-evolution, that is, the simultaneous evolution of multiple behaviors. This paper addresses alternative designs within this basic paradigm. Specifically, we focus on dependencies among the learning agents, that is, what a given learning agent needs to know about the other agents in the system. By using domain knowledge, it is possible to reduce or eliminate interactions among the agents, thereby reducing the effort required to co-evolve them as well as the impediments to learning caused by these interactions.
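A minimal co-evolution loop makes the dependency issue concrete: each behavior's population is evaluated in the context of the other populations' current representatives, so the joint fitness call is where inter-agent coupling (and the opportunity to remove it with domain knowledge) lives. The genome encoding, operators, and population handling below are all invented for illustration.

```python
import random

def coevolve(pops, joint_fitness, gens=50, mut=0.1):
    """Co-evolve one population per behavior. Genomes are lists of floats;
    joint_fitness(team) scores one genome from each behavior together."""
    reps = [p[0] for p in pops]                    # one representative each
    for _ in range(gens):
        for i, pop in enumerate(pops):
            def score(g):                          # evaluate g alongside the
                team = reps[:i] + [g] + reps[i+1:] # other behaviors' reps:
                return joint_fitness(team)         # this is the coupling
            pop.sort(key=score, reverse=True)
            reps[i] = pop[0]                       # new representative
            half = len(pop) // 2                   # replace the worst half
            pop[half:] = [[x + random.gauss(0, mut) for x in g]
                          for g in pop[:len(pop) - half]]
    return reps
```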
Iterative learning control (ILC) is a technique that uses repetitive operation to derive the input commands needed to force a dynamical system to follow a prescribed trajectory. In this paper we describe ideas toward the use of ILC for path-tracking control of a mobile robot. The work focuses on a novel robotic platform, the Utah State University (USU) Omni-Directional Vehicle (ODV), which features six "smart wheels," each of which has independent control of both speed and direction. Using a validated dynamic model of the ODV robot, it is shown that ILC can be used to learn the nominal input commands needed to force the robot to track a prescribed path in inertial space.
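The core of ILC is a trial-to-trial update of the feedforward command from the recorded tracking error. A P-type sketch on a toy first-order plant (not the ODV model) is shown below, with the gain chosen so that the contraction condition |1 - b*L| < 1 holds.

```python
import numpy as np

def ilc_update(u, e, L=5.0):
    """P-type ILC: u_{k+1}(t) = u_k(t) + L * e_k(t); for the plant below
    the error contracts across trials when |1 - b*L| < 1."""
    return u + L * e

# Toy plant x(t+1) = a*x(t) + b*u(t); e[t] is aligned with the input u[t]
# that produced it. All values here are illustrative.
a, b, T = 0.9, 0.1, 50
ref = np.sin(np.linspace(0, np.pi, T))       # desired state at t = 1..T
u = np.zeros(T)
for trial in range(30):
    x, e = 0.0, np.zeros(T)
    for t in range(T):
        x = a * x + b * u[t]
        e[t] = ref[t] - x
    u = ilc_update(u, e)                     # error shrinks trial by trial
print(float(np.abs(e).max()))                # near zero after learning
```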
This paper describes the phenomenon of systematic errors in the odometry models of mobile robots and looks at various ways of avoiding them by means of auto-calibration. The systematic errors considered are incorrect knowledge of the wheel base and of the gains from encoder readings to wheel displacement. By auto-calibration we mean a standardized procedure which estimates the uncertainties using only on-board equipment such as encoders, an absolute measurement system, and filters; no intervention by an operator or off-line data processing is necessary. Results are illustrated by a number of simulations and experiments on a mobile robot.
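The quantities being calibrated enter the familiar differential-drive odometry update, sketched below: gain_l, gain_r, and wheel_base are precisely the parameters whose systematic errors the procedure estimates by comparing dead-reckoned poses against the absolute measurement system. The update form is standard; the names are ours.

```python
import numpy as np

def odometry_step(pose, dl, dr, gain_l=1.0, gain_r=1.0, wheel_base=0.5):
    """Differential-drive odometry update from encoder increments dl, dr.
    gain_l/gain_r convert encoder ticks to metres; errors in the gains or
    in wheel_base accumulate systematically in (x, y, theta)."""
    x, y, th = pose
    sl, sr = gain_l * dl, gain_r * dr        # wheel displacements (m)
    ds = (sl + sr) / 2.0                     # forward travel
    dth = (sr - sl) / wheel_base             # heading change
    return np.array([x + ds * np.cos(th + dth / 2),
                     y + ds * np.sin(th + dth / 2),
                     th + dth])
```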
An adaptive neural network controller has been developed for a model of an underwater vehicle. The controller combines radial basis function neural network and sliding mode control techniques. No prior off-line training phase is required, and the scheme exploits the advantages of both neural network control and sliding mode control. A stable on-line adaptive law is derived using Lyapunov theory. It is observed that the number of neurons and the width of the Gaussian functions must be chosen carefully. The performance of the controller is demonstrated by computer simulations.
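The structure of such a controller, for a single degree of freedom, might be sketched as follows. The signs, gains, Gaussian centres, and the smoothed switching term are illustrative choices; the true adaptive law and its stability proof depend on the vehicle model, which is not reproduced here.

```python
import numpy as np

class RBFSlidingController:
    """Sketch of an adaptive RBF + sliding-mode structure for one DOF:
    s = de + lam*e is the sliding surface, the RBF term compensates
    unknown dynamics, and dW = gamma*phi*s is a Lyapunov-style rule."""
    def __init__(self, centres, width=1.0, lam=2.0, k=1.0, gamma=5.0):
        self.c = np.asarray(centres, dtype=float)   # (n_neurons, state_dim)
        self.w = np.zeros(len(self.c))
        self.width, self.lam, self.k, self.gamma = width, lam, k, gamma

    def phi(self, x):
        d2 = ((self.c - x) ** 2).sum(axis=1)
        return np.exp(-d2 / (2 * self.width ** 2))  # Gaussian activations

    def control(self, x, e, de, dt):
        s = de + self.lam * e                   # sliding surface
        p = self.phi(x)
        self.w += self.gamma * p * s * dt       # on-line adaptation
        return -(self.w @ p) - self.k * np.tanh(s / 0.1)  # smoothed sign(s)
```

The width and the number of centres in phi correspond directly to the design choices the abstract flags as needing care.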
1. Methods and means for observing and 3D imaging of underwater static and dynamic objects (e.g., mines and living beings). The methods are based on optical refraction, diffraction, and Talbot effects on a water surface disturbed by ultrasound waves reflected by or passed through the object.
Unstructured environments pose significant challenges to the operation and control of mobile robots. Thorough task planning is often not possible due to the unpredictable nature of the workspace environment. For example, in underground mining environments, the operator generally does not have accurate global knowledge of the environment, making it difficult or impossible to provide the mobile robot with sufficient information to navigate successfully to a desired destination. The unpredictability of the environment makes real-time sensing and decision making an essential component of the mobile robot. The formalism of discrete event systems is well suited to modeling the internal and external asynchronous events associated with the navigation of a mobile robot. This work proposes the use of a hierarchical Petri net based discrete event technique to model the sensing and control systems of mobile robots in unstructured environments. The discrete event model can be monitored to provide the operator with real-time progress information. The paper describes a simulated mobile robot in a 3D graphics environment, controlled by a discrete event model, in order to demonstrate the modeling of navigation in unstructured environments.