While biological principles have long inspired computational and engineering research, knowledge still flows back from the computational to the biological domain only rarely. This paper presents examples of our work in which research on anthropomorphic robots led us to new insights into biological movement phenomena, ranging from behavioral studies to brain imaging studies. Our research over the past years has focused on principles of trajectory formation with nonlinear dynamical systems, on learning internal models for nonlinear control, and on advanced topics such as imitation learning. Formal and empirical analyses of the kinematics and dynamics of movement systems, and of the tasks they need to perform, led us to suggest principles of motor control that we later found to be surprisingly closely related to human behavior and even brain activity.
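The trajectory-formation principle mentioned above can be sketched minimally as a point-attractor dynamical system: a critically damped second-order system whose attractor is the movement goal. This is an illustrative sketch, not the authors' actual formulation; all gains and names are invented.

```python
import numpy as np

def point_attractor_trajectory(y0, goal, tau=1.0, alpha=8.0, dt=0.001, steps=3000):
    """Integrate y'' = (alpha * (beta * (goal - y) - y')) / tau with
    beta = alpha / 4 (critical damping), a common building block for
    trajectory formation with nonlinear dynamical systems.
    Illustrative gains; not the authors' actual system."""
    beta = alpha / 4.0
    y, yd = float(y0), 0.0
    traj = [y]
    for _ in range(steps):
        ydd = (alpha * (beta * (goal - y) - yd)) / tau
        yd += ydd * dt          # explicit Euler integration
        y += yd * dt
        traj.append(y)
    return np.array(traj)
```

Starting from rest, the state converges smoothly to the goal without overshoot; perturbing the state mid-movement simply re-attracts it, which is the property that makes such systems appealing for movement generation.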
We discuss the problem of evolvability in robotic and evolutionary computation systems against the background of biological and more general types of evolution. Open-ended evolution is rigorously defined in terms of unbounded complexity growth and posed as a challenge problem. The first solutions (due to the author) to the problem of open-ended evolution are contained in this paper. These solutions seem unsatisfying but are nevertheless mathematically correct. An implemented software solution and outlines of solutions in evolving populations of robots and of self-reproducing entities are described. Possible objections to all these solutions are discussed, and these point the way to a more sophisticated notion of embodiment with respect to an environment that seems necessary for practical open-ended evolvability.
The paper investigates how the psychological notion of comfort can be useful in the design of robotic systems. A review of the existing study of human comfort, especially regarding its presence in infants, is conducted with the goal of determining the relevant characteristics for mapping it onto the robotics domain. Focus is placed on identifying the salient features in the environment that affect the comfort level. Factors involved include familiarity with the current state, working conditions, and the amount and location of available resources. As part of our newly developed comfort function theory, the notion of an object as a psychological attachment for a robot is also introduced, following Bowlby's theory of attachment. The output space of the comfort function and its dependency on the comfort level are analyzed. The results of the derivation of this comfort function are then presented in terms of their impact on robotic behavior. Justification for the use of the comfort function in the domain of robotics is presented, with relevance for real-world operations. A transformation of the theoretical discussion into a mathematical framework suitable for implementation within a behavior-based control system is also presented. The paper concludes with results of simulation studies and real-robot experiments using the derived comfort function.
The study of development, whether artificial or biological, can highlight the mechanisms underlying learning and adaptive behavior. We argue that developmental studies can provide a different and potentially interesting perspective both on how to build an artificial adaptive agent and on understanding how the brain solves sensory, motor, and cognitive tasks. In our view, the acquisition of the proper behavior may indeed be facilitated because, within an ecological context, the agent, its adaptive structure, and the environment interact dynamically, constraining an otherwise difficult learning problem. We describe the proposed approach in general terms, together with supporting biological facts. To analyze these aspects further from the modeling point of view, we demonstrate how a twelve-degrees-of-freedom baby humanoid robot acquires orienting and reaching behaviors, and what advantages the proposed framework might offer. The experimental setup consists of a five-degrees-of-freedom (dof) robot head and an off-the-shelf six-dof robot manipulator, both mounted on a rotating base, the torso. On the sensory side, the robot is equipped with two space-variant cameras, an inertial sensor simulating the vestibular system, and proprioceptive information through motor encoders. The biological parallel is exploited at many implementation levels: for example, the space-variant eyes combine foveal and peripheral vision in a single arrangement, and the inertial sensor provides efficient image stabilization (the vestibulo-ocular reflex).
The American lobster Homarus americanus is a highly mobile marine decapod, ubiquitous in the benthic environment of the western North Atlantic. Lobsters occupy a range of subtidal habitats on the continental shelf and are capable of navigating through spatially complex boulder fields as well as coping with variable water currents. Given these competencies, we have adopted the lobster as a design model for a biomimetic autonomous underwater vehicle intended for operation in similar environments. A central pattern generator model was developed from electromyographic data from lobsters and is being implemented on an eight-legged ambulatory vehicle. The vehicle uses Nitinol shape-memory alloy wires as linear actuators, physically modeling the antagonistic muscle pairs of a lobster leg. The wires are contracted by heating them with an electrical current, which changes the crystalline structure of the material from a martensite to an austenite state and shortens the wire by about 5%. Three pairs of wires around three joints produce a three-degrees-of-freedom walking leg. Current drivers power the actuators, and pulse-width modulation is used to obtain graded contractions from these artificial muscles. The combination of a biologically based control system with a linear actuator sharing many characteristics of invertebrate muscle tissue has enabled us to construct a biomimetic ambulatory robot sharing some of the competencies of the model animal.
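The PWM drive for the antagonistic SMA pairs can be sketched as a simple mapping from desired joint angle to per-wire duty cycles: heating one wire contracts it (about 5% strain) while the opposing wire is left unpowered so it can be stretched back. This is a hypothetical sketch; the constants and function names are not from the paper.

```python
def antagonist_duty_cycles(target_angle, angle_range=(-0.5, 0.5), max_duty=0.8):
    """Map a desired joint angle (radians) to PWM duty cycles for an
    antagonistic pair of shape-memory-alloy wires (flexor/extensor).
    Only one wire of the pair is heated at a time; the other stays
    unpowered so the joint can be pulled back.  Constants illustrative."""
    lo, hi = angle_range
    # normalize target to [-1, 1] about the joint's neutral position
    x = max(-1.0, min(1.0, (2.0 * (target_angle - lo) / (hi - lo)) - 1.0))
    flexor = max_duty * max(0.0, x)      # positive angles: heat the flexor wire
    extensor = max_duty * max(0.0, -x)   # negative angles: heat the extensor wire
    return flexor, extensor
```

Graded contraction then comes from the duty cycle setting the average heating current, and hence the fraction of the wire converted to the contracted austenite phase.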
This article investigates the neural mechanisms underlying locomotion and visually-guided behavior in a lower vertebrate: the salamander. We develop connectionist models of the salamander's locomotor circuitry and visual system, and analyze their functioning by embedding them into a biomechanical simulation of the salamander's body. This work is therefore an experiment in computational neuroethology which aims at investigating how behavior results from the coupling of a central nervous system (CNS) and a body, and from the interactions of the CNS-body pair with the environment. We believe that understanding these mechanisms is not only relevant for neurobiology but also for potential applications in robotics.
This paper summarizes a number of experiments in biologically inspired robotics. The common feature of all experiments is the use of artificial neural networks as the building blocks of the controllers. The experiments speak in favor of a connectionist approach both for designing adaptive and flexible robot controllers and for modeling neurological processes. I present: 1) DRAMA, a novel connectionist architecture with the general property of learning time series and extracting spatio-temporal regularities from multi-modal and highly noisy data; 2) Robota, a doll-shaped robot that imitates and learns a proto-language; 3) an experiment in collective robotics, in which a group of 4 to 15 Khepera robots dynamically learns the topography of an environment whose features change frequently; 4) an abstract computational model of the primate ability to learn by imitation; and 5) a model for the control of locomotor gaits in a quadruped legged robot.
Traditional robotics uses non-compliant materials for all components involved in the production of movement. Elasticity is avoided as far as possible because it leads to hazardous oscillations and makes control of precise movements very difficult. Due to this deliberate stiffness, robots are typically heavy and clumsy structures compared with their living counterparts (i.e., humans and animals). Yet moving systems in nature not only cope with the difficulties introduced by compliant materials, they also take advantage of the elasticity in muscles and tendons to produce smooth and even rapid movements. It is understood that elasticity in a multi-jointed moving system requires sophisticated control mechanisms, as provided by a nervous system or a suitably programmed computer. In this contribution I describe a two-jointed robot with purpose-built elasticity in its actuators, accomplished by spiral springs placed in series with a conventional electric motor and a tendon to the arm. It is shown that, even with sufficiently soft elasticity, oscillations can be avoided by active oscillation damping. (Such active oscillation damping presumably also governs movement control in humans and animals.) Furthermore, once this major problem has been overcome, elasticity is found to offer a wide spectrum of valuable advantages with respect to the most serious problems in traditional robotics. These advantages, summarized by terms such as reduced danger, position tolerance, lightweight construction, controlled forces, and ballistic movements, are explained in detail and presented for discussion.
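The core claim, that soft series elasticity becomes workable once oscillations are removed by active damping, can be illustrated with a minimal simulation: a link mass driven through a series spring, with the motor adding a damping force proportional to measured link velocity. Values and names are illustrative, not the robot's actual parameters.

```python
import numpy as np

def simulate_sea(k=50.0, m=1.0, kd=0.0, dt=1e-3, steps=5000, x_motor=0.1):
    """Simulate a link of mass m driven through a series spring
    (stiffness k) by a motor held at position x_motor.  kd is an active
    damping gain applied by the motor from measured link velocity.
    Illustrative sketch; semi-implicit Euler integration."""
    x, v = 0.0, 0.0
    traj = []
    for _ in range(steps):
        # spring force from motor-side deflection, minus active damping term
        f = k * (x_motor - x) - kd * v
        v += (f / m) * dt
        x += v * dt
        traj.append(x)
    return np.array(traj)
```

With kd = 0 the link oscillates indefinitely about the target; with kd near the critical value 2*sqrt(k*m) it settles smoothly, which is the behavior the abstract describes.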
Biologically Inspired Robotics: Orienting Behavior and Navigation
While mobile robots and walking insects can use proprioceptive information (specialized receptors in the insect's legs, or wheel encoders in robots) to estimate distance traveled, flying agents have to rely mainly on visual cues. Experiments with bees provide evidence that flying insects may use the optical flow induced by egomotion to estimate distance traveled, and some details of this odometer have recently been unraveled. In this study, we propose a biologically inspired model of the bee's visual odometer based on Elementary Motion Detectors (EMDs), and present results from goal-directed navigation experiments with an autonomous flying robot platform that we developed specifically for this purpose. The robot is equipped with a panoramic vision system, which provides input to the EMDs of the left and right visual fields. The outputs of the EMDs are spatially integrated at a later stage by wide-field motion detectors, and their accumulated response is used directly as the odometer. In a set of initial experiments, the robot moves through a corridor on a fixed route, and the outputs of the EMDs and the odometer are recorded. The results show that the proposed model can provide an estimate of the distance traveled, but the performance depends on the route the robot follows, which is biologically plausible since insects tend to adopt fixed routes during foraging. Given these results, we assumed that the optomotor response plays an important role in the context of goal-directed navigation, and we conducted experiments with an autonomous freely flying robot. The experiments demonstrate that this computationally cheap mechanism can be successfully employed in natural indoor environments.
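A correlation-type EMD (the Reichardt detector, the standard model behind such schemes) can be sketched in a few lines: each half-detector multiplies a delayed photoreceptor signal by its undelayed neighbor, and subtracting the mirrored half yields a signed, direction-selective output whose spatial and temporal integral serves as the odometer signal. The code below is an illustrative sketch, not the authors' implementation.

```python
import numpy as np

def emd_response(signal_a, signal_b, delay=1):
    """Correlation-type Elementary Motion Detector.  Inputs are 1-D
    luminance time series from two neighboring photoreceptors; `delay`
    is the half-detector delay in samples.  Output is positive for
    motion from A toward B, negative for the reverse direction."""
    a = np.asarray(signal_a, float)
    b = np.asarray(signal_b, float)
    a_d = np.concatenate([np.zeros(delay), a[:-delay]])  # delayed copy of A
    b_d = np.concatenate([np.zeros(delay), b[:-delay]])  # delayed copy of B
    return a_d * b - b_d * a  # subtract the mirror half-detector
```

Summing the rectified responses of many such detectors over time gives the accumulated quantity the abstract uses as a distance estimate.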
Biological inspiration admits of degrees. This paper describes a new neural processing algorithm inspired by a deeper understanding of the workings of real biological synapses. It is shown that a multi-time-domain adaptation approach to encoding causal correlation solves the destructive-interference problem encountered by more commonly used learning algorithms. It is also shown how this allows an agent to adapt to a nonstationary environment, in which longer-term changes in the statistical properties occur and are inherently unpredictable, without completely losing useful prior knowledge. Finally, it is suggested that the use of causal correlation coupled with value-based learning may provide pragmatic solutions to some other classical problems in machine learning.
The mechanisms by which animals manage sensorimotor integration and coordination of different behaviors can be investigated in robot models. In previous work the first author built a robot that localizes sound, based on close modeling of the auditory and neural system of the cricket. It is known that the cricket combines its response to sound with other sensorimotor activities, such as an optomotor reflex and reactions to mechanical stimulation of the antennae and cerci, and behavioral evidence suggests some ways these behaviors may be integrated. We have added an optomotor response, using an analog VLSI circuit developed by the second author, to the sound-localizing behavior, and have shown that it can, as in the cricket, improve the directness of the robot's path to the sound source. In particular, it substantially improves behavior when the robot is subject to a motor disturbance. Our aim is to better understand how the insect brain controls complex combinations of behavior, in the hope that this will also suggest novel mechanisms for sensory integration on robots.
This paper presents a sensorimotor architecture integrating computational models of a cerebellum and a basal ganglia, operating on a microrobot. The computational models enable the microrobot to learn to track a moving object and to anticipate its future positions using a CCD camera. The architecture features pre-processing modules for coordinate transformation and instantaneous orientation extraction. Learning of motor control is implemented with a predictive Hebbian reinforcement-learning algorithm in the basal ganglia model. Learning of sensory predictions uses a combination of long-term depression (LTD) and long-term potentiation (LTP) adaptation rules within the cerebellum model. The basal ganglia model uses the visual inputs to develop a sensorimotor mapping for motor control, while the cerebellum module uses robot orientation and world-coordinate-transformed inputs to predict the location of the moving object in a robot-centered coordinate system. We propose several hypotheses about the functional role of cell populations in the cerebellum and argue that mossy fiber projections to the deep cerebellar nucleus (DCN) could play a coordinate transformation role and act as gain fields. We propose that such a transformation could be learned early in brain development, guided by the activity of the climbing fibers; proprioceptive mossy fibers projecting to the DCN and providing robot orientation with respect to a reference system could be involved in this case. Other mossy fibers carrying visual sensory input provide visual patterns to the granule cells. The combined activities of the granule and Purkinje cells store spatial representations of the target patterns, and the combination of mossy and Purkinje projections to the DCN predicts the location of the moving target, taking the robot's orientation into consideration.
Results of lesion simulations based on our model show degradations similar to those reported in cerebellar lesion studies on monkeys.
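The LTD/LTP combination in the cerebellum model can be illustrated with a minimal weight-update rule for parallel-fiber/Purkinje synapses: co-activity of a parallel fiber with a climbing-fiber error signal depresses the synapse (LTD), while activity in the absence of error slowly potentiates it (LTP). The rates and the exact form below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def update_pf_weights(w, pf_activity, climbing_fiber_error,
                      ltd_rate=0.05, ltp_rate=0.01):
    """One LTD/LTP adaptation step for parallel-fiber weights.
    climbing_fiber_error in [0, 1]: 1 = error signal present (drive LTD
    on active fibers), 0 = no error (slow LTP on active fibers).
    Illustrative rule; rates are invented."""
    w = np.asarray(w, float)
    pf = np.asarray(pf_activity, float)
    ltd = ltd_rate * pf * climbing_fiber_error          # depress on error
    ltp = ltp_rate * pf * (1.0 - climbing_fiber_error)  # potentiate otherwise
    return w - ltd + ltp
```

The asymmetric rates (fast LTD, slow LTP) give the error-driven corrections precedence while letting unused associations recover gradually, a common design choice in cerebellar models.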
Biologically Inspired Robotics: Sensing and Perception
For a systematic investigation of the interdependence between an agent's morphology and its task environment, we constructed a system that automatically generates optimal sensor morphologies for given tasks. The system consists of a robot with an artificial compound eye in which the angular positions of the individual facets can be autonomously modified. This paper describes experiments that use artificial evolution to optimize the compound-eye morphology for the task of estimating time to contact with obstacles. The resulting morphologies are in good agreement with the theoretically predicted optimal sensor density distribution for this task. Comparing these results with earlier experiments shows that the robot evolves different optimal morphologies depending on the task required. Since the accuracy of the system proved good enough to easily distinguish qualitatively different optimal sensor morphologies, we expect that it will also allow us to identify the optimal sensor distribution with good precision in more complex task environments.
The superb aerial performance of flying insects is achieved with comparatively simple neural machinery. We have been investigating the pathway in the locust visual system that signals the rapid approach of an object towards the eye. Two identified neurons have been shown to respond selectively to images of an object approaching the locust's eye. A neural network based on the input organization of these neurons responds directionally when challenged with approaching and receding objects, and reveals the importance of a critical race, between excitation passing down the network and inhibition directed laterally, for the rapid build-up of excitation in response to approaching objects. The strongest response is given to an object approaching on a collision course with the eye, when collision is imminent. Like the neurons, the network is tightly tuned to a collision trajectory. We have incorporated this network into the control structure of a small mobile Khepera robot using the IQR 4021 software we developed. The network responds to looming motion and is effective at evoking avoidance maneuvers in the robot at speeds from 1 to 12.5 cm/s. Our aim is to use the circuit as an artificial looming detector for moving vehicles.
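The race between feed-forward excitation and delayed lateral inhibition can be caricatured in a few lines: excitation is the growth of the object's image, inhibition a delayed weighted copy of that excitation, and only an accelerating expansion (an approach) lets excitation outrun inhibition. This is a toy sketch under those assumptions, not the published network.

```python
import numpy as np

def looming_response(edge_extent, inh_delay=3, w_inh=0.7):
    """Race between fast excitation and delayed lateral inhibition.
    edge_extent: 1-D time series of the object's angular extent on the
    eye.  Excitation is its (rectified) growth per time step; inhibition
    is a delayed, weighted copy.  All parameters are illustrative."""
    edge_extent = np.asarray(edge_extent, float)
    # excitation: newly excited image area (only growth counts)
    e = np.maximum(np.diff(edge_extent, prepend=edge_extent[0]), 0.0)
    # inhibition: the same signal, delayed and scaled
    inh = np.concatenate([np.zeros(inh_delay), e[:-inh_delay]])
    return np.maximum(e - w_inh * inh, 0.0)  # rectified race outcome
```

For an object approaching at constant speed the angular extent grows hyperbolically, so excitation accelerates past the delayed inhibition and the response peaks just before projected collision; a receding object produces no response at all.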
For local navigation in the vicinity of nests or food sources, insects of several species rely on visual landmarks. An extremely parsimonious description of these navigation abilities is provided by the recently developed average landmark vector (ALV) model. The ALV model is an instance of a parameter-based navigation method: instead of storing an image of the scene surrounding the target location, only two real-valued parameters extracted from the image are stored. To gain insights into the neural architecture that could implement the ALV model in an insect brain, and to test the operation of the model in the real world, a robot equipped with a fully analog implementation of the ALV model was built and tested in different landmark configurations.
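The ALV model is simple enough to state completely in code: the two stored parameters are the components of the mean unit vector pointing at the visible landmarks (bearings measured against an external compass direction), and the homing vector is the difference between the current and the stored ALV. A minimal sketch:

```python
import numpy as np

def alv(landmark_bearings):
    """Average Landmark Vector: the mean of unit vectors pointing at the
    currently visible landmarks (bearings in radians, relative to an
    external compass direction)."""
    b = np.asarray(landmark_bearings, float)
    return np.array([np.cos(b).mean(), np.sin(b).mean()])

def home_vector(current_bearings, target_alv):
    """Homing direction in the ALV model: the difference between the ALV
    seen at the current location and the ALV stored at the target
    points (approximately) back toward the target."""
    return alv(current_bearings) - target_alv
```

Only two numbers per location are stored, which is what makes the model such a parsimonious account of insect landmark navigation and so amenable to a fully analog implementation.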
Echolocating bats achieve a surprising amount of autonomy based primarily on sonar sensing. In order to use insights from biosonar function to improve technical designs, it is necessary to understand the biosonar tasks (e.g., obstacle avoidance, prey capture, navigation) that provide the context for this function. To facilitate the study of these tasks, a system was designed that combines the following aspects: it allows for interaction with the real world and for mobility, by mounting a sonar head with six rotational degrees of freedom on a mobile platform, and at the same time it is capable of displaying the output of a parsimonious auditory model at an appropriate update rate. This allows for interactive exploration of the echoes associated with a particular echolocation scenario. The use of the system in exploring biosonar tasks is demonstrated by several examples: continuous estimation of Doppler shifts as part of an acoustic flow analysis; two-target resolution with FM signals; and acoustic glints and foliage echoes as examples of extended natural targets made up of many reflecting facets. In all cases, several important insights into the nature of the respective biosonar problem can be obtained readily by experimenting with the system interactively.
Sonar is used extensively in mobile robotics for obstacle detection, ranging, and avoidance. However, these range-finding applications do not exploit the rich information present in sonar echoes. In addition, mobile robots need robust object recognition systems. The ability to see with sound has long been an intriguing concept: certain animals, such as bats and dolphins, are able to recognize the shape and nature of objects and to navigate using ultrasound. This work sets up and develops the hardware and software components of an object recognition system using ultrasonic sensors of the type commonly found on mobile robots. Results demonstrate that sonar can serve as a low-cost, low-computation sensor for real-time object recognition tasks on mobile robots. The system differs from previous approaches in its simplicity, robustness, speed, and low cost.
A new paradigm in ground surveillance consists of swarms of autonomous internetted sensors that can be used for target localization and environmental monitoring. The individual component is an inexpensive device containing multiple sensor types, a processor, and wireless communication hardware. Scattered over a region, these devices are able to detect the direction or proximity of targets. One of the most limiting factors for these devices is the battery supply. To conserve power, the units should be able to adjust their activities to the current situation: energy-consuming signal processing should only be performed if the quality of the raw sensor data promises a significant improvement in the localization results. We propose a self-organized control system that allows each device to select the algorithm complexity that balances the requirements of good localization performance and energy conservation. The devices make their selection autonomously, based on their own sensor data, information received from other devices in the region, and the amount of energy they have left. The capability of this system is demonstrated via computer simulations.
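The proposed tier selection can be sketched as a small decision rule that weighs data quality and neighbor corroboration against remaining battery. The tiers, thresholds, and scoring below are invented for illustration and are not the paper's actual algorithm.

```python
def select_algorithm(snr, battery_frac, neighbor_detections):
    """Pick a processing tier for a sensor node.  Heavier algorithms run
    only when raw data quality (snr) and corroborating neighbor reports
    promise better localization, and the allowed tier is capped as the
    battery drains.  All names and thresholds are illustrative."""
    tiers = ["sleep", "proximity_only", "bearing_estimate", "full_localization"]
    # score: good data and corroborating neighbors justify more computation
    score = snr + 2.0 * min(neighbor_detections, 3)
    # battery caps the maximum affordable tier
    budget = 3 if battery_frac > 0.5 else (2 if battery_frac > 0.2 else 1)
    if score < 1.0:
        return tiers[0]                      # nothing worth processing
    wanted = 1 + (score > 3.0) + (score > 6.0)
    return tiers[min(wanted, budget)]
```

Because each node evaluates only its own data, its neighbors' reports, and its own battery, the selection remains fully decentralized, matching the self-organized character of the proposed control system.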
In this paper we present the concept of networked robotics, which develops the viewpoint that robotic systems are collections of resources resident at identified locations on a network that, by means of an appropriate configuration process, can be connected together into a system that performs some desired robotic task. Under this scheme a physical robot platform, a mobile robot for example, is modeled as a set of resources, and a set of such platforms contributes its resources to a resource pool. Configuration patterns define the way in which these resources are configured to create a distributed architecture, which may span more than one physical robot platform. The emphasis on distributed, configurable resources makes the concept of networked robotics particularly relevant to the areas of modular and cooperative robotics. In this paper we identify the scope of networked robotics and the key research issues it raises, including the definition of resources, their configuration into control architectures, and the relation of these configuration patterns to task models. We consider the relevance of networked robotics to other areas of robotics, in particular modular and cooperative robotics, and we explore technologies and tools that might support the networked robotics concept in practice.
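The resource-pool idea can be sketched as a registry that binds required resources to whichever platforms offer them, yielding a configuration that may span several robots. The names and the naive first-fit binding policy are illustrative assumptions, not the paper's mechanism.

```python
from collections import defaultdict

class ResourcePool:
    """Minimal sketch of a networked-robotics resource pool: platforms
    register named resources; a configuration draws required resources
    (possibly from several platforms) into one task-level system."""
    def __init__(self):
        self.providers = defaultdict(list)  # resource name -> platforms offering it

    def register(self, platform, resource):
        self.providers[resource].append(platform)

    def configure(self, required):
        """Bind each required resource to a providing platform (first-fit)."""
        plan = {}
        for res in required:
            if not self.providers[res]:
                raise LookupError(f"no platform offers {res!r}")
            plan[res] = self.providers[res][0]
        return plan
```

A configuration produced this way can span more than one physical platform, which is exactly the property that makes the concept relevant to modular and cooperative robotics.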
Paul S. Schenker, Terrance L. Huntsberger, Paolo Pirjanian, Ashitey Trebi-Ollennu, Hari Das, Sanjay S. Joshi, Hrand Aghazarian, A. J. Ganino, Brett A. Kennedy, et al.
We report on the development of cooperating multiple robots. This work builds from our earlier research on autonomous planetary rovers and robot arms. Here, we seek to closely coordinate the mobility and manipulation of multiple robots to perform site construction operations; as an example, the autonomous deployment of a planetary power station, a task viewed as essential to a sustained robotic presence and human habitation on Mars. There are numerous technical challenges, including the mobile handling of extended objects and the cooperative transport/navigation of such objects over natural, unpredictable terrain. We describe an extensible system concept, related simulations, a hardware implementation, and preliminary experimental results. In support of this work we have developed an enabling hybrid control architecture wherein multi-robot mobility and sensor-based controls are derived as group compositions and coordination of more basic behaviors under a task-level multi-agent planner. We summarize this Control Architecture for Multi-robot Planetary Outposts (CAMPOUT) and its application to physical experiments in which two rovers carry an extended payload over natural terrain.
A manned Mars habitat will require a significant amount of infrastructure that can be deployed using robotic precursor missions. This infrastructure deployment will probably include the use of multiple, heterogeneous, mobile robotic platforms. Delays due to the long communication path to Mars limit the amount of teleoperation that is possible. A control architecture called CAMPOUT (Control Architecture for Multirobot Planetary Outposts) is currently under development at the Jet Propulsion Laboratory in Pasadena, CA. It is a three-layer behavior-based system that incorporates the low-level control routines currently used on the JPL SRR/FIDO/LEMUR rovers. The middle behavior layer uses either the BISMARC (Biologically Inspired System for Map-based Autonomous Rover Control) or MOBC (Multi-Objective Behavior Control) action selection mechanisms. CAMPOUT includes the necessary group behaviors and communication mechanisms for coordinated/cooperative control of heterogeneous robotic platforms. We report the results of some ongoing work at the Jet Propulsion Laboratory on the transport phase of a photovoltaic (PV) tent deployment mission.
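As a rough illustration of behavior-based action selection (not a reimplementation of BISMARC or MOBC, whose details are not given here), a minimal priority-style arbiter might look like the following; all names and activation rules are invented for the example.

```python
# Generic activation-based action selection, in the spirit of a
# behavior-based middle layer.  Each behavior reports an activation
# level for the current state; the arbiter runs the most strongly
# activated applicable behavior.

def arbitrate(behaviors, state):
    # behaviors: list of (name, activation_fn, action_fn) triples.
    best = None
    for name, activation, action in behaviors:
        level = activation(state)
        if level > 0 and (best is None or level > best[0]):
            best = (level, name, action)
    if best is None:
        return "idle", None
    level, name, action = best
    return name, action(state)

behaviors = [
    # Safety behavior dominates whenever an obstacle is close.
    ("avoid_obstacle", lambda s: 1.0 if s["obstacle_dist"] < 0.5 else 0.0,
     lambda s: {"turn": 30}),
    # Goal-seeking behavior is always weakly active.
    ("goto_goal", lambda s: 0.4,
     lambda s: {"turn": 0}),
]

name1, cmd1 = arbitrate(behaviors, {"obstacle_dist": 0.3})  # avoidance wins
name2, cmd2 = arbitrate(behaviors, {"obstacle_dist": 2.0})  # goal-seeking wins
```

Group behaviors for cooperative transport would sit on top of such an arbiter, composing the outputs of several robots' basic behaviors.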
As part of the cooperation between the University of Southern California (USC) and the Institute of Robotics Research (IRF) of the University of Dortmund, experiments regarding the control of robots over long distances by means of virtual-reality-based man-machine interfaces have been successfully carried out. In this paper, the newly developed virtual reality system that is being used for the control of a multi-robot system for space applications, as well as for the control and supervision of industrial robotics and automation applications, is presented. The general aim of the development was to provide the framework for Projective Virtual Reality, which allows users to project their actions in the virtual world into the real world, primarily by means of robots but also by other means of automation. The framework is based on a new approach which builds on the task-deduction capabilities of a newly developed virtual reality system and a task-planning component. The advantage of this new approach is that robots which work at great distances from the control station can be controlled as easily and intuitively as robots that work right next to it. Robot control technology now provides the user in the virtual world with a prolonged arm into the physical environment, thus paving the way for a new quality of user-friendly man-machine interfaces for automation applications. Lately, this work has been enhanced by a new structure that allows the virtual reality application to be distributed over multiple computers. With this new step, it is now possible for multiple users to work together in the same virtual room, although they may physically be thousands of miles apart. They only need an Internet or ISDN connection to share this new experience.
Last but not least, the distribution technology has been developed further, not only to let users cooperate but also to run the virtual world on many synchronized PCs, so that a panorama projection or even a cave can be driven by 10 synchronized PCs instead of high-end workstations, drastically cutting the cost of such a visualization environment and allowing for a new range of applications.
Developing realistic forest machine simulators is a demanding task. A useful simulator has to provide a close-to-reality simulation of the forest environment as well as a simulation of the physics of the vehicle. Customers demand a highly realistic three-dimensional forestry landscape and a realistic simulation of the complex motion of the vehicle, even in rough terrain, in order to be able to use the simulator for operator training under close-to-reality conditions. The realistic simulation of the vehicle, especially with the driver's seat mounted on a motion platform, greatly improves the effect of immersion into the virtual reality of a simulated forest and the achievable level of education of the driver. Thus, the connection of the real control devices of forest machines to the simulation system has to be supported, i.e. real control devices like the joysticks or the board computer system used to control the crane, the aggregate, etc. In addition, the fusion of the board computer system and the simulation system is realized by means of sensors, i.e. digital and analog signals. The decentralized system structure allows several virtual reality systems to evaluate and visualize the information from the control devices and the sensors. So, while the driver is practicing, the instructor can immerse into the same virtual forest to monitor the session from his own viewpoint. In this paper, we describe the realized structure as well as the necessary software and hardware components and application experiences.
Intelligent autonomous robotic systems require efficient safety components to assure system reliability during the entire operation. Especially if commanded over long distances, the robotic system must be able to guarantee the planning of safe and collision-free movements independently. The IRF has therefore developed a new collision avoidance methodology satisfying the needs of autonomous safety systems while considering the dynamics of the robots to be protected. To do this, the collision avoidance system cyclically calculates the current collision danger of the robots with respect to all static and dynamic obstacles in the environment. If a robot comes into collision danger, the methodology immediately starts an evasive action to avoid the collision and guides the robot around the obstacle to its target position. This evasive action is calculated in real time in a mathematically exact way by solving a quadratic convex optimization problem. The secondary conditions of this optimization problem include the potential collision danger of the robot's kinematic chain, including all temporarily attached grippers and objects, and the dynamic constraints of the robots. The results of the optimization procedure are joint accelerations to apply to prevent the robot from colliding and to guide it to its target position. This methodology was tested very successfully during the Japanese/German space robot project GETEX in April 1999. During the mission, the collision avoidance system successfully protected the free-flying Japanese robot ERA on board the satellite ETS-VII at all times. The experiments showed that the developed system is fully capable of ensuring the safety of such autonomous robotic systems by actively preventing collisions and generating evasive actions in cases of collision danger.
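The core step, computing an evasive joint acceleration from a convex quadratic program, can be illustrated with the simplest possible case: one linearized safety constraint, for which the QP has a closed-form solution. This is a sketch of the mathematical idea only; the paper's actual formulation carries many constraints over the whole kinematic chain.

```python
import numpy as np

def safe_acceleration(qdd_goal, g, b):
    """Solve  min ||qdd - qdd_goal||^2  s.t.  g . qdd >= b  in closed form.

    g . qdd >= b stands in for one linearized collision-danger
    constraint; the solution projects the goal acceleration onto the
    feasible half-space.
    """
    qdd_goal, g = np.asarray(qdd_goal, float), np.asarray(g, float)
    slack = g @ qdd_goal - b
    if slack >= 0:
        # Goal acceleration already satisfies the safety constraint.
        return qdd_goal
    # Otherwise project onto the boundary of the half-space.
    return qdd_goal + (-slack / (g @ g)) * g

# The goal acceleration violates the constraint, so the second joint
# is accelerated just enough to satisfy it.
qdd = safe_acceleration([1.0, 0.0], g=[0.0, 1.0], b=0.5)
```

With many constraints the projection no longer has a closed form and a general convex QP solver is used, as the abstract describes.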
Virtual Reality methods allow a new and intuitive way of communication between man and machine. The basic idea of Virtual Reality (VR) is the generation of artificial, computer-simulated worlds which the user can not only look at but also interact with actively, using a data glove and data helmet. The main emphasis for the use of such techniques at the IRF is the development of a new generation of operator interfaces for the control of robots and other automation components, and of intelligent training systems for complex tasks. The basic idea of the methods developed at the IRF for the realization of Projective Virtual Reality is to let users work in the virtual world as they would act in reality. The user's actions are recognized by the virtual reality system and, by means of new and intelligent control software, projected onto the automation components, such as robots, which then perform the actions necessary to execute the user's task in reality. In this operation mode the user no longer has to be a robot expert to generate tasks for robots or to program them, because the intelligent control software recognizes the user's intention and automatically generates the commands for nearly every automation component. Thus, Virtual Reality methods are ideally suited for universal man-machine interfaces for the control and supervision of a large class of automation components, and for interactive training and visualization systems. The virtual reality system of the IRF, COSIMIR/VR, forms the basis for different projects, starting with the control of space automation systems in the projects CIROS, VITAL and GETEX, the realization of a comprehensive development tool for the International Space Station and, last but not least, the realistic simulation of fire extinguishing, forest machines and excavators, which will be presented in the final paper in addition to the key ideas of this virtual reality system.
Commanding complex robotic systems over long distances in an intuitive manner requires new techniques of man-machine interaction. A first disadvantage of conventional approaches is that the user has to be a robotics expert, because he has to command the robots directly. He is often part of the real-time control loop while moving the robot and thus has to cope with long delays. Experience with space robot missions showed that it is very difficult to control a robot by camera images alone. At the IRF, a new approach to overcome such problems was developed. By means of Projective Virtual Reality, we introduce a new, intuitive way of man-machine communication based on a combination of action planning and Virtual Reality methods. Using a data helmet and data glove, the user can immerse into the virtual world and interact with the virtual objects as he would do in reality. The Virtual Reality system derives the user's intention from his actions and then projects the tasks into the physical world by means of robots. The robots physically carry out the action that is equivalent to the user's action in the virtual world. The developed Projective Virtual Reality system is of especially great use for space applications. During the joint project GETEX (German ETS-VII Experiment), the IRF realized the telerobotic ground station for the free-flying robot ERA on board the Japanese satellite ETS-VII. During the mission in April 1999, the Virtual Reality based command interface turned out to be an ideally suited platform for the intuitive commanding and supervision of the robot in space. It first had to be verified that the system was fully operational, but then our Japanese colleagues allowed full control over the real robot to be taken by the Projective Virtual Reality system. The final paper will describe key issues of this approach and the results and experiences gained during the GETEX mission.
Lars K. Baekdal, Ivar Balslev, Rene Dencker Eriksen, Soren Peder Jensen, Bo N. Jorgensen, Brian Kirstein, Bent B. Kristensen, Martin M. Olsen, John W. Perram, et al.
RoBlock is the first phase of an internally financed project at the Institute aimed at building a system in which two industrial robots suspended from a gantry, as shown below, cooperate to perform a task specified by an external user, in this case, assembling an unstructured collection of colored wooden blocks into a specified 3D pattern. The blocks are identified and localized using computer vision and grasped with a suction cup mechanism. Future phases of the project will involve other processes such as grasping and lifting, as well as other types of robot such as autonomous vehicles or variable geometry trusses. Innovative features of the control software system include: the use of an advanced trajectory-planning system which ensures collision avoidance based on a generalization of the method of artificial potential fields; the use of a generic model-based controller which learns the values of parameters, including static and kinetic friction, of a detailed mechanical model of itself by comparing actual with planned movements; the use of fast, flexible, and robust pattern-recognition and 3D-interpretation strategies; and the integration of trajectory planning and control with the sensor systems in a distributed Java application running on a network of PCs attached to the individual physical components. In designing this first stage, the aim was to build in the minimum complexity necessary to make the system non-trivially autonomous and to minimize the technological risks.
The aims of this project, which is planned to be operational during 2000, are as follows: To provide a platform for carrying out experimental research in multi-agent systems and autonomous manufacturing systems, to test the interdisciplinary cooperation architecture of the Maersk Institute, in which researchers in the fields of applied mathematics (modeling the physical world), software engineering (modeling the system) and sensor/actuator technology (relating the virtual and real worlds) could collaborate with systems integrators to construct intelligent, autonomous systems, and to provide a showpiece demonstrator in the entrance hall of the Institute's new building.
The objective of this research is to monitor and control the vehicle motion in order to remove existing safety risks on the basis of human-machine cooperative vehicle control. A predictive control method is proposed to control the steering wheel of the vehicle so as to keep the lane. The desired angle of the steering wheel to control the vehicle motion is calculated at every sample step based upon the vehicle dynamics and the current and estimated pose of the vehicle. The vehicle pose and the road curvature are calculated by geometrically fusing sensor data from the camera image, tachometer and steering wheel encoder through the Perception Net, where not only the state variables but also the corresponding uncertainties are propagated in the forward and backward directions in such a way as to satisfy the given constraint conditions, maintain consistency, reduce the uncertainties, and guarantee robustness. A series of experiments was conducted to evaluate the control performance, in which a car-like robot was utilized to avoid safety problems. As a result, the robot kept a given lane of arbitrary shape very well at moderate speed.
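The paper's predictive steering law is not reproduced here; as a stand-in, the classic pure-pursuit law below shows how a steering angle can be computed from the vehicle pose and a preview point on the lane, under a kinematic bicycle model. All parameters are illustrative.

```python
import math

def pure_pursuit_steering(pose, preview_point, wheelbase):
    """Steering angle driving a car-like robot toward a preview point.

    A classic pure-pursuit law, used as a simple stand-in for the
    paper's model-predictive steering computation.
    pose = (x, y, heading) in world coordinates.
    """
    x, y, heading = pose
    dx = preview_point[0] - x
    dy = preview_point[1] - y
    # Bearing of the preview point relative to the vehicle heading.
    alpha = math.atan2(dy, dx) - heading
    ld = math.hypot(dx, dy)          # preview (lookahead) distance
    # Steering angle for the circular arc through the preview point.
    return math.atan2(2.0 * wheelbase * math.sin(alpha), ld)

# Preview point ahead and to the left -> positive (leftward) steering.
delta = pure_pursuit_steering((0.0, 0.0, 0.0), (5.0, 1.0), wheelbase=2.5)
```

Recomputing the angle at every sample step, as the abstract describes for its predictive law, closes the lane-keeping loop.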
Rapid, high-precision data acquisition from coordinate components with free-form surfaces and a geometrical model can be applied widely. Typical applications include part localization, automatic calibration, and reverse engineering. Integrating a structured-light vision sensor with a CMM (Coordinate Measuring Machine) enhances the high-precision coordinate measurement capability. In this paper a curvature-based adaptive sampling approach and an evaluation index for the sampling precision are presented. The matching and subdividing algorithm for generating matrix-type mesh data from sample points is described. The methodology for registering and merging the measured data from multiple viewpoints to model the free-form surface is also presented. Given an initial coordinate rotation matrix R and translation vector T, the different viewpoints can be transformed into a unique frame of reference. By introducing a special coordinate of 3D space, the registered data are divided into meshes that cover the whole surface of the object. An application example of shoe modeling illustrates the advantages.
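The idea of curvature-based adaptive sampling, sampling densely where the surface bends and sparsely where it is flat, can be sketched in 2D with a discrete turning angle as a stand-in for curvature; the paper's actual sampling criterion and precision index are not reproduced here.

```python
import math

def turning_angle(p0, p1, p2):
    # Exterior angle at p1 of the polyline p0-p1-p2: a discrete
    # stand-in for curvature (larger angle = more sharply curved).
    a1 = math.atan2(p1[1] - p0[1], p1[0] - p0[0])
    a2 = math.atan2(p2[1] - p1[1], p2[0] - p1[0])
    return abs(math.atan2(math.sin(a2 - a1), math.cos(a2 - a1)))

def adaptive_sample(points, tol):
    """Keep the endpoints plus interior points where the discrete
    curvature exceeds tol, so flat regions are sampled sparsely and
    curved regions densely -- the core of curvature-based sampling."""
    kept = [points[0]]
    for i in range(1, len(points) - 1):
        if turning_angle(points[i - 1], points[i], points[i + 1]) > tol:
            kept.append(points[i])
    kept.append(points[-1])
    return kept

# An L-shaped polyline: only the corner survives from the interior.
line = [(x, 0.0) for x in range(5)] + [(4.0, y) for y in range(1, 4)]
corner = adaptive_sample(line, tol=0.1)
```

Extending the criterion to surfaces means estimating principal curvatures per patch and adjusting the sample grid accordingly.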
This paper describes the means for generating rover localization information for NASA/JPL's FIDO rover. This is accomplished using a sensor fusion framework which combines wheel odometry with sun sensor and inertial navigation sensors to provide an integrated state estimate for the vehicle's position and orientation relative to a fixed reference frame. This paper describes two separate state estimation approaches built around the extended Kalman filter formulation and the Covariance Intersection formulation. Experimental results from runs in JPL's MarsYard are presented in order to compare the state estimates generated using each formulation.
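The heart of the extended Kalman filter formulation is the measurement update; the snippet below shows one such update fusing a heading observation (as a sun sensor might provide) into a pose estimate, with an already-linearized measurement model. The numbers are illustrative only.

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    """One Kalman measurement update (the EKF update with an
    already-linearized measurement Jacobian H)."""
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# Fuse a heading measurement into an odometry-based pose estimate.
x = np.array([0.0, 0.0, 0.1])          # x, y, heading from wheel odometry
P = np.diag([0.5, 0.5, 0.2])
H = np.array([[0.0, 0.0, 1.0]])        # the sensor observes heading only
x, P = kalman_update(x, P, np.array([0.3]), H, R=np.array([[0.2]]))
# With equal variances, the heading moves halfway toward the measurement
# and its variance halves.
```

Covariance Intersection replaces the gain computation when the correlation between the estimates being fused is unknown.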
The goal of concurrent mapping and localization (CML) is to enable a mobile robot to build a map of an unknown environment, while simultaneously using this map to navigate. This paper discusses some of the challenges that are encountered in the development of practical real-time implementations of CML for one or more autonomous mobile robots operating in large-scale environments.
This paper presents current work on decentralized data fusion (DDF) applied to multiple unmanned aerial vehicles. The benefits of decentralizing algorithms, particularly in this field, are enormous. At a mission level, multiple aircraft may fly together sharing information with one another in order to produce more accurate and coherent estimates, and hence increase the chances of success. At the single platform level, algorithms may be decentralized throughout the airframe reducing the probability of catastrophic failure by eliminating the dependency on a particular central processing facility. To this end, a complex simulator has been developed to test and evaluate decentralized picture compilation, platform localization and simultaneous localization and map building (SLAM) algorithms which are to be implemented on multiple airborne vehicles. This simulator is both comprehensive and modular, enabling multiple platforms carrying multiple distributed sensors to be modeled and interchanged easily. The map building and navigation algorithms interface with both the simulator and the real airframe in exactly the same way in order to evaluate the actual flight code as comprehensively as possible. Logged flight data can also be played back through the simulator to the navigation routines instead of simulated sensors. This paper presents the structure of both the simulator and the algorithms that have been developed. An example of decentralized map building is included, and future work in decentralized navigation and SLAM systems is discussed.
Decentralized systems require no central controller or center where information is fused or commands generated. Information theoretic ideas have been previously used to develop optimal fusion algorithms for decentralized sensing and data fusion systems. The work described in this paper aims to develop equivalent algorithms for the control of decentralized systems. The methods and algorithms described center on the use of mutual information gain as a measure in choosing control actions. Two example problems are described; area coverage for purposes of surveillance and navigation, and sensor management for cuing and hand-off operations. The motivation for this work is the control of multiple unmanned air vehicles (UAVs).
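The decision rule, choosing the sensing action with the largest expected mutual information gain, has a closed form in the linear-Gaussian case: half the log-ratio of prior to posterior covariance determinants. A minimal sketch follows; the action set and numbers are invented for illustration.

```python
import numpy as np

def info_gain(P_prior, H, R):
    """Expected mutual information (nats) of a linear-Gaussian
    observation z = Hx + v, v ~ N(0, R)."""
    S = H @ P_prior @ H.T + R
    K = P_prior @ H.T @ np.linalg.inv(S)
    P_post = (np.eye(len(P_prior)) - K @ H) @ P_prior
    return 0.5 * np.log(np.linalg.det(P_prior) / np.linalg.det(P_post))

def best_action(P_prior, actions):
    # Pick the observation action with the largest expected gain --
    # the simplest form of information-theoretic sensor management.
    return max(actions, key=lambda a: info_gain(P_prior, *actions[a]))

P = np.diag([4.0, 0.1])   # state 0 is far more uncertain than state 1
actions = {
    "look_at_state0": (np.array([[1.0, 0.0]]), np.array([[0.5]])),
    "look_at_state1": (np.array([[0.0, 1.0]]), np.array([[0.5]])),
}
choice = best_action(P, actions)
# Observing the uncertain state yields the larger information gain.
```

Because mutual information is additive across independent observations, such gains can be computed and compared locally, which is what makes the measure attractive for decentralized control.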
Many of the future missions for mobile robots demand multi-robot systems which are capable of operating in large environments for long periods of time. One of the most critical capabilities is the ability to localize: a mobile robot must be able to estimate its own position and to consistently transmit this information to other robots and control sites. Although state-of-the-art GPS is capable of yielding unmatched performance over large areas, it is not applicable in many environments (such as within city streets, under water, indoors, beneath foliage, or on extraterrestrial robotic missions) where mobile robots are likely to become commonplace. A widely researched alternative is Simultaneous Localization and Map Building (SLAM): the vehicle constructs a map and, concurrently, estimates its own position. However, most approaches are non-scalable (the storage and computational costs vary quadratically and cubically with the number of beacons in the map) and can be used with multiple robotic vehicles only with a great degree of difficulty. In this paper, we describe the development of a scalable, multiple-vehicle SLAM system. This system, based on the Covariance Intersection algorithm, is scalable: its storage and computational costs are linearly proportional to the number of beacons in the map. Furthermore, it scales to multiple robots: each has complete freedom to exchange partial or full map information with any other robot at any time step. We demonstrate the real-time performance of this system in a scenario of 15,000 beacons.
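Covariance Intersection itself is compact: it fuses two estimates whose cross-correlation is unknown by taking a convex combination of their information (inverse-covariance) forms. A minimal sketch with a fixed weight follows; in practice the weight omega is optimized, for example to minimize the trace of the fused covariance.

```python
import numpy as np

def covariance_intersection(xa, Pa, xb, Pb, omega=0.5):
    """Fuse two estimates with unknown cross-correlation.

    CI combines the information forms with weight omega in [0, 1];
    the result is consistent for ANY true correlation, which is what
    lets robots exchange partial maps freely.  omega is fixed here
    for brevity.
    """
    Ia, Ib = np.linalg.inv(Pa), np.linalg.inv(Pb)
    P = np.linalg.inv(omega * Ia + (1 - omega) * Ib)
    x = P @ (omega * Ia @ xa + (1 - omega) * Ib @ xb)
    return x, P

x, P = covariance_intersection(
    np.array([1.0]), np.array([[2.0]]),
    np.array([3.0]), np.array([[2.0]]))
# Equal weights and equal covariances put the fused estimate at 2.0;
# unlike a Kalman fusion, CI does not shrink the covariance here,
# which is exactly the conservatism that keeps it consistent.
```

Because each beacon estimate can be fused independently, storage stays linear in the number of beacons, the scalability property the abstract claims.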
In this paper we present algorithms for planning the motion of robotic Molecules on a substrate of other Molecules. Our approach is to divide self-reconfiguration planning into three levels: trajectory planning, configuration planning, and task-level planning. This paper focuses on algorithms for configuration planning, moving a set of Molecules from a starting configuration to a goal configuration. We describe our scaffold planning approach in which the interior of a structure contains 3D tunnels. This allows Molecules to move within a structure as well as on the surface, simplifying Molecule motion planning as well as increasing parallelism. In addition, we present a new gripper-type connection mechanism for the Molecule which does not require power to maintain connections.
In this manuscript, we discuss new solutions for the mechanical design and motion planning of a class of 3D modular self-reconfigurable robotic systems, namely I-Cubes. This system is a bipartite collection of active links that provide motions for self-reconfiguration, and cubes acting as connection points. The links are three-degree-of-freedom manipulators that can attach to and detach from the cube faces. The cubes can be positioned and oriented using the links. These capabilities enable the system to change its shape and perform locomotion tasks over difficult terrain. This paper describes a scaled-down version of the previously described system and details the new design and manufacturing approaches. The initially designed algorithms for the motion planning of I-Cubes are improved to provide better results. Results of our tests are given and issues related to motion planning are discussed. The user interfaces designed for the control of the system and for algorithm evaluation are also described.
Modular reconfigurable robots can change their connectivity from one arrangement to another. Performing this change involves a difficult planning problem. We study this problem by representing robot configurations as graphs, and giving an algorithm that can transform any configuration of a robot into any other in O(log n) steps. Here n is the number of modules which can attach to more than two other modules. We also show that O(log n) is best possible.
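The O(log n) bound rests on a divide-and-conquer observation: if sub-assemblies can be merged pairwise in parallel, the number of pieces halves each round. The toy round-counter below illustrates only this complexity argument, not the paper's algorithm.

```python
import math

def parallel_merge_rounds(n):
    """Count the rounds needed when n sub-assemblies are merged
    pairwise in parallel -- the divide-and-conquer idea behind an
    O(log n) step bound."""
    rounds = 0
    while n > 1:
        n = math.ceil(n / 2)   # every round halves the number of pieces
        rounds += 1
    return rounds

r8 = parallel_merge_rounds(8)      # 8 -> 4 -> 2 -> 1: three rounds
r100 = parallel_merge_rounds(100)  # seven rounds, i.e. ceil(log2 100)
```

The matching lower bound in the paper shows that no reconfiguration scheme can beat this logarithmic number of parallel steps in general.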
Future planetary exploration missions will use rovers to perform tasks in rough terrain. Working in such conditions, a rover could become trapped due to loss of wheel traction, or even tip over. The Jet Propulsion Laboratory has developed a new rover with the ability to reconfigure its structure to improve tipover stability and ground traction. This paper presents a method to control this reconfigurability to enhance system tipover stability. A stability metric is defined and optimized online using a quasi-static model. Simulation and experimental results for the Sample Return Rover (SRR) are presented. The method is shown to be practical and yields significantly improved stability in rough terrain.
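One simple quasi-static stability measure, used here only as a stand-in for the paper's metric, is the distance from the projected center of gravity to the nearest edge of the support polygon; reconfiguring to widen the stance directly increases it.

```python
import math

def stability_margin(cg_xy, support_polygon):
    """Quasi-static tipover margin: the smallest signed distance from
    the projected center of gravity to a support-polygon edge
    (positive = inside).  The polygon must be listed counterclockwise."""
    margin = float("inf")
    n = len(support_polygon)
    for i in range(n):
        (x1, y1), (x2, y2) = support_polygon[i], support_polygon[(i + 1) % n]
        ex, ey = x2 - x1, y2 - y1
        # Signed distance of the CG from the edge line (positive when
        # the CG lies to the left of the edge, i.e. inside a CCW polygon).
        d = (ex * (cg_xy[1] - y1) - ey * (cg_xy[0] - x1)) / math.hypot(ex, ey)
        margin = min(margin, d)
    return margin

# Wheel contact points for a narrow and a widened stance (illustrative).
narrow = [(-0.3, -0.5), (0.3, -0.5), (0.3, 0.5), (-0.3, 0.5)]
wide   = [(-0.6, -0.5), (0.6, -0.5), (0.6, 0.5), (-0.6, 0.5)]
# Widening the wheelbase raises the margin from 0.3 to 0.5.
```

Optimizing a metric of this kind online over the rover's reconfiguration degrees of freedom is the control idea the abstract describes.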
The Communications Research Laboratory has been studying the inspection technology needed for the first step of an Orbital Maintenance System (OMS) for maintaining space systems by inspecting satellites, re-orbiting useless satellites, and performing simple repairs on satellites in orbit. OMS will use a modular manipulator for remote inspection. One of the most important issues concerning control of the modular manipulator is a determination process that utilizes its decentralized control architecture. In this paper, we introduce a decentralized kinematics control algorithm that automatically adapts to partial faults and reconfigures itself.
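The flavor of fault-adaptive decentralized kinematics control can be sketched with Jacobian-transpose control of a planar arm in which each joint module updates only itself from locally computable information, and a failed module simply stops moving. This is an illustration of the idea, not the paper's algorithm; all numbers are invented.

```python
import math

def fk(thetas, lengths):
    # Planar forward kinematics: end-effector position.
    x = y = phi = 0.0
    for th, l in zip(thetas, lengths):
        phi += th
        x += l * math.cos(phi)
        y += l * math.sin(phi)
    return x, y

def jacobian_columns(thetas, lengths):
    # Column i depends only on joint i's position and the end effector,
    # so each module can compute its own column locally.
    pts, x, y, phi = [(0.0, 0.0)], 0.0, 0.0, 0.0
    for th, l in zip(thetas, lengths):
        phi += th
        x += l * math.cos(phi)
        y += l * math.sin(phi)
        pts.append((x, y))
    ee = pts[-1]
    return [(-(ee[1] - pts[i][1]), ee[0] - pts[i][0])
            for i in range(len(thetas))]

def decentralized_reach(target, thetas, lengths, faulty,
                        steps=5000, gain=0.05):
    """Jacobian-transpose control where every joint module updates
    only itself; a faulted module stops actuating and the healthy
    modules absorb the error."""
    thetas = list(thetas)
    for _ in range(steps):
        x, y = fk(thetas, lengths)
        ex, ey = target[0] - x, target[1] - y
        for i, (jx, jy) in enumerate(jacobian_columns(thetas, lengths)):
            if i in faulty:
                continue                # failed module: no motion
            thetas[i] += gain * (jx * ex + jy * ey)
    return thetas

# Joint 0 has failed (stays at 0.1 rad), yet the arm still reaches.
sol = decentralized_reach((1.5, 1.0), [0.1, 0.2, 0.3],
                          [1.0, 1.0, 1.0], faulty={0})
```

Redundancy is what makes the adaptation possible: the remaining modules span enough of the workspace to compensate for the locked joint.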
We describe the results of several experiments aimed at understanding the performance, behavior, and limitations of Darwin2K, an automated system for robot configuration synthesis and optimization. Two design tasks are addressed: the design of a robot for walking along trusses in zero gravity, and of a manipulator for a coverage task. We explore the impact of several factors on synthesizer performance, including the set of robot components and assemblies available to the synthesizer, the set of metrics used to quantify robot performance, and the scope of the task on which robots are assessed. The meaning and impact of the experimental results are given, and we discuss potential improvements in the method of use of Darwin2K as well as future improvements to the system itself.
The problem addressed is the distributed reconfiguration of a metamorphic robotic system composed of any number of two-dimensional hexagonal modules from specific initial to specific goal configurations. The initial configuration considered is a straight chain of modules, while the goal configurations considered satisfy a more general admissibility condition. A centralized algorithm is described for determining whether an arbitrary goal configuration is admissible. The main result of the paper is a distributed algorithm for reconfiguring a straight chain into an admissible goal configuration. Several heuristics are proposed to improve the performance of the reconfiguration algorithm, and simulation results demonstrate their use.
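The paper's admissibility condition is not reproduced here, but connectivity is one obvious necessary property of any goal configuration, and checking it on a hexagonal lattice is a small, self-contained exercise. The sketch below uses axial coordinates and breadth-first search; the representation is ours, not the paper's.

```python
from collections import deque

# Neighbor offsets on a hexagonal lattice in axial (q, r) coordinates.
HEX_DIRS = [(1, 0), (0, 1), (-1, 1), (-1, 0), (0, -1), (1, -1)]

def is_connected(modules):
    """True if the given hex cells form a single connected component."""
    cells = set(modules)
    if not cells:
        return True
    start = next(iter(cells))
    seen = {start}
    queue = deque([start])
    while queue:
        q, r = queue.popleft()
        for dq, dr in HEX_DIRS:
            nbr = (q + dq, r + dr)
            if nbr in cells and nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return seen == cells
```

A straight chain such as `[(i, 0) for i in range(n)]`, the paper's initial configuration, trivially passes this check; the paper's full admissibility test additionally constrains the goal's shape so that the distributed reconfiguration can proceed without module collisions.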
While significant recent progress has been made in the development of mobile robots for planetary surface exploration, major challenges remain. These include increased autonomy of operation, traversal of challenging terrain, and fault tolerance over long, unattended periods of use. We have begun work that addresses some of these issues, with an initial focus on problems of high-risk access, that is, autonomous roving over highly variable, rough terrain. This is a dual problem of sensing the conditions that require rover adaptation, and controlling the rover's actions so as to implement that adaptation in a well-understood way (relative to metrics of rover stability, traction, power utilization, etc.). Our work progresses along several related technical lines: 1) development of a fused state estimator that robustly integrates internal rover state and externally sensed environmental information to provide accurate configuration information; 2) kinematic and dynamic stability analysis of such configurations, so as to predict when a change of control regime is needed (e.g., traction control, active c.g. positioning, rover shoulder stance/pose); 3) definition and implementation of a behavior-based control architecture and action-selection strategy that autonomously sequences multi-level rover controls and reconfiguration. We report on these developments, covering both software simulations and hardware experiments. Experiments include reconfigurable control of JPL's Sample Return Rover geometry and motion during its autonomous traverse over simulated Mars terrain.
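The fused state estimator described above combines internal and external sensing into one configuration estimate. A minimal sketch of the principle, assuming independent scalar estimates, is inverse-variance-weighted fusion; the actual estimator in the paper is of course richer, and the numbers and names below are illustrative only.

```python
def fuse(est_a, var_a, est_b, var_b):
    """Inverse-variance-weighted fusion of two independent scalar estimates.

    The lower-variance estimate gets the larger weight, and the fused
    variance is smaller than either input variance.
    """
    w = var_b / (var_a + var_b)
    est = w * est_a + (1.0 - w) * est_b
    var = var_a * var_b / (var_a + var_b)
    return est, var

# Example: fuse an internal (odometry-like) rover pitch estimate with an
# external (terrain-sensing) one into a single configuration estimate.
pitch, pitch_var = fuse(0.12, 0.04, 0.18, 0.02)
```

The stability analysis in item 2) then consumes such fused configuration estimates to decide when to switch control regimes.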
We present a novel type of modular robotic system capable of both autonomous shape reconfiguration and robotic motion generation. We have developed hardware modules and examined their basic mechanical functions and reconfiguration. A simulation system for the modular robotic system has also been developed to design the reconfiguration sequence and motion for a cluster of modules.
We provide an overview of the software components underlying four different tasks performed by a heterogeneous group of mobile robots. These tasks are drawn from three domains: 1. Robot Competitions (Robot Soccer and Find Life on Mars), 2. Security and Surveillance (Perimeter Protection), and 3. Building Environmental Models (Multi-Robot Navigation and Mapping). We show how these (seemingly unrelated) tasks, once decomposed into sets of cooperating behaviors, lead to similar solutions in their modular breakdown, thereby yielding high reusability. Although our collection of robot platforms is notably diverse in mechanics, sensing, and computational capabilities, cross-platform migration and extension of existing behavior assemblages require minimal programming effort.
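The reuse the abstract describes comes from composing tasks out of small behaviors under a fixed arbitration scheme. A minimal priority-arbitration sketch of that pattern follows; the behaviors, thresholds, and command tuples are hypothetical, not the paper's actual assemblages.

```python
def avoid(percept):
    # Highest priority: turn away when an obstacle is close (threshold hypothetical).
    if percept.get("obstacle_dist", float("inf")) < 0.3:
        return ("turn", 90)
    return None

def seek_goal(percept):
    # Steer toward a goal bearing when one is known.
    if "goal_bearing" in percept:
        return ("turn", percept["goal_bearing"])
    return None

def wander(percept):
    # Default behavior: always fires.
    return ("forward", 0.2)

def arbitrate(behaviors, percept):
    """Return the command of the highest-priority behavior that fires."""
    for b in behaviors:                  # ordered from highest to lowest priority
        cmd = b(percept)
        if cmd is not None:
            return cmd
    return ("stop",)

# An assemblage is just an ordered list; porting to a new platform or task
# means swapping or reordering behaviors, not rewriting the architecture.
ASSEMBLAGE = [avoid, seek_goal, wander]
```

Under this decomposition, a soccer task and a perimeter-protection task differ mainly in which behaviors populate the list, which is the source of the cross-task reusability claimed above.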
Technical details of a laboratory-based robotic system for researching decentralized Simultaneous Localization and Map building (SLAM) are provided. The main components of the system are Pioneer (ActivMedia) robots, a laboratory environment for mapping, a laser tracking system for testing SLAM accuracy, and a suite of SLAM software algorithms. The system is used to provide a demonstration and initial practical results of decentralized multiple-platform SLAM. The paper concludes that a useful system has been set up for researching this technology area. Further, the demonstration highlights important benefits of decentralized multiple-platform SLAM over a single-platform approach: an increase in map accuracy, an improvement in the completeness and timeliness of the map, and an increase in platform localization accuracy even for a platform that was not extrinsically sensed. Future research areas are discussed.
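One standard ingredient of decentralized multi-platform fusion, given that cross-correlations between platforms' estimates are generally unknown, is covariance intersection. The scalar sketch below shows how two platforms' estimates of the same landmark coordinate can be fused consistently; this is an illustrative textbook technique, not necessarily the algorithm used in this particular system.

```python
def covariance_intersection(x_a, p_a, x_b, p_b, w=0.5):
    """Scalar covariance intersection with mixing weight w in (0, 1).

    Unlike naive inverse-variance fusion, the result remains consistent
    even when the two estimates share unknown cross-correlation, which is
    the usual situation when platforms exchange map information.
    """
    info = w / p_a + (1.0 - w) / p_b     # fused information (inverse variance)
    p = 1.0 / info
    x = p * (w * x_a / p_a + (1.0 - w) * x_b / p_b)
    return x, p

# Two platforms' estimates of the same landmark's x-coordinate:
x, p = covariance_intersection(2.0, 0.5, 2.4, 0.3)
```

The fused variance never exceeds the worse of the two inputs, which is the mechanism behind the map-accuracy improvement the demonstration reports.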