KEYWORDS: Sensors, Robotics, Network architectures, Control systems, Computer architecture, Robotic systems, Robot vision, Information operations, Mars, Process control
In this paper we present the Networked Robotics approach to dynamic robotic architecture creation. Building on our prior work, we highlight the ease with which system and architecture creation can be moved from the single-robot domain to the cooperative/multiple-robot domain; indeed, under the Networked Robotics framework there is no difference between the two: a multiple, cooperative robotic architecture simply emerges from a richer network environment (the module pool). Essentially, task-driven architectures are instantiated on an as-needed basis, allowing conceptualised designs to be run wherever a suitable framework (i.e. a module pool) exists. Using a basic scenario, that of mapping an environment, we show how radically different architectures for achieving the same task can emerge from the same building blocks. We highlight the flexibility and robustness of the instantiated architectures and the experimental freedom inherent in the approach. The approach has been implemented and tested experimentally.
During the last decade, there has been significant progress toward a supervised autonomous robotic capability for remotely controlled scientific exploration of planetary surfaces. While planetary exploration potentially encompasses many elements ranging from orbital remote sensing to subsurface drilling, the surface robotics element is particularly important to advancing in situ science objectives. Surface activities include direct characterization of the geology, mineralogy, atmosphere and other descriptors of current and historical planetary processes, and ultimately the return of pristine samples to Earth for detailed analysis. Toward these ends, we have conducted a broad program of research on robotic systems for scientific exploration of the Mars surface with minimal remote intervention. The goal is to enable highly productive semi-autonomous science operations in which available mission time is concentrated on robotic operations rather than on up- and down-link delays. Results of our work include prototypes for landed manipulators, long-ranging science rovers, sampling/sample return mobility systems, and more recently, terrain-adaptive reconfigurable/modular robots and closely cooperating multiple rover systems. The last of these are intended to facilitate deployment of planetary robotic outposts for an eventual sustained human-robot scientific presence. We overview our progress in these related areas of planetary robotics R&D, spanning 1995 to the present.
In this paper we describe the application of the MARS model, for modelling and reasoning about modular robot systems, to modular manipulators. The MARS model provides a mechanism for describing robotic components and a method for reasoning about the interaction of these components in modular manipulator configurations. It specifically aims to articulate functionality that is a property of the whole manipulator, but which is not represented in any one component. This functionality arises, in particular, through the capacity for modules to inherit functionality from each other. The paper also uses the case of modular manipulators to illustrate a number of features of the MARS model, including the use of abstract and concrete module classes, and to identify some current limitations of the model. The latter provide the basis for ongoing development of the model.
TORUS (for Toys Operated Remotely for Understanding Science) is an Internet-based educational project aimed at exploiting toys to create interesting and challenging robotics demonstrations and problem scenarios for students. In this paper we describe the implementation and evaluation of the TORUS "Construction Site", which aims to demonstrate key features of remote teleoperation via a multi-user cooperative task scenario. In this scenario three vehicles (toys) are remotely operated by three separate users to collectively complete a simple cyclical task. The users have separate controls for their respective devices and a choice of three camera views. We describe the setup of the site, the control interfaces and the overall architecture of the Construction Site scenario. We report on an initial evaluation of the scenario, the enhancements already implemented and others that are planned. One of the key features of TORUS is the use of toys as the remote "robotic" devices. The use of toys removes the usual costs and the safety requirements normally associated with using real robot devices. We discuss the scope for further development of this approach and its potential for supporting both introductory and advanced robotics and artificial intelligence education via the Internet.
In this paper we present the concept of networked robotics. The networked robotics concept develops the viewpoint that robotic systems are collections of resources that are resident at identified locations on a network and, by means of an appropriate configuration process, can be connected together into a system that can perform some desired robotic task. Under this scheme a physical robot platform, a mobile robot for example, is modeled as a set of resources, and a set of such platforms contributes their resources to a resource pool. Configuration patterns define the way in which these resources are configured to create a distributed architecture, which may span more than one physical robot platform. The emphasis on distributed, configurable resources makes the concept of networked robotics particularly relevant to the areas of modular and cooperative robotics. In this paper we identify the scope of networked robotics and the key research issues it raises, including the definition of resources, their configuration into control architectures and the relation of these configuration patterns to task models. We consider the relevance of networked robotics to other areas of robotics, in particular modular and cooperative robotics, and we explore technologies and tools that might support the networked robotics concept in practice.
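To make the resource-pool idea concrete, the following is a minimal Python sketch of platforms registering resources into a shared pool and a configuration pattern binding selected resources into a task architecture. All class and function names are illustrative assumptions, not the actual networked robotics API.

    # Sketch: platforms contribute resources to a pool; a configuration
    # pattern wires selected resources into a (possibly cross-platform)
    # architecture. Names are illustrative, not the authors' API.

    class Resource:
        def __init__(self, name, kind, platform):
            self.name, self.kind, self.platform = name, kind, platform

    class ModulePool:
        def __init__(self):
            self.resources = []

        def register(self, resource):
            self.resources.append(resource)

        def find(self, kind):
            return [r for r in self.resources if r.kind == kind]

    def configure(pool, pattern):
        """Instantiate an architecture: pattern maps role -> resource kind."""
        # Takes the first matching resource per role; a real configuration
        # process would negotiate over alternatives.
        return {role: pool.find(kind)[0] for role, kind in pattern.items()}

    pool = ModulePool()
    pool.register(Resource("sonar0", "ranging", platform="robotA"))
    pool.register(Resource("drive0", "locomotion", platform="robotB"))

    # The same pattern may bind resources on one robot or across several.
    mapper = configure(pool, {"sense": "ranging", "move": "locomotion"})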
While significant recent progress has been made in the development of mobile robots for planetary surface exploration, major challenges remain. These include increased autonomy of operation, traverse of challenging terrain, and fault-tolerance under long, unattended periods of use. We have begun work which addresses some of these issues, with an initial focus on problems of high-risk access, that is, autonomous roving over highly variable, rough terrain. This is a dual problem of sensing those conditions which require rover adaptation, and controlling the rover actions so as to implement this adaptation in a well understood way (relative to metrics of rover stability, traction, power utilization, etc.). Our work progresses along several related technical lines: 1) development of a fused state estimator which robustly integrates internal rover state and externally sensed environmental information to provide accurate configuration information; 2) kinematic and dynamic stability analysis of such configurations so as to determine predictors for a needed change of control regime (e.g., traction control, active c.g. positioning, rover shoulder stance/pose); 3) definition and implementation of a behavior-based control architecture and action-selection strategy which autonomously sequences multi-level rover controls and reconfiguration. We report on these developments, covering both software simulations and hardware experimentation. Experiments include reconfigurable control of JPL's Sample Return Rover geometry and motion during its autonomous traverse over simulated Mars terrain.
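As an illustration of the first and second technical lines, here is a minimal sketch of complementary-filter style fusion of an internal rate estimate with an externally derived attitude estimate, plus a stability predicate of the kind that could trigger reconfiguration. The filter form, gains and thresholds are assumptions for illustration; the actual estimator is considerably richer.

    # Sketch: blend an internal (proprioceptive) rate estimate with an
    # external (sensed) attitude correction. Constants are illustrative.

    def fuse_pitch(pitch_prev, gyro_rate, accel_pitch, dt, alpha=0.98):
        """Complementary filter: integrated gyro rate + absolute correction."""
        predicted = pitch_prev + gyro_rate * dt           # internal propagation
        return alpha * predicted + (1 - alpha) * accel_pitch  # external correction

    def needs_reconfiguration(pitch_deg, roll_deg, limit_deg=25.0):
        """Stability predicate of the kind used to switch control regime."""
        return abs(pitch_deg) > limit_deg or abs(roll_deg) > limit_deg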
KEYWORDS: Robots, Sensors, Robotic systems, Cameras, Systems modeling, Robotics, Software frameworks, Signal processing, Process control, Data modeling
Modular robotics considers robots to be composed of distinct functional modules, which may be combined to perform tasks. Module combination allows functionality to be tailored to a task, but has a number of requirements. The notion of a 'module' must be clearly defined, and functionality specified in an unambiguous manner. The nature of the combination must be defined according to specific relationships between modules, and the consequences arising from combination, for modules and for the robot as a whole, need to be considered. The MARS model has been developed to allow reasoning about module combination, in terms of the ways in which modules may be combined and the consequences that may arise from that combination. This paper defines a set of consequences which may arise during module combination.
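A minimal sketch of this style of reasoning, assuming a simplified provides/requires view of module functionality (an illustration only, not the MARS formalism itself): combining two modules yields consequences such as satisfied requirements, unmet requirements and duplicated capability.

    # Sketch: modules declare provided and required functionality; combining
    # them produces whole-robot consequences. Names are hypothetical.

    class Module:
        def __init__(self, name, provides, requires):
            self.name = name
            self.provides = set(provides)
            self.requires = set(requires)

    def combine(a, b):
        provided = a.provides | b.provides
        unmet = (a.requires | b.requires) - provided   # unsatisfied requirements
        conflicts = a.provides & b.provides            # duplicated capability
        return {"provides": provided, "unmet": unmet, "conflicts": conflicts}

    wrist = Module("wrist", provides={"orient"}, requires={"power"})
    base = Module("base", provides={"power", "position"}, requires=set())
    print(combine(wrist, base))  # whole-manipulator functionality emerges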
KEYWORDS: Sensors, Systems modeling, Dynamical systems, Robotic systems, Environmental sensing, Complex systems, Signal detection, Nonlinear dynamics, Sensing systems, Control systems
In this paper we investigate a model for self-organizing modular robotic systems based upon dynamical systems theory. Sonar sensing is used as a case study, and the effects of nonlinear interactions between sonar sensing modules are examined. We present and analyze an initial set of results based upon an implementation of the model in simulation. The results show that the sonar sensors organize the relative phase of their sampling in response to changes in the demand placed on them for sensory data. Efficient sampling rates are achieved by the system adapting to take advantage of features in the environment. We investigate the types of phase patterns that emerge, and examine their relationship with symmetries present in the environment.
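The flavour of the model can be conveyed with a coupled phase-oscillator sketch: each sonar module is a phase oscillator, and repulsive pairwise coupling spreads the modules' sampling phases across the cycle. The coupling form and constants below are illustrative assumptions, not the paper's exact equations.

    import math

    # Sketch: repulsive all-to-all phase coupling spreads sonar sampling
    # phases apart, approximating efficient interleaved sampling.

    def step(phases, omega, k, dt):
        n = len(phases)
        new = []
        for p in phases:
            # sin(p - q) with k > 0 pushes phases apart (anti-synchronizing).
            coupling = sum(math.sin(p - q) for q in phases) / n
            new.append((p + (omega + k * coupling) * dt) % (2 * math.pi))
        return new

    phases = [0.1, 0.2, 0.3, 0.4]          # four sonar modules, nearly in phase
    for _ in range(2000):
        phases = step(phases, omega=1.0, k=0.5, dt=0.01)
    print(sorted(phases))                   # phases spread toward even spacing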
In this paper we propose a model for agent-based control within teleoperation environments and illustrate the role of agents in providing automated assistance for task viewing. The paper reviews existing approaches to viewing support, which tend to focus on augmented displays. We outline our approach to providing viewing support, based on 'visual acts'. Agent-based architectures are reviewed and their application to viewing support under the visual acts model is presented. Communication is a key requirement for agent architectures. We present a system, channels, which we are currently developing to support the implementation of the agent model.
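A generic sketch of channel-based agent communication of the kind described (not the authors' channels system itself): agents exchange messages over named, thread-safe channels.

    import queue
    import threading

    # Sketch: a named channel decouples the agent requesting a view from
    # the agent controlling the camera. Names are illustrative.

    channels = {"viewing": queue.Queue()}

    def camera_agent():
        request = channels["viewing"].get()   # block until a visual act arrives
        print(f"camera agent: reposition for {request}")

    threading.Thread(target=camera_agent).start()
    channels["viewing"].put("align-view-with-approach-axis")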
This paper explores the design of robot systems to take advantage of nonlinear dynamical systems models, specifically symmetry-breaking phenomena, to self-organize in response to task and environment demands. Recent research in the design of robotic systems has stressed modular, adaptable systems operating under decentralized and distributed control architectures. Cooperative and emergent behavioral structures can be built on these modules by exploiting various forms of communication and negotiation strategies. We focus on the design of individual modules and their cooperative interaction. We draw on nonlinear dynamical system models of human and animal behavior to motivate issues in the design of robot modules and systems. Sonar sensing systems comprising a ring of sonar sensors are used to illustrate the ideas within a networked robotics context, where distributed sensing modules located on multiple robots can interact cooperatively to scan an environment.
Object orientation has been widely used in the development of robotic systems for its benefits of modularity and reusability. Modular robotics systems are designed to be flexible, reusable, easily extensible sets of robotic resources that may be structured in various fashions to solve tasks. We describe the role of object-oriented methods in the development of a modular robotic system and show how such a system supports collaborative working through a networked laboratory environment. We present an architectural framework for modular robotics, which employs and emphasizes object-oriented techniques.
Visual acts are patterns of viewing displayed by an operator carrying out a remote manipulation operation. Automated camera control under the guidance of these visual acts means that the operator can concentrate on the manipulation aspect of the task. Initial theoretical studies have suggested an approach to deriving visual acts based on exploiting human perceptual models of visual discrimination. This paper reports on initial studies aimed at implementing an automated viewing system based on a multi-agent architecture. The paper reviews the automated viewing model we are proposing and explores the nature of the agent-based architectures that we are considering for the realization of the automated viewing system.
We report on work currently underway to put a robotics laboratory onto the Internet in support of teaching and research in robotics and artificial intelligence in higher education institutions in the UK. The project is called Netrolab. The robotics laboratory comprises a set of robotics resources including a manipulator, a mobile robot with an on-board monocular active vision head and a set of sonar sensing modules, and a set of laboratory cameras to allow the user to see into the laboratory. The paper will report on key aspects of the project aimed at using multimedia tools and object-oriented techniques to network the robotics resources and to allow them to be configured into complex teaching and experimental modules. The paper will outline the current developments of Netrolab and provide a perspective on the future development of networked virtual laboratories for research.
Conventional mobile robotic systems are 'stand-alone'. Program development involves loading programs into the mobile robot via an umbilical. Autonomous operation, in this context, means 'isolation': the user cannot interact with the program as the robot is moving around. Recent research in 'swarm robotics' has exploited wireless networks as a means of providing inter-robot communication, but the population is still isolated from the human user. In this paper we report on research we are conducting into the provision of mobile robots as resources on a local area computer network, thus breaking the isolation barrier. We are making use of new multimedia workstation and wireless networking technology to link the robots to the network in order to provide a new type of resource for the user. We model the robot as a set of resources and propose a client-server architecture as the basis for providing user access to the robots. We describe the types of resources each robot can provide and we outline the potential for cooperative robotics, human-robot cooperation, teleoperation and autonomous robot behavior within this context.
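A minimal sketch of the 'robot as networked resource' idea under a client-server model, assuming a toy line protocol in which a client sends a resource name and receives a reading; the resource names and protocol are illustrative assumptions.

    import socketserver

    # Sketch: a request handler maps resource names to robot functions,
    # exposing the robot's resources to clients on the network.

    def read_sonar():
        return "0.42"  # placeholder range reading in metres

    RESOURCES = {"sonar": read_sonar}

    class RobotHandler(socketserver.StreamRequestHandler):
        def handle(self):
            name = self.rfile.readline().decode().strip()   # e.g. "sonar"
            fn = RESOURCES.get(name)
            reply = fn() if fn else "ERR unknown resource"
            self.wfile.write((reply + "\n").encode())

    if __name__ == "__main__":
        # A client can then, e.g.: printf 'sonar\n' | nc <host> 9000
        with socketserver.TCPServer(("0.0.0.0", 9000), RobotHandler) as srv:
            srv.serve_forever()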
Humans use their senses, particularly vision, to interrogate the environment in search of information pertinent to the performance of a task. We say that the user has 'visual goals', and we associate 'visual acts' with these goals. Visual acts are patterns of 'looking' displayed in acquiring the information. In this paper we present a model for visual acts based on known features of the human visual perception system; to illustrate the model we use, as a case study, a task typical of mechanical manipulation operations. The model is based on human perceptual discrimination and is motivated by a query-based model of the observer.
In this paper we report on research we are carrying out on camera localization and control for remote viewing during teleoperation. We present an approach to the localization problem which exploits a model of the task environment to guide the selection of vision filters for picking out 'interesting' features. This research adapts the 'interest' operator ideas of Moravec within a model-based vision framework. We present initial results which demonstrate the utility of feature-sensitive interest operators for picking out key visual landmarks.
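For reference, a sketch of the classic Moravec interest operator that this research adapts: interest at a pixel is the minimum sum-of-squared-differences between a local window and its shifts in the four principal directions. The model-based, feature-sensitive weighting described above is not reproduced here.

    import numpy as np

    # Sketch of the Moravec interest operator: a pixel is interesting only
    # if the window around it differs from its neighbours in all directions.

    def moravec_interest(img, w=3):
        img = img.astype(float)
        h, wd = img.shape
        out = np.zeros_like(img)
        shifts = [(0, 1), (1, 0), (1, 1), (1, -1)]
        for y in range(w, h - w - 1):
            for x in range(w + 1, wd - w - 1):
                patch = img[y - w:y + w + 1, x - w:x + w + 1]
                ssds = []
                for dy, dx in shifts:
                    shifted = img[y - w + dy:y + w + 1 + dy,
                                  x - w + dx:x + w + 1 + dx]
                    ssds.append(((patch - shifted) ** 2).sum())
                out[y, x] = min(ssds)  # high only where all directions differ
        return out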
Workspace viewing in teleoperation systems is normally constrained by the fixed position of the cameras relative to the teleoperator. 'Virtual viewing' and 'free-flying' cameras reduce these constraints but alter the mapping between teleoperator control and perception, leading to reduced visual performance during teleoperation. We are investigating the effects of non-anthropomorphic viewing on teleoperation performance and its implications for the design of teleoperation systems. In this paper we present results from an initial set of experiments.
Being able to see the face of a speaker can improve speech recognition performance by as much as a shift from 20% to 80% intelligibility under certain circumstances. Lip movements provide a major source of visual cues in speech recognition. In our research we are concerned with locating, tracking, characterizing, and exploiting lip movements for this purpose. In this paper we focus on the first of these problems. Using a technique based on n-tuples, we locate the 'eye-nose-region' (ENR) of the face in images and infer the location of the mouth via a 'face model'. We describe this method in detail and present initial test results.
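A minimal sketch of an n-tuple classifier of the kind used for ENR location: each tuple samples a fixed set of pixel positions, training memorizes the binary patterns seen at those positions in positive windows, and a candidate window scores by how many tuples recognize their pattern. Window size, tuple count and threshold are illustrative assumptions.

    import random

    # Sketch: n-tuple classification by memorizing binary pixel patterns.

    random.seed(0)
    N_TUPLES, N_BITS, WIN = 50, 8, (16, 32)   # tuples, pixels per tuple, window h,w

    positions = [[(random.randrange(WIN[0]), random.randrange(WIN[1]))
                  for _ in range(N_BITS)] for _ in range(N_TUPLES)]
    memory = [set() for _ in range(N_TUPLES)]

    def pattern(window, tup):
        # Threshold the sampled pixels into an n-bit address.
        return tuple(1 if window[y][x] > 128 else 0 for y, x in tup)

    def train(window):                         # window is a positive (ENR) example
        for mem, tup in zip(memory, positions):
            mem.add(pattern(window, tup))

    def score(window):                         # count of tuples that recognize it
        return sum(pattern(window, tup) in mem
                   for mem, tup in zip(memory, positions))

    # Scanning score() over an image and taking the peak locates the ENR;
    # the mouth position then follows from the face model.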
An approach to the design of sensor-based robotic systems based on sensori-motor modules is proposed. These modules are motivated by a horizontal sensori-motor organization of the brain. Each module performs a specific function, which involves the extraction of some specific item of information from the environment. The proposed approach fits in with a task-driven approach to the design of sensor-based robotic systems. The sensori-motor modules are described and their composition and integration in the design of sensor-based robotic systems is discussed. The proposal offers the potential for the development of a systematic approach to the design of sensor-based robotic systems and the provision of a set of 'off the shelf' building blocks for their practical implementation.
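An illustrative composition, assuming a simple perceive/act pairing per module (all names are hypothetical): each sensori-motor module couples the extraction of one item of information from the environment to a specific motor competence, and a task-driven design composes such modules.

    # Sketch: a sensori-motor module as a perceive/act pair; composition
    # of such modules is the 'off the shelf' building-block idea above.

    class SensoriMotorModule:
        def __init__(self, name, perceive, act):
            self.name, self.perceive, self.act = name, perceive, act

        def step(self, environment):
            return self.act(self.perceive(environment))

    avoid = SensoriMotorModule(
        "avoid",
        perceive=lambda env: env["nearest_obstacle"],   # extract one item
        act=lambda d: "turn" if d < 0.5 else "forward", # specific competence
    )

    print(avoid.step({"nearest_obstacle": 0.3}))  # -> "turn"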
The brain is a complex perceptual-motor system. To assemble an understanding of this system we need some ploy to get a grip on this complexity. Beginning at the level of individual muscles, a functional model for organizing the brain is developed. This forms the basis for an approach to the design of intelligent robotic systems based on the idea of composing complex systems from primitive functional modules. This paper addresses the sensor fusion aspects of this composition. We identify perceptual components of primitive functional modules, and address the problems of sensor fusion when composing complex perceptual systems employing an array of environmental sensory modalities. The discussion is centered on the human perceptual-motor system.