Autonomous mobile robots rely on multiple sensors to perform a variety of tasks in a given environment. Different tasks may need different sensors to estimate different subsets of world state, and different sensors can cooperate in discovering common subsets of world state. This paper presents a new approach to multimodal sensor fusion using dynamic Bayesian networks and an occupancy grid. The environment in which the robot operates is represented with an occupancy grid, updated asynchronously with probabilistic data obtained from multiple sensors and combined using Bayesian networks. Each cell in the occupancy grid stores multiple probability density functions representing combined evidence for the identity, location, and properties of objects in the world. The occupancy grid also contains probabilistic representations of moving objects. Bayesian networks allow information from one modality to provide cues for interpreting the output of sensors in other modalities, and establishing correlations or associations between sensor readings or interpretations leads to learning the conditional relationships between them. Thus bottom-up, reflexive, or even accidentally obtained information can provide top-down cues for other sensing strategies. We present early results obtained for a mobile robot navigation task.
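As a concrete illustration of the kind of per-cell evidence combination the abstract describes, the following minimal Python sketch performs an asynchronous Bayesian (log-odds) update of one occupancy cell; the sensor-model probabilities and helper names are illustrative assumptions, not values from the paper.

    import numpy as np

    # Hypothetical sensor model: probability of a 'hit' reading given the
    # cell is occupied vs. free. These numbers are assumptions.
    P_HIT_OCC, P_HIT_FREE = 0.7, 0.2

    def update_cell(logodds, hit):
        # Bayes in log-odds form: posterior odds = prior odds * likelihood ratio.
        lr = P_HIT_OCC / P_HIT_FREE if hit else (1 - P_HIT_OCC) / (1 - P_HIT_FREE)
        return logodds + np.log(lr)

    # Readings from different modalities arrive asynchronously; each simply
    # adds its evidence to the cell's running log-odds.
    l = 0.0                                   # prior p(occupied) = 0.5
    for hit in [True, True, False, True]:
        l = update_cell(l, hit)
    print("p(occupied) =", 1 / (1 + np.exp(-l)))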
In this paper we compare the performance of a dead-reckoning system for robot navigation to a system using an extended Kalman filter (EKF). Dead-reckoning systems approximate position and orientation by feeding data (usually provided by local sensors) into the kinematic model of the vehicle. These systems are subject to many different sources of error. An EKF can combine the same information and compensate for most of these errors to yield a better estimate. Our simulation results using a simplified kinematic model of Rocky 7 [an experimental rover used in the Mars exploration program at the Jet Propulsion Laboratory (JPL)] show that an improvement in performance of up to 40% (position error) can be achieved. The local sensors used are wheel encoders, a steering angle potentiometer, and a gyroscope. Incorporating global sensor measurements can drastically increase the accuracy of the estimate. The lack of GPS or a usable magnetic field on Mars narrows our choices for global localization. Landmarks such as the sun can be used as natural beacons (reference points for absolute measurements). A sun sensor (SS) that measures the absolute orientation of the rover has been built by Lockheed Martin and is now part of the sensor suite of Rocky 7. The SS measurement is crucial for the estimation filter, and we show that the accuracy of the estimation decreases exponentially as the frequency of the SS data fed to the EKF decreases.
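To make the estimator structure concrete, here is a minimal EKF sketch on a planar state [x, y, theta] with odometry-driven prediction and an intermittent absolute-heading update standing in for the sun-sensor fix; this is not the Rocky 7 model, and all noise values are illustrative.

    import numpy as np

    def predict(x, P, v, w, dt, Q):
        # Propagate pose with the unicycle model; F is its Jacobian.
        th = x[2]
        x = x + np.array([v * dt * np.cos(th), v * dt * np.sin(th), w * dt])
        F = np.array([[1, 0, -v * dt * np.sin(th)],
                      [0, 1,  v * dt * np.cos(th)],
                      [0, 0, 1]])
        return x, F @ P @ F.T + Q

    def update_heading(x, P, z, r):
        # Absolute-heading measurement (sun-sensor stand-in): z = theta + noise.
        H = np.array([[0.0, 0.0, 1.0]])
        S = H @ P @ H.T + r
        K = P @ H.T / S
        x = x + (K * (z - x[2])).ravel()
        return x, (np.eye(3) - K @ H) @ P

    x, P = np.zeros(3), 0.01 * np.eye(3)
    Q = np.diag([1e-4, 1e-4, 1e-5])
    for k in range(100):
        x, P = predict(x, P, v=0.1, w=0.01, dt=0.1, Q=Q)
        if k % 10 == 0:            # heading fix arrives only occasionally
            x, P = update_heading(x, P, z=0.01 * 0.1 * (k + 1), r=1e-4)

Lowering how often update_heading is called mimics reducing the rate of sun-sensor data fed to the EKF.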
This paper addresses the multisensor estimation problem for both linear and nonlinear systems in a fully connected decentralized sensing architecture. The sensor data fusion problem is identified and the case for decentralized architectures, rather than hierarchical or centralized ones, is made. Fully connected decentralized estimation algorithms in both state and information spaces are then developed. The intent is to show that decentralized estimation is feasible and to demonstrate the advantages of information space over state space. The decentralization procedure is then repeated for the extended Kalman filter and extended information filter to produce decentralized filters for nonlinear systems. The four filters are compared and contrasted. In appraising the algorithms, the problems associated with the requirement for a fully connected topology are identified.
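The appeal of the information form for decentralized fusion is that sensor contributions are additive, so fully connected nodes can simply exchange and sum them. A small sketch, with made-up values:

    import numpy as np

    # Each node contributes i_k = H^T R^-1 z and I_k = H^T R^-1 H; the fused
    # posterior in information form is just the sum over nodes.
    def contribution(H, R, z):
        Ri = np.linalg.inv(R)
        return H.T @ Ri @ z, H.T @ Ri @ H

    P0, x0 = np.eye(2), np.zeros(2)            # prior
    Y, y = np.linalg.inv(P0), np.linalg.inv(P0) @ x0

    # Two sensor nodes observing the same 2-D state
    for H, R, z in [(np.eye(2), 0.5 * np.eye(2), np.array([1.0, 2.1])),
                    (np.eye(2), 0.8 * np.eye(2), np.array([0.9, 1.9]))]:
        i_k, I_k = contribution(H, R, z)
        y, Y = y + i_k, Y + I_k                # order of arrival is irrelevant

    x_post = np.linalg.solve(Y, y)             # back to state space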
A method is presented for automatically reducing uncertainties and calibrating possible biases in sensed data and extracted features, based on geometric data fusion. The perception net, as a structural representation of the sensing capabilities of a system, connects features at various levels of abstraction, referred to here as logical sensors, with their functional relationships such as feature transformations, data fusions, and constraints to be satisfied. The net maintains the consistency of logical sensors based on the forward propagation of uncertainties as well as the backward propagation of constraint errors. A novel geometric data fusion algorithm is presented as a unified framework for computing forward and backward propagation, through which the net achieves self-reduction of uncertainties and self-calibration of biases. The effectiveness of the proposed method is validated through simulation.
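As a stand-in for the fusion step at one node of such a net, the following sketch combines two uncertain estimates of the same feature by covariance weighting; the backward propagation of constraint errors, which is central to the paper's self-calibration, is omitted here.

    import numpy as np

    def fuse(x1, P1, x2, P2):
        # Covariance-weighted combination of two estimates of one feature.
        W1, W2 = np.linalg.inv(P1), np.linalg.inv(P2)
        P = np.linalg.inv(W1 + W2)
        return P @ (W1 @ x1 + W2 @ x2), P

    x, P = fuse(np.array([1.0, 0.0]), np.diag([0.04, 0.09]),
                np.array([1.1, -0.1]), np.diag([0.09, 0.04]))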
This paper describes how variable structure control can be used to describe the overall behavior of multiple autonomous robotic vehicles with simple finite state machine rules. The importance of this result is that we can then begin to design provably asymptotically stable group behaviors from a set of simple control laws and appropriate switching points with variable structure control. The ability to prove convergence to a goal is especially important for applications such as locating military targets or land mines.
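For readers unfamiliar with variable structure control, a minimal sketch of the switching idea: the control law changes discontinuously on either side of a switching surface, and the system slides along the surface to the goal. The surface and gains below are illustrative, not the paper's design.

    # Double-integrator robot driven to the origin by a switching law.
    def control(x, v, k=1.0, lam=0.5):
        s = v + lam * x                  # switching surface s = v + lambda*x
        return -k if s > 0 else k        # discontinuous control across s = 0

    x, v, dt = 1.0, 0.0, 0.01
    for _ in range(2000):                # reaches s = 0, then slides to x = 0
        v += control(x, v) * dt
        x += v * dt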
Instrumental conditioning is a psychological process whereby an animal learns to associate its actions with their consequences. This type of learning is exploited in animal training techniques such as 'shaping by successive approximations,' which enables trainers to gradually adjust the animal's behavior by giving strategically timed reinforcements. While this is similar in principle to reinforcement learning, the real phenomenon includes many subtle effects not considered in the machine learning literature. In addition, a good deal of domain information is utilized by an animal learning a new task; it does not start from scratch every time it learns a new behavior. For these reasons, it is not surprising that mobile robot learning algorithms have yet to approach the sophistication and robustness of animal learning. A serious attempt to model instrumental learning could prove fruitful for improving machine learning techniques. In the present paper, we develop a computational theory of shaping at a level appropriate for controlling mobile robots. The theory is based on a series of mechanisms for 'behavior editing,' in which pre-existing behaviors, either innate or previously learned, can be dramatically changed in magnitude, shifted in direction, or otherwise manipulated so as to produce new behavioral routines. We have implemented our theory on Amelia, an RWI B21 mobile robot equipped with a gripper and color video camera. We provide results from training Amelia on several tasks, all of which were constructed as variations of one innate behavior, object-pursuit.
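A heavily simplified sketch of the 'behavior editing' idea: an innate behavior is wrapped by edit parameters (a gain and a directional offset) that reinforcement can adjust. The update rule here is an assumption for illustration, not the paper's mechanism.

    import math

    def pursue(bearing):
        # Innate behavior: commanded turn rate proportional to bearing error.
        return bearing

    class EditedBehavior:
        def __init__(self):
            self.gain, self.offset = 1.0, 0.0    # edit parameters

        def act(self, bearing):
            return self.gain * pursue(bearing + self.offset)

        def reinforce(self, reward, lr=0.1):
            # A strategically timed reward strengthens the current edit.
            self.gain += lr * reward

    b = EditedBehavior()
    b.offset = math.radians(30)    # 'shift in direction' edit
    b.reinforce(+1.0)              # trainer rewards the new routine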
In this paper we demonstrate how principles of multiple objective decision making (MODM) can be used to analyze, design, and implement multiple-behavior-based systems. A structured methodology is achieved in which each system objective, such as obstacle avoidance or convoying, is modeled as a behavior. Using MODM we formulate mechanisms for integrating such behaviors into more complex ones. A mobile robot navigation example is given in which the principles of MODM are demonstrated. Simulated as well as real-world experiments show that a smooth blending of behaviors according to the principles of MODM enables coherent robot behavior.
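One simple way to realize such blending, sketched below with assumed behavior definitions: each behavior returns a proposed velocity and a scalar weight reflecting how strongly its objective is engaged, and the command is the weighted blend. Whether this matches the paper's MODM formulation exactly is an assumption.

    import numpy as np

    def avoid(obstacle_vec):
        # Flee the obstacle; weight grows as it gets closer.
        d = np.linalg.norm(obstacle_vec)
        return -obstacle_vec / d, min(1.0, 1.0 / d)

    def convoy(leader_vec):
        # Head toward the leader with a fixed moderate weight.
        return leader_vec / np.linalg.norm(leader_vec), 0.5

    proposals = [avoid(np.array([0.5, 0.1])), convoy(np.array([2.0, 1.0]))]
    w = np.array([weight for _, weight in proposals])
    v_cmd = sum(wi * vi for (vi, _), wi in zip(proposals, w)) / w.sum()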
An important class of robotic applications potentially involves multiple, cooperating robots: security or military surveillance, rescue, mining, etc. One of the main challenges in this area is effective cooperative control: how does one determine and orchestrate individual robot behaviors which result in a desired group behavior? Cognitive (planning) approaches allow for explicit coordination between robots, but suffer from high computational demands and a need for a priori, detailed world models. Purely reactive approaches such as that of Brooks are efficient, but lack a mechanism for global control and learning. Neither approach by itself provides a formalism capable of a sufficiently rapid and rich range of cooperative behaviors. Although we accept the usefulness of the reactive paradigm in building up complex behaviors from simple ones, we seek to extend and modify it in several ways. First, rather than restricting primitive behaviors to fixed input-output relationships, we include memory and learning through feedback adaptation of behaviors. Second, rather than a fixed priority of behaviors, our priorities are implicit: they vary depending on environmental stimuli. Finally, we scale this modified reactive architecture to apply not only for an individual robot, but also at the level of multiple cooperating robots: at this level, individual robots are like individual behaviors which combine to achieve a desired aggregate behavior. In this paper, we describe our proposed architecture and its current implementation. The application of particular interest to us is the control of a team of mobile robots cooperating to perform area surveillance and target acquisition and tracking.
The typical planning, design or operations problem has multiple objectives and constraints. Such problems can be solved using only autonomous agents, each specializing in a small and distinct subset of the overall objectives and constraints. No centralized control is necessary. Instead, agents collaborate by observing and modifying one another's work. Convergence to good solutions for a variety of real and academic problems has been obtained by embedding a few simple rules in each agent. The paper develops these rules and illustrates their use.
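A toy rendition of the scheme, with invented objectives: each agent owns one objective and repeatedly proposes a small modification to the shared solution, keeping it only if its own objective improves. No central controller is involved.

    import random

    def agent_step(objective, sol):
        # Propose a local tweak; accept it only if this agent's cost drops.
        trial = sol[:]
        i = random.randrange(len(trial))
        trial[i] += random.uniform(-0.1, 0.1)
        return trial if objective(trial) < objective(sol) else sol

    cost_a = lambda s: abs(sum(s) - 2.0)     # agent A: total near 2
    cost_b = lambda s: max(s) - min(s)       # agent B: keep values even

    solution = [random.random() for _ in range(5)]
    for _ in range(1000):                    # agents interleave their edits
        solution = agent_step(random.choice([cost_a, cost_b]), solution)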
This work considers the problem of maximum utilization of a set of mobile robots with limited sensor-range capabilities and limited travel distances. The robots are initially in random positions. A set of robots properly guards or covers a region if every point within the region is within the effective sensor range of at least one vehicle. We wish to move the vehicles into surveillance positions so as to guard or cover a region, while minimizing the maximum distance traveled by any vehicle. This problem can be formulated as an assignment problem, in which we must optimally decide which robot to assign to which slot of a desired matrix of grid points. The cost function is the maximum distance traveled by any robot. Assignment problems can be solved very efficiently; solutions for one hundred robots took only seconds on a Silicon Graphics Crimson workstation. The initial positions of all the robots can be sampled by a central base station and their newly assigned positions communicated back to the robots. Alternatively, the robots can establish their own coordinate system with the origin fixed at one of the robots and orientation determined by the compass bearing of another robot relative to this robot. This paper presents example solutions to the multiple-target, multiple-agent scenario using a matching algorithm. Two separate cases, each with one hundred agents, were analyzed using this method. We have found these mobile robot problems to be a very interesting application of network optimization methods, and we expect this to be a fruitful area for future research.
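The min-max objective makes this a bottleneck assignment problem. One standard way to solve it, sketched here with random stand-in positions (the paper's own matching algorithm may differ), is to binary-search the bottleneck distance and test feasibility with an ordinary assignment solve:

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    rng = np.random.default_rng(0)
    robots = rng.uniform(0, 10, (100, 2))            # random start positions
    gx, gy = np.meshgrid(np.arange(10.0), np.arange(10.0))
    slots = np.c_[gx.ravel(), gy.ravel()]            # 10 x 10 grid of slots

    D = np.linalg.norm(robots[:, None, :] - slots[None, :, :], axis=2)

    def feasible(t):
        # Is there a perfect matching using only edges of length <= t?
        cost = (D > t).astype(float)                 # 1 marks a forbidden edge
        r, c = linear_sum_assignment(cost)
        return cost[r, c].sum() == 0

    vals = np.unique(D)
    lo, hi = 0, len(vals) - 1
    while lo < hi:                                   # binary search on distances
        mid = (lo + hi) // 2
        if feasible(vals[mid]):
            hi = mid
        else:
            lo = mid + 1
    print("minimized max distance:", vals[lo])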
Applications of vision-based remotely operated robotic systems range from planetary exploration to hazardous waste remediation. For space applications, where communication time lags are large, the target selection and robot positioning tasks may be performed sequentially, differing from conventional telerobotic maneuvers. For these point-and-move systems, the desired target must be defined in the image planes of the cameras either by an operator or through image processing software. Ambiguity of the target specification will naturally lead to end-effector positioning errors. In this paper, the target specification error covariance is shown to transform linearly to the end-effector positioning error. In addition, a methodology is presented for estimating the camera view parameters of a vision-based robotic system that are optimal with respect to target specification error. A cost function based on minimizing the end-effector error covariance matrix is examined. Simulation and experimental results are presented.
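The linear transformation of the error covariance is the usual first-order propagation rule: if the end-effector position depends (locally linearly, with Jacobian J) on the target specification, then C_p = J C_q J^T. A worked example with arbitrary numbers:

    import numpy as np

    J = np.array([[1.0, 0.2],      # illustrative Jacobian of the
                  [0.0, 1.5]])     # image-to-end-effector mapping
    C_q = np.diag([0.01, 0.04])    # target specification error covariance
    C_p = J @ C_q @ J.T            # resulting end-effector error covariance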
This paper describes a new semi-autonomous, calibration-free system which integrates a user-friendly graphical interface, several cameras, a laser pointer mounted on a two-axis pan/tilt unit, and a six degree-of-freedom robot. The details of the system are discussed in reference to the problem of coating a workpiece of unknown geometry that is positioned arbitrarily with respect to the robot and the vision sensors. The remote user specifies the region of the workpiece that is to be coated simply by 'pointing' and 'clicking' on the region of interest as it appears in a single image on the user's computer monitor. By means of a simple and robust control strategy, the laser pointer mounted on the pan/tilt unit is autonomously actuated to a user-specified number of approximate positions in the region of the surface of interest. This is used to create compatible maneuver objectives in the participating vision sensors. Then, using the method of camera-space manipulation, the robot is controlled to make several passes across the region. Regardless of the geometry of the workpiece, the manipulated nozzle always remains perpendicular to the surface at a user-specified distance from the surface. Upon completion of the coating process, the laser pointer is again actuated to pass through a specified number of points on the new surface. This information is used to make a very precise inference of the thickness of the build-up of the coat that has been applied. If the coat is not sufficiently thick, the robot makes more passes as required. The paper also presents experimental results of the high accuracy of position and orientation control of the manipulated tool, as well as the depth inference of the surface coat applied.
This paper describes a real-time hierarchical system that fuses data from vision and touch sensors to improve the performance of a coordinate measuring machine (CMM) used for dimensional inspection tasks. The system consists of sensory processing, world modeling, and task decomposition modules. It uses the strengths of each sensor -- the precision of the CMM scales and the analog touch probe and the global information provided by the low resolution camera -- to improve the speed and flexibility of the inspection task. In the experiment described, the vision module performs all computations in image coordinate space. The part's boundaries are extracted during an initialization process and then the probe's position is continuously updated as it scans and measures the part surface. The system fuses the estimated probe velocity and distance to the part boundary in image coordinates with the estimated velocity and probe position provided by the CMM controller. The fused information provides feedback to the monitor controller as it guides the touch probe to scan the part. We also discuss integrating information from the vision system and the probe to autonomously collect data for 2-D to 3-D calibration, and work to register computer aided design (CAD) models with images of parts in the workplace.
This paper presents a novel approach to high-performance video processing that avoids the use of special-purpose hardware and achieves parallelism via a distributed system of high-performance workstations. An efficient network protocol allows mobile computers or mobile robots to remotely access the distributed video processing system over low-bandwidth wireless links. Despite the lack of special-purpose hardware and expensive computing equipment, the system provides exceptional performance, processing data at speeds that approach real time. The novel contribution of our approach is the ability to process video in compressed format and efficiently distribute video data to a distributed system of workstations for processing. We describe the algorithms employed to achieve these features and demonstrate our approach by showing how it can be applied to two problem domains: multiple target tracking and video archive searching. We present experimental results showing that video data can be efficiently disseminated to a network of workstations with only a small (7%) increase in transfer time. We also show that processing can be performed efficiently (20 fps) directly on the compressed video without special-purpose hardware.
This article presents a method to combine data from two sensor modalities, a stereoscopic vision sensor and a ring of ultrasonic sensors, in a grid-based framework for obstacle detection and avoidance with mobile robots. The sensors' data are combined using Dempster-Shafer theory, which allows multiple sensor data sources to be combined such that they are mutually enhanced and validated. A connectionist grid is used to support these operations and the environment modeling. Each grid node maps a configuration in a discrete subset of the robot's configuration space. The process used to obtain obstacle presence information from the stereoscopic setup and ultrasonic sensors is explained in detail. Detected obstacles result in sets of restricted configurations. The grid's dynamic behavior allows the iterative computation of a repulsive potential field, which rises in the vicinity of the restricted configurations. As new information is collected by the sensors during the robot's motion, new configurations are marked as restricted and the potential field changes accordingly. Since this process occurs in real time, the computed potential field can be used to navigate the robot around the detected obstacles. Experimental results are presented to support the sensor models used, the integration procedure, and the control strategy.
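For a single grid node over the frame {occupied, free}, Dempster's rule of combination looks like the sketch below; the mass assignments are illustrative, not the paper's sensor models.

    # Masses are (m_occupied, m_free, m_unknown); m_unknown is the mass on
    # the whole frame. k is the conflicting mass, renormalized away.
    def combine(m1, m2):
        o1, f1, u1 = m1
        o2, f2, u2 = m2
        k = o1 * f2 + f1 * o2
        o = (o1 * o2 + o1 * u2 + u1 * o2) / (1 - k)
        f = (f1 * f2 + f1 * u2 + u1 * f2) / (1 - k)
        return o, f, 1.0 - o - f

    sonar  = (0.6, 0.1, 0.3)       # ultrasonic reading leaning 'occupied'
    stereo = (0.5, 0.2, 0.3)       # stereo reading, mutually reinforcing
    print(combine(sonar, stereo))  # ~(0.76, 0.13, 0.11)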
The Oak Ridge National Laboratory (ORNL) has demonstrated, evaluated, and deployed a telerobotic approach for the remote retrieval of hazardous and radioactive wastes from underground storage tanks. The telerobotic system, built by Spar Aerospace Ltd., is capable of dislodging and removing sludge and gravel-like wastes without endangering the human operators through contact with the environment. Working in partnership with Washington University, ORNL has implemented an event-based planner/function-based sharing control (FBSC) as an integral part of their overall telerobotic architecture. These aspects of the system enable the seamless union of the human operator and an autonomous controller in such a way as to emphasize safety without loss of performance. The cooperation between ORNL, Spar, and Washington University requires an open and modular control software architecture to enable the parallel development of the various components of the system. ControlShell has been used as the underlying software architecture to help meet these criteria of generality and modularity.
In the field of civil engineering, effective internal monitoring of pipes and water storage tanks is very problematic. Normally the sensors used for the task are either fixed or manually movable, so they provide only locally and temporally restricted information. As a solution, an underwater robotic sensor/actuator society is presented. The system is capable of operating inside a fluid environment as a kind of distributed sensory system. The value of the system emerges from the interactions between its members. Through a communication system, the society fuses information from individual members and provides a more reliable estimate of the conditions inside water systems. Test results in a transparent demonstration process consisting of tanks and pipes with a volume of 700 liters are presented.
In this paper we propose a model for agent-based control within teleoperation environments and illustrate the role of agents in providing automated assistance for task viewing. The paper reviews existing approaches to viewing support, which tend to focus on augmented displays. We outline our approach to providing viewing support, based on 'visual acts.' Agent-based architectures are reviewed and their application to viewing support under the visual acts model is presented. Communication is a key requirement for agent architectures; we present a system, 'channels,' which we are currently developing to support the implementation of the agent model.
New experimental hardware for research into architectures for distributed intelligent embedded systems is proposed that will provide a wide range of communication media, including non-deterministic broadcast such as Ethernet, deterministic broadcast such as CAN, and processor busses such as VME. The emphasis is on large-scale system integration rather than provision for individual capabilities. A prototype implementation of some of the proposed hardware and software modules is described, together with their use in several research projects.
As the complexity of the missions to planetary surfaces increases, so too does the need for autonomous operation of the rover systems. This is coupled with the power, weight and computer storage restrictions on such systems. This paper presents a multirover system that is capable of cooperative planetary surface retrieval operations such as a multiple cache recovery mission to Mars. The system employs autonomous navigation, and is coupled through a low bandwidth communication channel. We also report the results of some experimental studies in simulated multiple cache retrieval operations in planetary environments.
This paper explores the design of robot systems to take advantage of non-linear dynamic systems models, specifically symmetry breaking phenomena, to self-organize in response to task and environment demands. Recent research in the design of robotics systems has stressed modular, adaptable systems operating under decentralized and distributed control architectures. Cooperative and emergent behavioral structures can be built on these modules by exploiting various forms of communication and negotiation strategies. We focus on the design of individual modules and their cooperative interaction. We draw on nonlinear dynamic system models of human and animal behavior to motivate issues in the design of robot modules and systems. Sonar sensing systems comprising a ring of sonar sensors are used to illustrate the ideas within a networked robotics context, where distributed sensing modules located on multiple robots can interact cooperatively to scan an environment.
An intelligent system (IS) senses, reasons, and acts to perform its required tasks. Sensors are used to sense environmental parameters, and through computational intelligence the system understands the situation and takes appropriate steps toward the desired performance. To deploy systems for mission-critical applications, the underlying technology should be able to detect the failure of components and replace faulty components with fault-free ones within a specified time window known as the fault-clearance period. The presence of a fault-clearance period in the perception phase of system operation results in the loss of online data from different sensors and the subsequent loss of valuable information about the environmental parameters. After fault clearance, a repetition of the sensing cycle will not recover the lost data when sensing highly transient, non-periodic signals. Moreover, unpredictable repetition creates significant overhead in satisfying the stringent timing requirements of the system. A new scheme has been developed to minimize the loss of this real-time sensor data during the fault-clearance period. The scheme is based on the restoration of data through parallel sensing. Restoration processes for both dual and triple modular redundancy schemes have been developed, and the effects of both hardware and software implementations of the voting logic on the performance of the system and the quality of restoration are shown. It is shown that this scheme is capable of recovering most of the data lost during fault clearance.
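A minimal sketch of the triple-modular-redundancy case: three channels sample in parallel, and a software voter (a median over surviving channels here; the paper evaluates both hardware and software voting) restores samples when a channel drops out during fault clearance.

    def vote(samples):
        # Median of the channels that delivered a value this cycle.
        good = sorted(s for s in samples if s is not None)
        return good[len(good) // 2] if good else None

    stream = [(1.0, 1.1, 0.9),     # all three channels healthy
              (1.2, None, 1.1),    # channel 2 lost during fault clearance
              (None, None, 1.3)]   # only channel 3 survives
    restored = [vote(t) for t in stream]   # [1.0, 1.2, 1.3]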
This paper presents a scene reconstruction system that takes a pair of registered range and intensity images, extracts features from both images to create feature maps, fuses these feature maps, segments the fused image, and fits surfaces to the objects in the scene. The feature extraction locates both step edges and crease edges in the data: step edges are extracted from the intensity image using the gradient magnitude, and crease edges are extracted from the range image using the gradient of the surface normal. Dempster-Shafer fusion is performed on the resulting feature maps. Image segmentation is performed using a morphological watershed algorithm. Finally, three-dimensional planes, spheres, and cylinders are fit to regions in the segmented scene by a least-squares optimization process.
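As an example of the final fitting stage, a least-squares plane fit to a segmented region's 3-D points can be done with an SVD; the points below are synthetic.

    import numpy as np

    def fit_plane(pts):
        # Plane through the centroid; the normal is the direction of least
        # variance, i.e. the last right-singular vector.
        centroid = pts.mean(axis=0)
        _, _, vt = np.linalg.svd(pts - centroid)
        return vt[-1], centroid    # plane: normal . (p - centroid) = 0

    pts = np.array([[0, 0, 0], [1, 0, 0.01], [0, 1, -0.01], [1, 1, 0.0]], float)
    normal, point = fit_plane(pts)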
This paper presents a method to fuse multiple noisy range images to obtain a 3-D model of an object. The method projects each range image onto a volumetric grid that is divided into volume elements (voxels). We place a value in each voxel that represents our degree of certainty that the voxel is inside the sensed object. We determine this value by constructing a line from the voxel to the sensor's location and calculating the point at which it intersects the range image. The certainty value is determined from the distance between the voxel and the range image intersection point, together with an estimate of the sensor's noise characteristics. The super Bayesian combination formula is used to fuse the grids created from the individual range images into an overall volumetric grid. We obtain the object model by extracting an isosurface at the value of 1/2 from the volumetric data using a variation of the marching cubes algorithm.
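A sketch of one plausible reading of the per-voxel certainty (the Gaussian noise model and the odds-product fusion below are assumptions, not necessarily the paper's formulas): the signed distance from the voxel to the range surface along the ray is squashed by the sensor noise into a probability, with 1/2 falling exactly on the surface, matching the isosurface extraction.

    from math import erf, sqrt

    def voxel_certainty(d_voxel, d_surface, sigma):
        # d_voxel, d_surface: distances from the sensor along the same ray.
        # A voxel beyond the surface (larger distance) is inside the object.
        signed = d_voxel - d_surface
        return 0.5 * (1 + erf(signed / (sqrt(2) * sigma)))

    def fuse(p1, p2):
        # Odds-product combination of certainties from two range images.
        return p1 * p2 / (p1 * p2 + (1 - p1) * (1 - p2))

    p = fuse(voxel_certainty(1.02, 1.00, 0.01),
             voxel_certainty(1.02, 1.01, 0.01))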
A new method is presented for extracting depth from the blurring and magnification of objects or a local scene. Assuming no active illumination, the images are taken at two camera positions separated by a small displacement, using a single standard camera with a telecentric lens. The depth extraction method is thus simple in structure and efficient in computation. By fusing the two disparate sources of depth information, magnification and blurring, the proposed method provides more accurate and robust depth estimation. This paper describes the experiments performed to validate this concept and the present state of this work. Experimental results show less than 1% error over the optimal depth range. The ultimate aim of this concept is the construction of dense 3-D maps of objects and real-time continuous estimation of depth.
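To illustrate the magnification cue on its own, under a plain perspective model (the paper's telecentric setup differs in its optics): moving the camera a known distance d toward the object scales the image by m2/m1 = Z/(Z - d), which can be inverted for depth.

    def depth_from_magnification(m1, m2, d):
        # Z = d * m2 / (m2 - m1), from m2/m1 = Z / (Z - d).
        return d * m2 / (m2 - m1)

    # Object appears 5% larger after moving 0.05 m closer -> Z ~ 1.05 m.
    Z = depth_from_magnification(1.00, 1.05, 0.05)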
Virtual reality (VR) has become a tool for use in many areas of research. We have designed and built a VR system for use in range data fusion and visualization. One major VR tool is the CAVE; it is the ultimate visualization tool, but comes with a large price tag. Our design uses a unique CAVE whose graphics are powered by a desktop computer instead of a larger rack machine, making it much less costly. The system consists of a screen eight feet tall by twenty-seven feet wide, giving a variable field of view currently set at 160 degrees. A Silicon Graphics Indigo2 MaxImpact with the Impact Channel option is used for display. This provides the capability to drive three projectors at a resolution of 640 by 480 for displaying the virtual environment and one 640 by 480 display for a user control interface. This machine is also the first desktop package with built-in hardware texture mapping, a feature that allows us to quickly fuse the range and intensity data and other multisensory data. The final goal is a complete 3D texture-mapped model of the environment. A dataglove, magnetic tracker, and spaceball are used for manipulating the data and navigating through the virtual environment. This system gives several users the ability to interactively create 3D models from multiple range images.
While virtual reality is a powerful tool for a range of applications, it has two associated overheads that fundamentally limit its usefulness: (1) the creation of realistic synthetic virtual environment models is difficult and labor intensive; (2) the computing resources needed to render realistic complex environments in real time are substantial. In this paper, we describe an approach to the fully automated creation of image-based virtual reality (VR) models: collections of panoramic (cylindrical or spherical) images that illustrate an environment. Traditionally, a key bottleneck for this kind of modeling is the selection and acquisition of sample data. Our approach is based on using a small mobile robot to navigate in the environment and collect the image data of interest. A critical issue is selecting appropriate sample locations for the modeling process; this is addressed using a computational mechanism that resembles human attention. Our objective is to select regions that differ from the surrounding environment, which we do using statistical properties of the output of an edge operator. Specifically, we guide a camera-carrying mobile robot through an environment and have it acquire data with which we construct a VR model. We then demonstrate the effectiveness of our approach using real data.
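A toy version of the attention mechanism, on synthetic data: score image blocks by how far their edge density deviates from the scene average and keep the most distinctive ones as candidate sample sites. The block size and scoring rule are assumptions.

    import numpy as np

    rng = np.random.default_rng(1)
    edges = (rng.random((64, 64)) > 0.9).astype(float)   # stand-in edge map

    def block_density(img, k=8):
        # Mean edge response in each k x k block.
        h, w = img.shape
        return img.reshape(h // k, k, w // k, k).mean(axis=(1, 3))

    d = block_density(edges)
    score = np.abs(d - d.mean())          # deviation from the surroundings
    best = np.unravel_index(np.argsort(score.ravel())[::-1][:5], score.shape)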
Modeling of autonomous mobile robots (AMRs) enables designers to investigate various aspects of a design before the actual implementation takes place. Simulation techniques enrich the design toolset, allowing the designer to vary design parameters until some optimal performance point is achieved. Although they are general purpose, multimedia tools, especially authoring tools, can assist the AMR designer in completing the simulation task quickly. Such rapid prototyping is cost effective and allows the designer to interactively manipulate the design in simple steps. In this paper, a multimedia environment has been constructed that enables designers to simulate AMRs in order to investigate aspects of their kinematics and dynamics. These design experiences can also be gathered and categorized in a tutoring system for use by practitioners and students enrolled in highly technical courses such as robotics. The rich multimedia environment can assist the learner in many ways by devising examples and suggesting solutions and design tradeoffs that have been experienced before.
The potential of the new Java language is explored in the development of a decentralized target tracking simulator. Three particular features of the Java language prompted this initial investigation: it is fully object-oriented, graphical user interfaces (GUIs) may be simply constructed, and it is Internet compatible. In the context of this paper, the full power of Java's object-oriented design is harnessed to reflect the inherent modularity of decentralized tracking systems. This enables, for example, tracks with their associated information structures, and platforms with their associated tracks, to be encapsulated within advanced data structures, or classes. An easy-to-build GUI, based on Java's Abstract Windowing Toolkit (AWT), permits the end user to rapidly configure a test scenario by selecting simulation variables from pop-up menus, such as the number of sensor platforms, the number of targets, and the type of target trajectory. Additionally, Java's Internet compatibility allows the simulation, in principle, to be accessed remotely. Development work on the Java tracking simulator is described and illustrated in terms of pseudo-code and screen snapshots. We conclude that, in terms of our long-range goal of constructing a simulator that can aid the investigation of decentralized systems under a range of world scenarios and operating conditions, Java shows considerable promise.
Data Fusion Models and Variable Control Structures
This work considers the problem of causing multiple (100s of) autonomous mobile robots to converge to a target and provides a 'follow-the-leader' approach to the problem. Each robot has only a limited-range sensor for detecting the target and a longer-range, but still limited, robot-to-robot communication capability. Because of the small amount of information available to the robots, a practical approach to improving convergence to the target is to have a robot follow the robot with the best quality of information. Specifically, each robot emits a signal that informs in-range robots of its status. A robot has a status value of 0 if it is itself in range of the target. A robot has a status of 1 if it is not in range of the target but is in communication range of a robot that is in range of the target. A robot has a status of 2 if it is not in range of the target but is within range of another robot that has status 1, and so on. Of all the mobile robots that any given robot is in range of, it follows the one with the best status. The emergent behavior is ant-like trails of robots following each other toward the target. If a robot is not in range of another robot that is either in range of the target or following another robot, it assigns -1 to its quality of information and executes an exhaustive search, which continues until it encounters either the target or another robot with a nonnegative quality of information. The quality-of-information approach was extended to the case where each robot has only two-bit signals informing it of the distance to in-range robots.
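The status computation is effectively a breadth-first search from the target over the communication graph, as in this sketch (the distributed, signal-based version described above computes the same values locally):

    from collections import deque

    def statuses(sees_target, neighbors):
        # Status 0: target in sensor range; otherwise one more than the best
        # status within communication range; -1: no information (search).
        status = {r: (0 if sees_target[r] else -1) for r in neighbors}
        q = deque(r for r in neighbors if status[r] == 0)
        while q:
            r = q.popleft()
            for n in neighbors[r]:
                if status[n] == -1:
                    status[n] = status[r] + 1
                    q.append(n)
        return status    # each robot follows its best-status neighbor

    nbrs = {1: [2], 2: [1, 3], 3: [2]}
    print(statuses({1: True, 2: False, 3: False}, nbrs))   # {1: 0, 2: 1, 3: 2}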