This paper summarizes our technical experience with the INPRES system, an augmented reality system based on a tracked see-through head-mounted display. INPRES is a complete augmented reality solution with crucial advantages over previous navigation systems: the surgeon no longer needs to turn his head back and forth between the patient and a computer monitor. The system displays virtual objects from computer-based surgical planning systems, e.g. cutting trajectories, tumours and risk areas, directly in the surgical site. The INPRES system was evaluated in several patient experiments in craniofacial surgery at the Department of Oral and Maxillofacial Surgery, University of Heidelberg. We discuss the technical advantages as well as the limitations of INPRES and derive two strategies from them. On the one hand, we improve the existing and proven INPRES system with new hardware and a new calibration method to compensate for the stated disadvantages. On the other hand, we focus on miniaturized augmented reality systems and present a new concept based on fibre optics. This new system should attach easily to surgical instruments and be capable of projecting small structures. It consists of a light source, a miniature TFT display, a fibre-optic cable and a tool grip. Compared with established projection systems, it can project into areas that are accessible only through a narrow path, so no wide surgical exposure of the region is necessary for the use of augmented reality.
Both stationary 'industrial' and autonomous mobile robots nowadays pervade many workplaces, but human-friendly interaction with them is still very much an experimental subject. One reason for this is that computer and robotic systems perform certain tasks neither well nor robustly. A prime example is the classification of sensor readings: which part of a 3D depth image is the cup, which the saucer, which the table? These are tasks at which humans excel.
To alleviate this problem, we propose a team approach, wherein the robot records sensor data and uses an Augmented-Reality (AR) system to present the data to the user directly in the 3D environment. The user can then make classification decisions directly on the data by pointing, gestures and speech commands. After the user has performed the classification, the robot takes the classified data and matches it to its environment model. As a demonstration of this approach, we present an initial system for creating objects on-the-fly in the environment model. A rotating laser scanner captures a 3D snapshot of the environment. This snapshot is presented to the user as an overlay on his view of the scene. The user classifies unknown objects by pointing at them. The system segments the snapshot according to the user's indications and presents the segmentation result back to the user, who can then inspect, correct and enhance it interactively. Once a satisfactory result has been reached, the laser scanner can take more snapshots from other angles and use the previous segmentation hints to construct a 3D model of the object.
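The segmentation step lends itself to a simple seeded region-growing scheme over the laser snapshot. The following Python sketch illustrates one plausible realization; the function name, the radius threshold and the choice of region growing are our assumptions, since the abstract does not specify the algorithm:

    # Hypothetical sketch of the seeded segmentation step: starting from
    # the point the user indicated, grow a region over the laser snapshot
    # by absorbing all neighbours within a fixed radius.
    import numpy as np
    from scipy.spatial import cKDTree

    def segment_from_seed(points, seed_index, radius=0.02):
        """Return indices of the point-cloud segment containing the seed.

        points     -- (N, 3) array of 3D laser-scan points
        seed_index -- index of the point closest to where the user pointed
        radius     -- neighbourhood radius in metres (assumed value)
        """
        tree = cKDTree(points)            # spatial index for radius queries
        in_segment = np.zeros(len(points), dtype=bool)
        in_segment[seed_index] = True
        frontier = [seed_index]
        while frontier:                   # classic region growing
            idx = frontier.pop()
            for n in tree.query_ball_point(points[idx], radius):
                if not in_segment[n]:
                    in_segment[n] = True
                    frontier.append(n)
        return np.flatnonzero(in_segment)

The returned indices would then be rendered back into the AR overlay, so the user can inspect, correct and enhance the proposed segment interactively.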
A fundamental decision in building augmented reality (AR) systems is how to combine the real and virtual worlds. Nowadays this key question boils down to two alternatives: video see-through (VST) versus optical see-through (OST). Both approaches have advantages and disadvantages in areas such as production simplicity, resolution, flexibility of composition strategies, and field of view. To provide additional decision criteria for high-dexterity, high-accuracy tasks and for subjective user acceptance, we programmed a gaming environment, inspired by the Star Wars movies, that allowed a thorough evaluation of hand-eye coordination. In an experimental session with more than thirty participants, a preference for optical see-through glasses in conjunction with infrared tracking was found. In particular, the high computational demand of video capture and processing, and the resulting drop in frame rate, emerged as a key weakness of the VST system.
In this paper we present recent developments and pre-clinical validation results of our approach to augmented reality (AR) in craniofacial surgery. A commercial Sony Glasstron display is used for optical see-through overlay of surgical planning and simulation results on a patient inside the operating room (OR). The glasses, the patient and various medical instruments are tracked with an NDI Polaris system as the standard solution. A complementary inside-out navigation approach has been realized with a panoramic camera mounted on the surgeon's head, which tracks fiducials placed on the walls of the OR. Further tasks described include the calibration of the head-mounted display (HMD), the registration of virtual objects with the real world, and the detection of occlusions in the object overlay with the help of two miniature CCD cameras. Our work was evaluated in a laboratory environment and showed promising results. Future work will concentrate on optimizing the technical features of the prototype and on developing a system for everyday clinical use.
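Registration of virtual objects with the real world is commonly solved from corresponding fiducial positions measured in both coordinate frames. As a minimal sketch, assuming paired fiducials given in model coordinates and in tracked (e.g. Polaris) world coordinates, the standard SVD-based (Kabsch/Horn) solution reads as follows; this illustrates the general technique, not the authors' implementation:

    # Point-based rigid registration: find R, t such that
    # world_pts ~= R @ model_pts + t, given paired fiducials.
    import numpy as np

    def rigid_registration(model_pts, world_pts):
        """model_pts, world_pts -- (N, 3) arrays of corresponding fiducials."""
        mc, wc = model_pts.mean(axis=0), world_pts.mean(axis=0)
        H = (model_pts - mc).T @ (world_pts - wc)   # 3x3 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        # Guard against a reflection in the least-squares solution.
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T     # proper rotation
        t = wc - R @ mc
        return R, t

Applying R and t to the planning data would bring virtual objects such as cutting trajectories into the tracked patient frame for overlay in the HMD.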