A practical color autostereoscopic display has been developed at Cambridge and has been in operation since 1994. It provides six view directions at half-VGA resolution (640 × 240 pixels) of 24-bit color at a luminance of 100 cd/m². Each individual view direction is refreshed at standard television rates, so the display is capable of full-motion animation or live 3D video. Versions with both 10 and 25 inch screen diagonals have been built. This paper describes the principles of the display, its development from an earlier monochrome version, the results of this development work, and ideas for future research. The original monochrome display, developed at Cambridge, has been in use since late 1991. It provides eight views at full VGA resolution or sixteen views at half VGA resolution. A series of views of a scene is displayed sequentially, and an optical directional modulator, constructed from a liquid crystal shuttering element, is synchronized with the image repetition rate to direct each image to a different zone in front of the display. The viewer's eyes thus see two different images, and the head can be moved from side to side to look around objects, giving an autostereoscopic display with correct movement parallax. The use of a CRT makes for a flexible system in which resolution and number of views can be easily varied. Development of the color display from the monochrome version was achieved with a color-sequential system using a liquid crystal color shutter. As each view direction has to be displayed three times for the three primary colors, the maximum number of view directions was decreased to six. Full-color (24-bit) images have been displayed on these six-view autostereoscopic displays from a number of sources: computer-generated images, digitized photographs, and live color video from a multiplexed camera also designed at Cambridge.
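The frame-rate pressure behind the reduction from sixteen views to six is simple arithmetic. A minimal sketch of the budget, assuming a per-view refresh of 60 Hz (a standard television rate; the abstract does not state the exact CRT frame rate):

```python
# Frame-rate budget for a time-multiplexed, color-sequential display.
# `views` and `primaries` come from the abstract; per_view_hz = 60 is
# an assumed standard television refresh rate.
views = 6
primaries = 3        # red, green, blue shown in sequence
per_view_hz = 60

crt_hz = views * primaries * per_view_hz
print(f"CRT must deliver {crt_hz} frames per second")  # -> 1080
```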
Experiments indicate that the volume of virtual space within which stereoscopic images can be viewed comfortably, without eye discomfort, fusion difficulty, or inaccuracies in perceived depth, depends on the eye-to-screen distance. This volume is maximized when the screen appears to be at infinity--that is, when it is collimated. With the image collimated, objects located within a virtual space extending from a few feet in front of the observer to infinity can be viewed comfortably. Collimation also reduces the distortion seen in stereoscopic images when they are viewed from off-axis locations. DTI is developing two magnified and collimated autostereoscopic displays. One uses a collimation module designed for out-the-window simulators to provide a very wide angle, immersive image that is potentially well suited to flight simulators and video games. Another, more compact version uses Fresnel lenses to magnify the image of a high-resolution 13.8" diagonal LCD to the same angular size as a 21" display seen at 30". This variation may be better suited to desktop displays. It provides resolution, color palette, and apparent screen size equivalent to a high-end CRT.
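The desktop variant's geometry can be checked in a couple of lines. A sketch, assuming the magnified virtual image appears at roughly the quoted 30" viewing distance, so that matching linear size also matches angular size:

```python
import math

# Numbers from the abstract: 13.8" LCD, 21" apparent size viewed at 30".
lcd_diag, target_diag, view_dist = 13.8, 21.0, 30.0

target_angle_deg = math.degrees(2 * math.atan(target_diag / (2 * view_dist)))
needed_mag = target_diag / lcd_diag

print(f"apparent screen subtends {target_angle_deg:.1f} degrees")
print(f"Fresnel magnification needed: {needed_mag:.2f}x")
```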
The combination of lenticular screens with liquid crystal displays has long been recognized as an excellent way of making autostereoscopic displays. Now, as active matrix video-rate displays with full-color VGA and higher resolution become readily available, interest in this form of autostereoscopic imaging is increasing. An important consequence of the high resolution capability is that the `resolution/number of views' trade-off can now be made more in favor of the `number of views' instead of the traditionally favored `resolution' side of the equation. This means that head tracking is no longer the only option for providing a more natural viewing environment. The Philips group is a manufacturer of both active matrix LCDs and lenticular screens. We have been able to experiment with multiview displays by combining customized lenticular screens with special liquid crystal modules. In particular, a monochrome display with four views, in which each view has 480 × 480 resolution, has been made. Compared with two-view systems, in which the user has to maintain a fixed head position, the extra freedom afforded by the four views is experienced as a great improvement, and most people have little difficulty avoiding the one remaining pseudoscopic head position. These multiview displays are receiving an enthusiastic response in application areas such as medical radiodiagnostics and multimedia entertainment.
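The trade-off the abstract describes is easy to quantify: a lenticular screen divides the panel's horizontal pixels among the views. A sketch with a hypothetical 1920-column panel (not the module described here):

```python
# Each lenticule covers `views` adjacent pixel columns, so horizontal
# resolution per view falls as the number of views grows.
panel_w, panel_h = 1920, 480   # hypothetical panel, for illustration
for views in (2, 4, 8):
    print(f"{views} views -> {panel_w // views} x {panel_h} pixels per view")
```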
Autostereoscopy is finding acceptance in applications where stereoscopic imaging is critical and where the use of stereo glasses and virtual reality head mounted displays is unacceptable. A wide variety of telerobotic activities are beginning to rely on autostereoscopic displays for visual input. Experimental applications have been found in industrial inspection and sign language learning/communication.
We have developed an eye-position tracking stereoscopic projector which employs image-shifting optics. The display allows 3D images to be viewed without special glasses from any position along a lateral axis. Its image-shifting device contains a plane-parallel glass plate and is installed in a liquid crystal projector. Refraction produced by inclination of the glass plate shifts the optical axis of the projected image. Since the only moving part in the optics is the lightweight glass plate, the response of the image-shifting device is both fast and precise enough for interactive 3D-CAD and virtual reality applications. To improve the system's interactive response, we have widened the stereoscopic viewing area by adding a device which causes the image to vibrate laterally. We also use a new tracking algorithm which reduces the tracking error that would ordinarily be created by delay time. Experimental results confirm the success of these improvements.
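The optical principle lends itself to a one-line calculation: a ray passing through a plate of thickness t and index n tilted by angle θ emerges parallel but displaced by d = t·sinθ·(1 − cosθ/√(n² − sin²θ)). A sketch using this standard plane-parallel-plate formula, with illustrative values rather than the paper's actual plate:

```python
import math

def lateral_shift(t_mm, n, theta_deg):
    """Lateral displacement of a ray through a tilted plane-parallel plate."""
    th = math.radians(theta_deg)
    return t_mm * math.sin(th) * (1 - math.cos(th) / math.sqrt(n**2 - math.sin(th)**2))

# 10 mm BK7-like plate (n = 1.52) at a few tilt angles -- illustrative only.
for deg in (2, 5, 10):
    print(f"{deg:2d} deg tilt -> {lateral_shift(10.0, 1.52, deg):.3f} mm shift")
```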
A new, patented, autostereoscopic display is described that enables a viewer to view 3D TV or computer graphics images without the need to wear special glasses or other headwear. Unlike autostereoscopic systems based upon lenticular lenses, this new display does not create reverse 3D effects, and it maintains 3D images despite movement of the viewer's head and eyes. The 50-inch image size is achieved by using back projection and two projectors, one to produce the image for each eye. This increases the average image brightness and allows the system to operate in a normally lit room. In order to ensure that the viewer's left eye always sees the L image, and the right eye the R image, regardless of the exact position of the viewer's head and eyes, the system uses an eye-tracking technique. A video camera within the display images the viewer's head, and image analysis hardware and software locates and tracks the eyes, automatically adjusting the position of the projected L and R images to keep them in the correct alignment for optimum 3D viewing. As a result the viewer's head has the freedom to move from side to side or vertically without losing the 3D effect.
We have been working on a range of autostereoscopic display technologies based on Holographic Optical Elements (HOEs). We will discuss and demonstrate one such technology, which uses a direct-view LCD combined with a novel composite HOE. The properties of the HOE allow the independent establishment of viewing zones for arbitrarily chosen sets of the LCD's pixels. The viewing zones can be arbitrarily located and of arbitrary size. These properties allow the practical realization of a high-resolution, full-color, real-time holographic autostereoscopic display which has full 2D compatibility and allows one or more stereo viewing zones to be moved independently to track one or more mobile viewers. The technique is also free from disturbing artifacts such as flicker, `picket fence' effects, left-right reversal, brightness variations within the viewing zones, and overlapping of the left and right channels. The technology demonstrates an economical means of providing a compact, energy-efficient, high-performance, 2D-compatible autostereoscopic 3D display.
We review the literature on, and techniques for, the generation of left/right stereo pairs from a single lens--from 1677 to present. We attempt to answer the question: `Just how can you get two images from a single lens, anyway?'
Disparity has long been a critical problem for stereoscopic perception with conventional stereoscopic endoscopes such as laparoscopes, because it causes viewer fatigue. In this paper, disparity is defined only for the stereo pair itself, excluding blurred images. We produced a new type of endoscope, realizable in both flexible and non-flexible forms using an SUS (stainless steel) guide tube, that is free of disparity and distortion with respect to stereo perception. However, stereoscopic images taken by the stereoscopic fiberscope and shown on our 3D system (STEREVIC) exhibit slight mixing of the right and left channels. We consider that this crosstalk occurs during image transmission in the fiber bundle, because our apparatus uses an ordinary image fiber rather than polarization-maintaining fiber. Furthermore, the gap between the pair of polarizers arranged in the objective lens is considered another source of the crosstalk, so we propose a method to reduce its influence in our system. The system is composed of a GRIN lens, a pair of polarizers, an image fiber bundle, a light guide fiber, a polarizing beam splitter, a pair of CCDs, a linear subtraction circuit and a stereoscopic liquid crystal display. The diameter of the fiberscope is 1.5 mm.
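The linear subtraction stage can be understood as unmixing two linearly leaked channels: if a fraction α of each channel leaks into the other, subtracting the scaled opposite channel recovers the originals. A sketch of that model (α and the symmetric-leak assumption are ours, not the paper's):

```python
import numpy as np

def unmix(left_obs, right_obs, alpha=0.1):
    """Invert symmetric linear crosstalk: observed = clean + alpha * other."""
    denom = 1.0 - alpha**2
    clean_left = (left_obs - alpha * right_obs) / denom
    clean_right = (right_obs - alpha * left_obs) / denom
    return clean_left, clean_right

# Tiny demonstration with synthetic 1-D "images".
L, R = np.array([1.0, 0.0]), np.array([0.0, 1.0])
obs_l, obs_r = L + 0.1 * R, R + 0.1 * L
print(unmix(obs_l, obs_r))   # recovers L and R
```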
The creation of stereoscopic imagery derived from a monoscopic source by splitting the signal into two channels with an inter-channel temporal delay has a long history extending back to military reconnaissance research during WWII and to the 1920's in film-based work. Recently, several academic and commercial efforts have emerged utilizing 2D-3D conversion as a new paradigm in single-lens stereoscopy. Several examples will be discussed below, including the TransVision system developed by the author which runs on conventional Pentium/PCI platforms. Alongside these commercial developments, continuing research in vision psychophysics is shedding new light on the fundamental neural processing mechanisms underlying these technologies. Such research has shown, for example, that spatio-temporally interpolated stereo is closely related to structure-from-motion and kinetic depth phenomena, does not rely on monocular form cues and can be demonstrated even in dynamic imagery composed entirely of random noise.
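The core of delay-based single-lens stereo is nothing more than a short frame buffer: one eye's channel lags the other by a few frames, so horizontal motion is converted into disparity. A minimal sketch (which eye receives the delayed frame, and the delay length, are choices that real systems make adaptively):

```python
from collections import deque

def delayed_stereo(frames, k=2):
    """Yield (left, right) pairs where the left channel lags by k frames."""
    buf = deque(maxlen=k + 1)
    for frame in frames:
        buf.append(frame)
        yield buf[0], frame   # left = oldest buffered frame, right = current

for left, right in delayed_stereo(range(6), k=2):
    print(f"L = frame {left}, R = frame {right}")
```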
Image conversion technologies from 2D images into 3D images using the `Modified Time Difference' method are proposed. These technologies allow ordinary 2D images to be converted automatically and in real time into binocular-parallax 3D images according to the detected movements of objects in the images. We integrated circuits such as a movement detector, a delay time controller and a delay direction controller into a single LSI chip to make the 2D/3D conversion board compact. We built this conversion board into a television set to introduce a new type of 3D consumer television with which anyone can enjoy converted 3D images originally provided by TVs, VCRs and the like. The vertical frequency of this 3D television is 120 Hz, twice that of an ordinary television, to provide flicker-free 3D images.
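The `delay direction controller' in such a chip decides which eye receives the delayed frame, based on the detected motion. A rough software analogue using a crude global motion estimate (both the estimator and the assignment rule are illustrative; the LSI's actual logic is not described in the abstract):

```python
import numpy as np

def horizontal_motion(prev, curr):
    """Estimate global horizontal shift via 1-D cross-correlation of column sums."""
    a, b = prev.sum(axis=0), curr.sum(axis=0)
    xc = np.correlate(b - b.mean(), a - a.mean(), mode="full")
    return int(np.argmax(xc)) - (len(a) - 1)   # positive = rightward motion

def assign_eyes(prev, curr):
    """Route delayed vs. current frame to the eyes depending on direction."""
    dx = horizontal_motion(prev, curr)
    return (prev, curr) if dx >= 0 else (curr, prev)   # (left, right)

prev = np.zeros((4, 16)); prev[:, 5] = 1.0
curr = np.zeros((4, 16)); curr[:, 8] = 1.0   # pattern moved right by 3
left, right = assign_eyes(prev, curr)
```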
Maintenance of power generation, transmission and distribution equipment represents a major task. Our goal is to decrease downtime as well as to remove maintenance crews from potential danger, in particular from contaminated areas. To this end, we use teleoperated equipment coupled with a supervision system that enables the operators to be `tele-present' on the scene. We have set up a development program based on various technological components, such as CAD systems to study the feasibility of the tasks and for 3D monitoring, stereoscopic cameras, etc. Further development of virtual reality techniques should lead to high-performance interfaces or improvements on those now in existence, thereby providing the possibility of better `deep' teleoperation systems. It is very important, however, not to underestimate the technical and physiological constraints of such systems, which risk introducing extra fatigue and discomfort for the operator. This article presents a number of studies and experiments conducted with a view to defining a system for stereoscopic, visual `tele-presence' which can be used for remote operation of robotic arms.
Teleoperation in unstructured environments is conventionally restricted to direct manual control of the robot. Under such circumstances operator performance can be affected by inadequate visual feedback from the remote site, caused by, for example, limitations in the bandwidth of the communication channel. This paper introduces ARTEMIS (Augmented Reality TEleManipulation Interface System), a new display interface for enabling local teleoperation task simulation. An important feature of the interface is that the display can be generated in the absence of a model of the remote operating site. The display consists of a stereographical model of the robot overlaid on real stereovideo images from the remote site. This stereographical robot is used to simulate manipulation with respect to objects visible in the stereovideo image, following which sequences of robot control instructions can be transmitted to the remote site. In the present system, the update rate of video images can be very low, since continuous feedback is no longer needed for direct manual control of the robot. Several features of the system are presented and its advantages discussed, together with an illustrative example of a pick-and-place task.
Between the extremes of real life and Virtual Reality lies the spectrum of Mixed Reality, in which views of the real world are combined in some proportion with views of a virtual environment. Combining direct view, stereoscopic video, and stereoscopic graphics, Augmented Reality describes that class of displays that consists primarily of a real environment, with graphic enhancements or augmentations. Augmented Virtuality describes that class of displays that enhance the virtual experience by adding elements of the real environment. All Mixed Reality systems are limited in their capability of accurately displaying and controlling all relevant depth cues, and as a result, perceptual biases can interfere with task performance. In this paper we identify and discuss eighteen issues that pertain to Mixed Reality in general, and Augmented Reality in particular.
The technology of stereoscopic imaging enables reliable online telediagnosis. Applications of telediagnosis include the fields of medicine and, more generally, telerobotics. To allow the participants in a telediagnosis to mark spatial parts within the stereoscopic video image, graphical tools and automated functions have to be provided. The process of marking spatial parts and objects inside a stereoscopic video image is a nontrivial interaction technique. The markings themselves have to be 3D elements rather than 2D markings, which would look alien `in' the stereoscopic video image. Furthermore, one problem to be tackled here is that the content of the stereoscopic video image is unknown. This is in contrast to 3D Virtual Reality scenes, which enable easy 3D interaction because all the objects and their positions within the 3D scene are known. The goals of our research comprised the development of new interaction paradigms and marking techniques for stereoscopic video images, as well as an investigation of input devices appropriate for this interaction task. We have implemented these interaction techniques in a test environment, integrating computer graphics into stereoscopic video images. In order to evaluate the new interaction techniques, a user test was carried out. The results of this research are presented here.
We address the issue of creating stereo imagery on a screen that, when viewed by naked human eyes, will be indistinguishable from the original scene as viewed through a visual accessory. In doing so we investigate effects that appear because real optical systems are not ideal. Namely, we consider optical systems that are not free from geometric aberrations. We present an analysis, and confirming computational experiments, of the simulation of stereoscopic optical accessories in the presence of aberrations. We describe an accessory in the framework of the Seidel-Schwarzschild theory; that is, we represent its deviation from an ideal (Gaussian) device by means of five constants. Correspondingly, we are able to simulate the five fundamental types of monochromatic geometric aberration: spherical aberration, coma, astigmatism, curvature of field, and distortion (barrel and pincushion). We derive and illustrate how these aberrations in stereoscopic optical systems can lead to anomalous perception of depth, e.g., the misperception of planar surfaces as curved or even twisted, as well as to circumstances under which stereoscopic perception is destroyed. The analysis and numerical simulations also allow us to simulate the related but not identical effects that occur when lenses with aberrations are used in stereoscopic cameras.
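The five constants correspond to the classical Seidel terms. A sketch of the wavefront error W(ρ, θ; H) they define over the normalized pupil (ρ, θ) and field height H, with arbitrary illustrative coefficients:

```python
import numpy as np

def seidel_wavefront(rho, theta, H, S):
    """W = spherical + coma + astigmatism + field curvature + distortion."""
    s_sph, s_coma, s_astig, s_curv, s_dist = S
    c = np.cos(theta)
    return (s_sph * rho**4
            + s_coma * H * rho**3 * c
            + s_astig * H**2 * rho**2 * c**2
            + s_curv * H**2 * rho**2
            + s_dist * H**3 * rho * c)

# Wavefront error at a mid-pupil point, full field; coefficients illustrative.
print(seidel_wavefront(0.7, 0.0, 1.0, (0.10, 0.05, 0.02, 0.02, 0.01)))
```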
The Vision III™ method of parallax scanning has been successfully achieved using a moving optical element (MOE) in a single lens. Unlike the lenses in our previous custom camera systems, the MOE lenses do not move. Instead, an optical element inside the lens scans a scene in a complete circle while the lens position remains fixed. V3™ MOE lenses have been effectively applied to 35 mm motion picture and broadcast video imaging. Images shot with a MOE lens provide a strong sense of dimension, realism, and stability. They can be displayed using standard motion picture projection or broadcast television equipment without the need for special screens or glasses.
3D closed-circuit TV, which produces stereoscopic vision by presenting different images to each eye alternately, has been proposed. However, there are several problems, both physiological and psychological, for 3D image observation in many fields. From this perspective, we are studying personal visual characteristics of 3D recognition in the transition from 2D to 3D. We have separated the mechanism of 3D recognition into several categories and formed some hypotheses about the personal features. These hypotheses relate to an observer's personal features as follows: (1) the angle between the left and right eyes' lines of vision and the adjustment of focus, (2) the angle of vision and the time required for fusion, (3) depth sense based on life experience, (4) 3D experience, and (5) 3D sense based on the observer's age. To test these hypotheses, we have analyzed the personal features of the time interval required for 3D recognition through examinations of test subjects, who indicate 3D recognition by pushing a button. Recently, we introduced a method for picking up the reaction of 3D recognition from subjects through their biological information, for example, analysis of pulse waves of the finger. The analysis of pulse waves also suggests a hypothesis: (1) we observe a chaotic response when the subject is recognizing a 2D image, and (2) we observe a periodic response when the subject is recognizing a 3D image. We are making nonlinear forecasts and correlating the forecasts with the biological phenomena. Deterministic nonlinear prediction is applied to the data as a promising method of chaotic time-series analysis, in order to analyze long-term unpredictability, one of the fundamental characteristics of deterministic chaos.
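Deterministic nonlinear prediction, in its simplest form, delay-embeds the measured series, finds the nearest past state to the present one, and predicts that state's successor; growing forecast error with horizon is one signature of chaos. A bare-bones sketch with generic parameters (not the paper's settings):

```python
import numpy as np

def nn_forecast(x, dim=3, horizon=1):
    """Nearest-neighbor forecast in a delay-embedded state space."""
    emb = np.array([x[i:i + dim] for i in range(len(x) - dim - horizon + 1)])
    state = x[-dim:]                          # current embedded state
    j = int(np.argmin(np.linalg.norm(emb - state, axis=1)))
    return x[j + dim + horizon - 1]           # successor of nearest neighbor

x = np.sin(np.linspace(0, 20, 200)) + 0.05 * np.random.randn(200)  # toy series
print("one-step forecast:", nn_forecast(x))
```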
Predictions of task performance based on the information required by the task, the visual information acquired from the source, information transmission channel characteristics, and human information processing limitations are compared to actual performance on tasks viewed directly or remotely, either monoscopically or stereoscopically, under different motion conditions. The tasks require varying amounts of information and channel capacity for proficient completion and are based on the rapid sequential positioning task, which measures the time a subject takes to locate and tap an illuminated point-source light target with a probe. Performance was measured using the task in 3D and 3D-plus-motion configurations. The 3D-plus-motion configurations were given to subjects at four different movement speeds under different viewing conditions to test the effects of changing viewing bandwidth requirements. Subjects performed all tasks in a single session, with data collected by computer. Data analysis involved the comparison of actual results with predictions derived from the Model Human Processor and information theory. Results indicate that the requirements, availability, transmission, and human processing limitations of information are key components of task performance.
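The information-theoretic backbone of such predictions is the idea that a positioning movement transmits log₂(2D/W) bits, with D the target distance and W the target width (Fitts' law). The sketch below uses that textbook relation with hypothetical coefficients; it is not necessarily the exact model used in the paper:

```python
import math

def predicted_movement_time(D, W, a=0.1, b=0.15):
    """Fitts' law: MT = a + b * ID, with ID = log2(2D / W) in bits."""
    ID = math.log2(2 * D / W)
    return a + b * ID, ID

mt, ID = predicted_movement_time(D=200.0, W=10.0)   # D, W in mm; a, b in s
print(f"ID = {ID:.2f} bits, predicted MT = {mt:.3f} s")
```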
New Developments in Stereoscopic Displays and Applications
To synthesize intermediate view images between a pair of left and right view images of an object, we propose a new method which estimates disparity with discontinuity at the object contour. Our method consists of four steps: (1) Two initial disparity maps, based on the right and left view images respectively, are estimated by integrating correlations over various block sizes. (2) Ill-corresponding areas in both initial disparity maps are detected with `confidence measures'. The disparity of the ill-corresponding areas is then estimated from the boundary disparity, and the object contours are detected using the Snake method in both view images. (3) The disparity maps obtained above are re-estimated along the points of the object contours. (4) Using the re-estimated disparity maps, two intermediate view images, based on the two source views, are synthesized and integrated into the final intermediate view image. Our method can estimate the disparity of an occluded area with discontinuity at the object contour. Experimental results show the proposed method improves the quality of the intermediate view images, especially in the region of object contours, and its MSE is reduced to 30%.
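Step (4) reduces to disparity-compensated forward warping plus a cross-fade. A much-simplified sketch of that final integration step (it ignores the occlusion and contour handling that are the paper's actual contribution):

```python
import numpy as np

def forward_warp(img, disp, scale):
    """Shift each pixel horizontally by scale * its per-pixel disparity."""
    h, w = img.shape
    out = np.zeros_like(img)
    xs = np.arange(w)
    for y in range(h):
        tx = np.clip(np.round(xs + scale * disp[y]).astype(int), 0, w - 1)
        out[y, tx] = img[y, xs]
    return out

def intermediate_view(left, disp_l, right, disp_r, t):
    """t = 0 reproduces the left view, t = 1 the right view."""
    a = forward_warp(left, disp_l, t)
    b = forward_warp(right, disp_r, -(1 - t))
    return (1 - t) * a + t * b
```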
This paper describes on-going research into the development of a 2½D image modeling technique based on the extraction of relative depth information from stereoscopic x-ray images. This research was initiated in order to aid operators of security x-ray screening equipment in the interpretation of complex radiographic images. It can be shown that a stereoscopic x-ray image can be thought of as a series of depth planes or slice images, which are similar in some respects to the tomograms produced by computed tomography systems. Thus, if the slice images can be isolated, the resulting 3D data set can be used for image reconstruction. Conceptually, the production of a 2½D image from a stereoscopic image can be thought of as the process of replacing the physiological depth cue of binocular parallax, inherent in a stereoscopic image, with psychological depth cues such as occlusion and rotation. Once the data is represented in this form it is envisaged that, for instance in a security imaging scenario, a suspicious object could be electronically unpacked. The work presented in this paper is based on images obtained from a stereoscopic folded-array dual-energy x-ray screening system designed and developed by the Nottingham Trent University group.
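The mapping from parallax to depth plane follows from the x-ray projection geometry: with two source positions a baseline B apart at height D above the detector, a feature at height z casts shadows separated by p = B·z/(D − z), so z = p·D/(B + p). A sketch with illustrative geometry (not the Nottingham Trent system's calibration):

```python
def height_from_parallax(p_mm, B_mm=100.0, D_mm=1000.0):
    """Invert p = B * z / (D - z) to recover the depth plane z."""
    return p_mm * D_mm / (B_mm + p_mm)

for p in (1.0, 5.0, 10.0):
    print(f"parallax {p:4.1f} mm -> slice at z = {height_from_parallax(p):6.1f} mm")
```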
A specially designed expert system is in development for neurosurgical treatment planning. The knowledge base contains knowledge and experience of neurosurgical treatment planning from neurosurgeon consultants, who also determine the risks of different regions in the human brain. When completed, the system will simulate the decision-making process of neurosurgeons to determine the safest probing path for an operation. The Computed Tomography (CT) or Magnetic Resonance Imaging (MRI) scan images for each patient are captured as the input. The system also allows neurosurgeons to include additional information for any particular patient, such as how the tumor affects its neighboring functional regions, which is also important for calculating the safest probing path. It can then consider all the relevant information and find the most suitable probing path in the patient's brain. A 3D brain model is constructed for each set of CT/MRI scan images and is displayed in real time together with the possible probing paths found. The precise risk value of each path is shown as a number between 0 and 1, together with its possible damages in text. Neurosurgeons can view more than one possible path simultaneously and make the final decision on the path selected for the operation.
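One simple way to turn per-region risks into a single path score in [0, 1] is to treat the regions as independent hazards and take the probability that at least one is damaged. This combination rule is our assumption for illustration; the system's knowledge base may weight regions quite differently:

```python
def path_risk(region_risks):
    """Combine per-region risks r in [0, 1] along a candidate probing path."""
    p_safe = 1.0
    for r in region_risks:
        p_safe *= (1.0 - r)
    return 1.0 - p_safe

# A path crossing three regions of increasing sensitivity (illustrative).
print(f"{path_risk([0.01, 0.05, 0.10]):.3f}")   # ~0.154
```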
3DTV Corporation has a variety of new products for stereoscopic computer graphics and video. There are inexpensive kits for frame-sequential graphics on virtually every computer platform. A system for IBM PC compatibles has a 3DROM with games, animations, photos, tools, information, and a user-configurable interface for parallel or serial ports with passthrough, status LEDs and a jack for 3 new varieties of LCD shutter glasses. The StereoSpace Model 1 is a universal interface for LCD glasses with an LED frequency display, VGA, parallel and serial input, sync pulse insertion for the above/below format, buttons for image adjustment and polarity reversal, and the ability to be computer controlled from Windows or other operating environments. Model O is another universal interface that uses optical indicia on the screen to trigger glasses. The SpaceCam is a twin-lens microprocessor-controlled video camera with synced zoom and convergence. The SpaceBar offers manual or computer control of two cameras. The improved Model 200 StereoMultiplexer offers split-screen modes and DB25 or BNC breakouts for making field-sequential stereo with any two cameras. The SpaceStation can convert NTSC or PAL 3D video in composite, YC or RGB from nearly any format into separate R and L channels or into nearly any frequency field-sequential RGB or NTSC. It can multiplex or demultiplex top/bottom, side-by-side or field-sequential video with parallax shifts, color correction and field delays. It is finding use in perceptual research and in 3D video theaters with 1 or 2 projectors. The SpaceScanner converts field-sequential stereo between PAL and NTSC. The StereoPlate Models 1 and 2 polarize light for 3D viewing with passive glasses and can fit 3-tube projectors or 17 inch monitors. For low-end applications the SpaceSpex process gives full-color anaglyphs with inexpensive glasses.
This paper discusses the conversion of 3D video between the three world video standards of NTSC, PAL and SECAM. An overview is given of the five main methods of achieving 3D with consumer video and the principles of video standards conversion are discussed. A solution for converting field-sequential 3D video between standards is presented and a number of other advantages which the system offers are discussed.
SimulEYES VR™, a new product for mass consumer electro-stereoscopic displays, is described. The system uses a unique indexing approach to allow content providers latitude in choosing the display mode. Board and PC manufacturers may also take advantage of the elegance of the solution by building in the SimulEYES VR capability. Hardware components consist, in part, of two custom chips which may be integrated at the board level, or employed in a VGA port dongle and control box. The liquid crystal shuttering eyewear is of a unique ergonomic design which is comfortable for people of all ages and most facial types, even when wearing eyeglasses.
Stereography is the art and science of three-dimensional vision. 3-D imaging techniques are created by science and given expression through art. Artists have for centuries attempted to give their images the effects of volume and depth. The scientific use of perspective in art roughly parallels the rise of the printing press and the scientific revolution which has transformed the world. Through their use in the mass media of commercial photography, newsprint and film, 3-D images have become a significant part of American, and international, popular culture. Simultaneously, a wide range of 3-D imaging techniques have applications in medicine, industry and science as well as entertainment and the fine arts. Stereopsis, the perception of depth, is a result of the fact that our vision is binocular. Since our eyes are separated by a distance of about two and a half inches, we perceive any object from two separate viewpoints at the same time. 3-D imaging techniques involve the mechanical reconstruction of binocular stereopsis. As early as 1584 Leonardo da Vinci, one of the great scientific artists, studied the perception of depth.
This paper describes the design of a low-cost, 2D, electromagnetic tracking device for personal computers. This interface makes use of the well-known principle of electromagnetic induction to locate the position of a transmitter in an x-y plane. The device has a ring which is worn by the user on the index finger. The computer monitor is overlaid with a transparent screen equipped with tuned electromagnetic sensors. These sensors pick up the signals transmitted by the transmitter coil on the finger. The receiver circuit extracts the envelope of the received signal and digitizes it. These digitized values of the x and y axis signals are read by the computer through the standard parallel port. The system software running on the computer calculates the x and y co-ordinates of the transmitter coil and displays a cursor at that location. The transmitter also has a button which can be used like a mouse button; this keypress information is likewise transmitted by electromagnetic means. The device driver for this tracker replaces the standard mouse driver, so most applications that use a mouse can also use this tracker. Its name `Mimosa' indicates that the user need not touch the screen (Mimosa pudica is the Latin name of a plant whose leaves wilt when touched). Work is presently under way to achieve uniform sensitivity over the entire screen and to reduce transmitter power consumption. To demonstrate the device, a small 3D game was written in which the player has to reach a pre-defined location after traversing a maze. The paper describes the interface electronics, system software, mechanical design and the sample application.
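Once the envelope amplitudes are digitized, the simplest position solver is an amplitude-weighted centroid over the sensor grid. The sketch below shows only that idea; a real solver would calibrate for the nonlinear falloff of inductive coupling, which is presumably what the uniform-sensitivity work addresses:

```python
def centroid(sensors):
    """sensors: iterable of ((x, y), amplitude) for each tuned pickup coil."""
    total = sum(a for _, a in sensors)
    x = sum(p[0] * a for p, a in sensors) / total
    y = sum(p[1] * a for p, a in sensors) / total
    return x, y

# Four corner sensors with illustrative amplitudes (screen units).
print(centroid([((0, 0), 0.2), ((100, 0), 0.9),
                ((0, 100), 0.3), ((100, 100), 0.6)]))
```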
An off-axial optical system is desirable for a see-through head-mounted display (HMD) with a wide field of view (FOV). Using our new off-axial paraxial theory and fabrication methods, we have succeeded in correcting the off-axial aberrations of optical systems which consist of aspherical surfaces without rotational symmetry. In this paper, we describe three types of off-axial HMD optical systems. (1) The first is a hologram for a monochromatic HMD. The hologram records a wavefront generated by a computer-generated hologram. The image spot size is about 15 micrometers over a 9 degree FOV. (2) The second is an aspherical mirror system for a color HMD. The designed image spot size is less than 30 micrometers over a 43.5 degree FOV. The shape error of the fabricated mirror, measured by contact probing, is from 1.0 micrometers to 1.9 micrometers. The maximum resolution is 36 lp/mm. (3) The last is a prism with aspherical surfaces, a 34 degree FOV and less than 15 mm thickness. A monocular HMD weighing only 80 g has been developed. A line-of-sight detecting device has been applied as an interactive man-machine interface.
This project focuses on the use of force feedback sensations to enhance user interaction with standard graphical user interface paradigms. While typical joystick and mouse devices are input-only, force feedback controllers allow physical sensations to be reflected to a user. Tasks that require users to position a cursor on a given target can be enhanced by applying physical forces to the user that aid in targeting. For example, an attractive force field implemented at the location of a graphical icon can greatly facilitate target acquisition and selection of the icon. It has been shown that force feedback can enhance a user's ability to perform basic functions within graphical user interfaces.
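An attractive field of this kind can be as simple as a spring force toward the icon inside a capture radius. A minimal sketch; the gain, radius, and linear force law are hypothetical tuning choices, not the project's published parameters:

```python
def attraction_force(cursor, target, radius=40.0, k=0.5):
    """Spring-like pull toward the target while the cursor is within radius."""
    dx, dy = target[0] - cursor[0], target[1] - cursor[1]
    dist = (dx * dx + dy * dy) ** 0.5
    if dist == 0.0 or dist > radius:
        return (0.0, 0.0)
    return (k * dx, k * dy)   # force vector reflected to the device

print(attraction_force((100, 100), (120, 110)))   # pulled toward the icon
```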
Most off-the-shelf immersive virtual environment (IVE) systems do not provide adequate depth cues to allow quick and accurate manual interactions with virtual objects. Studies of teleoperation tasks show that using stereoscopic displays improves performance, especially in situations with increased scene complexity and decreased object visibility. However, many aspects of these studies prevent generalization of the results to IVE systems. Further, the additional costs of high-resolution stereoscopic displays preclude their widespread use in business and educational settings. In this paper, the effects of various visual and auditory display enhancements were evaluated to determine whether they can compensate for the missing stereoscopic depth cues. A placement task, in which participants retrieved a virtual peg in one location and placed it on a virtual target in another location, provided a common test situation in which to compare the various enhancements. Participants wore a commercial head-mounted display and spatial trackers on the head and hand. Results indicated conditions under which visual and auditory enhancements to monocular displays resulted in performance that was not different from using stereoscopic displays. Theoretical foundations for the findings and implications of the results for other tasks in VEs are discussed.
With the advent of mass distribution of consumer VR games comes an imperative to set health and safety standards for the hardware and software used to deliver stereographic content. This is particularly important for game developers who intend to present this stereographic content via head-mounted display (HMD). The visual discomfort that is commonly reported by users of HMD-based VR games presumably could be kept to a minimum if game developers were provided with standards for the display of stereographic imagery. In this paper, we draw upon both results of research in binocular vision and practical methods from clinical optometry to develop some technical guidelines for programming stereographic games that have the end user's comfort and safety in mind. The paper provides general strategies for user-centered implementation of 3D virtual worlds, as well as pictorial examples demonstrating a natural means of rendering stereographic imagery more comfortable to view in games employing a first-person perspective.
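One concrete guideline of the kind the paper argues for is a cap on screen disparity. A common rule of thumb (an assumption here, not a formal standard from the paper) keeps disparity within about ±1 degree of visual angle, which bounds the parallax in screen units at any viewing distance:

```python
import math

def max_disparity_mm(view_dist_mm, limit_deg=1.0):
    """Largest on-screen disparity subtending `limit_deg` at this distance."""
    return 2 * view_dist_mm * math.tan(math.radians(limit_deg / 2))

for d in (500, 2000):   # close HMD-like optics vs. living-room screen
    print(f"at {d} mm: keep |disparity| under {max_disparity_mm(d):.1f} mm")
```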
Excessive end-to-end latency and insufficient update rate continue to be major limitations of virtual environment (VE) system performance. Beginning from a typical baseline VE in which a spatial tracker is polled to deliver data via an RS-232 interface at each update of a single application program, we examined a series of hardware and software reconfigurations with the aim of reducing end-to-end latency and increasing update rate. These reconfigurations included: (1) multiple asynchronous UNIX processes communicating via shared memory; (2) continuous streaming rather than polled tracker operation; (3) multiple rather than single tracker instruments; and (4) higher bandwidth IEEE-488 parallel communication between tracker and computer. Starting from an average latency of 65 msec and an update rate of 20 Hz for a standard 1000 polygon test VE, our most successful implementation to date runs at 60 Hz (the maximum achievable with our graphics display hardware) with approximately 30 msec average latency. Because our equipment and architecture are based on widely available hardware (i.e., SGI computer, Polhemus Fastrak) and software (i.e., Sense8 WorldToolKit), our techniques and results are broadly applicable and easily transferable to other VE systems.
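Reconfigurations (1) and (2) amount to decoupling tracking from rendering: a separate process streams samples into shared memory and the render loop simply reads the newest one instead of polling and blocking. A stripped-down sketch of the pattern, with a stub in place of the Polhemus/Sense8 specifics:

```python
import ctypes
import multiprocessing as mp
import time

def tracker_proc(latest, running):
    """Continuously stream 'pose' samples into shared memory (stub data)."""
    while running.value:
        latest[0] = time.time()   # newest sample simply overwrites the old
        time.sleep(0.001)         # ~1 kHz, standing in for streaming mode

if __name__ == "__main__":
    latest = mp.Array(ctypes.c_double, 1)
    running = mp.Value(ctypes.c_int, 1)
    p = mp.Process(target=tracker_proc, args=(latest, running))
    p.start()
    time.sleep(0.05)              # let the stream warm up
    for _ in range(3):            # stand-in for the 60 Hz render loop
        time.sleep(1 / 60)
        print(f"sample age: {(time.time() - latest[0]) * 1000:.2f} ms")
    running.value = 0
    p.join()
```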
Sound represents a largely untapped source of realism in Virtual Environments (VEs). In the real world, sound constantly surrounds us and pulls us into our world. In VEs, sound enhances the immersiveness of the simulation and provides valuable information about the environment. While there has been a growing interest in integrating sound into VE interfaces, current technology has not brought about its widespread use. This, we believe, can be attributed to the lack of proper tools for modeling and rendering the auditory world. We have been investigating some of the problems which we believe are pivotal to the widespread use of sound in VE interfaces. As a result of this work, we have developed the Virtual Audio Server (VAS). VAS is a distributed, real-time spatial sound generation server. It provides high level abstractions for modeling the auditory world and the events which occur in the world. VAS supports both sampled and synthetic sound sources and provides a device independent interface to spatialization hardware. Resource management schemes can easily be integrated into the server to manage the real-time generation of sound. We are currently investigating possible approaches to this important problem.
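The abstract does not specify VAS's programming interface, so the following is a purely hypothetical client-side abstraction, sketched only to make the idea of "high level abstractions for modeling the auditory world" concrete:

```python
class SoundSource:
    """A sampled or synthetic source with a position in the virtual world."""
    def __init__(self, sample, position):
        self.sample = sample
        self.position = position

class SpatialSoundServer:
    """Hypothetical server facade: clients describe events, not DSP."""
    def __init__(self):
        self.sources = []
    def add(self, source):
        self.sources.append(source)    # server schedules spatialization
    def move(self, source, position):
        source.position = position     # re-spatialized on the next pass

server = SpatialSoundServer()
steps = SoundSource("footsteps.wav", (1.0, 0.0, 2.0))
server.add(steps)
server.move(steps, (0.5, 0.0, 1.0))    # source follows a moving avatar
```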
Both VR and AI have the potential to be huge productivity enhancers for engineering design, and in complementary ways. VR is a visualization tool allowing users to comprehend complex spatial relationships among many variables. AI is an exploration tool capable of finding and exploiting relationships which are very difficult to visualize, but is most effective with few variables. Using engineering design as an example, we explore how VR and AI might be integrated to yield productivity gains greater than either might alone. The typical engineering design cycle for a complex system involves multiple passes through design, simulation, and analysis phases. VR is used to visualize a design simulation while AI is used to assist in the subsequent redesign. The role of the VR subsystem is twofold; it visualizes the data for analysis and problem diagnosis such that it is easily comprehended by the engineer, and it provides a mechanism by which the engineer can describe how the design is to be improved in the next iteration. The AI subsystem then acts on the redesign descriptions to suggest design modifications. These suggestions are integrated with direct modifications from the user, and the redesigned system is simulated again. The synthesis between the VR and AI subsystems results in a closed loop design system capable of effectively undertaking complex engineering design tasks.
The technologies exist now to develop wearable computers. Anyone with a few thousand dollars can purchase commercially available components, repackage the computer, add a head mounted display and build one. Wearable computer systems still have many user interface problems associated with them, from both an input and an output perspective. If wearable computers are to succeed, major advances will need to be made in the way the user of the computer interacts with the system. This paper will discuss the construction of two wearable computer systems and some of the user interface and usability problems associated with them.
Sound within the virtual environment is often considered to be secondary to the graphics. In a typical scenario, either audio cues are locally associated with specific 3D objects or a general aural ambiance is supplied in order to alleviate the sterility of an artificial experience. This paper discusses a completely different approach, in which cues are extracted from live or recorded music in order to create geometry and control object behaviors within a computer-generated environment. Advanced texturing techniques used to generate complex stereoscopic images are also discussed. By analyzing music for standard audio characteristics such as rhythm and frequency, information is extracted and repackaged for processing. With the Soundsculpt Toolkit, this data is mapped onto individual objects within the virtual environment, along with one or more predetermined behaviors. Mapping decisions are implemented with a user-definable schedule and are based on the aesthetic requirements of directors and designers. This provides for visually active, immersive environments in which virtual objects behave in real-time correlation with the music. The resulting music-driven virtual reality opens up several possibilities for new types of artistic and entertainment experiences, such as fully immersive 3D `music videos' and interactive landscapes for live performance.
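The analysis half of such a pipeline can be as small as an FFT over a short window, with band energies mapped onto object parameters. The band edges and the mapping below are illustrative stand-ins, not the Soundsculpt Toolkit's actual scheme:

```python
import numpy as np

def band_energies(window, rate=44100):
    """Low-band (rhythm proxy) and high-band (brightness proxy) energy."""
    spectrum = np.abs(np.fft.rfft(window))
    freqs = np.fft.rfftfreq(len(window), 1.0 / rate)
    return spectrum[freqs < 200.0].sum(), spectrum[freqs >= 2000.0].sum()

window = np.sin(2 * np.pi * 110 * np.arange(2048) / 44100)   # fake audio
low, high = band_energies(window)
object_scale = 1.0 + 0.001 * low    # hypothetical mapping onto geometry
print(f"low = {low:.1f}, high = {high:.1f}, scale = {object_scale:.2f}")
```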
A prototype virtual environment (VE) has been developed for training a submarine officer of the deck (OOD) to perform in-harbor navigation on a surfaced submarine. The OOD, stationed on the conning tower of the vessel, is responsible for monitoring the progress of the boat as it negotiates a marked channel, as well as verifying the navigational suggestions of the below-deck piloting team. The VE system allows an OOD trainee to view a particular harbor and associated waterway through a head-mounted display, receive spoken reports from a simulated piloting team, give spoken commands to the helmsman, and receive verbal confirmation of command execution from the helm. The task analysis of in-harbor navigation and the derivation of application requirements are briefly described. This is followed by a discussion of the implementation of the prototype. This implementation underwent a series of validation and verification assessment activities, including operational validation, data validation, and software verification of individual software modules as well as of the integrated system. Validation and verification procedures are discussed with respect to the OOD application in particular, and with respect to VE applications in general.
This paper addresses some of the practical applications, advantages and difficulties associated with the engineering applications of virtual reality. The paper tracks actual investigative work in progress on this subject at the BNR research lab in RTP, NC. This work attempts to demonstrate the actual value added to the engineering process by using existing 3D CAD data for interactive information navigation and evaluation of design concepts and products. Specifically, the work includes translation of Parametric Technology's Pro/ENGINEER models into a virtual world to evaluate potential attributes such as multiple concept exploration and product installation assessment. Other work discussed in this paper includes extensive evaluation of two new tools, VRML and SGI's/Template Graphics' WebSpace, for navigation through Pro/ENGINEER models with links to supporting technical documentation and data. The benefits of using these tools for 3D interactive navigation and exploration throughout three key phases of the physical design process are discussed in depth. The three phases are Design Concept Development, Product Design Evaluation and Product Design Networking. The predicted values added include reduced time to `concept ready', reduced prototype iterations, increased `design readiness' and shorter manufacturing introduction cycles.
This paper describes new ways of using textures to substitute for complex geometric models. A stereo texture is a stereo pair of images mapped onto geometry and presented in a stereo display. The viewer sees the stereo pair and can thus perceive depth information in the textured image. This technique can be used to replace large parts of a complex model with simple base geometry and a stereo texture. The stereo textures can replace the scene beyond a frame or portal. If the stereo texture is placed some distance behind the frame, the viewer gets motion parallax between the frame and the scene. The textures may also contain information in association with the image for tasks such as picking.
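Sizing the parallax baked into a stereo texture is ordinary stereo-display geometry: to depict a point at depth Z beyond a textured quad standing at distance Dp, render it into the pair with on-quad disparity s = e·(1 − Dp/Z), where e is the interocular distance. A quick sketch:

```python
def required_disparity_m(Z, Dp, e=0.065):
    """On-quad disparity (m) for a point at depth Z behind a quad at Dp."""
    return e * (1 - Dp / Z)

for Z in (2.0, 5.0, 1e9):   # scene depths behind a quad 1.5 m away
    s_mm = required_disparity_m(Z, 1.5) * 1000
    print(f"depth {Z:>10.1f} m -> disparity {s_mm:5.1f} mm")
```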