Two stereoscopic systems are described which permit several people to observe a high-resolution image simultaneously and which are suited to mass production. One is a time-parallel method; the other is a time-interlaced method. In the time-parallel stereoscopic display system, several video projectors are mechanically driven according to the observers' eye positions, varying the images projected onto a large-format convex lens so that the left-image rays continuously enter the observers' left eyes and vice versa. In the time-interlaced stereoscopic display system, the image output screen is formed by a transmissive color liquid-crystal plate with a large-format lens. The lens is arranged so that an image of the viewers is projected onto the plane of a black-and-white CRT positioned behind the plate as the system's backlight. To view the stereo image on the color liquid-crystal plate, the alternating left- and right-eye perspectives must be synchronized with an infrared lighting system and with the imaging of the viewers on the CRT, so that the backlight directs light to the left eyes when the left-eye view is displayed on the plate, and vice versa.
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks. You are receiving this notice because your organization may not have SPIE eBooks access. Shibboleth/OpenAthens users: please sign in to access your institution's subscriptions. To obtain this item, you may purchase the complete book in print or electronic format on SPIE.org.
New products are described for real-time viewing of flicker-free stereoscopic video and the multiplexing/demultiplexing of two channels of picture information within a standard video channel. The technique used in the new products is superior to prior commercially available stereoplexing approaches, eliminating the need for a line-doubling scan converter, and increasing vertical resolution while decreasing the stairstepping of diagonal lines.
This paper describes a newly developed volume-scanning display in which a user can actually put his or her hands into the 3D image and manipulate it without encumbering devices such as goggles, glasses, or gloves. This performance has never been achieved by conventional display systems. The display is composed of a volume-scanning LED panel for creating an autostereoscopic image, an optical relay system for translating the image into another free space, and a wireless 3D mouse for the user to interact with the image. The display has been applied to shape modeling, physical simulation data visualization, and medical data imaging.
We have developed a new realistic 3-D microsurface visualization technique utilizing optical phase-shifting interferometry (PSI). First, we measure the surface topography directly by determining the phase of the wavefront reflected from the surface of the object. The phase information is obtained by shifting the phase of one beam of the interferometer by a known amount and measuring the intensity of the interferometer for many different phase shifts. A phase difference map between the reference and object wavefronts is then calculated from the measured intensities. The vertical resolution is on the order of a few Angstroms. Second, we extend phase-shifting interferometry to a measurement of surface reflectivity. The measured reflectivity is not affected by any variations associated with the light source across the entire illumination field. Third, both the measured surface height data and the reflectivity images are fed into a workstation where advanced computer graphics algorithms are applied. The surface height data are used to generate the 3-D surface profile, which is then shaded by the reflectivity image, resulting in a realistic 3-D image. We will present the theoretical analysis, system setup, experimental measurements, and examples of realistic 3-D microscopic surface images.
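The phase-extraction step described above can be illustrated with the standard four-step algorithm (the abstract does not specify which N-step variant is used; the four-step form below is a common choice, and all names and values here are illustrative):

```python
import math

def four_step_phase(I1, I2, I3, I4):
    """Recover the wavefront phase from four intensity frames
    I_k = A + B*cos(phi + k*pi/2), k = 0..3 (frames numbered 1..4):
    I4 - I2 = 2B*sin(phi), I1 - I3 = 2B*cos(phi)."""
    return math.atan2(I4 - I2, I1 - I3)

def phase_to_height(phi, wavelength):
    """For a reflective surface, a phase change of 2*pi corresponds to a
    height change of wavelength/2, so h = phi*wavelength/(4*pi)."""
    return phi * wavelength / (4 * math.pi)
```

Applied per pixel, the first function yields the phase-difference map, which is then unwrapped and scaled by the second to give surface height.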
In this paper we outline the current status of our telecommunications-based 3-D imaging systems work. Demonstrations for each of the key application areas have been, or are being, constructed. The conditions used for scene recording and replay are outlined, as are the model used and the constraints imposed. Initial user feedback on the demonstrations is encouraging.
This paper discusses the origins, characteristics, and effects of image distortions in stereoscopic video systems. The geometry of stereoscopic camera and display systems is presented first. This is followed by the analysis and diagrammatic presentation of various image distortions, such as depth-plane curvature, depth non-linearity, depth and size magnification, shearing distortion, and keystone distortion. The variation of system parameters is also analyzed with the help of plots of image geometry to show their effects on image distortions. The converged (toed-in) and parallel camera configurations are compared, and the amount of vertical parallax induced by lens distortion and keystone distortion is discussed. The range of acceptable vertical parallax and the convergence/accommodation limitations on depth range are also discussed. It is shown that a number of these distortions can be eliminated by the appropriate choice of camera and display system parameters. There are some image distortions, however, which cannot be avoided due to the nature of human vision and limitations of current stereoscopic video display techniques.
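The depth non-linearity analyzed in this paper can be sketched with a simple pinhole model of a parallel (sensor-shifted) camera pair and a single viewer; every parameter value below is an illustrative assumption, not a figure from the paper:

```python
def perceived_depth(Z, T=0.065, F=0.05, M=10.0, Zc=2.0, E=0.065, V=1.0):
    """Perceived depth of a point at object distance Z (metres).
    T: camera separation, F: lens focal length, M: sensor-to-screen
    magnification, Zc: convergence distance, E: eye separation,
    V: viewing distance. All values are assumed for illustration."""
    # screen parallax: zero at Zc, uncrossed (positive) beyond it
    P = M * F * T * (1.0 / Zc - 1.0 / Z)
    # similar triangles between the eyes and the screen plane
    return E * V / (E - P)
```

Equal steps in object distance Z map to unequal steps in perceived depth, which is the non-linearity the paper presents diagrammatically.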
In teleoperation, non-orthoscopic views are often obtained by changing camera distance, lens focal length, and intercamera separation to settings that deviate from those required to produce orthoscopic views. Distortions caused by this distant perspective can have an impact on perception and task performance in the work space. This study uses the rapid sequential positioning (RSP) task to investigate differences in performance using stereoscopic and monoscopic remote views that are either orthoscopic in terms of camera/lens configuration or are obtained from cameras located at four times their orthoscopic distance. At this distant perspective, orthoscopic image size is maintained by adjusting the lens focal length, while comparable disparities are maintained by adjusting the intercamera separation. Although in the distant-perspective (non-orthoscopic) view objects on the horopter plane are the same size as in the orthoscopic view, objects ahead of or behind the horopter plane are not. Time scores were recorded from four subjects performing the RSP task under four viewing conditions: monoscopic/orthoscopic, monoscopic/non-orthoscopic, stereoscopic/orthoscopic, and stereoscopic/non-orthoscopic. A two-by-two ANOVA was performed on the data. The results did not reveal a degradation in performance when moving from the orthoscopic view to the distant perspective for either the monoscopic or the stereoscopic view, although stereo was significantly superior.
This paper examines whether the potential benefits outweigh the expected costs of using stereoscopic video (SV) instead of monoscopic video (MV) for hazardous materials teleoperation. The first part presents the various benefits ascribed to SV found in previous laboratory research, and outlines the expected costs. The second part presents two experiments conducted using trained telerobot operators of a variety of skill levels, seeking confirmation that the expected benefits of SV will apply to real world field operations. There is a brief discussion of the relevance of laboratory-based experimental results to real world teleoperation, and an approach is suggested that stresses the importance of expert evaluation as a more robust and powerful analytic tool than standard laboratory techniques and statistics in field trials. The first experiment, conducted under field-like conditions with typical operators, demonstrated that operators strongly prefer SV, considering it significantly better for most teleoperation tasks, and rated SV to be more useful and more comfortable to use than MV. The results of the second experiment, conducted under more controlled conditions with expert operators, confirmed the results of the first, and demonstrated significant performance advantages of SV.
In flying air intercepts, a fighter pilot must plan most tactical maneuvers well before acquiring visual contact. Success depends on one's ability to create an accurate mental model of dynamic 3D spatial relationships from 2D information displays. This paper describes an Air Force training program for visualizing large-scale dynamic spatial relationships. It employs a low-cost, portable system in which a helmet-mounted stereoscopic display reveals the unobservable spatial relationships in a virtual world. We also describe recent research which evaluated the training effectiveness of this interactive three-dimensional display technology. Three display formats were tested for their impact on the pilot's ability to encode, retain, and recall functionally relevant spatial information: (1) a set of 2D orthographic plan views, (2) a flat-panel 3D perspective rendering, and (3) the 3D virtual environment. Trainees flew specified air intercepts and reviewed the flights in one of the display formats. Experts' trajectories were provided for comparison. After training, flight performance was tested on a new set of scenarios. Differences in pilots' performance under the three formats suggest how virtual-environment displays can aid people learning to visualize 3D spatial relationships from 2D information.
We investigate the use of stereo in creating and manipulating rational Bézier tensor-product surfaces. The application uses operating-system-provided 2D cursors and menus, which were found to be inferior to their stereo counterparts. Stereo manipulation was accomplished using a 3-button mouse, which was determined to be an adequate input device. The interface is described, and possible changes and additions are suggested.
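As a sketch of the underlying surface mathematics (not the paper's code), a point on a rational Bézier tensor-product surface can be evaluated directly from the Bernstein basis, weighting each control point and dividing by the accumulated weight:

```python
from math import comb

def bernstein(n, i, t):
    """Bernstein basis polynomial B_{i,n}(t)."""
    return comb(n, i) * t**i * (1 - t)**(n - i)

def rational_bezier_point(P, w, u, v):
    """Evaluate a rational Bezier tensor-product surface at (u, v).
    P: (n+1) x (m+1) grid of 3D control points; w: matching weights."""
    n, m = len(P) - 1, len(P[0]) - 1
    num = [0.0, 0.0, 0.0]
    den = 0.0
    for i in range(n + 1):
        for j in range(m + 1):
            b = bernstein(n, i, u) * bernstein(m, j, v) * w[i][j]
            den += b
            for k in range(3):
                num[k] += b * P[i][j][k]
    return [c / den for c in num]
```

With all weights equal to 1 this reduces to an ordinary (polynomial) Bézier surface; unequal weights pull the surface toward the more heavily weighted control points, which is what makes exact conic sections representable.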
A stereoscopic drawing program is described which permits the user to display and manipulate quadric surfaces. The quadric surfaces are the three-dimensional relatives of the ellipse, parabola, and hyperbola, and include ellipsoids, hyperboloids of one sheet, hyperboloids of two sheets, elliptic cones, elliptic paraboloids, and hyperbolic paraboloids. These surfaces have both implicit and parametric representations. A 3-button mouse is used to create and manipulate the surfaces. Rubber-banding can be used to define a surface, and three-dimensional transformations of the surface, including scaling, rotation, and translation, are defined by mouse movement. A goal is to maintain a consistent and intuitive method of control for these surfaces, using techniques similar to those used in 2-dimensional drawing systems. The tessellation, color, and shading characteristics of a surface can be determined interactively by the user.
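To illustrate the parametric representations mentioned above (a standard parameterization, not taken from the paper), an ellipsoid with semi-axes a, b, c can be tessellated directly from its parametric form:

```python
import math

def ellipsoid_vertices(a, b, c, nu=16, nv=8):
    """Sample the parametric ellipsoid
        x = a*cos(u)*sin(v), y = b*sin(u)*sin(v), z = c*cos(v)
    on a (nu x nv) grid; every vertex satisfies the implicit form
        (x/a)^2 + (y/b)^2 + (z/c)^2 = 1."""
    verts = []
    for i in range(nu):
        u = 2 * math.pi * i / nu
        for j in range(nv + 1):
            v = math.pi * j / nv
            verts.append((a * math.cos(u) * math.sin(v),
                          b * math.sin(u) * math.sin(v),
                          c * math.cos(v)))
    return verts
```

The grid resolution (nu, nv) is the interactively adjustable tessellation the abstract refers to; the other quadrics admit analogous parameterizations with trigonometric or hyperbolic functions.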
Stereo vision is the sensation of visual depth that results from the neural response to dissimilarities in the images seen by the two retinas. Psychophysical studies have strongly suggested that stereo disparity exists in the human visual system, and forms the basis for three-dimensional depth perception in the brain. In the past, this subject has been studied by many scientists using various approaches. Recently, several vision biologists have proposed a human binocular neural interaction model. We derive the mathematical equations for the model based on their discovery, and simulate this multilayer neural network model with real binocular images on a digital computer. This model proves to be a novel approach that provides binocular depth performance close to the biological data.
In human binocular vision, the two retinal images are unified into a single image with perception of depth through a mechanism called 'binocular sensory fusion'. The term cyclopean vision refers to the unified visual scene of the world obtained from fusion of the images projected to the two eyes. This paper describes an algorithm which simulates the fusion process for depth perception. The disparity information of the entire image is obtained by a convolution operation followed by local-maxima detection. The computational burden, and therefore the processing time, is greatly reduced.
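A minimal sketch of the correlate-and-pick-the-maximum idea (the paper's scheme convolves full images and detects local maxima; this whole-line version with an assumed correlation score only illustrates the principle):

```python
import numpy as np

def line_disparity(left, right, max_d):
    """Slide the right scanline across the left one and score each
    candidate shift d by average correlation; the maximum over d gives
    the dominant disparity for this epipolar line (a coarse,
    whole-line estimate, not a per-pixel disparity map)."""
    best_d, best_s = 0, -np.inf
    for d in range(max_d + 1):
        a = left[d:]
        b = right[:len(right) - d] if d else right
        n = min(len(a), len(b))
        s = np.dot(a[:n], b[:n]) / n
        if s > best_s:
            best_s, best_d = s, d
    return best_d
```

A full implementation would compute such scores in local windows (via convolution) and detect local maxima per pixel, which is where the reported reduction in processing time comes from.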
This paper describes a fast and robust artificial neural network algorithm for solving the stereo correspondence problem in binocular vision. In this algorithm, the stereo correspondence problem is modelled as a cost minimization problem, where the cost is the value of a matching function between edge pixels along the same epipolar line. A multiple-constraint energy-minimization neural network is implemented for this matching process. This algorithm differs from previous work in that it integrates ordering and geometry constraints, in addition to the uniqueness, continuity, and epipolar-line constraints, into a neural network implementation. The processing procedure is similar to that of the human visual process. The edge pixels are divided into clusters according to their orientation and contrast polarity. Matching is performed only between edge pixels in the same cluster and on the same epipolar line. By following the epipolar line, the ordering constraint (the left-right relation between pixels) can be specified easily without building an extra relational graph as in earlier work. The algorithm thus assigns artificial neurons, which follow the same order as the pixels along an epipolar line, to represent the matching candidate pairs.
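The cost structure along one epipolar line can be illustrated with a small dynamic-programming matcher; this is not the paper's neural-network formulation, but it enforces the same ordering and uniqueness constraints on the same kind of matching cost, with an assumed per-pixel occlusion penalty:

```python
def match_epipolar(left, right, occlusion=1.0):
    """Minimum cost of matching edge-pixel features along one epipolar
    line. left, right: feature values (e.g. contrast) of edge pixels in
    scanline order. Matches preserve ordering and uniqueness; an
    unmatched pixel pays the (assumed) occlusion penalty."""
    n, m = len(left), len(right)
    INF = float('inf')
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(n + 1):
        for j in range(m + 1):
            if i and D[i - 1][j] + occlusion < D[i][j]:
                D[i][j] = D[i - 1][j] + occlusion      # skip a left pixel
            if j and D[i][j - 1] + occlusion < D[i][j]:
                D[i][j] = D[i][j - 1] + occlusion      # skip a right pixel
            if i and j:
                c = D[i - 1][j - 1] + abs(left[i - 1] - right[j - 1])
                if c < D[i][j]:
                    D[i][j] = c                        # match the pair
    return D[n][m]
```

Because candidates are visited in scanline order, the left-right ordering constraint is implicit in the table traversal, mirroring the paper's point that no extra relational graph is needed.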
A surprisingly simple 2-D to 3-D visual display process utilizes a mechanical device to integrate four independent 2-D to 3-D visual processes. The quality of the resulting 3-D stereoscopic display approaches that of a "View-Master" when ordinary color pictures from a magazine are viewed. The mechanical device which allows the independent processes to be applied is called the "Three-Dimensional Viewing Glasses" (3-DVG, U.S. Patent 4,810,057). An individual must first learn how to use the device; approximately four out of five persons with normal eyesight will experience the effect. Brief exposure to the device can lead to a heightened sense of depth perception when viewing subsequent pictures without use of the device. Familiarity with the process allows an individual to use only their fingers to generate a surprisingly good stereoscopic display.
Low-cost stereoscopic virtual-reality hardware that interfaces with nearly any computer, and stereoscopic software that runs on any PC, are described. Both are user-configurable for serial or parallel ports. Stereo modeling, rendering, and interaction via gloves or 6D mice are provided. Low-cost LCD Visors and external interfaces represent a breakthrough in convenience and price/performance. A complete system with software, Visor, interface, and Power Glove is under $500. StereoDrivers will interface with any system providing video sync (e.g., sync-on-green RGB). PC3D will access any standard serial port, while PCVR works with serial or parallel ports and glove devices. Model RF Visors detect magnetic fields and require no connection to the system. The PGSI is a microprocessor control for the Power Glove and Visors. All interfaces will operate up to 120 Hz with Model G Visors. The SpaceStations are demultiplexing, field-doubling devices which convert field-sequential video or graphics for stereo display with dual video projection or dual LCD SpaceHelmets.
Stereoscopic display systems, with both left and right eye fields appearing on a single display surface, have recently been used in combination with head-tracking technology. The result has been a new Virtual Reality paradigm, where the display surface represents a window into a virtual world. Within this paradigm, the user may intuitively alter the rendered perspective by changing his head position or orientation. This paper outlines a methodology for generating real-time projection transformations that apply to single-display stereoscopic viewing systems with head-tracking. Numerous transformations may be applied to a scene in response to head-tracking data. These include scene rotation, scene translation, field of view angle changes, variation of the stereoscopic interaxial separation, parallax axis rotation, and stretching of the displayed projections. I will describe the implementation of these operations. In some situations, software should respond to head-tracked input in an exaggerated manner. In other cases, graphics transformations should be moderated. Certain head-tracked input may not be useful at all. The overall goal is for the user to experience the three-dimensional scene in a convenient, natural, and intuitive manner.
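A common way to realize the head-tracked projection described above is a generalized off-axis frustum computed from the tracked eye position relative to the display window; the sketch below follows standard OpenGL glFrustum conventions and is an assumed illustration, not the paper's implementation:

```python
import numpy as np

def offaxis_frustum(eye, screen_w, screen_h, near, far):
    """Asymmetric (off-axis) projection matrix for a tracked eye.
    eye: (x, y, z) in screen-centred coordinates, with z the distance
    from the eye to the display plane (z > 0); screen_w, screen_h:
    physical window size; near, far: clip distances."""
    ex, ey, ez = eye
    s = near / ez                         # scale screen window to near plane
    l = (-screen_w / 2 - ex) * s
    r = ( screen_w / 2 - ex) * s
    b = (-screen_h / 2 - ey) * s
    t = ( screen_h / 2 - ey) * s
    # standard glFrustum-style matrix from the asymmetric bounds
    return np.array([
        [2 * near / (r - l), 0.0, (r + l) / (r - l), 0.0],
        [0.0, 2 * near / (t - b), (t + b) / (t - b), 0.0],
        [0.0, 0.0, -(far + near) / (far - near), -2 * far * near / (far - near)],
        [0.0, 0.0, -1.0, 0.0]])
```

For stereo, the same function is called twice with the eye offset by plus and minus half the interaxial separation along x, yielding the left- and right-eye frusta; the scene-rotation, translation, and interaxial adjustments the paper enumerates are applied before this projection.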
Virtual environments involve the user in an interactive three-dimensional computer-generated environment. The methods of interaction typically involve direct manipulation of virtual objects via three-dimensional trackers. The tracking signal may be degraded in various ways, impacting the ability of the user to perform various tasks. This presentation will address the impact of two types of degradation in the tracking signal: lag (transport delay) and low frame rate. These degradations are common in existing virtual reality systems. While the impact of lag on human performance is comparatively well studied, the impact of low frame rate has not been widely studied. The impact of lag and low frame rate will be compared and studied on two tasks: pursuit tracking and placing. The tasks will be studied in a two-dimensional context, eliminating ambiguities due to three-dimensional perception and display. Simple conclusions will be drawn that can serve as guidelines for developers designing interactive virtual environments. The relationship between these conclusions and theories of human performance will be briefly addressed.
In the ideal orthostereoscopic viewing system, the geometric relationship between the manipulator arm and the cameras is designed to produce a close correspondence between the operator's actual and imaged hand-to-eye position. This correspondence often cannot be maintained because of the physical design constraints of the manipulator, cameras, or mounting structure. Cameras mounted in a non-corresponding position, relative to the operator's hand-to-eye position, create a visual-motor mismatch. In this study the rapid sequential positioning (RSP) task is used to measure manipulator performance under two levels of visual-motor correspondence. Performance was measured by (1) taking a pure perceptual measure, (2) taking the total time to complete a task, (3) measuring various types of errors, and (4) counting the number of perfect and near-perfect task completions. One group viewed a scene in which there was visual-motor correspondence, and the other group viewed a non-corresponding scene, in which the cameras were shifted 30 degrees clockwise from the orthoscopic position. Each group performed the RSP task under four visual conditions: monoscopic stationary, monoscopic with motion parallax, stereoscopic stationary, and stereoscopic with motion parallax. The performance of the groups under the different views was compared to determine the effect of visual-motor non-correspondence.
During 1992, Dimension Technologies Inc. (DTI) completed several development projects designed to enhance and improve its autostereoscopic display technologies. These include: the introduction and upgrading of a very bright 640 × 480 full-color autostereoscopic display with user-controlled selection of 3D or 2D viewing modes; the development of an electronic head-tracking system that allows a user to observe stereo from across a wide area without head-position restrictions; the development of a 640 × 480 autostereoscopic color display that allows each of the observer's eyes to see all the pixels on the LCD; and initial development work on a compact display designed to provide look-around on high-resolution images using multiple perspective views.