Hybrid Reality Environments represent a new kind of visualization space that blurs the line between virtual environments and high-resolution tiled display walls. This paper outlines the design and implementation of the CAVE2™ Hybrid Reality Environment. CAVE2 is the world's first near-seamless flat-panel-based, surround-screen immersive system. Unique to CAVE2 is that it enables users to simultaneously view both 2D and 3D information, providing more flexibility for mixed-media applications. CAVE2 is a cylindrical system 24 feet in diameter and 8 feet tall, and consists of 72 near-seamless, off-axis-optimized passive stereo LCD panels, creating an approximately 320-degree panoramic environment for displaying information at 37 megapixels (in stereoscopic 3D) or 74 megapixels (in 2D), at a horizontal visual acuity of 20/20. Custom LCD panels with shifted polarizers were built so that the images in the top and bottom rows of LCDs are optimized for vertical off-center viewing, allowing viewers to come closer to the displays while minimizing ghosting. CAVE2 is designed to support multiple operating modes. In the Fully Immersive mode, the entire room can be dedicated to one virtual simulation. In 2D mode, the room can operate like a traditional tiled display wall, enabling users to work with large numbers of documents at the same time. In the Hybrid mode, a mixture of both 2D and 3D applications can be supported simultaneously. The ability to treat immersive work spaces in this hybrid way has never been achieved before, and leverages the special abilities of CAVE2 to enable researchers to seamlessly interact with large collections of 2D and 3D data. To realize this hybrid ability, we merged the Scalable Adaptive Graphics Environment (SAGE), a system for supporting 2D tiled displays, with Omegalib, a virtual reality middleware supporting OpenGL, OpenSceneGraph, and VTK applications.
We present a novel immersive workstation environment that scientists can use for 3D data exploration and as their everyday
2D computer monitor. Our implementation is based on an autostereoscopic dynamic parallax barrier 2D/3D display, interactive
input devices, and a software infrastructure that allows client/server software modules to couple the workstation to
scientists' visualization applications. This paper describes the hardware construction and calibration, software components,
and a demonstration of our system in nanoscale materials science exploration.
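As a rough illustration of the client/server coupling described above, the sketch below defines a hypothetical head-pose message that the workstation's display server might stream to a visualization client, which would respond with stereo frames rendered for that pose. The field layout and names are assumptions made for illustration, not the system's actual protocol.

```cpp
// Hypothetical sketch of a client/server message coupling the workstation
// to a visualization application. Layout and names are assumptions.
#include <cstdint>
#include <cstdio>
#include <cstring>
#include <vector>

// One tracked head pose, sent server -> client each tracker update.
struct HeadPoseMsg {
    uint32_t sequence;       // increases monotonically; stale poses dropped
    float    position[3];    // tracked eye midpoint, metres
    float    orientation[4]; // quaternion (x, y, z, w)
};

std::vector<uint8_t> encode(const HeadPoseMsg& m) {
    std::vector<uint8_t> buf(sizeof m);
    std::memcpy(buf.data(), &m, sizeof m); // same-architecture assumption
    return buf;
}

HeadPoseMsg decode(const std::vector<uint8_t>& buf) {
    HeadPoseMsg m{};
    std::memcpy(&m, buf.data(), sizeof m);
    return m;
}

int main() {
    HeadPoseMsg out{42, {0.0f, 1.6f, 0.6f}, {0, 0, 0, 1}};
    HeadPoseMsg in = decode(encode(out));
    std::printf("seq=%u pos=(%.2f, %.2f, %.2f)\n", (unsigned)in.sequence,
                in.position[0], in.position[1], in.position[2]);
    return 0;
}
```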
The goal of this research is to develop a head-tracked, stereo virtual reality system utilizing plasma or LCD panels. This paper describes a head-tracked barrier auto-stereographic method that is optimized for real-time interactive virtual reality systems. In this method, a virtual barrier screen is created, simulating the physical barrier screen, and placed in the virtual world in front of the projection plane. An off-axis perspective projection of this barrier screen, combined with the rest of the virtual world, is projected from at least two viewpoints corresponding to the eye positions of the head-tracked viewer. During the rendering process, the simulated barrier screen effectively casts shadows on the projection plane. Since the different projection points cast shadows at different angles, the different viewpoints are spatially separated on the projection plane. These spatially separated images are projected into the viewer's space at different angles by the physical barrier screen. The flexibility of this computational process allows more complicated barrier screens than the parallel opaque lines typically used in barrier-strip auto-stereography. In addition, this method supports the focusing and steering of images for a user's given viewpoint, and allows for very wide angles of view. This method can produce an effective panel-based auto-stereo virtual reality system.
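The per-eye projections this method relies on are conventional off-axis (asymmetric) frusta computed from each tracked eye position and the extents of the projection plane. A minimal sketch, assuming the projection plane lies at z = 0 in tracker coordinates with the eye at distance ez in front of it, and using glFrustum-style parameters:

```cpp
// Minimal sketch of the off-axis perspective projection computed per
// tracked eye. Screen extents [l, r] x [b, t] are in tracker coordinates;
// the eye sits at (ex, ey, ez) with ez > 0 in front of the screen plane.
#include <cstdio>

struct Frustum { double l, r, b, t, n, f; };

Frustum offAxisFrustum(double l, double r, double b, double t,
                       double ex, double ey, double ez,
                       double nearZ, double farZ) {
    // Project the screen extents, as seen from the eye, onto the near plane.
    double s = nearZ / ez;
    return { (l - ex) * s, (r - ex) * s,
             (b - ey) * s, (t - ey) * s, nearZ, farZ };
}

int main() {
    // Eyes ~6.5 cm apart, 60 cm from a 1 m wide, 0.6 m tall screen.
    Frustum L = offAxisFrustum(-0.5, 0.5, -0.3, 0.3, -0.0325, 0.0, 0.6, 0.1, 100.0);
    Frustum R = offAxisFrustum(-0.5, 0.5, -0.3, 0.3,  0.0325, 0.0, 0.6, 0.1, 100.0);
    std::printf("left eye:  l=%+.4f r=%+.4f\n", L.l, L.r);
    std::printf("right eye: l=%+.4f r=%+.4f\n", R.l, R.r);
    return 0;
}
```

The two calls yield mirrored asymmetric frusta, which is what spatially separates the left- and right-eye shadows of the simulated barrier on the projection plane.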
We describe our work on the development and use of collaborative virtual environments to support planning, rehearsal, and execution of tactical operations conducted as part of mine countermeasures missions (MCM). Utilizing our VR-based visual analysis tool, Cave5D, we construct interactive virtual environments based on graphical representations of bathymetry/topography, above-surface images, in-water objects, and environmental conditions. The data sources may include archived data stores and real-time inputs from model simulations or advanced observational platforms. The Cave5D application allows users to view, navigate, and interact with time-varying data in a fully 3D context, thus preserving the geospatial relationships crucial for intuitive analysis. Collaborative capabilities have been integrated into Cave5D to enable users at many distributed sites to interact in near real time with each other and with the data in a many-to-many session. The ability to rapidly configure scenario-based missions in a shared virtual environment has the potential to change the way mission-critical information is used by the MCM community.
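A minimal sketch of the many-to-many session model, assuming each site periodically broadcasts its avatar transform and the current timestep of the time-varying dataset, with every participant keeping the latest state per site; the names are illustrative, not Cave5D's actual interfaces:

```cpp
// Hypothetical many-to-many session state: each site broadcasts updates,
// and every participant retains the most recent state per site.
#include <cstdio>
#include <map>
#include <string>

struct SiteState {
    double posX, posY, posZ; // avatar position in the shared scene
    int    timeStep;         // which frame of the time-varying data is shown
};

class Session {
    std::map<std::string, SiteState> sites_; // latest state per site
public:
    // Called whenever an update arrives from any participant.
    void onUpdate(const std::string& site, const SiteState& s) {
        sites_[site] = s;
    }
    void render() const {
        for (const auto& [site, s] : sites_)
            std::printf("%s: avatar(%.1f, %.1f, %.1f) t=%d\n",
                        site.c_str(), s.posX, s.posY, s.posZ, s.timeStep);
    }
};

int main() {
    Session session;
    session.onUpdate("siteA", {1.0, 0.0, -2.0, 17});
    session.onUpdate("siteB", {-3.5, 0.0, 4.2, 17});
    session.render();
    return 0;
}
```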
Tele-Immersion is the combination of collaborative virtual reality and audio/video teleconferencing. With a new generation of high-speed international networks and high-end virtual reality devices spread around the world, effective trans-oceanic tele-immersive collaboration is now possible. But in order to make these shared virtual environments more convenient workspaces, a new generation of desktop display technology is needed.