Two technologies were combined to demonstrate a compact, foveated, occlusive Mixed Reality (MR) headset. Waveguide displays created the central, high-resolution Field of View (FOV), and a Heterogeneous Multi-Lens Array (HMLA) based display formed the periphery. A HoloLens 2, employing transparent waveguide displays, served as the foveated display, covering a horizontal FOV of 43° with a resolution of 47 Pixels Per Degree (ppd). Each peripheral display used a custom-made HMLA and an off-the-shelf OLED microdisplay, with each lens of the HMLA acting as a small VR display. Collectively, the array lenses tiled both the eye box and the FOV to create a non-rectangular FOV of 26° × 26° and a large eye box, at a resolution of 5 ppd. Since the waveguide headset has see-through optics, the two peripheral displays were attached in front of its visor so that the two images merged. The peripheral display had a one-degree overlap with the foveated display, making the total FOV of the hybrid headset 93°. The system's optics were less than 5 mm thick, though the experimental setup was thicker due to optomechanical and industrial design constraints. The peripheral display's software was integrated with that of the central display, creating a cohesive experience. The latency mismatch between the (faster) central and the (slower) peripheral displays was compensated by using predictive algorithms for head movement. A qualitative user study at the end of the project confirmed that the experience was improved, showing significantly reduced neck strain and increased comfort.
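The abstract notes that head-motion prediction is used to bridge the latency gap between the central and peripheral displays. The sketch below illustrates one common form of such prediction, constant-angular-velocity extrapolation of the head orientation; the function names, the 30 ms latency figure, and the gyro values are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of constant-angular-velocity head-pose prediction, the kind of
# predictive algorithm that can compensate the latency gap between a faster
# central display and a slower peripheral display. All names and numbers are
# illustrative assumptions, not the authors' implementation.

import numpy as np


def quat_multiply(q1, q2):
    """Hamilton product of two quaternions given as (w, x, y, z)."""
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return np.array([
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
    ])


def predict_orientation(q_now, angular_velocity, latency_s):
    """Extrapolate head orientation forward by latency_s seconds.

    q_now            -- current head orientation quaternion (w, x, y, z)
    angular_velocity -- gyro reading in rad/s, body frame (x, y, z)
    latency_s        -- extra display latency to compensate, in seconds
    """
    speed = np.linalg.norm(angular_velocity)
    angle = speed * latency_s
    if angle < 1e-9:
        return q_now  # effectively no rotation over the prediction horizon
    axis = angular_velocity / speed
    # Incremental rotation assuming the angular velocity stays constant.
    dq = np.concatenate(([np.cos(angle / 2.0)], np.sin(angle / 2.0) * axis))
    q_pred = quat_multiply(q_now, dq)
    return q_pred / np.linalg.norm(q_pred)


if __name__ == "__main__":
    q_identity = np.array([1.0, 0.0, 0.0, 0.0])
    gyro = np.array([0.0, 1.5, 0.0])   # yawing at ~86 deg/s (assumed)
    extra_latency = 0.030              # 30 ms peripheral lag (assumed)
    print(predict_orientation(q_identity, gyro, extra_latency))
```

The idea is simply to render the peripheral image for the pose the head is expected to have when that image actually reaches the eye, rather than for the pose at render time.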
The metaverse, for better or worse, is the buzzword of the moment. But what will the metaverse actually look like? What is needed to make it a strong and effective ecosystem, beyond hardware, content and software, connectivity (5G), security, an understanding of cultural and ethical impacts, and enterprise applications? What is myth about the metaverse, and what will be reality? Join this panel and hear what these leaders have to say.
Augmented, Mixed, and Virtual Reality (AR/MR/VR) headsets, as well as smart glasses, have the potential to revolutionize how we work, communicate, travel, learn, teach, shop, and are entertained [1],[2]. An MR headset places virtual content into the user's view of the real world, either via an optical see-through mode (AR/MR) or a video pass-through mode (VR/MR). Today, the return on investment for MR use has been demonstrated widely in the enterprise and defense sectors, but only partially in the consumer sector. To meet the high market expectations, especially in the emerging consumer segment, several challenges must be addressed across a variety of fields: optics, displays, imaging, sensing, rendering, and MR content. In each of these fields, artificial intelligence and deep learning techniques can offer important gains in optimizing key performance criteria, allowing systems to perform better on more constrained resources.