This PDF file contains the front matter associated with SPIE Proceedings Volume 12024, including the Title Page, Copyright information, Table of Contents and Conference Committee list.
As market interest in augmented reality (AR) displays increases, research on compact and lightweight optical system design using holographic optical elements (HOEs) is also actively conducted. HOEs play an essential role in AR optics as image combiners that provide users with combined real and virtual scenes, owing to their transparency and high optical selectivity. However, optical systems using HOEs suffer from aberrations such as astigmatism. Compensating these aberrations while securing a wide eye-box and viewing angle to provide users with a convenient and immersive viewing experience remains a challenge for AR optics using HOEs. This paper presents studies conducted to correct aberrations, expand the eye-box, and broaden the viewing angle of AR optical systems using HOEs, such as head-up displays (HUDs) and near-eye displays. For the HUD, we propose a method to correct the aberrations: two freeform mirror shapes are designed using commercial ray-tracing software to minimize the aberrations introduced by the HOE attached to a flat windshield. Combined with image pre-compensation, the proposed system provides aberration-free, consistent images over its entire eye-box. For near-eye displays, an eye-box expansion technique using a multiplexed HOE is introduced.
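The image pre-compensation step can be illustrated with a minimal sketch: if the optics apply a known forward distortion D, the displayed image is pre-distorted as I_display(p) = I_target(D(p)) so that the distortion cancels for the viewer. The single-term radial model and all parameters below are hypothetical assumptions for illustration, not the paper's windshield/HOE distortion model.

```python
import numpy as np

def radial_distortion(x, y, k):
    """Hypothetical forward distortion of the optics: a single-term
    radial model about the image centre, in normalised coordinates."""
    s = 1 + k * (x ** 2 + y ** 2)
    return x * s, y * s

def precompensate(img, k):
    """Pre-distort the displayed image so the optics' forward
    distortion D cancels out: I_display(p) = I_target(D(p))."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # normalised coordinates in roughly [-1, 1]
    xn = (xs - w / 2) / (w / 2)
    yn = (ys - h / 2) / (h / 2)
    xd, yd = radial_distortion(xn, yn, k)
    # back to pixel indices, nearest-neighbour sampling
    xi = np.clip(np.round(xd * w / 2 + w / 2).astype(int), 0, w - 1)
    yi = np.clip(np.round(yd * h / 2 + h / 2).astype(int), 0, h - 1)
    return img[yi, xi]

img = np.arange(64 * 64, dtype=float).reshape(64, 64)
pre = precompensate(img, 0.1)  # image to send to the display
```

With k = 0 (no distortion) the warp reduces to the identity; a real system would fit D from the ray-traced or measured aberrations of the HOE and mirrors.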
In this paper, a fast hologram generation method based on optimal segmentation of sub-computer-generated holograms (sub-CGHs) is proposed. When using the point light source (PLS) model for computer-generated holography (CGH), the 3D object is discretized into individual object points, which are assumed to be independent ideal sources. A sub-CGH can then be obtained by calculating the diffracted light field distribution of each ideal PLS, and the final hologram can be generated by further processing the sub-CGHs. In the proposed method, the contribution of each pixel in the sub-CGH generated by each ideal PLS to the final reconstructed image is calculated, and each sub-CGH is divided into optimized diffraction areas (ODAs) and invalid diffraction areas. Only the ODAs of each sub-CGH are then pre-calculated and saved for generating the final hologram. Because the ODAs are much smaller than the sub-CGHs, the hologram generation speed is greatly improved. The proposed method can be used to optimize most PLS model-based hologram generation methods. It is applied here to the novel look-up table (NLUT) method and the wavefront recording plane (WRP) method, and both achieve a good acceleration effect. In the optimization of the NLUT method, the sub-CGH database of the traditional NLUT method is replaced by an ODA database, and the final hologram is generated by retrieving and superimposing all the ODAs. In the optimization of the WRP method, the WRP is calculated using the ODAs instead of the sub-CGHs; the final hologram is then generated by Fresnel diffraction from the WRP to the hologram plane. With the proposed method, the calculation speed of both the NLUT and WRP methods is significantly improved while the quality of the reconstructed image is unaffected.
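The core idea, restricting each point source's fringe to the region that the hologram pixel pitch can still resolve, can be sketched as follows. The cropping rule (maximum diffraction angle set by the pitch) is a standard simplification standing in for the paper's contribution-based ODA criterion, and all parameters are illustrative assumptions.

```python
import numpy as np

# Illustrative parameters (assumptions, not the paper's values)
wavelength = 532e-9  # m
pitch = 8e-6         # hologram pixel pitch, m
N = 512              # hologram resolution (N x N pixels)

def oda_radius(z):
    """Half-width in pixels of the optimized diffraction area (ODA)
    of a point source at depth z: beyond the maximum diffraction
    angle supported by the pixel pitch the fringe is not resolvable,
    so that region is treated as invalid."""
    theta_max = np.arcsin(wavelength / (2 * pitch))
    return int(np.ceil(z * np.tan(theta_max) / pitch))

def point_fringe(z):
    """Paraxial Fresnel fringe of one ideal point source, evaluated
    only inside its ODA instead of over the full hologram plane."""
    r = oda_radius(z)
    ys, xs = np.mgrid[-r:r + 1, -r:r + 1]
    rho2 = (xs * pitch) ** 2 + (ys * pitch) ** 2
    return np.exp(1j * np.pi * rho2 / (wavelength * z)), r

def accumulate(points):
    """Superimpose the ODA of every object point (the final-hologram
    step); points are chosen here so the ODAs stay in bounds."""
    H = np.zeros((N, N), dtype=complex)
    for x0, y0, z in points:
        fringe, r = point_fringe(z)
        H[y0 - r:y0 + r + 1, x0 - r:x0 + r + 1] += fringe
    return H

H = accumulate([(256, 256, 0.05), (230, 280, 0.05)])
```

The speed-up comes from the ODA being far smaller than the full hologram, so each point contributes O(r²) rather than O(N²) work.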
Computer-generated holography (CGH) promises to deliver genuine, high-quality visuals at any depth. We argue that combining CGH with perceptually guided graphics can soon lead to practical holographic display systems that deliver perceptually realistic images. We propose a new CGH method called metameric varifocal holograms. Our method generates images only at the user's focus plane, while the displayed images remain statistically correct and indistinguishable from the actual targets across peripheral vision (metamers). A user observing our holograms therefore perceives a high-quality image at their gaze location, while the image in the remaining peripheral parts follows a statistically correct trend. We demonstrate our differentiable CGH optimization pipeline on modern GPUs, and we support our findings with a display prototype. Our method paves the way towards realistic visuals free from classical CGH problems such as speckle noise and poor visual quality.
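The paper's pipeline is a differentiable, perceptually weighted optimization; as a minimal related sketch, the classic Gerchberg-Saxton iteration below computes a phase-only hologram for a far-field target amplitude. It is a much simpler stand-in for (not an implementation of) the metameric varifocal method, with a toy target chosen for illustration.

```python
import numpy as np

def gerchberg_saxton(target_amp, iters=50, seed=0):
    """Phase-only hologram for a far-field target amplitude via the
    classic Gerchberg-Saxton iteration: propagate, impose the target
    amplitude, back-propagate, keep only the phase."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0, 2 * np.pi, target_amp.shape)
    for _ in range(iters):
        field = np.fft.fft2(np.exp(1j * phase))            # propagate
        field = target_amp * np.exp(1j * np.angle(field))  # impose amplitude
        back = np.fft.ifft2(field)                         # back-propagate
        phase = np.angle(back)                             # phase-only constraint
    return phase

# toy target: a bright square on a dark background
target = np.zeros((64, 64))
target[24:40, 24:40] = 1.0
phi = gerchberg_saxton(target)
recon = np.abs(np.fft.fft2(np.exp(1j * phi)))  # simulated replay field
```

A differentiable pipeline replaces the hard amplitude constraint with a (perceptual) loss and updates the phase by gradient descent, which is what enables the metameric, gaze-contingent weighting described above.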
We analyze the light efficiency of a virtual reality (VR) system from the display panel to the eyebox, and the brightness non-uniformity caused by the imaging process of the VR lens. Two types of light engines, OLED and LCD, are evaluated. For the OLED panel, we optimize the microcavity structure to suppress the image non-uniformity while keeping a high optical efficiency. For the LCD, we propose a 2D patterned prism film that locally modulates the radiation pattern to optimize the light collection efficiency while minimizing the vignetting effect. The proposed optimization method provides valuable guidelines for designing next-generation display devices for VR headsets.
Autonomous cars should communicate with other road users via exterior displays to increase safety; examples include visualizing the driving mode and informing a pedestrian waiting at a crosswalk that the autonomous car will stop. Such displays must be sunlight readable and smoothly integrated into the front and rear of a car. We prototyped a full-scale mock-up to evaluate RGB LEDs and e-paper regarding optical performance. The display size of 80 cm × 40 cm was chosen according to EN 12966. We determined a reasonable LED pixel pitch to be 6 mm, about half of the minimum for variable traffic signs. RGB LEDs (~100 W/m²) as well as black/white and color e-paper (zero power) were measured under simulated ambient light and judged by subjects regarding legibility, color inversion, and color perception. The measurements show a large contrast ratio and gamut for the LED displays. The reflectance of the e-paper was 40% (color) and 50% (monochrome), yielding a contrast ratio of about 10:1. The reflectance depends strongly on whether specular reflections are included or excluded in the measurement. However, the color gamut of the e-paper was measured as small and rated as poor. RGB LEDs, with attention-grabbing blinking, high luminance (5,000 cd/m²), high contrast ratio, and large gamut, were rated best. The best exterior application for e-paper is a digital license plate. The foreground of the e-paper should be black or colored and the background white; however, this is challenging for car design.
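The sunlight-readability comparison rests on standard photometric relations: a Lambertian reflective surface has luminance L = R·E/π under illuminance E, and an emissive display's ambient contrast ratio is (L_on + L_ambient)/L_ambient. The sketch below reproduces the reported e-paper contrast and estimates the LED ambient contrast under direct sunlight; the 5% front-surface reflectance assumed for the LED panel is hypothetical, not a measured value from the study.

```python
import math

def reflected_luminance(E_lux, R):
    """Luminance of a Lambertian surface with reflectance R under
    illuminance E: L = R * E / pi  (cd/m^2)."""
    return R * E_lux / math.pi

def emissive_acr(L_on, E_lux, R_panel):
    """Ambient contrast ratio of an emissive display whose front
    surface reflects a fraction R_panel of the ambient light."""
    L_amb = reflected_luminance(E_lux, R_panel)
    return (L_on + L_amb) / L_amb

E_sun = 100_000  # lx, direct sunlight (illustrative)

# e-paper: 50% white reflectance and ~10:1 intrinsic contrast
# imply a ~5% dark-state reflectance; contrast is ambient-independent.
epaper_cr = reflected_luminance(E_sun, 0.50) / reflected_luminance(E_sun, 0.05)

# RGB LED at 5,000 cd/m2 with an assumed 5% front-surface reflectance.
led_acr = emissive_acr(5_000, E_sun, 0.05)
```

The numbers illustrate the trade-off in the study: the reflective e-paper keeps its ~10:1 contrast in any ambient light, while the emissive LED display needs very high luminance to stay readable against sunlight reflected off its front surface.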
Light field displays offer an unparalleled visual 3D experience to viewers but suffer from low resolution, a low number of views, and a small field of view (FOV). We propose a light field display design based on a laser-lit backlight, a liquid crystal panel, and a beam-splitting diffractive layer to overcome these shortcomings. The backlight developed by VitreaLab contains an integrated photonic circuit embedded in glass, which distributes laser light over an array of millions of tightly confined single-mode laser beams that illuminate, one by one, the subpixels of the liquid crystal panel. Each beam is split into multiple beams by the diffractive layer, directing light into precise viewing positions. The eyes of the (single) viewer are dynamically tracked, and only the correct viewing positions are displayed. We envisage a laser light field display with extremely low cross-talk (X < 0.1%), high view-zone brightness uniformity (K > 95%), and smooth motion parallax with N > 200 views, all while using a low trade-off factor (4×) between resolution and number of views. This means that each view has a 4× lower resolution than the base panel, a much better trade-off than in conventional light field displays, where this factor can reach 100×. Furthermore, a variable viewing distance is supported over a wide field of view (FOV > 100°).
OLED microdisplays have entered several professional and consumer near-to-eye visualization devices, such as VR/AR headsets, assisted-reality devices, and electronic viewfinders. Head-, helmet-, or eyeglass-frame-mounted displays, smart glasses, and visors provide user information for human-machine interaction, situational awareness, personal safety, remote support, or training. Display architecture and parameters such as screen size, pixel density, resolution, color range, and auxiliary functions can be varied to achieve high-resolution extended full-HD for VR/AR or ultra-low-power options for long battery life in true wearables. This report focuses on the design and characteristics of newly developed ultra-low-power and slim-form-factor OLED microdisplay devices featuring a <0.2" screen diagonal, QVGA resolution at a pixel density of <2150 ppi, and monochrome as well as color versions.
Virtual reality (VR) systems bring fantastic immersive experiences to users in multiple fields. However, the performance of VR displays is still limited by several factors, including inadequate resolution, noticeable chromatic aberration, and low optical efficiency. The Pancharatnam-Berry phase optical element (PBOE) exhibits several advantages, such as high efficiency, a simple fabrication process, compactness, and light weight, making it an excellent candidate for VR systems. We have demonstrated that by using three kinds of PBOEs, the above-mentioned problems can be solved satisfactorily. The first PBOE is the PB grating/deflector (PBD), which deflects left-handed and right-handed circularly polarized beams in two opposite directions. Therefore, if we insert a PBD into the VR system and carefully design the deflection angle, it can optically separate each display pixel into two virtual pixels and superimpose them to obtain a higher pixel density. In this way, the pixels per inch (PPI) of the original display can be doubled. The second PBOE is the PB lens (PBL). As a diffractive optical lens, it has chromatic dispersion opposite to that of a refractive lens. When a PBL with an appropriate focal length is hybridized with a refractive Fresnel lens, the system's chromatic aberration can be significantly reduced. The third PBOE is the multi-domain PB lens, in which the effective focal length of each domain can be customized independently. This multi-domain PBL can function as a diffractive deflection film in the VR system. If such a diffractive deflection film is combined with a directional backlight, étendue waste can be reduced prominently, and the optical efficiency can be more than doubled in both Fresnel and "pancake" VR systems. These ultrathin PBOEs will find promising applications in future VR systems.
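The chromatic compensation by the PBL can be sketched with a simplified thin-lens model: a diffractive lens has optical power proportional to wavelength (dispersion opposite in sign to glass), so its power can be chosen so that the combined refractive-plus-diffractive power is equal at the F and C lines, the classic hybrid-achromat condition. The Cauchy glass coefficients below are illustrative, not values from the paper.

```python
import numpy as np

# Simple Cauchy model for a BK7-like glass (illustrative coefficients):
# n(lam) = A + B / lam^2, lam in micrometres.
A, B = 1.5046, 0.0042

def n(lam_um):
    return A + B / lam_um ** 2

lam0, lamF, lamC = 0.5876, 0.4861, 0.6563  # d, F, C lines (um)

def phi_refractive(lam):
    """Refractive thin-lens power scales with (n - 1); normalised so
    the power at the d line is 1 (arbitrary units)."""
    return (n(lam) - 1) / (n(lam0) - 1)

# Diffractive power is proportional to wavelength; choose its d-line
# power so the combined powers at the F and C lines coincide.
phi_d0 = -(phi_refractive(lamF) - phi_refractive(lamC)) * lam0 / (lamF - lamC)

def phi_hybrid(lam):
    return phi_refractive(lam) + phi_d0 * lam / lam0

lams = np.array([lamF, lam0, lamC])
shift_refractive = np.ptp(phi_refractive(lams))          # power spread, glass only
shift_hybrid = np.ptp([phi_hybrid(l) for l in lams])     # power spread, hybrid
```

By construction the hybrid power is identical at the F and C lines, and its residual spread across the visible band is far smaller than that of the refractive lens alone, which is the effect exploited by pairing the PBL with the Fresnel lens.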
High-radiance red-emitting light sources are required for several laser applications, such as flying-spot displays or high-resolution microscopy. As many such applications are moving out of the lab into industrial environments, there is high demand for small, efficient, and reliable laser sources, for which semiconductor lasers are preferred. The red-emitting tapered diode lasers presented here emit up to 1 W of optical power with a nearly diffraction-limited beam at 635 nm. A preliminary lifetime test yielded more than 2000 h at a power level of 500 mW.
Augmented reality (AR) has attracted great attention from academia and industry for its potential applications in diversified fields. Several approaches have been proposed to eliminate the accommodation-vergence conflict in AR display systems, which causes 3D visual fatigue after prolonged use. This paper presents three types of true three-dimensional (3D) AR display techniques. The first is a multi-plane volumetric display based on liquid crystal (LC) devices: using fast-switching polymer-stabilized LC (PSLC) scattering films and polarization-selective cholesteric LC (CLC) reflective films, respectively, we realized magnified 3D images augmented onto the real world. The second is a holographic display based on two holographic optical elements (HOEs) functioning simultaneously as an optical combiner, an ocular lens, and a beam expander. For the third technique, we proposed two super-multi-view display approaches based on polarizing glasses and geometric phase optical elements (GPOEs), respectively.
This report proposes a three-dimensional/two-dimensional (3D/2D) switchable augmented-reality display system using a liquid crystalline lens array and an electrical polarizer. A depth camera connected to the proposed system acquires the 3D or 2D information of the real objects. The dual-function liquid crystalline lens array switches its function according to the polarizing direction of the electrical polarizer. The overall procedure of the proposed system is as follows: the depth camera captures the depth/color image or only the color image according to the state of the polarizer, and the 3D or 2D images are displayed accordingly on the augmented-reality display system. This allows the 3D and 2D modes to be switched automatically. In the 2D mode, the captured color image of a real object is displayed directly. In the 3D mode, the elemental image array is generated from the depth and color images and reconstructed as a 3D image by the liquid crystalline microlens array of the proposed system. Although the proposed system cannot achieve real-time display in the 3D mode, the direction-inversed computation method generates the elemental image arrays of the real object within a reasonably short time.
Augmented reality (AR) devices such as head-up displays (HUDs) have burst into our lives, especially in the automotive industry. AR HUDs should display 3D images within a wide field of view (FoV) to provide full immersion. However, HUDs with a wide FoV built on the conventional mirror-based architecture occupy significant dashboard space and cause overheating of the display. These factors limit the integration of such wide-FoV HUDs into vehicles. Instead, we propose an AR HUD with a wide FoV based on a thin waveguide. A key feature of our display is the ability to deliver 3D virtual images while maintaining a small system volume. Our approach to combining both benefits is based on integrating novel units into a pupil-replication waveguide. First, a multi-view picture generation unit (MV-PGU) creates autostereoscopic 3D content within the same FoV. The content is then transmitted through the waveguide in the conventional pupil-replication manner. Finally, a thin optical module, which we call the multi-view eyebox formation unit (MV-EFU), separates the images for the corresponding views based on their distinctive parameters. Moreover, we investigate the possibility of extending the FoV by choosing optimal parameters for the invented units. We validate our concept by ray-tracing simulation of a developed full-color display with an FoV of 20° × 7°. Additionally, we assemble a prototype with reduced display specifications to verify the approach experimentally.
Mid-air display (MAD) technology is currently attracting practitioners' interest, owing to potential applications in consumer products with embedded floating-image displays, such as smartphones, smartwatches, and docking stations, and as part of new holographic user interfaces for safe and contactless control. Problems to solve on the way to a compact and light-efficient MAD include a small field of view, small image size, low image resolution, low image contrast, absence of image magnification, and a low perceived sense of depth. To overcome these challenges, the authors propose a MAD based on a DMD pico-projector and a DOE waveguide with a positive Fresnel lens placed near the out-coupling aperture of the waveguide. The developed MAD forms a real image with a positive relief from the display surface, so that the viewer perceives this image floating in front of it, at the back focal plane of the Fresnel lens. For a mid-air image with a diagonal of ≥1 inch and an image relief of 57 mm, the horizontal field of view was 35 degrees, with an image brightness of 100 cd/m². The proposed mid-air image display has a compact form factor with dimensions of 100 mm × 50 mm × 3 mm, excluding the DMD pico-projector. It can be used in consumer products to provide a new kind of experience, including contactless holographic user interaction.
We have previously proposed a retinal-projection-type super multi-view head-mounted display (HMD) that provides natural 3D images to the observer. This previous HMD induces the accommodation of the human eye by using the super multi-view 3D display technique, so the observer can view 3D images. However, the depth range of 3D images driven by accommodation is limited to about 2 meters. To overcome this problem, we propose a binocular stereoscopic HMD that pairs two of the previously proposed HMDs. By pairing the HMDs, convergence, another factor of 3D vision, is activated and the depth range can be expanded. The proposed HMD consists of holographic optical elements and DMDs used as high-speed display devices and optical shutters. Accommodation is induced by synchronizing the optical shutter with the display device and projecting different parallax images onto the retina by time multiplexing. Convergence of the eyes is also induced because the images are projected onto a different position on the retina of each eye. To verify that the proposed HMD expands the depth range, we built a prototype 3D HMD on an optical bench. Experiments with the prototype system confirmed that the proposed HMD can expand the depth range. Since the proposed HMD exploits more factors of 3D vision than the previous HMD, the observer can view more distant 3D images. In addition, the accommodation-convergence conflict, a well-known problem of binocular stereoscopic HMDs, is resolved by the proposed HMD.
Tablets and smartphones are commonly used in both portrait and landscape modes. To apply 3D displays in those devices, the angle of the parallax barrier needs to match the screen orientation. Previously, this required an active parallax barrier switchable between two barrier patterns, complicating the display configuration. Therefore, to simplify the configuration of 3D tablets and smartphones, we propose a 3D display with a fixed parallax barrier that enables the observation of high-quality 3D images in both portrait and landscape modes. First, we present a method for designing a fixed parallax barrier that can be used in both modes with low crosstalk and without moiré. Next, we present a method for rendering stereo images according to the screen orientation and the viewer's position. In addition, an eye-tracking system with a 3D camera determines the screen orientation and changes the rendering method accordingly, so that 3D images can always be observed in both modes. To verify the effectiveness of the proposed method, we constructed a prototype system using a tablet with a 3D camera and a parallax barrier slanted at 45 degrees, and confirmed that the system can display high-quality 3D images with a crosstalk ratio of less than 4% in both portrait and landscape modes.
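Why a 45-degree slant suits both orientations can be illustrated with a toy view-index map: along the slant direction the view index is constant, giving a map of the form (x + y) mod N, which is symmetric under transposition and maps onto itself (up to a swap of the two views) under a 90-degree rotation. This two-view model is purely illustrative and is not the paper's exact barrier or rendering design.

```python
import numpy as np

def view_map(h, w, n_views=2):
    """View index assigned to each pixel by a barrier slanted at 45
    degrees: the index is constant along the slant, so the map is
    (x + y) mod n_views (illustrative two-view model)."""
    ys, xs = np.mgrid[0:h, 0:w]
    return (xs + ys) % n_views

m = view_map(8, 8)
# Transposing m (exchanging rows and columns, i.e. portrait vs
# landscape) leaves the pattern unchanged; a physical 90-degree
# rotation yields the complementary pattern 1 - m, i.e. the same
# geometry with the left/right views swapped, which the
# orientation-aware rendering accounts for.
```

This symmetry is what lets a single fixed barrier serve both orientations, with the rendering side (not the barrier) adapting to the detected orientation and viewer position.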