Conventionally there exist two major methods to create mosaics in 3D videos. One is to duplicate the area of mosaics
from the image of one viewpoint (the left view or the right view) to that of the other viewpoint. This method, which is
not capable of expressing depth, cannot give viewers a natural perception in 3D. The other method is to create the
mosaics separately in the left view and the right view. With this method the depth is expressed in the area of mosaics, but
3D perception is not natural enough. To overcome these problems, we propose a method to create mosaics by using a
disparity map. In the proposed method the mosaic of the image from one viewpoint is made with the conventional
method, while the mosaic of the image from the other viewpoint is made based on the data of the disparity map so that
the mosaic patterns of the two images can give proper depth perception to the viewer. We confirm through subjective experiments using a static image and two videos that the proposed mosaic pattern based on the disparity map gives the viewer more natural depth perception.
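Below is a minimal illustrative sketch (not the authors' implementation) of how a mosaic region in one view can be reproduced in the other view by shifting each block according to a disparity map; the array names, block size, and per-block mean disparity are assumptions for clarity.

```python
import numpy as np

def mosaic_block(img, y0, x0, size):
    """Replace one square block of the image with its mean color."""
    y1, x1 = min(y0 + size, img.shape[0]), min(x0 + size, img.shape[1])
    if y0 < 0 or x0 < 0 or y0 >= y1 or x0 >= x1:
        return
    img[y0:y1, x0:x1] = img[y0:y1, x0:x1].mean(axis=(0, 1), keepdims=True)

def mosaic_stereo(left, right, disparity, region, block=16):
    """Mosaic `region` = (y, x, h, w) in the left view, then mosaic the
    disparity-shifted blocks in the right view so both views share depth."""
    y, x, h, w = region
    for by in range(y, y + h, block):
        for bx in range(x, x + w, block):
            mosaic_block(left, by, bx, block)
            # Shift the block horizontally by its (rounded) mean disparity so
            # the mosaic pattern lands on the corresponding right-view area.
            d = int(round(float(disparity[by:by + block, bx:bx + block].mean())))
            mosaic_block(right, by, bx - d, block)
    return left, right
```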
We have been developing an autostereoscopic display with a directional backlight using a Fresnel lens array. The system was originally composed of a dot matrix light source, a convex lens array, and an LCD panel. We have previously proposed methods to achieve uniform brightness and to expand the viewing zone free from crosstalk. Uniform brightness is achieved by adding a vertical diffuser between the convex lens array and the LCD panel, and the crosstalk-free viewing zone is expanded by attaching a large-aperture convex lens onto the surface of the convex lens array. A drawback remains, however: the viewing angle with homogenized brightness is narrow because the peripheral part of the display region is darker than the central part. In this paper two methods to enhance the viewing angle with homogenized brightness are proposed. The first is to place two mirror boards horizontally on the upper and lower ends between the convex lens array and the LCD panel. The second is to place the large-aperture convex lens just behind the LCD panel. The first method is expected to reflect the directional light vertically and make the upper and lower parts of the display region brighter, which enhances the viewing angle vertically. The second method is expected to let the directional light from the light source be utilized more efficiently, which enhances the viewing angle both horizontally and vertically.
KEYWORDS: Cameras, Imaging systems, Integral imaging, 3D displays, Prototyping, Calibration, 3D image processing, Clouds, 3D volumetric displays, Video
A real-time, wide-field-of-view image pickup system for coarse integral volumetric imaging (CIVI) is realized. This system applies the CIVI display to live-action videos generated by real-time 3D reconstruction. By using multiple RGB-D cameras from different directions, a complete surface of the objects and a wide field of view can be shown in
our CIVI displays. A prototype system is constructed and it works as follows. Firstly, image features and depth data are
used for a fast and accurate calibration. Secondly, 3D point cloud data are obtained by each RGB-D camera and they are
all converted into the same coordinate system. Thirdly, multiview images are constructed by perspective transformation
from different viewpoints. Finally, the image for each viewpoint is divided depending on the depth of each pixel for a
volumetric view. The experiments show better results than using only one RGB-D camera, and the whole system works in real time.
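The following sketch illustrates two steps of such a pipeline under simplified assumptions: merging calibrated RGB-D point clouds into one world frame and assigning each projected pixel to the nearest depth layer for the volumetric view. The pinhole model, matrix conventions, and function names are illustrative, not the prototype's actual code.

```python
import numpy as np

def merge_clouds(clouds, extrinsics):
    """clouds: list of (N_i, 3) xyz arrays, one per RGB-D camera (camera coords).
    extrinsics: list of 4x4 camera-to-world matrices obtained from calibration."""
    merged = []
    for pts, T in zip(clouds, extrinsics):
        homo = np.c_[pts, np.ones(len(pts))]          # (N, 4) homogeneous points
        merged.append((T @ homo.T).T[:, :3])          # transform into world frame
    return np.vstack(merged)

def project_and_slice(points, view, fx, fy, cx, cy,
                      n_layers, z_near, z_far, width, height):
    """Project world points into one viewpoint (pinhole model) and assign each
    covered pixel the index of the depth layer nearest to its z value."""
    cam = (view @ np.c_[points, np.ones(len(points))].T).T   # world -> view coords
    z = cam[:, 2]
    keep = (z > z_near) & (z < z_far)
    u = np.round(fx * cam[keep, 0] / z[keep] + cx).astype(int)
    v = np.round(fy * cam[keep, 1] / z[keep] + cy).astype(int)
    layer = np.round((z[keep] - z_near) / (z_far - z_near) * (n_layers - 1)).astype(int)
    layer_map = np.full((height, width), -1)                 # -1 means no point
    inside = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    layer_map[v[inside], u[inside]] = layer[inside]
    return layer_map
```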
A 4-view parallax barrier is considered a practical way to solve the viewing zone issue of the conventional 2-view parallax barrier. To realize a flickerless 4-view system that provides full display resolution to each view, quadruple time-division multiplexing with a refresh rate of 240 Hz is necessary. Since 240 Hz displays are not yet easily available, extra effort is needed to reduce flicker when running at a lower refresh rate. In our previous work, we realized a prototype with less flicker at 120 Hz by introducing a 1-pixel aperture and incorporating anaglyph into quadruple time-division multiplexing, though either stripe noise or crosstalk noise stands out. In this paper, we introduce a new type of time-division multiplexing parallax barrier based on primary colors, where the barrier pattern is laid out as “red-green-blue-black (RGBK)”. Unlike other existing methods, changing the order of the element pixels in the barrier pattern makes a difference in this system. Among the possible alignments, “RGBK” is considered to show less crosstalk, while “RBGK” may show less stripe noise. We carried out a psychophysical experiment and found positive results as expected, which show that this new type of time-division multiplexing barrier presents more balanced images, with stripe noise and crosstalk simultaneously controlled at a relatively low level.
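A minimal sketch of how such a color barrier pattern could be generated is shown below; the element ordering follows the RGBK idea above, but the per-frame shift schedule and pixel granularity are simplified assumptions rather than the actual multiplexing scheme of the prototype.

```python
import numpy as np

# Element colors of the barrier: red, green, and blue slits plus an opaque
# (black) element.
ELEMENTS = np.array([[255, 0, 0],     # R
                     [0, 255, 0],     # G
                     [0, 0, 255],     # B
                     [0, 0, 0]],      # K (opaque)
                    dtype=np.uint8)

def barrier_frame(width, height, frame_index, order=(0, 1, 2, 3)):
    """Return an RGB barrier image whose columns repeat the chosen element
    order (RGBK by default; RBGK would be (0, 2, 1, 3)), cyclically shifted
    by the frame index for time-division multiplexing."""
    cols = np.arange(width)
    idx = np.array(order)[(cols + frame_index) % 4]   # element index per column
    return np.tile(ELEMENTS[idx][None, :, :], (height, 1, 1))
```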
When a directional backlight to each eye alternates synchronously with the alternation of left-eye and right-eye images on the display panel, the viewer can see a stereoscopic image without wearing special goggles. One way to realize a directional backlight is to place a convex lens array in front of dot matrix light sources to generate collimated light. To implement this method, however, defocusing and field curvature of the lens should be taken into account. The viewing zone of an autostereoscopic display with a directional backlight using a convex lens array is analyzed based on optical simulations.
In this paper, we propose an autostereoscopic display system based on active anaglyph parallax barrier, which provides
four viewpoints in full resolution. This system is realized with 120 Hz displays and requires no special optical devices. With the four viewpoints, a smoother multi-view experience can be achieved when head tracking is involved. In addition, this system can be used for two-viewpoint autostereoscopy that allows more freedom of movement than a conventional parallax barrier system. We made a prototype system with two 120 Hz displays and successfully showed
four viewpoints.
KEYWORDS: Distortion, Integral imaging, Lenses, 3D image processing, Optical simulations, Prototyping, 3D volumetric displays, 3D displays, Fresnel lenses, Imaging systems
Coarse integral volumetric imaging (CIVI) combines multiview and volumetric display solutions and presents undistorted floating 3D images by correcting the distortion of the volumetric image for each view. In conventional CIVI with a limited viewing angle, distortions of the image planes can be approximated as parabolic in the depth direction, while those in the horizontal and vertical directions can be ignored. When the viewing angle becomes wider, however, this approximation cannot realize presentation of an undistorted image. To cope with the strong distortions, the method that the authors propose calculates the z-coordinate of the generated real image in detail and depicts each pixel on the display panel of the corresponding depth. Distortions in the horizontal and vertical directions are also corrected by texture mapping. To attain precise correction in the vertical, horizontal, and depth directions, the optical paths of light rays between the display panel and each viewpoint are calculated with an optical simulator. Color aberration can also be corrected by mapping red, green, and blue textures separately based on the result of the optical simulation.
KEYWORDS: Time division multiplexing, Eye, Glasses, LCDs, Stereoscopy, Integral imaging, Camera shutters, Optical filters, 3D image processing, Multiplexing
In the present paper we propose a time-division multiplexing anaglyph method to realize full color
stereoscopy with little flicker at the low refresh rate of 60 Hz, which is compatible with the conventional 2D
displays. Because of the low refresh rate, applying the time-division multiplexing method with shutter glasses to conventional displays results in severe flicker. To overcome this problem, we propose a time-division multiplexing anaglyph method, where the green component of the right-eye image is shown to the right eye and the red and blue components of the left-eye image are shown to the left eye at odd frames, while the red and blue components of the right-eye image are shown to the right eye and the green component of the left-eye image is shown to the left eye at even frames. We carry out an experiment in which subjects view time-division multiplexing anaglyph images, and the result shows that flicker can be reduced to an acceptable level by the proposed
method. The proposed method can also be applied to widening the viewing angle of time-division
multiplexing integral imaging.
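The frame composition described above can be illustrated with the following sketch, assuming RGB images and color-filter glasses that pass green to one eye and red/blue to the other in alternate frames; it is a simplified model of the proposed method, not the authors' code.

```python
import numpy as np

def anaglyph_frame(left, right, frame_index):
    """Compose one displayed frame from left/right (H, W, 3) RGB images."""
    out = np.zeros_like(left)
    if frame_index % 2 == 1:          # odd frame
        out[..., 1] = right[..., 1]   # green from the right-eye image
        out[..., 0] = left[..., 0]    # red from the left-eye image
        out[..., 2] = left[..., 2]    # blue from the left-eye image
    else:                             # even frame
        out[..., 1] = left[..., 1]    # green from the left-eye image
        out[..., 0] = right[..., 0]   # red from the right-eye image
        out[..., 2] = right[..., 2]   # blue from the right-eye image
    return out
```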
This paper proposes a high resolution integral imaging system using a lens array composed of non-uniform decentered
elemental lenses. One of the problems of integral imaging is the trade-off relationship between the resolution and the
number of views. When the number of views is small, motion parallax becomes strongly discrete to maintain the viewing
angle. The only conventional way to solve this problem is to use a finer lens array and a display panel with a finer pixel
pitch. In the proposed method a large display area is used to show a smaller and finer 3D image. To realize it, the elemental lenses should be smaller than the elemental images. To cope with the difference in size between the elemental images and the elemental lenses, the lens array is designed so that the optical centers of the elemental lenses are located at the centers of the elemental images, not at the centers of the elemental lenses. In addition, a new image rendering algorithm is developed so that an undistorted 3D image can be presented with a non-uniform lens array. The proposed design of lens
array can be applied to integral volumetric imaging, where display panels are layered to show volumetric images in the
scheme of integral imaging.
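As an illustration of the decentered layout, the following sketch computes, under the assumption of a one-dimensional array with uniform pitches, how far each lens's optical center must be offset from its geometric center so that it coincides with the center of the corresponding elemental image; the pitch values are hypothetical.

```python
def decenter_offsets(n_lenses, p_img, p_lens):
    """Return, for each elemental lens (indexed from the array center), the
    horizontal offset of its optical center from its geometric center."""
    mid = (n_lenses - 1) / 2.0
    offsets = []
    for i in range(n_lenses):
        k = i - mid                       # signed index from the array center
        image_center = k * p_img          # center of the matching elemental image
        lens_center = k * p_lens          # geometric center of the lens segment
        offsets.append(image_center - lens_center)   # grows toward the periphery
    return offsets

# Example: 9 lenses, 30 mm elemental images, 10 mm lens segments (hypothetical)
print(decenter_offsets(9, 30.0, 10.0))    # offsets increase linearly outward
```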
In this paper realization of precise depth perception using coarse integral volumetric imaging (CIVI) is discussed. CIVI
is a 3D display technology that combines multiview and volumetric solutions by introducing multilayered structure to
integral imaging. Since CIVI generates real images optically, optical distortion can cause distortion of 3D space to be
presented. To attain presentation of undistorted 3D space with CIVI, the authors simulate the optics of CIVI and propose
an algorithm to show undistorted 3D space by compensating the optical distortion on the software basis. The authors also
carry out psychophysical experiments to verify that vergence-accommodation conflict is reduced and depth perception of
the viewer is improved by combining multiview and volumetric technologies.
KEYWORDS: Distortion, Integral imaging, 3D image processing, Optical simulations, Prototyping, 3D volumetric displays, 3D displays, Volume rendering, Geometrical optics
Coarse integral volumetric imaging (CIVI) combines multiview and volumetric display solutions and presents undistorted floating 3D images by correcting the distortion of the volumetric image for each view. In conventional CIVI with a limited viewing angle, distortions of the image planes can be approximated as parabolic in the depth direction, while those in the horizontal and vertical directions can be ignored. When the viewing angle becomes wider, however, this approximation cannot realize presentation of an undistorted image. To cope with the strong distortions, the method the authors propose calculates the z-coordinate of the generated real image in detail and depicts each pixel on the display panel of the corresponding depth. Distortions in the horizontal and vertical directions are also corrected by texture mapping.
To attain precise correction in vertical, horizontal and depth directions, optical paths of light rays between the display
panel and each viewpoint are calculated with an optical simulator. Color aberration can also be corrected by mapping red,
green and blue textures separately based on the result of the optical simulation.
Coarse integral imaging (CII), where each elemental lens is large enough to cover far more pixels than the number of views, can show a clear floating 3D image when distortion is corrected. One of the major problems left to be solved for CII is suppression of the pseudo images that appear around the right image to be presented. In this paper we propose two methods to suppress pseudo images. We first propose the use of a lens array with a small F-number. When a lens array composed of elemental lenses with a small F-number is set in front of the display panel, pseudo images can be erased by total internal reflection at the outskirts of the large-aperture lens, because the angle of incidence of the light rays that generate pseudo images becomes larger. The second method we propose is the use of a lens array behind the display panel paired with a segmented backlight. When convex lenses are set in front of the backlight with a limited aperture, leakage of rays into adjacent elemental lenses can be avoided. Since the backlight area is reduced, this method can also reduce electric power consumption without diminishing the brightness of the right image.
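The total-internal-reflection argument behind the first method can be illustrated numerically with the following sketch, which compares hypothetical incidence angles with the critical angle of an assumed acrylic-to-air interface; the values are illustrative, not measurements from the prototype.

```python
import math

def critical_angle(n):
    """Critical angle (degrees) for a lens-material-to-air interface."""
    return math.degrees(math.asin(1.0 / n))

n_acrylic = 1.49                             # assumed refractive index
theta_c = critical_angle(n_acrylic)          # roughly 42 degrees

for incidence in (20.0, 35.0, 45.0, 55.0):   # hypothetical incidence angles
    fate = "total internal reflection" if incidence > theta_c else "transmitted"
    print(f"{incidence:5.1f} deg -> {fate}")
```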
This paper proposes an electronic version of coarse integral volumetric imaging (CIVI) display with wide
viewing angle. CIVI is a 3D display solution which combines multiview techniques based on integral
imaging with volumetric techniques using multilayer panels. Though CIVI has solved most of the major
problems of conventional 3D displays, it still has two shortcomings to be overcome. One is the difficulty in
realizing electronic display due to unavailability of electronic color display panels transparent enough to be
layered for volumetric imaging. The other is the limited viewing angle because of the aberration of lenses. As
for the former problem, the simplest way to attain an electronic version of CIVI is to use half mirrors to merge multiple images from different depths. Though a high-quality 3D image can be attained with this method, the system becomes large. To avoid a bulky mirror system and realize a compact size, the authors propose the layered use of a color panel and multiple monochrome panels to emulate a color volumetric display. To expand the viewing angle, the authors propose a display system where smaller CIVI display
components, each of which has little aberration, are connected so that each display plane faces toward the
center of the image optically generated.
This paper proposes new techniques to improve image quality of the coarse integral volumetric display.
Conventional volumetric displays can achieve natural 3D vision without conflict between binocular
convergence and focal accommodation, while they cannot express occlusion or gloss of the objects.
Multiview displays can express the latter while they cannot achieve the former. The coarse integral volumetric display can realize both natural 3D vision and expression of occlusion and gloss at the same time. Since real or virtual images of the display panels are formed in the coarse integral volumetric display, aberration of the image can become severe. Though the author has proposed an optical setup to remove the major aberration, further improvement is required to achieve better image quality. In this paper the author proposes DFD rendering for distorted image planes, which can realize natural connections between component images. The proposed method takes into account the distortion of the real/virtual image planes, and each 3D pixel is drawn on the two adjacent distorted image planes so that its brightness is in inverse proportion to the distance to each plane. The author also discusses proper selection of the component lenses to improve the connectivity of the image.
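The depth-fused drawing described above can be sketched as follows, assuming the common depth-fused weighting in which the plane nearer to the 3D point receives the larger share of brightness; the distorted plane shapes obtained from the optical simulation are reduced here to per-point plane depths.

```python
def dfd_weights(z_point, z_plane_a, z_plane_b):
    """Split unit brightness of one 3D point between the two adjacent planes."""
    d_a = abs(z_point - z_plane_a)
    d_b = abs(z_point - z_plane_b)
    total = d_a + d_b
    if total == 0:                      # point lies on two coincident planes
        return 0.5, 0.5
    w_a = d_b / total                   # closer to plane a -> brighter on a
    w_b = d_a / total
    return w_a, w_b

# Example: a point 30% of the way from plane A (z=0) to plane B (z=1)
print(dfd_weights(0.3, 0.0, 1.0))       # (0.7, 0.3)
```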
In the present paper the authors analyze detailed optics of stereoscopic display combining cylindrical lenses and
embedded striped patterns, which has been proposed to reduce the contradiction between binocular parallax and focal
accommodation of the eyes. The proposed system lets the viewer see an image including high frequency striped patterns
through a cylindrical lens. When the viewer is shown a striped pattern through a cylindrical lens, the depth on which
his/her eyes focus depends on the inclination angle of stripes, for the cylindrical lens works as a lens with different focal
length depending on the orientation of lines. To control the status of accommodation correctly, it is necessary to obtain
the correspondence between the inclination angle of stripes and the focusing distance. To attain this goal we make a
computer simulator to calculate the 3D optical paths. The validity of the computer simulator is confirmed by physical
experiments with a cylindrical lens and a camera finder to measure the focal convergence of striped lines. We also
confirm that this system can induce desired focal accommodation by measuring the eyes of the viewer seeing striped
patterns through a cylindrical lens.
KEYWORDS: Integral imaging, 3D displays, Heads up displays, 3D image processing, Image quality, Fresnel lenses, Imaging systems, Stereoscopic displays, LCDs, 3D volumetric displays
This paper formulates the notion of coarse integral imaging and applies it to practical designs of 3D
displays for the purposes of robot teleoperation and automobile HUDs. 3D display technologies are
demanded in the applications where real-time and precise depth perception is required, such as
teleoperation of robot manipulators and HUDs for automobiles. 3D displays for these applications,
however, have not been realized so far. In the conventional 3D display technologies, the eyes are usually
induced to focus on the screen, which is not suitable for the above purposes. To overcome this problem the
author adopts the coarse integral imaging system, where each component lens is large enough to cover dozens of times more pixels than the number of views. The merit of this system is that it can induce the viewer's focus on planes of various depths by generating a real or virtual image off the screen. This system, however, has major disadvantages in image quality caused by the aberration of the lenses and discontinuity at the joints of component lenses. In this paper the author proposes practical
optical designs for 3D monitors for robot teleoperation and 3D HUDs for automobiles by overcoming the
problems of aberration and discontinuity of images.
KEYWORDS: 3D displays, 3D volumetric displays, Polarization, LCDs, 3D image processing, Stereoscopic displays, Liquid crystals, Virtual reality, 3D modeling
The authors propose an electronic 3D display combining a multiview display and a volumetric display.
Conventional multiview displays often give the viewers severe eyestrains because of the contradiction
between binocular convergence and focal accommodation of the eyes. Though volumetric displays are
free from the contradiction, they cannot express occlusion or gloss of the objects. The proposed system
overcomes these disadvantages at once by displaying colors with the multiview display part and fine edge contrast with the volumetric display part. As for the multiview display we use conventional multiview technologies. As for the volumetric display, we use multilayer monochrome TFT liquid crystal panels. Here we can use monochrome panels because the volumetric part is only in charge of expressing edge contrast. This can sufficiently induce proper accommodation, since the focal accommodation of our eyes depends only on the edges of the image. To connect the edges of adjacent panels smoothly, we apply the DFD approach, where a point in the middle of two panels is expressed by drawing it on both panels.
This paper presents a simple and inexpensive multiview 3D display system composed of an LCD panel, a convex lens array, and a Fresnel lens. In the proposed system, a pair consisting of an LCD fragment and a convex lens in the array plays the role of a projector. The idea of multiview 3D displays composed of multiple
projectors and a large convex lens or a concave mirror is old and famous. The conventional methods,
however, require diffusers to show continuous motion parallax, which degrades the quality of the image. To
solve this problem we use a convex lens array with no gaps between the lenses, which realizes continuous
motion parallax without diffusers. The convex lens array, however, has to produce images without
aberration to show the viewer stable 3D images. It is hard and expensive to realize such lens arrays
without gaps between the component lenses. To produce images with little aberration in a simple format, the author proposes an optical system where each component lens produces parallel light rays instead of creating an image, by keeping the distance between the LCD surface and the lens array equal to the focal distance of the component lenses. To create an image, we use a large convex-type Fresnel lens, which has been used only for distributing multiview images to each viewpoint in conventional multi-projection systems. The Fresnel lens, receiving parallel light from the lens array, creates a floating real image at its focal distance and distributes the multiview images at the same time. With this configuration we can create images with little aberration even when we use a lens array composed of simple convex-type Fresnel lenses that are widely available at low prices.
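A paraxial thin-lens sketch of this geometry is given below: a pixel at the focal plane of its small lens yields a collimated bundle, and the large Fresnel lens focuses that bundle in its focal plane to form the floating real image. The focal lengths and offsets are illustrative assumptions.

```python
def collimated_direction(pixel_offset, f_lenslet):
    """Slope of the collimated bundle leaving one lenslet (paraxial model):
    an off-axis pixel at the focal plane produces a tilted parallel bundle."""
    return -pixel_offset / f_lenslet

def fresnel_image_point(slope, f_fresnel):
    """A collimated bundle with this slope focuses at height slope * f_fresnel
    in the Fresnel lens's focal plane (paraxial thin-lens model)."""
    return slope * f_fresnel

f_lenslet, f_fresnel = 20.0, 300.0          # mm, illustrative values
for offset in (-2.0, 0.0, 2.0):             # pixel offsets within one lenslet
    s = collimated_direction(offset, f_lenslet)
    print(f"pixel offset {offset:+.1f} mm -> image height "
          f"{fresnel_image_point(s, f_fresnel):+.1f} mm")
```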
The authors propose an inexpensive human interface for teleoperation of mobile robots by giving a perspective-transformed image of a virtual 3D screen on a standard PC display. Conventional teleoperation systems of mobile robots have used multiple screens for multiple cameras or a curved screen for a wide view camera, both of which are expensive solutions intended only for professional use. We adopt a single standard PC display as the display system for the operator to make the system affordable to all PC users. To make the angular location perceivable with a 2D display, the authors propose a method to show on the flat screen a perspective-transformed image of a virtual 180-degree cylindrical screen. In this system the image shown on the 2D screen preserves angular information of the remote place, which can help the operator grasp the angular location of the objects in the image. The result of the experiments indicates that the perspective-transformed images of the cylindrical screen can give the operator a better understanding of the remote world, which enables easier and more instinctive teleoperation.
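The cylindrical-screen transformation can be sketched as follows, assuming the wide-view image is parameterized by azimuth angle and the projection viewpoint is placed slightly behind the cylinder's center so the full 180-degree range fits on the flat screen; the parameters are illustrative, not the system's actual settings.

```python
import math

def cylinder_to_screen(azimuth_deg, height, radius, viewer_back, focal):
    """Project a point on a virtual cylinder (given azimuth and height) onto
    a flat screen in front of a viewpoint placed `viewer_back` behind the
    cylinder's center; returns (x, y) in screen units."""
    a = math.radians(azimuth_deg)                     # 0 deg = straight ahead
    x = radius * math.sin(a)                          # point on the cylinder
    z = radius * math.cos(a) + viewer_back            # depth from the viewpoint
    return focal * x / z, focal * height / z          # pinhole projection

# Example: azimuths from -90 to +90 degrees map monotonically onto the screen
for az in (-90, -45, 0, 45, 90):
    print(az, cylinder_to_screen(az, 0.0, radius=1.0, viewer_back=0.5, focal=1.0))
```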
In the present paper the authors present a novel stereoscopic display method combining volumetric edge display technology and multiview display technology to realize the presentation of natural 3D images in which the viewers do not suffer from the contradiction between binocular convergence and focal accommodation of the eyes, which causes eyestrain and sickness. We adopt the volumetric display method only for edge drawing, while we adopt the stereoscopic approach for flat areas of the image. Since the focal accommodation of our eyes is affected only by the edge part of the image, natural focal accommodation can be induced if the edges of the 3D image are drawn at the proper depth. The conventional stereo-matching technique can give us robust depth values for the pixels which constitute noticeable edges. Occlusion and gloss of the objects can also be roughly expressed with the proposed method, since we use the stereoscopic approach for the flat areas. With this system many users can view natural 3D objects at a consistent position and posture at the same time. A simple optometric experiment using a refractometer suggests that the proposed method can give us 3-D images without contradiction between binocular convergence and focal accommodation.
Contradiction between the convergence and accommodation of our eyes often causes eyestrain and sickness of the viewer in conventional stereoscopic display systems. Though there exist several methods to solve this problem, they are expensive and are not expected to be commercially available in the near future. The authors propose a novel system that realizes electronic motion images without VR sickness, consisting of cylindrical lenses and electronic image displays with high frequency striped patterns. When we see images with high frequency patterns through cylindrical lenses, we perceive a change of blur and focal depth in proportion to the inclination angle of the high frequency stripes. Thus this system can control the accommodation status of our eyes continuously with inexpensive devices. As for natural scenes, this system uses a filter that blurs the high frequency components contained in the scenes except for those needed to induce the desired accommodation of our eyes. Thus this system can also induce proper accommodation of our eyes for most natural scenes including various high frequency components.
This paper proposes a multiview version of the autostereoscopic display FLOATS (Fresnel Lens based Optical Apparatus for Touchable-distance Stereoscopy), which combines generation of a floating real image and parallax presentation to show a realistic 3-D image within the viewer's reach. Earlier versions of FLOATS have required a head tracker, physical motion control of filters or mirrors, and transformation of the image in accordance with the viewer's motion to keep presenting different images to each eye. To do away with these requirements, we propose two methods which realize multiview presentation to the viewer. One method is to use multiple LCD panels and multiple fixed mirrors instead of mobile mirrors. The other method is to use multiple projectors, fly-eye lenses, and Fresnel lenses. Though the former system does not cost much, it is not practical for presenting more than 10 views. With the latter system it is practical to present more than 30 views, which can realize presentation of both horizontal and vertical parallax. With this technology the viewers can perceive undistorted 3-D space from any angle, which makes it possible for multiple viewers to observe the 3-D image at a consistent position from different angles at the same time.
The present paper proposes a 3D camera system for teleoperation using an autostereoscopic display based on a floating real image. To present the operator 3-D images which correspond to the operator's viewpoint, the image has to be updated in accordance with the motion of the operator's head. The proposed method combines camera motion control, which keeps capturing the proper texture for the viewpoint, and image transformation software, which copes with fast motion of the viewer that the camera motion cannot follow. With this technology, robust 3-D image presentation is realized.
The author presents a new version of the FLOATS (Fresnel-Lens-based Optical Apparatus for Touchable-distance Stereoscopy) system. The autostereoscopic display FLOATS combines parallax presentation and real image generation by lenses to show realistic 3-D images within the reach of the viewer. In the conventional FLOATS, polarizing filters or liquid crystal shutters are used to separate the images projected to the eyes, which has caused cross-talk noise and reduction of brightness. This paper proposes the use of combined mirrors instead of filters or shutters to realize parallax presentation. In this system the images for the right eye and the left eye are displayed side by side on the screen. Then the combined mirrors reflect each image to shift it to the center so that the optical geometry of this system is completely the same as that of the conventional FLOATS display. This new method avoids both cross-talk noise and reduction of brightness and enables presentation of a more realistic 3-D image which gives less eyestrain to the viewer. In addition, an LCD panel can be used as the screen part of this system because it is only required to show two images side by side. As a result the size of the system is reduced compared with the conventional system.
A reality-enhanced autostereoscopic display system is presented. In this system, viewers who do not wear any special glasses can perceive 3D images within their hands' reach with little sense of incongruity. The feature of this system is the combination of real image generation and parallax presentation. A real image of the display in the back is generated in the air by using Fresnel lenses, which has made it possible to narrow the artificial parallax to display 3D objects in the workspace near the viewer without interfering with the viewer's motion. Smaller artificial parallax leads to 3D perception with more reality and less eyestrain than conventional 3D displays. For parallax presentation, a mobile filter which plays the role of stereoscopic goggles is set between the display in the back and the Fresnel lenses and is controlled so that it follows the motion of the viewer to keep presenting different images to each eye. To present undistorted 3D space, the optical path including refraction by the Fresnel lenses is calculated and the image on the screen is updated based on it. Real-time undistorted image presentation to unrestricted eye positions is realized by using a texture mapping technique.