KEYWORDS: Microlens, Integral imaging, 3D displays, 3D image processing, Microlens array, Sensors, Light, Image sensors, 3D image reconstruction, Visualization
A common problem in 3D integral-imaging monitors is flipping, which occurs when the microimages are seen through neighboring microlenses. This effect appears when, at large viewing angles, the light rays emitted by an elemental image do not pass through the corresponding microlens. The usual solution to this problem is to insert a set of physical barriers that prevent this crosstalk. In this contribution we present a purely optical alternative to physical barriers. Our arrangement is based on the Köhler illumination concept and prevents the rays emitted by one microimage from impinging on the neighboring microlens. The proposed system does not use additional lenses to project the elemental images, so no optical aberrations are introduced.
KEYWORDS: Cameras, 3D displays, 3D image processing, Microlens, Integral imaging, Sensors, 3D image reconstruction, Image processing, Imaging systems, Microlens array
Plenoptic cameras capture a sampled version of the map of rays emitted by a 3D scene, commonly known as the lightfield. These devices have been proposed for multiple applications, such as computing different sets of views of the 3D scene, removing occlusions, and changing the focused plane of the scene. They can also capture images that can be projected onto an integral-imaging monitor to display 3D images with full parallax. In this contribution, we report a new algorithm for transforming the plenoptic image so as to choose which parts of the 3D scene are reconstructed in front of and behind the microlenses in the 3D display process.
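The view-computation step mentioned above can be illustrated with a minimal sketch. Assuming the lightfield is stored as a square grid of square microimages (the function name `extract_view` and the array layout are illustrative assumptions, not the authors' implementation), one perspective view is obtained by taking the same pixel position from every microimage:

```python
import numpy as np

def extract_view(plenoptic, n_micro, u, v):
    """Extract one perspective view from a raw plenoptic image.

    plenoptic : 2D array of shape (n_micro*p, n_micro*p), the sensor
                image made of n_micro x n_micro microimages of p x p pixels.
    (u, v)    : pixel position inside each microimage; picking the same
                position across all microimages yields one view of the scene.
    """
    h, w = plenoptic.shape
    p = h // n_micro                      # pixels per microimage
    # Reshape so the axes are (micro_row, pix_row, micro_col, pix_col)
    lf = plenoptic.reshape(n_micro, p, n_micro, p)
    return lf[:, u, :, v]                 # one sample per microlens

# Toy example: 4x4 microlenses, 3x3 pixels each
raw = np.arange(12 * 12).reshape(12, 12)
view = extract_view(raw, n_micro=4, u=1, v=2)
print(view.shape)   # (4, 4)
```

Sweeping (u, v) over all pixel positions produces the full set of views of the 3D scene.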
KEYWORDS: Cameras, Microlens, Integral imaging, 3D image processing, Microlens array, Sensors, Near field, 3D displays, 3D visualizations, Photographic lenses
One of the differences between near-field integral imaging (NInI) and far-field integral imaging (FInI) is the ratio between the number of elemental images and the number of pixels per elemental image. While in NInI the 3D information is codified in a small number of elemental images (with many pixels each), in FInI the information is codified in many elemental images (with only a few pixels each). The latter codification is similar to the one needed for projecting the InI field onto a pixelated display when the aim is to build an InI monitor. For this reason, FInI cameras are especially well adapted for capturing the InI field for display purposes. In this contribution we study the relations between the images captured in NInI and FInI modes, and develop an algorithm that permits the projection of NInI images onto an InI monitor.
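The difference in codification can be pictured as an axis swap on the 4D lightfield array. A minimal sketch follows; the function name `transpose_lightfield` and the array layout are assumptions, and the algorithm developed in the text must additionally handle resampling to the monitor's geometry:

```python
import numpy as np

def transpose_lightfield(lf):
    """Swap the roles of elemental-image index and pixel index.

    lf : 4D array of shape (Ny, Nx, py, px): Ny x Nx elemental images
         with py x px pixels each. The result has py x px elemental
         images of Ny x Nx pixels, i.e. the FInI-style codification
         of the same ray information.
    """
    return lf.transpose(2, 3, 0, 1)

# NInI-style capture: 2x2 elemental images, 8x8 pixels each
nini = np.arange(2 * 2 * 8 * 8).reshape(2, 2, 8, 8)
fini = transpose_lightfield(nini)
print(fini.shape)   # (8, 8, 2, 2)
```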
Integral Imaging is a technique to obtain true color 3D images that can provide full and continuous motion parallax for several viewers. The depth of field of these systems is mainly limited by the numerical aperture of each lenslet of the microlens array. A digital method has been developed to increase the depth of field of Integral Imaging systems in the reconstruction stage. By means of the disparity map of each elemental image, it is possible to classify the objects of the scene according to their distance from the microlenses and apply a selective deconvolution for each depth of the scene. Topographical reconstructions with enhanced depth of field of a 3D scene are presented to support our proposal.
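The selective-deconvolution idea can be sketched as follows. Assuming the disparity map has already been quantized into integer depth labels and a blur kernel (PSF) is known for each depth, each depth slice is restored with its own kernel and the results are recomposed. Wiener deconvolution and all names here are illustrative choices, not the authors' exact method:

```python
import numpy as np

def wiener_deconvolve(img, psf, k=1e-2):
    """Frequency-domain Wiener deconvolution with a known blur kernel."""
    H = np.fft.fft2(psf, s=img.shape)
    G = np.fft.fft2(img)
    F = G * np.conj(H) / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(F))

def selective_deconvolution(img, depth_map, psfs):
    """Deconvolve each depth slice with its own PSF and recompose.

    depth_map : integer depth label per pixel (from the disparity map).
    psfs      : dict mapping depth label -> PSF for that depth.
    """
    out = np.zeros_like(img, dtype=float)
    for depth, psf in psfs.items():
        restored = wiener_deconvolve(img, psf)
        mask = depth_map == depth
        out[mask] = restored[mask]       # keep only this depth's pixels
    return out

# Demo: a single depth with an identity (delta) PSF
img = np.random.rand(16, 16)
depth = np.zeros((16, 16), dtype=int)
delta = np.zeros((16, 16))
delta[0, 0] = 1.0
restored = selective_deconvolution(img, depth, {0: delta})
```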
KEYWORDS: Systems modeling, Image resolution, Liquid crystals, Cameras, Monte Carlo methods, Geometrical optics, Imaging systems, Data modeling, 3D modeling, Complex systems
Complex multidimensional capturing setups such as plenoptic cameras (PC) introduce a trade-off between various
system properties. Consequently, established capturing properties, like image resolution, need to be described
thoroughly for these systems. Therefore, models and metrics that assist in exploring and formulating this trade-off are highly beneficial for studying as well as designing complex capturing systems. This work demonstrates the
capability of our previously proposed sampling pattern cube (SPC) model to extract the lateral resolution for
plenoptic capturing systems. The SPC carries both ray information as well as focal properties of the capturing
system it models. The proposed operator extracts the lateral resolution from the SPC model throughout an
arbitrary number of depth planes giving a depth-resolution profile. This operator utilizes focal properties of the
capturing system as well as the geometrical distribution of the light containers which are the elements in the SPC
model. We have validated the lateral resolution operator for different capturing setups by comparing the results
with those from Monte Carlo numerical simulations based on the wave optics model. The lateral resolution
predicted by the SPC model agrees with the results from the more complex wave optics model better than both
the ray based model and our previously proposed lateral resolution operator. This agreement strengthens the
conclusion that the SPC fills the gap between ray-based models and the real system performance, by including
the focal information of the system as a model parameter. The SPC is proven to be a simple yet efficient model for extracting the lateral resolution as a high-level property of complex plenoptic capturing systems.
In this research we propose a new definition of three-dimensional (3-D) resolution for integral imaging. The general concept of two-dimensional (2-D) resolution, when applied to 3-D, fails to describe 3-D resolvability completely. Thus, studies focused on resolution improvement in 3-D integral imaging systems have not thoroughly investigated the effect of their methods on 3-D quality; the effect has only been shown on the 2-D resolution of each laterally reconstructed image. The newly introduced 3-D resolution concept is demonstrated based on ray patterns, the cross-sections between them, and the resulting sampling points. Consequently, the effect of these sampling points on 3-D resolvability is discussed in different lateral planes. Simulations have been performed that confirm the theoretical statements.
In multi-view three-dimensional imaging, capturing the elemental images of distant objects requires a field-like lens that projects the reference plane onto the microlens array. In this case, the spatial resolution of reconstructed images is determined by the spatial density of microlenses in the array. In this paper we report a simple method, based on taking two snapshots, to double the 2D pixel density of reconstructed scenes. Experiments are reported to support the proposed approach.
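The double-snapshot idea can be sketched by interleaving two reconstructions captured with a half-pitch shift of the microlens array. The filling of the remaining sample sites by averaging is an illustrative choice, not necessarily the authors' interpolation:

```python
import numpy as np

def interleave_snapshots(rec_a, rec_b):
    """Interleave two reconstructions captured with a half-pitch shift.

    rec_a, rec_b : 2D arrays of equal shape; rec_b is assumed captured
    after shifting the microlens array by half a pitch along both axes,
    so its samples fall diagonally between those of rec_a.
    Returns an image with doubled sampling along each axis; the two
    remaining sample sites per cell are filled by averaging.
    """
    h, w = rec_a.shape
    out = np.zeros((2 * h, 2 * w))
    out[0::2, 0::2] = rec_a              # first-snapshot samples
    out[1::2, 1::2] = rec_b              # half-pitch-shifted samples
    # Fill the remaining sites from the two known neighbours
    out[0::2, 1::2] = (rec_a + rec_b) / 2
    out[1::2, 0::2] = (rec_a + rec_b) / 2
    return out

doubled = interleave_snapshots(np.ones((4, 4)), 3 * np.ones((4, 4)))
print(doubled.shape)   # (8, 8)
```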
KEYWORDS: Imaging systems, Cameras, 3D image processing, Integral imaging, Image resolution, 3D displays, 3D image reconstruction, Stereoscopic cameras, Data processing, Light
An analysis and comparison of the lateral and the depth resolution in the reconstruction of 3D scenes from images obtained
either with a classical two view stereoscopic camera or with an Integral Imaging (InI) pickup setup is presented.
Since both systems belong to the general class of multiview imaging systems, the best analytical tools for the calculation of lateral and depth resolution are the ray-space formalism and the classical tools of Fourier information processing. We demonstrate that InI is the optimum system for sampling the spatio-angular information contained in a 3D scene.
KEYWORDS: 3D image processing, Integral imaging, 3D displays, Reconstruction algorithms, 3D image reconstruction, Visualization, Displays, Microlens array, Algorithm development, Microlens
Previously, we reported a digital technique for the formation of real, undistorted, orthoscopic integral images by direct pickup. However, the technique was constrained to the case of symmetric image capture and display systems. Here, we report a more general algorithm that allows the pseudoscopic-to-orthoscopic transformation with full control over the display parameters, so that one can generate a set of synthetic elemental images that suits the characteristics of the Integral-Imaging monitor and permits control over the depth and size of the reconstructed 3D scene.
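For reference, in the symmetric capture/display case the pseudoscopic-to-orthoscopic transformation reduces to rotating every elemental image by 180 degrees. A minimal sketch of that baseline follows; the generalized algorithm described above additionally resamples the pixels to match the monitor parameters, which is not reproduced here:

```python
import numpy as np

def symmetric_po_transform(elementals):
    """Baseline pseudoscopic-to-orthoscopic transform (symmetric case).

    elementals : 4D array of shape (Ny, Nx, py, px) of elemental images.
    Each elemental image is rotated by 180 degrees, which converts the
    pseudoscopic (depth-inverted) reconstruction into an orthoscopic one
    when capture and display geometries are symmetric.
    """
    return elementals[:, :, ::-1, ::-1]

ei = np.arange(2 * 2 * 3 * 3).reshape(2, 2, 3, 3)
ortho = symmetric_po_transform(ei)
```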
KEYWORDS: 3D image processing, Integral imaging, 3D displays, Image resolution, Reconstruction algorithms, LCDs, Microlens array, Digital cameras, Synthetic aperture radar, 3D acquisition
Integral imaging (InI) technology was created with the aim of providing binocular observers of monitors, or matrix display devices, with auto-stereoscopic images of 3D scenes. However, over the last few years the inventiveness of researchers has led to many other interesting applications of integral imaging. Examples are the application of InI to object recognition, the mapping of 3D polarization distributions, and the elimination of occluding signals. One of the most interesting applications of integral imaging is the production of views focused at different depths of the 3D scene. This application is the natural result of the ability of InI to create focal stacks from a single input image. In this contribution we present a new algorithm for this optical-slicing application and show that 3D reconstruction with improved lateral resolution is possible.
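The focal-stack computation underlying optical slicing can be sketched with the classical shift-and-sum reconstruction; sweeping the shift parameter brings different depth planes into focus. This is the standard technique, shown for context, not necessarily the new algorithm presented here:

```python
import numpy as np

def refocus(lf, shift):
    """Computational refocusing by shift-and-sum over elemental images.

    lf    : 4D array of shape (Ny, Nx, py, px) of elemental images.
    shift : integer pixel shift per unit of elemental-image index;
            each value of `shift` brings a different depth plane of
            the 3D scene into focus. The stack of results over a range
            of shifts is the focal stack.
    """
    ny, nx, py, px = lf.shape
    acc = np.zeros((py, px))
    for i in range(ny):
        for j in range(nx):
            # Shift each elemental image in proportion to its index,
            # then accumulate; in-focus depths add coherently.
            acc += np.roll(lf[i, j], (i * shift, j * shift), axis=(0, 1))
    return acc / (ny * nx)

lf = np.random.rand(2, 2, 4, 4)
plane = refocus(lf, shift=0)
```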