Conventional integral three-dimensional images, whether acquired by cameras or generated by computers, suffer from a narrow viewing range. Many methods to enlarge the viewing range of integral images have been suggested. However, to date they all involve modifications of the optical systems, which normally make the system more complex and may bring other drawbacks in some designs. Based on observation and study of computer-generated integral images, this paper quantitatively analyzes the viewing properties of integral images in the conventional configuration and the associated problems. To improve the viewing properties, a new model, the maximum viewing width (MVW) configuration, is proposed. MVW-configured integral images achieve the maximum viewing width on the viewing line at the optimum viewing distance, and a greatly extended viewing width around the viewing line, without any modification of the original optical display systems. In normal applications, an MVW integral image also has better viewing-zone transition properties than conventional images. The considerations in the selection of optimal parameters are discussed. New definitions related to the viewing properties of integral images are given. Finally, two potential application schemes of MVW integral images besides computer generation are described.
Integral imaging is a technique capable of displaying images with continuous parallax in full natural color. This paper presents a method of extracting a depth map from integral images through viewpoint image extraction. The approach starts with the construction of special viewpoint images from the integral image. Each viewpoint image contains a two-dimensional parallel recording of the three-dimensional scene. A new mathematical expression giving the relationship between object depth and the corresponding viewpoint image pair displacement is derived by geometrically analyzing the integral recording process. The depth can then be calculated from the corresponding displacement between two viewpoint images. A modified multibaseline algorithm, where the baseline is defined as the sample distance between two viewpoint images, is further adopted to integrate the information from multiple extracted viewpoint images. The developed depth extraction method is validated and applied to both real photographic and computer-generated unidirectional integral images. The depth measuring solution gives a precise description of the object thickness, with an error of less than 0.3% from the photographic image in the example.
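The depth-from-displacement relationship described above can be sketched with a simple similar-triangles model. This is an illustrative stand-in, not the paper's exact derivation: the function name and the parameter values (baseline, focal length, pixel pitch) are all assumptions for demonstration.

```python
# Hedged sketch: depth from viewpoint-image displacement under a pinhole
# model, z = B * f / (d * p). The symbols are illustrative, not the
# paper's exact expression.

def depth_from_displacement(baseline_mm, focal_mm, disparity_px, pixel_pitch_mm):
    """Similar-triangles estimate of depth from the displacement between
    two viewpoint images separated by baseline_mm."""
    disparity_mm = disparity_px * pixel_pitch_mm
    if disparity_mm == 0:
        return float("inf")  # zero displacement -> point at infinity
    return baseline_mm * focal_mm / disparity_mm

# Example: 2 mm baseline, 3 mm focal length, 4 px displacement, 0.01 mm pixels
z = depth_from_displacement(2.0, 3.0, 4, 0.01)  # ~150 mm
```

Larger displacements map to nearer objects, which is why accurate sub-pixel disparity estimation matters most for distant scene points.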
This paper presents a wavelet-based lossy compression technique for unidirectional 3D integral images (UII). The method requires the extraction of different viewpoint images from the integral image. A single viewpoint image is constructed by extracting one pixel from each microlens, and each viewpoint image is then decomposed using a Two-Dimensional Discrete Wavelet Transform (2D-DWT). The resulting array of coefficients contains several frequency bands. The lower frequency bands of the viewpoint images are assembled and compressed using a Three-Dimensional Discrete Cosine Transform (3D-DCT) followed by Huffman coding. This achieves decorrelation within and between the 2D low-frequency bands from the different viewpoint images. The remaining higher frequency bands are arithmetic coded. After decoding and decompression of the viewpoint images using an inverse 3D-DCT and an inverse 2D-DWT, each pixel from every reconstructed viewpoint image is put back into its original position within the microlens to reconstruct the whole 3D integral image.
Simulations were performed on a set of four different grey-level 3D UII using a uniform scalar quantizer with deadzone. The results for the average of the four UII intensity distributions are presented and compared with the previous 3D-DCT scheme. It was found that the algorithm achieves better rate-distortion performance with respect to compression ratio and image quality at very low bit rates.
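At the heart of the scheme above is a lossless pixel shuffle: each viewpoint image takes one pixel from under every micro-lens, and reconstruction puts each pixel back. A minimal sketch of that shuffle for a unidirectional (lenticular) image, assuming vertical lenticules of width `m` pixels, is:

```python
import numpy as np

def extract_viewpoints(uii, m):
    """Split a unidirectional integral image (H x W, W divisible by m)
    into m viewpoint images by taking pixel k under every micro-lens."""
    h, w = uii.shape
    assert w % m == 0
    return [uii[:, k::m] for k in range(m)]

def reassemble(views, m):
    """Inverse shuffle: put each viewpoint pixel back under its micro-lens."""
    h, wv = views[0].shape
    out = np.empty((h, wv * m), dtype=views[0].dtype)
    for k, v in enumerate(views):
        out[:, k::m] = v
    return out

img = np.arange(4 * 8).reshape(4, 8)      # toy 4x8 image, 4-pixel lenticules
views = extract_viewpoints(img, 4)
assert np.array_equal(reassemble(views, 4), img)  # shuffle is lossless
```

The transforms (2D-DWT, 3D-DCT) and entropy coders then operate on the extracted viewpoint images; they are omitted here, since the shuffle itself is what exposes the inter-viewpoint redundancy the 3D-DCT exploits.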
The objective of this paper is to present a novel edge extraction algorithm, based on differentiation of the local histograms of small non-overlapping blocks of the output of the first derivative of a narrow 2D Gaussian filter. It is shown that the proposed edge extraction algorithm provides the best trade-off between noise rejection and accurate edge localisation and resolution. The proposed edge detection algorithm starts by convolving the image with a narrow 2D Gaussian smoothing filter to minimise the edge displacement and increase the resolution and detectability. Processing of the local histograms of small non-overlapping blocks of the edge map is carried out to perform an additional noise rejection operation and automatically determine the local thresholds. The results obtained with the proposed edge detector are compared to the Canny edge detector.
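The pipeline above (narrow Gaussian derivative, then block-local thresholding) can be sketched as follows. The per-block rule here is a simple block-mean test standing in for the paper's histogram-differentiation rule, and the kernel width is an assumed value:

```python
import numpy as np

def gaussian_deriv_1d(sigma):
    # First derivative of a narrow 1-D Gaussian (illustrative kernel).
    r = int(3 * sigma) or 1
    x = np.arange(-r, r + 1, dtype=float)
    g = np.exp(-x**2 / (2 * sigma**2))
    return -x * g / (sigma**2 * g.sum())

def edge_magnitude(img, sigma=0.8):
    """Gradient magnitude from separable Gaussian-derivative filtering."""
    k = gaussian_deriv_1d(sigma)
    gx = np.apply_along_axis(lambda row: np.convolve(row, k, mode="same"), 1, img)
    gy = np.apply_along_axis(lambda col: np.convolve(col, k, mode="same"), 0, img)
    return np.hypot(gx, gy)

def block_threshold(mag, block=8):
    """One local threshold per non-overlapping block; the block mean is a
    stand-in for the paper's local-histogram rule."""
    out = np.zeros_like(mag, dtype=bool)
    h, w = mag.shape
    for i in range(0, h, block):
        for j in range(0, w, block):
            b = mag[i:i+block, j:j+block]
            out[i:i+block, j:j+block] = b > b.mean()
    return out
```

On a vertical step edge, only pixels near the step exceed their block's local threshold, while flat regions produce no detections.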
KEYWORDS: 3D modeling, 3D image processing, Imaging systems, Cameras, Ray tracing, Integral imaging, Computing systems, Coded apertures, Visualization, Image processing
For computer-generated integral images, a transition line can be observed when the viewer shifts parallel to the lens sheet and reaches the edge of the current viewing zone. This is due to the transition from the current viewing zone to the next. Images generated using conventional algorithms suffer from a large transition zone, which degrades the visual effect on replay and greatly decreases the effective viewing width. This phenomenon is especially apparent for large images. Conventional computer generation algorithms for integral images use the same boundary configuration as the micro-lenses, which is straightforward and easy to implement, but is the cause of the large transition zone and narrow viewing angle. This paper presents a novel micro-image configuration and algorithm to solve the problem. In the new algorithm, the boundaries of the micro-images are not confined to the physical lens boundaries and are normally larger than them. To achieve the maximum effective viewing width, each micro-image is arranged according to rules determined by several constraints. The considerations in the selection of optimal parameters are discussed, and new definitions related to this issue are given.
KEYWORDS: 3D image processing, 3D displays, Microlens array, Cameras, LCDs, Integral imaging, 3D modeling, Image quality, Image transmission, Computer programming
The development of 3D TV systems and displays for public use requires that several important criteria be satisfied: the perceived resolution must be as good as existing 2D TV, the image must be in full natural colour, compatibility with current 2D systems in terms of frame rate and transmission data must be ensured, human-factors concerns must be satisfied, and seamless autostereoscopic viewing must be provided. There are several candidate 3D technologies, for example stereoscopic multiview, holographic and integral imaging, that endeavor to satisfy these technological and other conditions.
The perceived advantages of integral imaging are that the 3D data can be captured by a single-aperture camera, the display is a scaled 3D optical model, and, in viewing, accommodation and convergence are as in normal sight (natural), thereby preventing possible eye strain. Consequently, it appears to be ideal for prolonged human use. The technological factors that inhibited the possible use of integral imaging for TV display have been shown to be less intractable than at first thought. For example, compression algorithms are available such that terrestrial bandwidth is perfectly suitable for transmission purposes. Real-time computer generation of integral images is feasible, and the high-resolution LCD panels currently available are sufficient to enable high-contrast, high-quality image display.
KEYWORDS: 3D image processing, Cameras, 3D modeling, Imaging systems, 3D displays, Image processing, Computing systems, Televisions, Integral imaging, Computer simulations
The Imaging Technologies group at De Montfort University has developed an integral 3D imaging system, which is seen as the most likely vehicle for 3D television that avoids adverse psychological effects. To create truly engaging three-dimensional television programs, a virtual studio that performs the tasks of generating, editing and integrating 3D content involving virtual and real scenes is required. The paper presents, for the first time, the procedures, factors and methods for integrating computer-generated virtual scenes with real objects captured using the 3D integral imaging camera system. The method of computer generation of 3D integral images, in which the lens array rather than the physical camera is modelled, is described. In the model, each micro-lens that captures a different elemental image of the virtual scene is treated as an extended pinhole camera. An integration process named integrated rendering is illustrated. Detailed discussion and investigation focus on depth extraction from captured 3D integral images. The depth calculation method from disparity, and the multiple-baseline method used to improve the precision of depth estimation, are also presented. The concept of colour SSD and its further improvement in precision are proposed and verified.
KEYWORDS: Image analysis, 3D image processing, Photography, Image processing, Error analysis, 3D displays, Integral imaging, Point spread functions, Image acquisition, 3D image reconstruction
Integral imaging is a technique capable of displaying images with continuous parallax in full natural color. This paper presents a modified multi-baseline method for extracting depth information from unidirectional integral images. The method involves first extracting sub-images from the integral image. A sub-image is constructed by extracting one pixel from each micro-lens rather than a macro-block of pixels corresponding to a micro-lens unit. A new mathematical expression giving the relationship between object depth and the corresponding sub-image pair displacement is derived by geometrically analyzing the three-dimensional image recording process. A correlation-based matching technique is used to find the disparity between two sub-images. In order to improve the disparity analysis, a modified multi-baseline technique, where the baseline is defined as the distance between two corresponding pixels in different sub-images, is adopted. The effectiveness of this modified multi-baseline technique in removing the mismatching caused by similar patterns in object scenes has been proven by analysis and experimental results. The developed depth extraction method is validated and applied to both photographic and computer-generated unidirectional integral images. The depth estimation solution gives a precise description of object thickness, with an error of less than 1.0% from the photographic image in the example.
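The multi-baseline idea can be sketched with a sum of SSDs in which a single depth-related parameter is shared across all baselines, so a pattern that matches by accident at one baseline is penalised at the others. This is a toy version assuming integer displacements and in-range windows, not the paper's implementation:

```python
import numpy as np

def sssd_inverse_depth(views, x, y, win=1, zetas=range(4)):
    """Sum of SSDs over multiple baselines for a pixel (x, y) in views[0].
    'zeta' plays the role of inverse depth: view k is displaced by k*zeta,
    so the same zeta must be consistent across all baselines."""
    ref = views[0]
    best, best_zeta = None, None
    for zeta in zetas:
        total = 0.0
        for k in range(1, len(views)):
            d = k * zeta                       # displacement grows with baseline
            a = ref[y-win:y+win+1, x-win:x+win+1].astype(float)
            b = views[k][y-win:y+win+1, x-d-win:x-d+win+1].astype(float)
            total += ((a - b) ** 2).sum()
        if best is None or total < best:
            best, best_zeta = total, zeta
    return best_zeta
```

Summing the SSDs before minimising is what suppresses mismatches from repeated patterns: a false match rarely stays consistent as the baseline grows.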
This paper presents, for the first time, a theory for obtaining the optimum pixel grouping for improving coherence and the shadow cache in integral 3D ray-tracing, in order to reduce execution time. A theoretical study of the number of shadow cache hits with respect to the properties of the lenses and the shadow size and location is discussed, with analysis of three different styles of pixel grouping in order to obtain the optimum grouping. The first style traces rows of pixels in the horizontal direction, the second traces similar pixels in adjacent lenses in the horizontal direction, and the third traces columns of pixels in the vertical direction. The optimum grouping is a combination of all three, dependent upon the number of cache hits in each. Experimental results validate the theory, and tests on benchmark scenes show that up to a 37% improvement in execution time can be achieved by proper pixel grouping.
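The three traversal styles can be sketched as orderings over (lens, row, pixel) indices, with a toy hit counter standing in for the shadow cache (here a "hit" is simply a consecutive pixel with the same shadow status — an illustrative model, not the paper's cache):

```python
def pixel_orders(n_lenses, px_per_lens, rows):
    """The three traversal orders, as (lens, row, pixel) tuples."""
    # Style 1: rows of pixels, horizontally within each lens
    style1 = [(l, r, p) for r in range(rows)
                        for l in range(n_lenses) for p in range(px_per_lens)]
    # Style 2: the same pixel position across adjacent lenses
    style2 = [(l, r, p) for r in range(rows)
                        for p in range(px_per_lens) for l in range(n_lenses)]
    # Style 3: columns of pixels, vertically
    style3 = [(l, r, p) for l in range(n_lenses)
                        for p in range(px_per_lens) for r in range(rows)]
    return style1, style2, style3

def cache_hits(order, in_shadow):
    """Toy shadow-cache model: a hit when the current pixel has the same
    shadow status as the previous pixel in the traversal."""
    hits, prev = 0, None
    for key in order:
        s = in_shadow(key)
        if prev is not None and s == prev:
            hits += 1
        prev = s
    return hits
```

Because similar pixels under adjacent lenses trace nearly parallel rays, style 2 tends to produce long runs of identical shadow status, which is exactly what a shadow cache rewards.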
A new parallel multiplier design is proposed, based on the technique of partitioning the operands into four groups, but using a different grouping and a combination of 4:2 compressor carry-save adders for the accumulation of the 16 partial-product terms. A design methodology for parallel multipliers is also proposed that gives the designer more flexibility in finding the best trade-off between throughput rate and hardware cost.
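A 4:2 compressor reduces four partial-product bits plus a carry-in to a sum bit and two carry bits. The sketch below is the standard two-full-adder realization (the paper's circuit may differ); its key carry-save property is that `cout` is independent of `cin`, so a row of compressors has no horizontal carry ripple:

```python
def full_adder(x, y, z):
    s = x ^ y ^ z
    c = (x & y) | (y & z) | (x & z)   # majority
    return s, c

def compressor_4_2(a, b, c, d, cin):
    """4:2 compressor from two cascaded full adders. cout depends only on
    a, b, c, so compressors in a row do not ripple through cin.
    Invariant: a + b + c + d + cin == s + 2*(carry + cout)."""
    s1, cout = full_adder(a, b, c)
    s, carry = full_adder(s1, d, cin)
    return s, carry, cout
```

Accumulating 16 partial products through a tree of such compressors leaves only one final carry-propagate addition, which is where the throughput/hardware trade-off is made.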
Systolic architectures for 2D digital filters are presented. The structures are derived directly from the transfer function. The proposed 2D systolic arrays have several advantages over existing 2D arrays, such as modularity and the use of nearest-neighbour interconnections. These two features make the proposed architecture versatile and more suitable for VLSI implementation.
A new realization of DPCM video signal/image processing that provides the designer with more flexibility in finding the best trade-off between throughput rate and hardware cost is introduced. This is achieved by combining digit-serial computation with DPCM video signal processing. The advantage of the proposed realization is that the size of the memory used for multiplication can be reduced by a factor of at least 32, compared to 16 in existing DPCM implementations.
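For context, the DPCM loop itself is simple: the coder transmits quantized prediction errors and both ends reconstruct identically. The sketch below shows a first-order predictor only; the paper's contribution (digit-serial, memory-based multiplication inside this loop) is not modelled here:

```python
def dpcm_encode(samples, quantize=lambda e: e):
    """First-order DPCM: transmit quantized prediction errors. The encoder
    predicts from its own reconstruction so it stays in lock-step with
    the decoder."""
    pred, out = 0, []
    for x in samples:
        e = quantize(x - pred)
        out.append(e)
        pred = pred + e            # reconstructed value = next prediction
    return out

def dpcm_decode(errors):
    pred, out = 0, []
    for e in errors:
        pred = pred + e
        out.append(pred)
    return out
```

With an identity quantizer the round trip is exact; with a coarse quantizer the closed prediction loop keeps the error from accumulating.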
An automatic edge thresholding approach, based on investigation of local histograms of small nonoverlapping blocks of the quantized edge magnitude, is proposed. The edge magnitude is first quantized and then divided into small nonoverlapping blocks. A threshold for each block is chosen using an iterative procedure. In this paper the effect of the choice of quantizer is investigated using a quantitative measure. The performance of three quantizers is studied and compared to the result obtained without quantization of the gradient image, and to a previously reported method for automatic threshold selection for edge detection.
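A common iterative per-block threshold scheme of the kind referred to above is the two-class fixed-point iteration shown below. It is offered as a hedged stand-in: the paper's exact procedure and its quantization step may differ.

```python
import numpy as np

def iterative_threshold(values, t0=None, tol=0.5):
    """Iterative two-class threshold: split at t, recompute t as the
    midpoint of the two class means, repeat until stable."""
    v = np.asarray(values, dtype=float)
    t = v.mean() if t0 is None else t0
    while True:
        lo, hi = v[v <= t], v[v > t]
        if len(lo) == 0 or len(hi) == 0:
            return t                      # one-class block: keep current t
        t_new = 0.5 * (lo.mean() + hi.mean())
        if abs(t_new - t) < tol:
            return t_new
        t = t_new
```

Quantizing the edge magnitudes first shrinks the histogram over which each iteration's class means are computed, which is how the paper's approach reduces both computation and memory.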
KEYWORDS: Image compression, 3D video compression, 3D image processing, Video compression, Video, 3D displays, Televisions, Quantization, Video coding, Imaging systems
An integral imaging system is employed as part of a three-dimensional imaging system, allowing the display of full-colour images with continuous parallax within a wide viewing zone. A novel approach to the problem of compressing the significant quantity of data required to represent integral 3D video is presented, and it is shown that the reduction in bit cost achieved makes transmission via conventional broadcast channels possible.
An algorithm for image compression, based on local histogram analysis, is presented. A given image is compressed by dividing the image into nonoverlapping square blocks and coding the edge information in each block. The edge information is extracted by first differentiating the original image, quantizing the differential image, and then investigating the local histograms of small blocks of the differential image. Depending on the behavior of the local histograms in the differential image, the corresponding blocks in the original image are classified into visually active and visually continuous blocks. The visually continuous blocks are coded using the mean value only. A visually active block is coded using the location and orientation of the edge within the block. As a result, the compression ratio of the proposed algorithm depends on the behavior of the local histogram, which in turn depends heavily on the quantization of the differential image. In this paper, the effect of the quantization of the differential image on the compression ratio and the image quality is discussed.
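The active/continuous block classification can be sketched as follows. The decision rule here (spread of the block's differential values against a tolerance) is a simple stand-in for the paper's local-histogram test, and the block size and tolerance are assumed values:

```python
import numpy as np

def classify_blocks(img, block=4, tol=2.0):
    """Label each non-overlapping block 'continuous' (codable by its mean
    alone) or 'active' (contains an edge), from a simple horizontal
    differential -- a stand-in for the paper's local-histogram rule."""
    h, w = img.shape
    labels = {}
    for i in range(0, h, block):
        for j in range(0, w, block):
            b = img[i:i+block, j:j+block].astype(float)
            diff = np.abs(np.diff(b, axis=1))   # horizontal differential image
            labels[(i, j)] = "active" if diff.max() > tol else "continuous"
    return labels
```

Coarser quantization of the differential image pushes more blocks into the "continuous" class, raising the compression ratio at the cost of edge fidelity, which is the trade-off the paper studies.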
In this paper, a novel approach is proposed for selecting the thresholds of edge strength maps from their local histograms. This threshold selection technique is based on finding the threshold for small blocks of the edge map. For each block the threshold is chosen using an iterative procedure. The effect of the choice of block size is discussed. The edge strength map is quantized to reduce both the computation of the iterative threshold selection algorithm and the memory requirement. It is shown that quantization of the edge map improves the performance of the local iterative threshold selection algorithm. Typical examples of the tests carried out are presented.