In this paper, we introduce a mobile embedded system for capturing stereo images based on two CMOS camera modules. We use Windows CE as the operating system and capture the stereo images using a device driver for the CMOS camera interface and DirectDraw API functions. The raw captured image data are sent to a host computer over WiFi wireless communication, where GPU hardware and CUDA programming are used to implement real-time three-dimensional stereo imaging by synthesizing the depth of a region of interest (ROI). We also investigate the deblurring mechanism of the CMOS camera module based on the Kirchhoff diffraction formula and propose a deblurring model. The synthesized stereo image is monitored in real time on a shutter-glasses-type three-dimensional LCD monitor, and the disparity values of each segment are analyzed to demonstrate the validity of the ROI emphasizing effect.
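The disparity analysis mentioned above can be illustrated with a standard block-matching scheme. The sketch below is a generic sum-of-squared-differences (SSD) matcher, not the paper's CUDA implementation; the function name and parameters are hypothetical.

```python
import numpy as np

def disparity_ssd(left, right, max_disp=4, block=3):
    """Naive SSD block matching on a rectified grayscale pair.

    For each pixel in the left image, search up to max_disp pixels
    leftward in the right image for the best-matching block.
    """
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w))
    for y in range(half, h - half):
        for x in range(half, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            best_cost, best_d = np.inf, 0
            for d in range(min(max_disp, x - half) + 1):
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1]
                cost = np.sum((patch - cand) ** 2)
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp

# Synthetic check: the left view is the right view shifted by 2 px,
# so the recovered disparity in the interior should be 2.
rng = np.random.default_rng(0)
right = rng.random((16, 16))
left = np.roll(right, 2, axis=1)
disp = disparity_ssd(left, right, max_disp=4, block=3)
```

A real pipeline would add subpixel refinement and a per-segment aggregation of these disparity values.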
In this paper, we present an algorithm for recovering an original image that becomes blurred, due to the size of the aperture of the CMOS camera module, when capturing objects near the camera. We introduce the mathematical properties of a circulant matrix, which can be used to describe the point spread function (PSF), and propose a new algorithm based on this matrix. We suggest new algorithms for both the one-dimensional and two-dimensional signal processing cases. The proposed algorithms were validated by computer simulations on two-dimensional images synthesized with a CMOS camera model based on a pinhole camera model previously proposed by our research group.
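The circulant-matrix view of blurring can be sketched concretely: a circulant matrix is diagonalized by the discrete Fourier transform, so applying and inverting a circular blur reduces to pointwise multiplication and division in the frequency domain. The one-dimensional example below illustrates that standard property; it is not the paper's proposed algorithm, and the PSF values are hypothetical.

```python
import numpy as np

def circulant_blur(x, psf):
    """Blur x with a circulant matrix whose first column is the
    zero-padded PSF, computed via the DFT (circular convolution)."""
    n = len(x)
    h = np.zeros(n)
    h[:len(psf)] = psf
    return np.real(np.fft.ifft(np.fft.fft(h) * np.fft.fft(x)))

def circulant_deblur(y, psf, eps=1e-8):
    """Invert the circular blur by dividing in the frequency domain;
    eps regularizes near-zero DFT coefficients of the PSF."""
    n = len(y)
    h = np.zeros(n)
    h[:len(psf)] = psf
    H = np.fft.fft(h)
    return np.real(np.fft.ifft(np.fft.fft(y) / (H + eps)))

x = np.array([0., 0., 1., 2., 3., 2., 1., 0.])   # test signal
psf = np.array([0.5, 0.3, 0.2])                  # hypothetical PSF
y = circulant_blur(x, psf)
x_hat = circulant_deblur(y, psf)
```

The two-dimensional case follows the same pattern with a block-circulant matrix and the 2-D DFT.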
This paper presents a new model of a complementary metal-oxide-semiconductor (CMOS) camera using combinations of several pinhole camera models, and its validity is verified using stereo images synthesized with OpenGL software. Our embedded three-dimensional (3-D) image capturing hardware system consists of five motor controllers and two CMOS camera modules based on an S3C6410 processor. An optimal alignment for capturing nine segment images, each with its own convergence plane, is implemented using a PI controller based on measures of alignment and sharpness. A new synthesizing fusion of the optimized nine segment images is proposed for the best 3-D depth perception. The experimental disparity values in each of the nine segments show that the multi-segment method proposed in this paper improves the perception of 3-D depth in stereo images.
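The alignment step above relies on a proportional-integral (PI) control loop. The following is a minimal discrete-time PI sketch driving a pure-integrator plant toward a target; the gains, time step, and class name are illustrative assumptions, not values from the paper.

```python
class PIController:
    """Discrete PI controller: u[k] = Kp*e[k] + Ki * sum(e) * dt."""

    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def step(self, error):
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral

# Hypothetical use: drive a motor angle toward a target alignment,
# modeling the motor as a simple integrator (angle += u * dt).
ctrl = PIController(kp=0.8, ki=0.5, dt=0.01)
angle, target = 0.0, 1.0
for _ in range(3000):
    angle += ctrl.step(target - angle) * 0.01
```

In the actual system the error signal would come from the alignment and sharpness measures rather than a known target angle.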
In this paper, we introduce the hardware/software technology used to implement a 3-D stereo image capturing system built with two OV3640 CMOS camera modules and camera interface hardware implemented in an S3C6410 MCP. We also propose a multi-segment capture method for better 3-D depth perception. An image is composed of nine segmented sub-images, each captured using two degrees of freedom in the DC servos of the left and right CMOS camera modules to address the focusing problem in each segmented sub-image. First, the whole image is analyzed. We expect that this new method will improve the comfort of 3-D depth perception even though its synthesizing method is somewhat complicated.