KEYWORDS: Cameras, Photography, Control systems, Digital cameras, Scene classification, Camera shutters, Digital imaging, Machine vision, Pattern recognition, Computer vision technology
In this work we propose a method for building digital still cameras that can photograph a given scene with the knowledge of photographic experts, i.e. professional photographers. Photographic experts' knowledge here means the experts' camera controls, namely shutter speed, aperture size, and ISO value, for photographing a given scene. To implement this knowledge we redefine the Scene Mode of currently available commercial digital cameras. For example, instead of the single Night Scene Mode in conventional digital cameras, we split it into 76 scene modes based on a Night Scene Representative Image Set. The idea of the night scene representative image set is that the set covers all cases of night scenes with respect to camera controls. To take appropriate pictures in all of these complex night-scene cases, each image of the representative set is paired with the corresponding photographic experts' camera controls, i.e. shutter speed, aperture size, and ISO value. Our method first matches a given scene to one of the redefined scene modes automatically, which realizes the photographic experts' knowledge. Using the representative set, we perform a likelihood analysis on the given scene to decide whether it lies within the boundary of the representative set. If the given scene is classified as belonging to the set, we then compute its similarity to each representative image by comparing correlation coefficients. Finally, the camera controls of the most similar representative image are used for taking the picture of the given scene, with fine tuning according to the degree of similarity.
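As a rough illustration of the matching step, the following Python sketch selects expert camera controls by correlating a scene's feature vector against a representative set. The feature representation, the representative set, the likelihood threshold, and the control values are hypothetical placeholders, not the authors' actual implementation.

```python
import numpy as np

# Hypothetical representative set: each entry holds an image feature vector
# and the expert camera controls (shutter speed [s], aperture f-number, ISO).
representative_set = [
    {"features": np.random.rand(256), "controls": (1 / 8, 2.8, 800)},
    {"features": np.random.rand(256), "controls": (2.0, 4.0, 400)},
    # ... one entry per redefined night-scene mode
]

def correlation(a, b):
    """Pearson correlation coefficient between two feature vectors."""
    a, b = a - a.mean(), b - b.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def select_controls(scene_features, likelihood_threshold=0.3):
    """Match a scene to its most similar representative image and return that
    image's expert camera controls; return None if the scene is judged to lie
    outside the representative set (stand-in for the likelihood test)."""
    similarities = [correlation(scene_features, r["features"])
                    for r in representative_set]
    best = int(np.argmax(similarities))
    if similarities[best] < likelihood_threshold:
        return None
    return representative_set[best]["controls"], similarities[best]
```

In practice the returned similarity could drive the fine tuning of the selected controls mentioned above.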
Despite the rapid spread of digital cameras, many people cannot take the high-quality pictures they want because they lack photographic knowledge. To help users in unfavorable capturing environments, e.g. 'Night', 'Backlighting', 'Indoor', or 'Portrait', the automatic mode of cameras provides parameter sets defined by manufacturers. Unfortunately, this automatic functionality does not give pleasing image quality in general. In particular, the length of exposure (shutter speed) is a critical factor in taking high-quality pictures at night. One of the key causes of poor quality at night is image blur, which mainly comes from hand shake during long exposures. In this study, to circumvent this problem and to enhance the image quality of automatic cameras, we propose an intelligent camera processing core comprising SABE (Scene Adaptive Blur Estimation) and VisBLE (Visual Blur Limitation Estimation). SABE analyzes the high-frequency components in the DCT (Discrete Cosine Transform) domain. VisBLE determines the acceptable blur level on the basis of human visual tolerance and a Gaussian model. This visual tolerance model is developed from the physiological mechanism of human perception. In the experiments, the proposed method outperforms existing imaging systems, as judged by general users and photographers alike.
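To make the DCT-based idea behind SABE concrete, here is a minimal sketch that scores blur by measuring how little energy remains in the high-frequency DCT coefficients of 8x8 blocks. The block size, frequency cutoff, and scoring rule are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np
from scipy.fftpack import dct

def block_dct2(block):
    """2-D type-II DCT of a square block."""
    return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def blur_score(gray, block=8, cutoff=4):
    """Fraction of DCT energy above the cutoff frequency, averaged over blocks.
    Lower values indicate stronger blur (high frequencies are suppressed)."""
    h, w = gray.shape
    u, v = np.meshgrid(np.arange(block), np.arange(block), indexing="ij")
    high_mask = (u + v) > cutoff  # high-frequency region of each block
    scores = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            c = block_dct2(gray[y:y + block, x:x + block].astype(np.float64))
            total = np.sum(c ** 2) + 1e-12
            scores.append(np.sum(c[high_mask] ** 2) / total)
    return float(np.mean(scores))
```

A VisBLE-style component would then compare such a score against a visually acceptable blur threshold derived from the tolerance model.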
This paper describes a new method for fast auto focusing in image capturing devices, achieved by using two defocused images. Two defocused images are taken at two prefixed lens positions, and the defocus blur level in each image is estimated using the Discrete Cosine Transform (DCT). These DCT values can be mapped to the distance from the image capturing device to the main object, so we can build a distance-versus-defocus-blur-level classifier. With this classifier, the relation between the two defocus blur levels gives the device the best-focused lens step. Ordinary auto focusing such as Depth from Focus (DFF) needs several defocused images and compares the high-frequency components of each image; also known as the hill-climbing method, this process in general requires roughly half the number of images over all focus lens steps. Since the new method requires only two defocused images, it saves considerable focusing time and reduces shutter lag. Compared to existing Depth from Defocus (DFD), which also uses two defocused images, the new algorithm is simpler and as accurate as the DFF method. Because of this simplicity and accuracy, the method can also be applied to fast 3D depth map construction.
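A minimal sketch of the two-image idea, under assumed interfaces: the DCT-based blur measure and the calibration table mapping the ratio of the two blur levels to a lens step are hypothetical stand-ins for the classifier described above.

```python
import numpy as np
from scipy.fftpack import dct

def dct_blur_level(gray):
    """High-frequency DCT energy used as a defocus-blur measure (assumed)."""
    c = dct(dct(gray.astype(np.float64), axis=0, norm="ortho"),
            axis=1, norm="ortho")
    return float(np.sum(np.abs(c[8:, 8:])))

# Hypothetical calibration table: ratio of blur levels at the two prefixed
# lens positions -> best focus lens step (built offline from known distances).
CALIBRATION = [
    (0.25, 40), (0.50, 60), (1.00, 80), (2.00, 100), (4.00, 120),
]

def best_focus_step(img_pos1, img_pos2):
    """Estimate the best focus lens step from two defocused images
    taken at the two prefixed lens positions."""
    ratio = dct_blur_level(img_pos1) / (dct_blur_level(img_pos2) + 1e-12)
    # Nearest-neighbour lookup in the calibration table.
    return min(CALIBRATION, key=lambda rc: abs(rc[0] - ratio))[1]
```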
Utilizing off-the-shelf, low-cost parts, we have constructed a robot that is small, light, powerful, and relatively inexpensive (< $3900). The system is built around the Beowulf concept of linking multiple discrete computing units into a single cooperative system. The goal of this project is to demonstrate a new robotics platform with sufficient computing resources to run biologically inspired vision algorithms in real time. This is accomplished by connecting two dual-CPU embedded PC motherboards with fast gigabit Ethernet. The motherboards provide integrated FireWire, USB, and serial connections to handle the camera, servomotors, GPS, and other miscellaneous inputs/outputs. The computing system is mounted on a servomechanism-controlled, off-the-shelf "Off Road" RC car. Using the high-performance characteristics of the car, the robot can attain relatively high speeds outdoors. The robot is used as a test platform for biologically inspired as well as traditional robotic algorithms in outdoor navigation and exploration activities. Leader following using multi-blob tracking and segmentation, and navigation using statistical information and decision inference from image spectral information, are discussed. The design of the robot is open source and is constructed in a manner that enhances ease of replication. This is done to facilitate the construction and development of mobile robots at research institutions where large financial resources may not be readily available, as well as to put robots into the hands of hobbyists and help lead to the next stage in the evolution of robotics: a home hobby robot with potential real-world applications.
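For the leader-following mention above, a crude colour-based multi-blob segmentation step might look like the sketch below; the marker colour, tolerance, and minimum blob area are illustrative assumptions, not the platform's actual tracker.

```python
import numpy as np
from scipy import ndimage

def leader_blobs(frame_rgb, target=(255, 0, 0), tol=60, min_area=50):
    """Segment pixels close to the leader's marker colour and return the
    centroid of each sufficiently large blob (a crude multi-blob tracker)."""
    dist = np.linalg.norm(frame_rgb.astype(np.int32) - np.array(target), axis=-1)
    mask = dist < tol
    labels, n = ndimage.label(mask)
    blobs = []
    for i in range(1, n + 1):
        if np.sum(labels == i) >= min_area:
            blobs.append(ndimage.center_of_mass(mask, labels, i))
    return blobs  # list of (row, col) centroids; e.g. steer toward the largest
```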