In consumer imaging, the spatial resolution of thermal microbolometer arrays is limited by the large physical size of the individual detector elements, which also limits the number of pixels per image. If thermal sensors are to find a place in consumer imaging, as the newly released FLIR One would suggest, this resolution issue must be addressed. Our work focuses on improving the output quality of low-resolution thermal cameras through computational means. The proposed method exploits sub-pixel shifts and temporal variations in the scene, combining information from the thermal and visible channels. Results from simulations and lab experiments are presented.
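As an illustration of the multi-frame approach described above, the sketch below shows a minimal shift-and-add super-resolution step in Python, assuming the sub-pixel shifts between low-resolution thermal frames are already known; the function name and the nearest-neighbour placement are illustrative choices, not the paper's actual algorithm.

    import numpy as np

    def shift_and_add(frames, shifts, scale):
        # frames: list of (H, W) low-resolution thermal frames
        # shifts: list of (dy, dx) sub-pixel offsets, in low-res pixel units
        # scale : integer super-resolution factor
        h, w = frames[0].shape
        acc = np.zeros((h * scale, w * scale))
        weight = np.zeros_like(acc)
        for frame, (dy, dx) in zip(frames, shifts):
            # Place every low-res sample on the nearest high-res grid node.
            yi = np.clip(np.round((np.arange(h)[:, None] + dy) * scale).astype(int),
                         0, h * scale - 1)
            xi = np.clip(np.round((np.arange(w)[None, :] + dx) * scale).astype(int),
                         0, w * scale - 1)
            np.add.at(acc, (yi, xi), frame)
            np.add.at(weight, (yi, xi), 1.0)
        # Average overlapping contributions; cells no frame landed on stay zero and
        # would be filled by interpolation in a complete implementation.
        return np.divide(acc, weight, out=np.zeros_like(acc), where=weight > 0)

In practice the visible channel would supply the registration (the sub-pixel shift estimates), while the thermal frames supply the samples being fused.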
Motion due to digital camera movement during the image capture process is a major factor that degrades image quality, and many methods for camera motion removal have been developed. Central to all techniques is the correct recovery of what is known as the Point Spread Function (PSF). A very popular technique for estimating the PSF relies on a pair of gyroscopic sensors that measure the hand motion. However, errors caused either by the loss of the translational component of the movement or by the limited precision of the gyro-sensor measurements prevent a good-quality restored image from being obtained. To compensate for this, we propose a method that begins with an estimate of the PSF obtained from two gyro sensors and uses an under-exposed image together with the blurred image to adaptively improve it.
The luminance of the under-exposed image is equalized with that of the blurred image. An initial estimate of the PSF is generated from the output signal of the two gyro sensors. The PSF coefficients are then updated using 2D Least Mean Squares (LMS) algorithms with a coarse-to-fine approach on a grid of points selected from both images.
The refined PSF is used to process the blurred image with known deblurring methods. Our results show that the proposed method leads to superior estimation of both the PSF support and its coefficients. The quality of the restored image is also improved compared with the gyro-only approach and with blind image deconvolution.
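To make the adaptive refinement step concrete, the following sketch applies a plain 2D-LMS update to an initial PSF, assuming the luminance-equalized under-exposed image serves as the reference for the latent sharp image; the step size, grid spacing and single-scale loop are placeholders for the coarse-to-fine scheme described above, not the authors' exact procedure.

    import numpy as np

    def lms_refine_psf(sharp, blurred, psf_init, mu=1e-4, passes=3, step=8):
        # sharp   : luminance-equalized under-exposed image (stand-in for the latent image)
        # blurred : observed motion-blurred image
        # psf_init: initial kernel estimated from the two gyro sensors
        h = psf_init.astype(float).copy()
        kh, kw = h.shape
        padded = np.pad(sharp.astype(float),
                        ((kh // 2, kh // 2), (kw // 2, kw // 2)), mode="edge")
        for _ in range(passes):
            # Visit a sparse grid of points to keep the update cheap.
            for y in range(0, blurred.shape[0], step):
                for x in range(0, blurred.shape[1], step):
                    window = padded[y:y + kh, x:x + kw]
                    err = float(blurred[y, x]) - np.sum(h * window)
                    h += mu * err * window          # 2D-LMS coefficient update
            h = np.clip(h, 0.0, None)
            h /= h.sum() + 1e-12                    # keep the kernel normalized
        return h

The refined kernel would then be passed to any standard non-blind deblurring routine, for example Richardson-Lucy or Wiener deconvolution.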
Extracting people from the background in digital photography is a task of great importance, with many applications for digital cameras. Yet the task poses a number of challenging technical problems. In this paper we propose a novel technique for extracting people from the background that is both accurate and of low computational complexity, and therefore amenable to being embedded in digital cameras. The proposed technique uses frames from the camera's live-view mode (called previews), now widely available in digital cameras and even in the latest DSLRs, in conjunction with the flash. The basic principle of the method is to acquire two images of the same scene, one with flash and one without. The use of preview images instead of two captured images makes the solution easily embeddable in digital cameras. In the proposed setup, in daylight conditions, the flash is triggered at the time of the penultimate preview image. The mask of the subject is then computed from the intensity difference between the last two previews. For night scenes, where the flash power is required for the acquisition of the actual picture, the subject is detected from the intensity difference between the final image downsampled to the size of the preview and the average of the last two previews. Additional problems posed by this setup, e.g. misalignments, false positives and an incomplete subject map, are also addressed. The resulting foreground map is further used to obtain a narrow depth-of-field version of the initial photograph by keeping the foreground unaltered while blurring the background.
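A minimal sketch of the daylight case is given below, assuming two aligned grayscale previews and a fixed threshold; the threshold value and the morphological clean-up are illustrative stand-ins for the misalignment and false-positive handling described above.

    import numpy as np
    from scipy import ndimage

    def foreground_mask(preview_no_flash, preview_flash, thresh=0.08):
        # In daylight the flash mainly brightens the nearby subject, so the
        # intensity difference between the two previews marks the foreground.
        diff = preview_flash.astype(float) - preview_no_flash.astype(float)
        diff /= max(float(diff.max()), 1e-6)            # normalize to roughly [0, 1]
        mask = diff > thresh
        # Simple morphology to suppress false positives and fill holes in the map.
        mask = ndimage.binary_opening(mask, iterations=2)
        mask = ndimage.binary_fill_holes(mask)
        return mask

    def shallow_dof(image, mask, sigma=5.0):
        # Blur the background while keeping the masked foreground untouched.
        blurred = ndimage.gaussian_filter(image.astype(float), sigma=sigma)
        return np.where(mask, image.astype(float), blurred)

In a real pipeline the mask computed at preview resolution would be upsampled to the full-resolution photograph before the background blur is applied.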
In recent years, the rapid evolution of digital photography has led to increasing interest in developing algorithms for indexing and classifying collections of digital images. This paper presents an automatic system for organizing and browsing consumer digital image collections using the persons in the images as patterns. To implement such an automatic system, we have to detect and classify the people in the images according to their similarities. For this we employ algorithms for face detection and face recognition, together with additional methods to cope with the large variations that are usually present in consumer images. These additional methods include using more than one type of classifier for face recognition, as well as additional information about a person's characteristics extracted from regions other than the face. This additional information is more robust to the factors that reduce the accuracy of classical face recognition systems on consumer images. The proposed system was tested on a typical consumer image collection, and practical applications of the system are presented at the end.
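As a rough illustration of the grouping stage, the sketch below combines face descriptors with a descriptor taken from a region other than the face (for example clothing colour below the face) and clusters the detections; the descriptor sources, weights and clustering cut-off are assumptions made for the example, not the system's actual classifiers.

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    def group_people(face_desc, body_desc, w_face=0.7, cut=1.2):
        # face_desc: (N, Df) descriptors from a face recognizer
        # body_desc: (N, Db) descriptors from a region other than the face
        def zscore(x):
            return (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-9)
        # Weighted combination of the two cues keeps the grouping usable
        # when pose or lighting degrades the face descriptor alone.
        feats = np.hstack([w_face * zscore(face_desc),
                           (1.0 - w_face) * zscore(body_desc)])
        # Agglomerative clustering: detections of the same person end up
        # in the same cluster, which then drives browsing and indexing.
        tree = linkage(feats, method="average", metric="euclidean")
        return fcluster(tree, t=cut, criterion="distance")

Each returned cluster label would correspond to one person identity in the collection.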
Changing the lens of a DSLR camera has the drawback of allowing small dust particles from the environment to be attracted onto the sensor's surface. As a result, unwanted blemishes may compromise the normally high quality of photographs. The particles can be removed by physically cleaning the sensor. A second, more general approach is to locate and remove the blemishes from digital photos by employing image processing algorithms.
This paper presents a model that allows the physical appearance of blemishes in a photograph (actual size, shape, position and transparency) to be computed as a function of camera settings.
In order to remove these blemishes with sufficient accuracy, an initial algorithm calibration must be performed for any given camera-lens pair. The purpose of this step is to estimate some parameters of the model that are not readily available. To achieve this, a set of "calibration images" must be carefully taken under conditions that make the blemishes easily identifiable. Then, based on the metadata stored in the photo's header, the actual appearance of the blemishes in the given photograph is computed and used by the automatic removal algorithm. Computation formulas and results of our experiments are also included.
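A simplified geometric version of such a model is sketched below; the relations used (shadow size growing with the cover-glass distance divided by the f-number, opacity falling as the aperture opens) are a common first-order approximation standing in for the paper's actual computing formulas, and all parameter names are illustrative.

    def blemish_appearance(particle_diam_mm, glass_to_sensor_mm, f_number, pixel_pitch_mm):
        # Width of the converging light cone at the cover glass where the dust sits;
        # this is what the aperture (f-number) read from the metadata controls.
        cone_width = glass_to_sensor_mm / f_number
        # The visible blemish is the particle plus the penumbra of its shadow.
        shadow_diam_mm = particle_diam_mm + cone_width
        # Fraction of the cone the particle blocks: smaller at wide apertures,
        # so the blemish becomes larger but more transparent.
        opacity = min(1.0, (particle_diam_mm / max(cone_width, 1e-9)) ** 2)
        return {
            "diameter_px": shadow_diam_mm / pixel_pitch_mm,
            "opacity": opacity,
        }

With the calibration step fixing the particle size and cover-glass distance for a given camera-lens pair, the f-number stored in the photo's header is enough to predict how each blemish appears in a new photograph.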
Conference Committee Involvement (5)
Digital Photography X
3 February 2014 | San Francisco, California, United States
Digital Photography IX
4 February 2013 | Burlingame, California, United States
Digital Photography VIII
23 January 2012 | Burlingame, California, United States
Digital Photography VII
24 January 2011 | San Francisco Airport, California, United States
Digital Photography VI
18 January 2010 | San Jose, California, United States