This PDF file contains the front matter associated with SPIE-IS&T Proceedings Volume 7250, including the Title Page, Copyright information, Table of Contents, Introduction, and Conference Committee listing.
A new-generation full-frame 36 × 48 mm², 48Mp CCD image sensor with vertical anti-blooming for professional digital still camera applications has been developed by means of the so-called building-block concept. The 48Mp devices are formed by stitching 1k × 1k building blocks with a 6.0 µm pixel pitch in a 6 × 8 (h × v) format. This concept allows us to design four large-area (48Mp) and sixty-two basic (1Mp) devices per 6" wafer. The basic image sensor is kept relatively small in order to obtain data from many devices: evaluation of basic parameters such as the image pixel and the on-chip amplifier provides statistical data from a limited number of wafers, whereas the large-area devices are evaluated for aspects typical of large-sensor operation and performance, such as charge transport efficiency. Combined with the use of multi-layer reticles, this makes sensor development cost-effective for prototyping.
Optimisation of the sensor design and technology has resulted in a pixel charge capacity of 58 ke⁻ and significantly reduced readout noise (12 electrons at a 25 MHz pixel rate, after CDS). Hence, a dynamic range of 73 dB is obtained. Microlens and stack optimisation resulted in an excellent angular response that meets the demands of wide-angle photography.
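The quoted dynamic range follows directly from the full-well capacity and the read noise given in the abstract:

```latex
\mathrm{DR} = 20\log_{10}\frac{Q_{\mathrm{sat}}}{\sigma_{\mathrm{read}}}
            = 20\log_{10}\frac{58\,000\,e^-}{12\,e^-} \approx 73.7\ \mathrm{dB}
```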
Photo collection efficiency (proportional to the sensitivity of the photo sensor) and color crosstalk, both optical and electrical, are extremely important CMOS image sensor (CIS) pixel parameters. In measured QE data, photo collection efficiency and crosstalk information are mixed, and it is difficult to disentangle the contribution of each to the raw QE spectrum. In our pixel optimization work, it is desirable to extract each component and to further separate the contribution of the color filter array (CFA) from the fundamental processes in and above the silicon. In this paper, a new approach is introduced to extract the QE data related to the Si processing and decompose it into two components: the mono QE and the crosstalk spectrum. Using this approach, one may gauge the impact of pixel-structure differences and realize the sensor design goals needed to achieve the targeted system performance.
CMOS image sensors are now widely used in digital imaging systems as pixel size has steadily decreased to allow higher-resolution imaging. When the pixel size scales below 2 µm, however, microlens performance is significantly affected by diffraction from the edges of the image sensor pixel. This results not only in quantitative performance degradation, but also in a qualitative shift in functionality. We perform a systematic analysis of microlens design during lateral scaling of CMOS image sensor pixels. The optical efficiency and optical crosstalk are calculated with a first-principles finite-difference time-domain (FDTD) method. We find that there are two regimes of operation for three-metal-layer pixels depending on pixel size and wavelength: a refraction-dominated regime for pixel sizes larger than 1.45 µm and a diffraction-dominated regime for pixel sizes smaller than 1.45 µm. In the refraction-dominated regime, the microlens can be designed and optimized to perform its concentrating function. In the diffraction-dominated regime, the optimal radii of curvature for microlenses are very large; a flat microlens layer, in fact, becomes the best choice, and performance is severely degraded. Under these circumstances, the microlens no longer fulfills its optical function as a focusing element. To extend the functionality of the microlens beyond the 1.45 µm node, we predict that a one-metal-layer dielectric stack or shorter will be required.
In this paper, we numerically quantify the information capacity of a sensor by examining the different factors that can limit this capacity, namely sensor spectral response, noise, and sensor blur (due to fill factor, crosstalk, and diffraction, for a given aperture). In particular, we compare the effectiveness of the raw color space for different kinds of sensors. We also define an intrinsic notion of color sensitivity that generalizes some of our previous work, and we discuss how metamerism can be represented for a sensor.
A modification to the standard Bayer CFA and photodiode structure for CMOS image sensors is proposed, which we call 2PFC™, meaning "Two Pixel, Full Color". The blue and red filters of the Bayer pattern are replaced by magenta filters. Under each magenta filter are two stacked, pinned photodiodes; the diode nearest the surface absorbs mostly blue light, and the deeper diode absorbs mostly red light. The magenta filter absorbs green light, improving color separation between the resulting blue and red diodes. The dopant implant defining the bottom of the red-absorbing region can be made the same as for the green diodes, simplifying fabrication. Since the spatial resolutions of the red, green, and blue channels are identical, color aliasing is greatly reduced. Luminance resolution can also be improved; the thinner diodes lead to higher well capacity and hence better dynamic range; and fabrication costs can be similar to or less than those of standard Bayer CMOS imagers. Also, the geometry of the layout lends itself naturally to frequency-based demosaicing.
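The wavelength selectivity of the stacked diodes follows from the strong wavelength dependence of absorption in silicon. A minimal sketch, assuming rough textbook absorption depths and a hypothetical stack geometry (neither taken from the paper):

```python
import numpy as np

# Light entering silicon decays as exp(-alpha(lambda) * z) (Beer-Lambert).
# The absorption depths (1/alpha) below are rough textbook values for
# silicon, used here only for illustration.
absorption_depth_um = {"blue_450nm": 0.4, "green_550nm": 1.5, "red_650nm": 3.0}

def fraction_absorbed(depth_um, z_top_um, z_bottom_um):
    """Fraction of incident photons absorbed between two depths."""
    alpha = 1.0 / depth_um
    return np.exp(-alpha * z_top_um) - np.exp(-alpha * z_bottom_um)

# Hypothetical stacked geometry: shallow diode 0-0.6 um, deep diode 0.6-3 um.
for name, d in absorption_depth_um.items():
    shallow = fraction_absorbed(d, 0.0, 0.6)
    deep = fraction_absorbed(d, 0.6, 3.0)
    print(f"{name}: shallow diode {shallow:.2f}, deep diode {deep:.2f}")
```

With these numbers, the shallow diode collects about 78% of blue photons but only 18% of red ones, while the deep diode collects the reverse, which is the color separation the stacked structure exploits.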
Under low illumination conditions, such as moonlight, there simply are not enough photons present to create a high quality color image with integration times that avoid camera-shake. Consequently, conventional imagers are designed for daylight conditions and modeled on human cone vision. Here, we propose a novel sensor design that parallels the human retina and extends sensor performance to span daylight and moonlight conditions. Specifically, we describe an interleaved imaging architecture comprising two collections of pixels. One set of pixels is monochromatic and high sensitivity; a second, interleaved set of pixels is trichromatic and lower sensitivity. The sensor implementation requires new image processing techniques that allow for graceful transitions between different operating conditions. We describe these techniques and simulate the performance of this sensor under a range of conditions. We show that the proposed system is capable of producing high quality images spanning photopic, mesopic and near scotopic conditions.
Most digital cameras employ a spatial subsampling process, implemented as a color filter array (CFA), to capture
color images. The choice of CFA patterns has a great impact on the performance of subsequent reconstruction
(demosaicking) algorithms. In this work, we propose a quantitative theory for optimal CFA design. We view
the CFA sampling process as an encoding (low-dimensional approximation) operation and, correspondingly,
demosaicking as the best decoding (reconstruction) operation. Finding the optimal CFA is thus equivalent to
finding the optimal approximation scheme for the original signals with minimum information loss. We present
several quantitative conditions for optimal CFA design, and propose an efficient computational procedure to
search for the best CFAs that satisfy these conditions. Numerical experiments show that the optimal CFA
patterns designed from the proposed procedure can effectively retain the information of the original full-color
images. In particular, with the designed CFA patterns, high quality demosaicking can be achieved by using
simple and efficient linear filtering operations in the polyphase domain. The visual qualities of the reconstructed
images are competitive with those obtained by state-of-the-art adaptive demosaicking algorithms based on the
Bayer pattern.
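Since the optimized patterns admit demosaicking by per-phase linear filters, the decoding step can be sketched directly. A minimal sketch, assuming a 2×2-periodic CFA and least-squares-fitted filters; the patch size, training procedure, and names are illustrative, not the paper's:

```python
import numpy as np

def mosaic(rgb, cfa):
    """Sample one channel per pixel; cfa is a 2x2 array of channel indices."""
    h, w, _ = rgb.shape
    idx = cfa[np.arange(h)[:, None] % 2, np.arange(w)[None, :] % 2]
    return np.take_along_axis(rgb, idx[..., None], axis=2)[..., 0]

def train_filters(rgb, cfa, k=5):
    """Fit, per sampling phase, a linear map from a k x k mosaic patch
    to the full RGB value at the patch center (least squares)."""
    m, r = mosaic(rgb, cfa), k // 2
    filters = {}
    for py in range(2):
        for px in range(2):
            patches, targets = [], []
            for y in range(r + py, rgb.shape[0] - r, 2):
                for x in range(r + px, rgb.shape[1] - r, 2):
                    patches.append(m[y-r:y+r+1, x-r:x+r+1].ravel())
                    targets.append(rgb[y, x])
            A, b = np.array(patches), np.array(targets)
            filters[(py, px)] = np.linalg.lstsq(A, b, rcond=None)[0]
    return filters  # one (k*k x 3) matrix per phase
```

For an RGGB Bayer-like layout one would pass `cfa = np.array([[0, 1], [1, 2]])` (R, G, B channel indices); reconstruction then applies each phase's matrix as a dot product over the same patches, i.e., plain linear filtering in the polyphase domain.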
This paper presents two new architectures of an image-processing (IP) pipeline for a digital color camera, referred to as the prior-decomposition approach and the posterior-decomposition approach. The prior-decomposition approach first decomposes each primary color channel of the mosaicked raw color data into a product of two components, a structure component and a texture component, with a proper multiplicative monochrome-image decomposition method such as the BV-G variational decomposition method. Each component is then demosaicked with its proper color-interpolation method. Next, white balancing, color enhancement, and inverse gamma correction are applied to the structure component only. Finally, the two components are combined. The posterior-decomposition approach, on the other hand, first interpolates the mosaicked raw color data with a proper demosaicking method, and then decomposes the demosaicked color image with our proposed multiplicative color-image decomposition method utilizing inter-channel color cross-correlations. The subsequent processing is performed in the same manner as in the prior-decomposition approach. Both architectures produce a high-quality full-color output image but differ somewhat in performance. We compare their performance experimentally and discuss their merits and demerits.
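The multiplicative split at the heart of both architectures can be illustrated compactly. A minimal sketch, with Gaussian smoothing standing in for the paper's BV-G variational decomposition, so this shows only the pipeline shape, not the actual algorithm:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiplicative_decompose(channel, sigma=4.0, eps=1e-6):
    """Split a positive image channel into structure * texture.
    Gaussian smoothing is a crude stand-in for a BV-G split."""
    structure = gaussian_filter(channel, sigma) + eps
    texture = channel / structure  # multiplicative residual, close to 1.0
    return structure, texture

def process(channel):
    s, t = multiplicative_decompose(channel)
    # Tone/color operations (white balance, enhancement, inverse gamma)
    # are applied to the structure component only, as in the paper.
    s = np.clip(1.2 * (s - s.mean()) + s.mean(), 0, None)
    return s * t  # recombine structure and untouched texture
```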
Chrominance noise appears as low-frequency colored blotches throughout an image, especially in darker flat areas. The effect is more pronounced at lower light levels, where the characteristic features are observed as irregularly shaped clusters of colored pixels that vary anywhere from 15 to 25 pixels across. This paper proposes a novel, simple, and intuitive method of reducing chrominance noise in processed images while minimizing color-bleeding artifacts. The approach is based on a hybrid multiscale spatial dual-tree adaptive wavelet filter in hue-saturation-value color space. Results are provided in terms of comparisons on real images between the proposed method and another state-of-the-art method.
Post-processing algorithms are usually placed in the pipeline of imaging devices to remove residual color artifacts introduced by the demosaicing step. Although demosaicing solutions aim to eliminate, limit, or correct false colors and other impairments caused by non-ideal sampling, post-processing techniques are usually more powerful in achieving this purpose, mainly because the input of a post-processing algorithm is a fully restored RGB color image. Moreover, post-processing can be applied more than once in order to meet some quality criteria. In this paper we propose an effective technique, operating in the YCrCb color space, for reducing the color artifacts generated by conventional color-interpolation algorithms. This solution efficiently removes false colors and can be executed while performing the edge-emphasis process.
Current color constancy methods are based on processing the sensor's RGB data to estimate the color of the illumination. Unlike previous methods, whitebalPR measures the illuminant by separating the diffuse and specular components in a scene, taking advantage of the polarizing effect that occurs upon light reflection. Polarization difference imaging (PDI) detects the degree of polarization of the neutrally reflected (specular) parts and eliminates the remitted (diffuse), non-polarized, colored parts.
Different experiments explore the signal level within the polarization difference image in relation to multicolored objects, different object surfaces, and the arrangement of light source, camera, and object. The results exhibit a high accuracy in measuring the color of the illumination for glossy and matte surfaces. As these setups work best for achromatic objects, a new approach to the data analysis combines the ideas of the dichromatic reflection model (DRM) and whitebalPR and delivers reliable results for mainly colored objects. Unlike the DRM, which needs to segment the image according to the objects in the scene, the new proposal (polarization difference line imaging, PDLI) is independent of any knowledge of the image content. A further, arbitrary segmentation of the image into macro-pixels of any size reduces the computational effort and diminishes the impact of noise on the PDI signal. A corresponding experiment visualizes the relationship between the size of the macro-pixels, the angle of incidence, and the accuracy of the process. To sum up, by means of this segmentation the PDLI process gains further stability in detecting the color of the illuminant while the computational effort decreases.
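The basic PDI computation can be sketched in a few lines. A minimal sketch, assuming two registered captures through a polarizer at orthogonal orientations; the function name, macro-pixel size, and the simple mean-based estimate are illustrative assumptions, not the paper's procedure:

```python
import numpy as np

def estimate_illuminant(img_pol0, img_pol90, block=16):
    """PDI: the difference of two orthogonally polarized captures suppresses
    the non-polarized diffuse (body) reflection and keeps the specular part,
    whose color is the color of the illuminant."""
    pdi = np.abs(img_pol0.astype(float) - img_pol90.astype(float))
    h, w, _ = pdi.shape
    # Average over macro-pixels to reduce noise, as proposed for PDLI.
    blocks = pdi[:h - h % block, :w - w % block].reshape(
        h // block, block, w // block, block, 3).mean(axis=(1, 3))
    rgb = blocks.reshape(-1, 3).mean(axis=0)
    return rgb / np.linalg.norm(rgb)  # illuminant chromaticity estimate
```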
Digital camera sensors are sensitive to wavelengths ranging from the ultraviolet (200-400 nm) to the near-infrared (700-1100 nm) bands. This range is, however, reduced because the aim of photographic cameras is to capture and reproduce the visible spectrum (400-700 nm) only. Ultraviolet radiation is filtered out by the optical elements of the camera, while a specifically designed "hot mirror" is placed in front of the sensor to prevent near-infrared contamination of the visible image.

We propose that near-infrared data can actually prove remarkably useful in colour constancy, both to estimate the incident illumination and to detect the locations of different illuminants in a multiply lit scene. Looking at the spectral power distributions of common illuminants shows that very strong differences exist between the near-infrared and visible bands; e.g., incandescent illumination peaks in the near-infrared, while fluorescent sources are mostly confined to the visible band.

We show that illuminants can be estimated by simply looking at the ratios of two images: a standard RGB image and a near-infrared-only image. As the differences between illuminants are amplified in the near-infrared, this estimation proves to be more reliable than using only the visible band. Furthermore, in most multiple-illumination situations one of the lights will be predominantly near-infrared-emitting (e.g., flash, incandescent) while the other will be mostly visible-emitting (e.g., fluorescent, skylight). Using near-infrared and RGB image ratios allows us to accurately pinpoint the locations of the diverse illuminants and recover a lighting map.
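The ratio idea is simple enough to sketch directly. A minimal sketch, assuming registered RGB and NIR captures; the threshold value and the use of mean-of-channels luminance are illustrative assumptions:

```python
import numpy as np

def lighting_map(rgb, nir, thresh=1.0, eps=1e-6):
    """Per-pixel NIR-to-visible ratio: incandescent/flash light is strong in
    the NIR while fluorescent light is mostly visible, so the ratio separates
    the two illuminant classes described in the abstract."""
    luminance = rgb.astype(float).mean(axis=2)
    ratio = nir.astype(float) / (luminance + eps)
    return np.where(ratio > thresh, "incandescent/flash", "fluorescent/skylight")
```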
Holistic representations of natural scenes are an effective and powerful source of information for the semantic classification and analysis of arbitrary images. Recently, the frequency domain has been successfully exploited to holistically encode the content of natural scenes in order to obtain a robust representation for scene classification. Despite technological advances in hardware and software, consumer single-sensor imaging devices are still quite far from being able to recognize scenes and/or exploit the visual content during (or after) acquisition. In this paper we consider the properties of a scene with respect to its naturalness. The proposed method exploits a holistic representation of the scene obtained directly in the DCT domain, fully compatible with the JPEG format. Experimental results confirm the effectiveness of the proposed method.
Recent efforts in CMOS image sensor design have focused on reducing pixel size to increase resolution given a fixed package size. This scaling comes at a cost, as less light is incident on each pixel, potentially leading to poor image quality caused by photon shot noise. One solution to this problem is to allow the imaging or objective lens to capture more light by decreasing its f-number. The larger cone of accepted light resulting from a lower f-number, however, can lead to decreased optical efficiency and increased spatial optical crosstalk at the pixel level when the microlens is not able to properly focus the incident light. In this work, we investigate the effects of imaging lens f-number on sub-2 µm CMOS image sensor pixel performance. The pixel is considered as an optical system with an f-number, defined as the ratio of the pixel height to width, and we predict the performance of a realistic pixel structure subject to illumination from an objective lens. For our predictions, we use finite-difference time-domain (FDTD) simulation with continuous-wave, diffraction-limited illumination characterized by the f-number of the imaging lens. The imaging lens f-numbers are chosen to maintain resolution and incident optical power as pixel size scales, while the pixel f-number is varied by modifying the height of the pixel structure. As long as pixel f-number is scaled to match the imaging f-number when pixel size is scaled, optical efficiency and crosstalk for on-axis illumination will not be significantly affected down to the 1.2 µm pixel node. We find the same trend for system MTF, which does not seem to suffer from diffraction effects.
A method for evaluating the texture quality of images shot by a camera is presented. It is shown that the usual sharpness measurements are not completely satisfactory for this task. A new target based on random geometry is proposed; it uses the so-called dead leaves model. It contains objects of any size at any orientation and shares common statistics with natural images. Experiments show that the correlation between objective measurements derived from this target and subjective measurements conducted in the Camera Phone Image Quality initiative is excellent.
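The dead leaves model is easy to reproduce. A minimal sketch, assuming opaque disks with a power-law radius distribution drawn front to back (the exponent and radius range are typical choices for approximate scale invariance, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def dead_leaves(size=512, n_disks=20000, r_min=2.0, r_max=200.0, alpha=3.0):
    """Dead-leaves chart: disks are dropped front-to-back, so later disks
    never overwrite earlier ones (occlusion)."""
    img = np.full((size, size), np.nan)  # NaN marks not-yet-covered pixels
    yy, xx = np.mgrid[:size, :size]
    for _ in range(n_disks):
        # Inverse-transform sample r from p(r) ~ r^(-alpha) on [r_min, r_max].
        u = rng.uniform()
        r = (r_min**(1 - alpha)
             + u * (r_max**(1 - alpha) - r_min**(1 - alpha)))**(1 / (1 - alpha))
        cx, cy, gray = rng.uniform(0, size), rng.uniform(0, size), rng.uniform()
        mask = np.isnan(img) & ((xx - cx)**2 + (yy - cy)**2 <= r * r)
        img[mask] = gray
        if not np.isnan(img).any():
            break  # fully covered
    return np.nan_to_num(img, nan=0.5)
```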
We present a method to improve the validity of noise and resolution measurements on digital cameras. If non-linear adaptive noise reduction is part of the signal processing in the camera, the measurement results for image noise and spatial resolution can be good while the image quality is actually low, owing to the loss of fine details and a watercolor-like appearance of the image. To improve the correlation between objective measurement and subjective image quality, we propose to supplement the standard test methods with an additional measurement of the texture-preserving capabilities of the camera. The proposed method uses a test target showing white Gaussian noise. The camera under test reproduces this target, and the image is analyzed. We propose to use the kurtosis of the derivative of the image as a metric for the texture preservation of the camera. Kurtosis is a statistical measure of the closeness of a distribution to the Gaussian distribution. It can be shown that the distribution of digital values in the derivative of the image showing the chart becomes more leptokurtic (increased kurtosis) the stronger the impact of the noise reduction on the image.
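The proposed metric is only a few lines in practice. A minimal sketch; the horizontal first difference and SciPy's Fisher kurtosis (0 for a Gaussian) are a natural reading of the text rather than the authors' exact code:

```python
import numpy as np
from scipy.stats import kurtosis

def texture_loss_metric(image):
    """Kurtosis of the derivative of the reproduced white-noise target.
    A faithful reproduction gives a near-Gaussian derivative (metric ~ 0);
    detail-removing noise reduction flattens most of the image and leaves a
    few strong edges, so the distribution becomes leptokurtic (metric > 0)."""
    dx = np.diff(image.astype(float), axis=1).ravel()
    return kurtosis(dx)  # Fisher definition: 0 for a Gaussian
```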
Despite the rapid spread of digital cameras, many people cannot take the high-quality pictures they want, due to a lack of photographic skill. To help users in unfavorable capturing environments, e.g. 'Night', 'Backlighting', 'Indoor', or 'Portrait', the automatic mode of cameras provides parameter sets chosen by the manufacturers. Unfortunately, this automatic functionality does not give pleasing image quality in general. In particular, the length of exposure (shutter speed) is a critical factor in taking high-quality pictures at night. One of the key factors causing bad quality at night is image blur, which mainly comes from hand shake during long exposures. In this study, to circumvent this problem and to enhance the image quality of automatic cameras, we propose an intelligent camera processing core comprising SABE (Scene Adaptive Blur Estimation) and VisBLE (Visual Blur Limitation Estimation). SABE analyzes the high-frequency component in the DCT (Discrete Cosine Transform) domain. VisBLE determines the acceptable blur level on the basis of human visual tolerance and a Gaussian model. This visual tolerance model is developed on the basis of the physiological mechanism of human perception. In our experiments the proposed method outperforms existing imaging systems, as judged by general users and photographers alike.
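The DCT-domain analysis can be illustrated with a simple high-frequency energy ratio. A minimal sketch, assuming 8×8 blocks and an arbitrary frequency split; the actual SABE criterion is more elaborate:

```python
import numpy as np
from scipy.fft import dctn

def high_freq_ratio(gray, block=8, cutoff=4):
    """Fraction of non-DC DCT energy above a frequency cutoff; sharp images
    keep energy in high-frequency coefficients, blurred ones do not."""
    h = (gray.shape[0] // block) * block
    w = (gray.shape[1] // block) * block
    total = high = 0.0
    for y in range(0, h, block):
        for x in range(0, w, block):
            c = dctn(gray[y:y+block, x:x+block].astype(float), norm="ortho")
            e = c**2
            total += e.sum() - e[0, 0]  # ignore the DC term
            # All coefficients with max(u, v) >= cutoff count as "high".
            high += e[cutoff:, :].sum() + e[:cutoff, cutoff:].sum()
    return high / max(total, 1e-12)  # small value suggests a blurred image
```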
As the number of imaging pixels in camera phones increases, users expect camera phone image quality to be comparable to digital still cameras. The mobile imaging industry is aware, however, that simply packing more pixels into the very limited camera module size need not improve image quality. When the size of a sensor array is fixed, increasing the number of imaging pixels decreases pixel size and thus photon count. Attempts to compensate for the reduction in light sensitivity by increasing exposure durations increase the amount of handheld camera motion blur, which effectively reduces spatial resolution. Perversely, what started as an attempt to increase spatial resolution by increasing the number of imaging pixels may result in a reduction of effective spatial resolution. In this paper, we evaluate how the performance of mobile imaging systems changes with shrinking pixel size, and we propose to replace the widely misused "physical pixel count" with a new metric that we refer to as the "effective pixel count" (EPC). We use this new metric to analyze design tradeoffs for four different pixel sizes (2.8 µm, 2.2 µm, 1.75 µm, and 1.4 µm) and two different imaging arrays (1/3.2 and 1/8 inch). We show that optical diffraction and camera motion make 1.4 µm pixels less perceptually effective than larger pixels and that this problem is exacerbated by the introduction of zoom optics. Image stabilization optics can increase the effective pixel count and are, therefore, important features to include in a mobile imaging system.
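The diffraction argument can be made concrete with the Airy disk diameter. For green light at λ = 0.55 µm through an f/2.8 aperture (a typical mobile lens speed, assumed here for illustration):

```latex
d_{\mathrm{Airy}} = 2.44\,\lambda\,N
                 = 2.44 \times 0.55\,\mu\mathrm{m} \times 2.8
                 \approx 3.8\,\mu\mathrm{m}
```

The diffraction spot thus spans nearly three 1.4 µm pixels, so adding pixels beyond this point cannot add resolved detail, which is exactly what an effective-pixel-count metric is meant to capture.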
Despite the great advances that have been made in the field of digital photography and CMOS/CCD sensors, several sources of distortion continue to be responsible for image-quality degradation. Among them, a great role is played by sensor noise and motion blur. Of course, longer exposure times usually lead to better image quality, but the change in the photocurrent over time, due to motion, can lead to motion blur. The proposed low-cost technique deals with this problem using a multi-capture denoising algorithm, obtaining good quality with an appreciable reduction of motion blur effects.
In this paper we present an approach to extend the Depth-of-Field (DoF) of cell phone miniature cameras by concurrently optimizing the optical system and post-capture digital processing techniques. Our lens design seeks to increase the longitudinal chromatic aberration in a desired fashion such that, for a given object distance, at least one color plane of the RGB image contains the in-focus scene information. Typically, red is made sharp for objects at infinity, green for intermediate distances, and blue for close distances. Comparing sharpness across colors gives an estimate of the object distance and therefore allows choosing the right set of digital filters as a function of the object distance. Then, by copying the high frequencies of the sharpest color onto the other colors, we show theoretically and experimentally that it is possible to achieve a sharp image in all colors within a larger DoF. We compare our technique with other approaches that also aim to increase the DoF, such as wavefront coding.
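The key restoration step, copying the high frequencies of the sharpest color plane onto the others, can be sketched as follows. The gradient-energy sharpness measure and the Gaussian high-pass split are illustrative stand-ins for the paper's actual filters:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def transfer_sharpness(rgb, sigma=2.0):
    """Find the in-focus color plane and graft its high frequencies onto
    the low-pass content of the other two planes."""
    rgb = rgb.astype(float)
    grad_energy = [np.square(np.gradient(rgb[..., c])).sum() for c in range(3)]
    sharpest = int(np.argmax(grad_energy))  # proxy for the in-focus plane
    high = rgb[..., sharpest] - gaussian_filter(rgb[..., sharpest], sigma)
    out = rgb.copy()
    for c in range(3):
        if c != sharpest:
            out[..., c] = gaussian_filter(rgb[..., c], sigma) + high
    return np.clip(out, 0, 255)
```

Because each plane is sharp at a different object distance, choosing `sharpest` per region (or from the estimated distance) extends the DoF across the whole range.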
Extracting people from the background in digital photography is a task of great importance, with many applications for digital cameras. Yet the task poses a number of challenging technical problems. In this paper we propose a novel technique for extracting people from the background which is both accurate and of low computational complexity, and therefore amenable to embedding in digital cameras. The proposed technique uses frames from the camera live-view mode (called previews), now widely available in digital cameras and even in the latest DSLRs, in conjunction with the flash. The basic principle of the method is to acquire two images of the same scene, one with flash and the other without. The use of preview images instead of two captured images makes the solution easily embeddable in digital cameras. In the proposed setup, in daylight conditions, the flash is triggered at the time of the penultimate preview image. The mask of the subject is then computed based on the intensity difference between the last two previews. For night scenes, where the flash power is required for the acquisition of the actual picture, the subject is detected based on the intensity difference between the final image, downsampled to the size of the preview, and the average of the last two previews. Additional problems posed by this setup, e.g., misalignments, false positives, and an incomplete subject map, are also addressed. The resulting foreground map is further used to obtain a narrow depth-of-field version of the initial photograph, by keeping the foreground unaltered while blurring the background.
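The daylight case reduces to thresholding a difference image. A minimal sketch, assuming aligned previews; the threshold and morphological cleanup are illustrative choices, not the paper's full treatment of misalignments and false positives:

```python
import numpy as np
from scipy.ndimage import binary_closing, binary_opening

def subject_mask(preview_flash, preview_noflash, thresh=15):
    """The flash brightens the nearby subject far more than the distant
    background, so the flash/no-flash difference marks the foreground."""
    diff = preview_flash.astype(int) - preview_noflash.astype(int)
    mask = diff.mean(axis=2) > thresh          # assumes previews are aligned
    mask = binary_opening(mask, iterations=2)  # drop small false positives
    return binary_closing(mask, iterations=2)  # fill small holes in the map
```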
In this paper, we present a novel method for generating a background model from a sequence of images with moving objects. Our approach is based on non-parametric statistics and robust mode estimation using the quasi-continuous histograms (QCH) framework. The proposed method allows the generation of very clear backgrounds without blur effects, and is robust to noise and small variations. Experimental results on real sequences demonstrate the effectiveness of our method.
Interactive Paper and Symposium Demonstration Session
This paper extends the BV (bounded variation)-G and/or BV-L1 variational nonlinear image-decomposition approaches, which are considered useful for the image processing of a digital color camera, to genuine color-image decomposition approaches. To utilize inter-channel color cross-correlations, this paper first introduces TV (total variation) norms of color differences and TV norms of color sums into the BV-G and/or BV-L1 energy functionals, and then derives denoising-type decomposition algorithms with an over-complete wavelet transform, by applying the Besov-norm approximation to the variational problems. Our methods decompose a noisy color image without producing undesirable low-frequency colored artifacts in its separated BV component, and they achieve desirable high-quality color-image decomposition that is very robust against colored random noise.
Measuring the spectral response of digital cameras is usually a time-consuming and expensive task. One method of obtaining the spectral response data is the use of reflectance charts and estimation algorithms. To improve the quality of the measurement, narrow-band light is necessary; usually an expensive and complicated monochromator is used to generate it.

This paper proposes the use of a set of narrow-band interference filters as an alternative to a monochromator. It describes the measurement setup and data processing. A detailed quality assessment of the measurement data shows that the quality is comparable to a measurement with a monochromator. The interference-filter equipment is more affordable, easier to use, and faster: the characterization of one device takes less than 10 minutes. The pros and cons compared to other methods are also discussed.

The setup consists of a set of 39 narrow-band interference filters, which are photographed one after another. A modified slide projector is used for illumination. Software was developed to read the camera's response to each filter and process the data.
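The estimation step implied by this setup reduces to a small linear inverse problem. A minimal sketch, assuming the spectrum behind each filter has been measured on a common wavelength grid; the matrix names and the non-negativity constraint are illustrative, and the paper's estimation algorithm may differ:

```python
import numpy as np
from scipy.optimize import nnls

def estimate_response(filter_spectra, raw_values):
    """Recover one channel's spectral response from the 39 filter shots.

    filter_spectra: (39, n_wavelengths) illuminant-times-filter spectra
    raw_values:     (39,) mean raw response of the channel to each filter

    Each raw value is the inner product of a known spectrum with the unknown
    response, giving a linear system; non-negative least squares keeps the
    solution physical.
    """
    response, residual = nnls(filter_spectra, raw_values)
    return response  # estimated sensitivity per wavelength sample
```

In practice a smoothness prior on the response helps, since 39 narrow-band samples only loosely constrain a finely gridded curve.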
In this paper we present a technique that infers interframe motion by tracking SIFT features through consecutive frames: feature points are detected, and their stability is evaluated through a combination of geometric error measures and fuzzy logic modelling. Our algorithm does not depend on the point detector adopted prior to SIFT descriptor creation; performance has therefore been evaluated against a wide set of point-detection algorithms, in order to investigate how stabilization quality can be increased with an appropriate detector.
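The SIFT-tracking backbone of such a stabilizer is standard. A minimal sketch with OpenCV (4.4 or later, where SIFT is in the main module); the fuzzy-logic stability weighting from the paper is omitted, with a ratio test plus robust affine fitting standing in for it:

```python
import cv2
import numpy as np

def interframe_motion(prev_gray, curr_gray):
    """Estimate the motion between two consecutive grayscale (uint8) frames
    by matching SIFT features and robustly fitting a similarity transform."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(prev_gray, None)
    kp2, des2 = sift.detectAndCompute(curr_gray, None)
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # ratio test
    src = np.float32([kp1[m.queryIdx].pt for m in good])
    dst = np.float32([kp2[m.trainIdx].pt for m in good])
    # Robustly fit translation + rotation + scale between the two frames.
    matrix, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    return matrix
```

Inverting and accumulating these per-frame transforms, then smoothing the trajectory, yields the stabilizing warp for each frame.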
This paper presents a method of digitally removing or correcting the chromatic aberration (CA) of a lens, which generally occurs in edge regions of an image. Based on information about the lens and sensor characteristics of the camera, the method determines the CA level and the dominant chrominance of the CA, and efficiently removes extreme CA, such as purple fringing and blooming artifacts, as well as the general CA generated at edges in a captured image. First, the method includes a CA-region sensing part that analyzes the luminance signal of an input image and detects regions containing CA. Second, a CA-level sensing part calculates a weight indicating the degree of CA, based on the difference between the gradients of the color components of the input image. Third, to remove extreme CA such as purple fringing and blooming artifacts caused by the characteristics of the lens and sensor, it uses 1-D Gaussian filters with different sigma values to compute the weight; the sigma value reflects the lens and sensor characteristics. To remove the general CA, it includes an adaptive filter based on the luminance signal. Finally, using these weights, the final filter is produced adaptively according to the CA level and the lens and sensor characteristics. Experimental results show the effectiveness of the proposed method.
In many cases it is not possible to faithfully capture the shadow and highlight data of a high-dynamic-range (HDR) scene using a common digital camera, due to its narrow dynamic range (DR). Conventional solutions tried to solve the problem from a captured image that has saturated highlights and/or lacks shadow information. In this situation, we introduce a color-image enhancing method with scene-adaptive exposure control. First, our method recommends an optimal exposure, chosen by histogram-based scene analysis, to obtain more information in the highlights. Next, the proposed luminance and contrast enhancement is performed on the captured image. The main processing consists of luminance enhancement, multi-band contrast stretching, and color compensation. The luminance and chrominance components of the input RGB data are separated by converting to the HSV color space. The luminance is increased using an adaptive log function. Multi-band contrast-stretching functions are applied to each sub-band to enhance shadows and highlights at the same time. To remove boundary discontinuities between sub-bands, multi-level low-pass filtering is employed. The blurred image data represents the local illumination, while the contrast-stretched details correspond to the reflectance of the scene. The restored luminance image is produced by combining the multi-band contrast-stretched image and the multi-level low-pass-filtered image. Finally, color compensation proportional to the amount of luminance enhancement is applied to produce the output image.
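The luminance step alone can be sketched with a common adaptive log curve. The exact function used in the paper is not specified, so the form below, with its strength driven by mean brightness, is an assumption for illustration:

```python
import numpy as np

def enhance_luminance(v, k_max=50.0):
    """v: HSV value channel in [0, 1]. A log curve that maps [0,1] -> [0,1]
    and boosts shadows; darker scenes get a stronger curve (larger k)."""
    k = max(k_max * (1.0 - float(v.mean())), 1e-3)
    return np.log1p(k * v) / np.log1p(k)
```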
The lifetime of solid-state image sensors is limited by the appearance of defects, particularly hot pixels, which we have previously shown to develop continuously over the sensor lifetime. Analysis based on the spatial distribution and temporal growth of defects displayed no evidence of the defects being caused by material degradation; instead, high radiation appears to accelerate defect development in image sensors. It is important to detect these faulty pixels prior to the use of image-enhancement algorithms, to avoid spreading the error to neighboring pixels. The date on which a defect first developed can be extracted from past images. Previously, an automatic defect-detection algorithm using Bayesian probability accumulation was introduced and tested. We performed extensive testing of this Bayes-based algorithm by detecting defects in image datasets obtained from four cameras. Our results indicate that the Bayes detection scheme was able to identify all defects in these cameras with less than 3% difference from visually inspected results. In this paper, we introduce an alternative technique, the maximum-likelihood detection algorithm, and evaluate its performance using Monte Carlo simulations based on three criteria: image exposure, defect parameters, and pixel estimation. Preliminary results show that the maximum-likelihood detection algorithm is able to achieve higher accuracy than the Bayes detection algorithm, with 90% perfect detection in images captured at long exposures (>0.125 s).
We developed and implemented a flexible image pre-processing concept to achieve an image sub-system for top-end professional digital still camera applications that ensures the highest possible image quality. It supports high-speed multiple-image acquisition and data processing for very-large-resolution images, it reduces the design-in time for customers, and it can be implemented economically for high-end applications with relatively small volumes.
In this paper we propose an effective approach for creating nice-looking photo images of scenes with high dynamic range, using a set of photos captured with exposure bracketing. Usually, details of the dark parts of the scene are preserved in the over-exposed shots, while details of the brightly illuminated parts are visible in the under-exposed photos. The proposed method preserves those details by first constructing a gradient field, mapping it with a special function, and then integrating it to restore lightness values using the Poisson equation. The resulting image can be printed or shown on conventional displays.
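The pipeline described can be sketched end to end. A minimal sketch, with a Fattal-style attenuation standing in for the paper's "special function" and an FFT Poisson solver that assumes periodic boundaries for brevity (both are assumptions, not the paper's choices):

```python
import numpy as np

def compress_range(log_lum, alpha=0.1, beta=0.85):
    """Attenuate large gradients of the log-luminance, then integrate the
    mapped gradient field by solving the Poisson equation via FFT."""
    gy, gx = np.gradient(log_lum)
    mag = np.hypot(gx, gy) + 1e-6
    scale = (mag / alpha) ** (beta - 1)  # shrinks large gradients, beta < 1
    gx, gy = gx * scale, gy * scale
    # Divergence of the mapped field is the right-hand side of Poisson's eq.
    div = np.gradient(gx, axis=1) + np.gradient(gy, axis=0)
    h, w = log_lum.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    # Eigenvalues of the periodic 5-point Laplacian stencil.
    denom = (2 * np.cos(2 * np.pi * fx) - 2) + (2 * np.cos(2 * np.pi * fy) - 2)
    denom[0, 0] = 1.0  # the mean of the solution is arbitrary
    result = np.real(np.fft.ifft2(np.fft.fft2(div) / denom))
    return result - result.mean()
```

Exponentiating the result and restoring color ratios from the original exposures yields the displayable image.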
This paper relates to content-aware image resizing and the fitting of images into predetermined areas. The problem consists of transforming the image to a new size, with or without modification of the aspect ratio, in a manner that preserves the recognizability and proportions of the important features of the image. The closest prior-art solutions cover, along with standard linear image scaling (down-sampling and up-sampling), image cropping, image retargeting, seam carving, and some special manipulations similar to image retouching. The present approach provides a method for digital image retargeting by erasing or adding less significant image pixels. The retargeting approach defined above can easily be used for image shrinking. For image enlargement, however, there are limitations, such as stretching artifacts; a history map with relaxation is introduced to avoid this drawback and overcome some known limits of retargeting. The proposed approach also includes means for preserving important objects, which significantly improves the resulting retargeting quality. Retargeting applications for different devices, such as displays, copiers, facsimile machines, and photo printers, are described as well.
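The pixel-erasing step is easiest to see in the plain seam-carving setting that the abstract lists among the prior art; the paper's history map and object-preservation terms are not modeled here. A minimal sketch removing one vertical seam from a grayscale image:

```python
import numpy as np

def remove_one_seam(gray):
    """Find the 8-connected vertical path of least gradient energy by
    dynamic programming and erase it, shrinking the width by one pixel."""
    gy, gx = np.gradient(gray.astype(float))
    energy = np.abs(gx) + np.abs(gy)
    h, w = energy.shape
    cost = energy.copy()
    for y in range(1, h):  # accumulate minimal path cost row by row
        left = np.r_[np.inf, cost[y-1, :-1]]
        right = np.r_[cost[y-1, 1:], np.inf]
        cost[y] += np.minimum(np.minimum(left, cost[y-1]), right)
    seam = np.empty(h, dtype=int)
    seam[-1] = int(np.argmin(cost[-1]))
    for y in range(h - 2, -1, -1):  # backtrack the cheapest seam
        x = seam[y + 1]
        lo, hi = max(x - 1, 0), min(x + 2, w)
        seam[y] = lo + int(np.argmin(cost[y, lo:hi]))
    keep = np.ones((h, w), dtype=bool)
    keep[np.arange(h), seam] = False
    return gray[keep].reshape(h, w - 1)
```

Enlargement inserts (rather than removes) the lowest-energy seams, which is where the stretching artifacts and the need for a history map arise.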