This PDF file contains the front matter associated with SPIE Proceedings Volume 6502, including the Title Page, Copyright information, Table of Contents, and Conference Committee listing.
High-end mobile camera phones are spreading rapidly all over the world, and the demand for optical zoom as a new function of these phones is intense. However, it has been very difficult for module manufacturers to offer an ultra-compact zoom lens unit that is mass-producible and still fits within the size of a mobile phone. Three key technologies are essential to mount optical zoom lens units on mobile phones while maintaining their portability: a) an optical design method that achieves ultra-compact size and mass producibility by utilizing multiple aspherical glass-molded lenses and controlling decentering sensitivities; b) a precise lens alignment system that compensates for the field tilt caused by errors in fabrication processes such as glass lens molding and lens assembly; and c) long-stroke compact lens actuators based on our unique SIDM (Smooth Impact Drive Mechanism) technology. By integrating these technologies, we have developed a 3x optical zoom lens unit suitable for 3- to 5-megapixel cameras whose size is among the world's smallest, pointing to a new direction for future high-end mobile camera phones. In this paper, we present these three technologies and our next-generation zoom lens unit under development.
Due to the demanding size and cost constraints of camera phones, the mobile imaging industry needs to address several key challenges in order to achieve the quality of a digital still camera. Minimizing camera-motion-induced image blur is one of them. Film photographers have long used a rule of thumb that a hand-held 35mm format film camera should have an exposure in seconds no longer than the inverse of the focal length in millimeters. Due to the lack of scientific studies on camera motion, it is still an open question how to generalize this rule of thumb to digital still cameras and camera phones. In this paper, we first propose a generalized rule of thumb, with the original rule as the special case in which camera motion can be approximated by a linear motion at 1.667 °/sec. We then use a gyroscope-based system to measure camera-motion patterns for two camera phones (one held in one hand and the other held in two hands) and one digital still camera. The results show that the effective camera-motion function can be approximated very well by a linear function for exposure durations of less than 100 ms. While the effective camera-motion speed for camera phones (5.95 °/sec and 4.39 °/sec, respectively) is significantly higher than that of the digital still camera (2.18 °/sec), it was found that holding a camera phone with two hands while taking pictures does reduce the amount of camera motion. It was also found that camera motion varies significantly not only across subjects but also across captures for the same subject. Since camera phones have significantly higher motion and longer exposure durations than 35mm format film cameras and most digital still cameras, many of the pictures taken by camera phones today can be expected to fail the sharpness criteria used for 35mm film prints.
The mobile imaging industry is aggressively pursuing smaller and smaller pixel sizes in order to match the digital still camera's total pixel count while retaining the small size the mobile market requires. This makes it increasingly important to address the camera-motion challenge associated with smaller pixels.
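Under the linear-motion approximation, the generalized rule of thumb reduces to a one-line computation of image-plane blur. A minimal sketch (the function name is ours; only the 1.667 °/sec reference speed comes from the abstract):

```python
import math

def blur_in_mm(omega_deg_per_s, exposure_s, focal_mm):
    """Image-plane blur (mm) for constant angular camera motion.

    Assumes the linear-motion approximation: blur on the sensor is
    angular speed (rad/s) * exposure time (s) * focal length (mm).
    """
    return math.radians(omega_deg_per_s) * exposure_s * focal_mm

# Classic 35mm rule of thumb: exposure = 1/f seconds. At the reference
# speed of 1.667 deg/s this gives ~0.029 mm of blur regardless of f,
# close to the usual 0.030 mm circle of confusion for 35mm film.
blur = blur_in_mm(1.667, 1.0 / 50.0, 50.0)  # ~0.0291 mm
```

The same function, fed a camera phone's higher motion speed and a sensor's pixel pitch instead of the film circle of confusion, gives the generalized criterion.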
A logarithmic-response pixel (often referred to as a 3T pixel) captures wide-dynamic-range images at the pixel level. Since the 3T pixel in principle captures images without photocharge integration, a continuous image-capturing mode of operation has been widely used. Image lag was an issue with this operation: it occurs when an image residue remains in the parasitic capacitance of the pixel, and it can be removed by initializing the sensing node (that is, by applying a reset signal to that node). Once the pixel initialization step is introduced, the pixel response becomes time-variant. However, this does not mean a loss of pixel sensitivity under low-light conditions; high sensitivity can be realized by minimizing the capacitance of the pixel sensing node. In this paper, the pixel output response associated with the reset operation is discussed. An analytical solution for the pixel output response and an easy-to-handle approximate expression are derived. The analytical expression contributes to the design of temperature-insensitive 3T-pixel-based image sensors and to the development of fixed-pattern-noise correction (FPC) algorithms.
The leakage characteristics of the buried photodiode structure have been investigated in a direct-color CMOS image sensor with a stacked photodiode (PD) structure tailored for detecting red, green, and blue light. An investigation of image quality showed that the blue photodiode exhibits surface-related effects while the red and green PDs do not. From these experiments, it is found that the activation energy of the PDs depends on area, periphery, and corner components, and that the corner component dominates. The leakage characteristics of the PDs show behavior similar to a normal n+/p-well diode of similar structure. The separate contributions of the area, periphery, and corners, and their relationship to STI, were also analyzed by TCAD.
For the first time, we have analyzed the vertical buried photodiode structure and found that the corner components of the red and green PDs can be a source of leakage current. We also found that the surface contact of the blue PD can be a noise source that reduces image quality. Therefore, to maintain high image quality, the blue photodiode of a CIS has to be designed as a buried structure, and the connections to the buried red and green PDs have to be free from STI sidewall contact.
At EI2006, we proposed a CMOS image sensor overlaid with organic photoconductive layers in order to give it the large light-capturing ability that a color film owes to its multiple-layer structure, and we demonstrated pictures taken by a trial product of the proposed sensor overlaid with an organic layer having green sensitivity. In this study, we have sought the optimized spectral sensitivity for the proposed CMOS image sensor by simulation, minimizing the color difference between the original Macbeth chart and its reproduction, with the spectral sensitivity of the sensor as a parameter. As a result, it has been confirmed that the proposed CMOS image sensor with a multiple-layer structure possesses high potential in terms of image-capturing efficiency when it is provided with the optimized spectral sensitivity.
In this paper we investigate the relationship between matrixing methods, the number of filters adopted, and the size of the color gamut of a digital camera. The color gamut is estimated using a method based on inverting the processing pipeline of the imaging device. Different matrixing methods are considered, including an original method developed by the authors. For the selection of a hypothetical fourth filter, three different quality measures have been implemented. Experimental results are reported and compared.
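Matrixing, in its simplest baseline form, amounts to a least-squares fit of a 3xN matrix from raw filter responses to target tristimulus values. A sketch of that baseline (not the authors' original method; names and data are illustrative):

```python
import numpy as np

def fit_color_matrix(sensor_resp, target_xyz):
    """Least-squares matrix M such that target ≈ sensor_resp @ M.T.

    sensor_resp: (num_patches, num_filters) raw responses (3 or 4 filters)
    target_xyz:  (num_patches, 3) desired tristimulus values
    """
    X, *_ = np.linalg.lstsq(sensor_resp, target_xyz, rcond=None)
    return X.T  # shape (3, num_filters)

# Toy check with a hypothetical 4th filter: responses already linearly
# related to the targets, so the fit recovers the mixing matrix exactly.
rng = np.random.default_rng(0)
sensor = rng.random((24, 4))          # 24 chart patches, 4 filters
true_M = rng.random((3, 4))
target = sensor @ true_M.T
M = fit_color_matrix(sensor, target)
```

Pushing the gamut boundary through the inverse of such a matrix is the kind of pipeline inversion the abstract's gamut estimate relies on.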
For a given noise level at the photosite and a given output color space, the spectral sensitivities of a sensor constrain the color processing and therefore determine the level of noise in the output. In particular, this noise may be very different from the usually documented photosite noise. A key phenomenon is the appearance of strong correlations between channels, which makes individual channel measures (including the classical signal-to-noise ratio, SNR) misleading. We evaluate existing chains and isolated sensors by several indicators, including the previously developed color sensitivity. We finally apply this approach to the understanding of good spectral sensitivities by considering hypothetical spectral sensitivities and simulating their performance.
We present a new algorithm that performs demosaicing and super-resolution jointly from a set of raw images
sampled with a color filter array. Such a combined approach allows us to compute the alignment parameters between the images on the raw camera data before interpolation artifacts are introduced. After image registration, a high resolution color image is reconstructed at once using the full set of images. For this, we use normalized
convolution, an interpolation method for nonuniformly sampled data. Our algorithm is tested and
compared to other approaches in simulations and practical experiments.
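Normalized convolution itself is compact enough to sketch in 1-D: both the masked signal and the certainty mask are filtered, and their ratio interpolates the missing samples (a generic sketch, not the paper's registration-plus-reconstruction pipeline):

```python
import numpy as np

def normalized_convolution(samples, mask, kernel):
    """Interpolate a nonuniformly sampled 1-D signal.

    samples: signal values (zeros where no sample exists)
    mask:    1.0 where a sample exists, 0.0 elsewhere (certainty map)
    kernel:  smoothing kernel (e.g. a small tent/Gaussian)
    """
    num = np.convolve(samples * mask, kernel, mode="same")
    den = np.convolve(mask, kernel, mode="same")
    return num / np.maximum(den, 1e-12)

# Reconstruct a constant signal from every third sample.
x = np.zeros(12)
m = np.zeros(12)
x[::3] = 5.0
m[::3] = 1.0
kernel = np.array([0.25, 0.5, 1.0, 0.5, 0.25])
y = normalized_convolution(x, m, kernel)
```

In the joint demosaicing/super-resolution setting, the mask marks where registered raw CFA samples of one color land on the high-resolution grid.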
There are numerous passive contrast-sensing autofocus algorithms that are well documented in the literature, but some aspects of their comparative performance have not been widely researched. This study explores the relative merits of a set of autofocus algorithms by examining them under a variety of scene conditions. We create a statistics engine that considers a scene taken through a range of focal positions and then computes the best focal position using each autofocus algorithm. The process is repeated across a survey of test scenes containing different representative conditions. The results are assessed against focal positions determined by manually focusing the scenes. From these results we then derive conclusions about the relative merits of each autofocus algorithm with respect to the criteria of accuracy and unimodality. Our study concludes that basic 2D spatial gradient measurement approaches yield the best autofocus results in terms of accuracy and unimodality.
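A basic 2D spatial-gradient focus measure of the kind the study favors can be sketched as the summed squared first differences of the image (a generic example, not the exact measures evaluated):

```python
import numpy as np

def gradient_focus_measure(img):
    """Sum of squared horizontal and vertical first differences.

    Higher values indicate a sharper (better focused) image.
    """
    dy = np.diff(img, axis=0)
    dx = np.diff(img, axis=1)
    return float((dx ** 2).sum() + (dy ** 2).sum())

# A sharp step edge scores higher than a box-blurred copy of it.
sharp = np.zeros((32, 32))
sharp[:, 16:] = 1.0
blurred = np.apply_along_axis(
    lambda r: np.convolve(r, np.ones(5) / 5.0, mode="same"), 1, sharp)
```

An autofocus loop sweeps the lens, evaluates such a measure at each position, and drives toward its peak; accuracy and unimodality describe how trustworthy that peak is.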
A high-performance focus measure is one of the key components in any autofocus system based on digital image processing. More than a dozen focus measures have been proposed and evaluated in the literature, yet there has been no comprehensive evaluation that includes most of them. The purpose of the current study is to evaluate and compare the performance of ten focus measures using Monte Carlo simulations, run on a self-built scalable inhomogeneous computer cluster with distributed computing capacity. Within a general framework for focus measure evaluation, we calculate the true point spread functions (PSFs) from aberrations represented by OSA-standard Zernike polynomials using the fast Fourier transform. For each run, a range of defocus levels is generated, the PSF for each defocus level is convolved with an original image, and a certain amount of noise is added to the resulting defocused image. Each focus measure is applied to all the blurred images to obtain a focus measure curve. The procedure is repeated on a few representative images for different types and levels of noise (Gaussian, salt & pepper, and speckle). The performance of the ten focus measures is compared in terms of monotonicity, unimodality, defocus sensitivity, noise sensitivity, effective range, computational efficiency, and variability.
In this work we consider six methods for automatic white balance available in the literature. The idea investigated does not rely on a single method, but instead considers a consensus decision that takes into account the responses of all the considered algorithms. Combining strategies are proposed and tested on both synthetic and multispectral images extracted from well-known databases. The multispectral images are processed using a digital camera simulator developed by Stanford University. All the results are evaluated using the Wilcoxon sign test.
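One consensus strategy in this spirit can be sketched by averaging the normalized illuminant estimates of the individual methods; here gray-world and white-patch stand in for the six algorithms, and the combining rule is our assumption, not necessarily one of those tested:

```python
import numpy as np

def gray_world(img):
    """Illuminant estimate: the mean of each channel."""
    return img.reshape(-1, 3).mean(axis=0)

def white_patch(img):
    """Illuminant estimate: the maximum of each channel."""
    return img.reshape(-1, 3).max(axis=0)

def consensus_estimate(img, methods):
    """Average the unit-normalized estimates of several AWB methods."""
    ests = [m(img) / np.linalg.norm(m(img)) for m in methods]
    e = np.mean(ests, axis=0)
    return e / np.linalg.norm(e)

# Synthetic scene with a warm cast: red boosted, blue attenuated.
rng = np.random.default_rng(1)
img = rng.random((16, 16, 3)) * np.array([1.2, 1.0, 0.7])
gains = 1.0 / consensus_estimate(img, [gray_world, white_patch])
```

Dividing each channel by the consensus estimate (i.e. applying `gains`) removes the cast; the blue gain comes out largest because blue was attenuated most.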
In a digital camera, several factors make the additive noise signal-dependent. Many denoising methods have been proposed, but unfortunately most of them do not work well on this actual signal-dependent noise. To remove the signal-dependent noise of a digital camera, we present a denoising approach based on nonlinear image decomposition. In the first stage of this decomposition-and-denoising approach, a multiplicative image decomposition is performed: the noisy image is represented as the product of two components, so that its structural component, corresponding to a cartoon approximation of the noisy image, is hardly corrupted by the noise, while its texture component collects almost all of the noise. In the subsequent nonlinear denoising stage, the intensity of the separated structural component is used in place of the unknown true signal value to adapt a soft-thresholding-type denoising manipulation of the texture component to the signal dependency of the noise. In the final image-synthesis stage, the separated structural component is combined with the denoised texture component, reproducing a denoised image with improved sharpness. The nonlinear decomposition-and-denoising approach can selectively remove the signal-dependent noise of a digital camera without blurring sharp edges or destroying visually important textures.
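The signal-adaptive shrinkage at the denoising stage can be sketched with classic soft-thresholding whose threshold is driven by the structure component's intensity; the threshold model and constants below are illustrative assumptions, not the paper's:

```python
import numpy as np

def soft_threshold(x, t):
    """Classic soft-thresholding: shrink magnitudes toward zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def denoise_texture(texture, structure, k=2.0, noise_floor=0.01):
    """Shrink the texture component with a signal-dependent threshold.

    Uses the structure component's intensity as a stand-in for the
    unknown true signal; the sqrt-law (shot-noise-like) threshold and
    the constants k, noise_floor are assumptions for illustration.
    """
    t = k * np.sqrt(noise_floor + structure)
    return soft_threshold(texture, t)
```

Brighter structure means a larger expected noise amplitude, so the same texture value is shrunk harder in bright regions than in dark ones.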
DCT-based compression engines are well known to introduce color artifacts on the processed input frames, in particular at low bit rates. In video standards such as MPEG-2, MPEG-4, and H.263, and in still-picture standards such as JPEG, blocking and ringing distortions are well understood, and different approaches have been developed to reduce these effects. Other kinds of phenomena, on the other hand, have not been investigated as deeply; among them, the chromatic color-bleeding effect has only recently received proper attention. The scope of this paper is to propose and describe an innovative and powerful algorithm to overcome this kind of color artifact.
In a digital camera, the output image is sometimes heavily corrupted by additive noise, and the noisy image is often compressed with a JPEG encoder. When the coding rate of the JPEG encoder is not high enough, noticeable artifacts such as blocking, ringing, and false colors appear in the JPEG-decoded image. In the high-ISO-sensitivity case, even if the coding rate is very high, the camera's noise produces noticeably annoying artifacts in the JPEG-decoded image. This paper presents a restoration-type decoding approach that recovers a quality-improved image from the JPEG-compressed data, not only suppressing the coding artifacts particular to JPEG compression but also removing the camera's noise to some extent. This decoding approach is a kind of super-resolution image restoration based on TV (total variation) regularization: to reduce ringing artifacts near sharp edges it selectively restores the DCT coefficients truncated by the JPEG compression, whereas in originally smooth image regions it flattens unnecessary signal variations to eliminate the blocking artifacts and the camera's noise. Extending the standard ROF (Rudin-Osher-Fatemi) framework of TV image restoration, we construct this super-resolution approach to JPEG decoding. By introducing the JPEG-compressed data into the fidelity term of the energy functional and adopting a nonlinear cost function softly constrained by the JPEG-compressed data, we define a new energy functional whose minimization gives the super-resolution JPEG decoding.
As CMOS imaging technology advances, sensor-to-sensor differences increase, creating a growing need for individual, per-sensor calibration. Traditionally, the cell-phone market has a low tolerance for complex per-unit calibration. This paper proposes an algorithm that eliminates the need for a complex test environment and does not require manufacturing-based calibration on a per-phone basis. The algorithm locates "bad pixels", pixels whose light-response characteristics fall outside the range specified by the manufacturer. It uses several images captured from a sensor without a mechanical shutter or predefined scenes. The implementation that follows uses two blocks: a dynamic detection block (local-area based) and a static correction block (location-table based). The dynamic block fills the location table of the static block using clustering techniques. The result of the algorithm is a list of coordinates giving the locations of the found bad pixels. An example is given of how this method can be applied to several different cell-phone CMOS sensors.
Changing the lens of a DSLR camera has the drawback of allowing small dust particles from the environment to be attracted onto the sensor's surface. As a result, unwanted blemishes may compromise the normally high quality of photographs. The particles can be removed by physically cleaning the sensor. A second, more general approach is to locate and remove the blemishes from digital photos by employing image-processing algorithms.
This paper presents a model that allows computing the physical appearance (actual size, shape, position and transparency) of blemishes in a photograph as a function of camera settings.
In order to remove these blemishes with sufficient accuracy, an initial algorithm calibration must be performed for any given camera-lens pair. The purpose of this step is to estimate some parameters of the model that are not readily available. To achieve this, a set of "calibration images" must be carefully taken under conditions that make the blemishes easily identifiable. Then, based on the metadata stored in the photo's header, the actual appearance of the blemishes in the given photograph is computed and used in the automatic removal algorithm. Computing formulas and results of our experiments are also included.
In a digital imaging system, the Image Signal Processing (ISP) pipeline may be called on to identify and hide defective
pixels in the image sensor. Often filters are designed and implemented to accomplish these tasks without considering the
cost in memory or the effect on actual images. We have created a simulation system which uses an inverse ISP model to
add defect pixels to raw sensor data. The simulation includes lens blur, inverse gamma, additive white noise, and
mosaic. Defect pixels are added to the simulated raw image, which is then processed by various defect pixel correction
algorithms. The end result is compared against the original simulated raw data to measure the effect of the added defects
and defect pixel correction. We have implemented a bounding min-max filter as our defect pixel correction algorithm.
The simulations show that the choice of kernel size and other parameters depends not only on memory constraints, but
also on the defect pixel rate. At high defect pixel rates, algorithms with more aggressive defect correction are more
effective, but also result in higher accidental degradation.
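A bounding min-max filter of this kind can be sketched as clamping each pixel to the extrema of its neighborhood (a simplified grayscale version; the paper's kernel choices and Bayer same-color handling are not reproduced):

```python
import numpy as np

def minmax_defect_filter(img, radius=1):
    """Clamp each pixel to the [min, max] of its neighborhood.

    A pixel brighter (darker) than every neighbor inside the kernel is
    pulled back to the neighborhood maximum (minimum); in-range pixels
    pass through unchanged.
    """
    h, w = img.shape
    out = img.copy()
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            # Neighborhood without the center pixel itself.
            win = np.delete(img[y0:y1, x0:x1].ravel(),
                            (y - y0) * (x1 - x0) + (x - x0))
            out[y, x] = min(max(img[y, x], win.min()), win.max())
    return out

flat = np.full((5, 5), 10.0)
flat[2, 2] = 255.0                 # a simulated hot pixel
corrected = minmax_defect_filter(flat)
```

The trade-off the simulations quantify is visible even here: a larger radius suppresses clustered defects more aggressively but also clamps legitimate local extrema such as fine specular highlights.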
MEMS technology uses photolithography and etching of silicon wafers to enable mechanical structures with tolerances below 1 μm, important for the miniaturization of imaging systems. In this paper, we present the first silicon MEMS digital autofocus camera for use in cell phones, with a focus range of 10 cm to infinity. At the heart of the new silicon MEMS digital camera, a simple and low-cost electromagnetic actuator impels a silicon MEMS motion-control stage on which a lens is mounted. The silicon stage ensures precise alignment of the lens with respect to the imager, and enables precision motion of the lens over a range of 300 μm with < 5 μm hysteresis and < 2 μm repeatability. Settling time is < 15 ms for a 200 μm step and < 5 ms for a 20 μm step, enabling autofocus within 0.36 s at 30 fps. The precise motion allows COTS optics to maintain MTF > 0.8 at 20 cy/mm up to 80% field over the full range of motion. Accelerated lifetime testing has shown that the alignment and precision of motion are maintained after 8,000 g shocks, thermal cycling from -40 °C to 85 °C, and operation over 20 million cycles.
Digital still cameras (DSCs) have now displaced conventional still cameras in most markets. The heart of a DSC is usually considered to be the imaging sensor, be it a full-frame CCD, an interline CCD, a CMOS sensor, or the newer Foveon buried-photodiode sensor. There is a strong tendency for consumers to consider only the number of megapixels in a camera and not the overall performance of the imaging system, including sharpness, artifact control, noise, color reproduction, exposure latitude, and dynamic range. This paper provides a systematic method to characterize the physical requirements of an imaging sensor and supporting system components based on the desired usage. The analysis is based on two software programs that determine the "sharpness", potential for artifacts, sensor "photographic speed", dynamic range, and exposure latitude from the physical nature of the imaging optics and the sensor characteristics (including pixel size, sensor architecture, noise characteristics, surface states that cause dark current, quantum efficiency, effective MTF, and the intrinsic full-well capacity in terms of electrons per square centimeter). Examples are given for consumer, prosumer, and professional camera systems. Where possible, these results are compared to imaging systems currently on the market.
The main quality requirements for a digital still camera are color-capturing accuracy, low noise level, and quantum efficiency. Different consumers assign different priorities to these parameters, and camera designers need clearly formulated methods for their evaluation. While there are procedures for estimating noise level and quantum efficiency, there are no effective means for estimating color-capturing accuracy. The criterion introduced in this paper fills this gap.
The Luther-Ives condition for a correct color reproduction system has been known since the beginning of the last century. However, since no detector system satisfies the Luther-Ives condition, there are always stimuli that are distinctly different for an observer but that the detectors are unable to distinguish. To estimate the conformity of a detector set with the Luther-Ives condition and calculate a measure of the discrepancy, the angle between the detector sensitivity space and Cohen's Fundamental Color Space may be used.
In this paper, this divergence angle is calculated for some typical CCD sensors, and a demonstration is provided of how the angle can be reduced with a corrective filter. In addition, it is shown that with a specific corrective filter, Foveon sensors turn into a detector system with good Luther-Ives condition compliance.
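A divergence angle between a detector set and a reference color space can be computed as the largest principal angle between the two column spaces; a sketch with synthetic stand-ins for real color-matching-function data (the angle definition here is the standard subspace one, an assumption about the paper's exact measure):

```python
import numpy as np

def divergence_angle(sensor, reference):
    """Largest principal angle (degrees) between two subspaces.

    sensor, reference: (num_wavelengths, 3) matrices whose columns span
    the detector-sensitivity space and the reference (e.g. Cohen's
    fundamental) color space. Zero means the spaces coincide, i.e. the
    Luther-Ives condition holds.
    """
    Qs, _ = np.linalg.qr(sensor)
    Qr, _ = np.linalg.qr(reference)
    s = np.linalg.svd(Qs.T @ Qr, compute_uv=False)
    return float(np.degrees(np.arccos(np.clip(s.min(), -1.0, 1.0))))

# Synthetic "CMFs": three Gaussian bumps over 400-700 nm (stand-ins).
wl = np.linspace(400.0, 700.0, 31)
cmf = np.stack([np.exp(-((wl - mu) / 40.0) ** 2)
                for mu in (600.0, 550.0, 450.0)], axis=1)

# A sensor that is an invertible linear mix of the CMFs spans the same
# space, so it satisfies Luther-Ives and the angle is (numerically) zero.
mix = np.array([[2.0, 1.0, 0.0], [0.0, 1.0, 1.0], [1.0, 0.0, 1.0]])
mixed_sensor = cmf @ mix
```

A corrective filter, in this picture, multiplies each sensitivity curve pointwise so as to rotate the sensor subspace toward the reference one, shrinking the angle.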
The resolution of a digital camera is defined as its ability to reproduce fine detail in an image. To test this ability, methods like the slanted-edge SFR measurement developed by Burns and Williams and standardized in ISO 12233 are used. Since this method is, in terms of resolution measurement, only applicable to unsharpened and uncompressed data, the additional method described in this paper had to be developed.
This method is based on a sinusoidal Siemens star, which is evaluated radius by radius, i.e. frequency by frequency. For the evaluation, a freely available runtime program developed in MATLAB is used, which produces the MTF of a camera system as contrast over frequency.
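The radius-by-radius evaluation reduces, for each radius, to measuring the Michelson modulation of the sinusoid sampled along that circle; a minimal sketch (star parameters and the unit-modulation reference are illustrative assumptions):

```python
import numpy as np

def modulation(samples):
    """Michelson modulation of intensities sampled along one star radius.

    The MTF value at the spatial frequency of that radius is this
    modulation divided by the modulation of a low-frequency reference,
    here assumed to be 1.0.
    """
    i_max, i_min = samples.max(), samples.min()
    return (i_max - i_min) / (i_max + i_min)

# One radius of a 36-cycle sinusoidal star, before and after the
# camera attenuates the contrast.
theta = np.linspace(0.0, 2.0 * np.pi, 720, endpoint=False)
ideal = 0.5 + 0.5 * np.sin(36 * theta)       # full contrast -> 1.0
captured = 0.5 + 0.3 * np.sin(36 * theta)    # attenuated    -> 0.6
```

Because the local frequency of a Siemens star falls with radius, sweeping the radius traces out contrast versus frequency, i.e. the MTF curve.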
Image stabilization in digital imaging is continuously gaining in importance, which explains the increasing interest in the benefits of stabilizing systems. Existing standards provide neither binding procedures nor recommendations for their evaluation. This paper describes the development and implementation of a test setup and a test procedure for the qualitative analysis of image stabilizing systems under reproducible, realistic conditions. The basis for these conditions is provided by studies of the physiological properties of human handshake and of the functionality of modern stabilizing systems.
Any color image editing software has Brightness, Contrast, and Saturation controls. However, because these usually imitate the corresponding adjusting knobs of a color TV, and thus reflect the mid-20th-century scope of engineering, adjusting one of the parameters affects all three, and modification of Brightness or Contrast does not preserve chromatic coordinates. A person must be very experienced with sequential control operations to obtain a result equivalent to a simple exposure correction.
A set of new-generation algorithms described in this paper is free from the above-mentioned defects and includes: Brightness and Contrast editing that does not affect chromatic coordinates; Local Contrast editing that causes only minor modification of the Global Dynamic Range; Global Dynamic Range modification that affects neither chromatic coordinates nor the Local Dynamic Range; and Saturation modification that affects neither Brightness nor Hue.
The efficiency of color image editing software depends on the choice of a basic CCS (Color Coordinate System). A CCS that is effective for one editing procedure might be less effective for another. This paper presents a set of correlated CCSs with a specification of their preferred areas of application.
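One way to obtain Brightness/Contrast editing that leaves chromatic coordinates untouched (the paper's own algorithms and CCSs are not published here) is to scale each pixel's linear RGB triple by a common per-pixel factor derived from its luminance: since all three channels are multiplied by the same number, the chromaticity ratios are preserved exactly. A minimal sketch, with an assumed power-law contrast curve and Rec.709 luminance weights:

```python
import numpy as np

def chromaticity(rgb):
    """Chromaticity coordinates r, g, b = R,G,B / (R+G+B)."""
    s = rgb.sum(axis=-1, keepdims=True)
    return rgb / np.where(s == 0, 1, s)

def adjust_contrast(rgb, gain, pivot=0.18):
    """Contrast change applied to luminance only: each pixel's linear RGB
    triple is scaled by one common factor, so chromaticity is untouched."""
    lum = rgb @ np.array([0.2126, 0.7152, 0.0722])  # Rec.709 luminance
    new_lum = pivot * (lum / pivot) ** gain          # power-law contrast
    scale = np.where(lum == 0, 0, new_lum / np.where(lum == 0, 1, lum))
    return rgb * scale[..., None]

rng = np.random.default_rng(0)
img = rng.uniform(0.05, 1.0, (4, 4, 3))
out = adjust_contrast(img, gain=1.4)
# Chromaticity coordinates are preserved exactly.
print(np.allclose(chromaticity(img), chromaticity(out)))  # -> True
```

This mirrors the abstract's claim in the simplest possible form; the actual algorithms operate in dedicated correlated CCSs.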
We developed an experimental single-chip color HDTV video image acquisition system with an 8M-pixel CMOS image sensor. The imager has 3840 (H) × 2160 (V) effective pixels and built-in analog-to-digital converters, and its frame rate is 60 fps with progressive scanning. The MTF characteristic we measured with this system on the luminance signal in the horizontal direction was about 45% at 800 TV lines. This MTF was better than that of conventional three-pickup broadcasting cameras; therefore, the enhancement gain (the "enhancement area" in MTF) of the 8M single-chip HDTV system was about half that of the three-pickup cameras. We also measured the color characteristics and corrected the color gamut using matrix gains on the primary colors. We set the color correction target similar to that of three-pickup color cameras in order to allow multiple cameras to shoot together for broadcasting, where all cameras are controlled in the same manner. The color error between the single-chip system and three-pickup cameras after the correction became 2.7, which is small enough for practical use.
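Matrix-gain color correction of the kind described above amounts to fitting a 3×3 matrix that maps the camera's measured RGB values onto the target values over a set of color patches. A minimal least-squares sketch with hypothetical patch data (the crosstalk matrix and patch values are invented for illustration):

```python
import numpy as np

def fit_ccm(measured, target):
    """Least-squares 3x3 color correction matrix M with target ≈ measured @ M."""
    M, *_ = np.linalg.lstsq(measured, target, rcond=None)
    return M

rng = np.random.default_rng(1)
# Hypothetical patch measurements: camera RGB is a channel-crosstalked
# version of the target RGB, which a 3x3 matrix can undo exactly.
target = rng.uniform(0, 1, (24, 3))
crosstalk = np.array([[0.80, 0.15, 0.05],
                      [0.10, 0.80, 0.10],
                      [0.05, 0.20, 0.75]])
measured = target @ crosstalk
M = fit_ccm(measured, target)
print(np.abs(measured @ M - target).max() < 1e-9)  # exact for linear crosstalk
```

With real sensor data the residual is nonzero, and its magnitude corresponds to the color error figure reported in the abstract.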
Due to camera module miniaturization, the pixel area of digital sensors decreases, which also decreases the signal-to-noise ratio of the captured images. As a consequence, image de-noising remains an important topic in the digital image processing field. In this paper we address the problem of image de-noising using the non-local means algorithm. This method has excellent de-noising properties, but at the expense of high computational complexity. We propose a novel approach that provides similar filtering capabilities with much less computational effort and shorter processing time. Our proposed algorithm is compared with the non-local means algorithm and with another recently reported fast implementation in terms of processing time and noise reduction capability (from both the visual-impression and mean-squared-error points of view). Comparative results are presented for artificially degraded images and also for images obtained with a camera phone.
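For reference, the baseline non-local means algorithm averages pixels weighted by the similarity of their surrounding patches; its cost, O(pixels × search-window² × patch²), is exactly what the paper's fast variant targets. A compact reference sketch (parameters h, patch and search sizes are illustrative, not the paper's):

```python
import numpy as np

def nlmeans(img, patch=1, search=5, h=0.1):
    """Basic non-local means on a 2D image: each output pixel is a
    patch-similarity-weighted average over a local search window."""
    p = np.pad(img, patch + search, mode='reflect')
    out = np.zeros_like(img)
    H, W = img.shape
    for y in range(H):
        for x in range(W):
            cy, cx = y + patch + search, x + patch + search
            ref = p[cy - patch:cy + patch + 1, cx - patch:cx + patch + 1]
            wsum = vsum = 0.0
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    ny, nx = cy + dy, cx + dx
                    cand = p[ny - patch:ny + patch + 1, nx - patch:nx + patch + 1]
                    w = np.exp(-((ref - cand) ** 2).mean() / (h * h))
                    wsum += w
                    vsum += w * p[ny, nx]
            out[y, x] = vsum / wsum
    return out

rng = np.random.default_rng(2)
clean = np.tile(np.linspace(0, 1, 16), (16, 1))
noisy = clean + rng.normal(0, 0.08, clean.shape)
den = nlmeans(noisy, search=3)
mse = lambda a, b: ((a - b) ** 2).mean()
print(mse(den, clean) < mse(noisy, clean))  # -> True (noise reduced)
```

The quadruple loop makes the cost explicit; fast variants restructure this computation rather than change the underlying weighting idea.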
Digital images captured with CMOS image sensors suffer from Gaussian noise and impulsive noise. To efficiently reduce the noise in the Image Signal Processor (ISP), we analyze the noise characteristics along the imaging pipeline of the ISP, where the noise reduction algorithm is performed. Gaussian and impulsive noise reduction methods are proposed for proper ISP implementation in the Bayer domain. The proposed method takes advantage of the analyzed noise characteristics to calculate the noise reduction filter coefficients; thus, noise is adaptively reduced according to the scene environment. Since noise is amplified, and its characteristics change, as the image sensor signal undergoes several image processing steps, it is better to remove noise at an early stage of the imaging pipeline. Noise reduction is therefore carried out in the Bayer domain of the ISP pipeline. The method was tested on the imaging pipeline of the ISP with images captured from a Samsung 2M CMOS image sensor test module. The experimental results show that the proposed method removes noise while effectively preserving edges.
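Working in the Bayer domain means filtering each color plane of the mosaic separately, with coefficients adapted to an assumed noise level. The sketch below is a generic stand-in (a sigma filter with a fixed noise estimate), not the paper's actual coefficient calculation, to show the plane-splitting and the edge-preserving adaptation:

```python
import numpy as np

def bayer_planes(raw):
    """Split an RGGB mosaic into its four color planes."""
    return {'R': raw[0::2, 0::2], 'Gr': raw[0::2, 1::2],
            'Gb': raw[1::2, 0::2], 'B': raw[1::2, 1::2]}

def sigma_filter(plane, k=2.0, sigma=0.05):
    """Average only the 3x3 neighbors within k*sigma of the center pixel:
    flat areas are smoothed, edges (large differences) are preserved."""
    p = np.pad(plane, 1, mode='reflect')
    acc = np.zeros_like(plane)
    cnt = np.zeros_like(plane)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            n = p[1 + dy:1 + dy + plane.shape[0], 1 + dx:1 + dx + plane.shape[1]]
            keep = np.abs(n - plane) <= k * sigma
            acc += np.where(keep, n, 0)
            cnt += keep
    return acc / cnt

rng = np.random.default_rng(3)
raw = np.full((16, 16), 0.5) + rng.normal(0, 0.05, (16, 16))
filtered = {name: sigma_filter(pl) for name, pl in bayer_planes(raw).items()}
print(filtered['R'].std() < bayer_planes(raw)['R'].std())  # -> True
```

In the paper the per-pixel threshold would come from the analyzed, signal-dependent noise model rather than a fixed sigma.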
The conventional RGB color filter array (CFA) in single-chip color cameras greatly reduces light intensity and therefore limits the ISO speed of cameras. A novel CFA and an algorithm for forming color images are proposed. 75% of the CFA consists of transparent elements, and the remaining 25% of repeated color filter blocks. A compact arrangement of color filter elements in each block helps to reduce color artifacts. Such an arrangement features a higher sampling rate for luminance and a lower sampling rate for chrominance.
Black-and-white images (BI) with high resolution can be acquired from the transparent elements, and color images (CI) with low resolution from the color filter blocks. To generate output color images (OI), the CI in RGB format is transformed into CIE Lab space, and the luminance component is replaced with the high-resolution BI.
Because this approach rests on the same luminance-chrominance principle as the JPEG format, the visual quality of the OI is satisfactory. Simulations were conducted using raw images acquired with a Canon 20D camera. The results show the potential of the CFA for making digital cameras with high ISO speed. Applications include day/night security cameras and cell phone cameras able to capture images with low noise under dim light.
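The luminance-replacement step can be sketched compactly. As a simplification, the sketch swaps luminance in YCbCr rather than CIE Lab (the paper's space), with nearest-neighbor upsampling of the low-resolution color image; the principle is the same:

```python
import numpy as np

# Rec.601 RGB->YCbCr matrix, used here as a stand-in for the CIE Lab pipeline.
TO_YCC = np.array([[0.299, 0.587, 0.114],
                   [-0.168736, -0.331264, 0.5],
                   [0.5, -0.418688, -0.081312]])

def replace_luminance(low_res_color, high_res_gray):
    """Upsample the low-res color image (CI), convert to a luma/chroma space,
    and swap in the high-resolution luminance (BI) from the transparent pixels."""
    f = high_res_gray.shape[0] // low_res_color.shape[0]
    up = low_res_color.repeat(f, axis=0).repeat(f, axis=1)  # nearest-neighbor
    ycc = up @ TO_YCC.T
    ycc[..., 0] = high_res_gray            # replace luminance with the BI
    return ycc @ np.linalg.inv(TO_YCC).T   # back to RGB (the OI)

rng = np.random.default_rng(4)
hi = rng.uniform(0, 1, (8, 8))        # high-res luminance (BI)
lo = rng.uniform(0, 1, (4, 4, 3))     # low-res color (CI)
out = replace_luminance(lo, hi)
# The output color image carries the high-res luminance exactly.
print(np.allclose(out @ TO_YCC.T[:, 0], hi))  # -> True
```

Fine detail thus comes from the 75% transparent pixels, while color comes from the sparse filter blocks, mirroring JPEG's full-resolution luma with subsampled chroma.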
Many luminance measuring tasks require the luminance distribution of the total viewing field. The approach of image-resolving luminance measurement, which benefits from the continual development of position-resolving radiation detectors, represents a simplification of such measuring tasks.
Luminance measuring cameras already exist that are specially manufactured for measuring tasks with very high requirements. Because of their high-precision design, these cameras are very expensive and not commercially viable for many image-resolving measuring tasks. It is therefore desirable to measure luminance with digital still cameras, which are freely available at reasonable prices.
This paper presents a method for using digital still cameras as luminance meters independent of the exposure settings. A calibration of the camera is performed with the help of an OECF (opto-electronic conversion function) measurement, and the luminance is calculated from the camera's digital RGB output values. The test method and the computation of the luminance value irrespective of exposure variations are described. The error sources that influence the result of the luminance measurement are also discussed.
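The exposure-independent computation can be sketched as follows: the measured OECF is inverted to recover relative exposure from the digital value, and the photometric relation L ∝ H·N²/(t·S) factors out the aperture, exposure time, and ISO setting. The gamma curve and the calibration constant K below are assumed placeholders, not values from the paper:

```python
# Hypothetical calibration: the OECF is modeled as a pure gamma curve and K
# is an assumed calibration constant; a real camera needs a measured OECF.
GAMMA, DV_MAX, K = 2.2, 255.0, 12.5

def inverse_oecf(dv):
    """Relative exposure H recovered from the digital value."""
    return (dv / DV_MAX) ** GAMMA

def luminance(dv, f_number, exposure_time, iso):
    """Luminance estimate independent of exposure settings:
    L = K * H * N^2 / (t * S)."""
    return K * inverse_oecf(dv) * f_number ** 2 / (exposure_time * iso)

# The same scene luminance photographed with two different exposure settings
# must give the same estimate once the settings are factored out.
L1 = luminance(dv=128.0, f_number=4.0, exposure_time=1 / 60, iso=100)
dv2 = DV_MAX * (inverse_oecf(128.0) * 2.0) ** (1 / GAMMA)  # one stop more light
L2 = luminance(dv=dv2, f_number=4.0, exposure_time=1 / 30, iso=100)
print(abs(L1 - L2) < 1e-9)  # -> True
```

The error sources the paper discusses (OECF measurement accuracy, vignetting, spectral mismatch of the RGB channels) all enter through this chain.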
We present an approach to motion deblurring based on exploiting the information available in two differently exposed images of the same scene. Besides the normal-exposed image of the scene, we assume that a short-exposed image is also available. Due to their different exposures, the two images are degraded differently: the short-exposed image is affected by noise, whereas the normal-exposed image may be affected by motion blur. The method presented in this paper estimates the motion blur point spread function (PSF) that models the degradation of the normal-exposed image, and then recovers the image of the scene by deconvolution. The main processing steps detailed in the paper are image registration and motion blur PSF estimation. The image registration operation includes a preprocessing step meant to cancel the differences between the two images due to their different exposures. Next, the registration parameters are estimated by matching the preprocessed images with an image-based registration approach. Motion blur PSF estimation is carried out by exploiting the difference between the degradation models of the two images, as well as certain prior assumptions about a typical motion blur PSF. Experiments and comparisons are presented to validate the proposed method.
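The core of PSF estimation from a registered pair can be sketched in the frequency domain: if the blurred image is the sharp (short-exposed) image convolved with the kernel, a regularized spectral division recovers the kernel. This is a simplified noise-free sketch assuming circular convolution and omitting the paper's PSF priors:

```python
import numpy as np

def estimate_psf(blurred, sharp, eps=1e-3):
    """Regularized frequency-domain estimate of the kernel k with
    blurred ≈ sharp (*) k, where (*) is circular convolution."""
    B, S = np.fft.fft2(blurred), np.fft.fft2(sharp)
    K = B * np.conj(S) / (np.abs(S) ** 2 + eps)  # Wiener-style division
    return np.real(np.fft.ifft2(K))

rng = np.random.default_rng(5)
sharp = rng.uniform(0, 1, (32, 32))
# Synthetic horizontal motion blur: average of 5 circular shifts.
kernel = np.zeros((32, 32))
kernel[0, :5] = 1 / 5
blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) * np.fft.fft2(kernel)))
est = estimate_psf(blurred, sharp)
print(np.abs(est - kernel).max() < 0.05)  # -> True (kernel recovered)
```

With a noisy short-exposed image, the division alone is unstable; this is where the prior assumptions about typical motion blur PSFs become essential.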
A template matching approach is used to demosaic (reconstruct) a full color image from the sparse pixel data captured by a CMOS imager. The proposed method is based on the Nevatia-Babu template-based linear feature extraction algorithm. This approach provides the color accuracy of gradient-based algorithms, yet reaps the benefit of the processing regularity of bilinear interpolation. In this paper we describe the algorithm (gradient estimation and color interpolation), compare results with those of other approaches, and present a hardware implementation. The consideration of hardware implementation is especially important as CMOS imagers find their way into low-cost devices such as cell phones and other novelty camera applications.
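To illustrate the gradient-based class of demosaicing the abstract compares against (not the Nevatia-Babu templates themselves), here is the classic gradient-directed green interpolation at a non-green Bayer site: interpolate along the direction with the smaller gradient, so edges are followed instead of smeared. The mosaic values are invented for the example:

```python
import numpy as np

def interp_green(raw, y, x):
    """Gradient-directed green estimate at a non-green Bayer site:
    interpolate along the direction with the smaller gradient."""
    dh = abs(raw[y, x - 1] - raw[y, x + 1])
    dv = abs(raw[y - 1, x] - raw[y + 1, x])
    if dh < dv:
        return (raw[y, x - 1] + raw[y, x + 1]) / 2.0
    if dv < dh:
        return (raw[y - 1, x] + raw[y + 1, x]) / 2.0
    return (raw[y, x - 1] + raw[y, x + 1] + raw[y - 1, x] + raw[y + 1, x]) / 4.0

# Hypothetical mosaic with a vertical edge between columns 2 and 3: the
# horizontal neighbors straddle the edge, the vertical ones do not.
raw = np.zeros((5, 6))
raw[:, :3], raw[:, 3:] = 0.2, 0.9
print(interp_green(raw, 2, 2))  # -> 0.2 (interpolated along the edge)
bilinear = (raw[2, 1] + raw[2, 3] + raw[1, 2] + raw[3, 2]) / 4.0
print(bilinear)                 # -> 0.375 (plain bilinear smears the edge)
```

The data-dependent branch is what makes such algorithms irregular in hardware; the Nevatia-Babu template formulation recovers the regularity while keeping the directional accuracy.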
The color characterization of professional imaging devices typically involves the capture of a reference color target under the scene-specific lighting conditions and the use of dedicated profiling software. However, the limited set of color patches on the target may not adequately represent the reflection spectra found in the scene. We present a solution, developed in collaboration with a camera manufacturer, for the automatic color characterization of the sensors without the need for a physical color target. The optimal color transforms are computed from the individually measured sensor spectral sensitivities, computer-generated sets of color spectra forming a virtual characterization target, and a mathematical model of the camera. The use of a virtual target enables the optimization of the color transform for specific image capturing situations by selective generation of the reflection spectra.
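The virtual-target idea can be sketched end to end: simulate camera responses and target tristimulus values for a generated set of reflectance spectra, then fit the color transform between them. Everything below is a synthetic stand-in (Gaussian sensitivities and observer curves, random smooth reflectances) for the measured and generated data of the real workflow:

```python
import numpy as np

rng = np.random.default_rng(6)
wl = np.linspace(400, 700, 31)
gauss = lambda mu, s: np.exp(-((wl - mu) / s) ** 2)

# Stand-ins for measured sensor sensitivities and CIE-like observer curves.
sens = np.stack([gauss(610, 50), gauss(540, 50), gauss(460, 40)], axis=1)
observer = np.stack([gauss(600, 45), gauss(550, 45), gauss(450, 35)], axis=1)
# Virtual characterization target: random smooth reflectance spectra.
basis = np.stack([gauss(mu, 60) for mu in (420, 500, 580, 660)], axis=1)
reflect = np.clip(rng.uniform(0, 1, (200, 4)) @ basis.T, 0, 1)

camera = reflect @ sens        # simulated camera responses (200 x 3)
target = reflect @ observer    # desired tristimulus values  (200 x 3)
M, *_ = np.linalg.lstsq(camera, target, rcond=None)  # optimal 3x3 transform
print(M.shape, ((camera @ M - target) ** 2).mean() < (target ** 2).mean())
```

Tailoring the transform to a capture situation then amounts to generating the reflectance set from the spectra expected in that scene, rather than from a fixed patch chart.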
Although solid-state image sensors are known to develop defects in the field, little information is available about the nature, quantity, or development rate of these defects. We report on an algorithm and calibration tests which confirmed the existence of significant quantities of in-field defects in 4 out of 5 high-end digital cameras. Standard hot pixels were identified in all 4 cameras. Stuck hot pixels, which have not been described previously, were identified in 2 cameras. Previously, hot pixels were thought to have no impact at short exposure durations, but the large offset of stuck hot pixels will degrade almost any image and cannot be ignored. Fully stuck and abnormal-sensitivity defects were not found, and a spatial investigation found no clustering. We tracked hot pixel growth over the lifetime of one camera using only normal photographs. We show that defects develop continually over the lifetime of the sensor, starting within several months of first use, and do not heal over time. Our success in tracing the history of each defect confirms the feasibility of using automatic defect identification to analyze defect response and growth characteristics in a multitude of cameras already in the field, without performing additional experiments or requiring physical access to the cameras.
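The distinction between the two defect types can be captured with a per-pixel dark-response model D(t) = offset + rate·t: standard hot pixels have a large rate (visible only at long exposures), while stuck hot pixels additionally have a large offset (degrading even short exposures). A sketch with invented dark-frame values and an arbitrary threshold, not the paper's calibration procedure:

```python
import numpy as np

def classify_dark_pixels(dark_short, dark_long, t_short, t_long, thresh=0.05):
    """Fit D(t) = offset + rate*t per pixel from two dark frames and flag
    'hot' (large rate) and 'stuck hot' (large rate AND large offset) pixels."""
    rate = (dark_long - dark_short) / (t_long - t_short)
    offset = dark_short - rate * t_short
    hot = rate * t_long > thresh
    stuck = hot & (offset > thresh)
    return hot, stuck

t1, t2 = 0.01, 1.0
short_f = np.zeros((8, 8))
long_f = np.zeros((8, 8))
short_f[2, 2], long_f[2, 2] = 0.001, 0.1   # standard hot: grows with exposure
short_f[5, 5], long_f[5, 5] = 0.2, 0.3     # stuck hot: offset plus growth
hot, stuck = classify_dark_pixels(short_f, long_f, t1, t2)
print(hot.sum(), stuck.sum())  # -> 2 1
```

The paper's contribution is doing this tracing from ordinary photographs over the camera's lifetime, rather than from dedicated dark frames.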
In wide dynamic range imaging, some adjustment is needed to display an image whose dynamic range is wider than that of the display. Because the dynamic range of existing displays is limited to 8 to 10 bits for each color, image quality degrades when the response characteristic is not suitable for the objects. One of our smart image sensors can control the integration time of each pixel. In this sensor, the intermediate photodiode (PD) value is compared with an arbitrary threshold at an arbitrary time and reset depending on the result; we call this the "judgment and reset" process. The characteristic for wide dynamic range imaging is determined by the thresholds and timings. We therefore report an optimization of these parameters adapted to the brightness distribution of the objects, together with a new sensor which realizes the proposed method. Before the first judgment, the PD values are read out and passed to an external circuit in order to estimate a rough histogram of the image at the end of the frame. In the histogram, runs of successive occupied bins are recognized as detected blocks, and the allocation of output values in the response characteristic is decided mainly by the block widths. In consequence, the dynamic range is widened and the contrast is dramatically improved.
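The effect of the judgment-and-reset process on the pixel response can be sketched with a simplified model: at each check time the integrated value is clipped to the threshold, so bright pixels effectively integrate only after the last reset, producing a compressed, multiple-slope characteristic. Check times and thresholds below are arbitrary illustrative choices, not optimized parameters:

```python
import numpy as np

def judgment_and_reset(intensity, T=1.0, checks=((0.5, 0.6), (0.875, 0.8))):
    """Pixel value after 'judgment and reset' (simplified model): at each
    check time tc the integrated value is compared with threshold th and
    clipped back to th if it exceeds it, compressing bright pixels."""
    v, t_prev = 0.0, 0.0
    for tc, th in checks:
        v += intensity * (tc - t_prev)
        if v > th:
            v = th                     # judgment and reset
        t_prev = tc
    return v + intensity * (T - t_prev)  # final integration segment

lum = np.linspace(0, 10, 6)
resp = np.array([judgment_and_reset(i) for i in lum])
# Monotone but compressed: the brightest input (10) maps to a value far
# below linear integration (10 * T), so readout does not saturate.
print(np.all(np.diff(resp) > 0), resp[-1] < lum[-1])  # -> True True
```

Optimizing the thresholds and timings from the scene histogram, as the paper proposes, shapes this piecewise-linear curve to match the brightness distribution of the objects.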
In many modern CMOS imagers employing pixel arrays, the optical integration time is controlled by the method known as rolling shutter. This integration technique, combined with fluorescent illuminants exhibiting an alternating light intensity, causes spatial flicker that varies through the image sequence. This flicker can be avoided when the integration time of the imager is adjusted to a multiple of the flicker period. Since the flicker frequency depends on the local AC power frequency, a classification must be performed beforehand, either by utilizing an additional illumination intensity detector or, in the case we focus on, by restricting the analysis to image information only. In this paper we review the state-of-the-art techniques of flicker detection and frequency classification, and propose two robust classification methods based on a clear mathematical model of the illumination flicker problem. Finally, we present a further approach that compensates for flicker in single images suffering from these artifacts by inverting the flicker model function; for this, the flicker phase, amplitude, and frequency must be estimated adaptively. With this approach, the shutter width is no longer limited to a multiple of the flicker period. We present simulation results with synthesized image series as well as with real captured sequences under different illumination frequencies, showing that our approaches classify robustly in most imaging situations.
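The flicker model and its inversion can be sketched directly: under a rolling shutter each row r is exposed at time r·t_row and scaled by a sinusoidal modulation 1 + a·sin(2πf·t + φ); dividing each row by the model removes the banding. In this sketch the model parameters are assumed known, whereas the paper estimates them adaptively:

```python
import numpy as np

def add_flicker(img, amp, freq, phase, row_time):
    """Rolling-shutter flicker model: row r, exposed at time r*row_time,
    is scaled by 1 + amp*sin(2*pi*freq*t + phase)."""
    t = np.arange(img.shape[0]) * row_time
    m = 1.0 + amp * np.sin(2 * np.pi * freq * t + phase)
    return img * m[:, None]

def remove_flicker(img, amp, freq, phase, row_time):
    """Compensation by inverting the model; amp, freq and phase must have
    been estimated beforehand (assumed known here)."""
    t = np.arange(img.shape[0]) * row_time
    m = 1.0 + amp * np.sin(2 * np.pi * freq * t + phase)
    return img / m[:, None]

clean = np.full((120, 4), 0.5)
# 100 Hz intensity flicker (50 Hz mains), 0.1 ms per row: ~1.2 bands per frame.
flick = add_flicker(clean, amp=0.3, freq=100.0, phase=0.4, row_time=1e-4)
fixed = remove_flicker(flick, amp=0.3, freq=100.0, phase=0.4, row_time=1e-4)
print(np.allclose(fixed, clean))  # -> True
```

Note the light intensity alternates at twice the mains frequency, which is why classifying 50 Hz versus 60 Hz mains amounts to distinguishing 100 Hz from 120 Hz flicker.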