KEYWORDS: High dynamic range imaging, Photography, Image enhancement, Tablets, Cameras, Digital cameras, Cell phones, Digital photography, Sensors, Image processing
High Dynamic Range (HDR) technology enables photographers to capture a greater range of tonal detail. HDR is
typically used to bring out detail in a dark foreground object set against a bright background. HDR technologies include multi-frame HDR and single-frame HDR. Multi-frame HDR combines a sequence of images taken at different exposures. Single-frame HDR applies histogram-equalization post-processing to a single image, a technique referred to as local tone mapping (LTM). Images generated using HDR technology can look less natural than their non-HDR counterparts. Sometimes it is desirable to enhance only small regions of an original image, for example, to enhance the tonal detail of one subject's face while preserving the original background.
The Touch HDR technique described in this paper achieves these goals by enabling selective blending of HDR and non-HDR versions of the same image to create a hybrid image. The HDR version of the image can be generated by either multi-frame or single-frame HDR. Selective blending can be performed as a post-processing step, for example, as a feature of a photo editor application, at any time after the image has been captured. HDR and non-HDR blending is controlled by a weighting surface, which is configured by the user through a sequence of touches on a touchscreen.
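As a rough illustration of the blending step, the sketch below builds a weighting surface from a list of touch points and mixes the HDR and non-HDR renderings per pixel. The Gaussian bump shape, the sigma value, and the function names are assumptions made for illustration only; the paper's actual weighting surface and touch-gesture handling may differ.

```python
import numpy as np

def touch_weight_surface(shape, touches, sigma=80.0):
    """Build a per-pixel blend weight map in [0, 1] from a list of touch points.

    Each touch deposits a Gaussian bump centred on the touched pixel;
    overlapping bumps are clipped so the weight never exceeds 1.
    """
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    weight = np.zeros((h, w), dtype=np.float32)
    for (tx, ty) in touches:
        weight += np.exp(-((xx - tx) ** 2 + (yy - ty) ** 2) / (2.0 * sigma ** 2))
    return np.clip(weight, 0.0, 1.0)

def blend_hdr(non_hdr, hdr, weight):
    """Per-pixel blend of the two renderings: weight 1 keeps the HDR version,
    weight 0 keeps the original non-HDR version."""
    w = weight[..., None]  # broadcast the 2-D weight map over colour channels
    out = w * hdr.astype(np.float32) + (1.0 - w) * non_hdr.astype(np.float32)
    return out.astype(np.uint8)

# Hypothetical usage: enhance only the region around a touch at pixel (400, 300),
# assuming non_hdr and hdr are spatially registered uint8 RGB images.
# weight = touch_weight_surface(non_hdr.shape[:2], [(400, 300)])
# hybrid = blend_hdr(non_hdr, hdr, weight)
```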
The dynamic range of an imager is determined by the ratio of the pixel well capacity to the noise floor. As the scene
dynamic range becomes larger than the imager dynamic range, the choices are to saturate some parts of the scene or
“bury” others in noise. In this paper we propose an algorithm that produces high dynamic range images by “stacking”
sequentially captured frames, which reduces the noise and creates additional bits of precision. The frame stacking is done by frame
alignment subject to a projective transform and temporal anisotropic diffusion. The noise sources contributing to the
noise floor are the sensor heat noise, the quantization noise, and the sensor fixed pattern noise. We demonstrate that by stacking images the quantization and heat noise are reduced and the decrease is limited only by the fixed pattern noise. As the noise is reduced, the resulting cleaner image enables the use of adaptive tone mapping algorithms which render HDR images in an 8-bit container without significant noise increase.
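A minimal sketch of the stacking idea, assuming OpenCV for feature-based alignment: each frame is registered to the first one with a projective transform and the aligned frames are averaged. The paper's temporal anisotropic diffusion merge is not reproduced here; a plain mean stands in for it, and the function names are illustrative.

```python
import cv2
import numpy as np

def align_to_reference(ref_gray, frame_gray):
    """Estimate a projective transform (homography) mapping the frame onto the reference."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(ref_gray, None)
    k2, d2 = orb.detectAndCompute(frame_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d2, d1), key=lambda m: m.distance)[:500]
    src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H

def stack_frames(frames):
    """Warp every frame onto the first one and average the stack.

    Averaging N frames attenuates the temporal (heat) and quantization noise by
    roughly sqrt(N); fixed pattern noise is common to all frames and is not reduced.
    """
    ref = frames[0]
    ref_gray = cv2.cvtColor(ref, cv2.COLOR_BGR2GRAY)
    acc = ref.astype(np.float64)
    for frame in frames[1:]:
        H = align_to_reference(ref_gray, cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
        warped = cv2.warpPerspective(frame, H, (ref.shape[1], ref.shape[0]))
        acc += warped
    return acc / len(frames)   # float result with extended effective bit depth
```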
Stereo metrology involves obtaining spatial estimates of an object’s length or perimeter using the disparity between
boundary points. True 3D scene information is required to extract length measurements of an object’s projection onto
the 2D image plane. In stereo vision the disparity measurement is highly sensitive to object distance, baseline distance,
calibration errors, and relative movement of the left and right demarcation points between successive frames. Therefore
a tracking filter is necessary to reduce position error and improve the accuracy of the length measurement to a useful
level. A Cartesian-coordinate extended Kalman filter (EKF) is designed based on the canonical equations of stereo
vision. This filter represents a simple reference design that has received little exposure in the literature. A second filter
formulated in a modified sensor-disparity (DS) coordinate system is also presented and shown to exhibit lower errors
during a simulated experiment.
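For context, the canonical stereo equations referred to above back-project a matched image point to Cartesian coordinates via z = f·b/d, where d is the disparity. The sketch below computes the raw single-frame length measurement between two demarcation points; the EKF and DS-coordinate filters that smooth these noisy measurements over time are not reproduced, and the parameter names are illustrative.

```python
import numpy as np

def triangulate(u_left, u_right, v, f, b, cx, cy):
    """Canonical stereo back-projection of one image point to Cartesian coordinates.

    f  : focal length in pixels
    b  : baseline between the rectified left and right cameras
    cx, cy : principal point of the left camera
    """
    d = u_left - u_right               # horizontal disparity in pixels
    z = f * b / d                      # depth is inversely proportional to disparity
    x = (u_left - cx) * z / f
    y = (v - cy) * z / f
    return np.array([x, y, z])

def stereo_length(p1_left, p1_right, p2_left, p2_right, f, b, cx, cy):
    """Metric length between two boundary (demarcation) points seen by both cameras."""
    P1 = triangulate(p1_left[0], p1_right[0], p1_left[1], f, b, cx, cy)
    P2 = triangulate(p2_left[0], p2_right[0], p2_left[1], f, b, cx, cy)
    return np.linalg.norm(P1 - P2)
```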
Fisher's linear discriminant analysis (LDA) is traditionally used in statistics and pattern recognition to linearly project
high-dimensional observations from two or more classes onto a low-dimensional feature space before
classification. The computational complexity of the linear feature extraction method increases linearly with the
dimensionality of the observation samples. For high-dimensional signals, this computational cost can render the
method unsuitable for implementation in real time.
In this paper, we propose sparse Fisher's linear discriminant analysis, which allows one to search for low-dimensional
subspaces, spanned by sparse discriminant vectors, in the high-dimensional space of observation
samples from two classes. The sparsity constraints on the space of potential discriminant feature vectors are
enforced using the sparse matrix transform (SMT) framework, proposed recently for regularized covariance
estimation. Classical Fisher's LDA is a special case of sparse Fisher's LDA when the sparsity constraints on the
feature vectors in the estimation algorithm are fully relaxed.
The number of non-zero components in a discriminant direction estimated using our proposed discriminant
analysis technique is tunable; this feature can be used to control the compromise between computational complexity
and accuracy of the eventual classification algorithm. The experimental results discussed in the manuscript
demonstrate the effectiveness of the new method for low-complexity data-classification applications.
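A minimal sketch of the classical two-class Fisher direction is shown below, with a naive hard-threshold standing in for the SMT-based sparsity constraint. The hard threshold is not the paper's algorithm; it is only an illustration of how a k-sparse discriminant direction reduces the per-sample projection cost from the full dimensionality to k multiply-adds.

```python
import numpy as np

def fisher_lda_direction(X0, X1, reg=1e-6):
    """Classical two-class Fisher discriminant direction w ~ Sw^{-1} (m1 - m0)."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
    Sw += reg * np.eye(Sw.shape[0])          # ridge term for numerical stability
    w = np.linalg.solve(Sw, m1 - m0)
    return w / np.linalg.norm(w)

def sparsify(w, k):
    """Keep only the k largest-magnitude components (illustrative surrogate for the
    SMT-constrained estimate; not the paper's estimation algorithm)."""
    w_sparse = np.zeros_like(w)
    idx = np.argsort(np.abs(w))[-k:]
    w_sparse[idx] = w[idx]
    return w_sparse

# Projecting a test sample x onto the sparse direction then costs only k
# multiply-adds instead of the full dimensionality p:
# score = x @ sparsify(fisher_lda_direction(X0, X1), k=10)
```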
Conventional electrophotographic printers tend to produce moiré artifacts when used for printing images scanned from
printed material such as books and magazines. We propose a novel descreening algorithm that removes a wide range of
moiré-causing screen frequencies in a scanned document while preserving image sharpness and edge detail. We develop
two non-linear noise removal algorithms, resolution synthesis denoising (RSD) and modified SUSAN filtering, and use
the combination of the two to achieve a robust descreening performance. The RSD predictor is based on a stochastic
image model whose parameters are optimized in an offline training algorithm using pairs of spatially registered original
and scanned images obtained from real scanners and printers. The RSD algorithm works by classifying the local window
around the current pixel in the scanned image and then applying linear filters optimized for the selected classes. The
modified SUSAN filter uses the output of RSD for performing an edge-preserving smoothing on the raw scanned data and
produces the final output of the descreening algorithm.
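For illustration, the sketch below implements a basic SUSAN-style edge-preserving smoother on a grayscale image. The paper's modified SUSAN filter additionally uses the RSD output as its reference signal, which is not reproduced here, and the window radius and brightness threshold are assumed values.

```python
import numpy as np

def susan_smooth(img, radius=3, sigma=2.0, t=20.0):
    """SUSAN-style edge-preserving smoothing (illustrative sketch only).

    Each pixel becomes a weighted mean of its neighbours; neighbours are
    down-weighted both by spatial distance and by brightness difference,
    so the averaging does not cross edges and preserves text detail.
    """
    img = img.astype(np.float64)
    h, w = img.shape
    out = np.zeros_like(img)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2.0 * sigma ** 2))
    spatial[radius, radius] = 0.0          # SUSAN excludes the centre pixel itself
    pad = np.pad(img, radius, mode='reflect')
    for y in range(h):
        for x in range(w):
            patch = pad[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            tonal = np.exp(-((patch - img[y, x]) / t) ** 2)
            weight = spatial * tonal
            s = weight.sum()
            out[y, x] = (weight * patch).sum() / s if s > 0 else img[y, x]
    return out
```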
The performance of the descreening algorithm was evaluated on a variety of test documents obtained from different
printing sources. The experimental results demonstrate that the algorithm suppresses halftone noise without deteriorating
text and image quality.