Fluorescence lifetime imaging microscopy (FLIM) is a microscopy technique that forms an image of fluorophore lifetimes. It circumvents problems of intensity-based imaging, such as attenuation with depth, since the lifetime is independent of the excitation intensity and the fluorophore concentration. The lifetime is estimated from a time sequence of photon counts observed with signal-dependent noise, which follows a Poisson distribution. Conventional methods usually estimate single- or biexponential decay parameters. However, a lifetime component has a distribution, or width, because the lifetime depends on macromolecular conformation or inhomogeneity. We present a novel algorithm based on a sparse representation that can estimate the lifetime distribution. We verify the enhanced performance through simulations and experiments.
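The core idea, representing the measured decay as a nonnegative combination of single-exponential atoms on a lifetime grid, can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm; the time axis, lifetime grid, signal amplitude, and true lifetime are invented for the example.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

# Time bins (ns) and a grid of candidate lifetimes (the dictionary atoms).
t = np.linspace(0.0, 20.0, 256)
taus = np.linspace(0.5, 8.0, 40)
D = np.exp(-t[:, None] / taus[None, :])   # each column is one decay atom

# Simulate a decay with a true lifetime of 2.5 ns and Poisson counting noise.
true_tau = 2.5
signal = 1000.0 * np.exp(-t / true_tau)
counts = rng.poisson(signal).astype(float)

# Nonnegative least squares yields a sparse, nonnegative weight vector over
# the lifetime grid, i.e. an estimate of the lifetime distribution.
weights, _ = nnls(D, counts)
est_tau = taus[np.argmax(weights)]        # dominant lifetime component
```

The recovered weight vector is mostly zero, with mass concentrated near the true lifetime; a distributed (broadened) lifetime would show up as weight spread over several adjacent atoms.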
Bilateral filtering is a nonlinear technique that reduces noise in images while preserving strong edges. Due to its nonlinear nature, the performance of the filter is difficult to analyze. We derive a closed-form equation of bilateral filtering for flat regions, which reveals the relationship between noise reduction and the filtering parameters. The analysis explicitly shows that noise reduction depends on the ratio of the range parameter to the noise standard deviation, confirming reported empirical observations. The derived result is a step toward estimating the optimal parameters for minimum mean square error. We demonstrate that the theoretical analysis is consistent with simulations.
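The setting can be reproduced numerically: filter a flat region corrupted by Gaussian noise and compare the output noise level with the input. The brute-force bilateral filter below and all parameter values (spatial sigma, range sigma as a multiple of the noise standard deviation) are illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)

def bilateral_flat(img, sigma_s=2.0, sigma_r=30.0, radius=3):
    """Brute-force bilateral filter (illustrative, O(N * window size))."""
    h, w = img.shape
    pad = np.pad(img, radius, mode='reflect')
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2.0 * sigma_s**2))
    out = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            range_w = np.exp(-(win - img[i, j])**2 / (2.0 * sigma_r**2))
            wgt = spatial * range_w
            out[i, j] = (wgt * win).sum() / wgt.sum()
    return out

# Flat region: constant intensity 100 plus Gaussian noise of std 10.
noise_std = 10.0
flat = 100.0 + noise_std * rng.normal(size=(64, 64))

# Range parameter chosen large relative to the noise std, so the filter
# averages strongly in the flat region.
filtered = bilateral_flat(flat, sigma_r=3.0 * noise_std)
```

In a flat region the residual standard deviation of `filtered` is far below that of `flat`; sweeping `sigma_r / noise_std` reproduces the dependence the closed-form analysis describes.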
Color transforms are important methods in the analysis and processing of images. An image color transform and its inverse should be reversible for lossless image processing applications. However, color conversions are not reversible in practice due to the finite precision of the conversion coefficients. To overcome this limitation, reversible color transforms have been developed. A reversible integer color transform requires multiplications by the transform coefficients, which are in most cases implemented with shift and add operations. We propose to represent the reversible color transform coefficients in canonical signed digit (CSD) form and to exploit their common subexpressions, which significantly reduces the complexity of a hardware implementation. We demonstrate a roughly 50% reduction in computation with the proposed method.
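A CSD representation writes a multiplier with digits in {-1, 0, +1} such that no two adjacent digits are nonzero, which minimizes the number of shift-and-add (or subtract) terms. A small sketch of the conversion, using a generic integer rather than any particular transform coefficient:

```python
def to_csd(x):
    """Canonical signed digit digits of a positive integer, LSB first.
    Digits are in {-1, 0, 1} and no two adjacent digits are nonzero."""
    digits = []
    while x:
        if x & 1:
            d = 2 - (x & 3)   # +1 if x % 4 == 1, -1 if x % 4 == 3
            x -= d            # x becomes divisible by 4, forcing a 0 next
        else:
            d = 0
        digits.append(d)
        x >>= 1
    return digits

def csd_value(digits):
    """Reconstruct the integer from its CSD digits."""
    return sum(d * (1 << i) for i, d in enumerate(digits))

# Example: 7 = 111 in binary (3 add terms) but 8 - 1 in CSD (2 terms).
csd7 = to_csd(7)                      # [-1, 0, 0, 1], i.e. 2**3 - 2**0
nz_csd = sum(d != 0 for d in csd7)    # 2 shift-add/subtract operations
nz_bin = bin(7).count('1')            # 3 operations in plain binary
```

Fewer nonzero digits means fewer adders per coefficient multiplication; sharing common subexpressions across the coefficients of a transform matrix reduces the adder count further.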
We present a color correction algorithm for histogram-equalized images captured by a digital camera. Current color correction methods are based on human perception of luminance and hue. However, these techniques do not consider nonlinear camera characteristics, so the resulting images show color distortions where the brightness modification is severe. We propose a new, effective color correction method based on the brightness and color curve of the camera. It utilizes the relationship between luminance and color variation of the camera used for image capture. We can predict the chrominance variation after a luminance change by tracing the brightness-chrominance curve of the camera model. Therefore, the resulting image shows the color that would have been obtained at a different exposure with the same camera. We verify that the processed images have natural color and are similar to images taken under different exposure conditions. Moreover, the proposed method can be applied to software bracketing: the exposure condition of an image can be changed at the post-processing stage. All test results demonstrate that our method is accurate and useful for enhancing the color of digital images.
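The curve-tracing step can be illustrated with a lookup-and-interpolate sketch. The sample curve below is entirely hypothetical (a real brightness-chrominance curve would come from calibrating the specific camera model), and the YCbCr values are invented for the example:

```python
import numpy as np

# Hypothetical camera brightness-chrominance curve: Cb/Cr of a reference
# patch measured at several luminance (exposure) levels.
luma_samples = np.array([40.0, 80.0, 120.0, 160.0, 200.0])
cb_samples = np.array([118.0, 121.0, 124.0, 126.0, 127.0])
cr_samples = np.array([140.0, 136.0, 133.0, 131.0, 130.0])

def predict_chroma(y_new):
    """Trace the curve: interpolate chrominance at the new luminance."""
    cb = np.interp(y_new, luma_samples, cb_samples)
    cr = np.interp(y_new, luma_samples, cr_samples)
    return cb, cr

# After histogram equalization raises a pixel's luminance from 80 to 160,
# correct its chrominance to what the camera would produce at that exposure.
cb_new, cr_new = predict_chroma(160.0)
```

The same lookup applied per pixel gives chrominance consistent with an actual exposure change, rather than the distorted values produced by modifying luminance alone.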
With the wide use of digital multimedia equipment, the exchange of image data has become prevalent. Images and video are exchanged through personal computers, mobile phones, and personal digital assistants (PDAs). Since displays differ in resolution, it is necessary to resize images accordingly. Image resizing in the discrete cosine transform (DCT) domain is known to be fast for compressed images. Most DCT-domain methods truncate the high-frequency components during downsampling and assume them to be zero during upsampling. We instead estimate the high-frequency part using the correlation between the low- and high-frequency components, and compare the peak SNR (PSNR) performance. We verify that the use of this correlation is the best linear estimator in the mean square error sense.
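The truncate-and-zero-pad baseline that the paper improves upon can be sketched for a single 8x8 block. The block contents and sizes are arbitrary; the paper's contribution (estimating the discarded high frequencies from their correlation with the kept ones) is not reproduced here:

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(2)
block = rng.uniform(0.0, 255.0, size=(8, 8))

# Downsample 8x8 -> 4x4 in the DCT domain: keep the 4x4 low-frequency
# corner and rescale so intensities are preserved (orthonormal DCT).
C = dctn(block, norm='ortho')
small = idctn(C[:4, :4] * (4 / 8), norm='ortho')

# Upsample back 4x4 -> 8x8: zero-pad the coefficients, i.e. assume the
# missing high-frequency components are zero (the baseline assumption).
C_up = np.zeros((8, 8))
C_up[:4, :4] = dctn(small, norm='ortho') * (8 / 4)
up = idctn(C_up, norm='ortho')
```

The round trip preserves the low-frequency content exactly but loses everything above it; replacing the zeros in `C_up` with a linear estimate from the low-frequency coefficients is where the PSNR gain comes from.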
Many image and video compression standards, such as JPEG, MPEG, and H.263, are based on the discrete cosine transform (DCT), quantization, and Huffman coding. Quantization error is the major source of image quality degradation. The conventional dequantization method assumes a uniform distribution of DCT coefficients within each quantization interval, so the reconstruction value is the center of the interval. However, DCT coefficients are better modeled by a Laplacian probability density function (pdf). We derive the optimal reconstruction value in closed form under the Laplacian pdf and show the effect of the correction on image quality. We estimate the Laplacian pdf parameter for each DCT coefficient and obtain the corrected reconstruction value from the proposed theoretical prediction. The corrected value depends on the Laplacian pdf parameter and the quantization step size Q. The resulting PSNR improvement is about 0.2 to 0.4 dB. We also analyze the reason for the limited improvement.
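The optimal reconstruction value is the conditional mean of the Laplacian over the quantization interval. For p(x) proportional to exp(-lam*|x|) and an interval [a, a+Q] with a > 0, the conditional mean is a + 1/lam - Q/(exp(lam*Q) - 1), which always lies closer to zero than the interval midpoint. The parameter values below are arbitrary, chosen only to show the correction:

```python
import numpy as np

lam, Q = 0.1, 20.0   # Laplacian rate parameter and quantization step size
a = Q / 2            # lower edge of the first nonzero bin [Q/2, 3Q/2)

# Closed-form conditional mean of a Laplacian over [a, a+Q], a > 0:
#   E[x | a <= x < a+Q] = a + 1/lam - Q / (exp(lam*Q) - 1)
x_opt = a + 1.0 / lam - Q / np.expm1(lam * Q)

# Numerical check: compute the same conditional mean by direct summation
# of x * p(x) over a fine grid.
xs = np.linspace(a, a + Q, 200001)
p = np.exp(-lam * xs)
x_num = (xs * p).sum() / p.sum()

midpoint = a + Q / 2   # the conventional (uniform-pdf) reconstruction value
shift = midpoint - x_opt   # correction, always toward zero
```

With these numbers the corrected value is about 16.9 versus the conventional midpoint 20, i.e. the uniform-distribution assumption systematically over-reconstructs the magnitude of the coefficients.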
The accuracy of the iterative closest point (ICP) algorithm, which is widely employed in image registration, depends on the complexity of the shape of the object under registration. Objects with complex features yield higher reliability in estimating the registration parameters. For objects with rotational symmetry, a cylinder for example, rotation about the center axis cannot be determined. We derive the sensitivity of the rotation error of the ICP algorithm from the curvature of the error function near the minimum-error position. We approximate the error function by a second-order polynomial and show that the coefficient of the second-order term is related to the reliability of the estimated rotation angle; this coefficient is in turn determined by the shape of the object. In the known-correspondence case, the reliability can be expressed by the second moment of the input image. Finally, we apply the sensitivity formula to a simple synthetic object and to ellipses, and verify that the predicted orientation variance of the ICP algorithm is in good agreement with computer simulations.
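In the known-correspondence case the connection to the second moment can be checked directly. For a 2D point set, the rotation error function is E(theta) = sum |R(theta) p - p|^2 = 2(1 - cos theta) * sum |p|^2, so the quadratic coefficient of its fit near theta = 0 equals the second moment sum |p|^2. The ellipse and sampling below are an invented test case in the spirit of the paper's synthetic experiments:

```python
import numpy as np

# Points on an ellipse with semi-axes 3 and 1: a shape whose orientation
# is well defined, so rotation is observable in the error function.
phi = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
pts = np.stack([3.0 * np.cos(phi), 1.0 * np.sin(phi)], axis=1)

def rot_error(theta):
    """Known-correspondence registration error E(theta) = sum |R p - p|^2."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    d = pts @ R.T - pts
    return (d**2).sum()

# Sample E near the minimum (theta = 0) and fit a second-order polynomial.
thetas = np.linspace(-0.1, 0.1, 41)
E = np.array([rot_error(t) for t in thetas])
coef = np.polyfit(thetas, E, 2)    # coef[0] is the quadratic coefficient

second_moment = (pts**2).sum()     # sum of |p_i|^2
```

A larger quadratic coefficient means a sharper error minimum, hence a smaller variance of the estimated rotation angle; a rotationally symmetric point set (a circle centered at the origin) would still have a nonzero second moment but an error function that is flat in theta, which is why symmetry, not the moment alone, governs identifiability in the general case.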
Determining a target position requires the x-ray source positions and image plane orientations in 3D space. Given those parameters, the target position is found from the intersection of the two lines joining each x-ray source to the corresponding image point of the target. The target localization error is represented by an analytically derived covariance matrix, which is verified to be in good agreement with simulation.
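The geometric step, intersecting the two source-to-image-point lines, reduces to a small least-squares problem; with noisy parameters the lines are skew, so the midpoint of their shortest connecting segment is a natural estimate. The source positions and target below are invented for the example:

```python
import numpy as np

def closest_point(s1, d1, s2, d2):
    """Least-squares 'intersection' of two 3D lines: the midpoint of the
    shortest segment between them. s1, s2 are line origins (x-ray sources);
    d1, d2 are unit directions toward the target's image points."""
    # Solve for t1, t2 minimizing |s1 + t1*d1 - (s2 + t2*d2)|^2.
    A = np.stack([d1, -d2], axis=1)
    t, *_ = np.linalg.lstsq(A, s2 - s1, rcond=None)
    p1 = s1 + t[0] * d1
    p2 = s2 + t[1] * d2
    return 0.5 * (p1 + p2)

# Two x-ray sources viewing a target at (1, 2, 3).
target = np.array([1.0, 2.0, 3.0])
s1 = np.array([0.0, 0.0, 10.0])
s2 = np.array([8.0, 0.0, 10.0])
d1 = (target - s1) / np.linalg.norm(target - s1)
d2 = (target - s2) / np.linalg.norm(target - s2)

est = closest_point(s1, d1, s2, d2)
```

Propagating the uncertainty of `s1`, `s2`, `d1`, `d2` through this solution linearly is what yields the analytical covariance matrix of the localization error.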