Depth from defocus aims to estimate scene depth from two or more photos captured with differing camera parameters,
such as lens aperture or focus, by characterizing the difference in image blur. In the absence of noise, the ratio of Fourier
transforms of two corresponding image patches captured under differing focus conditions reduces to the ratio of the
optical transfer functions, since the contribution from the scene cancels. For a focus or aperture bracket, the shape of this
spectral ratio depends on object depth. Imaging noise complicates matters, introducing biases that vary with object
texture, making extraction of a reliable depth value from the spectral ratio difficult. We propose taking the mean of the complex-valued spectral ratio over an image tile as a depth measure. This cancels much of the effect of noise and significantly reduces depth bias compared with characterizing only the modulus of the spectral ratio. The method is fast to compute and requires no assumed shape for the optical transfer function, such as a Gaussian approximation. Experiments with real-world photographic imaging geometries show that our method produces depth maps with greater tolerance to varying object texture than several previous depth-from-defocus methods.
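The core measure can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the authors' implementation: the helper name `spectral_ratio_depth_measure`, the `eps` stabilizer, and the absence of windowing are all assumptions, and the Gaussian blur below is used only to synthesize a focus bracket, not as an OTF model.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def spectral_ratio_depth_measure(patch_a, patch_b, eps=1e-6):
    """Mean of the complex-valued spectral ratio over an image tile.

    In the ratio F_a / F_b of the tiles' Fourier transforms the scene
    contribution cancels, leaving the ratio of the two optical transfer
    functions; averaging the complex ratio lets much of the noise cancel.
    """
    f_a = np.fft.fft2(patch_a)
    f_b = np.fft.fft2(patch_b)
    # Stabilised complex ratio: F_a / F_b = F_a * conj(F_b) / |F_b|^2
    ratio = f_a * np.conj(f_b) / (np.abs(f_b) ** 2 + eps)
    return ratio.mean()

# Synthetic focus bracket: the same texture blurred by two different
# amounts. Gaussian blur here only *simulates* defocus; the measure
# itself assumes no particular OTF shape.
rng = np.random.default_rng(0)
texture = rng.standard_normal((64, 64))
more_defocused = gaussian_filter(texture, sigma=2.0, mode="wrap")
less_defocused = gaussian_filter(texture, sigma=1.0, mode="wrap")
measure = spectral_ratio_depth_measure(more_defocused, less_defocused)
```

For a genuine focus bracket the complex mean lies between 0 and 1 in modulus, closer to 1 the more similar the two blurs are; in a full pipeline this scalar would be mapped to depth via the known imaging geometry.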
We present a new method for accurately determining the best focus position of a camera lens in the context of image
quality evaluation and modulation transfer function (MTF) measurement. Our method makes use of the “live preview”
function of digital cameras to image a test chart containing spatially and rotationally invariant alignment patterns. The
patterns can be located to sub-pixel accuracy even under defocus using the technique of blur-invariant phase correlation,
which leads to an absolute measure of focus position, independent of any backlash in the lens mechanism. We describe an efficient closed-loop feedback algorithm that uses this measure to drive the lens rapidly to best focus. This method
achieves the peak focus position to within a single step of the focus drive motor, typically allowing the peak focus MTF
to be measured to within 1.4% RMS. The mean time taken to find the peak focus position and drive the focus motor back
to that position ready for a comprehensive test exposure is 11.7 seconds, with maximum time 26 seconds, across a
variety of lenses of varying focal lengths.
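The blur invariance of phase correlation can be illustrated with a short sketch. This is not the paper's implementation (sub-pixel peak interpolation and the rotationally invariant chart patterns are omitted, and `phase_correlation_shift` is a hypothetical helper), but it shows why a symmetric defocus blur, which contributes essentially no phase, leaves the correlation peak intact:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def phase_correlation_shift(img, reference):
    """Locate the integer-pixel translation of `img` relative to `reference`.

    The normalised cross-power spectrum keeps only phase; a symmetric
    defocus blur contributes (almost) no phase, so the correlation peak
    survives defocus -- the blur invariance the method exploits.
    """
    cross = np.fft.fft2(img) * np.conj(np.fft.fft2(reference))
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-9)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrapped peak indices to signed shifts
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

# Demo: recover a known shift even after the shifted frame is defocused.
rng = np.random.default_rng(1)
ref = rng.standard_normal((64, 64))
moved = np.roll(ref, shift=(5, -3), axis=(0, 1))
defocused = gaussian_filter(moved, sigma=1.5, mode="wrap")
recovered = phase_correlation_shift(defocused, ref)
```

Because the pattern position is recovered rather than a contrast score maximized, this yields an absolute focus measure rather than a relative one, which is what allows the feedback loop to converge quickly.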
We present a novel method for accurately measuring the optical transfer function (OTF) of a camera lens by digitally
imaging a tartan test pattern containing sinusoidal functions with multiple frequencies and orientations. The tartan
pattern can be tuned to optimize the measurement accuracy for an adjustable set of sparse spatial frequencies. The
measurement method is designed to be accurate, reliable, and fast in a wide range of measurement conditions, including
uncontrolled lighting. We describe the design of the tartan pattern and the algorithm for estimating the OTF accurately
from a captured digital image. Simulation results show that the tartan method has significantly better accuracy for
measuring the modulus of the OTF (the modulation transfer function, or MTF) than the ISO 12233 standard slanted-edge
method, especially at high spatial frequencies. With 1% simulated imaging noise, the root mean square (RMS) error of
the tartan method is on average 5 times smaller than that of the slanted-edge method. Experiments with a printed tartan chart show good agreement (0.05 RMS difference) with MTFs measured using the slanted-edge method, and show that, like the slanted-edge method, ours is tolerant to wide variations in illumination conditions.
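The principle behind probing the OTF at sparse frequencies can be sketched as follows. This is a toy single-frequency version under stated assumptions (on-grid integer frequency, Gaussian blur standing in for the lens, hypothetical helper name `amplitude_at_frequency`); the real chart combines many frequencies and orientations and handles off-grid frequencies:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def amplitude_at_frequency(img, kx, ky):
    """Amplitude of the sinusoid at integer DFT frequency (kx, ky).

    With sinusoids printed at known sparse frequencies, the OTF can be
    probed directly at those frequencies from a single capture.
    """
    h, w = img.shape
    f = np.fft.fft2(img)
    return 2.0 * np.abs(f[ky % h, kx % w]) / (h * w)

# Demo: one on-grid sinusoid, blurred to simulate the lens; the ratio of
# amplitudes captured/ideal is the MTF at that frequency.
h = w = 128
y, x = np.mgrid[0:h, 0:w]
kx, ky = 10, 0
chart = 0.5 + 0.4 * np.cos(2 * np.pi * (kx * x / w + ky * y / h))
captured = gaussian_filter(chart, sigma=1.2, mode="wrap")
mtf_est = (amplitude_at_frequency(captured, kx, ky)
           / amplitude_at_frequency(chart, kx, ky))
```

Reading the amplitude at a single known DFT bin is what makes the method fast and robust to broadband lighting variation: energy at other frequencies, including low-frequency illumination gradients, simply does not land in the measured bin.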
The Nomarski differential interference contrast (DIC) mode is commonly used for imaging translucent biological specimens and exhibits several major advantages over other phase contrast techniques, including a boost in high spatial frequencies in the region of focus. However, DIC images (unlike confocal) are limited by the presence of low spatial frequency blur and by a differential shading gradient at feature boundaries, which make normal confocal visualization techniques unsuitable for feature extraction or for 3D volume rendering of focus-series datasets. To remedy this problem, we employ a neural network technique based on competitive learning, known as Kohonen's self-organizing feature map (SOFM), to perform segmentation using a collection of statistics (known as features) defining the image. Our past investigation showed that standard features such as the localized mean and variance of pixel intensities provided reasonable extraction of objects such as mitotic chromosomes, but surface detail was only moderately resolved. In the current work, local energy is investigated as an alternative image statistic based on phase congruency in the image. This, along with different combinations of other image statistics, is applied in a SOFM, producing 3D images exhibiting vastly improved levels of detail and clearly isolating the chromosomes from the background.
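The feature-based SOFM segmentation can be sketched in miniature. This is an assumption-laden 2D toy, not the published pipeline: it uses only the local mean and variance features (the local-energy / phase-congruency statistic is omitted), a tiny 1-D map, and invented helper names (`pixel_features`, `train_sofm`, `segment`):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def pixel_features(img, size=5):
    """Per-pixel features: local mean and local variance of intensity."""
    mean = uniform_filter(img, size)
    var = uniform_filter(img ** 2, size) - mean ** 2
    return np.stack([mean, var], axis=-1).reshape(-1, 2)

def train_sofm(features, n_nodes=3, epochs=10, lr=0.5, seed=0):
    """Train a tiny 1-D Kohonen self-organising feature map.

    Competitive learning: each sample pulls its best-matching node (and,
    early on, that node's map neighbours) towards itself.
    """
    rng = np.random.default_rng(seed)
    weights = 0.1 * rng.standard_normal((n_nodes, features.shape[1]))
    for epoch in range(epochs):
        radius = max(n_nodes / 2 * (1 - epoch / epochs), 0.5)
        alpha = lr * (1 - epoch / epochs) + 0.01
        for f in features[rng.permutation(len(features))]:
            winner = np.argmin(((weights - f) ** 2).sum(axis=1))
            dist = np.abs(np.arange(n_nodes) - winner)
            h = np.exp(-dist ** 2 / (2 * radius ** 2))[:, np.newaxis]
            weights += alpha * h * (f - weights)
    return weights

def segment(img, weights, size=5):
    """Label each pixel with the index of its winning SOFM node."""
    feats = pixel_features(img, size)
    d2 = ((feats[:, np.newaxis, :] - weights[np.newaxis]) ** 2).sum(axis=-1)
    return np.argmin(d2, axis=1).reshape(img.shape)

# Demo: a textured bright blob (a stand-in for a chromosome) on a flat
# background separates cleanly in (mean, variance) feature space.
rng = np.random.default_rng(1)
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0 + 0.3 * rng.standard_normal((16, 16))
labels = segment(img, train_sofm(pixel_features(img)))
```

Because the map is trained on feature vectors rather than raw intensities, textured foreground and smooth background win different nodes even when their mean intensities overlap, which is the property the abstract exploits.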
Imaging thick specimens in 3D transmission confocal modes presents two key problems. The first is variable aberrations introduced by changes in refractive index. The second is revealed when visualizing the acquired data: thick 3D datasets are difficult to interpret. In this paper we present our emerging solutions to these problems. Aberrations can be classified as simple tip-tilt deflection of the beam or as more complicated higher-order aberrations. We discuss results which demonstrate successful on-the-fly detection and correction of tip-tilt. For detecting higher-order aberrations, we have chosen to investigate the wavefront curvature sensing technique. The second problem, rendering thick 3D datasets, can be solved by extracting features of interest from the background. Simple intensity thresholding is not sufficient for complex biological specimens, and image processing in only 2D neglects 3D structure. Use of Kohonen's self-organizing map neural network in 3D results in clear segmentation of features for sample chromosome specimens.
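As a rough sketch of what tip-tilt readout involves: tip-tilt deflects the beam, which simply translates the focal spot on the detector. Centroiding that spot is one standard way to measure the deflection; the detection scheme used in the paper may differ, and `tip_tilt_from_spot` is a hypothetical helper:

```python
import numpy as np

def tip_tilt_from_spot(spot):
    """Read out beam tip-tilt as the centroid offset of a focal spot.

    Tip-tilt deflects the beam, translating the focal spot on the
    detector; the intensity-weighted centroid gives that translation
    relative to the detector centre.
    """
    h, w = spot.shape
    y, x = np.mgrid[0:h, 0:w]
    total = spot.sum()
    cy = (spot * y).sum() / total
    cx = (spot * x).sum() / total
    return cy - (h - 1) / 2.0, cx - (w - 1) / 2.0

# Demo: a Gaussian spot displaced by a known tip-tilt.
h = w = 65
y, x = np.mgrid[0:h, 0:w]
spot = np.exp(-((y - 32 - 2.5) ** 2 + (x - 32 + 1.5) ** 2) / (2 * 3.0 ** 2))
tilt_y, tilt_x = tip_tilt_from_spot(spot)
```

A measurement of this kind, taken on the fly, is what allows the correction loop to re-steer the beam; higher-order aberrations change the spot's shape rather than its position, which is why a separate technique such as wavefront curvature sensing is needed for them.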
KEYWORDS: 3D image processing, Digital image correlation, Confocal microscopy, Spatial frequencies, Microscopes, Image processing, Visualization, 3D visualizations, Digital signal processing, Fourier transforms
We have developed digital 3D Fourier transform methods for comparing the 3D spatial frequency content, and hence the axial and transverse resolution, of confocal versus conventional microscope images. In particular, we have utilized these techniques to evaluate the performance of our recently developed confocal transmission microscope for bright field and Nomarski DIC imaging. We have also found that Fourier methods, such as the Hilbert transform, can successfully overcome the difficulty of visualizing, in 3D, differentially-shaded phase objects acquired using transmission DIC optics.
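The Hilbert-transform step can be illustrated with a minimal sketch, assuming the published Fourier pipeline reduces to a 1-D transform along the shear axis (the helper name `dic_to_symmetric` and the 1-D demo are illustrative):

```python
import numpy as np
from scipy.signal import hilbert

def dic_to_symmetric(dic_image, axis=1):
    """Turn differential DIC shading into symmetric contrast.

    DIC approximates the phase gradient along the shear direction, so a
    feature appears as a bright/dark shadow pair. The Hilbert transform
    along that axis converts this antisymmetric response into a
    symmetric one centred on the feature, which is far easier to
    threshold or volume render.
    """
    zero_mean = dic_image - dic_image.mean(axis=axis, keepdims=True)
    return np.abs(hilbert(zero_mean, axis=axis).imag)

# Demo: a Gaussian phase bump seen through a derivative (DIC-like) filter.
x = np.linspace(-5.0, 5.0, 101)
bump = np.exp(-x ** 2)                      # the underlying phase feature
dic = np.gradient(bump)                     # antisymmetric DIC-style shading
sym = dic_to_symmetric(dic[np.newaxis, :], axis=1)[0]
centre = int(np.argmax(sym))                # peak restored at the feature
```

The derivative-like DIC response is odd about the feature, and the Hilbert transform of an odd function is even, so the recovered signal peaks at the feature centre instead of straddling it as a bright/dark pair.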
We have designed and constructed an experimental confocal specimen-scanning microscope which has the capability of producing high resolution 3D images in a variety of optical modes, many of which are not currently available on commercial confocal microscopes. The transmission Nomarski differential interference contrast mode is particularly interesting because it can be utilized to image small changes in refractive index within complex biological specimens which are transparent in standard brightfield. The three-color reflection configuration can produce a color 3D image, which means that stained or pigmented objects will be similar in appearance to images obtained from conventional white light microscopes which makes them more recognizable.