In this paper, we describe a new 3D light propagation model aimed at understanding the effects of various
physiological properties on subcutaneous vein imaging. In particular, we build upon the well-known MCML
(Monte Carlo Multi-Layer) code and present a tissue model that improves upon the current state of the art by:
incorporating physiological variation, such as melanin concentration, fat content, and layer thickness; including
veins of varying depth and diameter; using curved surfaces from real arm shapes; and modeling the vessel wall
interface. We describe our model, present results from the Monte Carlo modeling, and compare these results
with those obtained with other Monte Carlo methods.
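As background for readers unfamiliar with MCML-style transport, the core of such photon Monte Carlo codes is sampling the free path length between interaction events from an exponential distribution, s = -ln(ξ)/μt. A minimal sketch of this step-size rule (the coefficients below are illustrative, not taken from the paper):

```python
import math
import random

def sample_path_lengths(mu_a, mu_s, n=100_000, seed=0):
    """Sample photon free path lengths s = -ln(xi) / mu_t, where
    mu_t = mu_a + mu_s is the total interaction coefficient (1/cm)
    and xi is uniform on (0, 1]. This is the step-size rule used by
    MCML-style Monte Carlo photon transport codes."""
    rng = random.Random(seed)
    mu_t = mu_a + mu_s
    # 1 - random() lies in (0, 1], so the logarithm is always defined
    return [-math.log(1.0 - rng.random()) / mu_t for _ in range(n)]
```

The sample mean converges to the mean free path 1/μt, which is a quick sanity check on any such implementation.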
Interferometric imaging has the potential to extend the usefulness of optical microscopes by encoding small phase shifts
that reveal information about topology and materials. At the Oak Ridge National Laboratory (ORNL), we have
developed an optical Spatial Heterodyne Interferometry (SHI) method that captures reflection images containing both
phase and amplitude information at high speed. By measuring the phase of a wavefront reflected off or
transmitted through a surface, the relative surface heights and some materials properties can be measured. In this paper
we briefly review our historical application of SHI in the semiconductor industry, but the focus is on new research to
adapt this technology to the inspection of MEMS devices, in particular to the characterization of motion elements such
as microcantilevers and deformable mirror arrays.
The first and perhaps most important phase of a surgical procedure is the insertion of an intravenous (IV)
catheter. Currently, this is performed manually by trained personnel. In some visions of future operating rooms,
however, this process is to be replaced by an automated system. We previously presented work for localizing
near-surface veins via near-infrared (NIR) imaging in combination with structured light ranging for surface
mapping and robotic guidance. In this paper, we describe experiments to determine the best NIR wavelengths
to optimize vein contrast for physiological differences such as skin tone and/or the presence of hair on the arm
or wrist surface. For illumination, we employ an array of NIR LEDs with six different center wavelengths from 740 nm to 910 nm. We capture imagery of each subject under every possible combination of illuminants and use linear discriminant analysis to determine, for a given subject, the combination of wavelengths that maximizes vein contrast.
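For a single image, two-class linear discriminant analysis of vein versus skin intensities reduces to scoring each illuminant combination with a Fisher criterion. A sketch of that scoring (the masks, image dictionary, and wavelength labels are hypothetical, not from the paper):

```python
import numpy as np

def fisher_contrast(vein_pixels, skin_pixels):
    """Fisher discriminant ratio between vein and skin intensity samples.
    Larger values indicate better vein/skin separability (contrast)."""
    m_v, m_s = np.mean(vein_pixels), np.mean(skin_pixels)
    v_v, v_s = np.var(vein_pixels), np.var(skin_pixels)
    return (m_v - m_s) ** 2 / (v_v + v_s + 1e-12)

def best_combination(images, vein_mask, skin_mask):
    """Rank illuminant combinations by vein contrast.
    `images` maps a combination label to the NIR image captured
    under that combination; masks select vein and skin pixels."""
    scores = {combo: fisher_contrast(img[vein_mask], img[skin_mask])
              for combo, img in images.items()}
    return max(scores, key=scores.get), scores
```

In practice the masks would come from manually annotated training imagery; the criterion itself is agnostic to how they were obtained.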
In this paper, we investigate several improvements to region-based level set algorithms in the context of segmenting
x-ray CT data from pre-clinical imaging of small animal models. We incorporate a recently introduced
signed distance preserving term into a region-based level set model and provide formulas for a semi-implicit
finite difference implementation. We illustrate some pitfalls of topology preserving level sets and introduce the
concept of connectivity preservation as a potential alternative. We demonstrate the benefits of these improvements
on phantom and real data.
We describe new image analysis developments in support of the U.S. Department of Energy's (DOE) Advanced
Gas Reactor (AGR) Fuel Development and Qualification Program. We previously reported a non-iterative,
Bayesian approach for locating the boundaries of different particle layers in cross-sectional imagery. That method,
however, had to be initialized by manual preprocessing in which a user selected two points in each image, one
indicating the particle center and the other indicating the first layer interface. Here, we describe a technique
designed to eliminate the manual preprocessing and provide full automation. With a low resolution image, we
use "EdgeFlow" to approximate the layer boundaries with circular templates. Multiple snakes are initialized to
these circles and deformed using a greedy Bayesian strategy that incorporates coupling terms as well as a priori
information on the layer thicknesses and relative contrast. We show results indicating the effectiveness of the
proposed method.
KEYWORDS: Veins, Near infrared, 3D modeling, 3D image processing, Cameras, Skin, 3D acquisition, Light emitting diodes, Image processing, Structured light
Vein localization and catheter insertion constitute the first and perhaps most important phase of many medical procedures. Currently, catheterization is performed manually by trained personnel. This process can prove problematic, however, depending upon various physiological factors of the patient. We present in this paper initial work for localizing surface veins via near-infrared (NIR) imaging and structured light ranging. The eventual goal of the system is to serve as the guidance for a fully automatic (i.e., robotic) catheterization device. Our proposed system is based upon NIR imaging, which has previously been shown effective in enhancing the visibility of surface veins. We locate the vein regions in the 2D NIR images using standard image processing techniques. We employ an NIR line-generating LED module to implement structured light ranging and construct a 3D topographic map of the arm surface. The located veins are mapped to the arm surface to provide a camera-registered representation of the arm and veins. We describe the techniques in detail and provide example imagery and 3D surface renderings.
We describe in this paper new developments in the characterization of coated particle nuclear fuel using optical microscopy and digital imaging. As in our previous work, we acquire optical imagery of the fuel pellets in two distinct manners that we refer to as shadow imaging and cross-sectional imaging. In shadow imaging, particles are collected in a single layer on an optically transparent dish and imaged using collimated back-lighting to measure outer surface characteristics only. In cross-sectional imaging, particles are mounted in acrylic epoxy and polished to near-center to reveal the inner coating layers for measurement. For shadow imaging, we describe a curvature-based metric that is computed from the particle boundary points in the FFT domain using a low-frequency parametric representation. We also describe how missing boundary points are approximated using band-limited interpolation so that the FFT can be applied. For cross-sectional imaging, we describe a new Bayesian-motivated segmentation scheme as well as a new technique to correct layer measurements for the fact that we cannot observe the true mid-plane of the approximately spherical particles.
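Band-limited interpolation of missing boundary samples can be realized with a Papoulis-Gerchberg style iteration, alternating between low-pass truncation in the FFT domain and re-imposition of the known samples. A sketch under that assumption (the harmonic count and iteration budget are illustrative; the paper's exact procedure may differ):

```python
import numpy as np

def band_limited_interpolate(samples, known, n_harmonics, iters=200):
    """Fill in missing samples of a periodic boundary signal by a
    Papoulis-Gerchberg iteration: repeatedly truncate the FFT to the
    lowest `n_harmonics` harmonics, then restore the known samples.
    `known` is a boolean mask; missing entries of `samples` are ignored."""
    x = np.where(known, samples, 0.0)
    n = len(x)
    keep = np.zeros(n, bool)
    keep[:n_harmonics + 1] = True   # DC and positive harmonics
    keep[-n_harmonics:] = True      # matching negative harmonics
    for _ in range(iters):
        X = np.fft.fft(x)
        X[~keep] = 0.0              # enforce the band limit
        x = np.fft.ifft(X).real
        x[known] = samples[known]   # enforce the known samples
    X = np.fft.fft(x)
    X[~keep] = 0.0
    return np.fft.ifft(X).real
```

When the underlying signal truly lies within the kept band and the gaps are short, the iteration converges geometrically to the exact band-limited signal.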
The mitotic spindle is a subcellular protein structure that facilitates chromosome segregation and is crucial to cell division. We describe an image processing approach to quantitatively characterize and compare mitotic spindles that have been imaged three dimensionally using confocal microscopy with fixed-cell preparations. The proposed approach is based on a set of features that are computed from each image stack representing a spindle. We compare several spindle datasets of varying biological (genotype) and/or environmental (drug treatment) conditions. The goal of this effort is to aid biologists in detecting differences between spindles that may not be apparent under subjective visual inspection, and furthermore, to eventually automate such analysis in high-throughput scenarios (thousands of images) where manual inspection would be unreasonable. Experimental results on positive- and negative-control data indicate that the proposed approach is indeed effective. Differences are detected when it is known they do exist (positive control) and no differences are detected when there are none (negative control). In two other experimental comparisons, results indicate structural spindle differences that biologists had not observed previously.
For process control, linewidth measurements are commonly performed on semiconductor wafers using top-down images from critical dimension measurement scanning electron microscopes (CD-SEMs). However, a measure of the line sidewall shape will be required as linewidths continue to shrink. Sidewall shape can be measured by physically cleaving the device and performing an SEM scan of the cross section, but this process is time consuming and results in destruction of product wafers. We develop a technique to estimate sidewall shape from top-down SEM images using pattern recognition based on historical cross section/top-down image pairs. Features are computed on subimages extracted from the top-down images. Several combinations of principal component analysis (PCA) and flavors of linear discriminant analysis (LDA) are employed to reduce the dimensionality of the feature vectors and maximize the spread between different sidewall shapes. Direct, weighted LDA (DW-LDA) results in a feature set that provides the best sidewall shape estimation. Experimental testing of the sidewall estimation system yields a root mean square error of approximately 1.8% of the linewidth, showing that this system is a viable method for estimating sidewall shape with little impact on the fabrication process (no new hardware and a minimal increase in process setup).
In this paper, we describe the inspection of coated particle nuclear fuel using optical microscopy. Each ideally spherical particle possesses four coating layers surrounding a fuel kernel. Kernels are designed with diameters of either 350 or 500 microns, and the four coating layers, from the kernel outward, have design thicknesses of 100, 45, 35, and 45 microns, respectively. The inspection of the particles is undertaken in two phases. In the first phase, multiple particles are imaged via back-lighting in a single 3900 x 3090 image at a resolution of about 1.12 pixels/micron. The distance transform, watershed segmentation, edge detection, and the Kasa circle fitting algorithm are employed to compute total outer diameters only. In the second inspection phase, the particles are embedded in an epoxy and cleaved (via polishing) to reveal the cross-section structure of all layers simultaneously. These cleaved particles are imaged individually at a resolution of about 2.27 pixels/micron. We first find points on the kernel boundary and then employ the Kasa algorithm to estimate the overall particle center. We then find boundary points between the remaining layers along rays emanating from the particle center. Kernel and layer boundaries are detected using a novel segmentation approach. From these boundary points, we compute and store layer thickness data.
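The Kasa circle fit used above is a simple algebraic least-squares formulation: rewriting (x-a)^2 + (y-b)^2 = r^2 as x^2 + y^2 = 2ax + 2by + c makes the problem linear in (a, b, c). A minimal sketch:

```python
import numpy as np

def kasa_fit(x, y):
    """Kasa algebraic circle fit. Solves the linear least-squares
    system x^2 + y^2 = 2*a*x + 2*b*y + c for (a, b, c), giving
    center (a, b) and radius sqrt(c + a^2 + b^2)."""
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x ** 2 + y ** 2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    r = np.sqrt(c + a ** 2 + b ** 2)
    return a, b, r
```

The fit is exact for noise-free points on a circle and is fast enough to run on every detected boundary, which is why it is a popular choice for this kind of metrology.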
In semiconductor device manufacturing, critical dimension (CD) metrology provides a measurement for precise line-width control during the lithographic process. Currently scanning electron microscope (SEM) tools are typically used for this measurement, because the resolution requirements for the CD measurements are outside the range of optical microscopes. While CD has been a good feedback control for the lithographic process, line-widths continue to shrink and a more precise measurement of the printed lines is needed. With decreasing line widths, the entire sidewall structure must be monitored for precise process control. Sidewall structure is typically acquired by performing a destructive cross sectioning of the device, which is then imaged with an SEM tool. Since cross sectioning is destructive and slow, this is an undesirable method for testing product wafers, and only a small sampling of the wafers can be tested. We have developed a technique in which historical cross section/top-down image pairs are used to predict sidewall shape from top-down SEM images. Features extracted from a new top-down SEM image are used to locate similar top-downs within the historical database, and the corresponding cross sections in the database are combined to create a sidewall estimate for the new top-down. Testing with field test data has shown the feasibility of this approach and that it will allow CD SEM tools to provide cross section estimates with no change in hardware or complex modeling.
Scanning electron microscope (SEM) images for semiconductor line-width measurements are generally acquired in a top-down configuration. As semiconductor dimensions continue to shrink, it has become increasingly important to characterize the cross-section, or sidewall, profiles. Cross-section imaging, however, requires the physical cleaving of the device, which is destructive and time-consuming. The goal of this work is to examine historical top-down and cross-section image pairs to determine if the cross-section profiles might be estimated by analyzing the corresponding top-down images. We present an empirical pattern recognition approach aimed at solving this problem. We compute feature vectors from sub-images of the top-down SEM images. Principal component analysis (PCA) and linear discriminant analysis (LDA) are used to reduce the dimensionality of the feature vectors, where class labels are assigned by clustering the cross-sections according to shape. Features are extracted from query top-downs and compared to the database. The estimated cross-section of the query is computed as a weighted combination of cross-sections corresponding to the nearest top-down neighbors. We report results obtained using 100nm, 180nm, and 250nm dense and isolated line data obtained by three different SEM tools.
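The final retrieval step, estimating the query's cross-section as a weighted combination of the profiles of its nearest top-down neighbors, can be sketched as follows. Inverse-distance weighting is one plausible choice; the function and variable names are ours, not from the paper:

```python
import numpy as np

def estimate_cross_section(query_feat, db_feats, db_profiles, k=3):
    """Estimate a sidewall profile as a weighted combination of the
    profiles whose top-down feature vectors lie nearest the query.
    Weights are inverse distances, normalized to sum to one."""
    dists = np.linalg.norm(db_feats - query_feat, axis=1)
    nearest = np.argsort(dists)[:k]          # indices of the k closest
    w = 1.0 / (dists[nearest] + 1e-12)       # closer neighbors weigh more
    w /= w.sum()
    return w @ db_profiles[nearest]          # weighted profile average
```

A usage note: `db_feats` holds one reduced (e.g. PCA/LDA) feature vector per historical top-down image, and `db_profiles` the corresponding cross-section profiles, sampled on a common grid so they can be averaged.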
We present modifications to a feature-based, image-retrieval approach for estimating semiconductor sidewall (cross-section) shapes using top-down images. The top-down images are acquired by a critical dimension scanning electron microscope (CD-SEM). The proposed system is based upon earlier work with several modifications. First, we use only line-edge, as opposed to full-line, sub-images from the top-down images. Secondly, Gabor filter features are introduced to replace some of the previously computed features. Finally, a new dimensionality reduction algorithm - direct, weighted linear discriminant analysis (DW-LDA) - is developed to replace the previous two-step principal component analysis plus LDA method. Results of the modified system are presented for data collected across several line widths, line spacings, and CD-SEM tools.
We describe an automated image processing approach for detecting and characterizing cavitation pits on stainless steel surfaces. The image sets to be examined have been captured by a scanning electron microscope (SEM). Each surface region is represented by a pair of SEM images, one captured before and one after the cavitation-causing process. Unfortunately, some required surface preparation steps between pre-cavitation and post-cavitation imaging can introduce artifacts and change image characteristics in such a way as to preclude simple image-to-image differencing. Furthermore, all of the images were manually captured and are subject to rotation and translation alignment errors as well as variations in focus and exposure. In the presented work, we first align the pre- and post- cavitation images using a Fourier-domain technique. Since pre-cavitation images can often contain artifacts that are very similar to pitting, we perform multi-scale pit detection on each pre- and post-cavitation image independently. Coincident regions labeled as pits in both pre- and post-cavitation images are discarded. Pit statistics are exported to a text file for further analysis. In this paper we provide background information, algorithmic details, and show some experimental results.
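Fourier-domain alignment of image pairs like these is typically implemented via phase correlation, which locates the translation as the peak of the normalized cross-power spectrum. A minimal sketch for integer translations (the paper's exact variant may differ):

```python
import numpy as np

def phase_correlation(ref, moved):
    """Estimate the integer (row, col) circular shift of `moved`
    relative to `ref` from the peak of the normalized cross-power
    spectrum (phase correlation)."""
    cross = np.fft.fft2(moved) * np.conj(np.fft.fft2(ref))
    cross /= np.abs(cross) + 1e-12           # keep phase, discard magnitude
    corr = np.fft.ifft2(cross).real
    peak = list(np.unravel_index(np.argmax(corr), corr.shape))
    # map peaks past the midpoint to negative shifts
    for i, n in enumerate(corr.shape):
        if peak[i] > n // 2:
            peak[i] -= n
    return tuple(int(p) for p in peak)
```

Because only the phase of the spectrum is used, the estimate is fairly robust to the exposure variations mentioned above; subpixel refinement and rotation handling require extensions (e.g. peak interpolation or log-polar resampling) not shown here.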
KEYWORDS: 3D modeling, Image segmentation, Image registration, 3D image processing, Image processing algorithms and systems, 3D image reconstruction, Data modeling, Solids, Systems modeling, Visual process modeling
This paper investigates superquadrics-based object representation of complex scenes from range images. We address how the recover-and-select algorithm can be adapted to handle complex scenes containing backgrounds and multiple occluded objects. For images containing backgrounds, the raw image is first coarsely segmented using the scan-line grouping technique; an area threshold then removes the background while retaining all of the objects. After this pre-segmentation, the recover-and-select algorithm is applied to recover superquadric (SQ) models. For images containing multiple occluded objects, a circle-view strategy recovers complete SQ models from range images taken in multiple views. First, a view path is planned as a circle around the objects, along which images are taken approximately every 45 degrees. Next, SQ models are recovered from each single-view range image. Finally, the SQ models from the multiple views are registered and integrated. These approaches are tested on synthetic range images. Experimental results show that accurate and complete SQ models are recovered from complex scenes using our strategies, and that the background-handling approach is insensitive to pre-segmentation error.
This paper presents an adaptive regularized image interpolation algorithm for blurred and noisy low-resolution image sequences, developed within a general framework based on data fusion. The framework preserves high-frequency components along edge orientations in the restored high-resolution frame. The multiframe interpolation algorithm comprises two levels of fusion: the first produces enhanced low-resolution images that serve as input to the adaptive regularized interpolation, and the second constructs the adaptive fusion algorithm for regularized interpolation using steerable orientation analysis. To apply the regularization approach to the interpolation procedure, we first present an observation model of the low-resolution video formation system. Based on this model, we obtain an interpolated image that minimizes the residual between the high-resolution and interpolated images subject to a priori constraints. By additionally incorporating spatially adaptive constraints, directional high-frequency components are preserved while noise is efficiently suppressed. Experimental results compare conventional interpolation algorithms with the proposed adaptive fusion-based algorithm and show that the proposed algorithm preserves directional high-frequency components while suppressing undesirable artifacts such as noise.