Adaptive optics optical coherence tomography (AO-OCT) has allowed for the reliable 3-D imaging of individual retinal cells. Current AO-OCT systems are limited to tabletop implementations due to their size and complexity. This work describes the design and implementation of the first dual-modality handheld AO-OCT (HAOOCT) and scanning laser ophthalmoscope (SLO) probe, which extends AO-OCT imaging to previously excluded patients. Simultaneous SLO imaging allows tracking of image features for HAOOCT localization. Pilot experiments on stabilized and recumbent adults using the 665-gram HAOOCT probe revealed the 3-D photoreceptor structure for the first time with a handheld AO-OCT/SLO device.
Well-known limitations of optical coherence tomography (OCT) include deleterious speckle noise and relatively poor lateral resolution (typically >10 μm) due to the tradeoff between lateral resolution and depth of focus. To address these limitations, we present 3D optical coherence refraction tomography (OCRT), which computationally combines 3D volumes from two rotational axes to form a 3D reconstruction with substantially reduced speckle noise and enhanced lateral resolution. Our approach features a parabolic mirror as the objective, which enables multi-view OCT volume acquisition over up to ±75° without moving the sample. We demonstrate 3D OCRT on a phantom sample and several biological samples, revealing new structures that are missed in conventional OCT.
The incorporation of adaptive optics (AO) technology into ophthalmic imaging systems has enhanced the understanding of retinal structure and function and the progression of various retinal diseases in adults by allowing for the dynamic correction of ocular and/or system aberrations. However, the in vivo visualization of important human retinal microanatomy, including cone photoreceptors, has been largely limited to fully cooperative subjects who are able to fixate and/or sit upright for extended imaging sessions in large tabletop AO systems. Previously, we developed the first handheld AO scanning laser ophthalmoscope capable of 2-D imaging of cone photoreceptors in supine adults and infants. In this work, we present the design and fabrication of the first handheld AO optical coherence tomography (HAOOCT) probe capable of collecting high-resolution volumetric images of the human retina. We designed custom optomechanics to build a spectral domain OCT system with a compact form factor of 22 cm × 18 cm × 5.2 cm and a total weight of 630 grams. The OCT imaging channel has a theoretical lateral resolution of 2.26 μm over a 1.0° × 1.0° field of view and an axial resolution of 4.01 μm. Stabilized imaging of healthy human adult volunteers revealed the 3-D photoreceptor structure and retinal pigment epithelium cells. HAOOCT was then deployed in handheld operation to image photoreceptors in upright and recumbent adults, indicating its potential to extend AO-OCT to previously excluded patient populations.
We present results from depth-resolved light scattering measurements of retinas from a triple transgenic mouse model of Alzheimer's disease (AD) using a multimodal coherent imaging system. Use of a co-registered angle-resolved low-coherence interferometry (a/LCI) and optical coherence tomography (OCT) system allows unique analysis that is otherwise unavailable using a single modality to provide complementary information on tissue structural changes associated with AD. This abstract summarizes the light scattering parameters extracted using this system at selected retinal layers guided by OCT image segmentation. Future developments of this combined system for human retinal imaging, which involve a low-cost OCT engine, are also discussed.
Quantitative features of individual ganglion cells (GCs) are potential paradigm-changing biomarkers for improved diagnosis and treatment monitoring of GC loss in neurodegenerative diseases like glaucoma and Alzheimer's disease. The recent incorporation of adaptive optics (AO) with extremely fast and high-resolution optical coherence tomography (OCT) allows visualization of GC layer (GCL) somas in volumetric scans of the living human eye. The current standard approach for quantification, manual marking of AO-OCT volumes, is subjective, time consuming, and not practical for large-scale studies. Thus, there is a need to develop an automatic technique for rapid, high-throughput, and objective quantification of GC morphological properties. In this work, we present the first fully automatic method for counting and measuring GCL soma diameter in AO-OCT volumes. Aside from novelty in application, our proposed deep learning-based algorithm is novel with respect to network architecture. Also, previous deep learning OCT segmentation algorithms used pixel-level annotation masks for supervised learning. Instead, in this work, we use weakly supervised training, which requires significantly less human input in curating the training set for the deep learning algorithm, as our training data are only associated with coarse-grained labels. Our automatic method achieved a high level of accuracy in counting GCL somas, on par with human performance yet orders of magnitude faster. Moreover, our automatic method's measurements of soma diameters were in line with previous histological and in vivo semi-automatic measurement studies. These results suggest that our algorithm may eventually replace the costly and time-consuming manual marking process in future studies.
Adaptive optics scanning laser ophthalmoscopy (AOSLO) has advanced the study of retinal structure and function by enabling in vivo imaging of individual photoreceptors. Most implementations of AOSLO are large, complex tabletop systems, thereby preventing high-quality photoreceptor imaging of patients who are unable to sit upright and/or fixate for an imaging session. We have previously addressed this limitation in the clinical translation of AOSLO by developing the first confocal handheld AOSLO (HAOSLO) capable of cone photoreceptor visualization in adults and infants. However, confocal AOSLO images suffer from imaging artifacts and the inability to detect remnant cone structure, leading to ambiguous or potentially misleading results. Recently, it has been shown that non-confocal split-detection (SD) AOSLO images, created by the collection of multiply backscattered light, enable more reliable studies of retinal photoreceptors by providing images of the cone inner segment. In this paper, we detail the extension of our HAOSLO probe to enable multi-channel light collection, resulting in the first multimodal handheld AOSLO (M-HAOSLO). Imaging sessions were conducted on two dilated, healthy human adult volunteers, and M-HAOSLO images taken in handheld operation mode reveal the cone photoreceptor mosaic. Aside from being the first miniaturized and portable implementation of an SD AOSLO system, M-HAOSLO relies on sensorless optimization of the wavefront to correct aberrations. Thus, we also show the first SD images collected after correction of the eye's estimated wavefront.
Conventional scanning laser ophthalmoscopy (SLO) utilizes a finite collection pinhole at a retinal conjugate plane to strongly reject out-of-focus light while primarily transmitting the in-focus, retinal backscattered signal. However, to improve lateral resolution, a sub-Airy disk collection pinhole is necessary, which drastically reduces the signal-to-noise ratio (SNR) of the system and is thus not commonly employed. Recently, an all-optical, super-resolution microscopy technique known as optical photon reassignment (OPRA) microscopy (also known as re-scan confocal microscopy) has been developed to bypass this fundamental tradeoff between resolution and SNR in confocal microscopy. We present a methodology and system design for obtaining super resolution in retinal imaging by combining the concepts of SLO and OPRA microscopy. The resolution improvement of the system was quantified using a 1951 USAF target at a telecentric intermediate image plane. Retinal images from human volunteers were acquired with this system both with and without using the OPRA technique to demonstrate the resolution improvement when imaging parafoveal cone photoreceptors. Finally, we quantified the resolution improvement in the retina by analyzing the radially averaged power spectrum of the retinal images.
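Where quantification like this is needed, the radially averaged power spectrum can be computed directly from an en face image. The following is a minimal sketch, assuming a square grayscale retinal image in a NumPy array; the function and parameter names are illustrative, not taken from the paper.

```python
# Minimal sketch: radially averaged power spectrum of an en face retinal
# image, one way a resolution gain could be quantified.
import numpy as np

def radial_power_spectrum(image, n_bins=128):
    """Return (normalized spatial frequency, radially averaged power)."""
    img = image - image.mean()                           # remove the DC term
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    ny, nx = spec.shape
    y, x = np.indices((ny, nx))
    r = np.hypot(y - ny / 2, x - nx / 2)                 # radius in frequency pixels
    r_max = min(ny, nx) / 2
    bins = np.linspace(0, r_max, n_bins + 1)
    which = np.digitize(r.ravel(), bins)
    power = np.empty(n_bins)
    for i in range(1, n_bins + 1):                       # average power per annulus
        vals = spec.ravel()[which == i]
        power[i - 1] = vals.mean() if vals.size else 0.0
    freqs = 0.5 * (bins[:-1] + bins[1:]) / r_max         # 0..1 of the maximum radius
    return freqs, power
```

Comparing the resulting curves with and without photon reassignment indicates how far high-spatial-frequency content, such as the parafoveal cone mosaic, extends before reaching the noise floor.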
Handheld optical coherence tomography (OCT) systems facilitate imaging of young children, bedridden subjects, and those with less stable fixation. Smaller and lighter OCT probes allow for more efficient imaging and reduced operator fatigue, which is critical for prolonged use in either the operating room or the neonatal intensive care unit. In addition to size and weight, the imaging speed, image quality, field of view, resolution, and focus correction capability are critical parameters that determine the clinical utility of a handheld probe. Here, we describe an ultra-compact swept source (SS) OCT handheld probe weighing only 211 g (half the weight of the next lightest handheld SSOCT probe in the literature) with 20.1 µm lateral resolution, 7 µm axial resolution, 102 dB peak sensitivity, a 27° x 23° field of view, and motorized focus adjustment for refraction correction between -10 and +16 D. A 2D microelectromechanical systems (MEMS) scanner, a converging beam-at-scanner telescope configuration, and an optical design employing 6 different custom optics were used to minimize device size and weight while achieving diffraction-limited performance throughout the system's field of view. Custom graphics processing unit (GPU)-accelerated software was used to provide real-time display of OCT B-scans and volumes. Retinal images were acquired from adult volunteers to demonstrate imaging performance.
We introduce a metric in graph search and demonstrate its application for segmenting retinal optical coherence tomography (OCT) images of macular pathology. Our proposed “adjusted mean arc length” (AMAL) metric is an adaptation of the lowest mean arc length search technique for automated OCT segmentation. We compare this method to Dijkstra’s shortest path algorithm, which we utilized previously in our popular graph theory and dynamic programming segmentation technique. As an illustrative example, we show that AMAL-based length-adaptive segmentation outperforms the shortest path in delineating the retina/vitreous boundary of patients with full-thickness macular holes when compared with expert manual grading.
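For intuition, the two path criteria can be contrasted as follows (a hedged sketch of the idea only; the exact "adjusted" normalization is as defined in the paper):

```latex
\[
\text{shortest path:}\;\; \hat{P}=\arg\min_{P}\sum_{(a,b)\in P} w_{ab},
\qquad
\text{mean arc length:}\;\; \hat{P}=\arg\min_{P}\frac{1}{|P|}\sum_{(a,b)\in P} w_{ab}.
\]
```

Because the summed-weight criterion grows with the number of arcs, it implicitly favors short, straight cuts; normalizing by the path length |P| removes that bias, which matters when tracing the steep retina/vitreous boundary of a full-thickness macular hole.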
Inducing angiogenesis is one hallmark of cancer. Tumor-induced neovasculature is often characterized as leaky, tortuous, and chaotic, unlike the highly organized normal vasculature. Additionally, in the course of carcinogenesis, angiogenesis precedes a visible lesion: a tumor cannot grow beyond 1-2 mm in diameter without inducing angiogenesis. Therefore, capturing the event of angiogenesis may aid the early detection of pre-cancer, which is important for better treatment prognoses in regions that lack the resources to manage invasive cancer.
In this study, we imaged neovascularization in vivo in a spontaneous hamster cheek pouch carcinogen model using a non-invasive, label-free, high-resolution, reflected-light spectral darkfield microscope. Hamster cheek pouches were painted with 7,12-dimethylbenz[a]anthracene (DMBA) to induce pre-cancerous to cancerous changes, or with mineral oil as a control. High-resolution spectral darkfield images were obtained over the course of pre-cancer development and in control cheek pouches. The vasculature was segmented with a multi-scale Gabor filter with 85% accuracy compared with manually traced masks. Highly tortuous vasculature was observed only in the DMBA-treated cheek pouches, as early as 6 weeks of treatment. In addition, the highly tortuous vessels could be identified before a visible lesion occurred later during the treatment. The vessel patterns, as determined by the tortuosity index, were significantly different from those of the control cheek pouches. This preliminary study suggests that high-resolution darkfield microscopy is a promising tool for pre-cancer and early cancer detection in low-resource settings.
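As an illustration of the vessel-enhancement step described above, a minimal multi-scale Gabor filtering sketch is shown below: the maximum filter response over several orientations and spatial frequencies, followed by thresholding. The frequencies, orientation count, and thresholding choice are illustrative assumptions, not the study's parameters.

```python
import numpy as np
from skimage.filters import gabor, threshold_otsu

def gabor_vessel_mask(image, frequencies=(0.05, 0.1, 0.2), n_theta=8):
    response = np.zeros_like(image, dtype=float)
    for f in frequencies:                                    # multiple scales
        for theta in np.linspace(0, np.pi, n_theta, endpoint=False):
            real, _ = gabor(image, frequency=f, theta=theta)
            response = np.maximum(response, np.abs(real))
    return response > threshold_otsu(response)               # binary vessel mask
```

The resulting binary mask could then be compared against manually traced masks and skeletonized to compute a tortuosity index, for example arc length divided by chord length per vessel segment.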
The human retina is composed of several layers, visible by in vivo optical coherence tomography (OCT) imaging. To enhance diagnostics of retinal diseases, several algorithms have been developed to automatically segment one or more of the boundaries of these layers. OCT images are corrupted by noise, which is frequently the result of detector noise and speckle, a type of coherent noise resulting from the presence of several scatterers in each voxel. However, it is unknown what the empirical distribution of noise in each layer of the retina is, and how the magnitude and distribution of the noise affect the lower bounds of segmentation accuracy. Five healthy volunteers were imaged using a spectral domain OCT probe from Bioptigen, Inc., centered at 850 nm with 4.6 µm full-width-at-half-maximum axial resolution. Each volume was segmented by expert manual graders into nine layers. The histograms of intensities in each layer were then fit to seven possible noise distributions from the literature on speckle and image processing. Using these empirical noise distributions and empirical estimates of the intensity of each layer, the Cramer-Rao lower bound (CRLB), a lower bound on the variance of an estimator, was calculated for each layer boundary. Additionally, the optimum bias of a segmentation algorithm was calculated, and a corresponding biased CRLB was calculated, which represents the improved performance an algorithm can achieve by using prior knowledge, such as the smoothness and continuity of layer boundaries. Our general mathematical model can be easily adapted for virtually any OCT modality.
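For reference, the bounds involved take the standard textbook forms below; the layer-specific empirical noise distributions determine the Fisher information I(θ) for each boundary-position estimate.

```latex
\[
\operatorname{Var}\bigl(\hat{\theta}\bigr)\;\ge\;\frac{1}{I(\theta)},
\qquad
I(\theta)\;=\;-\,\mathbb{E}\!\left[\frac{\partial^{2}\ln p(\mathbf{x};\theta)}{\partial\theta^{2}}\right],
\qquad
\operatorname{Var}\bigl(\hat{\theta}\bigr)\;\ge\;\frac{\bigl[1+b'(\theta)\bigr]^{2}}{I(\theta)}
\;\;\text{for an estimator with bias } b(\theta).
\]
```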
Optical coherence tomography (OCT) allows for micron-scale imaging of the human retina and cornea. Current-generation research and commercial intrasurgical OCT prototypes are limited to live B-scan imaging. Our group has developed an intraoperative microscope-integrated OCT system capable of live 4D imaging. With a heads-up display (HUD), 4D imaging allows for dynamic intrasurgical visualization of tool-tissue interactions and surgical maneuvers. Currently, our system relies on operator-based manual tracking to correct for patient motion and motion caused by the surgeon, to track the surgical tool, and to select the correct B-scan to display on the HUD. Even when tracking only bulk motion, the operator sometimes lags behind and the surgical region of interest can drift out of the OCT field of view. To facilitate imaging, we report on the development of a fast volume-based tool segmentation algorithm. The algorithm is based on a previously reported volume rendering algorithm and can identify both the tool and the retinal surface. The algorithm requires 45 ms per volume for segmentation and can be used to actively place the B-scan across the tool-tissue interface. Alternatively, real-time tool segmentation can be used to allow the surgeon to use the surgical tool as an interactive B-scan pointer.
In vivo photoreceptor imaging has enhanced the way vision scientists and ophthalmologists understand retinal structure and function and the etiology of numerous retinal pathologies. However, the complexity and large footprint of current systems capable of resolving photoreceptors have limited imaging to patients who are able to sit in an upright position and fixate for several minutes. Unfortunately, this excludes an important fraction of patients, including bedridden patients, small children, and infants. Here, we show that our dual-modality, high-resolution handheld probe, with a weight of only 94 g, is capable of visualizing photoreceptors in supine children. Our device utilizes a microelectromechanical systems (MEMS) scanner and a novel telescope design to achieve over an order of magnitude reduction in size compared to similar systems. The probe has a 7° field of view and a lateral resolution of 8 µm. The optical coherence tomography (OCT) system has an axial resolution of 7 µm and a sensitivity of 101 dB. High-definition scanning laser ophthalmoscopy (SLO) and OCT images were acquired from children ranging from 14 months to 12 years of age, with and without pathology, during examination under anesthesia in the operating room. Parafoveal cone imaging was demonstrated using the SLO arm of this device, without adaptive optics, over a 3° FOV for the first time in children under 4 years old. This work lays the foundation for pediatric research that will improve understanding of retinal development, maldevelopment, and the early onset of disease at the cellular level during the beginning stages of human growth.
Patient motion artifacts are an important source of data irregularities in OCT imaging. With longer-duration OCT scans, as needed for large wide-field-of-view scans or increased scan density, motion artifacts become increasingly problematic. Strategies to mitigate these motion artifacts are therefore necessary to ensure OCT data integrity. A popular strategy for reducing motion artifacts in OCT images is to capture two orthogonally oriented volumetric scans containing uncorrelated motion and subsequently reconstruct a motion-free volume by combining information from both datasets. While many different variations of this registration approach have been proposed, even the most recent methods may not be suitable for wide-FOV OCT scans, which can lack features away from the optic nerve head or arcades. To address this problem, we propose a two-stage motion correction algorithm for wide-FOV OCT volumes. In the first step, X- and Y-axis motion is corrected by registering OCT summed voxel projections (SVPs). To achieve this, we introduce a method based on a custom variation of the dense optical flow technique that is aware of the motion-free orientation of the scan. In the second step, a depth (Z-axis) correction approach based on the segmentation of the retinal layer boundaries in each B-scan using graph theory and dynamic programming is applied. This motion correction method was applied to wide-field retinal OCT volumes (approximately 80° FOV) of 3 subjects with substantial reduction in motion artifacts.
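A minimal sketch of the first (X/Y) stage is given below, using OpenCV's Farneback dense optical flow as a stand-in for the custom, scan-orientation-aware variant described above; array names and parameters are illustrative assumptions.

```python
import cv2
import numpy as np

def xy_motion_estimate(svp_xfast, svp_yfast):
    # Convert both summed voxel projections to 8-bit grayscale for the flow estimator.
    a = cv2.normalize(svp_xfast, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    b = cv2.normalize(svp_yfast, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    flow = cv2.calcOpticalFlowFarneback(
        a, b, None,
        pyr_scale=0.5, levels=4, winsize=31,
        iterations=5, poly_n=7, poly_sigma=1.5, flags=0)
    return flow  # per-pixel (dx, dy) displacement between the two SVPs
```

Since the displacement along each volume's fast axis can be assumed motion-free, only the slow-axis component of the estimated flow would be applied to each B-scan.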
Handheld scanning laser ophthalmoscopy (SLO) and optical coherence tomography (OCT) systems facilitate imaging of young children and subjects that have difficulty fixating. More compact and lightweight probes allow for better portability and increased comfort for the operator of the handheld probe. We describe a very compact, novel SLO and OCT handheld probe design. A single 2D microelectromechanical systems (MEMS) scanner and a custom optical design using a converging beam prior to the scanner permitted significant reduction in the system size. Our design utilized a combination of commercial and custom optics that were optimized in Zemax to achieve near diffraction-limited resolution of 8 μm over a 7° field of view. The handheld probe has a form factor of 7 x 6 x 2.5 cm and a weight of only 94 g, which is over an order of magnitude lighter than prior SLO-OCT handheld probes. Images were acquired from a normal subject with an incident power on the eye under the ANSI limit. With this device, which is the world’s lightest and smallest SLO-OCT system, we were able to visualize parafoveal cone photoreceptors and nerve fiber bundles without the use of adaptive optics.
We assessed the reproducibility of lateral and axial measurements performed with spectral-domain optical coherence tomography (SDOCT) instruments from a single manufacturer and across several manufacturers. One human retina phantom was imaged on two instruments each from four SDOCT platforms: Zeiss Cirrus, Heidelberg Spectralis, Bioptigen SDOIS, and hand-held Bioptigen Envisu. Built-in software calipers were used to perform manual measurements of a fixed lateral width (LW), central foveal thickness (CFT), and parafoveal thickness (PFT) 1 mm from foveal center. Inter- and intraplatform reproducibilities were assessed with analysis of variance and Tukey-Kramer tests. The range of measurements between platforms was 5171 to 5290 μm for mean LW (p<0.001), 162 to 196 μm for mean CFT (p<0.001), and 267 to 316 μm for mean PFT (p<0.001). All SDOCT platforms had significant differences between each other for all measurements, except LW between Bioptigen SDOIS and Envisu (p=0.27). Intraplatform differences were significantly smaller than interplatform differences for LW (p=0.020), CFT (p=0.045), and PFT (p=0.004). Conversion factors were generated for lateral and axial scaling between SDOCT platforms. Lateral and axial manual measurements have greater variance across different SDOCT platforms than between instruments from the same platform. Conversion factors for measurements from different platforms can produce normalized values for patient care and clinical studies.
Confocal scanning laser ophthalmoscopy (cSLO) enables high-resolution and high-contrast imaging of the retina by employing spatial filtering for scattered light rejection. However, to obtain optimized image quality, one must design the cSLO around scanner technology limitations and minimize the effects of ocular aberrations and imaging artifacts. We describe a cSLO design methodology resulting in a simple, relatively inexpensive, and compact lens-based cSLO design optimized to balance resolution and throughput for a 20-deg field of view (FOV) with minimal imaging artifacts. We tested the imaging capabilities of our cSLO design with an experimental setup from which we obtained fast and high signal-to-noise ratio (SNR) retinal images. At lower FOVs, we were able to visualize parafoveal cone photoreceptors and nerve fiber bundles even without the use of adaptive optics. Through an experiment comparing our optimized cSLO design to a commercial cSLO system, we show that our design demonstrates a significant improvement in both image quality and resolution.
We describe an efficient approach for the automated segmentation of pathological/morphological structures in ophthalmic Spectral Domain Optical Coherence Tomography (SDOCT) images. In this algorithm, image pixels are treated as nodes of a graph, with edge weights assigned to associated pairs of pixels. The weights vary according to the distances, brightness differences, and feature variations between pixel pairs. Cuts through the graph with minimum accumulated weights correspond to morphological layer boundaries. This approach has been applied to SDOCT images with encouraging results and thus forms an adaptable framework for the segmentation of many different ophthalmic structures.
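A compact sketch of this kind of graph construction is shown below: pixels are nodes, edges connect pixels in adjacent A-scans, and weights decrease with the summed vertical gradient of the two endpoints, so a minimum-weight left-to-right path traces a layer boundary. The specific weight form and 3-connectivity are common choices assumed here for illustration, not quoted from the paper.

```python
# Pixels-as-nodes graph sketch (written for clarity, not speed).
import numpy as np
import networkx as nx

def layer_boundary(img):
    grad = np.clip(np.gradient(img.astype(float), axis=0), 0, None)
    grad = (grad - grad.min()) / (np.ptp(grad) + 1e-12)       # normalize to [0, 1]
    rows, cols = img.shape
    w_min = 1e-5                                              # small positive floor
    G = nx.DiGraph()
    for c in range(cols - 1):
        for r in range(rows):
            for dr in (-1, 0, 1):                             # 3-connected adjacent columns
                r2 = r + dr
                if 0 <= r2 < rows:
                    w = 2.0 - grad[r, c] - grad[r2, c + 1] + w_min
                    G.add_edge((r, c), (r2, c + 1), weight=w)
    for r in range(rows):                                     # free endpoints on both sides
        G.add_edge('s', (r, 0), weight=w_min)
        G.add_edge((r, cols - 1), 't', weight=w_min)
    path = nx.dijkstra_path(G, 's', 't', weight='weight')
    return [r for (r, c) in path[1:-1]]                       # boundary row per column
```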
Drusen are an important imaging biomarker of age-related macular degeneration (AMD) progression, and their accurate detection and characterization is therefore important. We report on the development of an automatic method for the detection and segmentation of drusen in retinal images captured via high-speed spectral domain optical coherence tomography (SDOCT) systems. The proposed algorithm takes advantage of a priori knowledge about retinal shape and structure in AMD and normal eyes. In the first step, the location of the retinal nerve fiber layer (RNFL) is estimated by searching for locally connected segments with high radiometric vertical gradients appearing in the upper section of the SDOCT scans. The highly reflective and locally connected pixels that are spatially located below the RNFL are taken as the initial estimate of the retinal pigment epithelium (RPE) layer location. These rough estimates are smoothed and improved by using a slightly modified implementation of the Xu-Prince gradient vector flow based deformable snake method. Further steps, including a two-pass scan of the image, remove outliers and improve the accuracy of the estimates. Unlike healthy eyes, which commonly exhibit a convex RPE shape, the shape of the RPE layer in AMD eyes may include abnormalities due to the presence of drusen. Therefore, by enforcing a local convexity condition and fitting second- or fourth-order polynomials to the possibly unhealthy (abnormal) RPE curve, the healthy (normal) shape of the RPE layer is estimated. The area between the estimated normal and the segmented RPE outlines is marked as a possible drusen location. Moreover, fine-tuning steps are incorporated to improve the accuracy of the proposed technique. All methods are implemented in a graphical user interface (GUI) software package based on the MATLAB platform. Minor errors in estimating drusen volume can be easily corrected manually using the user-friendly software interface, and the program is continually refined to correct for recurring errors. This semi-supervised approach significantly reduces the time and resources needed to conduct a large-scale AMD study. The computational complexity of the core automated segmentation technique is attractive, as it takes only about 6.5 seconds on a conventional PC to segment, display, and record drusen locations in an image of size 512 × 1000 pixels. Experimental results on segmenting drusen in SDOCT images of different subjects are included, which attest to the effectiveness of the proposed technique.
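A sketch of the later steps (normal RPE estimation and drusen marking) might look like the following, given the segmented, possibly drusen-deformed RPE boundary as one row position per A-scan. The fit order and threshold are illustrative assumptions, and the sketch assumes the row index increases with depth.

```python
import numpy as np

def drusen_from_rpe(rpe_rows, order=4, min_lift_px=3):
    """rpe_rows: 1-D array, segmented RPE row position per A-scan."""
    cols = np.arange(rpe_rows.size)
    normal_rpe = np.polyval(np.polyfit(cols, rpe_rows, order), cols)  # smooth "normal" RPE
    lift = normal_rpe - rpe_rows              # positive where the RPE bulges upward
    drusen_mask = lift > min_lift_px          # candidate drusen columns
    drusen_area_px = float(lift[drusen_mask].sum())   # area between the two outlines
    return drusen_mask, drusen_area_px
```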
Bilateral filtering [1,2] has proven to be a powerful tool for adaptive denoising purposes. Unlike conventional filters, the bilateral filter defines the closeness of two pixels not only based on geometric distance but also based on radiometric (graylevel) distance. In this paper, to further improve the performance and find new applications, we make contact with a classic non-parametric image reconstruction technique called kernel regression [3], which is based on local Taylor expansions of the regression function. We extend and generalize the kernel regression method and show that bilateral filtering is a special case of this new class of adaptive image reconstruction techniques, corresponding to a specific choice of weighting kernels and a zeroth-order Taylor approximation. We show that improvements over classic bilateral filtering can be achieved by using higher-order local approximations of the signal.
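As a sketch of the connection (standard form, not quoted from the paper), zeroth-order kernel regression with a product of a spatial kernel K_s and a radiometric kernel K_r reduces to the bilateral filter:

```latex
\[
\hat{z}(\mathbf{x}_i)\;=\;
\frac{\sum_{j} K_s(\mathbf{x}_j-\mathbf{x}_i)\,K_r(y_j-y_i)\,y_j}
     {\sum_{j} K_s(\mathbf{x}_j-\mathbf{x}_i)\,K_r(y_j-y_i)}.
\]
```

Higher-order variants replace this locally constant model with a local Taylor polynomial fit, which is the source of the reported improvements.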
Theoretical and practical limitations usually constrain the achievable resolution of any imaging device. Super-resolution (SR) methods have been developed over the years to go beyond this limit by acquiring and fusing several low-resolution (LR) images of the same scene to produce a high-resolution (HR) image. The early works on SR, although occasionally mathematically optimal for particular models of data and noise, produced poor results when applied to real images. In this paper, we discuss two of the main issues related to designing a practical SR system, namely reconstruction accuracy and computational efficiency. Reconstruction accuracy refers to the problem of designing a robust SR method applicable to images from different imaging systems. We study a general framework for optimal reconstruction of images from grayscale, color, or color-filtered (CFA) cameras. The performance of our proposed method is boosted by using powerful priors and is robust to both measurement noise (e.g., CCD readout noise) and system noise (e.g., motion estimation error). Noting that motion estimation is often considered a bottleneck in terms of SR performance, we introduce the concept of "constrained motions" for enhancing the quality of super-resolved images. We show that using such constraints enhances the quality of the motion estimation and therefore results in more accurate reconstruction of the HR images. We also justify some practical assumptions that greatly reduce the computational complexity and memory requirements of the proposed methods. We use an efficient approximation of the Kalman filter (KF) and adopt a dynamic point of view of the SR problem. Novel methods for addressing these issues are accompanied by experimental results on real data.
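A forward model commonly used in this line of work, shown here only for orientation (the paper's exact formulation may add color-filtering and dynamic terms), is:

```latex
\[
\mathbf{y}_k \;=\; D\,H\,F_k\,\mathbf{x}\;+\;\mathbf{v}_k,
\qquad k=1,\dots,N,
\]
```

where F_k warps the HR image x according to the motion of frame k, H applies the blur (PSF), D downsamples to the LR grid, and v_k collects measurement and model noise.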
In the last two decades, a variety of super-resolution (SR) methods have been proposed. These methods usually address the problem of fusing a set of monochromatic images to produce a single monochromatic image with higher spatial resolution. In this paper, we address the dynamic and color SR problems of reconstructing a high-quality set of colored super-resolved images from low-quality mosaiced frames. Our approach includes a hybrid method for simultaneous SR and demosaicing, thereby taking into account the practical color measurements encountered in video sequences. For the case of translational motion and common space-invariant blur, the proposed method is based on a very fast and memory-efficient approximation of the Kalman filter. Experimental results on both simulated and real data are supplied, demonstrating the presented algorithm and its strengths.
In the last two decades, two related categories of problems have been studied independently in the image restoration literature: super-resolution and demosaicing. A closer look at these problems reveals the relation between them, and since conventional color digital cameras suffer from both low spatial resolution and color filtering, it is reasonable to address them in a unified context. In this paper, we propose a fast and robust hybrid method of super-resolution and demosaicing, based on a maximum a posteriori (MAP) estimation technique that minimizes a multi-term cost function. The L1 norm is used for measuring the difference between the projected estimate of the high-resolution image and each low-resolution image, removing outliers in the data and errors due to possibly inaccurate motion estimation. Bilateral regularization is used for regularizing the luminance component, resulting in sharp edges and forcing interpolation along the edges and not across them. Simultaneously, Tikhonov regularization is used to smooth the chrominance component. Finally, an additional regularization term is used to force similar edge orientation in different color channels. We show that the minimization of the total cost function is relatively easy and fast. Experimental results on synthetic and real data sets confirm the effectiveness of our method.
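A hedged sketch of such a multi-term MAP cost, with operators and weights only indicative of the terms described above, is:

```latex
\[
\hat{\mathbf{X}}=\arg\min_{\mathbf{X}}
\sum_{k}\bigl\lVert D H F_k \mathbf{X}-\mathbf{Y}_k\bigr\rVert_{1}
+\lambda_{L}\,J_{\mathrm{BTV}}(\mathbf{X}_{L})
+\lambda_{C}\,\bigl\lVert \Lambda\,\mathbf{X}_{C}\bigr\rVert_{2}^{2}
+\lambda_{O}\,J_{\mathrm{orient}}(\mathbf{X}).
\]
```

Here X_L and X_C denote the luminance and chrominance components, J_BTV is a bilateral (total-variation-like) regularizer, Λ is a high-pass operator for Tikhonov smoothing of the chrominance, and J_orient penalizes differing edge orientations across the color channels.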
In the last two decades, many papers have been published proposing a variety of methods for multi-frame resolution enhancement. These methods, which have a wide range of complexity, memory, and time requirements, are usually very sensitive to their assumed model of data and noise, often limiting their utility. Different implementations of the non-iterative Shift and Add concept have been proposed as very fast and effective super-resolution algorithms. The paper of Elad & Hel-Or (2001) provided an adequate mathematical justification for the Shift and Add method for the simple case of an additive Gaussian noise model. In this paper, we prove that an additive Gaussian distribution is not a proper model for super-resolution noise. Specifically, we show that Lp norm minimization (1≤p≤2) results in a pixelwise weighted mean algorithm, which requires the least possible amount of computation time and memory and produces a maximum likelihood solution. We also justify the use of a robust prior information term based on the bilateral filter idea. Finally, for the underdetermined case, where the number of non-redundant low-resolution frames is less than the square of the resolution enhancement factor, we propose a method for the detection and removal of outlier pixels. Our experiments using commercial digital cameras show that our proposed super-resolution method provides significant improvements in both accuracy and efficiency.
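A toy illustration of the Shift and Add idea for integer-pixel translations is sketched below: each low-resolution frame is placed on the high-resolution grid at its (assumed known) shift, and every high-resolution pixel is estimated as the pixelwise median (L1) or mean (L2) of the samples that land on it. This is purely illustrative of the pixelwise estimation result discussed above, not the paper's implementation.

```python
import numpy as np

def shift_and_add(frames, shifts, factor, use_median=True):
    """frames: list of (h, w) LR arrays; shifts: list of (dy, dx) in HR pixels, 0 <= shift < factor."""
    h, w = frames[0].shape
    stack = np.full((len(frames), h * factor, w * factor), np.nan)
    for i, (frame, (dy, dx)) in enumerate(zip(frames, shifts)):
        stack[i, dy::factor, dx::factor][:h, :w] = frame      # place frame on the HR grid
    combine = np.nanmedian if use_median else np.nanmean
    hr = combine(stack, axis=0)               # NaN (and a warning) where no frame lands
    return np.nan_to_num(hr, nan=float(np.mean(frames)))      # fill empty HR pixels
```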