The performance of shift-invariant distance classifiers based on correlation filters is evaluated. First, the effect of noise on a classifier designed to recognize synthetic aperture radar (SAR) images is observed. Then, a two-class automatic target recognition (ATR) system designed to recognize infrared images of actual targets is evaluated. The results attest to the ability of the distance classifier to tolerate distortions and to recognize targets in the presence of noise and clutter.
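The abstract does not spell out the filter design, so the following NumPy sketch only illustrates the generic shift-invariant mechanism such classifiers rely on: each class is represented by a frequency-domain filter, the test image is correlated with every filter via FFTs, and the decision is read from the correlation outputs (here, simply the largest peak; the actual distance metric and the filter training from distorted exemplars are omitted).

```python
import numpy as np

def correlate(image, filt_freq):
    """Circular cross-correlation of `image` with a frequency-domain filter;
    shift invariance comes from reading off the output peak."""
    IMG = np.fft.fft2(image)
    return np.fft.ifft2(IMG * np.conj(filt_freq)).real

def classify(image, class_filters):
    """Assign `image` to the class whose correlation peak is strongest.
    `class_filters` maps class label -> frequency-domain filter array."""
    scores = {label: correlate(image, H).max()
              for label, H in class_filters.items()}
    return max(scores, key=scores.get)

# Toy usage: two "classes" built from reference images (simple matched filters).
rng = np.random.default_rng(0)
ref_a = rng.random((64, 64)); ref_b = rng.random((64, 64))
filters = {"A": np.fft.fft2(ref_a), "B": np.fft.fft2(ref_b)}
test = np.roll(ref_a, (5, 9), axis=(0, 1)) + 0.1 * rng.standard_normal((64, 64))
print(classify(test, filters))   # expected: "A", despite the shift and the noise
```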
We have designed and are building an acousto-optic, one-dimensional, space-integrating wideband optical correlator that is interfaced to a high-performance parallel computer (HPC) signal processing system through multiple 100 MB/s High Performance Parallel Interface (HiPPI) channels. The optical correlator system provides a high-speed matched filter 'engine' for the HPC, producing a maximum sustained computation rate equivalent to a digital processor operating at more than 15 GFLOPS, but with significantly lower power and volume requirements. The system is intended to demonstrate the synergistic capabilities of a hybrid digital-optical approach. The application is a matched-filter detection system that must perform real-time cross-correlations between return signals and hundreds of time-warped (dilated or compressed) versions of a transmitted signal. The system uses a layered hardware approach consisting of an optical kernel, a high-speed custom electronic interface layer that keeps the optical kernel operating at peak efficiency, and a VME-based outer layer that uses commercial hardware to provide the main control and interface framework. The system includes multiple HiPPI channels that may be added in a modular fashion, two 12-bit, 200 MSPS arbitrary waveform generators, a 200 MSPS A/D converter subsystem, a custom quadrature demodulation post-processor, a direct digital synthesis clock and control subsystem, automatic gain control electronics, a system controller, and a Sun-based development and monitoring system. The designed system output bandwidth is 128 MB/s, although the optical engine is capable of greater throughput.
An optimum receiver is designed, based on multiple alternative hypothesis testing, for pattern recognition with nonoverlapping target and scene noise. The designed system can detect noise-free target(s) with unknown illumination(s) in any nonoverlapping scene noise with no error. A hybrid electro-optical processor is introduced to implement the optimum receiver. Computer simulation results are provided.
Images obtained from ground-based telescopes are distorted due to the effects of atmospheric turbulence. This disturbance can be compensated for by employing adaptive optics (predetection compensation), image reconstruction techniques (postdetection compensation), or a combination of both (hybrid compensation). This study derives analytic expressions for the residual mean squared error of each technique. These mean squared error expressions are then used to parametrically evaluate the performance of the compensated imaging techniques under varying conditions. Parameters of interest include actuator spacing, coherence length, and wavefront sensor noise variance. It is shown that hybrid imaging allows for the design of lower cost systems (fewer actuators) that still provide good correction. The adaptive optics system modeled includes a continuous faceplate deformable mirror and a Hartmann Shack wavefront sensor. The linear image reconstruction technique modeled is deconvolution via inverse filtering. The hybrid system employs the adaptive optics for first order correction and the image reconstruction for higher order correction. This approach is not limited to correction of atmospheric turbulence degraded images. It can be applied to other disturbances, such as space platform jitter, as long as the corresponding structure function can be estimated.
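The image-reconstruction stage modeled in the study is deconvolution via inverse filtering. A minimal NumPy sketch of a regularized inverse filter is given below, assuming the optical transfer function of the overall blur has been estimated; the constant eps stands in for whatever noise-dependent regularization the full error analysis would prescribe, and the Gaussian OTF is only an illustrative blur model.

```python
import numpy as np

def inverse_filter(blurred, otf, eps=1e-3):
    """Deconvolve `blurred` given the optical transfer function `otf`
    (same shape, frequency domain). `eps` regularizes divisions where
    |H| is small so sensor noise is not amplified."""
    B = np.fft.fft2(blurred)
    restored = np.conj(otf) * B / (np.abs(otf) ** 2 + eps)
    return np.fft.ifft2(restored).real

# Toy usage: blur a scene with an assumed Gaussian OTF, then invert it.
n = 128
fy, fx = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n), indexing="ij")
otf = np.exp(-(fx ** 2 + fy ** 2) / (2 * 0.05 ** 2))        # assumed blur model
truth = np.zeros((n, n)); truth[40:60, 50:90] = 1.0
blurred = np.fft.ifft2(np.fft.fft2(truth) * otf).real
restored = inverse_filter(blurred, otf)
print(np.abs(blurred - truth).mean(), np.abs(restored - truth).mean())  # restoration error is smaller
```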
A hybrid character recognition system is described. It is composed of two parts: the feature extractor and the inner-product correlator. Ten Arabic numerals, 0 through 9, are tested for recognition by the system. The experimental results show that all printed characters are recognized perfectly, with rotational invariance over a 15-degree angle, and that the recognition rate for handwritten characters exceeds sixty percent.
This presentation describes an on-line image analysis system for the automatic distribution analysis of holographic particles in 3D space. To obtain the 3D distribution parameters of the particles, sequences of 2D cross-sectional retrieved images of the particle hologram are obtained using the in-line retrieval method; the processing of these 2D retrieved images is the subject of this presentation. To segment the candidate particles, an entropy-based automatic threshold selection method is adopted. In the process of out-of-focus particle removal, the radial intensity profile of the candidates in the original image and the clarity of the candidates' neighboring areas in the Sobel-filtered image are analyzed. Experimental results are presented to show the efficiency of the described approach.
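The exact entropy criterion is not given in the abstract; as one representative entropy-based automatic threshold selection, the sketch below implements the classical Kapur maximum-entropy threshold on the gray-level histogram and uses it to segment candidate particles in a toy image.

```python
import numpy as np

def kapur_threshold(image, nbins=256):
    """Maximum-entropy (Kapur) threshold: choose t maximizing the sum of the
    entropies of the below- and above-threshold gray-level distributions."""
    hist, _ = np.histogram(image, bins=nbins, range=(0, nbins))
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, nbins):
        p0, p1 = p[:t].sum(), p[t:].sum()
        if p0 == 0 or p1 == 0:
            continue
        q0, q1 = p[:t] / p0, p[t:] / p1
        h0 = -np.sum(q0[q0 > 0] * np.log(q0[q0 > 0]))
        h1 = -np.sum(q1[q1 > 0] * np.log(q1[q1 > 0]))
        if h0 + h1 > best_h:
            best_t, best_h = t, h0 + h1
    return best_t

# Candidate-particle segmentation: pixels brighter than the selected threshold.
rng = np.random.default_rng(1)
img = rng.normal(60, 10, (256, 256)); img[100:120, 100:120] += 120
mask = img > kapur_threshold(np.clip(img, 0, 255))
print(mask.sum())   # roughly the 400 bright "particle" pixels
```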
Interferometric images are very important for testing, measurement, and inspection applications. With the development of computer technology, automatic analysis of interferometric fringes is required. The first step is the extraction of fringes from the image, to distinguish the fringe information from the background. Algorithms described in the literature can process some kinds of simple interferometric images, but they need man-machine interactive operations to process questionable points (fringe-interrupted points and false fringes caused by noise) in complex images such as holographic interferometric images. This paper discusses a technique for automatic processing of these questionable points without any man-machine interactive operations.
Based on mathematical morphology and the digital umbra shading and shadowing algorithm, a new scheme for realizing the fundamental morphological operations on one-dimensional gray-scale images is proposed. The mathematical formulae for the parallel processing of 1D gray-scale images are summarized, and some important conclusions on extending morphological processing from binary to gray-scale images are obtained. The advantages of this scheme are its simple structure, high gray-level resolution, and good parallelism. It greatly increases the speed of morphological processing of gray-scale images and yields more accurate results.
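The optical scheme itself cannot be reproduced here, but the 1D gray-scale operations it realizes are standard. A brief NumPy sketch of gray-scale dilation and erosion of a 1D signal by a structuring function (the digital counterpart of the umbra formulation) is given for reference.

```python
import numpy as np

def gray_dilate_1d(f, g):
    """Gray-scale dilation: (f ⊕ g)(x) = max_k f(x - k) + g(k)."""
    n, m = len(f), len(g)
    pad = np.pad(f, (m - 1, m - 1), constant_values=-np.inf)
    return np.array([np.max(pad[x:x + m][::-1] + g) for x in range(n + m - 1)])

def gray_erode_1d(f, g):
    """Gray-scale erosion: (f ⊖ g)(x) = min_k f(x + k) - g(k)."""
    n, m = len(f), len(g)
    pad = np.pad(f, (0, m - 1), constant_values=np.inf)
    return np.array([np.min(pad[x:x + m] - g) for x in range(n)])

# Toy usage with a small structuring function.
f = np.array([0, 1, 3, 2, 8, 2, 1, 0], dtype=float)
g = np.array([0, 1, 0], dtype=float)
print(gray_dilate_1d(f, g))
print(gray_erode_1d(f, g))
```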
In this paper, several image compression theories are unified: Hilbert image compression; fractal image compression; image compression using Boltzmann machines, stochastic artificial neural networks (SANN), and stochastic cellular automata (SCA); and 0-image encoding.
In this paper, which is the first of a series, we discuss the optical implementation of signal and image compression via block encoding, runlength encoding, and transform coding. In Part 2 of this series, we propose architectures for data encryption via mono- and poly-alphabetic substitutions, transposition, polygraphic ciphers, vector quantization, and DES (data encryption standard). Analyses emphasize computational cost inclusive of propagation time, as well as a discussion of the information loss expected from physical devices such as spatial light modulators.
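For orientation, the sketch below shows run-length encoding, the second of the three codings named above, in its elementary digital form; the paper's contribution is the optical realization and its cost analysis, not this digital baseline.

```python
import numpy as np

def rle_encode(seq):
    """Run-length encode a 1D sequence as (value, run_length) pairs."""
    runs = []
    for v in seq:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([int(v), 1])
    return [(v, n) for v, n in runs]

def rle_decode(runs):
    return np.concatenate([np.full(n, v) for v, n in runs])

# Toy usage on a raster-scanned binary image row.
row = np.array([0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0])
runs = rle_encode(row)
print(runs)                                   # [(0, 3), (1, 2), (0, 1), (1, 4), (0, 2)]
print(np.array_equal(rle_decode(runs), row))  # True
```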
Due to a reduced space requirement, the output of compressive transformations can generally be processed with fewer operations than are required to process uncompressed data. Since 1992, we have published theory that unifies the processing of compressed and/or encrypted imagery, and have demonstrated significant computational speedup in the case of compressive processing. In this paper, the third of a series, we extend our previously reported work in optical processor design based on image algebra to include the design of optical processors that compute over compressed data. Parts 1 and 2 describe optical architectures that are designed to produce compressed or encrypted imagery.
Besides the advantage in statistical efficiency, coding images in certain transform domains also facilitates the adoption of the sensitivity function of the human visual system (HVS). This involves establishing, in that transform domain, a subjective distortion criterion in the form of a weighted mean square error. In this paper, after a brief review of some existing methods for that purpose, the method used in a recent study is described. Through a special arrangement with DCT coding, it is examined in parallel with the method of Katto et al. The rate-distortion relation obtained from the coding process indicates that the two methods are robust and close in efficiency.
In Segmented Image Coding (SIC), an image is segmented into several homogeneous regions. In each of the regions, the image intensity function is represented by a weighted sum of orthogonal base functions. In existing SIC methods, the bases are generated by Gram-Schmidt orthogonalization of a nonorthogonal polynomial (or sometimes sinusoidal) starting base. Unfortunately, the orthogonal base for a segment depends on the segment's shape and must be recomputed for every region. Therefore, the computational demands of SIC are much higher than those of block coding. This paper shows that when the starting base is orthogonalized in a different order, the resulting orthogonal base functions are the product of two component functions. This property is called weak separability. In the weakly separable case, only the component functions appear in computations, which implies that it is not necessary to explicitly evaluate the orthogonal bases. The major advantage of the new approach is that the component functions can be generated quickly using three-term recurrences and that little memory is needed for storing them. Consequently, (the components of) the new bases can be computed much faster than the classical bases (typically 10-25 times) and using less memory. This is true even though both methods produce images of the same subjective quality. The paper also shows that a wide variety of other orthogonal bases is obtained by considering more general starting bases. The spatial properties of the corresponding base functions are described qualitatively; they are determined by two parameter functions and can be modified by appropriately selecting these parameter functions.
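The weakly separable construction is not reproduced here, but the classical per-segment baseline it accelerates is easy to state: orthogonalize a polynomial starting base restricted to the segment's pixels, then code the segment's intensity by its coefficients in that base. The hedged NumPy sketch below does exactly that with a monomial starting base (a QR factorization plays the role of Gram-Schmidt); the paper's point is that an equivalent base can be obtained far more cheaply via three-term recurrences.

```python
import numpy as np

def segment_basis(mask, degree=2):
    """Orthonormalize monomials 1, x, y, x^2, xy, y^2, ... over the pixels
    of an arbitrarily shaped segment given by boolean `mask`."""
    ys, xs = np.nonzero(mask)
    x = (xs - xs.mean()) / (np.ptp(xs) + 1e-12)   # normalize for conditioning
    y = (ys - ys.mean()) / (np.ptp(ys) + 1e-12)
    cols = [x**i * y**j for i in range(degree + 1)
            for j in range(degree + 1 - i)]
    A = np.stack(cols, axis=1)                    # one column per starting function
    Q, _ = np.linalg.qr(A)                        # same span as Gram-Schmidt, more stable
    return ys, xs, Q

def code_segment(image, mask, degree=2):
    """Project the segment's intensities onto the orthonormal base and
    reconstruct; the coefficients are what a SIC scheme would transmit."""
    ys, xs, Q = segment_basis(mask, degree)
    coeffs = Q.T @ image[ys, xs]
    recon = np.zeros_like(image, dtype=float)
    recon[ys, xs] = Q @ coeffs
    return coeffs, recon

# Toy usage: an irregular segment carrying a smooth intensity ramp.
yy, xx = np.mgrid[0:64, 0:64]
mask = (yy - 30) ** 2 + 2 * (xx - 30) ** 2 < 500
image = 0.5 * xx + 0.2 * yy
coeffs, recon = code_segment(image, mask)
print(np.abs(recon[mask] - image[mask]).max() < 1e-8)   # a ramp is coded exactly
```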
A continuously evolving, nonstationary time signal is essentially impossible to analyze digitally. To analyze it by analog optics, we suggest an acousto-optic wavelet transform continuous in separation time, clock time, and frequency. For convenience, we cause the pattern for any signal feature to remain spatially fixed while that feature is in the time window.
Smart zooming refers to certain digital image processing algorithms that enable the examination of detail ordinarily obscured by pixelation effects. These algorithms use radial basis function interpolation to smooth image blockiness due, for example, to magnification by pixel replication. They may permit more smoothing flexibility while retaining more image detail than conventional convolution smoothing methods.
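The abstract does not fix the basis functions or their parameters; the sketch below shows the general mechanism under the assumption of a Gaussian radial basis function: one weight is fitted per known pixel of a small patch, and the interpolant is evaluated on a finer grid, avoiding the blockiness of plain pixel replication.

```python
import numpy as np

def rbf_zoom(patch, factor=4, sigma=0.7):
    """Interpolate a small image patch onto a grid `factor` times finer,
    using Gaussian radial basis functions centred on the known pixels."""
    h, w = patch.shape
    yx = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"),
                  axis=-1).reshape(-1, 2).astype(float)       # known pixel centres
    d2 = ((yx[:, None, :] - yx[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))
    w_rbf = np.linalg.solve(K + 1e-8 * np.eye(len(yx)), patch.ravel())
    fine = np.stack(np.meshgrid(np.linspace(0, h - 1, factor * h),
                                np.linspace(0, w - 1, factor * w),
                                indexing="ij"), axis=-1).reshape(-1, 2)
    d2f = ((fine[:, None, :] - yx[None, :, :]) ** 2).sum(-1)
    return (np.exp(-d2f / (2 * sigma ** 2)) @ w_rbf).reshape(factor * h, factor * w)

# Toy usage: an 8x8 patch zoomed 4x, versus blocky pixel replication.
rng = np.random.default_rng(2)
patch = rng.random((8, 8))
smooth = rbf_zoom(patch)
blocky = np.kron(patch, np.ones((4, 4)))
print(smooth.shape, blocky.shape)   # (32, 32) (32, 32)
```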
Due to its iterative nature, the execution of the maximum likelihood expectation maximization (ML-EM) reconstruction algorithm requires a long computation time. To overcome this problem, multiprocessor machines can be used. In this paper, a parallel implementation of the algorithm for positron emission tomography (PET) images is presented. To cope with the difficulties involved in parallel programming, a programming environment based on a visual language has been used.
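The paper's contribution is the parallel, visual-language implementation, which cannot be shown here; for context, the serial ML-EM update that is being parallelized is compact enough to sketch in a few NumPy lines, assuming a (here randomly generated) system matrix A mapping image voxels to detector bins.

```python
import numpy as np

def ml_em(A, y, n_iter=50):
    """ML-EM for emission tomography: x <- x / (A^T 1) * A^T (y / (A x)).
    A: (n_bins, n_voxels) system matrix, y: measured counts per bin."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                     # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x
        ratio = np.where(proj > 0, y / np.maximum(proj, 1e-12), 0.0)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x

# Toy usage: recover a sparse "activity" vector from noiseless projections.
rng = np.random.default_rng(3)
A = rng.random((200, 50))
x_true = np.zeros(50); x_true[[5, 20, 33]] = [4.0, 2.0, 1.0]
y = A @ x_true
x_hat = ml_em(A, y, n_iter=2000)
print(np.round(x_hat[[5, 20, 33]], 2))   # approaches the true activities
```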
By analyzing the structural and image-formation characteristics of the Dove prism, this paper presents a discal (disc-shaped) image rotator with a large aperture and small size. The discal image rotator can compensate for image lean in an optical system.
The problem of discriminating handwritten from machine-printed text is important for character recognition applications because most recognition algorithms for handwritten text differ considerably from those for machine-printed text. Therefore, an efficient segregation of the two streams is necessary prior to recognition in order to minimize system cost and complexity. Several techniques have been proposed based on character connectivity and heuristics, but very few achieve results at the 99% level. The technique described in this paper has been proven to yield performance figures in the high 99% range on tens of thousands of IRS tax forms and postal envelopes. The proposed technique uses, as its main discrimination feature, the density of black relative to white pixels for a given binary field, and the overall pixel density for a gray-scale field. First, the given field is boxed very closely and its boundaries are isolated in space. A horizontal histogram is extracted for this field, and the total number of black pixels is computed. The number of black pixels per unit area is generated for binary text, and the sum of all pixels is generated for gray-level text. When tested on a large number of samples, these densities cluster into distinct normal distributions for handwritten and machine-printed text, respectively. Fuzzy thresholds are set where the two normal curves cross, with a confidence interval of 99%. Samples whose densities fall below the threshold are considered handwritten, and samples whose densities fall above the threshold are considered machine-printed.
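A compact sketch of the decision rule described above is given below, under the simplifying assumptions that the field is already binarized and that the two class-conditional density distributions have been fitted as Gaussians from training data (the means and standard deviations used are hypothetical). The threshold is placed where the two fitted curves cross, and a field is labeled according to the side on which its density falls.

```python
import numpy as np

def black_density(field):
    """Black-pixel density of a tightly boxed binary text field (1 = black)."""
    ys, xs = np.nonzero(field)
    box = field[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    return box.sum() / box.size

def crossing_threshold(mu_hw, sd_hw, mu_mp, sd_mp):
    """Density at which the two fitted normal curves intersect (between the means)."""
    xs = np.linspace(min(mu_hw, mu_mp), max(mu_hw, mu_mp), 10_000)
    pdf = lambda x, mu, sd: np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
    return xs[np.argmin(np.abs(pdf(xs, mu_hw, sd_hw) - pdf(xs, mu_mp, sd_mp)))]

def classify(field, threshold, handwritten_is_sparser=True):
    sparse = black_density(field) < threshold
    return "handwritten" if sparse == handwritten_is_sparser else "machine-printed"

# Hypothetical class statistics standing in for values learned from training fields.
thr = crossing_threshold(mu_hw=0.08, sd_hw=0.03, mu_mp=0.18, sd_mp=0.04)
sample = (np.random.default_rng(4).random((40, 200)) < 0.2).astype(int)
print(round(thr, 3), classify(sample, thr))
```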
Background normalization is a low-level image processing task typically used to enhance images by eliminating featureless, nonuniform background illumination. Automatic background normalization requires three distinct steps: threshold selection and segmentation, reconstruction of background image, and subtraction. This paper presents a new region-based thresholding criterion for background identification and normalization. Experimental results will be presented.
A typical method of removing periodic interference from an image is to apply a two-dimensional fast Fourier transform (FFT) to the interference-corrupted image and manually locate and remove the impulses due to the interference. The last step is to inverse fast Fourier transform the interference-free spectrum to yield the restored image. This is a manual process, usually requiring human interaction and several iterations. In this paper, an adaptive median filter applied to an interference-corrupted spectrum is presented that automatically removes only the interference spectral components while leaving the spectral components of the interference-free image unmodified. The result is an interference-free image obtained automatically.
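A hedged NumPy sketch of the idea follows: take the 2D FFT, compare each coefficient's magnitude with the median magnitude in a local spectral window, and pull outlying (interference) coefficients back toward that local median before inverse-transforming. The window size and the outlier factor are illustrative choices, not the paper's tuned values.

```python
import numpy as np

def remove_periodic_interference(img, win=7, factor=5.0):
    """Suppress isolated spectral spikes (periodic interference) by an
    adaptive median test on the magnitude spectrum."""
    F = np.fft.fft2(img)
    mag = np.abs(F)
    half = win // 2
    pad = np.pad(mag, half, mode="wrap")
    med = np.empty_like(mag)                 # local median magnitude per frequency
    for i in range(mag.shape[0]):
        for j in range(mag.shape[1]):
            med[i, j] = np.median(pad[i:i + win, j:j + win])
    spikes = mag > factor * med
    F_clean = np.where(spikes, F * (med / np.maximum(mag, 1e-12)), F)
    return np.fft.ifft2(F_clean).real

# Toy usage: a smooth random scene plus a strong sinusoidal interference.
rng = np.random.default_rng(5)
noise = rng.standard_normal((128, 128))
fy, fx = np.meshgrid(np.fft.fftfreq(128), np.fft.fftfreq(128), indexing="ij")
clean = np.fft.ifft2(np.fft.fft2(noise) * np.exp(-(fx**2 + fy**2) / 0.02)).real
xx = np.arange(128)
corrupted = clean + 0.5 * np.sin(2 * np.pi * 20 * xx / 128)     # periodic stripes
restored = remove_periodic_interference(corrupted)
print(np.abs(corrupted - clean).mean(), np.abs(restored - clean).mean())  # error drops sharply
```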
Edges are considered to carry the most valuable information in an image, and extracting them is an essential step in many computer vision processes. The intensity gradient is widely used to determine the existence of step or ramp edges, and thresholding can be applied to extract places where the gradient is high. One commonly encountered problem with this approach is that it is sometimes difficult to find an effective threshold value. In this paper we describe a growing process to extract image edges. Our goal is to extract fairly complete edges without many spurious segments. Using a higher gradient threshold, we first detect places where there is a clear indication that an edge exists; these places are used as seed edges. The initial seed edges might not be complete, so we then lower the gradient threshold to expand the initial edge map to include weaker edge parts that are connected to what has already been extracted. In other words, weaker edges grow out from the seed edge map. Random intensity variations over homogeneous areas will not be extracted, because they are not connected to the seed edges, even though their gradient values might exceed the lower threshold. We have also developed a pixel gradient ranking method so that the resulting edges are a single pixel thick and are located where the intensity change is locally the highest. The approach has been tested on a number of real images and is found to be effective.
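The dual-threshold, grow-from-seeds idea translates into a few lines of NumPy (gradient magnitude from simple difference kernels, strong seeds, breadth-first growth into connected weaker pixels); the gradient-ranking step that thins the result to single-pixel width is omitted, and both thresholds are illustrative.

```python
import numpy as np
from collections import deque

def seed_grow_edges(img, t_high=0.3, t_low=0.1):
    """Edge map from seed pixels (normalized gradient > t_high) grown into
    connected pixels whose gradient exceeds the lower threshold t_low."""
    gy, gx = np.gradient(img.astype(float))
    grad = np.hypot(gx, gy)
    grad /= grad.max() + 1e-12
    edges = grad >= t_high                        # seed edges
    frontier = deque(zip(*np.nonzero(edges)))
    h, w = img.shape
    while frontier:
        y, x = frontier.popleft()
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and not edges[ny, nx] \
                        and grad[ny, nx] >= t_low:
                    edges[ny, nx] = True          # weak edge connected to a seed
                    frontier.append((ny, nx))
    return edges

# Toy usage: a bright square on a mildly noisy background.
rng = np.random.default_rng(6)
img = rng.normal(0, 0.03, (128, 128))
img[32:96, 32:96] += 1.0
print(seed_grow_edges(img).sum())    # roughly the perimeter pixels of the square
```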
An extension of one of the fastest existing algorithms for the computation of the 2D discrete cosine transform is given. The algorithm can be implemented in place, requiring N² fewer memory locations and 2N² fewer data transfers for the computation of N×N DCT points compared to existing 2D FCT algorithms. Based on the proposed algorithm, a fast pruning algorithm is derived for computing the N₀×N₀ lowest-frequency components of a length-N×N discrete cosine transform, with both N and N₀ being powers of 2. The computational complexity of the algorithm is compared with that of the row-column pruning method, and experimental results on execution times are given.
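For orientation, the row-column pruning baseline that the proposed algorithm is compared against can be written directly with a library DCT: transform the rows, keep only the N₀ lowest-frequency outputs, then transform those columns. The sketch below uses SciPy's DCT-II and merely discards the unneeded outputs, so it demonstrates the equivalence of the pruned result rather than the computational saving; the paper's faster in-place algorithm is not reproduced.

```python
import numpy as np
from scipy.fft import dct

def pruned_dct2(block, n0):
    """Lowest n0 x n0 2D DCT-II coefficients via row-column pruning."""
    rows = dct(block, type=2, axis=1, norm="ortho")[:, :n0]   # row DCTs, keep n0 columns
    return dct(rows, type=2, axis=0, norm="ortho")[:n0, :]    # column DCTs of the kept part

# Check against the full 2D DCT of a 16x16 block.
rng = np.random.default_rng(7)
x = rng.random((16, 16))
full = dct(dct(x, type=2, axis=1, norm="ortho"), type=2, axis=0, norm="ortho")
print(np.allclose(pruned_dct2(x, 4), full[:4, :4]))   # True
```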
It is well known that targets moving along track within a Synthetic Aperture Radar (SAR) field of view are imaged as defocused objects. The SAR stripmap mode is tuned to stationary ground targets and the mismatch between the SAR processing parameters and the target motion parameters causes the energy to spill over to adjacent image pixels, thus not only hindering target feature extraction, but also reducing the probability of detection. The problem can be remedied by generating the image using a filter matched to the actual target motion parameters, effectively focusing the SAR image on the target. For a fixed rate of motion the target velocity can be estimated from the slope of the Doppler frequency characteristic. The processing is carried out on the range compressed data but before azimuth compression. The problem is similar to the classical problem of estimating the instantaneous frequency of a linear FM signal (chirp). This paper investigates the application of three different time-frequency analysis techniques to estimate the instantaneous Doppler frequency of range compressed SAR data. In particular, we compare the Wigner-Ville distribution, the Gabor expansion and the short-time Fourier transform with respect to their performance in noisy SAR data. Criteria are suggested to quantify the performance of each method in the joint time-frequency domain. It is shown that these methods exhibit sharp signal-to-noise threshold effects, i.e., a certain SNR below which the accuracy of the velocity estimation deteriorates rapidly. It is also shown that the methods differ with respect to their representation of the SAR data.
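Of the three analysis tools compared, the short-time Fourier transform is the simplest to sketch. The fragment below estimates the instantaneous frequency of a noisy linear FM signal from the peak of each STFT frame and fits a line through the track to obtain the chirp (Doppler) rate, which is the quantity the velocity estimate is derived from; the signal parameters and window choices are illustrative, not those of the SAR data.

```python
import numpy as np

def stft_if_estimate(sig, fs, nwin=128, hop=32):
    """Instantaneous-frequency track from STFT frame peaks, plus the slope
    (chirp rate) of a straight-line fit through that track."""
    win = np.hanning(nwin)
    starts = np.arange(0, len(sig) - nwin, hop)
    freqs = np.fft.rfftfreq(nwin, d=1 / fs)
    track = []
    for s in starts:
        spec = np.abs(np.fft.rfft(sig[s:s + nwin] * win))
        track.append(freqs[np.argmax(spec)])
    times = (starts + nwin / 2) / fs
    slope, intercept = np.polyfit(times, track, 1)
    return times, np.array(track), slope

# Toy usage: a noisy linear chirp sweeping upward at 100 Hz/s from 200 Hz.
fs, dur = 4000.0, 2.0
t = np.arange(0, dur, 1 / fs)
chirp = np.cos(2 * np.pi * (200 * t + 0.5 * 100 * t ** 2))
noisy = chirp + 0.5 * np.random.default_rng(8).standard_normal(t.size)
_, _, rate = stft_if_estimate(noisy, fs)
print(round(rate, 1))   # close to the true chirp rate of 100 Hz/s
```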
As the operational requirements placed on airborne surveillance systems increase, many in-service systems are approaching the limits of their performance. This problem is compounded by modifications to the systems, such as the change of displays and the addition of recording equipment, which leave the systems not ergonomically and psychophysiologically optimized. Simulation has been conducted and sensor-system modelling performed to assess the improvement in operational performance attainable by applying image processing techniques viable for implementation in the real-time airborne environment. Equipment has been developed and operated in service, and operational reports have confirmed the results of the system modelling. Future upgrades of the processing system will enable the application of such processing to a range of in-service systems as an intermediate, low-cost system upgrade.
Decomposability of convex gray-scale structural functions on the 1D digital space Z will be discussed. Based on this, we will consider how to decompose 1D digital structural functions, especially those taking integer values, into bipoint functions or into locally defined functions (i.e., functions with small support domains). It will be shown that any real-valued convex function on Z can be decomposed, and some efficient decomposition algorithms will be offered. For integer-valued convex functions on Z, although most may be indecomposable, we will show that after a small modification an (approximate) decomposition can be found efficiently. A simple technique will be presented to remove the distortion of morphological transformations caused by using this approximate decomposition. Finally, a brief discussion will be given of the decomposition of nonconvex structural functions.
Automatic micropropagation is necessary to produce large amounts of biomass cost-effectively. Juvenile plants are dissected in a clean-room environment at particular points on the stem or the leaves. A vision system detects possible cutting points and controls a specialized robot. This contribution is directed to the pattern-recognition algorithms used to detect structural parts of the plant.
A method for threshold selection based on histogram transformation is proposed. New formulae for the histogram transformation are introduced in the paper. After the transformation, the histogram becomes smooth, so the optimal threshold is easy to detect. The method is computationally fast and efficient. Experimental results for a set of images are presented.
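The paper's transformation formulae are not reproduced in the abstract. As a stand-in, the sketch below smooths the gray-level histogram with a moving average (one simple transformation that makes the histogram smooth) and then places the threshold at the deepest valley between the two dominant, well-separated peaks.

```python
import numpy as np

def valley_threshold(image, nbins=256, smooth=15):
    """Threshold at the deepest valley of a smoothed gray-level histogram."""
    hist, edges = np.histogram(image, bins=nbins)
    h = np.convolve(hist, np.ones(smooth) / smooth, mode="same")  # smoothed histogram
    peaks = [i for i in range(1, nbins - 1) if h[i] > h[i - 1] and h[i] >= h[i + 1]]
    p1 = max(peaks, key=lambda i: h[i])                 # dominant peak
    far = [i for i in peaks if abs(i - p1) > 2 * smooth] # peaks of a separate mode
    p2 = max(far, key=lambda i: h[i])
    lo, hi = sorted((p1, p2))
    valley = lo + np.argmin(h[lo:hi + 1])
    return 0.5 * (edges[valley] + edges[valley + 1])

# Bimodal toy image: dark background plus a brighter object.
rng = np.random.default_rng(9)
img = rng.normal(80, 12, (200, 200)); img[60:140, 60:140] = rng.normal(170, 12, (80, 80))
t = valley_threshold(img)
print(round(t), (img > t).sum())   # threshold near 125; roughly 6400 object pixels
```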
A system employing three acousto-optic cells and a small laser has been constructed that transforms a stereo image, provided by two identical cameras, into a range-sensitive translation of an optical beam.
In this second of a series of papers, we present optical architectures for the encryption of signals or imagery that are defined over discrete domains. As in Part 1, we emphasize the mapping of candidate algorithms and describe encryption transforms in terms of image algebra (IA) expressions, as well as notation specific to this study, both of which were reviewed in Part 1. The feasibility of our algorithms is verified by presenting schematic optical architectures that implement the corresponding IA expressions, together with pertinent analyses. In particular, we discuss the optical implementation of data encryption via mono- and polyalphabetic substitutions, transpositional and polygraphic ciphers, vector quantization, and DES (data encryption standard). Analyses and discussion emphasize computational cost inclusive of propagation time, as well as the information loss expected from physical devices such as spatial light modulators and beam deflectors.