In this paper, we present an implementation of the IDEA algorithm for image encryption. The image encryption is incorporated into the compression algorithm for transmission over a data network. In the proposed method, embedded wavelet zero-tree coding is used for image compression. Experimental results show that the proposed scheme enhances data security and reduces the network bandwidth required for video transmission. Both a software implementation and a system architecture for a hardware implementation of the IDEA image encryption algorithm based on Field Programmable Gate Array (FPGA) technology are presented.
Secure transmission of information is an important aspect of modern telecommunication systems. Data encryption is applied in several contexts in which privacy is fundamental, e.g., in modern mobile networks. In this work, a stream cipher based on discrete-time nonlinear dynamic systems is proposed. The Hénon map is used to generate a chaotic signal. Its samples are quantized and processed to produce a data sequence that is as uncorrelated as possible. The proposed scheme demonstrates high sensitivity to the parameters of the map as well as to the initial conditions. The resulting binary sequence is used to mask the stream of information bits.
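As a rough illustration of this kind of scheme, the sketch below iterates the Hénon map, thresholds its samples into a binary keystream, and XORs that keystream with the information bits. The map parameters, burn-in length, and zero-threshold quantizer are illustrative assumptions, not the authors' exact design.

```python
import numpy as np

def henon_keystream(n_bits, a=1.4, b=0.3, x0=0.1, y0=0.3, burn_in=1000):
    """Generate a binary keystream by thresholding iterates of the Henon map.

    The classical parameters a=1.4, b=0.3 give chaotic behaviour; the burn-in
    discards transients so the stream depends sensitively on (x0, y0).  The
    simple sign quantizer here is only a stand-in for the decorrelating
    processing described in the paper.
    """
    x, y = x0, y0
    bits = np.empty(n_bits, dtype=np.uint8)
    for _ in range(burn_in):
        x, y = 1.0 - a * x * x + y, b * x
    for i in range(n_bits):
        x, y = 1.0 - a * x * x + y, b * x
        bits[i] = 1 if x > 0.0 else 0          # coarse quantization of the chaotic signal
    return bits

def mask(message_bits, key_bits):
    """Stream-cipher masking: XOR the information bits with the keystream."""
    return np.bitwise_xor(message_bits, key_bits)

# Encrypting and decrypting with the same (x0, y0) recovers the message.
msg = np.random.randint(0, 2, 64).astype(np.uint8)
ks = henon_keystream(msg.size, x0=0.123456, y0=0.654321)
cipher = mask(msg, ks)
assert np.array_equal(mask(cipher, ks), msg)
```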
The extension of gray-level watermarking to the color case is one of the open issues watermarking researchers are still facing. To get rid of the correlation among image color bands, a new approach is proposed in this paper which is based on the decorrelation property of the Karhunen-Loeve Transform (KLT). First, the KLT is applied to the RGB components of the host image; then watermarking is performed independently in the DFT domain of the KL-transformed bands. In order to preserve watermark invisibility, embedding is achieved by modifying the magnitude of mid-frequency DFT coefficients according to an additive-multiplicative rule. Different weights are used for the three KL bands to further enhance invisibility. On the decoder side, KL decorrelation is exploited to optimally detect the watermark presence. More specifically, by relying on Bayes statistical decision theory, the probability of missing the watermark is minimized subject to a fixed false-detection rate. Based on the Neyman-Pearson criterion, the watermark presence is revealed by comparing a likelihood function against a threshold: if the former exceeds the latter, the decoder decides that the watermark is present; otherwise, the hypothesis is rejected. Experimental results are presented which prove the robustness of the algorithm against the most common image manipulations, and its superior performance with respect to conventional techniques based on luminance watermarking.
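The following sketch illustrates the embedding side only, under stated assumptions: a per-image 3x3 KLT computed from the band covariance, a mid-frequency annulus selected by the illustrative cutoffs f_lo and f_hi, and the additive-multiplicative rule |F'| = |F|(1 + alpha*w). The per-band weights and the Neyman-Pearson detector are omitted, and the final real-part step glosses over the conjugate-symmetry bookkeeping a full implementation would perform.

```python
import numpy as np

def klt_bands(rgb):
    """Decorrelate the RGB bands with a per-image Karhunen-Loeve transform."""
    h, w, _ = rgb.shape
    X = rgb.reshape(-1, 3).astype(np.float64)
    mean = X.mean(axis=0)
    _, V = np.linalg.eigh(np.cov((X - mean).T))   # eigenvectors of the 3x3 band covariance
    return ((X - mean) @ V).reshape(h, w, 3), V, mean

def embed_band(band, key, alpha, f_lo=0.10, f_hi=0.40):
    """Additive-multiplicative embedding on mid-frequency DFT magnitudes:
    |F'| = |F| * (1 + alpha * w), with w a key-dependent pseudo-random mark."""
    F = np.fft.fft2(band)
    mag, phase = np.abs(F), np.angle(F)
    fy = np.fft.fftfreq(band.shape[0])[:, None]
    fx = np.fft.fftfreq(band.shape[1])[None, :]
    radius = np.hypot(fy, fx)
    sel = (radius > f_lo) & (radius < f_hi)       # mid-frequency annulus (illustrative cutoffs)
    w = np.zeros_like(mag)
    w[sel] = np.random.default_rng(key).standard_normal(int(sel.sum()))
    marked_mag = mag * (1.0 + alpha * w)
    # NOTE: taking the real part ignores the Hermitian-symmetry handling
    # a complete embedder would need.
    return np.real(np.fft.ifft2(marked_mag * np.exp(1j * phase)))
```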
Multilevel Secure (MLS) systems require mandatory access control (MAC) including a linear lattice (total order) for information flow security. This paper describes an architecture for MLS networking that provides for MAC through key distribution. Tradeoffs in different communication modes are described, with attendant mechanisms.
The Layered Service Provider (LSP) is a mechanism available in Microsoft Windows 95 and Windows 98 to insert a protocol layer between the Winsock library calls and the transport layer of the network protocol stack. This paper discusses the use of encryption at the LSP to provide for security on wireless LANs that is transparent to the applications. Use of the LSP allows similarly transparent cryptographic isolation over any medium that may be accessed by the network protocol stack. Hardware-based cryptography in the form of Fortezza cards was used for this project, but the approach works just as well with software-based cryptography. The system was developed jointly by teams at the University of Florida in its Integrated Process and Product Design (IPPD) course and a liaison engineer at Raytheon Systems Division.
The location of an electromagnetic emitter is commonly estimated by intercepting its signal and then sharing the data among several platforms. Doing this in a timely fashion requires effective data compression. Previous data compression efforts have focused on minimizing the mean-squared error (MSE) due to compression. However, this criterion is likely to fall short because it fails to exploit how the signal's structure impacts the parameter estimates. Because TDOA accuracy depends on the signal's RMS bandwidth, compression techniques that can significantly reduce the amount of data while negligibly impacting the RMS bandwidth have great potential. We show that it is possible to exploit this idea by balancing the impacts of simple filtering/decimation and quantization, and we derive a criterion that determines an optimal balance between the amount of decimation and the level of quantization. This criterion is then used to show that a combination of decimation and quantization can meet requirements on data transfer time that cannot be met through quantization alone. Furthermore, when quantization-only approaches can meet the data transfer time requirement, we demonstrate that the decimation/quantization approach can lead to better TDOA accuracies. Rate-distortion curves are plotted to show the effectiveness of the approach.
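For context, the dependence of TDOA accuracy on RMS bandwidth that the abstract invokes is usually written in a CRLB-style form such as the one below; the exact constant and the definition of the effective SNR depend on the formulation, so this is a generic statement rather than the paper's criterion.

```latex
% RMS (Gabor) bandwidth of the intercepted signal s(t), with spectrum S(f):
\[
  \beta^{2} \;=\; \frac{\int f^{2}\,\lvert S(f)\rvert^{2}\,df}{\int \lvert S(f)\rvert^{2}\,df},
  \qquad
  % CRLB-style dependence of TDOA accuracy on beta and an effective SNR:
  \sigma_{\mathrm{TDOA}} \;\gtrsim\; \frac{1}{2\pi\,\beta\,\sqrt{\mathrm{SNR}_{\mathrm{eff}}}} .
\]
```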
The objective of this work is to analyze and compare H.263 resilience techniques for H.223-based real-time video transmission over narrow-band slow-fading channels. These channel conditions, which are typical of pedestrian video communications, are very critical: they require Forward Error Correction (FEC), since data retransmission is not feasible due to high network delay, and at the same time the bursty nature of the channel reduces the effectiveness of FEC techniques. In this work, two different strategies for protecting H.263 video against channel errors are considered and compared. The strategies are tested over a slow-fading wireless channel, over which the H.263 video streams, organized and multiplexed by the H.223 Multiplex Protocol, are transmitted. Both the standard FEC techniques considered by the H.223 recommendation for equal error protection of the video stream and unequal error protection (UEP) through GOB synchronization are tested. The experimental results of this comparative analysis prove the superiority of the UEP technique for H.223-based video transmission.
This paper promotes the use of near-lossless image compression and describes two DPCM schemes suitable for this purpose. The former is causal and is based on a classified linear-regression prediction followed by context-based arithmetic coding of the outcome residuals. It provides impressive performance, both with and without loss, especially on medical images. Coding times are affordable thanks to the fast convergence of training, and decoding is always performed in real time. The latter is a noncausal DPCM and relies on a modified Laplacian pyramid in which feedback of quantization errors is introduced in order to upper-bound reconstruction errors. Although the predictive method is superior at medium and high rates, the pyramid encoder wins at low rates and allows both encoding and decoding in real time. Comparisons with block-DCT JPEG show that the proposed schemes are more than competitive also in terms of rate-distortion performance.
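A minimal sketch of the error-bounding mechanism common to near-lossless DPCM is given below; it uses a trivial left/upper-neighbour predictor in place of the paper's classified linear-regression predictor and omits the context-based arithmetic coder.

```python
import numpy as np

def near_lossless_dpcm(image, delta):
    """Row-major DPCM with a uniform residual quantizer of step (2*delta + 1).

    Quantizing the integer prediction residual e to q = round(e / step) and
    reconstructing with e_hat = q * step guarantees |x - x_hat| <= delta at
    every pixel, which is the defining property of near-lossless coding.
    The simple neighbour predictor below stands in for the paper's classified
    linear-regression predictor.
    """
    img = image.astype(np.int64)
    step = 2 * delta + 1
    rec = np.zeros_like(img)
    q_idx = np.zeros_like(img)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            pred = rec[r, c - 1] if c > 0 else (rec[r - 1, c] if r > 0 else 128)
            e = img[r, c] - pred
            q = int(np.round(e / step))
            q_idx[r, c] = q                 # symbols passed to the entropy coder
            rec[r, c] = pred + q * step     # decoder-side reconstruction
    return q_idx, rec

# The peak reconstruction error never exceeds delta:
x = np.random.randint(0, 256, (32, 32))
_, x_hat = near_lossless_dpcm(x, delta=2)
assert np.max(np.abs(x - x_hat)) <= 2
```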
The increased use of power- and space-constrained embedded processors in a wide variety of autonomous imaging and surveillance applications implies increased demands on the speed of the computational resources that follow image acquisition in the processing stream. In such early-vision applications, one typically processes an entire image data stream prior to spatial downselection operations such as focus-of-attention involving area-of-interest (AOI) selection. Downselection is especially useful in the emerging technologies of spatiotemporal adaptive processing (STAP) and biomimetic automated target recognition (ATR). Here, progressive data reduction employs operations or sub-algorithms that process fewer data, but in a more involved manner, at each step of an algorithm. Implementationally, the STAP approach is amenable to embedded hardware using processors with deep pipelines and fine-grained parallelism. In Part 1 of this series of two papers, we showed how compression of an image or sequence of images could facilitate more efficient image computation or ATR by processing fewer (compressed) data. This technique, called compressive processing or compressive computation, typically utilizes fewer operations than the corresponding ATR or image processing operation over noncompressed data, and can produce space, time, error, and power (STEP) savings on the order of the compression ratio (CR). Part 1 featured algorithms for edge detection in images compressed by vector quantization (VQ), visual pattern image coding (VPIC), and EBLAST, a recently reported block-oriented high-compression transform. In this paper, we continue the presentation of theory for compressive computation of early processing operations such as morphological erosion and dilation, as well as higher-level operations such as connected component labeling. We also discuss the algorithm and hardware modeling technique that supports analysis and verification of compressive processing efficiency. This methodology emphasizes 1) translation of each image processing algorithm or operation to a prespecified compressive format, 2) determination of the operation mix M for each algorithm produced in Step 1, and 3) simulation of M on various architectural models to estimate performance. As in Part 1, algorithms are expressed in image algebra, a rigorous, concise notation that unifies linear and nonlinear mathematics in the image domain.
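A toy sketch of the compressive-computation idea for one of the operations mentioned above (binary erosion over VQ-coded imagery) follows. It processes each codebook block once and assembles the result by table lookup on the index map; inter-block boundary effects, which a full treatment must handle, are deliberately ignored here, and the function name is hypothetical.

```python
import numpy as np
from scipy.ndimage import binary_erosion

def compressive_erosion_vq(index_map, codebook, structure=np.ones((3, 3), bool)):
    """Toy 'compressive processing' of a VQ-coded binary image.

    The erosion is computed once per codebook entry (K blocks) instead of once
    per image block (N blocks, N >> K), and the output is assembled by table
    lookup on the index map.  Block-boundary effects are ignored in this sketch.
    """
    eroded_codebook = np.stack([binary_erosion(blk, structure) for blk in codebook])
    bh, bw = codebook.shape[1:]
    H, W = index_map.shape
    out = np.zeros((H * bh, W * bw), dtype=bool)
    for i in range(H):
        for j in range(W):
            out[i*bh:(i+1)*bh, j*bw:(j+1)*bw] = eroded_codebook[index_map[i, j]]
    return out
```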
The accurate, consistent measurement of laryngeal size parameters is key to the quantification of phonatory disorders from laryngeal endoscopic imagery. Unfortunately, laryngoscopic images contain a wide variety of distortions introduced by endoscope optics (e.g., barrel distortion), systematic effects such as mucus strands adhering to the endoscope lens, electronic noise in the camera or digitization hardware, and color distortions resulting from optical, camera, or digitization errors. These difficulties are compounded by representational errors introduced during image archival or telemedicine-based manipulation of endoscopic imagery, e.g., when images are compressed, stored, and then decompressed using lossy transformations. A variety of researchers, in particular Omeori et al., have studied the measurement of laryngeal parameters from a variety of image sources. Unfortunately, such analyses do not account for the effects of image compression/decompression. In this paper, previous research is extended to include estimation of errors in the measurement of parameters such as glottal gap area and maximum vocal fold length from compressed laryngoscopic imagery. Compression transforms studied include JPEG and EBLAST, a relatively recent development in high-compression image transformation for communication along low-bandwidth channels. Error analysis emphasizes preservation of spatial and greylevel information in the decompressed imagery, as well as error in parameter measurement at various compression ratios. Manual as well as automatic methods of laryngeal parameter extraction are analyzed, including techniques based on spectral restriction applied to moderate-resolution RGB imagery (320x200 pixels). The analysis presented herein represents work in progress; it is not intended as a final implementation suitable for medical diagnostic or life-critical applications, but is advanced as a phenomenological overview of measurement error in the presence of image compression in a medical imaging application.
This paper reports on a study concerning the application of error-bounded encoding to lossy image compression of ancient documents handwritten on parchments. Images are acquired in the RGB color space and transformed into the YUV color coordinate system before coding. The coding algorithm considered here, named RLP, is based on a classified DPCM enhanced by a fuzzy clustering initialization and followed by context-based statistical modeling and arithmetic coding of prediction residuals; the residuals are quantized with user-defined odd step sizes to allow rate control with a minimum peak error over the whole image, so as to exactly limit local distortions. Each YUV component is coded separately; after decoding, images are transformed back to the RGB color space and compared with the originals in order to quantify distortions. A relationship bounding the peak errors in the RGB color space once the peak error is fixed in the YUV color space is derived. An algorithm originally designed for estimating signal-dependent noise parameters, used here to obtain useful information about the images of the documents, is also reported in the paper. The performance of the coding method is superior to that of conventional DPCM schemes thanks to its flexibility and robustness to changes in the type of image. For the compression ratios required by this application the gain of RLP over JPEG is substantial: nearly 2 dB and 5 dB in PSNR for compression ratios of 10 and 5, respectively.
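The kind of relationship the abstract refers to follows directly from the linearity of the YUV-to-RGB conversion; a generic statement (not the paper's exact constants) is:

```latex
% Let RGB = M (YUV) be the linear inverse colour transform applied after decoding,
% and let the coding errors on the YUV components satisfy
% |dY| <= delta_Y, |dU| <= delta_U, |dV| <= delta_V.  Then, for each RGB component,
\[
  \lvert \Delta C_i \rvert
  \;=\; \Bigl\lvert \sum_{j \in \{Y,U,V\}} m_{ij}\, \Delta_j \Bigr\rvert
  \;\le\; \sum_{j \in \{Y,U,V\}} \lvert m_{ij} \rvert\, \delta_j ,
  \qquad i \in \{R, G, B\}.
\]
```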
Algorithms for corner detection, connected component extraction, and document segmentation are developed and implemented for JBIG-encoded document images. These algorithms are based on the JBIG context model and progressive transmission properties. Since the core idea of the algorithms is to use the lowest-resolution layer of a JBIG document image as the processing object, the time savings obtained by using these algorithms, compared to conventional algorithms that operate on fully decompressed images, are evident. Experimental results based on the eight standard ITU images reveal that, on average, our algorithms run faster than conventional algorithms by one to two orders of magnitude. The idea can also be extended to other coding schemes based on progressive coding and a context model. These algorithms have applications in digital libraries, digital media storage, and other domains in which automatic and fast document segmentation is needed.
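A brief sketch of the core idea, under the assumption that the decoded lowest-resolution JBIG layer is available as a binary array, is shown below; component bounding boxes found on the small layer are mapped back to full-resolution coordinates by the resolution-reduction factor.

```python
import numpy as np
from scipy.ndimage import label, find_objects

def segment_from_lowres(lowres_layer, scale):
    """Connected-component extraction on the lowest-resolution JBIG layer.

    `lowres_layer` is the decoded bottom layer as a boolean array (True = black);
    `scale` is the resolution-reduction factor between that layer and the
    full-resolution document (e.g. 2**d for d differential layers).  Working on
    the small layer is what yields the reported speedups; the bounding boxes are
    then mapped back to full-resolution coordinates.
    """
    labels, n = label(lowres_layer)
    boxes = []
    for sl in find_objects(labels):
        rows, cols = sl
        boxes.append((rows.start * scale, cols.start * scale,
                      rows.stop * scale, cols.stop * scale))
    return n, boxes
```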
Wavelet transforms are commonly used in signal processing to identify local signals in both the time and frequency domains. The application described in this paper uses this concept to show that wavelets of similar DNA sequences converge whereas wavelets of dissimilar DNA sequences diverge. To demonstrate this conclusion, several DNA sequences from different organisms were retrieved from Johns Hopkins University's Genome Database. Statistical tests were applied to these sequences to measure their degree of similarity. Subsequently, a series of wavelet transforms was applied to the DNA sequences. As a result, the wavelet transforms were found to converge on sequences containing identical proteins and to diverge on sequences containing dissimilar ones. A description of the algorithm and statistical tests is provided in addition to analytical results. Application of this wavelet analysis technique has been shown to be more efficient than the standard homology search algorithms currently in use. The algorithm has O(N log N) time complexity, whereas standard search algorithms have O(N^2) time complexity. Hence, wavelet transforms can be used to quickly locate or match common protein coding in DNA sequences from large medical databases.
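An illustrative sketch of this kind of analysis is given below; the numeric base encoding, the Haar wavelet, and the coarse-coefficient distance are assumptions chosen for brevity and are not the paper's exact mapping or statistical tests.

```python
import numpy as np
import pywt

# Illustrative numeric encoding of the four bases.
BASE_CODE = {"A": 0.0, "C": 1.0, "G": 2.0, "T": 3.0}

def dna_wavelet(seq, wavelet="haar", level=4):
    """Multi-level discrete wavelet decomposition of a numerically coded DNA sequence."""
    x = np.array([BASE_CODE[b] for b in seq.upper()], dtype=float)
    return pywt.wavedec(x, wavelet, level=level)

def wavelet_distance(seq1, seq2, level=4):
    """Compare two sequences through their coarse (approximation) coefficients.

    Similar sequences should give nearly identical coarse coefficients
    ('convergence'); dissimilar sequences should not ('divergence').
    """
    a1 = dna_wavelet(seq1, level=level)[0]
    a2 = dna_wavelet(seq2, level=level)[0]
    n = min(a1.size, a2.size)
    return float(np.linalg.norm(a1[:n] - a2[:n]) / n)
```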
The transmission and storage of digital video currently requires more bandwidth than is typically available. Emerging applications such as video-on-demand, web cameras, and collaborative tools with video conferencing are pushing the limits of the transmission media to provide video to the desktop computer. Lossy compression has succeeded in meeting some of the video demand, but it suffers from artifacts and low resolution. This paper introduces a content-dependent, frame-selective compression technique developed wholly as a preconditioner that can be used with existing digital video compression techniques. The technique depends heavily on a priori knowledge of the general content of the video, using that knowledge to make informed decisions about which frames to select for storage or transmission. The velocital information feature of each frame is calculated to determine the frames with the most active changes. The velocital information feature, along with a priori knowledge of the application, allows prioritization of the frames. Frames are assigned priority values, and the higher-priority frames are selected for transmission based on the available bandwidth. The technique is demonstrated for two applications: an airborne surveillance application and a worldwide web camera application. The airborne surveillance application acquires digital infrared video of targets at a standard frame rate of 30 frames per second, but the imagery suffers from infrared sensor artifacts and spurious noise. The web camera application selects frames at a slow rate but suffers from artifacts due to lighting and reflections. The results of using content-dependent, frame-selective video compression show an improvement in image quality along with reduced transmission bandwidth requirements.
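The sketch below treats the velocital information feature as the mean absolute inter-frame difference, which is only a plausible proxy for the paper's definition, and selects the highest-scoring frames under a fixed frame budget.

```python
import numpy as np

def frame_activity(frames):
    """Per-frame activity score: mean absolute difference from the previous frame.

    This is a simple stand-in for the paper's 'velocital information' feature;
    frames with the most active changes receive the largest scores.
    """
    frames = np.asarray(frames, dtype=np.float64)          # shape (T, H, W)
    diffs = np.abs(np.diff(frames, axis=0)).mean(axis=(1, 2))
    return np.concatenate(([0.0], diffs))                  # first frame has no predecessor

def select_frames(frames, budget):
    """Keep the `budget` highest-priority frames, returned in temporal order."""
    scores = frame_activity(frames)
    keep = np.sort(np.argsort(scores)[::-1][:budget])
    return keep, [frames[i] for i in keep]
```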
Surveillance imaging applications on small autonomous imaging platforms present challenges of a highly constrained power supply and form factor, with potentially demanding specifications for target detection and recognition. Absent significant advances in image processing hardware, such power and space restrictions can imply severely limited computational capabilities. This holds especially for compute-intensive algorithms with high-precision fixed- or floating-point operations in deep pipelines that process large data streams. Such algorithms tend not to be amenable to small or simplified architectures involving (for example) reduced precision, reconfigurable logic, low-power gates, or energy-recycling schemes. In this series of two papers, a technique of reduced-power computing called compressive processing (CXP) is presented and applied to several low- and mid-level computer vision operations. CXP computes over compressed data without resorting to intermediate decompression steps. Because compression leaves fewer data, CXP requires fewer operations than computing over the corresponding uncompressed image. In several cases, CXP techniques yield speedups on the order of the compression ratio. Where lossy high-compression transforms are employed, it is often possible to use approximations to derive CXP operations that yield increased computational efficiency via a simplified mix of operations. The reduced work requirement, which follows directly from the presence of fewer data, also implies a reduced power requirement, especially if simpler operations are involved in compressive versus noncompressive operations. Several image processing algorithms (edge detection, morphological operations, and component labeling) are analyzed in the context of three compression transforms: vector quantization (VQ), visual pattern image coding (VPIC), and EBLAST. The latter is a lossy high-compression transform developed for underwater communication of image data. The theory is based primarily on our previous research in compressive target detection and recognition. The modeling technique that supports analysis and verification of claims emphasizes 1) translation of each algorithm to a given compressive format, 2) determination of the operation mix M for each algorithm produced in Step 1, and 3) simulation of M on various architectural models to estimate performance. Where possible, algorithms are expressed in image algebra, a rigorous, concise notation that unifies linear and nonlinear mathematics in the image domain. Image algebra has been implemented on a wide variety of workstations and parallel processors, thus increasing the portability of the algorithms presented herein.
The implementation of an image compression transform on one or more small, embedded processors typically involves stringent constraints on power consumption and form factor. Traditional methods of optimizing compression algorithm performance typically emphasize joint minimization of space and time complexity, often without significant consideration of arithmetic accuracy or power consumption. However, small autonomous imaging platforms typically require joint optimization of space, time, error (or accuracy), and power (STEP) parameters, which the authors call STEP optimization. In response to implementational constraints on space and power consumption, the authors have developed systems and techniques for STEP optimization that are based on recent research in VLSI circuit design, as well as extensive previous work in system optimization. Building on the authors' previous research in embedded processors as well as adaptive or reconfigurable computing, it is possible to produce system-independent STEP optimization that can be customized for a given set of system-specific constraints. This approach is particularly useful when algorithms for image and signal processing (ISP), computer vision (CV), or automated target recognition (ATR), expressed in a machine-independent notation, are mapped to one or more heterogeneous processors (e.g., digital signal processors (DSPs), SIMD mesh processors, or reconfigurable logic). Following a theoretical summary, this paper illustrates various STEP optimization techniques via case studies, for example, real-time compression of underwater imagery on board an autonomous vehicle. Optimization algorithms are taken from the literature, and error profiling/analysis methodologies developed in the authors' previous research are employed. This yields a more rigorous basis for the simulation and evaluation of compression algorithms on a wide variety of hardware models. In this study, image algebra is employed as the notation of choice. Developed under DARPA and Air Force sponsorship at the University of Florida, image algebra is a rigorous, concise notation that unifies linear and nonlinear mathematics in the image domain. Image algebra has been implemented on numerous workstations, parallel processors, and embedded processors, several of which are modeled in this study.
In this paper, a fast algorithm is developed that reduces the search space for fractal image coding. The basic idea is to classify the domain pool into three classes: a non-edged class, a horizontal/vertical class, and a diagonal class. For each given range block, its class is computed first; then one only has to search the corresponding class in the domain pool to find the best match. The classification is performed using only the lowest-frequency coefficients of the given block in the horizontal and vertical directions, where the frequency data are computed with the Discrete Cosine Transform (DCT). The main advantages of this classification scheme are that the classification mechanism is simple and the DCT algorithm is easy to implement. Simulations show that the proposed fast algorithm is about two times faster than the baseline method, while the quality of the reconstructed image is almost the same.
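A minimal sketch of this classification step is given below; the smoothness and dominance thresholds are illustrative placeholders, not the values used in the paper.

```python
import numpy as np
from scipy.fft import dctn

def classify_block(block, smooth_thresh=8.0, ratio_thresh=2.0):
    """Classify an image block using its two lowest AC DCT coefficients.

    F[0, 1] responds to horizontal variation and F[1, 0] to vertical variation.
    If both are small the block is smooth; if one clearly dominates the block
    is a horizontal/vertical edge; otherwise it is treated as diagonal.
    Returns 'non-edged', 'horizontal/vertical', or 'diagonal'.
    """
    F = dctn(block.astype(np.float64), norm="ortho")
    h, v = abs(F[0, 1]), abs(F[1, 0])
    if max(h, v) < smooth_thresh:
        return "non-edged"
    if h > ratio_thresh * v or v > ratio_thresh * h:
        return "horizontal/vertical"
    return "diagonal"

# Searching only the matching class of the domain pool shrinks the search space:
# candidates = domain_pool[classify_block(range_block)]
```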
As an important analysis tool, the wavelet transform has driven great progress in image compression coding since Daubechies constructed compactly supported orthogonal wavelets and Mallat presented a fast pyramid algorithm for wavelet decomposition and reconstruction. In order to raise the compression ratio and improve the visual quality of the reconstruction, it becomes very important to find a wavelet basis that fits the human visual system (HVS). The Marr wavelet, as is well known, is not compactly supported, so it is not suitable for practical image compression coding. In this paper, a new method is provided to construct compactly supported biorthogonal wavelets based on the human visual system: we employ a genetic algorithm to construct compactly supported biorthogonal wavelets that approximate the modulation transfer function of the HVS. The newly constructed wavelet is applied to image compression coding in our experiments. The experimental results indicate that the visual quality of the reconstruction with the new wavelet is equivalent to that of other compactly supported biorthogonal wavelets at the same bit rate. It gives good reconstruction performance, especially when used for texture image compression coding.
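A skeleton of the fitness idea is sketched below: candidate filter taps are scored by how closely their magnitude response tracks an HVS contrast-sensitivity curve (the Mannos-Sakrison model is used here as a stand-in), under an assumed mapping of digital frequency to cycles per degree. The biorthogonality and perfect-reconstruction constraints that the paper must enforce are not implemented in this sketch.

```python
import numpy as np

def hvs_mtf(f):
    """Mannos-Sakrison contrast-sensitivity curve, f in cycles/degree (illustrative)."""
    return 2.6 * (0.0192 + 0.114 * f) * np.exp(-(0.114 * f) ** 1.1)

def fitness(taps, f_max=30.0, n=256):
    """Score how closely a candidate filter's magnitude response tracks the HVS MTF.

    Digital frequencies are mapped linearly onto [0, f_max] cycles/degree under an
    assumed viewing geometry; both curves are normalised to peak 1 before comparison.
    Biorthogonality / perfect-reconstruction constraints are NOT enforced here.
    """
    H = np.abs(np.fft.rfft(taps, 2 * n))[:n]
    target = hvs_mtf(np.linspace(0.0, f_max, n))
    return -np.sum((H / H.max() - target / target.max()) ** 2)

def evolve(n_taps=9, pop_size=40, generations=200, sigma=0.05, seed=0):
    """Minimal genetic search over filter taps: keep the best half, mutate to refill."""
    rng = np.random.default_rng(seed)
    pop = rng.standard_normal((pop_size, n_taps))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]
        children = parents + sigma * rng.standard_normal(parents.shape)
        pop = np.vstack([parents, children])
    return pop[np.argmax([fitness(ind) for ind in pop])]
```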
The efficient computation of high-compression encoding transforms is key to the transmission of moderate- or high-resolution imagery along low- to moderate-bandwidth channels. Previous approaches to image compression have employed low-compression transforms for lossless encoding, as well as moderate compression for archival storage. Such algorithms are usually block-structured and thus tend to be amenable to computation on array processors, particularly embedded SIMD meshes. These architectures are important for fast processing of imagery obtained from airborne or underwater surveillance platforms, particularly in the case of underwater autonomous vehicles, which tend to be severely power-limited. Recent research in high-compression image encoding has yielded a variety of hierarchically structured transforms such as EPIC, SPIHT, and wavelet-based compression algorithms, which unfortunately do not map efficiently to embedded parallel processors with small memory models. In response to this situation, the EBLAST transform was developed to facilitate transmission of underwater imagery along noisy, low-bandwidth acoustic communication channels. This second part of a two-part series presents implementation issues and experimental results from the application of EBLAST to a database of underwater imagery, as well as to common reference images such as Lena and Baboon. It is shown that the range of EBLAST compression ratios (100:1 < CR < 250:1) can be maintained with a mean-squared error (MSE) of less than five percent of the full greyscale range, with computational efficiency that facilitates video-rate compression with existing off-the-shelf technology at frame sizes of 512x512 pixels or less. Additional discussion pertains to postprocessing steps that can render an EBLAST-decompressed image more realistic visually, in support of human target cueing.