Understanding how atmospheric turbulence is distributed along a path helps in effective turbulence compensation and mitigation. Phase-based techniques for measuring turbulence have potential advantages over long ranges, since they do not suffer from the saturation issues that affect irradiance-based techniques. In earlier work, we demonstrated a method to extract turbulence information along a path using time-lapse imagery of an LED array captured by a pair of spatially separated cameras. By measuring the differential motion of pairs of LEDs of varying separations, sensed by a single camera or between cameras, turbulence profiles could be obtained. With just a pair of cameras, however, the entire path could not be profiled. Using multiple spatially separated cameras improves both the profiling resolution and the fraction of the path over which profiling is possible. This idea has been demonstrated in the present work using a camera bank comprising 5 identical cameras viewing an arrangement of 10 nonuniformly spaced LEDs over a slant path. The differential tilt variances measured at a single camera and between all pairs of cameras have been used to obtain turbulence information. Profiling with elevated targets in this way will ultimately help improve understanding of how turbulence varies with altitude in the surface layer.
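A minimal sketch of the inversion step implied by this approach, not the authors' code: measured differential-tilt variances for many LED/camera pairings are related to the Cn^2 in discrete path slabs by a weighting matrix, which is then inverted with a nonnegativity constraint. The weighting matrix and data below are synthetic placeholders standing in for the real differential-tilt-variance weighting functions.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

n_pairs, n_slabs = 40, 8                          # LED/camera pairings, path slabs
W = rng.uniform(0.1, 1.0, (n_pairs, n_slabs))     # placeholder weighting functions
cn2_true = rng.uniform(1e-16, 1e-14, n_slabs)     # "true" profile for the demo

variances = W @ cn2_true                          # forward model: measured variances
variances += 0.01 * variances * rng.standard_normal(n_pairs)   # measurement noise

# Nonnegative least squares keeps each slab's Cn^2 physically >= 0.
cn2_est, _ = nnls(W, variances)
print(np.round(cn2_est / 1e-15, 2))
print(np.round(cn2_true / 1e-15, 2))
```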
This effort characterizes proper sampling of laser speckle in wave-optics simulations, with an emphasis on active imagers in outdoor environments. Modeling of performance degradations induced by speckle is critical in the design of such devices. We expose tradeoffs between sampling conditions in multiple planes of interest, namely the object, pupil and focal planes of an imaging system. The goal of our analysis is to develop an optimized numerical tradespace that models the underlying physics of speckle and turbulence with high fidelity. We begin by showing that speckle statistics are relatively straightforward to produce in the case of vacuum propagation. Then by propagating through different strengths of turbulence, we demonstrate how sampling requirements can become much more difficult to satisfy. We pay particular attention to the problem of sufficiently sampling a target object without subjecting it to anisoplanatism. As a way of overcoming such challenges, we propose and test an optimization routine that defines acceptable simulation parameters based on user-defined physical parameters. Successful implementation of this approach streamlines the design process for applications that involve active target tracking and coherent imaging through turbulence.
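A rough illustration of two of the sampling checks discussed above, under simplifying assumptions rather than the paper's actual tradespace tool: the pupil-plane grid should resolve the smallest speckle (roughly lambda*z/D_obj) with at least two samples, and the object's angular extent relative to the isoplanatic angle flags likely anisoplanatism.

```python
def speckle_sampling_ok(wavelength, z, d_obj, pupil_spacing):
    """Check that the pupil-plane grid Nyquist-samples the characteristic speckle."""
    speckle_size = wavelength * z / d_obj        # characteristic speckle width [m]
    return pupil_spacing <= speckle_size / 2.0

def anisoplanatism_flag(obj_extent, z, theta0):
    """True if the target's angular extent exceeds the isoplanatic angle."""
    obj_angle = obj_extent / z                   # angular size of the target [rad]
    return obj_angle > theta0

# Example numbers (assumed, for illustration only).
print(speckle_sampling_ok(1.064e-6, 2e3, 0.5, 1.5e-3))
print(anisoplanatism_flag(0.5, 2e3, 10e-6))
```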
Sonic anemometers are used to study the outer scale in near ground level turbulence. Turbulence is expected to obey a Kolmogorov power spectrum within some inertial range, where the temperature or index of refraction fluctuations decrease as the inverse 11/3rds power of the spatial wavenumber. Below this inertial range (that is for sufficiently small spatial wavenumbers, or equivalently sufficiently large scale sizes) the form of the power spectrum isn’t predicted by theory, but it is expected to roll off. A levelling off of the power spectrum at low spatial frequencies corresponds to a levelling off of the structure function at large spatial separations, and this is the signal sought in the data. Near the ground there is some evidence the outer scale size may be as small as the height above ground. Sonic anemometer data was collected in the summer of 2019 in conjunction with optical turbulence experiments. These experiments showed good agreement between different ways of monitoring turbulence. In these experiments, the sonic anemometers were mostly mounted 2.64 meters above the ground. In this work, the anemometer data is being revisited to study the outer scale. Outer scale effects are quite subtle with optical techniques, which are arranged to be most sensitive to variations in index of refraction within the inertial range precisely in order to avoid inner and outer scale effects. Sonic anemometry usually achieves this by including only nearest neighbor measurements in turbulence estimation, but here we examine the variance of temperature differences across a wide range of baselines in order to study the structure function itself.
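A minimal sketch of the structure-function estimate described here (not the experiment's processing code): compute the variance of temperature differences for every sensor baseline, D_T(r) = <(T(x+r) - T(x))^2>, and look for a flattening at large r as the outer-scale signature. Positions and temperatures below are synthetic stand-ins.

```python
import numpy as np

def structure_function(positions, temperatures):
    """positions: (n_sensors,) [m]; temperatures: (n_samples, n_sensors) [K]."""
    baselines, d_t = [], []
    n = len(positions)
    for i in range(n):
        for j in range(i + 1, n):
            r = abs(positions[j] - positions[i])
            d_t.append(np.mean((temperatures[:, j] - temperatures[:, i]) ** 2))
            baselines.append(r)
    order = np.argsort(baselines)
    return np.asarray(baselines)[order], np.asarray(d_t)[order]

# Synthetic usage; within the inertial range the measured curve would be compared
# against the Kolmogorov r^(2/3) slope to see where it rolls off.
positions = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])
temps = np.random.default_rng(1).standard_normal((5000, positions.size))
r, d_t = structure_function(positions, temps)
```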
In the typical analysis of aero-optical wavefront data, the three lowest-order spatial modes are removed from the experimental data. These three spatial modes (tip, tilt, and piston) are commonly corrupted by mechanical disturbances. In this work, an algorithm was developed that takes advantage of the advective nature of aberrations to compensate for the tip, tilt, and piston removal common in experiments. The algorithm is able to recover the aero-optical component of the jitter and provide time series of global tilt free of mechanical disturbances. This algorithm is called the stitching method. Experiments were conducted in Notre Dame's Tri-sonic Wind Tunnel (TWT) Facility. Optical wavefront measurements were conducted on a Mach 0.6/0.1 shear layer. Voice coil actuators were placed on the shear layer splitter plate to regularize the shear layer. The RMS of the aero-optical jitter predicted by the stitching method matched well with modeled results. Since the stitching method produces full time series of global tilt, energy spectra were also computed and presented. This information can be used by systems designers to benchmark fast steering mirrors for use in airborne directed energy systems.
Optical measurements of a hemispherical turret were conducted in both wind tunnel and airborne testing environments to measure the aero-mechanical jitter imposed onto a laser beam. A hemispherical turret was positioned in the freestream flow at various protrusion distances, Mach numbers, and azimuthal angles. Lasers and accelerometers were used to quantify the mechanical contamination imposed onto the beam by the fluid-structure interaction between the incoming freestream flow and the protruding turret body. The results from wind tunnel and in-flight testing were compared and were found to differ both quantitatively and qualitatively. Possible reasons for the discrepancies between the two testing campaigns are also discussed.
Surface pressure measurements were taken on a hemisphere-on-cylinder turret in a wind tunnel using pressure-sensitive paint and fast-response pressure transducers. Four turret protrusion distances were tested to study the characteristics of the unsteady pressure field on the backside and in the wake of the turret. Proper orthogonal decomposition was used to identify the dominant spatial surface pressure modes acting on the turret in this parametric study. It was found that the further the turret protruded into the freestream flow, the more the surface pressure field became dominated by spanwise antisymmetric pressure distributions resulting from antisymmetric vortex shedding at a normalized frequency of approximately St_D = 0.2. For the partial-hemisphere case, this antisymmetric vortex shedding was essentially absent, suggesting that at some protrusion distance the surface pressure environment on the turret fundamentally changes. The normalized net force RMS was calculated on the turret for each configuration; the greater the turret protrusion, the greater the net force acting in the spanwise direction.
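A minimal sketch of snapshot proper orthogonal decomposition via the SVD, the standard formulation of the modal analysis described above (not the authors' specific implementation); the pressure snapshots here are random filler.

```python
import numpy as np

rng = np.random.default_rng(0)
P = rng.standard_normal((2000, 300))          # rows: time snapshots; cols: pressure taps
P -= P.mean(axis=0)                           # remove the temporal mean field

# Economy SVD: rows of Vt are spatial POD modes, U*S gives their time coefficients.
U, S, Vt = np.linalg.svd(P, full_matrices=False)
energy_fraction = S**2 / np.sum(S**2)         # modal energy ranking
spatial_modes = Vt                            # mode k is Vt[k, :]
time_coeffs = U * S                           # column k is the coefficient of mode k
```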
Wavefront compensation techniques that do not require wavefront sensors are in demand for on-orbit telescopes on Earth observation satellites. This is especially true for segmented or sparse-aperture telescopes that could realize unprecedentedly high angular resolution. A promising wavefront-sensorless approach is the stochastic parallel gradient descent (SPGD) algorithm. Wavefront correction by SPGD optimization relies only on the intensity data in the acquired image. However, many previous observation targets have been point light sources, not the extended ground scenes generally acquired by Earth observation satellites. This paper derives an efficient wavefront control method for imaging systems to enable fast SPGD optimization. Wavefront compensation has been demonstrated experimentally on extended objects in single-aperture optics, in which a microelectromechanical system deformable mirror controls the wavefront. Subsequent numerical simulations are reported for multi-aperture imaging systems. The paper also discusses a method to reduce the computational cost of SPGD optimization.
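A minimal SPGD sketch in its textbook form (assumptions only, not the paper's controller): maximize an image-quality metric J(u) over deformable-mirror actuator commands u by applying paired random perturbations and nudging u along the estimated gradient. The metric used here is a toy quadratic so the sketch runs without an optical model.

```python
import numpy as np

def spgd(measure_metric, n_act, gain=0.5, perturb=0.05, n_iter=500, seed=0):
    rng = np.random.default_rng(seed)
    u = np.zeros(n_act)                              # actuator command vector
    for _ in range(n_iter):
        du = perturb * rng.choice([-1.0, 1.0], n_act)    # Bernoulli perturbation
        j_plus = measure_metric(u + du)
        j_minus = measure_metric(u - du)
        u += gain * (j_plus - j_minus) * du          # stochastic gradient ascent step
    return u

# Toy example: the "metric" peaks when the commands match a target shape.
target = np.linspace(-1, 1, 32)
best = spgd(lambda u: -np.sum((u - target) ** 2), n_act=32)
```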
A common non-mechanical method for generating wide-angle, high-resolution 3D images is to use two multi-megapixel cameras to capture wide field of view (FOV) stereoscopic images. Such images, when viewed by a human, provide detailed 3D information that can easily be used to plot a course or avoid an obstacle. For a robot or autonomous vehicle, however, it takes considerable computation to convert the imagery into data that can be used for navigation and control. This processing demand can be an issue for small platforms needing real-time 3D data in a dynamic operating environment. With 3D time-of-flight (TOF) sensors (indirect TOF cameras and lidars), depth information can be acquired with minor processing, but high resolution over a large angle is not readily and inexpensively achieved without steering the illumination source, or receiver, or both. Mechanical beam steering systems (including MEMS) have been the answer to this problem for many years, but a truly no-moving-parts solution, using polarization gratings (PGs) combined with liquid crystal (LC) switches,1 offers some unique features while reducing costs when scaled to large volume manufacturing. This paper discusses the advancement and demonstration of wide-angle, large-aperture PG-based scanners incorporated into TOF sensors to improve resolution and range.
The Hybrid Wave-front Sensor (HyWFS) has previously been developed as a combination of a Pyramid Wave-front Sensor (PyWFS) and a Shack-Hartmann Wave-front Sensor (SHWFS) to capture the desirable properties of each when operated with an unresolved guide beacon. A pyramid prism placed at a focus divides the beacon light into four beams. At a reimaged pupil, a lenslet array creates four separate spot patterns on a detector. The measured intensities may be analyzed both in the manner of a PyWFS and a SHWFS, generating two approximations of the wave front that together achieve the high sensitivity of the PyWFS and the high dynamic range of the SHWFS. Given its inherent sensitivity, calibrating the HyWFS is challenged by the effects of local vibrations and air currents in the laboratory. To overcome this problem, a prototype HyWFS has been built that features a closed loop tip-tilt control sub-system. The design includes additional pupil planes, a Fast Steering Mirror (FSM), and a tip-tilt sensor. The prototype HyWFS will be calibrated with low-order Zernike polynomials at a variety of amplitudes to confirm the sensor’s sensitivity, dynamic range, and the effectiveness of the tip-tilt control loop. The effect of the tip-tilt loop will be quantified by comparing calibration qualities while the loop is active and inactive. The residual wave-front error is anticipated to decrease with active tip-tilt control in both the PyWFS mode and the SHWFS mode. With improved accuracy, the HyWFS is another step closer to on sky operation in a closed loop adaptive optics system.
We present design, fabrication, and characterization results for a novel multi-spectral imaging lens. Our lens design combines a catadioptric geometry with a reflective tunable cholesteric liquid crystal cell. The lens images in a narrow (~50 nm) spectral band and can be continuously voltage-tuned over a broad wavelength range (>400 nm). The lens works with circularly polarized light. A cholesteric liquid crystal cell reflects light if the incident circular polarization has the correct handedness and the wavelength closely matches the liquid crystal pitch; otherwise, the light is transmitted unaffected. Our compact optical design converts this reflective mechanism into a transmissive one for the purpose of imaging. In addition, we use corrective refractive lenses to optimize imaging performance over a wide field of view. The traditional approach would be to use a tunable birefringent filter (e.g., a Solc or Lyot filter) in conjunction with a broadband imaging lens, with the birefringent filter itself composed of multiple liquid crystal cells and polarizers; tunability is provided by adjusting the optical retardation of nematic liquid crystal cells through application of appropriate voltages. Our design uses a single liquid crystal cell and a single polarizer, is inherently high in optical transmission, and is significantly less complex and thus potentially lower in cost. Applications include forensic imaging, multispectral aerial surveys, dermatology, and medical microscopy.
This paper uses wave-optics simulations with weak-to-strong scintillation conditions to model the performance of a digital-holography wavefront sensor (DH-WFS). Via Monte Carlo analysis, these simulations predict the optimal signal and reference strengths of a DH-WFS in the off-axis pupil plane recording geometry (PPRG). Incidentally, the Monte Carlo analysis shows that, contrary to conventional wisdom, we cannot directly relate the signal-to-noise ratio and the field-estimated Strehl ratio to the hologram fringe visibility. Such results directly inform ongoing experimental efforts on how to design and build a DH-WFS in the off-axis PPRG to properly handle weak-to-strong scintillation conditions.
Imaging through deep-atmospheric turbulence is a challenging and unsolved problem. However, digital holography (DH) has recently demonstrated the potential for sensing and digitally correcting moderate turbulence. DH uses coherent illumination and coherent detection to sense the amplitude and phase of light reflected off an object. By obtaining the phase information, we can digitally propagate the measured field to points along an optical path in order to estimate and correct for the distributed-volume aberrations. This so-called multi-plane correction is critical for overcoming the limitations posed by moderate and deep atmospheric turbulence. Here we loosely define deep turbulence conditions to be those with Rytov numbers greater than 0.75 and isoplanatic angles near the diffraction-limited viewing angle. Furthermore, we define moderate turbulence conditions to be those with Rytov numbers between 0.1 and 0.75 and with isoplanatic angles at least a few times larger than the diffraction-limited viewing angle. Recently, we developed a model-based iterative reconstruction (MBIR) algorithm for sensing and correcting atmospheric turbulence using single-shot DH data (i.e., a single holographic measurement). This approach uniquely demonstrated the ability to correct distributed-volume turbulence in the moderate turbulence regime using only single-shot data. While the DH-MBIR algorithm pushed the performance limits for single-shot data, it fails in deep turbulence conditions. In this work, we modify the DH-MBIR algorithm for use with multi-shot data and explore how increasing the number of measurements extends our capability to sense and correct imagery in deep turbulence conditions.
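A minimal sketch of the "digitally propagate the measured field" step, using the standard angular-spectrum method rather than DH-MBIR itself: given a complex field sampled on a square grid, propagate it a distance z so that phase corrections can be applied in multiple planes along the path.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """field: complex 2D array on a square grid; dx: sample spacing [m]; z: distance [m]."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    # Free-space transfer function; evanescent components are suppressed.
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    H = np.exp(1j * 2 * np.pi / wavelength * z * np.sqrt(np.maximum(arg, 0.0)))
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Usage idea: propagate a measured pupil field back toward an aberrating layer,
# apply the estimated phase correction there, then propagate onward.
```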
Image sharpening is a proven method to reconstruct 2D and 3D images in the presence of distributed volume turbulence with coherent illumination. Non-uniform illumination due to scintillation arises when illuminating a distant object through distributed volume turbulence and thus changes the underlying intensity pattern that is to be reconstructed. This paper examines the performance trends that arise when performing 3D image sharpening on a distant object with scintillated illumination patterns. We find that range images are relatively robust to scintillation and show how the quality of reconstructed range images decreases with increasing turbulence strength and when scintillated illumination is introduced.
We present a new technique combining digital holography and tomography to make path-resolved measurements of a turbulent air volume. An array of optical beams at different propagation angles is transmitted through the volume under test, interfered with a reference beam, and the resulting fringe patterns are all simultaneously recorded on a focal plane array. The complex field of each beam in the receiver pupil plane is recovered via digital Fourier processing and then tomographic reconstruction algorithms are applied to calculate wavefront error contributions from multiple planes along the beam path. Results are presented from a proof-of-concept laboratory experiment using phase screens to mimic the turbulence volume. Comparing the tomographic reconstructions to the known screen prescriptions, we successfully demonstrate accurate measurement of the phase distortions introduced in multiple planes. This ability to longitudinally resolve turbulence and isolate individual layers may benefit numerous applications including precision turbulent flow analysis in wind tunnels and adaptive optic compensation for free space optical communications.
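A minimal sketch of the digital Fourier processing step for one off-axis beam, as commonly practiced in digital holography (not necessarily the authors' exact pipeline): isolate the +1-order sideband of the recorded fringe pattern in the Fourier domain and shift it to baseband to recover the complex pupil-plane field. The carrier location and window radius are assumed inputs.

```python
import numpy as np

def demodulate_off_axis(hologram, carrier_px, window_radius):
    """hologram: square 2D intensity pattern; carrier_px: (row, col) of the sideband peak."""
    F = np.fft.fftshift(np.fft.fft2(hologram))
    n = hologram.shape[0]
    rows, cols = np.ogrid[:n, :n]
    mask = (rows - carrier_px[0]) ** 2 + (cols - carrier_px[1]) ** 2 <= window_radius ** 2
    sideband = np.where(mask, F, 0.0)                  # keep only the +1 order
    # Re-center the sideband on the DC position, then inverse transform.
    sideband = np.roll(sideband,
                       (n // 2 - carrier_px[0], n // 2 - carrier_px[1]),
                       axis=(0, 1))
    return np.fft.ifft2(np.fft.ifftshift(sideband))    # complex pupil-plane field
```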
The availability of multi-pixel short-wave infrared (SWIR) Geiger-mode avalanche photodiode (GmAPD) light detection and ranging (LIDAR) receivers has enabled unique detection and imaging capabilities in commercial and government platforms. Specific applications of this technology include long-range target detection, acquisition, tracking, 3D mapping, and optical communication, as well as intelligence, surveillance, and reconnaissance (ISR) missions capable of passive, direct, and coherent detection. This work reviews the status of SWIR GmAPD cameras for various system applications. Technical specifications of the synchronous and asynchronous cameras enabled by high detector sensitivity and low dark count rate will be presented. Furthermore, details of camera functions and detector experimental performance (e.g., format, photon detection efficiency, dark count rate, wavelength, timing) will be discussed. Combining single-photon-sensitive pixels, each capable of precise photon arrival timing, enables LIDAR systems with unparalleled engagement ranges and non-conventional imaging methods. Range-echo imaging, non-line-of-sight detection, and other novel sensing techniques will be discussed, specifically in systems constrained by platform size, weight, and power (SWaP). Application of measured Geiger-mode performance in various extended-range scenarios will be reviewed for ground, tactical, space, and surveillance applications.
Random phase errors due to atmospheric fluctuations are a ubiquitous challenge for ground based optical imaging interferometers. We present methods for dealing with these atmospheric phase errors to improve image reconstruction algorithms. The first method utilizes a scale-and-linear-phase-invariant error metric during nonlinear optimization. This method is prone to stagnation in local minima. The second method is a global linear phase correction that is applied prior to image reconstruction, either in fringe processing or as a simple preprocessing step to image reconstruction. This phase calibration method, like the baseline bootstrapping concept, is possible only with certain beam combination configurations and requires multispectral measurements. This phase correction is coarse but provides a solution within the capture range of the nonlinear optimization of the first method. Using both methods results in a simplified image reconstruction algorithm that produces a high-fidelity reconstruction.
Lidar is an optical technology for detection and range measurement that has attracted much attention in the development of next-generation driving and navigation systems. Here we report on the development of a non-scanning, non-solid-state, laser-based infrared lidar system with potential applications in advanced driver assistance systems and autonomous vehicles. Our emphasis in the design approach has been on compactness of the final system, so that it can be deployed either standalone or as a complement to existing lidar sensors, enabling fusion sensing in automotive or even drone applications. The non-scanning lidar system, currently patent pending, comprises a laser light source, a set of optical elements that creates a predefined reference optical pattern, means for filtering returned optical signals, imaging optics, an optical detector, and a processing unit. The principle of this system is largely based on image processing with a known and calibrated reference. Using Python and OpenCV, the near-infrared images acquired over the entire field of view are analyzed in real time to determine the position and velocity of objects. The work presented here describes the principle of our new lidar system and the optical system design, as well as experimental results demonstrating its performance. The benefits and limitations of our imaging lidar technology are compared to those of current scanning and flash lidars.
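A generic sketch of the kind of OpenCV processing step described above; the actual patent-pending algorithm is not public, so this is only an assumed illustration: locate the bright spots of a projected reference pattern in a near-infrared frame and track their motion between consecutive frames to infer position and velocity in pixel units.

```python
import cv2
import numpy as np

def spot_centroids(frame, thresh=200):
    """frame: 8-bit grayscale NIR image; returns an Nx2 array of spot centers."""
    _, binary = cv2.threshold(frame, thresh, 255, cv2.THRESH_BINARY)
    _, _, _, centroids = cv2.connectedComponentsWithStats(binary)
    return centroids[1:]                      # drop the background label

def spot_velocity(c_prev, c_curr, dt):
    """Nearest-neighbour displacement between consecutive frames, in px/s."""
    d = np.linalg.norm(c_curr[:, None, :] - c_prev[None, :, :], axis=2)
    nearest = c_prev[np.argmin(d, axis=1)]    # closest previous spot for each current spot
    return (c_curr - nearest) / dt
```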
The coherence of light is destroyed when it propagates through a scattering medium, which scrambles the transmitted light and forms a speckle pattern behind the medium. To understand the scattering process and enable applications such as imaging through scattering media, the transmission matrix and its eigenvectors have been studied and exploited. However, the Huygens–Fresnel principle implies that the scattering process has a localized, regional character, which can be associated with spatially engineered effective scattering particles that we name scattering tunnels. We demonstrate this scattering tunneling effect experimentally and use it to achieve super-resolution mode manipulation on the input plane of a scattering system. Scattering tunnels may offer another perspective for studying the scattering process and provide researchers a way to improve the efficiency of light control through scattering media.
A block-based, streaming, multi-frame blind deconvolution (MFBD) mitigation method is presented for restoration of imagery impacted by space-variant atmospheric turbulence. An incremental approach is taken, referred to as “streaming”, which operates on each new frame as it arrives. For each new frame, an optimization is initiated to minimize the error between the new frame and forward modeling of the object and point spread function (PSF). To adapt to space-variant turbulence conditions, the fundamental units of operation are small, overlapping blocks of pixels extracted from the entire frame. The optimization for each new block is seeded using the solution from the same block area in the previous frame to produce a cumulative, improved, block solution. For each block, the algorithm implements stochastic gradient descent, and alternates between seeking a solution for the PSF and the object, while holding the other constant. To assist in regularizing the PSF solutions during the processing, the PSF estimate is projected onto a sparse dictionary representation of PSFs. The block solutions are combined using an overlap/add method to produce the object estimate after each frame. Our method can utilize either a Picard iterative process or a limited memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm with bound constraints. A comparison of results is presented using data simulated with high-fidelity to real systems and environments. While incremental methods for blind deconvolution have been reported in the literature, our method has been developed explicitly for the routine mechanics of blind deconvolution.
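A stripped-down sketch of the alternating update described above (assumptions only, not the paper's implementation): for one block, hold the PSF fixed and take a gradient step on the object, then hold the object fixed and step on the PSF, reducing ||frame - object * psf||^2 where * is convolution. For simplicity the object, PSF, and frame are all assumed to be the same block size.

```python
import numpy as np
from scipy.signal import fftconvolve

def data_misfit_grads(frame, obj, psf):
    """Gradients of the squared data misfit with respect to the object and the PSF."""
    residual = fftconvolve(obj, psf, mode="same") - frame
    grad_obj = fftconvolve(residual, psf[::-1, ::-1], mode="same")
    grad_psf = fftconvolve(residual, obj[::-1, ::-1], mode="same")
    return grad_obj, grad_psf

def alternate_step(frame, obj, psf, lr_obj=1e-2, lr_psf=1e-3):
    g_obj, _ = data_misfit_grads(frame, obj, psf)
    obj = np.clip(obj - lr_obj * g_obj, 0, None)     # object step, kept nonnegative
    _, g_psf = data_misfit_grads(frame, obj, psf)
    psf = np.clip(psf - lr_psf * g_psf, 0, None)     # PSF step, kept nonnegative
    psf /= psf.sum() + 1e-12                         # keep the PSF normalized
    return obj, psf
```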
A snapshot hyperspectral imaging architecture is presented that forgoes time-based scanning through the use of a sensor diode and liquid crystal arrays for amplitude modulation. Incident light is partitioned into discrete image pixels and frequency-encoded via projection onto spatial and spectral liquid crystal modulation arrays, with the resultant sum and difference frequency components arising from optical mixing. A hyperspectral image is reconstructed by means of a Fourier analysis that recovers the associated frequency components according to each pixel's modulation frequency. The presented snapshot hyperspectral imaging architecture is investigated in terms of its optical geometry and its theoretical and experimental operation, and is substantiated via simulation.
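A minimal sketch of the reconstruction idea described above, with an assumed signal model and carrier frequencies rather than the authors' exact implementation: each encoded channel is amplitude-modulated at its own frequency, the detector records the summed signal, and an FFT separates the channels by their modulation frequency.

```python
import numpy as np

fs, T = 10_000.0, 1.0                             # sample rate [Hz], record length [s]
t = np.arange(0, T, 1 / fs)
pixel_freqs = np.array([120.0, 260.0, 410.0])     # one carrier per encoded pixel (assumed)
pixel_values = np.array([0.8, 0.3, 0.55])         # intensities to recover

detector = sum(a * np.cos(2 * np.pi * f * t)      # summed, frequency-encoded detector signal
               for a, f in zip(pixel_values, pixel_freqs))

spectrum = np.fft.rfft(detector) / (len(t) / 2)   # amplitude-normalized FFT
freqs = np.fft.rfftfreq(len(t), 1 / fs)
recovered = [abs(spectrum[np.argmin(abs(freqs - f))]) for f in pixel_freqs]
print(np.round(recovered, 3))                     # ~ [0.8, 0.3, 0.55]
```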
This PDF file contains the front matter associated with SPIE Proceedings Volume 11836 including the Title Page, Copyright information, and Table of Contents.