This PDF file contains the front matter associated with SPIE Proceedings Volume 12136, including the Title Page, Copyright information, Table of Contents, and Conference Committee listings.
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks. You are receiving this notice because your organization may not have SPIE eBooks access.*
*Shibboleth/OpenAthens users: please sign in to access your institution's subscriptions.
To obtain this item, you may purchase the complete book in print or electronic format on SPIE.org.
In medical practice, the ultrastructure of muscle is currently revealed by electron microscopy (EM) of resected samples, which requires freezing and paraffin sectioning together with a set of histological, molecular and biochemical analyses. The resection, slicing and labelling steps alter the phenotypic and volumetric information relative to the tissue's initial integrity. Starting from this observation, we developed an original pipeline combining an optical and a computational strategy for imaging 3D biomedical structures without resorting to slicing, freezing or labelling. The myosin assembly of a whole muscle is revealed by second harmonic generation (SHG) recorded with a multiphoton microscope throughout the entire 180 μm thickness of the extensor digitorum longus (EDL) of a wild-type mouse. During the SHG recording, the point spread function (PSF) of the multiphoton microscope is measured along the full imaging depth. This step highlights an axial broadening of the PSF, while the lateral PSF remains constant throughout the recordings. A fitting algorithm then estimates a mathematical model of the PSF, capturing its variability across the image depth. Finally, computational image restoration is performed with the fast image deblurring algorithm BD3MG, which accounts for the depth-variant PSF and thus accurately corrects the non-stationary distortions across the recorded volume. The axial organization of myosin is revealed for the first time, highlighting the tubular organization of myosin within the myofibrils.
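As a rough illustration of depth-variant restoration (not the BD3MG algorithm itself), the sketch below applies slice-wise Wiener deconvolution with a Gaussian PSF whose width follows an assumed fitted depth model; the function names, the Gaussian PSF shape and the noise-to-signal parameter are all illustrative assumptions.

```python
import numpy as np

def gaussian_psf(shape, sigma):
    """Centred 2D Gaussian PSF with unit sum (illustrative PSF model)."""
    y, x = np.indices(shape)
    cy, cx = (shape[0] - 1) / 2, (shape[1] - 1) / 2
    g = np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2 * sigma ** 2))
    return g / g.sum()

def wiener_deblur_slice(img, sigma, nsr=1e-2):
    """Wiener deconvolution of one depth slice with a Gaussian PSF."""
    psf = gaussian_psf(img.shape, sigma)
    H = np.fft.fft2(np.fft.ifftshift(psf))
    F = np.conj(H) / (np.abs(H) ** 2 + nsr) * np.fft.fft2(img)
    return np.real(np.fft.ifft2(F))

def deblur_stack(stack, sigma_of_depth):
    """Apply the depth-dependent PSF model slice by slice."""
    return np.stack([wiener_deblur_slice(s, sigma_of_depth(z))
                     for z, s in enumerate(stack)])
```

In a pipeline like the one described, `sigma_of_depth` would come from the fitted PSF model rather than being a hand-chosen function.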
The in-line configuration of digital holographic microscopy is the simplest to set up, but it requires numerical reconstructions in order to retrieve the phase of the wave diffracted by the sample. These reconstructions are based on an image formation model, but the effect of the microscopy system is often neglected. Yet some parameters, such as a wrong magnification, the partial coherence of the illumination, or optical aberrations, may bias the reconstruction. In the framework of inverse-problem approaches, we analysed the effects of some of these parameters using simulations and experiments on calibrated spherical objects and a rigorous model (Lorenz-Mie), in order to evaluate the relevance and requirements of model refinements.
The study of interfacial structures is of utmost importance for various research fields, such as cell biology and display systems, as well as their sub-disciplines. One of the traditional means of imaging buried structures relies on optical sectioning with super-resolution microscopy. Although it exceeds the diffraction limit in resolution, this methodology has several shortcomings, ranging from its reliance on fluorescent markers and long exposure times to the high cost of the imaging system. Ultimately, these limitations make the existing technologies unsuitable for live-cell imaging, including the imaging of surface proteins of a living cell. In this project, a label-free quantitative phase imaging method is realized to enable imaging of an interface between different media. The system is based on an off-axis holographic microscope and uses a high numerical aperture (NA) microscope objective to achieve total internal reflection (TIR). Existing literature on total internal reflection holographic microscopy uses a prism to achieve TIR, which limits the working distance of the objective and hence the magnification. Our system relies on a 100x objective with 1.49 NA to improve resolution and magnification. The complex field reflected from the sample is recovered using digital holography principles. The resolution of the system can be further enhanced by combining several illumination angles and applying synthetic aperture reconstruction.
Diffraction-limited phase imaging of microscopic reflective and transmitting samples is achieved by processing multiple intensity diffraction patterns (up to 125). To show the potential of the technique, different phase/amplitude microscopic samples were used in the experiments. Coherent and partially coherent illumination of the sample (using laser and LED light sources) is investigated.
Polarization, as an intrinsic property of electromagnetic waves, is useful in microscopy of anisotropic samples, specifically for studying their phase information and birefringence properties. We used a polarization-sensitive Fourier ptychography method (pFPM) to construct a birefringence map showing the optical anisotropy of the sample under test over a large field of view with higher resolution. The optic-axis orientation and phase retardation of the sample are extracted quantitatively. The images are collected through a 10X objective lens with numerical aperture (NAobj) = 0.28 using a polarization-sensitive camera. The pFPM phase retrieval algorithm is used to reconstruct the object with 548 nm resolution and a synthetic NA (NAsynth) = 0.54. The novelty of our approach lies in combining pFPM with dome-shaped illumination, showing promise for high-performance quantitative polarimetric imaging.
The in-line X-ray phase contrast imaging technique relies on the measurement of Fresnel diffraction intensity patterns due to the phase shift and the absorption induced by the object. The recovery of both phase and absorption is an ill-posed non-linear inverse problem. In this work, we address this problem with an iterative algorithm based on a primal-dual method, which allows us to account for the non-linearity of the forward operator. We used a variational approach with different regularizations for the phase and absorption, in order to take into account the specificities of each quantity. Assuming the solution to be piecewise constant, the functional involves the Total Generalized Variation (TGV) as well as the classical Total Variation (TV), which enables a compromise between sharp discontinuities and smoothness in the solution. This optimization problem is solved efficiently using a primal-dual approach, the Primal-Dual Hybrid Gradient Method (PDHGM). From this approach, we propose an algorithm called PDHGM-CTF, based on the linearized Contrast Transfer Function model, which we generalize to the nonlinear problem to obtain the Non-Linear Primal-Dual Hybrid Gradient Method (NL-PDHGM). The proposed iterative algorithms recover the phase and absorption simultaneously from a single diffraction pattern without a homogeneity assumption or support constraint, and the nonlinear version is valid without restriction on the object. Moreover, we show that the approach is robust with respect to the initialization: while a good approximation as starting point reduced the convergence time, it did not improve the reconstruction results. We demonstrate the potential of the proposed algorithms on simulated datasets and show that they produce reconstructions with fewer artifacts and improved normalized mean squared error compared to a gradient descent scheme.
We evaluate the robustness of the proposed algorithm on simulated images of 1000 random objects reconstructed with the same hyperparameters.
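As background, the PDHGM iteration on a simpler, linear problem (TV-regularized denoising) can be sketched as follows; this is the classical Chambolle-Pock scheme, not the authors' NL-PDHGM with TGV and the CTF forward model, and the step sizes and regularization weight are illustrative.

```python
import numpy as np

def grad(u):
    """Forward-difference gradient with periodic boundaries."""
    return np.stack([np.roll(u, -1, 1) - u, np.roll(u, -1, 0) - u])

def div(p):
    """Negative adjoint of grad (discrete divergence)."""
    return (p[0] - np.roll(p[0], 1, 1)) + (p[1] - np.roll(p[1], 1, 0))

def tv(u):
    """Isotropic total variation."""
    return np.sqrt((grad(u) ** 2).sum(0)).sum()

def pdhgm_tv_denoise(y, lam=0.2, n_iter=200):
    """PDHGM for min_x ||x - y||^2 / 2 + lam * TV(x)."""
    tau = sigma = 1.0 / np.sqrt(8.0)       # ||grad||^2 <= 8
    x, x_bar = y.copy(), y.copy()
    p = np.zeros((2,) + y.shape)
    for _ in range(n_iter):
        # dual ascent + projection onto the lam-ball (prox of f*)
        p = p + sigma * grad(x_bar)
        p = p / np.maximum(1.0, np.sqrt((p ** 2).sum(0)) / lam)
        # primal descent + prox of the quadratic data term
        x_new = (x + tau * (div(p) + y)) / (1.0 + tau)
        x_bar = 2 * x_new - x
        x = x_new
    return x
```

The authors' method replaces the quadratic data term with the (non-linear) Fresnel diffraction model and TV with a TV/TGV combination, but the primal-dual alternation has the same shape.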
Ptychography is a lensless coherent diffraction imaging method that retrieves both the amplitude and the phase from a set of diffraction patterns. This phase-contrast imaging capacity, combined with the unique penetration ability of THz radiation, offers potential for applications such as biomedical imaging and nondestructive testing. We present two optimization strategies that allow THz ptychography to efficiently achieve a large field of view (FOV) with high resolution. We show that using a larger probe beam paired with a proportionally enlarged scanning step increases the imaging FOV without degrading imaging quality, as long as the overlap ratio is respected. Thus, a centimeter-scale object can be imaged with a reasonable number of scan positions. We create structured illumination by simply inserting a porous polymer foam that acts as a diffuser. The presence of the diffuser offers higher spatial resolution and better reconstruction for pure-phase objects. We experimentally demonstrate the reconstruction improvement with both an amplitude-contrast USAF target and a phase-contrast sample. The proposed THz ptychographic setup successfully images the phase variation distribution of a sample consisting of paraffin-embedded human breast cancer tissue.
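The core ptychographic reconstruction step can be illustrated with a single ePIE-style update at one scan position, shown below as a hedged sketch; the abstract does not state which algorithm the authors use, and the step sizes and array sizes here are illustrative.

```python
import numpy as np

def epie_update(obj, probe, diff_amp, pos, alpha=1.0):
    """One ePIE object/probe update at a single scan position.

    obj      : complex object estimate (2D)
    probe    : complex probe estimate (2D, smaller than obj)
    diff_amp : measured far-field amplitude (sqrt of intensity)
    pos      : (row, col) top-left corner of the probe on the object
    """
    r, c = pos
    h, w = probe.shape
    patch = obj[r:r + h, c:c + w]
    psi = probe * patch                              # exit wave
    Psi = np.fft.fft2(psi)
    Psi = diff_amp * np.exp(1j * np.angle(Psi))      # amplitude constraint
    d = np.fft.ifft2(Psi) - psi                      # exit-wave correction
    obj_new, probe_new = obj.copy(), probe.copy()
    obj_new[r:r + h, c:c + w] += alpha * np.conj(probe) / (np.abs(probe) ** 2).max() * d
    probe_new += alpha * np.conj(patch) / (np.abs(patch) ** 2).max() * d
    return obj_new, probe_new
```

A full reconstruction loops this update over all (overlapping) scan positions until the estimates stop changing, which is where the overlap-ratio requirement mentioned above comes from.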
Peripheral vision not only plays a vital role in daily visual tasks such as locomotion and detection, but peripheral refraction is also hypothesized to influence eye growth and myopia development. In 1971, Hoogerheide et al. suggested an increased risk for humans to become myopic if the peripheral refractive errors tend to be hyperopic, i.e., positive relative peripheral refraction (RPR). The hypothesized link between peripheral refraction and myopia development has prompted a series of scientific investigations to confirm the theory and understand its underlying foundations. As a result, high-quality peripheral refractometry has gained importance in the study of myopia. Clinical aberrometers are efficient and robust instruments for measuring wavefront error for central vision; however, measuring aberrometry in the peripheral field raises several difficulties that prevent standardization for clinical use. In the present work, we develop a new type of scanning aberrometer to improve and simplify the analysis of peripheral refraction. Four physical eye models were made to provide a stable sample resembling a human eye and to validate the new methodology. The purpose of this study is to investigate the characteristics of the current system and determine the factors that limit the usability of the instrument; it is aimed at developing a gold-standard technology for peripheral refraction measurement that is more economical, simple to use and offers the highest possible measurement quality. Validation was done by a comparative analysis between theoretical and experimental results, showing good correlation. The results of this study will provide helpful information for conducting studies in human eyes with this new apparatus.
Neural networks offer a new approach to solving nonlinear problems and are widely applied in optical science. In this work, we integrate a neural network with a Shack-Hartmann wavefront sensor (SHWS) to reconstruct not only the wavefront but also the intensity of the beam profile. The network can obtain the wavefront information without calculating the slope of the wavefront, which consumes most of the time in the traditional algorithm, while simultaneously capturing the features of the beam intensity distribution. We also compare the reconstruction result obtained from a single focal spot. The experimental results show that although training on the SHWS pattern gives a slightly better root mean square (RMS) error, both reconstructions achieve high accuracy in wavefront and beam profile.
The need for high-resolution Earth Observation (EO) images for scientific and commercial exploitation has led to the generation of an increasing amount of data, with a material impact on the resources needed to handle data on board satellites. In this respect, Compressive Sensing (CS) offers interesting features in terms of native compression, on-board processing and instrument architecture. In CS instruments, the data are acquired natively compressed by leveraging the concept of sparsity, while on-board processing is offered at low computational cost by extracting information directly from the CS data. In addition, the instrument architecture can enjoy super-resolution capabilities that provide a higher number of pixels in the reconstructed image than natively provided by the detector. In this paper, we present the working principle and main features of a CS demonstrator of a super-resolved instrument for EO applications, with ten channels in the visible and two in the medium infrared. Besides merging the acquisition and compression phases of image generation into a single step, its architecture reaches a super-resolution factor of at least 4x4 in the reconstructed images. The outcome of this research can open the way to a novel class of EO instruments with improved Ground Sampling Distance (GSD), relative to that natively provided by the number of sensing elements of the detector, and can impact EO applications thanks to native compression, on-board processing capabilities and increased GSD.
This work studies compressive sensing (CS) applied to a pushbroom scanning technique for Earth observation. Specifically, we studied the effect of the satellite movement on CS reconstruction performance in the context of a future space mission devoted to monitoring the cryosphere. Starting from real Earth images, we first simulated the CS acquisition in the static situation, where we evaluated the effect of the selected CS algorithms and the accuracy of image reconstruction in different wavelength bands. We then extended the reconstruction algorithms to pushframe acquisitions, i.e., static images processed line by line, and pushbroom acquisitions, i.e., moving frames, which account for the payload displacement during acquisition. A parallel analysis of the classical pushbroom acquisition strategy is also performed for comparison.
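The sparse-recovery step at the heart of such CS pipelines can be sketched with a basic iterative shrinkage-thresholding algorithm (ISTA); the measurement matrix, problem sizes and parameters below are illustrative placeholders, not the mission algorithms evaluated in the paper.

```python
import numpy as np

def ista(A, y, lam=0.01, n_iter=500):
    """ISTA for the lasso problem min_x ||A x - y||^2 / 2 + lam * ||x||_1.

    A : (m, n) sensing matrix with m < n (compressive measurements)
    y : (m,) measurement vector
    """
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L    # gradient step on the data term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x
```

In a pushbroom setting, a solver of this kind would be run per acquired line or frame, with `A` encoding the instrument's native compression.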
We present a technique for multispectral fluorescence lifetime imaging with high spatial resolution that combines single-pixel and data-fusion imaging techniques. The system relies on the combined use of three different sensors: two single-pixel cameras capturing multispectral and time-resolved information, and a conventional 2D array detector capturing high-spatial-resolution images. The resulting giga-voxel 4D hypercube is acquired quickly by measuring only 0.03% of the dataset. The fusion procedure solves a regularization problem, which is handled efficiently via gradient descent. The system can be used to identify fluorophore species.
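The data-fusion step can be illustrated with a toy quadratic problem solved by gradient descent: a low-resolution measurement (standing in for the single-pixel data) is fused with a high-resolution image (standing in for the 2D array detector) under a smoothness regularizer. The forward models, weights and step size here are illustrative assumptions, not the authors' actual cost function.

```python
import numpy as np

def fuse(y_lr, y_hr, scale, lam=0.1, lr=0.05, n_iter=300):
    """Gradient descent on
    ||avg(x) - y_lr||^2/2 + ||x - y_hr||^2/2 + lam/2 * ||grad x||^2."""
    n = y_hr.shape[0]
    up = lambda a: np.repeat(np.repeat(a, scale, 0), scale, 1)
    x = up(y_lr)                                          # upsampled init
    for _ in range(n_iter):
        # data term 1: block averages of x should match the low-res data
        avg = x.reshape(n // scale, scale, n // scale, scale).mean((1, 3))
        r1 = up(avg - y_lr) / scale ** 2
        # data term 2: x should stay close to the high-res image
        r2 = x - y_hr
        # smoothness gradient: negative discrete Laplacian
        lap = (np.roll(x, 1, 0) + np.roll(x, -1, 0)
               + np.roll(x, 1, 1) + np.roll(x, -1, 1) - 4 * x)
        x = x - lr * (r1 + r2 - lam * lap)
    return x
```

In the actual system the 4D hypercube couples many more dimensions (spectrum, lifetime, space), but the fused estimate is found by the same kind of regularized least-squares descent.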
We introduce a lock-in method to increase the phase contrast in incoherent Differential Phase Contrast (DPC) imaging. The use of a smart-pixel detector with in-pixel signal demodulation, paired with synchronized illumination, provides the basis of a bit-efficient approach to DPC. The experiments show a sensitivity increased by a factor of 8 over equivalent standard DPC measurements; a single-shot sensitivity of 0.7 mrad at a frame rate of 1400 fps is demonstrated. This new approach may open the way for the use of incoherent phase microscopy in biological applications where extreme phase sensitivity and millisecond response times are required.
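The lock-in principle behind the in-pixel demodulation can be sketched in software: multiply the signal by quadrature references at the modulation frequency and low-pass filter. This is a generic illustration of lock-in detection, not the smart-pixel hardware implementation; the frequencies and amplitudes are arbitrary.

```python
import numpy as np

def lock_in(signal, f_ref, fs):
    """Recover amplitude and phase of the signal component at f_ref (Hz).

    Multiplies by quadrature references and low-pass filters by averaging
    over the record, which should span an integer number of periods."""
    t = np.arange(signal.size) / fs
    i = 2 * np.mean(signal * np.cos(2 * np.pi * f_ref * t))   # in-phase
    q = -2 * np.mean(signal * np.sin(2 * np.pi * f_ref * t))  # quadrature
    return np.hypot(i, q), np.arctan2(q, i)
```

Averaging over full periods rejects the DC background and out-of-band noise, which is what gives lock-in detection its sensitivity advantage.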
Light field fluorescence microscopy (LFM) can provide three-dimensional (3D) images in one snapshot, but it essentially lights up the entire sample, even though only part of the sample is meaningfully captured in the reconstruction. This full-volume illumination introduces extraneous background noise, degrading the contrast and accuracy of the final reconstructed images. Temporal focusing-based multiphoton illumination (TFMI) offers widefield multiphoton excitation with volume-selective excitation. We implement TFMI in LFM, illuminating only the volume of interest, thus significantly reducing the background while also offering higher penetration depth in scattering tissue through multiphoton excitation. In addition, the volume range can be varied by modulating the size of the Fourier-plane aperture of the objective lens. Fluorescent beads of 100 nm are used to examine the lateral and axial resolution after phase-space deconvolution of the light field image; the experimental results show a lateral resolution of around 1.2 μm and an axial resolution of around 1.6 μm close to the focal plane. Furthermore, the mushroom body of a Drosophila brain carrying the genetic fluorescent marker GFP (OK-107) is used to demonstrate the volumetric bioimaging capability.
A clear identification of the border between a brain tumor and the surrounding healthy tissue during neurosurgery is essential in order to maximize tumor resection while preserving neurological function. However, tumor tissue is often difficult to differentiate from infiltrated brain during surgery, and most existing techniques have drawbacks in terms of cost, measurement time and accuracy. The fibre tracts of healthy brain white matter are composed of densely packed bundles of myelinated axons that form a uniaxial linear birefringent medium with the optical axis oriented along the direction of the fibre bundle. Brain tumors, whose cells grow in a largely chaotic way, lack this anisotropy of refractive index. Therefore, tumor tissue can be distinguished from healthy white matter using polarized light. A wide-field, visible-wavelength imaging Mueller polarimetric system was used to study formalin-fixed human brain sections measured in reflection geometry. The non-linear decomposition of the Mueller matrices provided maps of depolarization, scalar retardance and azimuth of the optical axis. A compelling correlation between the azimuth of the optical axis and the orientation of the brain fibre tracts was confirmed against gold-standard histology analysis. We present the results of post-processing Mueller polarimetric images of fixed human brain sections using a combination of classical computer vision and machine learning algorithms for automated fibre tracking in the white matter tracts. Manually labelled polarimetric data were used to train a convolutional neural network to identify white matter, within which surface fibre tracts could be visualized.
We expect that the Mueller polarimetric imaging modality, combined with our machine learning algorithms for fibre tracking, will visualize the directions of fibre tracts in the imaging plane during tumor surgery, allowing neurosurgeons to orient themselves, to spare essential fibre tracts and to make surgery more complete and safe.
We propose a novel microscopy method, called chiral structured illumination microscopy (chiral SIM), for fast imaging of fluorescent chiral domains at sub-wavelength resolution. This method combines three main techniques: structured illumination microscopy, fluorescence-detected circular dichroism and optical chirality engineering. By generating a moiré effect based on the circular dichroism response of the sample, chiral SIM is able to restore the high-spatial-frequency information of the chiral domains and reconstruct a super-resolved image. We establish the theoretical framework of chiral SIM and present a numerical demonstration indicating its superior resolving power over diffraction-limited wide-field imaging methods. We also discuss the possibility of applying nanostructures that form superchiral near fields to boost the circular dichroism response in the proposed chiral SIM method.
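The moiré down-shifting at the core of any SIM scheme can be illustrated in one dimension: multiplying a fine sample pattern by a sinusoidal illumination creates a difference-frequency component that falls inside the detection passband. The frequencies and amplitudes below are arbitrary illustrative values, unrelated to the chiral-SIM theory itself.

```python
import numpy as np

n = 256
x = np.arange(n)
k_sample, k_illum = 60, 50  # spatial frequencies in cycles per window

sample = 1 + np.cos(2 * np.pi * k_sample * x / n)  # fine sample structure
illum = 1 + np.cos(2 * np.pi * k_illum * x / n)    # structured illumination
moire = sample * illum                             # detected emission pattern

spec = np.abs(np.fft.rfft(moire)) / n
# The product contains the difference frequency k_sample - k_illum = 10:
# fine sample detail is down-shifted into a band a low-NA system can detect,
# and shifting it back in post-processing is what yields super-resolution.
```

In chiral SIM the modulation comes from the circular dichroism response to an engineered optical-chirality pattern rather than from an intensity grating, but the frequency-mixing argument is the same.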
Polarization imaging has found many applications ranging from material science to biomedical applications and astronomy. A widely used class of polarimeters is based on Liquid Crystal Variable Retarders (LCVRs). Indeed, LCVRs are ideal for imaging applications: they are versatile polarization modulators with fast response times and a large aperture. However, the main drawback of such systems is their strong dependence on temperature; as a consequence, they require frequent and time-consuming calibration procedures. In this work, we propose a new design for a temperature-stable variable-retarder cell compatible with LCVR-based polarimeter designs. We formalize a phenomenological model for the temperature dependence of LCVRs and derive theoretical expressions for the working points of the temperature-stable cells. We used a heated enclosure to validate the proposed design experimentally. Stable operation of a single cell built from commercially available LCVRs is demonstrated over a wide range of temperatures (25-50°C). Two cells were then combined to obtain a Polarization State Analyzer (PSA), acting either as a standalone Stokes polarimeter or as part of a Mueller polarimeter in combination with a Polarization State Generator (PSG). In both cases, excellent stability is demonstrated compared to similar LCVR-based polarimeters.
Imaging birefringence at sub-micrometre resolution has demonstrated its potential as a powerful label-free method to gain insight into the structure of biological tissues, both for fundamental research and for biomedical applications. However, achieving sensitive birefringence imaging in real time is a challenge. Recently, our laboratory demonstrated real-time Mueller laser-scanning microscopy based on the idea of spectrally encoded light polarization. This method uses a very fast swept-wavelength laser source in combination with passive polarization optics, enabling high-speed polarization modulation (<MHz) and thus fast polarization measurements. However, as a Jones/Mueller method, it operates off null (far from extinction), so its sensitivity is intrinsically limited. Here, we report a new version of the spectrally encoded light polarization microscope dedicated to weak linear retardance measurements, combining the high sensitivity of a null method with the speed of the spectrally encoded light polarization approach. We expect that the superior performance of this new device will open the way for real-time imaging of very weakly birefringent structures within biological samples.
A new underwater imaging method and apparatus are presented. The method utilizes multibeam interference and is implemented as follows. First, a mode-locked laser shoots a short light pulse into a mirrored negative-dispersion device. Due to dispersion, the pulse width is broadened; that is, the multiple beams formed by the pulse spectral components create destructive interference in the device that reduces their combined intensity. These beams then enter the water. Since their combined intensity has been reduced, the water absorption and scattering are reduced too, because both are directly proportional to the combined intensity of the beams. Because the dispersion of water is positive, the beams with lower frequencies travel faster in the water, which is opposite to what happened in the negative-dispersion device. Thus, the width of the broadened light pulse is gradually compressed in the water. If the dispersive characteristic of the mirrored negative-dispersion device is designed to match that of the water in reverse, the broadened light pulse can be compressed in the water at a specific position. In other words, the multiple beams create constructive interference to produce a maximum of combined intensity in the water, forming an internal light layer that illuminates the object. Theoretical calculations have confirmed the feasibility of the method and show that the designed apparatus can increase the imaging distance in clean ocean water to well over 100 m, possibly even up to 1000 m.
Computing, Modelling, Design: Design and Co-design
This paper deals with Single Image Depth-From-Defocus (SIDFD), a depth estimation approach based on local estimation of defocus blur. As both the blur and the scene are unknown in a single image, generic scene and blur models are commonly used in DFD algorithms. In contrast, we propose to directly learn the image covariance from a limited set of calibration images, which indeed encode both scene and blur (i.e., depth) information. Depth can then be estimated from a single image patch using a maximum likelihood criterion defined with the learned covariance. We also propose a performance model based on the Cramér-Rao bound with a learned scene model to predict the theoretical depth accuracy of an SIDFD system. We validate our SIDFD algorithm and our performance model on an active chromatic SIDFD system dedicated to industrial inspection.
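The maximum-likelihood depth selection with learned covariances can be sketched as follows: one zero-mean Gaussian patch model is learned per calibration depth, and a new patch is assigned the depth whose model gives the highest log-likelihood. The patch sizes, regularization and calibration data below are illustrative assumptions, not the authors' calibration protocol.

```python
import numpy as np

def fit_covariances(patches_by_depth, eps=1e-3):
    """Learn one patch covariance per calibration depth.

    patches_by_depth : list of (n_patches, patch_dim) arrays, one per depth.
    eps regularizes the estimate so it stays invertible."""
    return [np.cov(P, rowvar=False) + eps * np.eye(P.shape[1])
            for P in patches_by_depth]

def ml_depth(patch, covs):
    """Index of the depth whose zero-mean Gaussian best explains the patch."""
    v = patch.ravel()
    scores = []
    for C in covs:
        _, logdet = np.linalg.slogdet(C)
        scores.append(-0.5 * (logdet + v @ np.linalg.solve(C, v)))
    return int(np.argmax(scores))
```

Because the covariances are learned from calibration images, they jointly encode the scene statistics and the depth-dependent blur, which is the key idea of the approach.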
A hybrid imaging system is a simultaneous physical arrangement of a refractive lens and a multilevel phase mask (MPM) acting as a diffractive optical element (DOE). The favorable properties of the hybrid setup are improved extended-depth-of-field (EDoF) imaging and low chromatic aberrations. We built a fully differentiable image formation model in order to use neural network techniques to optimize imaging. In the first stage, the design framework relies on a model-based approach with numerical simulation and end-to-end joint optimization of both the MPM and the imaging algorithms. In the second stage, the MPM is fixed as found in the first stage, and the image processing is optimized experimentally using a CNN learning-based approach, with the MPM implemented by a spatial light modulator. The paper concentrates on a comparative analysis of imaging accuracy and quality for designs with various basic optical parameters: aperture size, lens focal length, and distance between MPM and sensor. To our knowledge, this is the first time that varying aperture size, lens focal length, and MPM-to-sensor distance are considered for end-to-end optimization of EDoF. We numerically and experimentally compare the designs for the visible wavelength interval 400-700 nm and the following EDoF ranges: 0.5-100 m for simulations and 0.5-1.9 m for experimental tests. This study concerns the application of hybrid optics to compact cameras with an aperture of 5-9 mm and an MPM-to-sensor distance of 3-10 mm.
Co-design methods started to incorporate neural networks a few years ago, when deep learning showed promising results in computer vision. This requires computing the point spread function (PSF) of an optical system as well as its gradients with respect to the optical parameters, so that those parameters can be optimized by gradient descent. In previous works, several approaches have been proposed to obtain the PSF, most notably paraxial optics, Fourier optics and differentiable ray tracers. All these models have strengths and limitations regarding their ability to compute a precise PSF and their computational cost. We compare them on a simple co-design task and discuss their relevance, computational cost and applicability.
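As a minimal illustration of the Fourier-optics option among those compared, the incoherent PSF of a circular pupil with a quadratic defocus term is the squared modulus of the pupil's Fourier transform; the grid size and defocus values below are arbitrary.

```python
import numpy as np

N = 256
x = np.linspace(-1, 1, N)
X, Y = np.meshgrid(x, x)
R2 = X**2 + Y**2
pupil = (R2 <= 1.0).astype(float)           # circular aperture

def psf(defocus_waves):
    """Incoherent PSF from a Fourier-optics pupil model: squared modulus of
    the Fourier transform of the (aberrated) pupil function."""
    phase = 2 * np.pi * defocus_waves * R2  # simple quadratic defocus term
    field = pupil * np.exp(1j * phase)
    p = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
    return p / p.sum()                      # normalize to unit energy

# In focus the PSF is an Airy-like spot; defocus spreads the energy, which
# shows up as a drop in the peak (Strehl-like) value.
print(psf(0.0).max() > psf(1.0).max())
```

Because every operation here is differentiable, an autodiff framework could take gradients of any PSF-based loss with respect to the defocus (or other pupil) parameters, which is exactly what the co-design loop needs.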
We compare different methods to optimize end-to-end a hybrid optical/digital system for the best and most uniform performance over the field of view with the Synopsys® CodeV® lens design software. We have extended the native optimization capability of this software by implementing different methods that leverage deconvolution during the optimization of the hybrid optical system as a whole, including simultaneously its optical and digital image processing parts. We show that the joint optimization of the lens and the processing through a true restored-image quality criterion significantly enhances the final post-processed image quality, and allows fine-tuning of the residual balance between on-axis and peripheral fields with simple weighting coefficients.
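A restored-image quality criterion of this kind can be sketched as follows. The Wiener deconvolution below is a generic stand-in (the paper does not specify its restoration filter), and the Gaussian blur stands in for the lens under optimization.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64
img = rng.standard_normal((N, N))           # stand-in scene

# Gaussian-blur OTF modelling the lens currently proposed by the optimizer
fx = np.fft.fftfreq(N)
FX, FY = np.meshgrid(fx, fx)
def otf(sigma):
    return np.exp(-2 * (np.pi * sigma) ** 2 * (FX**2 + FY**2))

def restored_mse(sigma, snr=100.0):
    """Simulate blur + noise, deconvolve with a Wiener filter, and score the
    restored image against the ground truth. A criterion of this kind lets
    the lens optimizer see through the digital restoration step."""
    H = otf(sigma)
    blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * H))
    noisy = blurred + rng.standard_normal((N, N)) / snr
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr**2)   # Wiener filter
    restored = np.real(np.fft.ifft2(np.fft.fft2(noisy) * W))
    return np.mean((restored - img) ** 2)

# A milder blur restores better than a severe one under the same criterion,
# so the criterion gives the optimizer a meaningful gradient direction.
mild, severe = restored_mse(0.5), restored_mse(3.0)
print(mild < severe)
```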
An epidural injection is a medical procedure long used for anaesthesia and for pain associated with radiculopathy. It is the preferred method of drug delivery for sciatica patients and for local anaesthesia. It is crucial to assess the exact position of the needle while performing the procedure: there have been several cases of paralysis, mainly after transforaminal epidural injections. There is therefore a strong need for a device that can precisely measure the needle position in real time during the epidural injection. Optical coherence tomography (OCT) has become an essential tool in bio-imaging. An OCT system is a Michelson interferometer in which one of the mirrors is replaced by the sample to be imaged; its main attraction is that it is non-invasive. We report an OCT-based optical fibre catheter that can be used to guide epidural injections. The catheter is made of single-mode optical fibre, and, to focus the light beam, a ball lens is designed and fabricated on the tip of the fibre. The lens has a diameter of 250 µm and will be inserted into the needle. The fibre catheter will be connected to a custom-made SD-OCT system equipped with a high-speed, in-house-made spectrometer. By analysing the A-scan images from the system, we can precisely calculate the thickness of the tissue in front of the catheter. This data will guide the clinician in assessing the needle position precisely in real time.
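The depth readout behind an SD-OCT A-scan can be sketched numerically: the spectral interferogram carries a fringe whose frequency in wavenumber is proportional to the reflector distance, which a Fourier transform recovers. The spectrometer range and reflector distance below are illustrative, not the system's actual specifications.

```python
import numpy as np

# Wavenumber axis of the spectrometer (assumed linear in k for simplicity)
k = np.linspace(2 * np.pi / 900e-9, 2 * np.pi / 700e-9, 2048)

z_true = 300e-6   # single reflector 300 um in front of the catheter tip
# SD-OCT spectral interferogram: DC background + cosine fringe cos(2 k z)
spectrum = 1.0 + 0.5 * np.cos(2 * k * z_true)

# The A-scan is the Fourier transform of the spectrum over k; the fringe
# frequency maps linearly to reflector depth.
ascan = np.abs(np.fft.rfft(spectrum - spectrum.mean()))
dk = k[1] - k[0]
z_axis = np.fft.rfftfreq(k.size, d=dk / (2 * np.pi)) / 2   # depth axis (m)
z_est = z_axis[np.argmax(ascan)]
print(f"estimated depth: {z_est * 1e6:.1f} um")
```

The depth resolution of this toy A-scan is set by the spanned wavenumber range, which is why a broadband source and a high-speed spectrometer matter for real-time guidance.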
Quantitative oblique back-illumination microscopy (qOBM) is a novel microscopy technology that enables real-time, label-free quantitative phase imaging (QPI) of thick and intact tissue specimens. This approach has the potential to address a number of important biomedical challenges. In particular, qOBM could enable in-situ/in-vivo imaging of tissue during surgery for intraoperative guidance, as opposed to the technically challenging and often unsatisfactory ex-vivo approach of frozen-section-based histology. However, the greyscale phase contrast provided by qOBM differs from the colorized histological contrast most familiar to pathologists and clinicians, limiting potential adoption in the medical field. Here, we demonstrate the use of a CycleGAN (cycle-consistent generative adversarial network), an unsupervised deep learning framework, to transform qOBM images into virtual H&E. We trained CycleGAN models on a collection of qOBM and H&E images of excised brain tissue from a 9L gliosarcoma rat tumor model. We observed successful modality conversion for both healthy and tumor specimens, faithfully replicating features of the qOBM images in the style of traditional H&E. Some limitations were observed, however, including attention-based constraints in the CycleGAN framework that occasionally allowed the model to 'hallucinate' features not actually present in the qOBM images. Strategies for preventing these hallucinations, comprising both improved hardware capabilities and more stringent software constraints, will be discussed. Our results indicate that deep learning could bridge the gap between qOBM and traditional histology, an outcome that could be transformative for image-guided therapy.
A custom high-resolution snapshot computed tomography imaging spectrometer (CTIS) capable of capturing hyperspectral images - with dimensions 456 × 471 × 417 covering the wavelength range from 438 to 740 nm - through an iterative reconstruction algorithm is tested in a real-world application. The hyperspectral cubes are reconstructed using a sparse implementation of the expectation-maximization algorithm. In the literature, CTIS has previously been limited to laboratory environments with control of external parameters, ideal imaging conditions and imaging of relatively simple objects. Therefore, knowledge about the performance, capabilities and limitations of CTIS in real-world applications (outdoor illumination) is sparse. The real-world application comprises imaging of apples in apple trees and fake plastic apples, which are visually indistinguishable to the naked eye. The results show that real apples are distinguishable from fake plastic apples based on the reconstructed spectral signature. Furthermore, an upgraded CTIS with dimensions of 277 × 278 × 430 is used to investigate the feasibility of the system through the acquisition of CTIS images in the controlled environment of the laboratory. Additionally, several shortcomings of the current system are highlighted and discussed, and improvements to circumvent the shortcomings are proposed. This work shows the capabilities and potential of CTIS in real-world applications and paves the way for future real-time snapshot hyperspectral imaging.
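The expectation-maximization reconstruction can be sketched on a toy system: the random matrix below is a small, dense stand-in for the real CTIS system matrix (which encodes how each hyperspectral voxel projects through the disperser onto the detector, and is large and sparse), but the multiplicative MLEM update is the standard one.

```python
import numpy as np

rng = np.random.default_rng(2)
# Toy non-negative system matrix: 40 detector pixels, 10 unknowns
A = rng.random((40, 10))
x_true = rng.random(10) + 0.1
y = A @ x_true                      # noiseless detector image

# Multiplicative MLEM update:  x <- x * A^T(y / (A x)) / (A^T 1)
x = np.ones(10)
norm = A.T @ np.ones(40)
for _ in range(2000):
    x *= (A.T @ (y / (A @ x))) / norm

rel = np.linalg.norm(A @ x - y) / np.linalg.norm(y)
print(f"relative residual after 2000 iterations: {rel:.1e}")
```

The multiplicative form keeps the estimate non-negative at every iteration, which is why EM-type solvers suit intensity reconstructions; sparse matrix storage makes the same update tractable at the instrument's real 456 × 471 × 417 cube size.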
Conventional optical microscopy is one of the most commonly used methods in biological imaging studies. However, intensity-based imaging techniques exhibit poor contrast on biological samples. Several methods, such as phase contrast microscopy or darkfield microscopy, have been proposed to overcome these limitations; better qualitative information can then be extracted, at the cost of multiplying experimental devices. On the other hand, quantitative phase imaging techniques like Tomographic Diffraction Microscopy (TDM) make it possible to extract full 3D information about the light field emerging from the investigated sample. In this article, we propose to take advantage of TDM's capabilities to simulate the behavior of conventional optical microscopes, making TDM a universal microscopy imaging platform.
Tomographic phase microscopy in a cytometry environment is feasible at the single-cell level and without a priori knowledge of the cell orientation. In the present paper we demonstrate different strategies for recovering the rotation angles of single cells and clusters rotating in microfluidic channels, realistically opening the way to a marker-free cytofluorimeter for three-dimensional imaging of biological fluids. The strategies we developed allow quantitative measurement of the distribution of refractive index inside the cell volume, avoiding the use of chemical and fluorescent tags. The imaging apparatus is based on label-free Digital Holography in a microscopy setup designed in transmission geometry to image a 700x700 μm field of view with a lateral resolution of 0.5 μm. Digital Holography is perfectly suited to imaging in microchannels, as it allows numerical refocusing of the sample within a three-dimensional volume. Here, this imaging arrangement is combined with a high-precision pumping system connected to a microfluidic channel that allows complete rotation of the flowing cells within the field of view. A high-speed 25-megapixel camera acquires holographic measurements of all rotating cells, which are numerically processed to obtain quantitative two-dimensional phase-contrast maps at different view angles. Accurate numerical algorithms tag each phase-contrast map with its rotation angle in the microchannel. The pairs of phase-contrast map and measured angle are given as input to tomography algorithms to obtain the refractive index distribution within the cell volume. The approach works in principle for any kind of biological matter subjected to rotation, as already demonstrated for nuclei of plant cells during dehydration. Furthermore, the same approach reveals the three-dimensional distribution of internalized nanoparticles, as in the case of nano-graphene oxide.
The most important achievement of this strategy is high-throughput phase-contrast tomography at the single-cell level, which opens the way to new diagnostic tools thanks to the possibility of statistically relevant measurements on cell populations and the use of artificial intelligence architectures for cell identification and classification.
Fourier Ptychographic Microscopy (FPM) probes biological samples from multiple directions and provides amplitude and quantitative phase-contrast imaging in a label-free modality with a large space-bandwidth product. FPM is suitable for the analysis of tissues in hospitals and analysis labs by unskilled users. However, whenever any setup misalignment occurs, the actual illumination vector does not match the nominal one, which provokes an incorrect stitching of the low-resolution Fourier spectra and generates severe phase artefacts in the FPM reconstruction, preventing convergence to the sample's complex amplitude. Such inconsistency between the nominal and actual illumination vectors can also be induced by liquid films or drops on the sample plane, or by light scattering. Here, we show the Multi-Look FPM method, which eliminates these unwanted artefacts and obtains correct quantitative phase-contrast images in label-free mode over a 3.3 mm2 FoV with 0.5 μm resolution. Our results add robustness and resilience to FPM apparatus and will allow non-expert users to exploit FPM setups without calibration processes, making them more accessible to unskilled personnel in diagnostic labs and hospitals. We demonstrate the effectiveness of the Multi-Look FPM method on neural tissue slides, cell layers, and marine microalgae with complex inner structures.
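The stitching failure mode can be illustrated with a minimal FPM forward model: oblique illumination shifts the object spectrum before the pupil low-passes it, so a mis-calibrated illumination vector crops the wrong spectral region and the recorded low-resolution image no longer matches the nominal model. Grid sizes, pupil radius and shifts below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 128
obj = np.exp(1j * 0.5 * rng.random((n, n)))   # weak complex transmittance
OBJ = np.fft.fftshift(np.fft.fft2(obj))       # centered object spectrum

# Pupil: circular low-pass of radius r in the Fourier plane
r = 20
u = np.arange(n) - n // 2
U, V = np.meshgrid(u, u)
pupil = (U**2 + V**2 <= r**2)

def low_res_intensity(shift):
    """FPM forward model: oblique illumination shifts the object spectrum
    by the illumination vector before the pupil low-passes it; the camera
    records only the intensity."""
    S = np.roll(OBJ, shift, axis=(0, 1)) * pupil
    return np.abs(np.fft.ifft2(np.fft.ifftshift(S))) ** 2

nominal = low_res_intensity((15, 0))
actual = low_res_intensity((18, 1))   # misaligned LED: wrong spectrum crop
# The mismatch changes the captured image, which is what corrupts the
# spectrum stitching if left uncorrected.
print(np.abs(nominal - actual).mean() > 0)
```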
Lens-free inline holographic microscopy has been shown to be a potent approach for many applications relying on cellular imaging. Applications requiring a large field of view at moderate resolution are the most suitable for this platform. Moreover, the simplicity of the overall imaging system, which requires only a light source and a camera, makes the approach easily accessible. Holograms acquired with such a system are processed to recover phase images. As an additional advantage on top of the simplicity, phase imaging enables imaging of otherwise transparent cell samples without any need for labeling or staining. Such a system can therefore be used for long-term, wide-field imaging of live cell cultures. Up to now, two alternatives have been explored for imaging live cell cultures for extended durations. In one approach, a portable imaging system was placed inside a standard incubator with cell cultures on top; in the other, a stage-top incubator was used on a modified microscope. In both approaches, the cost grows due to the commercial systems involved, and the overall footprint of the system with incubator is too large to be classified as portable. Here, an integrated portable system is presented that can maintain cell cultures at the desired temperature in a 3D-printed enclosure while imaging them in the lens-free inline holographic microscopy modality. Such a system is well suited for tissue culturing and monitoring in limited-resource settings.
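The numerical refocusing these systems rely on is typically angular-spectrum propagation. A minimal sketch (wavelength, pixel pitch and distance are illustrative) also shows that the propagation is exactly invertible, which is what makes refocusing after acquisition possible:

```python
import numpy as np

def angular_spectrum(field, z, wavelength, dx):
    """Propagate a complex field by distance z with the angular-spectrum
    method -- the numerical refocusing step of lens-free holography."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))   # evanescent cut-off
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))

# Propagating forward and back by the same distance recovers the field.
rng = np.random.default_rng(3)
f0 = np.exp(1j * rng.uniform(0, 0.5, (128, 128)))    # weak phase object
f1 = angular_spectrum(f0, 500e-6, 532e-9, 2e-6)
f2 = angular_spectrum(f1, -500e-6, 532e-9, 2e-6)
print(np.allclose(f0, f2))
```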
Plastic debris increases every day, invading the entire ecosystem, especially the marine environment. The damage caused by plastics, and even more by their fragmentation into micrometric particles called microplastics (MPs), is becoming irrecoverable. Many techniques are used to analyse MPs, but a standardized procedure is still missing. Digital Holography (DH) has proved to be a powerful imaging tool for identifying MPs in water samples, being high-throughput, label-free, coherent and non-invasive. Moreover, DH furnishes quantitative information, morphological parameters and numerical refocusing, making it suitable for microfluidic systems. Both the physical and chemical information that characterize the optically denser object are enclosed in the phase contrast, measured through the transition of light waves. However, DH alone is not material-specific and cannot gather information about the polymers. To overcome these constraints, artificial intelligence (AI) has been considered, and has proved to be a powerful tool for accurately identifying MP samples. Here we identify, characterize, and classify MP samples by means of DH empowered by AI, providing a DH modality for fast, high-throughput analysis of MPs at the lab-on-chip scale, distinguishing them from marine diatoms. We use a machine learning (ML) approach on "holographic features" extracted from DH images to distinguish MPs from diatoms with a well-established SVM classifier. Then, we couple DH microscopy and machine learning with a novel characterization of phase-contrast patterns based on fractal geometry. In addition, we use a polarization-resolved DH flow cytometer to prove the birefringence of MPs, and to accurately discriminate between different types of fiber-shaped MPs.
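The classification step can be sketched with toy data: the two-dimensional "holographic features" below are hypothetical stand-ins for the morphological descriptors extracted from the DH images, and a minimal Pegasos-trained linear SVM stands in for the well-established classifier used in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
# Toy feature clusters: microplastics (label +1) vs diatoms (label -1)
mp = rng.normal([2.0, 0.5], 0.3, size=(50, 2))
diatoms = rng.normal([0.5, 2.0], 0.3, size=(50, 2))
X = np.vstack([mp, diatoms])
y = np.hstack([np.ones(50), -np.ones(50)])

# Minimal linear SVM trained with the Pegasos subgradient method
lam = 0.01
w = np.zeros(2)
for t in range(1, 2001):
    i = rng.integers(len(y))
    violated = y[i] * (X[i] @ w) < 1        # hinge-loss margin check
    w *= 1.0 - 1.0 / t                       # regularization shrink
    if violated:
        w += y[i] * X[i] / (lam * t)         # subgradient step

accuracy = np.mean(np.sign(X @ w) == y)
print(f"training accuracy: {accuracy:.2f}")
```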
Materials with magnetic shape memory (MSM) are promising candidates for next-generation devices such as actuators and switching valves. They elongate and contract in a magnetic field and achieve fast switching times on the order of milliseconds while maintaining high positioning precision over millions of cycles. Studying and developing applications for these materials creates a need for fast and accurate methods of analyzing their shape and deformations. We present a technology that combines the capabilities of two interferometric methods: digital holography (DH) and electronic speckle pattern interferometry (ESPI). While digital holography enables high-precision 3D measurement of the object surface, electronic speckle pattern interferometry provides data on high-frequency deformations with nanometer accuracy. Combining both techniques yields comprehensive information about the morphology and dynamics of the samples.
Electronically demodulating indirect time-of-flight (iTOF) image sensors can be replaced by the combination of a transmission electroabsorption modulator (T-EAM) and a common image sensor; the demodulation is then done by the modulator in the optical domain. In this work, a three-dimensional (3D) imaging system utilizing this combination is built, and its performance is compared using two otherwise identical modulators of different sizes. Measurement results show the strong influence of the modulation bandwidth of the T-EAM, leading to depth inaccuracies of 1 and 4 cm at a distance of 1 m for modulators with bandwidths of 75 and 23 MHz, respectively. In addition, simulation results matching the measured values are presented.
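The depth recovery behind any continuous-wave iTOF scheme, whether the demodulation happens electronically in the pixel or optically in a T-EAM, can be sketched with the standard four-phase algorithm; the modulation frequency and signal levels below are illustrative.

```python
import numpy as np

c = 3e8
f_mod = 20e6                      # modulation frequency (Hz), illustrative

def four_phase_depth(d_true, amplitude=1.0, offset=2.0):
    """Simulate the four correlation samples of a continuous-wave ToF pixel
    and recover depth from their phase."""
    phi = 4 * np.pi * f_mod * d_true / c          # round-trip phase
    # Correlation samples at 0, 90, 180, 270 degrees
    C = [offset + amplitude * np.cos(phi - k * np.pi / 2) for k in range(4)]
    phi_est = np.arctan2(C[1] - C[3], C[0] - C[2]) % (2 * np.pi)
    return c * phi_est / (4 * np.pi * f_mod)

print(four_phase_depth(1.0))      # recovers 1 m within float precision
```

A limited modulator bandwidth distorts the correlation waveform away from the ideal sinusoid assumed here, which is one way the observed depth inaccuracy can arise.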
Turin Shroud: neutrons and protons produced by the interaction of γ photons with the buried body, i.e. the reaction 16O(γ, n)15O in giant dipole resonance, followed by 14N(n, p)14C, could have biased radio-dating and formed the image
Visual gloss is a multidimensional perceptual attribute, and its instrumental evaluation is considerably complex. Nevertheless, visual gloss plays an important role in judging the visual appearance of products. According to Hunter (1937), at least six perceptual attributes should be considered for a complete evaluation. Most of these attributes were initially determined with visual evaluations and subsequently with instruments using photodiode sensors that measure the amount of reflected light at multiple geometries to quantify aspects of gloss. However, partly because the human visual system has access to far more information than a gloss meter, there is only a weak correlation between human gloss perception and industrial gloss instruments. For this reason, these instruments are often combined with visual assessment for quality control, e.g. at the end of production lines. Recently, advancing technological developments in imaging hardware and software have enabled the introduction of imaging sensors into gloss meters, greatly increasing the amount of captured information. In this study, a camera-based gloss meter design is adapted to include the measurement of contrast gloss and to study its influence on glossiness. Indeed, despite the considerable impact of contrast on gloss, as described by the contrast gloss formula developed by Leloup et al., contrast is not yet included in current gloss meters. The implementation of a contrast evaluation requires several additional sensor calibration procedures. Furthermore, the differences in illumination levels, focus levels and viewing distance between instrumental evaluation conditions and realistic visual assessment conditions must be accounted for.
The visual quality inspection of test objects with complex geometry is a challenging task for automated artificial vision systems. Even for systems where the illumination and image acquisition setups are specifically tailored to the properties of the test object, captured images often show unwanted signal components, e.g., surface reflections, which complicate the detection of material defects. One way to mitigate this problem is to have an expert define, by hand, image regions that are excluded from the automated defect detection. Besides being time-consuming, this leaves the system blind in the excluded regions. Another approach is based on acquiring image value statistics (e.g., mean value and standard deviation) for every pixel of an image series captured from a set of defect-free test objects. This information can then be exploited during the inspection process by comparing image values against the previously calculated statistics: pixels whose values lie outside the defect-free distribution may indicate a material defect. Unfortunately, the calculated statistics are invalidated as soon as further preprocessing steps like smoothing or edge detection are applied; they would have to be recalculated by applying the respective preprocessing steps to the images of the defect-free test objects. To resolve this drawback, this contribution presents a novel approach capable of adequately updating the calculated statistics with respect to the chain of required image processing steps. This is achieved by interpreting the statistics as uncertainties and propagating them through the individual processing steps via Gaussian uncertainty propagation. The required gradients are obtained via automatic differentiation of the image processing steps. The effectiveness of the proposed approach is demonstrated by means of empirical experiments.
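For a linear processing step, the Gaussian propagation rule reduces to convolving the per-pixel variances with the squared filter kernel (under an independent-pixel assumption). The sketch below checks this against a Monte-Carlo simulation on a 1-D profile; the noise levels and kernel are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200
mean_img = np.sin(np.linspace(0, 6, n))          # defect-free mean profile
std_img = 0.05 + 0.05 * rng.random(n)            # per-pixel std from series

kernel = np.array([1, 4, 6, 4, 1]) / 16.0        # smoothing step

# Analytic propagation: for a linear filter y = sum_i k_i x_i with
# independent pixels, var(y) = sum_i k_i^2 var(x_i).
var_prop = np.convolve(std_img**2, kernel**2, mode="valid")

# Monte-Carlo check: filter many noisy realizations, measure the variance.
samples = mean_img + std_img * rng.standard_normal((10000, n))
filtered = np.apply_along_axis(
    lambda row: np.convolve(row, kernel, "valid"), 1, samples)
var_mc = filtered.var(axis=0)

print(np.allclose(var_prop, var_mc, rtol=0.1))
```

Nonlinear steps such as edge detection need the first-order (gradient-based) version of the same rule, which is where the automatic differentiation described above comes in.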
A quantitative phase image is often obtained by refocusing or by an iterative algorithm containing refocusing steps. The required propagation distance is typically found by manual search; autofocus algorithms attempt to estimate it automatically. In this paper, we report novel neural-network approaches to autofocus for quantitative phase images.
Holographic optical elements are key components of many products in Augmented Reality and Virtual Reality. In this work we describe the use of pixelated micrometric holograms as directive phase reflectors for self-focusing purposes. We present the optical setup used to record these pixelated holograms, as well as a setup for dynamically addressing them. First results of dynamic hologram addressing are shown and discussed.
Haze is an undesirable effect in images, caused when atmospheric particles such as water droplets, ice crystals, dust, or smoke are lit directly or indirectly by the sun. This effect can be counteracted by image processing to bring back the details of a hazy image. Unfortunately, the execution time is often long, which prevents deployment in some video or real-time applications. In this paper, we propose to tune several parameters of the Dark Channel Prior (DCP) algorithm combined with the fast guided filter. We evaluate the optimization in terms of execution time, and quantify the output image quality using different image quality metrics.
We have developed a dual-modality imaging approach that combines Electrical Impedance Tomography (EIT) and Optical Coherence Tomography (OCT) to profile the microstructural and physiological status of 3D cell cultures and to offer complementary biological information at optical resolution. In this paper, the design of the dual-modal sensor is described in detail, and a robust algorithm developed for information fusion and image reconstruction is illustrated. The performance of the imaging framework and reconstruction algorithm is evaluated by both numerical simulation and real experiments.
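The abstract does not detail the reconstruction algorithm; as a hedged sketch of a common linearized EIT solver, a one-step Tikhonov-regularized inversion with a random stand-in sensitivity matrix looks like this (in a real system the matrix comes from a forward model of the electrode array):

```python
import numpy as np

rng = np.random.default_rng(7)
m, n = 80, 100                     # measurements, conductivity pixels
J = rng.standard_normal((m, n))    # stand-in sensitivity (Jacobian) matrix
x_true = np.zeros(n)
x_true[40:45] = 1.0                # localized conductivity perturbation
b = J @ x_true + 0.01 * rng.standard_normal(m)

def tikhonov(J, b, lam):
    """One-step regularized reconstruction: x = (J^T J + lam I)^-1 J^T b."""
    return np.linalg.solve(J.T @ J + lam * np.eye(J.shape[1]), J.T @ b)

x_rec = tikhonov(J, b, lam=1.0)
# The reconstruction peaks on the true perturbation support.
print(int(np.argmax(x_rec)))
```

Regularization is essential here because the problem is underdetermined (more pixels than measurements); the complementary OCT channel can supply the structural prior that a plain identity regularizer lacks.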
Considering natural image-forming systems, the anatomical structures of the eyes possessed by vertebrates and arthropods are of two major types. Studying these vision systems gives an opportunity to understand solutions that nature has developed for specific problems. Understanding eye systems in the animal world is also fascinating and important for the bioinspired manufacture of dioptric systems for many advanced and sophisticated instruments. The techniques used to study the anatomical features of a compound eye, such as electron microscopy (SEM, TEM), micro computed tomography (µCT) and histology, have limitations that make them unsuitable for in vivo studies. In vivo imaging tools, especially those with high-resolution imaging capability, can provide insight into visual ecology in greater detail, while reducing the cost and time required to perform experiments. Full-field optical coherence tomography (FF-OCT) is an extension of OCT that uses a high numerical aperture microscope objective to obtain highly resolved en face tomographic images of a biological sample in vivo. We report the application of an in-house developed time-domain full-field optical coherence tomography (TD FF-OCT) system for depth-resolved en face and three-dimensional in vivo imaging of the dragonfly's prominent compound eye.
Lensless on-chip microscopy comprises a simple and compact setup in which the sample is placed close to the imaging sensor and illuminated by a coherent light source. The acquired in-line hologram carries information about the amplitude and phase image of the sample, which can be numerically reconstructed. In contrast to conventional microscopy, the reconstructed images can be numerically refocused at desired focus planes, effectively providing three-dimensional information. For reliable object reconstruction, a proper focus plane must be selected, which can be done automatically with an autofocusing algorithm. Autofocusing algorithms are commonly evaluated on synthetic or experimentally acquired in-line holograms. The former are usually simulated with the same numerical propagation method as is used for reconstruction and cannot represent holograms of truly three-dimensional objects, while experimentally acquired holograms can be affected by measurement noise and model mismatch artefacts. In this paper, we propose an objective evaluation of autofocusing algorithms on in-line holograms simulated by Mie theory and the T-matrix method, which can simulate holograms of truly three-dimensional spherical objects distributed in various spatial positions. We evaluate and compare different autofocusing algorithms in terms of the accuracy of the estimated focus plane and computational efficiency. Finally, we present a proof-of-concept real-time implementation of the autofocusing algorithm based on the open-source PyOpenCL framework. The best-performing implemented autofocusing algorithm achieved an average accuracy of 1.75 μm and required 330 μs per evaluation cycle, resulting in around 20 frames per second for autofocusing a 1024×1024 hologram.
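The paper does not specify which focus metrics were compared. A typical autofocusing scheme of this kind refocuses the hologram over a sweep of distances with the angular spectrum method and maximizes a sharpness metric; the sketch below uses the Tamura coefficient of the gradient magnitude as one common choice, purely as an illustrative assumption.

```python
import numpy as np

def angular_spectrum_propagate(field, z, wavelength, dx):
    """Numerically refocus a complex field over distance z
    (angular spectrum method, evanescent waves suppressed)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0))
    H = np.exp(1j * z * kz)
    return np.fft.ifft2(np.fft.fft2(field) * H)

def tamura_of_gradient(amplitude):
    """One possible focus metric: Tamura coefficient of the
    gradient magnitude (large when edges are sharp)."""
    gy, gx = np.gradient(amplitude)
    g = np.sqrt(gx ** 2 + gy ** 2)
    return np.sqrt(g.std() / g.mean())

def autofocus(hologram, z_range, wavelength, dx):
    """Pick the refocus distance that maximizes the metric."""
    scores = [tamura_of_gradient(
                  np.abs(angular_spectrum_propagate(hologram, z, wavelength, dx)))
              for z in z_range]
    return z_range[int(np.argmax(scores))]
```

A real-time variant, as in the paper's PyOpenCL implementation, would move the FFTs and the metric onto the GPU; the per-distance structure of the sweep stays the same.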
Many production and assembly processes in industry are subject to particulate contamination, which can severely affect the product's function. Detecting these particles can be a challenging task. Curved surfaces in particular place high demands on the illumination system, which can be met either with highly specialized setups or with a highly flexible system. We present a cheap, fast, and versatile detection system capable of both bright-field and dark-field illumination. Our setup offers fast, customized switching between different illumination channels adapted to the sample. Bright-field illumination offers high sensitivity to scratches, whereas dark-field illumination is more sensitive to particles. The system was tested on flat and curved surfaces. Since all components use standard computer interfaces, no additional hardware is needed to connect the system to a single-board computer or workstation.
Analysis of technological and chemical processes under conditions that are difficult and hazardous for humans, with the release of large amounts of energy or the formation of harmful compounds, requires automated control. The article proposes an algorithm for the automated analysis of arcing modes and of the shapes and areas of contact of electric discharges with the surface of an object. The need for automation arises from the large amounts of data generated by high-speed recording of even a single process. To analyze the contact areas of an arc discharge with the surface of an object, the work uses a technique with the following main stages: pre-processing of the data, covering noise reduction, elimination of impulse bursts, contouring of objects, deblurring, and enhancement of the distinguishability of contact areas; and post-processing of the data, covering object analysis, exclusion of small shapes from consideration, examination of inter-frame intersections, merging of shapes, and formation of intensity areas. At the final stage, the ratio of the obtained electric-discharge contact areas is related to the intensity of the emitted and recorded voltage. On a set of test data from the formation of surface layers during electro-plasma formation of metal-oxide layers, captured by a low-resolution high-speed camera, we show the results of applying the proposed method; the dependence and shape of the contact points, and their relationship with the resulting properties of the created coatings, are shown.
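The core pre-/post-processing stages described above can be sketched with standard tools; this is a minimal illustration, not the authors' implementation, and the threshold and minimum-area parameters are assumed values.

```python
import numpy as np
from scipy import ndimage

def contact_areas(frame, intensity_thresh=128, min_area=20):
    """Segment bright discharge-contact regions in one frame and
    return their pixel areas, discarding small artefacts."""
    # Pre-processing: a median filter suppresses impulse bursts/noise.
    smooth = ndimage.median_filter(frame, size=3)
    mask = smooth >= intensity_thresh
    # Post-processing: label connected regions, drop small shapes.
    labels, n = ndimage.label(mask)
    areas = ndimage.sum(mask, labels, index=range(1, n + 1))
    return [a for a in areas if a >= min_area]

def interframe_overlap(mask_a, mask_b):
    """Inter-frame intersection area, usable to track a contact
    spot from one high-speed frame to the next."""
    return int(np.logical_and(mask_a, mask_b).sum())
```

The per-frame areas and inter-frame overlaps would then feed the final stage, where contact-area ratios are related to the recorded discharge intensity.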
Currently, VR and AR headsets are becoming widespread. Beyond entertainment, these technologies are increasingly used in education, science, medicine, and engineering. Remote maintenance monitoring technologies can significantly expand the use of services for the remote maintenance and repair of complex technical systems by highly qualified specialists. However, deploying such systems across a wide range of tasks is complicated by the wide variety of existing solutions of this kind and their high price. In this paper, we investigate a new smartphone-based augmented reality device for industrial tasks. The article describes augmented reality glasses based on a mobile phone (the "DAR" system), which combine the functions of VR and AR technologies with a low cost of the final product. The proposed solution combines a headset with a smartphone: information about the surrounding space reaches the smartphone screen from stereo cameras equipped with autofocus, and augmented reality elements are overlaid on this image. Images captured in such a system suffer from low contrast and faint color. We present a new image enhancement algorithm based on multi-scale block-rooting processing. This solution expands the scope of AR technology for remote maintenance of complex technical systems by highly qualified specialists at remote sites, since a smartphone and a DAR headset will suffice. Experimental results are presented to illustrate the performance of the proposed algorithm on real and synthesized image datasets.
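Classical alpha-rooting enhancement compresses transform magnitudes |F| to |F|^α with 0 < α < 1, keeping the phase, which boosts relative high-frequency content. The sketch below combines alpha-rooting over blocks at several sizes as a plausible reading of "multi-scale block-rooting"; the block sizes and α value are assumptions, not the paper's parameters.

```python
import numpy as np

def alpha_root_block(block, alpha=0.85):
    """Alpha-rooting of one block: |F| -> |F|**alpha, phase kept."""
    F = np.fft.fft2(block)
    mag, phase = np.abs(F), np.angle(F)
    return np.real(np.fft.ifft2((mag ** alpha) * np.exp(1j * phase)))

def multiscale_block_rooting(img, block_sizes=(16, 32), alpha=0.85):
    """Hypothetical multi-scale variant: alpha-root non-overlapping
    blocks at several sizes and average the per-scale results."""
    h, w = img.shape
    acc = np.zeros((h, w), dtype=float)
    for b in block_sizes:
        out = np.zeros_like(acc)
        for i in range(0, h, b):
            for j in range(0, w, b):
                out[i:i + b, j:j + b] = alpha_root_block(
                    img[i:i + b, j:j + b], alpha)
        acc += out
    acc /= len(block_sizes)
    # Rescale to [0, 255] for display.
    acc -= acc.min()
    if acc.max() > 0:
        acc *= 255.0 / acc.max()
    return acc
```

Smaller blocks adapt the enhancement locally (helpful for the low-contrast camera feed described above), while larger blocks preserve global tonal balance; averaging the scales trades off the two.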