3D differential phase contrast (3D DPC) microscopy uses asymmetric illumination patterns and axial scanning to recover volumetric maps of refractive index. To avoid the expense of automated axial scanning, we demonstrate 3D DPC without a z-stage by hand-spinning the microscope’s defocus knob to scan the object axially while updating illumination patterns on the LED-array microscope. We formulate an inverse-problem optimization that retrieves the sample’s volumetric information from measurements at unknown axial positions by jointly solving for each measurement’s axial position. Finally, we explore how to optimize the LED-array illumination patterns for varying axial sampling rates.
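The joint-estimation step can be illustrated with a toy model (entirely hypothetical: the actual method uses a 3D DPC transfer-function forward model, whereas here defocus is stood in for by a circular shift of a 1D signal). Given the current volume estimate, each measurement's unknown axial position can be recovered by a grid search over candidate positions:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(64)        # current estimate of the (1D toy) volume

def forward(x, z):
    # hypothetical axial model: defocus stood in for by a circular shift
    return np.roll(x, z)

z_true = 7                         # the knob position we pretend not to know
y = forward(x, z_true) + 0.01 * rng.standard_normal(64)

z_grid = np.arange(-15, 16)        # candidate axial positions
residuals = [np.sum((forward(x, z) - y) ** 2) for z in z_grid]
z_est = int(z_grid[np.argmin(residuals)])
print(z_est)                       # → 7
```

In a joint solver, this position-estimation step would alternate with an update of the volume estimate given the current position estimates.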
Computational illumination microscopy has enabled imaging of a sample’s phase, spatial features beyond the diffraction limit (Fourier Ptychography), and 3D refractive index from intensity-based measurements captured on an LED array microscope. However, these methods require up to hundreds of images, limiting their applicability, particularly for live-sample imaging. Here, we demonstrate how the experimental design of a computational microscope can be optimized using data-driven methods to learn a compressed set of measurements, thereby improving the temporal resolution of the system. Specifically, we consider the image reconstruction as a physics-based network and learn the experimental design to optimize the system’s overall performance for a desired temporal resolution. Finally, we will discuss how the system’s experimental design can be learned on synthetic training data.
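As a rough illustration of learning an experimental design on synthetic data (a toy linear model of our own invention, not the paper's physics-based network), one can parameterize a compressed set of measurements as a small matrix and descend on the downstream reconstruction error; the learned measurement rows should align with the subspace the synthetic samples occupy:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((8, 200))                 # synthetic training samples
U = np.linalg.qr(rng.standard_normal((8, 2)))[0]  # a hidden 2D subspace
X = U @ (U.T @ X)                                 # samples live in that subspace

def loss(A):
    Y = A @ X                      # simulated coded measurements
    X_hat = np.linalg.pinv(A) @ Y  # simple linear reconstruction
    return np.mean((X_hat - X) ** 2)

A0 = rng.standard_normal((2, 8))   # initial (random) design: 2 measurements
A = A0.copy()
loss0 = loss(A0)
lr, eps = 0.05, 1e-5
for _ in range(200):
    g = np.zeros_like(A)
    for i in range(A.size):        # finite-difference gradient of the loss
        dA = np.zeros_like(A)
        dA.flat[i] = eps
        g.flat[i] = (loss(A + dA) - loss(A)) / eps
    A -= lr * g
```

The real system replaces the linear reconstruction with an unrolled physics-based network and uses automatic differentiation rather than finite differences, but the design-learning loop has this shape.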
The goal of this work is to incorporate Convolutional Neural Networks (CNNs) into the 3D deconvolution process without training. CNNs are well suited to the problem of 2D deconvolution; however, training a CNN on 3D volumes requires excessive time and impractical amounts of training data. To circumvent these problems, we use a CNN architecture as if it were a handcrafted prior, similar to the Deep Image Prior approach. Using this method, we achieve high SSIM and PSNR metrics relative to other modern techniques for deconvolving through-focus fluorescence measurements to recover a 3D volume, with no training data and minimal hyperparameter tuning.
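The idea can be sketched in one dimension (a toy stand-in: a two-layer net with a fixed random input, fit by hand-coded gradient descent to a Gaussian-blurred signal; the actual work uses a full CNN on 3D through-focus stacks). The untrained network's architecture alone acts as the regularizer for the deconvolution:

```python
import numpy as np

rng = np.random.default_rng(0)
n, h = 64, 16
x_true = np.zeros(n)
x_true[20:28] = 1.0                          # simple block object
i = np.arange(n)
K = np.exp(-0.5 * ((i[:, None] - i[None, :]) / 2.0) ** 2)
K /= K.sum(axis=1, keepdims=True)            # Gaussian blur forward model
y = K @ x_true                               # blurred measurement

z0 = rng.standard_normal(8)                  # fixed random network input
W1 = 0.1 * rng.standard_normal((h, 8))       # untrained "network" weights
W2 = 0.1 * rng.standard_normal((n, h))

def net():
    return W2 @ np.maximum(W1 @ z0, 0.0)     # tiny 2-layer relu net

loss0 = 0.5 * np.sum((K @ net() - y) ** 2)
lr = 0.05
for _ in range(3000):                        # fit the net to the measurement
    a = W1 @ z0
    r = np.maximum(a, 0.0)
    x_hat = W2 @ r
    g = K.T @ (K @ x_hat - y)                # grad of 0.5||K x - y||^2 in x
    gW2 = np.outer(g, r)                     # backprop by hand
    gW1 = np.outer((W2.T @ g) * (a > 0), z0)
    W2 -= lr * gW2
    W1 -= lr * gW1

x_hat = net()                                # deblurred estimate
loss_final = 0.5 * np.sum((K @ x_hat - y) ** 2)
```

As in Deep Image Prior, the only "prior" is the network parameterization itself; no training pairs are ever used, and in practice early stopping controls overfitting to noise.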
3D refractive index imaging methods usually rely on a weak-scattering approximation that does not allow thick samples to be imaged accurately. Recent methods such as 3D Fourier ptychographic microscopy (FPM) instead use multiple-scattering models, which allow thicker objects to be imaged. In practice, the illumination-side coding of 3D FPM requires redundant information and may produce inaccurate reconstructions for thick samples. Here, we propose augmenting 3D FPM with detection-side coding using a spatial light modulator (SLM), and we optimize the SLM coding strategy with physics-based machine learning to obtain pupil code designs tailored for 3D reconstruction. We compare our learned designs to random, defocus-based, and Zernike-aberration-based pupil codes in simulated and experimental results.
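Detection-side coding itself is simple to simulate under a scalar Fourier-optics model (a minimal sketch with assumed parameters; the NA cutoff, code strengths, and the weak-phase object below are all illustrative). The SLM multiplies the pupil by a programmable phase before intensity detection:

```python
import numpy as np

n = 64
fx = np.fft.fftfreq(n)
FX, FY = np.meshgrid(fx, fx)
aperture = (FX**2 + FY**2) < 0.25**2             # circular pupil (NA cutoff)

def coded_intensity(field, phase):
    pupil = aperture * np.exp(1j * phase)        # SLM writes `phase` on pupil
    return np.abs(np.fft.ifft2(pupil * np.fft.fft2(field))) ** 2

rng = np.random.default_rng(0)
field = np.exp(1j * 0.5 * rng.standard_normal((n, n)))  # weak phase object
defocus_code = 20.0 * (FX**2 + FY**2)            # defocus-like quadratic code
random_code = rng.uniform(0, 2 * np.pi, (n, n))  # random-phase baseline
I_def = coded_intensity(field, defocus_code)
I_rnd = coded_intensity(field, random_code)
```

Learning a code amounts to treating `phase` as a free parameter and optimizing it through a differentiable version of this forward model and the 3D reconstruction.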
Fourier Ptychographic Microscopy (FPM) and Differential Phase Contrast (DPC) are quantitative phase imaging (QPI) methods that recover the complex transmittance function of a sample through coded illumination measurements and phase retrieval optimization. The success of these methods relies on acquiring several, or possibly hundreds of, illumination-encoded measurements. The multi-shot nature of such methods limits their temporal resolution. Similar to motion-induced blur during a long photographic exposure, motion occurring during these acquisitions causes spatial distortion and errors in the reconstructed phase, which inhibits these methods' ability to image fast-moving live samples.
Here we present a novel approach to correct for motion during QPI capture that relies on motion navigation to register measurements together prior to phase retrieval. The different illumination patterns required for QPI give the measurements different contrasts, which makes it difficult to use standard registration approaches to estimate complex sample motion directly from the measurements. Instead, we use a color-multiplexed navigator signal (red) comprising a constant illumination pattern, and we leverage a color camera to separate it from the primary QPI information (green). The reliable motion estimate allows measurements to be shared across time points through image registration, enabling a full set of measurements for a phase retrieval problem to be solved at each time point. We demonstrate proof-of-concept experimental results in which blurring due to live sample motion (a swimming zebrafish, cell motion, and organelle movement) is reduced.
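The navigator idea can be sketched with rigid translation (a toy: pure integer-pixel shifts recovered by phase correlation; real sample motion is more complex and the registration used in practice is correspondingly more general). Motion estimated from the constant-illumination red channel is applied to register the varying-contrast green QPI measurement:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
red0 = rng.standard_normal((n, n))          # navigator (red) at time 0
green_base = rng.standard_normal((n, n))    # QPI channel (green) content
shift = (3, -5)                             # sample motion, in pixels
red1 = np.roll(red0, shift, axis=(0, 1))    # both channels move together
green1 = np.roll(green_base, shift, axis=(0, 1))

# phase correlation between the two navigator frames
F = np.fft.fft2(red0) * np.conj(np.fft.fft2(red1))
corr = np.fft.ifft2(F / (np.abs(F) + 1e-9)).real
dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
dy, dx = (int(d) - n if d > n // 2 else int(d) for d in (dy, dx))
registered = np.roll(green1, (dy, dx), axis=(0, 1))  # undo the motion
print(dy, dx)                               # → -3 5
```

Because the navigator has constant contrast across time points, its registration is reliable even when the QPI channel's contrast changes with each illumination pattern.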