We present a super-resolution framework for coherent imaging systems using a generative adversarial network. The framework requires only a single low-resolution input image and performs the resolution enhancement in a single feed-forward step. To validate its efficacy, we used both a lensfree holographic imaging system with pixel-limited resolution and a lens-based holographic imaging system with diffraction-limited resolution. We demonstrated that, for both the pixel-limited and the diffraction-limited coherent imaging systems, our method effectively enhances the image resolution of the tested biological samples. This data-driven super-resolution framework is broadly applicable to various coherent imaging systems.
KEYWORDS: Holography, Pathology, Color imaging, Imaging systems, Microscopy, Tissues, 3D image reconstruction, Digital holography, RGB color model, Image processing
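The single feed-forward inference step described above can be sketched in a few lines; the following is a minimal illustration assuming a trained PyTorch generator (the TorchScript checkpoint name, input shape, and normalization are placeholders, not the authors' exact implementation):

    import torch

    # Load a trained generator network (hypothetical TorchScript checkpoint).
    generator = torch.jit.load("generator.pt").eval()

    # Placeholder for a single low-resolution coherent image: (batch, channel, H, W).
    low_res = torch.rand(1, 1, 256, 256)

    # One non-iterative forward pass performs the resolution enhancement.
    with torch.no_grad():
        high_res = generator(low_res)

Because inference is a single forward pass, no iterative optimization or parameter tuning is needed at run time.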
We present a deep learning-based, high-throughput, accurate colorization framework for holographic imaging systems. Using a conditional generative adversarial network (GAN), this method removes missing-phase-related spatial artifacts from a single hologram. Compared to the absorbance spectrum estimation method, the current state-of-the-art approach to color holographic reconstruction, this framework achieves similar performance while requiring 4-fold fewer input images and 8-fold less imaging and processing time. The presented method can effectively increase the throughput of color holographic microscopy, opening up possibilities for histopathology in resource-limited environments.
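For readers unfamiliar with conditional GANs, the sketch below shows the generic loss structure that such colorization frameworks typically combine: a pix2pix-style adversarial term plus a pixel-wise term. The network handles, loss weighting, and function names are illustrative assumptions, not the authors' exact configuration:

    import torch
    import torch.nn.functional as F

    def generator_loss(discriminator, hologram, colorized, target, lambda_pix=100.0):
        # Adversarial term: push the discriminator, conditioned on the input
        # hologram, to label the generator's colorized output as real.
        pred_fake = discriminator(hologram, colorized)
        adv = F.binary_cross_entropy_with_logits(pred_fake, torch.ones_like(pred_fake))
        # Pixel-wise term: keep the colorized output close to the ground truth.
        pix = F.l1_loss(colorized, target)
        return adv + lambda_pix * pix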
We demonstrate a deep learning-based technique that digitally stains label-free tissue sections imaged by a holographic microscope. Our trained deep neural network uses quantitative phase microscopy images to generate images equivalent to brightfield microscopy images of the same field of view of the specimen after histochemical staining. We demonstrate the efficacy of this technique with different tissue-stain combinations involving human skin, kidney, and liver tissue, stained with hematoxylin and eosin, Jones' stain, and Masson's trichrome stain, respectively, generating images of equivalent quality to the brightfield microscopy images of the histochemically stained corresponding specimens.
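Conceptually, the trained network maps a single-channel quantitative phase image to a three-channel brightfield-like image; a minimal inference sketch follows (the checkpoint and file names, and the output normalization, are assumptions for illustration):

    import numpy as np
    import torch

    net = torch.jit.load("virtual_staining_net.pt").eval()  # hypothetical checkpoint

    phase = np.load("qpi_phase.npy")                  # quantitative phase map (radians)
    x = torch.from_numpy(phase).float()[None, None]   # shape (1, 1, H, W)

    with torch.no_grad():
        rgb = net(x).clamp(0.0, 1.0)                  # (1, 3, H, W), brightfield-like RGB

    # Convert to an 8-bit image for display or archiving.
    stained = (rgb[0].permute(1, 2, 0).numpy() * 255).astype(np.uint8)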
We report a generative adversarial network (GAN)-based framework to super-resolve both pixel-limited and diffraction-limited images acquired by coherent microscopy. We experimentally demonstrate a resolution enhancement factor of 2-6× for a pixel-limited imaging system and 2.5× for a diffraction-limited imaging system, using lung tissue sections and Papanicolaou (Pap) smear slides. The efficacy of the technique is demonstrated both quantitatively and qualitatively, the latter by direct visual comparison between the network's output images and the corresponding high-resolution images. Using this data-driven technique, the resolution of coherent microscopy can be improved to substantially increase the imaging throughput.
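The quantitative part of such a comparison is typically carried out with standard full-reference image metrics; a short sketch using scikit-image is given below (file names are placeholders, and these metrics are a common choice rather than necessarily the authors' exact evaluation protocol):

    import numpy as np
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    output = np.load("network_output.npy")          # super-resolved network output
    target = np.load("high_res_ground_truth.npy")   # co-registered high-resolution image

    psnr = peak_signal_noise_ratio(target, output, data_range=1.0)
    ssim = structural_similarity(target, output, data_range=1.0)
    print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.4f}")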
We report a deep learning-based colorization framework for holographic microscopy and demonstrate its efficacy by imaging histopathology slides (Masson's trichrome-stained lung and H&E-stained prostate tissue). Using a generative adversarial network, this framework is trained to eliminate missing-phase-related artifacts. To obtain accurate color information, the pathology slides were imaged under multiplexed illumination at three wavelengths, and the deep network learns to demultiplex and project the holographic images from the three color channels into the RGB color space, achieving high color fidelity. Our method dramatically simplifies the data acquisition and shortens the processing time, which is important for, e.g., digital pathology in resource-limited settings.
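As a point of reference for what the network learns, a classical pipeline would project the three reconstructed wavelength channels onto RGB with a fixed linear transform, as sketched below. The 3×3 matrix here is a placeholder identity (a calibrated color-correction matrix would be measured for the actual light sources), whereas the trained network replaces this step with a learned, spatially aware mapping:

    import numpy as np

    # Intensity reconstructions at the three illumination wavelengths, each (H, W).
    ch_blue, ch_green, ch_red = [np.load(f"recon_{w}nm.npy") for w in (470, 530, 625)]

    # Fixed wavelength-to-RGB projection (placeholder values).
    M = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])

    stacked = np.stack([ch_red, ch_green, ch_blue], axis=-1)   # (H, W, 3)
    rgb = np.clip(stacked @ M.T, 0.0, 1.0)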
We present a cross-modality super-resolution microscopy method based on the generative adversarial network (GAN) framework. Using a trained convolutional neural network, our method takes a low-resolution image acquired with one microscopic imaging modality and super-resolves it to match the resolution of an image of the same sample captured with another, higher-resolution microscopy modality. This cross-modality super-resolution method is purely data-driven, i.e., it does not rely on any knowledge of the image formation model or the point spread function. First, we demonstrated the success of our method by super-resolving wide-field fluorescence microscopy images captured with a low-numerical-aperture (NA = 0.4) objective to match the resolution of images captured with a higher-NA objective (NA = 0.75). Next, we applied our method to confocal microscopy to super-resolve closely spaced nanoparticles and Histone 3 sites within HeLa cell nuclei, matching the resolution of stimulated emission depletion (STED) microscopy images of the same samples. Our method was also verified by super-resolving diffraction-limited total internal reflection fluorescence (TIRF) microscopy images, matching the resolution of TIRF-SIM (structured illumination microscopy) images of the same samples, which revealed endocytic protein dynamics in SUM159 cells and amnioserosa tissues of a Drosophila embryo. The super-resolved object features in the network output show strong agreement with the ground truth SIM reconstructions, which were synthesized using 9 diffraction-limited TIRF images, each with structured illumination. Beyond resolution enhancement, our method also offers an extended depth-of-field and improved signal-to-noise ratio (SNR) in the network-inferred images compared with the corresponding ground truth images.
KEYWORDS: Microscopy, Super resolution, Luminescence, Image processing, Image resolution, Confocal microscopy, Neural networks, Diffraction, Signal to noise ratio
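Training such a cross-modality network requires precisely co-registered low- and high-resolution image pairs; a common sub-step is sub-pixel shift estimation by phase cross-correlation, sketched here with scikit-image (an assumed pre-processing step, not necessarily the authors' full registration pipeline):

    import numpy as np
    from scipy.ndimage import shift as nd_shift
    from skimage.registration import phase_cross_correlation

    low_res_up = np.load("low_res_upsampled.npy")   # LR image upsampled to the HR grid
    high_res = np.load("high_res.npy")              # target high-resolution image

    # Estimate the residual sub-pixel translation between the two modalities.
    offset, _, _ = phase_cross_correlation(high_res, low_res_up, upsample_factor=100)

    # Shift the low-resolution image into registration with the high-resolution image.
    aligned = nd_shift(low_res_up, offset)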
We present a deep learning-based framework for super-resolution image transformations across multiple fluorescence microscopy modalities. Using a neural network trained within a generative adversarial network (GAN) framework, a single low-resolution image is transformed into a high-resolution image that surpasses the diffraction limit. The deep network's output also demonstrates an improved signal-to-noise ratio and extended depth-of-field. This framework is solely data-driven: it does not rely on any physical model of the image formation process and instead learns a statistical transformation from the training image datasets. The inference process is non-iterative and does not require parameter sweeps to achieve optimal results, in contrast to state-of-the-art deconvolution methods. The success of this framework is demonstrated by super-resolving wide-field images captured with low-numerical-aperture objective lenses to match the resolution of images captured with high-numerical-aperture objectives. In another example, we demonstrate the transformation of confocal microscopy images into images that match the performance of stimulated emission depletion (STED) microscopy, by super-resolving the distributions of Histone 3 sites within cell nuclei. We also applied this framework to total internal reflection fluorescence (TIRF) microscopy and super-resolved TIRF images to match the resolution of TIRF-based structured illumination microscopy (TIRF-SIM). Our super-resolved TIRF images/movies reveal endocytic protein dynamics in SUM159 cells and amnioserosa tissues of a Drosophila embryo, closely matching TIRF-SIM images/movies of the same samples. Our experimental results demonstrate that the presented data-driven super-resolution approach generalizes to new types of images and super-resolves objects that were not present in the training stage.
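The reported SNR improvement can be quantified with a simple region-based estimate, e.g., the mean foreground signal over the standard deviation of a background region; the mask files below are illustrative assumptions:

    import numpy as np

    img = np.load("network_output.npy")
    signal_mask = np.load("signal_mask.npy").astype(bool)          # labeled structures
    background_mask = np.load("background_mask.npy").astype(bool)  # empty region

    snr = (img[signal_mask].mean() - img[background_mask].mean()) / img[background_mask].std()
    print(f"Estimated SNR: {snr:.1f}")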
To digitally decode the phase and amplitude images of a sample from its hologram, autofocusing and phase recovery steps are required, both of which are, in general, computationally challenging. Here, we demonstrate fast and robust autofocusing and phase recovery performed simultaneously using a deep convolutional neural network (CNN). This CNN is trained with pairs of randomly defocused back-propagated holograms and their corresponding in-focus, phase-recovered images (used as ground truth). After its training, the CNN takes a single back-propagated hologram and outputs an extended depth-of-field (DOF) complex-valued image, in which all the objects or points of interest within the sample volume are autofocused and phase-recovered in parallel. Compared to iterative image reconstruction or a CNN trained using only in-focus images, this new approach achieves a >25-fold increase in image DOF and eliminates the need to autofocus individual points within the sample volume, reducing the computational complexity of holographic image reconstruction from O(nm) to O(1), where n is the number of individual object points within the sample volume and m is the size of the autofocusing search space. We demonstrated the success of this approach by imaging various samples, including aerosols and human breast tissue sections. Our results highlight some of the unique capabilities of deep learning-based image reconstruction methods that are powered by data.
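The back-propagated hologram that serves as the network input is obtained by free-space propagation of the recorded hologram; a compact angular spectrum implementation is sketched below (the wavelength, pixel size, and propagation distance are illustrative values, not the experimental parameters):

    import numpy as np

    def angular_spectrum_propagate(field, wavelength, pixel_size, z):
        """Propagate a complex field over a distance z via the angular spectrum method."""
        ny, nx = field.shape
        fx = np.fft.fftfreq(nx, d=pixel_size)
        fy = np.fft.fftfreq(ny, d=pixel_size)
        FX, FY = np.meshgrid(fx, fy)
        arg = 1.0 / wavelength**2 - FX**2 - FY**2
        # Evanescent components (arg < 0) are discarded; propagating ones get the
        # usual phase factor exp(i*2*pi*z*sqrt(1/lambda^2 - fx^2 - fy^2)).
        H = np.where(arg > 0, np.exp(2j * np.pi * z * np.sqrt(np.maximum(arg, 0.0))), 0.0)
        return np.fft.ifft2(np.fft.fft2(field) * H)

    hologram = np.load("hologram.npy").astype(complex)   # recorded in-line hologram
    # Back-propagate toward the sample plane (negative z; values illustrative).
    backprop = angular_spectrum_propagate(hologram, 530e-9, 1.0e-6, -500e-6)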
Mobile-phone-based microscopy often uses 3D-printed opto-mechanical designs and inexpensive optical components that are not optimized for microscopic imaging of specimens. For example, the illumination source is often a battery-powered LED, which can create spectral distortions in the acquired image. Mechanical misalignments of the optical components and the sample holder, as well as inexpensive lenses, lead to spatial distortions at the microscale. Furthermore, mobile phones are equipped with CMOS image sensors with a pixel size of ~1-2 µm, which results in an inferior signal-to-noise ratio compared to benchtop microscopes, which are typically equipped with much larger pixels, e.g., ~5-10 µm.
Here, we demonstrate a supervised learning framework, based on a deep convolutional neural network, for substantial enhancement of smartphone microscope images: it eliminates spectral aberrations, increases the signal-to-noise ratio, and improves the spatial resolution of the acquired images. Once trained, the deep neural network is fixed and rapidly outputs an image matching the quality of a benchtop microscope image in a feed-forward, non-iterative manner, without the need for any modeling of the aberrations in the mobile imaging system. This framework is demonstrated using pathology slides of thin tissue sections and blood smears, validating its superior performance even with highly compressed images, which is especially suitable for telemedicine applications with restricted bandwidth and storage requirements. This deep learning-powered approach is broadly applicable to various mobile microscopy systems used for point-of-care medicine and global health applications.
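A minimal sketch of the feed-forward usage described above, including the highly compressed input case, is shown below (the checkpoint name, image file, and JPEG quality are illustrative assumptions):

    import io

    import numpy as np
    import torch
    from PIL import Image

    net = torch.jit.load("smartphone_enhancer.pt").eval()   # hypothetical checkpoint

    # Simulate bandwidth-limited transmission by re-encoding at low JPEG quality.
    img = Image.open("smartphone_capture.png").convert("RGB")
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=20)
    compressed = Image.open(io.BytesIO(buf.getvalue()))

    x = torch.from_numpy(np.asarray(compressed).copy()).float().permute(2, 0, 1)[None] / 255.0
    with torch.no_grad():
        enhanced = net(x)   # one forward pass, no iterative aberration modeling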