We demonstrate a simple yet highly effective uncertainty quantification method for neural networks solving inverse imaging problems. We built forward-backward cycles using the physical forward model and the trained network, derived the relationship between cycle consistency and the robustness, uncertainty, and bias of network inference, and obtained uncertainty estimators through regression analysis. An XGBoost classifier built on these uncertainty estimators was trained for out-of-distribution detection using artificially noise-injected images, and it successfully generalized to unseen real-world distribution shifts. Our method was validated on out-of-distribution detection in image deblurring and image super-resolution tasks, outperforming other deep neural network-based models.
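The forward-backward cycle described above can be sketched in a few lines: the trained network inverts a measurement, the physical forward model re-synthesizes it, and the per-cycle residuals serve as uncertainty estimators. This is a minimal illustration only; the function names, the number of cycles, and the mean-squared residual as the feature are assumptions, not the paper's exact formulation.

```python
import numpy as np

def cycle_consistency_features(y, network, forward, n_cycles=3):
    """Run forward-backward cycles y -> x_hat -> y_hat -> ... and collect
    per-cycle residuals as candidate uncertainty estimators.

    network : trained inverse model (placeholder callable)
    forward : physical forward model of the imaging system (placeholder)
    """
    feats = []
    cur = y
    for _ in range(n_cycles):
        x_hat = network(cur)           # backward step: network inference
        y_hat = forward(x_hat)         # forward step: physical model
        feats.append(float(np.mean((y_hat - cur) ** 2)))  # cycle residual
        cur = y_hat
    return np.array(feats)
```

In the abstract's pipeline, features of this kind would then be fed to an XGBoost classifier for out-of-distribution detection.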
We introduce GedankenNet, a self-supervised learning model for hologram reconstruction. During its training, GedankenNet leveraged a physics-consistency loss informed by the physical forward model of the imaging process, together with simulated holograms generated from artificial random images with no correspondence to real-world samples. After this experimental-data-free training based on "Gedanken experiments", GedankenNet successfully generalized to experimental holograms upon its first exposure to real-world data, reconstructing the complex fields of various samples. This self-supervised learning framework based on a physics-consistency loss and Gedanken experiments represents a significant step toward developing generalizable, robust and physics-driven AI models in computational microscopy and imaging.
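A physics-consistency loss of the kind described above compares the hologram re-synthesized from the network's predicted complex field against the input hologram. The sketch below assumes angular-spectrum free-space propagation as the forward model and a mean-squared intensity mismatch as the loss; the specific propagator, parameters, and loss form are illustrative assumptions, not GedankenNet's published details.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dz, dx):
    """Free-space propagation via the angular spectrum method (assumed
    physical forward model for in-line holography). dx is the pixel pitch."""
    n, m = field.shape
    fx = np.fft.fftfreq(m, d=dx)
    fy = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    # Evanescent components (arg < 0) are clipped in this simple sketch.
    H = np.exp(2j * np.pi * dz * np.sqrt(np.maximum(arg, 0.0)))
    return np.fft.ifft2(np.fft.fft2(field) * H)

def physics_consistency_loss(pred_field, measured_hologram, wavelength, dz, dx):
    """Mismatch between the hologram re-synthesized from the predicted
    complex field and the (simulated) input hologram."""
    resynth = np.abs(angular_spectrum_propagate(pred_field, wavelength, dz, dx)) ** 2
    return float(np.mean((resynth - measured_hologram) ** 2))
```

Because the loss is computed against simulated holograms generated from random images, no experimental data is needed during training.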
We demonstrate a deep learning-based framework, called Fourier Imager Network (FIN), which achieves unparalleled generalization in end-to-end phase recovery and hologram reconstruction. We used Fourier transform modules in the FIN architecture, which process the spatial frequencies of the input images with a global receptive field and bring strong regularization and robustness to the hologram reconstruction task. We validated FIN by training it on human lung tissue samples and blindly testing it on human prostate, salivary gland, and Pap smear samples. FIN exhibits superior internal and external generalization compared with existing hologram reconstruction models, while also achieving a ~50-fold acceleration in image inference speed.
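The core idea of a Fourier transform module with a global receptive field can be illustrated as a spectral mixing step: transform the image to the frequency domain, apply per-frequency weights, and transform back. This toy sketch uses a single learnable weight map per channel; FIN's actual module design is not specified in the abstract, so this is only an assumed simplification.

```python
import numpy as np

def fourier_module(x, weights):
    """Toy spectral-mixing layer: every output pixel depends on every input
    pixel (a global receptive field), because the pointwise product in the
    frequency domain corresponds to a full-image convolution in space."""
    X = np.fft.fft2(x)
    return np.real(np.fft.ifft2(X * weights))
```

With `weights` set to all ones the module is the identity; training would adjust the weights to emphasize or suppress particular spatial frequencies.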
We report a recurrent neural network (RNN)-based cross-modality image inference framework, termed Recurrent-MZ+, that explicitly incorporates two or three 2D fluorescence images, acquired at different axial planes, to rapidly reconstruct fluorescence images at arbitrary axial positions within the sample volume, matching the 3D image of the same sample acquired with a confocal scanning microscope. We demonstrated the efficacy of Recurrent-MZ+ on transgenic C. elegans samples; using three wide-field fluorescence images as input, the sample volume reconstructed by Recurrent-MZ+ mitigates the deformations caused by the anisotropic point spread function of wide-field microscopy and matches the ground truth confocal image stack of the sample.
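The recurrent fusion of a few axial input planes into a prediction at an arbitrary target depth can be caricatured as a hidden state that is updated once per input plane, with planes closer to the target depth contributing more. This is a toy stand-in for Recurrent-MZ+'s convolutional recurrent units; the update rule and distance weighting below are invented for illustration.

```python
import numpy as np

def recurrent_fuse(planes, target_z, zs):
    """Toy recurrent update over 2D input planes: the hidden state h mixes
    in each plane with a weight that decays with its axial distance from
    the target plane (illustrative rule, not the published architecture)."""
    h = np.zeros_like(planes[0])
    for plane, z in zip(planes, zs):
        w = np.exp(-abs(z - target_z))      # closer planes contribute more
        h = (1.0 - 0.5 * w) * h + 0.5 * w * plane
    return h
```

Sweeping `target_z` over the axial range of interest would produce an output stack from only two or three wide-field inputs, which is the regime the abstract describes.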