Optical diffraction tomography (ODT) has demonstrated its potential for revealing subcellular structures and quantitative compositions of living cells without chemical staining. Recently, we developed a deep-learning-based algorithm that reconstructs the 3D refractive index (RI) map of a cell from a single raw interferogram measured with an angle-multiplexed ODT system. Using this system, we demonstrated a high-throughput 3D image cytometry method in which a microfluidic chip for controlling cell flow is integrated into the ODT system. By flowing cells through the chip and minimizing the camera exposure time, we achieve 3D imaging of over 6,000 cells per second.
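The core computational step described above is a network that maps one 2D interferogram to a 3D RI volume. The following is a minimal sketch, assuming PyTorch; the layer widths, names, and output depth are illustrative placeholders and not the authors' actual architecture.

```python
# Sketch: map a single raw interferogram (1 x H x W) to a 3D RI stack (D x H x W).
# Assumptions: PyTorch; architecture details are illustrative only.
import torch
import torch.nn as nn

class Interferogram2RI(nn.Module):
    """Illustrative network: one interferogram in, a stack of axial RI slices out."""
    def __init__(self, depth=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        )
        # Predict `depth` axial slices as output channels, then treat them as the z-axis.
        self.head = nn.Conv2d(64, depth, 3, padding=1)

    def forward(self, interferogram):
        feats = self.encoder(interferogram)
        volume = self.head(feats)      # (N, depth, H, W)
        return volume.unsqueeze(1)     # (N, 1, depth, H, W): a 3D RI map per input frame

# Example: one 512x512 interferogram -> 32-slice RI volume
net = Interferogram2RI(depth=32)
ri = net(torch.randn(1, 1, 512, 512))
print(ri.shape)  # torch.Size([1, 1, 32, 512, 512])
```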
Optical diffraction tomography (ODT) is a powerful label-free three-dimensional (3D) quantitative imaging technique. However, current ODT modalities require around 50 illumination angles to reconstruct a 3D refractive index (RI) map, which limits the imaging speed and restricts further applications. Here we propose a deep-learning approach that reduces the number of illumination angles and improves the imaging speed of ODT. With a 3D U-Net architecture and a large training dataset covering different cell species, we reduce the number of illumination angles from 49 to 5 while maintaining comparable reconstruction quality, giving ODT the ability to reveal high-speed biological dynamics.
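To make the sparse-angle idea concrete, here is a minimal sketch of a 3D U-Net-style model that refines a 5-angle initial reconstruction toward dense-angle quality. It assumes PyTorch; the two-level depth and channel counts are illustrative, not the paper's exact model.

```python
# Sketch: 3D U-Net mapping a sparse-angle (e.g., 5-angle) initial RI reconstruction
# to a refined RI volume. Assumptions: PyTorch; architecture details are illustrative.
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv3d(c_in, c_out, 3, padding=1), nn.BatchNorm3d(c_out), nn.ReLU(inplace=True),
        nn.Conv3d(c_out, c_out, 3, padding=1), nn.BatchNorm3d(c_out), nn.ReLU(inplace=True),
    )

class SparseAngleUNet3D(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(1, 16)
        self.enc2 = conv_block(16, 32)
        self.pool = nn.MaxPool3d(2)
        self.up   = nn.ConvTranspose3d(32, 16, 2, stride=2)
        self.dec1 = conv_block(32, 16)          # 16 skip channels + 16 upsampled channels
        self.out  = nn.Conv3d(16, 1, 1)

    def forward(self, x):
        e1 = self.enc1(x)                                    # skip connection
        e2 = self.enc2(self.pool(e1))
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))
        return self.out(d1)                                  # refined RI volume

# Sparse-angle reconstruction (N, 1, D, H, W) -> refined volume of the same size
net = SparseAngleUNet3D()
refined = net(torch.randn(1, 1, 32, 64, 64))
print(refined.shape)  # torch.Size([1, 1, 32, 64, 64])
```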
I will discuss the emerging trend in computational imaging of training deep neural networks (DNNs) for image formation. The DNNs are trained on examples consisting of pairs of known objects and their corresponding raw images; the known objects are drawn from databases such as ImageNet, Faces-LFW and MNIST, converted to complex amplitude maps, and displayed on a spatial light modulator (SLM). After training, the DNNs are capable of recovering unknown objects, i.e., objects not included in the training sets, from their raw images in several scenarios: (1) phase objects retrieved from intensity after lensless propagation; (2) phase objects retrieved from intensity after lensless propagation at extremely low photon counts; and (3) amplitude objects retrieved from in-focus intensity after propagation through a strong scatterer. Recovery is robust to disturbances in the optical system, such as additional defocus or various misalignments. This suggests that DNNs may form robust internal models of the physics of light propagation and detection and may generalize priors from the training set. In the talk I will discuss in more detail various methods of incorporating the physics into DNN training, and how DNN architecture and "hyper-parameters" (i.e., depth, number of units at each depth, presence or absence of skip connections, etc.) influence the quality of image recovery.
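One way the physics enters the training loop is through a forward model that simulates the raw measurement. Below is a minimal sketch, assuming PyTorch, in which angular-spectrum free-space propagation generates (phase object, raw intensity) training pairs for a phase-retrieval network; the wavelength, pixel pitch, propagation distance, and the tiny CNN are illustrative placeholders, not the speaker's actual setup.

```python
# Sketch: physics-based forward model (angular-spectrum propagation) used to create
# training pairs for a phase-retrieval DNN. Assumptions: PyTorch; all parameters and
# the network are illustrative only.
import torch
import torch.nn as nn

def angular_spectrum(field, wavelength, dx, z):
    """Propagate a complex field by distance z (meters) with the angular-spectrum method."""
    n = field.shape[-1]
    fx = torch.fft.fftfreq(n, d=dx)
    FX, FY = torch.meshgrid(fx, fx, indexing="ij")
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * torch.pi / wavelength * torch.sqrt(torch.clamp(arg, min=0.0))
    H = torch.exp(1j * kz * z)                        # free-space transfer function
    return torch.fft.ifft2(torch.fft.fft2(field) * H)

def raw_intensity(phase, wavelength=0.633e-6, dx=8e-6, z=0.05):
    """Phase object -> intensity recorded after lensless propagation (illustrative values)."""
    field = torch.exp(1j * phase)                     # unit-amplitude phase object
    return angular_spectrum(field, wavelength, dx, z).abs() ** 2

# Tiny illustrative network: raw intensity in, phase estimate out.
net = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

phase_gt = torch.rand(8, 1, 64, 64) * 2 * torch.pi    # stand-in for database-derived objects
meas = raw_intensity(phase_gt.squeeze(1)).unsqueeze(1).float()
loss = nn.functional.mse_loss(net(meas), phase_gt)    # supervise against the known phase
loss.backward()
opt.step()
```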