Open Access
25 July 2024

Multiplane quantitative phase imaging using a wavelength-multiplexed diffractive optical processor
Che-Yung Shen, Jingxi Li, Yuhang Li, Tianyi Gan, Langxing Bai, Mona Jarrahi, Aydogan Ozcan
Abstract

Quantitative phase imaging (QPI) is a label-free technique that provides optical path length information for transparent specimens, finding utility in biology, materials science, and engineering. Here, we present QPI of a three-dimensional (3D) stack of phase-only objects using a wavelength-multiplexed diffractive optical processor. Utilizing multiple spatially engineered diffractive layers trained through deep learning, this diffractive processor can transform the phase distributions of multiple two-dimensional objects at various axial positions into intensity patterns, each encoded at a unique wavelength channel. These wavelength-multiplexed patterns are projected onto a single field of view at the output plane of the diffractive processor, enabling the capture of quantitative phase distributions of input objects located at different axial planes using an intensity-only image sensor. Based on numerical simulations, we show that our diffractive processor could simultaneously achieve all-optical QPI across several distinct axial planes at the input by scanning the illumination wavelength. A proof-of-concept experiment with a 3D-fabricated diffractive processor further validates our approach, showcasing successful imaging of two distinct phase objects at different axial positions by scanning the illumination wavelength in the terahertz spectrum. Diffractive network-based multiplane QPI designs can open up new avenues for compact on-chip phase imaging and sensing devices.

1. Introduction

Quantitative phase imaging (QPI) stands as a powerful label-free technique capable of revealing variations in optical path length caused by weakly scattering samples.1–3 QPI enables the generation of high-contrast images of transparent specimens, which are difficult to observe using conventional bright-field microscopy. In recent years, various QPI methodologies have been established, including, e.g., off-axis imaging methods,4,5 phase-shifting methods,6,7 and common-path QPI techniques.8,9 These methods have been instrumental in conducting precise measurements of various cellular dynamics and metabolic activities covering applications in, e.g., cell biology,10,11 pathology,12–14 and biophysics,15 such as the monitoring of real-time cell growth and behavior,16,17 cancer detection,18,19 pathogen sensing,20,21 and the investigation of subcellular structures and processes.22 In addition, QPI also finds applications in materials science and nanotechnology, which include characterizing thin films, nanoparticles, and fibrous materials, revealing their unique optical and physical attributes.23–25

Predominantly, QPI systems are employed to extract quantitative phase information within a two-dimensional (2D) plane by utilizing a monochromatic light source and sensor array. Given that standard optoelectronic sensors are limited to detecting only the intensity of light, advanced approaches utilizing customized illumination schemes and interferometric techniques,26–28 combined with digital postprocessing and reconstruction algorithms, are employed to convert the intensity signals into quantitative phase images. Building on the foundations of 2D QPI approaches, tomographic QPI and optical diffraction tomography methods have also expanded QPI’s capabilities to encompass volumetric imaging.29–32 These techniques typically capture holographic images from multiple illumination angles, which allows for the digital reconstruction of the refractive index distribution across the entire three-dimensional (3D) volume of the sample.

The digital postprocessing techniques in QPI and phase tomography systems have witnessed a paradigm shift, primarily attributed to the recent advancements in the field of artificial intelligence. Specifically, the efficiency of feed-forward neural networks utilizing the parallel processing power of graphics processing units (GPUs) has markedly increased the speed and throughput of image reconstruction in QPI systems.33–35 These deep-learning-based approaches facilitated solutions to various complex tasks of QPI, such as segmentation and classification,36–38 as well as inverse problems including phase retrieval,33,35,39–44 aberration correction,45,46 depth-of-field extension,47,48 and cross-modality image transformations.14,49 Additionally, deep-learning-based techniques have also been used to enhance 3D QPI systems by improving the accuracy and resolution of 3D refractive index reconstructions, utilizing methods such as physical approximant-guided learning,50 recurrent neural networks,51 and neural radiance fields,52 alongside the reduction of coherent noise through generative adversarial networks.53 However, the complexity of the digital neural networks employed in these reconstruction techniques requires substantial computational resources, leading to lower imaging frame rates and increased hardware costs and computing power. These challenges are further intensified in 3D QPI systems due to the necessity of processing a larger set of interferometric images for 3D reconstructions.

Here, we introduce an all-optical, wavelength-multiplexed QPI approach that utilizes diffractive processing of coherent light to obtain the quantitative phase distributions of multiple phase objects distributed at varying axial depths. As illustrated in Fig. 1, our approach employs a diffractive optical processor that is composed of spatially engineered dielectric diffractive layers, optimized collectively via deep learning.54–64 Following the deep-learning-based design phase, these diffractive elements are physically fabricated to perform task-specific modulation of the incoming optical waves, converting the phase profile of each of the phase-only objects located at different axial planes into a distinct intensity distribution at a specific wavelength within its output field of view (FOV). These wavelength-multiplexed intensity distributions can then be recorded, either simultaneously with a multicolor image sensor equipped with a color filter array or sequentially using a monochrome detector by scanning the illumination wavelength, to directly reveal the object phase information through intensity recording at the corresponding wavelength.

Fig. 1

Schematic and working principle of multiplane QPI using a wavelength-multiplexed diffractive processor. The diffractive QPI processor is composed of K diffractive layers, which are jointly optimized using deep learning to simultaneously perform phase-to-intensity transformations for M phase-only objects that are successively positioned along the axial direction (z), while also routing the QPI signals of these objects to their designated wavelength channels within the same output FOV.


Based on this framework, we conducted analyses through numerical simulations and proof-of-concept experiments. Initially, we examined how the overlap of input objects at different axial positions affects the quality of the diffractive output images and all-optical quantitative phase information retrieval. Our results demonstrated that this diffractive QPI framework could achieve near-perfect QPI for phase objects without spatial overlap along the optical axis. Furthermore, even when the input objects are entirely overlapping along the axial direction, our diffractive processor could effectively reconstruct the quantitative phase information of each input plane with high fidelity and minimal cross talk among the imaging channels. Beyond numerical analyses, we also experimentally validated our approach by designing and fabricating a diffractive multiplane QPI processor operating at the terahertz spectrum. Our experimental results closely aligned with the numerical simulations, confirming the practical feasibility of diffractive processors in retrieving the quantitative phase information of specimens across different input planes.

The presented diffractive multiplane QPI design incorporates wavelength multiplexing and passive optical elements, enabling the rapid capture of quantitative phase images of specimens across multiple axial planes. This system’s notable compactness, with an axial dimension of <60 mean wavelengths (λm) of the operational spectral band, coupled with its all-optical phase recovery capability, sets it apart as a competitive analog alternative to traditional digital QPI methods. Additionally, the scalable nature of our design allows its adaptation to different parts of the electromagnetic spectrum by scaling the feature size of each diffractive layer proportional to the illumination wavelength of interest. Our presented framework paves the way for the development of new phase-imaging solutions that can be integrated with focal plane arrays operating at various wavelengths to enable efficient, on-chip imaging and sensing devices, which can be especially valuable for applications in biomedical imaging/sensing, materials science, and environmental analysis, among others.

2. Results

2.1. Design of a Wavelength-Multiplexed Diffractive Processor for Multiplane QPI

Figure 1 presents a diagram of our diffractive multiplane QPI design that is based on wavelength multiplexing. In this setup, multiple transparent samples, which are axially separated, are illuminated by broadband, spatially coherent light. This broadband illumination can be regarded as a combination of plane waves at distinct wavelengths {λ1, λ2, …, λM}, ordered from the longest to the shortest wavelength. Here, M represents the number of spectral channels as well as the number of phase objects/input planes, as each wavelength channel is uniquely assigned to a specific input plane. The illumination fields, denoted as sw (w ∈ {1, 2, …, M}), propagate through multiple phase-only transmissive objects, each exhibiting a unique phase profile {Ψw} at the corresponding input plane Pw. As the illumination light encounters the sample at each plane, it undergoes a phase modulation of e^(jΨw), resulting in multispectral optical fields {iw} at the input aperture of the diffractive processor. The wavelength-multiplexed QPI diffractive processor consists of several modulation layers constructed from dielectric materials, where each layer is embedded with spatially designed diffractive features that have a lateral size of λM/2 and a trainable/optimizable thickness, covering a phase modulation range of 0 to 2π for all the illumination wavelengths. These diffractive layers, along with the input and output planes, are interconnected through optical diffraction in free space (air).

The complex fields iw, resulting from the stacked input planes along the axial (z) direction, are modulated by the diffractive optical processor to yield the output fields {ow}, i.e., ow = D{iw}. The intensity variations of these output fields are then captured by a monochrome image sensor, which sequentially records the QPI signals across the illumination wavelengths. The resulting optical intensity measurement at each illumination wavelength, denoted as Dw, can be expressed as

Eq. (1)

$D_w = |o_w|^2.$
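For concreteness, the scalar forward model described above can be sketched in code. The snippet below is a minimal sketch, assuming angular-spectrum free-space propagation on a uniform grid and equal spacings between consecutive planes; the function names and arguments are illustrative and do not reproduce the authors' implementation detailed in the Appendix.

```python
import torch

def angular_spectrum(field, dz, wavelength, dx):
    """Propagate a complex field over a distance dz via the angular-spectrum method."""
    n = field.shape[-1]
    fx = torch.fft.fftfreq(n, d=dx)                   # spatial frequencies (1/length)
    fxx, fyy = torch.meshgrid(fx, fx, indexing="ij")
    kz_sq = (1.0 / wavelength) ** 2 - fxx ** 2 - fyy ** 2
    kz = 2 * torch.pi * torch.sqrt(torch.clamp(kz_sq, min=0.0))  # drop evanescent waves
    return torch.fft.ifft2(torch.fft.fft2(field) * torch.exp(1j * kz * dz))

def forward_model(psi_stack, layer_phase, wavelength, dx, dz_obj, dz_layer):
    """Unit plane wave -> M phase-only input planes -> K diffractive layers -> intensity."""
    field = torch.ones(psi_stack.shape[-2:], dtype=torch.complex64)
    for psi in psi_stack:                    # input planes P1..PM, front to back
        field = field * torch.exp(1j * psi)              # thin-object modulation e^(j*Psi_w)
        field = angular_spectrum(field, dz_obj, wavelength, dx)
    for phi in layer_phase:                  # diffractive layers L1..LK
        field = field * torch.exp(1j * phi)
        field = angular_spectrum(field, dz_layer, wavelength, dx)
    return field.abs() ** 2                  # D_w = |o_w|^2 recorded by the sensor, Eq. (1)
```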

Considering that the optical intensity Dw recorded by the sensor is influenced by both the power of the illumination and the output diffraction efficiency, we used a straightforward normalization approach60,65 to counteract potential fluctuations caused by power variations and achieve consistent QPI performance. This involves dividing the output measurements (Dw) into two zones: an output signal area S and a reference signal area R. Here, R is designated as a one-pixel-wide border surrounding the edges of Dw. This border is further segmented into M subsections, each labeled Rw (w ∈ {1, 2, …, M}). A given Rw acts as a reference signal (Refw) for the wavelength channel λw, i.e.,

Eq. (2)

$\mathrm{Ref}_w = \frac{1}{N(R_w)} \sum_{(x,y) \in R_w} D_w(x,y),$
where N(Rw) denotes the total number of image sensor pixels located within Rw. Finally, the output quantitative phase image {Φw} of the wavelength-multiplexed diffractive processor can be obtained through a simple normalization step,

Eq. (3)

$\Phi_w = \frac{D_w}{\mathrm{Ref}_w}.$
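As a concrete illustration of Eqs. (2) and (3), the sketch below normalizes a raw sensor frame by the mean of its border reference segment. Splitting the one-pixel-wide border R into M equal segments is an assumption made here for illustration; the actual partitioning of R into the subsections Rw is a design choice of the processor.

```python
import torch

def normalize_output(D_w, w, M):
    """Return Phi_w = D_w / Ref_w over the interior signal area S, per Eqs. (2) and (3).

    The one-pixel-wide border of D_w (the reference area R) is unrolled and split
    into M equal segments, and segment w is taken as R_w (an illustrative choice).
    """
    top, right = D_w[0, :], D_w[1:-1, -1]
    bottom, left = D_w[-1, :].flip(0), D_w[1:-1, 0].flip(0)
    border = torch.cat([top, right, bottom, left])     # unrolled border R
    seg = border.numel() // M
    ref_w = border[w * seg:(w + 1) * seg].mean()       # Ref_w, Eq. (2)
    return D_w[1:-1, 1:-1] / ref_w                     # Phi_w, Eq. (3)
```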

Once the training of our diffractive multiplane QPI processor successfully converges, all the output quantitative phase images Φw obtained at different wavelengths λw are expected to approximate the phase profiles of the input objects Ψw(λw), which can be written as

Eq. (4)

$\Phi_w \approx \Phi_w^{(\mathrm{GT})} = \Psi_w(\lambda_w),$
where the ground-truth phase images Φw^(GT) are defined, without loss of generality, as the object phase distributions Ψw(λw) at the corresponding wavelength λw. Based on the above formulation, our diffractive multiplane QPI processor is optimized to act as an all-optical transformer that simultaneously performs two tasks:

  • 1. A space-to-spectrum transformation that encodes spatial information of input objects at different axial positions into different spectral channels.

  • 2. A phase-to-intensity transformation that converts phase information of input objects into intensity distributions within the output FOV. This approach facilitates the reconstruction of multiplane quantitative phase information using only a single output FOV.

To optimize/train our diffractive multiplane QPI processor, we compiled a training data set of 110,000 images containing 55,000 handwritten images and 55,000 custom-designed grating/fringe-like patterns.66 During training, to form each multiplane object, M images were randomly chosen from these 110,000 training images without replacement and encoded into the phase channels (Ψw) of the M object planes. For this phase encoding, we assumed that all the object planes are composed of the same material and share an identical range of material thickness variations. This assumption ensures that the different object planes, regardless of their individual axial positions, induce a similar magnitude of phase modulation on the incoming complex fields, thereby better mirroring the real-world scenarios encountered in multiplane imaging systems. Based on this assumption, we confined the thickness profile h(x,y) of each phase-only object plane to the same dynamic range of [0, Htr·λM], where Htr stands for the thickness range parameter used during the training, defined based on the shortest wavelength λM. Following this notation, for the w’th object plane, the maximum phase modulation φtr,w of the incoming field at wavelength λw can be written as

Eq. (5)

$\varphi_{\mathrm{tr},w} = \frac{2\pi}{\lambda_w}\left(n_o(\lambda_w) - 1\right) H_{\mathrm{tr}} \lambda_M,$
where no(λw) denotes the refractive index of the object material at λw. Accordingly, we also define a phase contrast parameter αtr,w = φtr,w/π to represent the maximum phase contrast of the objects at wavelength λw, i.e.,

Eq. (6)

$\alpha_{\mathrm{tr},w} = \frac{2}{\lambda_w}\left(n_o(\lambda_w) - 1\right) H_{\mathrm{tr}} \lambda_M.$

As a result, the phase modulation values in each object plane are confined to a range of [0, αtr,w·π]. Without loss of generality, in our numerical analyses, we chose Htr=0.6 and a constant material refractive index (no) of 1.5 for all λw. Consequently, the phase contrast parameter values αtr,w vary according to the operational wavelength, peaking at the shortest wavelength λM, where αtr,M=Htr=0.6. Error backpropagation and stochastic gradient descent were employed to optimize the thickness of the diffractive layers by minimizing a custom loss function L defined based on the mean-squared error (MSE) between the diffractive output quantitative phase images and their ground truth across all the wavelength channels, i.e., $L = \frac{1}{N_w} \sum_{w=1}^{N_w} \mathrm{MSE}\left(\Phi_w^{(\mathrm{GT})}, \Phi_w\right)$. More information about the training process is provided in the Appendix.
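The optimization loop described above can be condensed into the following sketch, which reuses the forward_model and normalize_output helpers from the earlier snippets. The toy data loader, the equal plane spacings, and the Adam optimizer settings are placeholder assumptions for illustration; the actual training details are given in the Appendix.

```python
import torch

M, K, N = 5, 10, 600                          # input planes, diffractive layers, features/side
wavelengths = [1.20, 1.10, 1.00, 0.90, 0.80]  # lambda_1 > ... > lambda_M (illustrative units)
lam_M = wavelengths[-1]
n_o = 1.5                                     # refractive index (constant, as in the text)
H_tr = 0.6                                    # training thickness range parameter
dx = lam_M / 2                                # lateral feature size of lambda_M / 2
dz_obj, dz_layer = 32.0, 5.0                  # axial spacings (illustrative values)
h_max = wavelengths[0] / (n_o - 1)            # thickness covering a full 2*pi phase range

# Toy stand-in for the 110,000-image training set: random object thickness maps.
loader = [torch.rand(M, N, N) * H_tr * lam_M for _ in range(4)]

latent = torch.nn.Parameter(torch.zeros(K, N, N))   # trainable layer thickness variables
opt = torch.optim.Adam([latent], lr=1e-3)

for h_obj in loader:
    h = h_max * torch.sigmoid(latent)         # constrain layer thickness to [0, h_max]
    loss = 0.0
    for w, lam in enumerate(wavelengths):
        psi = 2 * torch.pi / lam * (n_o - 1) * h_obj   # Psi_w(lambda_w) for all M planes
        phi = 2 * torch.pi / lam * (n_o - 1) * h       # layer phase profiles at lambda_w
        D_w = forward_model(psi, phi, lam, dx, dz_obj, dz_layer)
        Phi_w = normalize_output(D_w, w, M)
        loss = loss + torch.mean((Phi_w - psi[w, 1:-1, 1:-1]) ** 2)  # MSE vs. ground truth
    loss = loss / M                           # L = (1/N_w) * sum_w MSE(Phi_w_GT, Phi_w)
    opt.zero_grad(); loss.backward(); opt.step()
```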

To numerically demonstrate the feasibility of our diffractive system, we devised several diffractive multiplane QPI processors, focusing on the impact of input object lateral overlap, i.e., cases where the FOVs of the input objects located at different axial planes overlap in the x and y directions. Lateral overlap results in nonuniform illumination of the deeper planes, which can deteriorate the quality of the QPI reconstructions. To explore the dynamics between adjacent input phase objects during image reconstruction and assess our design’s capability to handle laterally overlapping objects at different axial planes, we trained our models under various assumptions about the lateral separation between the different axial planes. The five input phase objects (one per axial plane) were uniformly distributed along the circumference of a circle of radius r centered on the optical axis, as shown in Fig. S1 in the Supplementary Material. A maximum lateral separation distance R was set to 94.5λm, ensuring that the input FOVs do not extend beyond the boundary of a diffractive layer. Building on this, we developed and trained six distinct diffractive designs by adjusting the lateral separation distance (r) of the input planes across various values spanning {0, 0.2R, 0.4R, 0.6R, 0.8R, R}, as illustrated in Fig. 2(a). These configurations, ranging from complete spatial overlap (r=0) to complete lateral separation (r=R), enabled us to investigate the impact of r on the system’s QPI performance. Apart from the varying input lateral separations, these diffractive multiplane QPI designs share identical input specifications, featuring the same number of input planes M=5, each with Nx(Ψ)×Ny(Ψ)=14×14 phase pixels. All the diffractive designs are composed of 10 diffractive layers, where each diffractive layer has 600×600 trainable diffractive features. The entire diffractive volume spans an axial length of 56.2λm and a lateral size of 262.5λm, forming a compact system that can be monolithically integrated with a complementary metal–oxide–semiconductor image sensor. At the output plane of these diffractive designs, a monochrome image sensor with a pixel size of 5.2λm×5.2λm is assumed. A unit magnification is selected between the object/input planes and the monochrome output/sensor plane, resulting in an output signal region S that matches the size of the input FOV for each axial plane. After their deep-learning-based optimization, the thickness profiles of the diffractive layers for each of the six designs are depicted in Fig. S2 in the Supplementary Material.

Fig. 2

The lateral separation settings of the input objects and the blind testing results of the diffractive multiplane QPI processors. (a) Input volume visualization for six different diffractive designs under different input lateral separation distances (r) spanning {0,0.2R,0.4R,0.6R,0.8R,R}. The color map represents the input object distribution in the axial range. (b) Examples of the blind testing results for multiplane QPI using six different diffractive processors under different input lateral separation distances (r).


2.2. Performance Analysis of Wavelength-Multiplexed Diffractive Processors for Multiplane QPI

After the training stage, we first conducted blind testing of the resulting diffractive processor designs through numerical simulations. To evaluate the multiplane QPI performance of these designs, we constructed a test set comprising 5000 phase-only objects that were never used in the training process. These objects were synthesized by randomly selecting images from the MNIST data set and encoding them into the phase channel of the w’th input object with a dynamic phase range of [0, αtest,w·π]. Mirroring the approach used during the training, the phase ranges in the testing were derived from a thickness range of [0, Htest·λM], consistent across the M input planes, where Htest stands for the testing thickness range parameter. The corresponding diffractive QPI output examples of the blind testing results are visualized in Fig. 2(b). Here, the Pearson correlation coefficient (PCC) was utilized to quantify the performance of these diffractive processor designs.
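For reference, the PCC between a diffractive output and its ground-truth phase image can be computed as follows; this is simply the standard definition of the Pearson correlation coefficient applied to flattened images.

```python
import torch

def pcc(output, target):
    """Pearson correlation coefficient between two images; 1 indicates a perfect match."""
    x = output.flatten() - output.mean()
    y = target.flatten() - target.mean()
    return (x * y).sum() / (x.norm() * y.norm())
```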

From the observation of the output examples with Htest=Htr=0.6 shown in Fig. 2(b), it is evident that a large lateral separation distance (r=R) among different axial planes ensures a decent reconstruction of inputs, yielding high-fidelity output images. Conversely, a smaller lateral separation distance, such as r=0.4R, results in diminished image contrast and the introduction of some imaging artifacts. We noted a consistent degradation in the image quality as the input lateral separation distance r was reduced from R to 0. This decrease in the QPI performance can be attributed to two main factors.

  • 1. Unknown sample-induced nonuniform illumination created by the neighboring input planes located in front of the target axial plane, and

  • 2. phase disturbances accumulated while propagating through the axially stacked input planes located behind the target plane.

When the testing thickness range was larger than the training thickness range, i.e., Htest=1>Htr, the output QPI results for r=0 were found to be degraded. These output images can hardly be recognized because of the stronger phase perturbations caused by the larger phase contrast at each object plane. In contrast, the output QPI measurements for r=R still presented good image fidelity at the output of the wavelength-multiplexed diffractive QPI processor for Htest=1>Htr. These results highlight the diffractive design’s ability to process and image phase-only objects with a larger thickness and higher phase contrast beyond what was encountered during the training phase, i.e., Htest>Htr.

We also evaluated the resulting PCC values in Fig. 3, which reflect the examples shown in Fig. 2(b). As revealed in Fig. 3(a), the design with complete lateral separation of inputs (r=R) achieved high output PCC scores across all the imaging channels when Htest=Htr=0.6, reaching an average PCC value of 0.993±0.001, corroborating the observations from visual inspections. When the input phase objects were completely laterally overlapping (r=0), the output PCC values dropped to 0.884±0.016, although the reconstructed digit images were still discernible. When Htest increased to 1, as shown in Fig. 3(b), the performance of the design with complete lateral separation of inputs (r=R) remained at a high level, showing an average PCC value of 0.992±0.001. However, for the completely overlapping input objects (r=0), the PCC scores reduced to 0.795±0.075. The PCC values quantified for the individual objects also showed that the axial planes closer to the front of the spatial sequence exhibit better imaging performance, consistent with the output images shown earlier.

Fig. 3

Impact of the input lateral separation and the input object thickness on multiplane QPI performance. (a) PCC values of the resulting multiplane QPI measurements with Htest=0.6 under different input lateral separation distances (r) spanning {0, 0.4R, R}. (b) The same as (a), except for Htest=1. (c) Average PCC values of the resulting multiplane QPI measurements as a function of Htest. These six curves refer to the blind testing performances of the six diffractive processors trained under different input lateral separation distances (r).


To further explore the impact of varying Htest on the QPI performance, we extended our analysis across an array of Htest values {0.2, 0.4, 0.6, 0.8, 1, 1.2, 1.4, 1.6, 1.8}, all tested against the same diffractive QPI models trained with Htr=0.6, as shown in Fig. 3(c). It was found that the diffractive output QPI performance peaked at Htest=0.4, with an output PCC of 0.996±0.001 for r=R and 0.917±0.021 for r=0. Below this peak, when Htest=0.2, a decrease in PCC was evident, demonstrating the challenge of resolving objects with significantly lower phase contrast. In scenarios where the test thickness exceeded the training range (Htest>Htr), our designs demonstrated some decrease in performance, especially as Htest approached 1.8, where the PCC for r=R dropped to 0.785±0.139 and the PCC for r=0 dropped to 0.646±0.064. This decline can be attributed primarily to two factors. First, significant phase contrast deviations between the training and testing expose the diffractive processor to unseen contrast levels, presenting generalization challenges. Second, the inherently linear nature of our diffractive processor (except for the intensity measurements at the output plane) faces approximation challenges under larger input phase contrast values due to the increased contributions of the nonlinear terms in the phase-to-intensity transformation task. Overall, our diffractive processor designs present decent generalization to various thicknesses and phase contrast values, covering Htest ≤ 1 very well while using a fixed training thickness range parameter of Htr=0.6.

To shed more light on the impact of lateral separation, we conducted an additional analysis examining the output PCC values as a function of the input lateral separation distance (r). As shown in Fig. S3a in the Supplementary Material, the three curves correspond to different thickness range parameters Htest, with values of {0.6, 1, 1.4}. A consistent trend of improved image quality with increasing r was observed for all three curves, indicating that a reduced overlap between objects leads to a higher image fidelity for the output QPI reconstruction. Additionally, to quantify the phase reconstruction accuracy, we measured the phase mean absolute error (MAE) of our diffractive outputs. As shown in Fig. S3b in the Supplementary Material, the phase error for Htest=Htr is consistently lower than for Htest>Htr. Specifically, at r=R, the phase error was 4.1% for Htest=Htr=0.6, and it increased to 4.9% and 7.6% for Htest=1 and 1.4, respectively. Furthermore, the phase error gradually decreased with increasing lateral separation r; for example, for Htest=0.6, the phase error reduced from 12.4% at r=0 to 4.1% at r=R. These findings demonstrate that our diffractive design not only achieves low phase-error values but also maintains acceptable phase-imaging performance, even in scenarios where objects overlap laterally.
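One plausible way to express such a percentage phase error is to normalize the mean absolute phase difference by the full phase range of the channel, as sketched below; this normalization is our assumption for illustration, and the exact convention used for the reported numbers may differ.

```python
import torch

def phase_mae_percent(Phi_w, Phi_gt, alpha_w):
    """Mean absolute phase error as a percentage of the channel's phase range.

    alpha_w * pi is the maximum phase contrast at lambda_w; normalizing by it is a
    hypothetical convention chosen here for illustration.
    """
    return 100.0 * (Phi_w - Phi_gt).abs().mean() / (alpha_w * torch.pi)
```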

2.3. Impact of Axial Separation of Input Object Planes on the Multiplane QPI Performance

Beyond the lateral arrangement of the input phase objects, the axial distance separating these input planes is another crucial factor that influences the wavelength-multiplexed QPI performance of our diffractive processors. To investigate this, we expanded our analysis of the output QPI performance by changing the input axial separation distance (Z) across {128λm,64λm,32λm,16λm}, as shown in Fig. 4. Here, the testing thickness range parameter Htest was fixed at 0.6. Figure 4(a) reveals that, when the axial distance decreases, the PCC values of the laterally overlapping phase inputs (r=0) show a drop from 0.884±0.016 at Z=128λm to 0.802±0.048 at Z=16λm. This decrease is expected due to the limited axial resolution of the diffractive QPI processor, leading to a degraded multiplane QPI performance for smaller Z distances. The output visualizations in Fig. 4(b) corroborate these findings, displaying a noticeable decrease in image fidelity for multiplane QPI as Z decreases. Conversely, in scenarios with laterally separated inputs (r=0.8R), the PCC values remained consistently high (around 0.993) even when the axial distance Z was reduced to 16λm. When the input phase objects were partially overlapping (e.g., for r=0.2R or 0.4R), the PCC values remained stable when Z decreased. This suggests that the diffractive processor maintains its effectiveness in phase reconstruction with laterally separated input phase objects, regardless of the axial distance, Z. The output examples with varying distances further reinforce this conclusion, showing high-quality reconstructions across different axial separations. These observations underscore the diffractive processor’s capability in multiplexed imaging of phase objects, especially when the inputs are not laterally overlapping.

Fig. 4

Impact of the input axial separation on the output multiplane QPI performance. (a) Average PCC values of the diffractive multiplane QPI processor outputs with different input lateral separation distances (r) covering {0,0.2R,0.4R,0.8R} and different input axial separation distances (Z) covering {128λm,64λm,32λm,16λm}. (b) The corresponding output examples of the diffractive multiplane QPI results.


2.4. Cross Talk among Imaging Channels

Ideally, our diffractive multiplane QPI processor should perform precise phase-to-intensity transformations for each input plane independently. However, accurately channeling the spatial information of individual object planes into their respective wavelength channels is challenging, as the features of the input objects positioned in the axial sequence can perturb the wave fields generated or modulated by the target object planes. This results in complex fields that, upon entering the diffractive processor, contain intermingled information from different object planes. Consequently, information from one object plane can negatively impact the imaging process of another, especially when the input phase objects laterally overlap, leading to cross talk among the imaging channels associated with different object planes. To delve deeper into the impact of this cross talk among the channels, we conducted a numerical analysis by individually testing each input sample plane across all five wavelengths. By placing a phase object in one of the five object planes and leaving the remaining planes vacant, we could directly assess how the phase information from one input plane, corresponding to a specific output wavelength channel, affects the other output channels. From the visualization of the output quantitative phase reconstructions shown in Fig. 5, it was clear that when the inputs were laterally separated with r=R, the output images of the diffractive multiplane QPI processor at the target wavelength aligned well with the ground-truth images, and the signal leakage to the other wavelength channels was negligible. This result highlights the diffractive processor’s proficiency in handling and mitigating cross talk between different wavelength channels. However, as the input separation distance decreased, for example, to r=0.4R, a noticeable cross talk was observed across the different channels. This challenge became more pronounced when all the input objects were coaxially aligned at the center without any lateral separation (r=0), resulting in more significant cross talk as well as suboptimal quality of QPI reconstructions. These findings confirm the diffractive processor’s capability to correctly route the signals and mitigate the cross talk effectively, while acknowledging its limitations when the input objects present a notable lateral overlap across different axial planes.

Fig. 5

Cross-talk analysis of multiplane QPI under different input lateral separation distances. Output image matrix demonstrating the cross talk from one input plane to the output wavelength channels, represented by the off-diagonal images. Each row corresponds to a set of input (ground-truth) phase objects alongside the resulting diffractive output images. The diagonal images represent the diffractive output images at the target wavelengths.


2.5. Lateral Resolution and Phase Sensitivity Analysis

To gain deeper insights into the diffractive multiplane QPI processor’s capability to resolve phase images of input objects, we further investigated the lateral imaging resolution of our processor designs across different levels of input thickness range. To standardize our tests, we created binary phase grating patterns with a linewidth of 5.2λm, and selected the testing thickness range parameter Htest from {0.2, 0.6, 1} and the input lateral separation distance r from {0, 0.4R, R}, as shown in Fig. 6. The results in Figs. 6(a) and 6(b) show that the diffractive QPI processors with M=5 input planes effectively resolved the test phase gratings with a linewidth of 5.2λm for both r=R and r=0.4R, even with thickness ranges that differed from the training thickness range, such as Htest=0.2<Htr or Htest=1>Htr. In cases where r=0, i.e., the test objects are positioned coaxially and exhibit complete lateral overlap, the processor could still resolve the grating patterns with Htest=1 or Htest=0.6, as shown in Fig. 6(c). However, at a smaller thickness or a lower testing phase contrast level, e.g., Htest=0.2, the resolution of the diffractive QPI outputs degraded; the output examples revealed that the diffractive processor under r=0 failed to reconstruct the last two input planes (i.e., P4 and P5). Overall, our analyses revealed that the diffractive multiplane QPI designs could clearly resolve spatial phase features with linewidths as small as 5.2λm across all five input planes, particularly when the input phase object had a thickness range parameter Htest>0.2.

Fig. 6

Lateral resolution and phase sensitivity analysis for the diffractive multiplane QPI processor designs. (a) Images of the binary phase grating patterns encoded within the phase channels of the input object, along with the r=R diffractive processor’s resulting output QPI signals (Φw) at the target input plane. The grating has a linewidth of 5.2λm, and the thickness range parameter (Htest) of the input phase object is selected from {0.2, 0.6, 1}. (b), (c) The same as (a), except for r=0.4R in panel (b) and r=0 in panel (c).


2.6. External Generalization Performance of Wavelength-Multiplexed Diffractive Processors for Multiplane QPI

The diffractive multiplane QPI processors reported so far were trained on a data set that included handwritten digits and grating-like spatial patterns. To further assess how well our diffractive multiplane QPI processors generalize to different types of spatial features, we conducted additional numerical analysis using Pap smear microscopy images, which have significantly different spatial characteristics compared to our training data set. In addition, we used various thickness range parameters (Htest), i.e., {0.2, 0.4, 0.6, 0.8, 1, 1.2, 1.4, 1.6, 1.8}, to examine the diffractive QPI processor’s adaptability to new spatial features with previously unseen object thicknesses or phase contrasts, covering both Htest>Htr and Htest<Htr. These blind testing results are showcased in Fig. 7(b), revealing a decent agreement between the diffractive multiplane QPI results and the corresponding ground-truth images. We also calculated the image quality metrics across the entire Pap smear test data set [see Fig. 7(a)]. The average PCC was calculated as 0.921±0.004 when the testing thickness range matched the training condition, i.e., Htest=Htr=0.6. The QPI performance remained robust, with average PCC values of >0.8 from Htest=0.2 to Htest=1.4, while starting to exhibit more degradation when Htest>1.4. When Htest=1.8, the average PCC dropped to 0.540±0.113. Overall, these external generalization test results demonstrate that our diffractive multiplane QPI design is not limited to specific object types or phase features but can serve as a general-purpose multiplane quantitative phase imager for various kinds of objects.

Fig. 7

Results for testing the external generalization performance of the r=R diffractive multiplane QPI processor design using blind testing images from a new data set composed of Pap smear images. (a) PCC values of the diffractive multiplane QPI processor outputs as a function of the input thickness range. (b) Examples of the ground-truth phase images at different input planes, which are compared to their corresponding diffractive QPI output images.


2.7. Output Power Efficiency of Diffractive Multiplane QPI Processors

All our diffractive multiplane QPI processor designs presented so far were optimized without considering the output power efficiency, resulting in relatively low diffraction efficiencies, mostly lower than 0.1%. When the output power efficiency becomes a concern in a given diffractive processor design, an additional diffraction efficiency-related loss term60,67,68 can be introduced into the training loss function to balance the trade-off between task performance and signal-to-noise ratio. We used the same approach to achieve a balance between the QPI performance and the diffraction efficiency of the diffractive processor (see the Appendix for details). In Fig. 8, we present a comprehensive quantitative analysis of this trade-off between the multiplane QPI performance and the output diffraction efficiency. For this comparison, we used two designs with r=R and 0.4R (as shown in Fig. S2 in the Supplementary Material), which initially exhibited output diffraction efficiencies of 0.113±0.022% and 0.042±0.004%, alongside PCC values of 0.993±0.001 and 0.964±0.021, respectively; these correspond to diffractive designs trained without any diffraction efficiency penalty terms. Maintaining the same structural parameters and the same training/testing data set, we retrained these wavelength-multiplexed diffractive multiplane QPI designs from scratch; this time, we incorporated varying degrees of diffraction efficiency penalty terms into the training loss functions, resulting in diffractive designs that demonstrated significantly enhanced output diffraction efficiencies. Figure 8(a) depicts the resulting PCC values of these new designs in relation to their output diffraction efficiencies. When compared to the original r=R design, these new designs showed an approximately 90-fold increase in the output diffraction efficiency, reaching up to 10.2±1.7%. This major enhancement in the output diffraction efficiency was achieved with a modest reduction in multiplane QPI performance, evidenced by PCC values decreasing to 0.842±0.023. Similarly, compared to the original r=0.4R design shown in Fig. S2 in the Supplementary Material, the new designs subjected to the diffraction efficiency penalty showcased an improved diffraction efficiency of up to 10.3±0.7%, with the PCC values decreasing to 0.805±0.042. Moreover, as observed in the output examples in Fig. 8(b), the diffractive processors, even with enhanced output power efficiencies of >10%, still effectively reconstruct multiplane QPI images with decent image quality. These results reveal that by properly incorporating an efficiency-related loss term into the optimization process, our wavelength-multiplexed diffractive multiplane QPI processors can be optimized to maintain an effective balance between QPI performance and power efficiency, which is important for practical applications of the presented framework. This training approach to boost the output power efficiency was also used in our experimental proof-of-concept demonstration, which is reported next.
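A hinge-style penalty is one common way to implement such an efficiency term. The sketch below uses the βEff and ηthresh parameters named in the caption of Fig. 8, but it is only an illustration; the exact form of the penalty is given by Eqs. (15) and (16) in the Appendix.

```python
import torch

def efficiency_penalty(D_w, input_power, eta_thresh=0.05, beta_eff=100.0):
    """Penalize output diffraction efficiencies below eta_thresh (hinge-style sketch).

    eta is the fraction of the input power reaching the output plane; this form is
    an illustrative stand-in for the exact penalty of Eqs. (15) and (16).
    """
    eta = D_w.sum() / input_power
    return beta_eff * torch.clamp(eta_thresh - eta, min=0.0)

# Combined objective (sketch): loss = mse_loss + efficiency_penalty(D_w, input_power)
```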

Fig. 8

Analysis of the trade-off between the imaging performance and the output diffraction efficiency of diffractive multiplane QPI processors. (a) The PCC values of the diffractive multiplane QPI outputs with various levels of diffraction efficiency penalty, plotted as a function of the output diffraction efficiency values. Two sets of diffractive QPI designs using r=R and r=0.4R were trained and blindly tested. Specifically, purple markers (①, ②, ③, and ④) depict different r=R designs, where βEff=0 was used for ① and βEff=100 with ηthresh=1%, 5%, and 10% was used for ②, ③, and ④, respectively, in the training loss function [see Eqs. (15) and (16)]. Gold markers (①, ②, ③, and ④) represent their counterparts using r=0.4R. (b) Visualization of the diffractive output fields produced by diffractive QPI processor designs with different input lateral separation distances and various levels of the diffraction efficiency-related penalty term.


2.8. Experimental Validation of a Wavelength-Multiplexed Diffractive Multiplane QPI Processor

We conducted an experimental demonstration of our diffractive multiplane QPI processor using the terahertz part of the spectrum. Because of the larger wavelength of terahertz radiation, the 3D fabrication and alignment of the resulting diffractive layers are easier than at shorter wavelengths, such as the IR and visible parts of the spectrum. As illustrated in Fig. 9(a), we created an input aperture to better control the illumination wavefront. The experimental configuration includes two input planes (P1 and P2), each able to contain a phase-only object characterized by its thickness range parameter, empirically set as Htest=0.7. This setup serves as a proof-of-concept demonstration of our multiplane QPI system, wherein only one of the two input planes contains a phase object at any given time. In our experiments, a diffractive multiplane QPI system composed of three phase-only dielectric diffractive layers (L1 to L3) was employed. This diffractive system converted the phase information of the input planes (axially separated by 20 mm) into intensity distributions captured at the output plane, where each illumination wavelength (λ1=0.8 mm, λ2=0.75 mm) was assigned to one axial plane, performing QPI through a phase-to-intensity transformation at that wavelength. Structural details of this experimental arrangement are provided in Fig. 9(a) and the accompanying Appendix.

Fig. 9

Experimental setup and validation of the diffractive multiplane QPI processor for phase-to-intensity transformations. (a) Illustration of a diffractive multiplane QPI processor composed of three diffractive layers (L1,L2,L3) to perform QPI operation on multiplane phase objects. (b) Thickness profiles of the optimized diffractive layers (upper row) and the photographs of their fabricated versions using 3D printing (lower row). (c) Photographs of the experimental setup, including the fabricated diffractive QPI processor. (d) Numerically simulated and experimentally measured intensity patterns at the output plane, compared with the ground-truth input objects, successfully demonstrating experimental phase-to-intensity transformations.


To optimize our experimental multiplane QPI design, we synthesized objects to train the diffractive design through deep learning. Our training data set comprised 10,000 binary images of 4 pixels × 4 pixels, with each image featuring two random pixels set to one and the remainder set to zero. These binary images were encoded into phase-only objects with a phase range of [0, αtr,w·π], where the phase contrast parameter values αtr,w reached 0.94 at λ1 and 1 at λ2. These phase contrast values were derived from the preset thickness range of [0, Htr·λ2], where Htr=Htest=0.7. Throughout the training process, for each iteration, one input plane was designated for the placement of the phase object, while the other was left vacant. The optimized phase profiles of the diffractive layers are displayed in the upper row of Fig. 9(b). After the training, the resulting diffractive layers were 3D-printed; images of the fabricated layers are showcased in the lower row of Fig. 9(b). After 3D assembly and alignment of these fabricated layers, we employed a terahertz source and a detector to record the intensity distribution at the output plane. Detailed schematics and photographs of this experimental setup are presented in Fig. S4 in the Supplementary Material and Fig. 9(c), respectively.

In the experimental phase, our system was subjected to eight distinct phase objects (never seen during the training), with the testing thickness range parameter set to Htest=0.7. These objects were equally divided between the two input planes that are axially separated by 20 mm, i.e., 25.8λm, totaling four test phase objects per axial plane, and they were also fabricated using 3D printing. Figure 9(d) delineates the experimental output imaging results of the diffractive multiplane QPI processor, which align closely with our numerically simulated output patterns. The object phase profiles on both of the input planes were accurately transformed into intensity variations at the output plane, with each pixel clearly distinguishable and matching the expected ground-truth phase profiles. These experimental results demonstrate the proof-of-concept capability of our diffractive design in conducting QPI across multiple planes using wavelength multiplexing.

3. Discussion

In the experimental proof-of-concept results presented earlier, we used relatively simple patterns as input phase objects. To numerically assess the system’s capability to handle more structurally complex objects, akin to real-world conditions, we also investigated diffractive processor designs capable of retrieving multiplane QPI signals across the visible spectrum (400 to 650 nm). In these numerical analyses, we assumed that the diffractive layers were fabricated using two-photon polymerization-based 3D printing (Photonic Professional GT2, Nanoscribe GmbH, Germany), and a particular type of photoresist (IP-DIP, Nanoscribe GmbH)69 was selected as the diffractive layer material due to its high transparency and prevalent usage in the visible range. This diffractive design adopted the same configuration, training data, and methods used for the designs shown in Fig. S2 in the Supplementary Material, with the diffractive feature sizes scaled based on the operational wavelengths. Accordingly, the axial distance between the input phase objects was selected as 32λm, i.e., 16.8 μm. Fabrication constraints for such a diffractive processor operating in the visible range restrict the axial thickness of each layer to steps of hundreds of nanometers, which can degrade the performance compared to the ideal numerical design. Therefore, we analyzed the performance of this visible diffractive processor design under a limited phase bit-depth selected from {16, 8, 6, 5, 4, 3, 2}. As shown in Fig. S5a in the Supplementary Material, the diffractive processor designed for the visible part of the spectrum maintains its QPI performance despite a reduction in the phase bit-depth of the diffractive layers. For example, for a 16-bit phase modulation design, the output PCC was 0.991 for the r=R case and 0.966 for the r=0.4R case. When the phase bit-depth was reduced to 4, the PCC values showed only a minor decrease: 0.980 for the r=R case and 0.943 for the r=0.4R case. The results reported in Fig. S5b in the Supplementary Material further support these conclusions and demonstrate that our diffractive processor can maintain high-quality QPI performance under phase quantization limitations, down to a phase bit-depth of 4.
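This limited phase bit-depth analysis can be emulated by quantizing the optimized thickness maps before blind testing, as sketched below under the assumption of uniformly spaced thickness levels.

```python
import torch

def quantize_thickness(h, h_max, bits):
    """Quantize thickness maps to 2**bits uniformly spaced levels spanning [0, h_max]."""
    levels = 2 ** bits - 1
    return torch.round(h / h_max * levels) / levels * h_max

# Example: evaluate a trained design at a phase bit-depth of 4.
# h_test = quantize_thickness(h_optimized, h_max, bits=4)
```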

In practical implementations of diffractive QPI processors, another challenge is the mechanical misalignment between different layers, which can cause the optical waves to be modulated by the diffractive layers in an undesired way, leading to deviations from the designed performance. To investigate this, we used the same configuration of the visible diffractive multiplane QPI processor and subjected the diffractive layers to various levels of random displacements/misalignments (Δ), either in the lateral directions (Δx, Δy ∼ U[−Δxy,test, Δxy,test]) or the axial direction (Δz ∼ U[−Δz,test, Δz,test]), sampled from uniform random distributions (U). As shown in Figs. S6a and S7a in the Supplementary Material, the performance of the diffractive processor (indicated by the blue curves) peaks at an output PCC of 0.991 under the ideal alignment case (Δ=0), but it degrades with increasing lateral or axial misalignments. To address this misalignment sensitivity, a “vaccination” strategy can be applied during the optimization process by incorporating random misalignments into the numerical forward model of the system. Specifically, the 3D random displacements of the diffractive layers (Δx, Δy, and Δz) are modeled using random variables that change from iteration to iteration during the training process, providing resilience against such displacements with minimal performance loss. The efficacy of this strategy is demonstrated in Figs. S6 and S7 in the Supplementary Material, where new “vaccinated” diffractive multiplane QPI processors were trained under varying degrees of axial and lateral misalignments. As shown in Figs. S6a and S7a in the Supplementary Material, these vaccinated models (shown as the green and orange curves) maintain good QPI performance across different levels of misalignment. For instance, when Δz,test=1.2λm, the PCC value for the vaccinated design remains at 0.979, while it decays to 0.427 for the unvaccinated baseline diffractive QPI model. Figures S6b and S7b in the Supplementary Material also illustrate the outputs of the vaccinated diffractive QPI design, maintaining a good agreement with the ground-truth phase images under various degrees of random misalignments. These analyses highlight the effectiveness of our vaccination strategy and the diffractive QPI processor’s capability to withstand unknown random misalignments.
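A minimal sketch of this vaccination strategy is given below: at every training iteration, each diffractive layer is randomly displaced before the forward model is evaluated. Rolling the thickness maps by whole features is a simplification of sub-feature lateral shifts, and the function and variable names are illustrative.

```python
import torch

def vaccinate_layers(layers, d_xy, d_z, dz_layer):
    """Randomly displace each diffractive layer for one training iteration.

    Lateral shifts Delta_x, Delta_y ~ U[-d_xy, d_xy] are rounded to whole features
    and applied with torch.roll (a simplification of sub-feature interpolation);
    axial jitter Delta_z ~ U[-d_z, d_z] perturbs each layer-to-layer spacing.
    """
    shifted, spacings = [], []
    for layer in layers:
        sx = int(torch.empty(1).uniform_(-d_xy, d_xy).round())
        sy = int(torch.empty(1).uniform_(-d_xy, d_xy).round())
        shifted.append(torch.roll(layer, shifts=(sx, sy), dims=(0, 1)))
        spacings.append(dz_layer + float(torch.empty(1).uniform_(-d_z, d_z)))
    return shifted, spacings
```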

In the results and analyses presented above, we have unveiled diffractive multiplane QPI processor designs that utilize wavelength multiplexing to encode the phase information of multiple input objects. These designs can be implemented through sequential imaging of different wavelength channels using, for example, a monochrome image sensor equipped with a spectral filter, each time tuned to a unique wavelength; alternatively, a wavelength-scanning light source can also be used for the same multiplane QPI. We would like to emphasize that our diffractive designs are not confined to multishot sequential image capture configurations, where the diffractive outputs for individual input planes are captured separately; our design framework can be further optimized to create a snapshot multiplane QPI system by embedding the functionality of spectral filter arrays into the diffractive processor.65,70 This functionality allows the multiplane phase signals at the diffractive processor’s output to be partitioned, following a virtual filter-array pattern, enabling a monochrome image sensor to obtain signals from distinct object planes within a single frame. After a standard image demosaicing process, each QPI channel corresponding to a unique axial plane can be retrieved from a single intensity-only image.
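As a simple illustration of this snapshot readout, the sketch below splits a single monochrome frame according to a virtual 2×2 filter-array layout before interpolation back to full resolution; the 2×2 mosaic (suited to four channels) is an assumption made here for illustration, as the actual mosaic pattern is a design parameter of the diffractive processor.

```python
def demosaic(frame, pattern=2):
    """Split one monochrome frame into pattern**2 channels under a virtual
    filter-array layout (2x2 by default, i.e., four channels); each channel can
    then be interpolated back to full resolution in a standard demosaicing pipeline."""
    return [frame[i::pattern, j::pattern]
            for i in range(pattern) for j in range(pattern)]
```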

It is crucial to highlight that our diffractive multiplane QPI design is tailored for a 3D stack of phase objects with weak scattering and absorption properties. This scenario meets the criterion for the first Born approximation,71 allowing the modeling of a 3D phase-only object using a discrete set of 2D phase modulation layers, which are assumed to be connected by free-space propagation and approximately uniform illumination at each axial plane. The diffractive optical processor, due to its capacity for performing arbitrary complex-valued linear transformations between an input and output FOV,59 emerges as a viable approach for phase reconstruction and QPI under the first Born approximation. As one increases the lateral overlap among the axial planes that contain the phase-only input objects, the 3D QPI problem starts to deviate from the first Born approximation: each object induces unknown wavefront distortions on the other axial planes where other unknown objects are located, which makes the problem nonlinear due to the interaction among the scattered fields that represent the object information at different planes. This physical cross talk, and the resulting deviation from a linear coherent system approximation, is at the heart of the QPI performance degradation observed for r=0 when compared to the performance of the r=R designs; the latter diffractive designs provide a better fit to the first Born approximation, and the resulting fields at the output FOV of a diffractive QPI processor can be approximated as a linear superposition of the individual fields resulting separately from each axial plane. Having emphasized these points in relation to the first Born approximation, we should also note that our numerical forward model does not make any such approximations; in fact, it precisely models the object-to-object cross-talk fields for each case, taking all these nonlinear terms into account as part of the analysis and training/testing reported in this paper.

To further improve the quality of the quantitative phase images and increase the spectral multiplexing factor (M), one would require a deeper diffractive architecture with more trainable degrees of freedom. Both theoretical analyses and empirical studies established earlier56,58,59,61,62 have substantiated that increasing the total number of trainable diffractive features within a diffractive processor can improve its processing capacity and inference accuracy,62,72 also achieving significantly better diffraction efficiency at the output FOV. A particularly effective design strategy here involves increasing the number of layers rather than the number of diffractive features at each layer, which has been shown not only to boost the diffraction efficiency but also to utilize the diffractive features more effectively by enhancing the optical connectivity between successive layers.59,68,73 By increasing the number of diffractive layers (forming a deeper diffractive architecture), the performance of our wavelength-multiplexed diffractive QPI processor can be further enhanced to perform the desired phase-to-intensity transformations more accurately, across an even larger number of axial planes, and also to facilitate multiplane QPI reconstructions with an even higher spatial resolution.

In our work, the input depth information is encoded into a specific set of wavelength channels at the output plane, and the design of the wavelength assignments could affect the QPI reconstruction accuracy. Our wavelength assignments were informed by the understanding that the input planes/phase objects at the axially deeper positions (closer to the diffractive layers) are subject to more distorted illumination wavefronts due to the phase perturbations caused by the other phase objects in the front. Therefore, shorter wavelengths were assigned to the more difficult-to-reconstruct axial planes that are closer to the diffractive layers. This is also supported by previous work and empirical evidence, which revealed that a diffractive processor operating at shorter wavelengths can control larger degrees of freedom within the diffractive layers due to the diffraction limit of light, resulting in better diffractive processing capability.62,65,70 As a result, in our wavelength-multiplexed multiplane QPI approach, shorter wavelengths were assigned to the deeper planes to better mitigate the cross talk from earlier object planes and enhance the QPI performance at the output. We quantitatively analyzed the efficacy of this wavelength assignment strategy by comparing it with an alternative design that reverses the wavelength assignment order, i.e., longer wavelengths are assigned to the axially deeper planes in this alternative approach. Figure S8a in the Supplementary Material demonstrates that our wavelength assignment yields more uniform and higher output PCC values than this alternative diffractive design with a reversed wavelength assignment. This trend is further highlighted in the phase-error analysis, shown in Fig. S8b in the Supplementary Material, where our design achieved an average phase error of 4.1%, which is lower than the 5.0% phase error corresponding to the reversed wavelength assignment. These results confirm that our wavelength assignment strategy not only achieves a uniform performance across all the input planes, but also improves the phase accuracy of our multiplane QPI reconstructions.

As another key factor, we also investigated the QPI performance of our diffractive imager designs as a function of the illumination wavelength and bandwidth. For this quantitative analysis, we selected the object plane positioned in the middle of our object volume, which was designated to a mean wavelength of λm. As shown in Fig. S9a in the Supplementary Material, the peak QPI performance occurs at λtest=λm, i.e., when the testing wavelength matches the training wavelength, while the QPI performance declines when the testing wavelength deviates from the training wavelength, i.e., λtest≠λm. The same trend can also be observed in the output examples shown in Fig. S9b in the Supplementary Material. We also tested the same QPI processor under broadband illumination, as shown in Fig. S9c in the Supplementary Material. The average PCC of the output QPI images reduced to 0.843 and 0.676 for broadband spectral illumination that uniformly covers [0.995λm, 1.005λm] and [0.99λm, 1.01λm], respectively. The examples of test objects shown in Fig. S9c in the Supplementary Material further illustrate the performance of the diffractive QPI processor under such broadband illumination, demonstrating its robustness and adaptability to spectral conditions not covered during its training.

Note that in this work, we assumed spatially coherent illumination, which is common in the measurement of the phase information of objects, especially in tomography and microscopy applications. Nevertheless, previous studies on diffractive processors also reported that spatially incoherent or partially coherent illumination can be used in the optical forward model of a diffractive processor for the optimization of its inference.74,75 By doing so, one can further broaden the diffractive QPI processor’s applicability to scenarios where ideal coherent light sources are unavailable.

Our presented system is optimized for transparent objects commonly used in QPI. In terms of lateral resolution, our diffractive QPI processor can resolve spatial phase features with a linewidth of at least 5.2λm across all five input planes, which corresponds to ~3 μm within the visible spectrum. To further increase the resolution and the 3D volume of our diffractive QPI reconstructions, one could design a much wider and deeper diffractive architecture with significantly more degrees of freedom, which would demand greater computational resources during training. It is important to note that our training process is a one-time effort, and inference is performed all-optically without any digital computation. By leveraging the versatility of our diffractive design, we can address the imaging needs of both thin, transparent objects and more complex 3D structures, making our technology potentially suitable for a wide array of scientific and industrial applications. Our diffractive optical processor design could also potentially support applications such as optical sectioning of thicker objects by collecting information from specific planes while optically filtering out background signals. However, the demonstrations reported in this work show that our processor is most effective in scenarios where the objects are sparsely distributed or partially overlapping, following a linear coherent system approximation at each wavelength.

Notably, the presented multiwavelength diffractive processors maintain their accuracy in reconstructing quantitative phase images for multiple distinct planes irrespective of potential variations in the intensity of the broadband light sources used for illumination. Furthermore, these diffractive optical processors are not limited to the terahertz spectrum. By choosing suitable nanofabrication techniques, including, e.g., two-photon polymerization-based 3D printing,7678 it is possible to scale these diffractive optical processors physically to operate across different segments of the electromagnetic spectrum, including visible and IR wavelengths. Such scalability and the passive nature of our diffractive processors pave the way for more efficient and compact on-chip phase imaging and sensing devices, promising a transformative impact for biomedical imaging/sensing and materials science.

Finally, we would like to note that compared to some of the earlier QPI work that utilized illumination wavelength and angular diversity to improve the lateral and/or axial resolution of phase microscopy and tomography,7981 the presented approach stands out in that its quantitative phase reconstructions are performed through passive light–matter interactions, without the need for a digital reconstruction algorithm, which saves image reconstruction time and computing power. The all-optical phase-to-intensity transformations demonstrated in this work are multiplexed over different planes of the sample volume using wavelength encoding, as demonstrated in Sec. 2 (Results).

4.

Appendix: Materials and Methods

4.1.

Optical Forward Model of Wavelength-Multiplexed Diffractive QPI Processors

To numerically simulate a diffractive optical processor, each diffractive layer was treated as a thin optical element that modulates the complex field of the incoming coherent light. The complex transmission coefficient $t(x_q, y_q, z_l; \lambda)$ at any point $(x_q, y_q, z_l)$ on the $q$th diffractive feature of the $l$th layer is determined by the local material thickness, $h_q^l$, and can be described as

Eq. (7)

$$t(x_q, y_q, z_l; \lambda) = \exp\!\left(-\frac{2\pi \kappa h_q^l}{\lambda}\right) \exp\!\left(j\,\frac{2\pi (n - n_{\mathrm{air}})\, h_q^l}{\lambda}\right).$$

In this equation, $n(\lambda)$ and $\kappa(\lambda)$ are the refractive index and extinction coefficient, respectively, of the chosen dielectric material at $\lambda$. These values correspond to the real and imaginary parts of the complex refractive index $\tilde{n}(\lambda) = n(\lambda) + j\kappa(\lambda)$. For the experimentally tested diffractive multiplane QPI processor, $n(\lambda)$ and $\kappa(\lambda)$ were set based on measurements from a terahertz spectroscopy system.63 For the numerically analyzed diffractive multiplane QPI designs in the terahertz range, $n(\lambda)$ was kept the same, while $\kappa(\lambda)$ was set to 0. For the diffractive multiplane QPI designs within the visible range, $n(\lambda)$ was set based on the dispersion of IP-DIP photoresist, and $\kappa(\lambda)$ was set to 0, as the absorption of this material within the visible spectrum is negligible. The thickness $h$ of each diffractive feature combines a constant part $h_{\mathrm{base}}$ and a variable/learnable part $h_{\mathrm{learnable}}$, as shown in

Eq. (8)

$$h = h_{\mathrm{learnable}} + h_{\mathrm{base}},$$
where $h_{\mathrm{learnable}}$ is the adjustable thickness of each diffractive feature, constrained within the range $[0, h_{\max}]$. In all the diffractive designs demonstrated in the terahertz range, including the numerically simulated diffractive models and the experimentally validated diffractive model, $h_{\max}$ is 1.4 mm, providing full phase modulation from 0 to $2\pi$ for the longest wavelength. The base thickness $h_{\mathrm{base}}$, empirically set as 0.2 mm, provides the substrate (mechanical) support for all the diffractive features. In the diffractive QPI designs for the visible range, $h_{\max}$ was selected as 1400 nm, and the base thickness $h_{\mathrm{base}}$ was set as 1000 nm.
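As a concrete illustration, a minimal PyTorch sketch of Eqs. (7) and (8) is given below. This is not the authors' released code: the function name is ours, $n_{\mathrm{air}}$ is taken as 1, and the learnable thickness tensor is assumed to be already constrained to $[0, h_{\max}]$ (e.g., via clamping).

```python
import torch

def layer_transmission(h_learnable, h_base, n, kappa, wavelength):
    """Complex transmittance of one diffractive layer, following Eqs. (7) and (8).

    h_learnable: tensor of trainable thicknesses, assumed pre-constrained to [0, h_max]
    h_base:      constant substrate thickness (scalar)
    n, kappa:    refractive index and extinction coefficient at this wavelength
    """
    h = h_learnable + h_base                                       # Eq. (8): total thickness
    amplitude = torch.exp(-2 * torch.pi * kappa * h / wavelength)  # absorption term of Eq. (7)
    phase = 2 * torch.pi * (n - 1.0) * h / wavelength              # assumes n_air = 1
    return amplitude * torch.exp(1j * phase)
```

For the lossless designs ($\kappa = 0$), the amplitude term evaluates to unity and the layer acts as a pure phase mask.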

To simulate the propagation of coherent optical fields in free space between the layers (including the input object planes, diffractive layers, and the output plane), we applied the angular spectrum approach.54 The field at the $(l+1)$th diffractive layer, modulated by its transmittance function $t(x, y, z_{l+1}; \lambda)$, is given by

Eq. (9)

$$u^{l+1}(x, y, z_{l+1}; \lambda) = t(x, y, z_{l+1}; \lambda)\, \mathcal{F}^{-1}\!\left\{ \mathcal{F}\!\left\{ u^{l}(x, y, z_{l}; \lambda) \right\} H(f_x, f_y, d; \lambda) \right\}.$$
Here, $\mathcal{F}\{\cdot\}$ and $\mathcal{F}^{-1}\{\cdot\}$ represent the 2D Fourier transform and its inverse operation, respectively. The transfer function $H(f_x, f_y, d; \lambda)$ of free-space propagation over a distance $d$ between two successive layers is given by

Eq. (10)

$$H(f_x, f_y, d; \lambda) = \begin{cases} \exp\!\left( j\,\dfrac{2\pi d}{\lambda} \sqrt{1 - (\lambda f_x)^2 - (\lambda f_y)^2} \right), & f_x^2 + f_y^2 < \dfrac{1}{\lambda^2} \\[4pt] 0, & f_x^2 + f_y^2 \ge \dfrac{1}{\lambda^2}, \end{cases}$$
where $f_x$ and $f_y$ represent the spatial frequencies along the $x$ and $y$ directions, respectively.
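A compact FFT-based implementation of Eqs. (9) and (10) could look as follows; this is a sketch under our own naming conventions, assuming a square field sampled at a uniform pitch `dx`:

```python
import torch

def angular_spectrum_propagate(u, d, wavelength, dx):
    """Free-space propagation of a complex field u (N x N tensor) over a distance d,
    using the angular spectrum method of Eqs. (9) and (10)."""
    N = u.shape[-1]
    fx = torch.fft.fftfreq(N, d=dx)                       # spatial frequencies (1 / length)
    FX, FY = torch.meshgrid(fx, fx, indexing="ij")
    arg = 1 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    mask = arg > 0                                        # propagating components only
    phase = 2 * torch.pi * d / wavelength * torch.sqrt(torch.clamp(arg, min=0.0))
    H = torch.where(mask, torch.exp(1j * phase),
                    torch.zeros((), dtype=torch.complex64))  # Eq. (10): evanescent waves -> 0
    return torch.fft.ifft2(torch.fft.fft2(u) * H)
```

The modulation step of Eq. (9) is then simply `u_next = t * angular_spectrum_propagate(u, d, wavelength, dx)`.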

In our numerical simulations for all the diffractive designs demonstrated in this paper, we chose a spatial sampling period of 0.44λm for the simulated complex fields. Similarly, the lateral size of the diffractive elements on each layer was set to 0.44λm. The axial distance between consecutive layers, including both the diffractive layers and the input/output planes, was set to 6λm for the numerical designs depicted in Fig. S2 in the Supplementary Material and to 10λm for the design used in our experimental validations, as illustrated in Fig. 9(a).

4.2.

Numerical Implementation of Wavelength-Multiplexed Diffractive Multiplane QPI Processors

In our diffractive multiplane QPI processor design, multiple phase-only input objects are placed at different axial positions $z = z_1, z_2, \ldots, z_M$. Each object features a phase profile $\Psi_w(x, y; \lambda)$ with a uniform amplitude across the plane. The transmission through each of these phase objects/input planes is defined as

Eq. (11)

$$t(x, y, z_w; \lambda) = e^{j\Psi_w(x, y; \lambda)}.$$

Initially, a broadband, spatially coherent source $s_w$ (or $u^0$) illuminates the front phase object plane. As the light propagates through the axially stacked input planes, it is modulated by the phase objects, resulting in a complex field $u^l$ at each phase object plane. This field is calculated using the angular spectrum approach after modulation by the object transmittance $t$, which can be expressed as $u^{l+1} = t \cdot \mathcal{F}^{-1}\{\mathcal{F}\{u^l\} H\}$. Finally, after traversing the $M$ phase object planes, a cumulative multispectral complex field $i_w$ (or $u^M$) is formed, which contains the desired phase information of the input objects.

The resulting field $u^M$ then enters the diffractive multiplane QPI processor, where it undergoes a sequence of diffractive layer modulations and secondary wave formations, as elaborated in the previous subsection. This process ultimately results in a complex output field, denoted as $o_w(x, y) = u^{M+K}(x, y, z_K; \lambda_w)$, where $K$ is the total number of diffractive layers. Upon normalization with the reference signal ($\mathrm{Ref}_w$) for each wavelength channel $w$, the resultant output QPI signals $\Phi_w$ can be obtained following Eq. (3).
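Putting the pieces together, the end-to-end forward pass of one wavelength channel can be sketched as below, reusing `angular_spectrum_propagate` from the previous subsection. The unit-amplitude illumination and the single uniform plane spacing `d` are assumptions consistent with the text; the per-channel normalization of Eq. (3) would follow the intensity readout.

```python
import torch

def forward_model(phase_objects, diffractive_layers, d, wavelength, dx):
    """Cascade a coherent plane wave through M phase-only object planes and K
    diffractive layers, returning the output-plane intensity (a Sec. 4.2 sketch)."""
    u = torch.ones_like(phase_objects[0], dtype=torch.complex64)  # unit-amplitude illumination
    for psi in phase_objects:
        u = torch.exp(1j * psi) * u                               # Eq. (11): t = exp(j * Psi_w)
        u = angular_spectrum_propagate(u, d, wavelength, dx)      # to the next plane
    for t in diffractive_layers:
        u = t * u                                                 # Eq. (9) modulation
        u = angular_spectrum_propagate(u, d, wavelength, dx)
    return u.abs() ** 2                                           # intensity at the output plane
```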

For the diffractive QPI processor designs depicted in Fig. S2 in the Supplementary Material, both the input 2D FOVs distributed in the input 3D volume and the output FOV were designed to have a size of 73.5λm × 73.5λm. These FOVs are discretized into 14 × 14 pixels, with each pixel measuring 5.25λm × 5.25λm. To ensure effective performance for the multiplane QPI task, every diffractive layer in this diffractive multiplane QPI processor contains 600 × 600 diffractive features, covering an area of 262.5λm × 262.5λm. The diffractive QPI processors used in our numerical analyses operate in the terahertz spectral range, i.e., λ1 = 0.9 mm and λM = 0.7 mm. For the diffractive QPI processors in the visible range, the wavelengths were selected as λ1 = 650 nm and λM = 400 nm. In the diffractive design used for our experimental validation, shown in Fig. 9(a), the input and output FOVs share identical dimensions of 24.8λm × 24.8λm. This space is divided into 4 × 4 pixels, resulting in each pixel being 6.2λm × 6.2λm. Each diffractive layer in this design includes 120 × 120 diffractive features, extending over an area of 49.5λm × 49.5λm. Here, the wavelengths for experimental validation were selected as λ1 = 0.8 mm and λ2 = 0.75 mm.
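For reference, these design parameters can be collected as follows; the values are quoted from the text, while the grouping and key names are our own convention (lengths in units of λm unless noted otherwise):

```python
# Numerically analyzed terahertz designs (Fig. S2 in the Supplementary Material)
numerical_design = {
    "fov_size": 73.5,                   # input/output FOV per side
    "fov_pixels": (14, 14),             # discretization of each phase image
    "pixel_size": 5.25,
    "features_per_layer": (600, 600),
    "layer_size": 262.5,                # per side
    "wavelength_range_mm": (0.9, 0.7),  # lambda_1 to lambda_M
}

# Experimentally validated design (Fig. 9(a))
experimental_design = {
    "fov_size": 24.8,
    "fov_pixels": (4, 4),
    "pixel_size": 6.2,
    "features_per_layer": (120, 120),
    "layer_size": 49.5,
    "wavelengths_mm": (0.8, 0.75),      # lambda_1 and lambda_2
}
```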

4.3.

Training Loss Function and Image Quality Metrics

For the optimization of our diffractive multiplane QPI processors, a loss function based on the normalized MSE was formulated to penalize the structural errors between the output quantitative phase image $\Phi_w(x, y)$ and its ideal counterpart $\Phi_w^{(GT)}(x, y)$ over each wavelength channel that corresponds to an individual input plane. The loss computation for each channel is structured as follows:

Eq. (12)

$$L_w = \frac{1}{N_x^{(D)} N_y^{(D)}} \sum_{x=1}^{N_x^{(D)}} \sum_{y=1}^{N_y^{(D)}} \left| \Phi_w^{(GT)}(x, y) - \Phi_w(x, y) \right|^2.$$

During the training stage, the overall loss function used for the diffractive multiplane QPI processor is determined by averaging these loss terms over all the M spectral channels, thereby facilitating a concurrent optimization process across all the wavelength channels. Consequently, the aggregate loss function can be expressed as

Eq. (13)

$$L_{\mathrm{QPI}} = \frac{1}{M} \sum_{w=1}^{M} \alpha_w L_w,$$
where $\alpha_w$ represents the dynamic channel balance weight assigned to each wavelength channel's loss. This mechanism was designed to balance the performance among different wavelength channels during training. Initialized as 1, $\alpha_w$ undergoes adaptive adjustments in every training iteration, as per the following equation:

Eq. (14)

$$\alpha_w \leftarrow \max\left(0.1 \times (L_w - L_{\mathrm{mean}}) + \alpha_w,\; 0\right).$$

In this equation, $L_{\mathrm{mean}}$ denotes the average loss across all wavelength channels. Under this scheme, if a channel's loss is relatively low compared to the average, $\alpha_w$ decreases automatically, dynamically reducing the weight of that channel in the training process. Conversely, a higher-than-average loss in a particular wavelength channel leads to an increase in $\alpha_w$, amplifying the channel's balance weight and intensifying the penalty on its output image performance.
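For illustration, Eqs. (12) to (14) can be implemented in a few lines of PyTorch; this is a sketch with our own function name, where `outputs` and `targets` are lists of per-channel phase images and `alpha` is a plain (non-trainable) tensor of M weights initialized to ones:

```python
import torch

def qpi_training_loss(outputs, targets, alpha):
    """Per-channel normalized MSE (Eq. 12) combined with dynamic channel-balance
    weights (Eqs. 13 and 14); `alpha` is updated in place at every iteration."""
    losses = torch.stack([((gt - out) ** 2).mean()
                          for out, gt in zip(outputs, targets)])   # Eq. (12) per channel
    total = (alpha * losses).mean()                                # Eq. (13)
    with torch.no_grad():                                          # Eq. (14): weight update
        alpha.add_(0.1 * (losses - losses.mean())).clamp_(min=0.0)
    return total
```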

In our experimental validation and our output diffraction efficiency-related analyses, we employed a modified loss function obtained by adding an output diffraction efficiency-related loss term, $L_{\mathrm{Eff}}$, to the original loss function defined in Eq. (13), which is given by

Eq. (15)

$$L = L_{\mathrm{QPI}} + \beta_{\mathrm{Eff}} L_{\mathrm{Eff}},$$
where $\beta_{\mathrm{Eff}}$ denotes the weight coefficient associated with $L_{\mathrm{Eff}}$, which is defined as

Eq. (16)

$$L_{\mathrm{Eff}} = \eta_{\mathrm{thresh}} - \min(\eta, \eta_{\mathrm{thresh}}),$$
where $\eta_{\mathrm{thresh}}$ is a threshold set to maintain an adequate output diffraction efficiency. Specifically, the efficiency penalty is activated only when $\eta < \eta_{\mathrm{thresh}}$. During the training of the experimental model, the values of $\beta_{\mathrm{Eff}}$ and $\eta_{\mathrm{thresh}}$ were empirically set as 100 and 0.02, respectively; for the diffractive models trained with the output diffraction efficiency penalty shown in Fig. 8(a), the values of $\eta_{\mathrm{thresh}}$ were selected as 1%, 5%, and 10%, and the value of $\beta_{\mathrm{Eff}}$ was chosen as 100. $\eta$ represents the output diffraction efficiency and is defined as

Eq. (17)

$$\eta = \frac{\sum_{(x,y) \in S} |o(x, y)|^2}{\sum_{(x,y) \in S} |i(x, y)|^2},$$
where the summations are taken over the corresponding FOV $S$, and $o(x, y)$ and $i(x, y)$ denote the output and input optical fields, respectively.
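A hedged sketch of Eqs. (15) to (17) in the same PyTorch style (function name and argument conventions are ours) could be:

```python
import torch

def efficiency_loss(o, i, eta_thresh=0.02):
    """Output diffraction-efficiency penalty of Eqs. (16) and (17): the penalty is
    nonzero only when the efficiency eta drops below eta_thresh."""
    eta = (o.abs() ** 2).sum() / (i.abs() ** 2).sum()     # Eq. (17)
    return eta_thresh - torch.clamp(eta, max=eta_thresh)  # Eq. (16): min(eta, eta_thresh)

# Eq. (15): total loss, with beta_eff = 100 as used for the experimental model
# loss = qpi_training_loss(outputs, targets, alpha) + 100.0 * efficiency_loss(o, i)
```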

The linear correlation between the quantitative phase images $\Phi_w(x, y)$ produced by the diffractive multiplane QPI processor and their ground truth $\Phi_w^{(GT)}(x, y)$ was evaluated using the PCC metric. For a specific wavelength channel, the PCC value is quantified using the following equation:

Eq. (18)

$$\mathrm{PCC} = \frac{\sum \left( \Phi_w(x, y) - \overline{\Phi_w} \right) \left( \Phi_w^{(GT)}(x, y) - \overline{\Phi_w^{(GT)}} \right)}{\sqrt{\sum \left( \Phi_w(x, y) - \overline{\Phi_w} \right)^2 \sum \left( \Phi_w^{(GT)}(x, y) - \overline{\Phi_w^{(GT)}} \right)^2}},$$
where the summations run over all the pixels $(x, y)$ of the output FOV and the overbars denote mean values.

To evaluate the phase accuracy of our presented diffractive multiplane QPI processors, we calculated the normalized MAE between the reconstructed output phase profiles and the ground truth, defined as

Eq. (19)

$$\mathrm{MAE}_{\mathrm{Phase}} = \frac{1}{N_x^{(D)} N_y^{(D)} \alpha_{\mathrm{test}}\, \pi} \sum_{x=1}^{N_x^{(D)}} \sum_{y=1}^{N_y^{(D)}} \left| \Phi_w^{(GT)}(x, y) - \Phi_w(x, y) \right|,$$
where the testing phase contrast parameter $\alpha_{\mathrm{test}}$ was used to normalize the phase error based on the dynamic range of the input phase contrast.
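Both image-quality metrics are straightforward to compute; a minimal PyTorch sketch of Eqs. (18) and (19) (function names are ours) is:

```python
import torch

def pcc(phi, phi_gt):
    """Pearson correlation coefficient between the output and ground-truth
    quantitative phase images (Eq. 18)."""
    a = phi - phi.mean()
    b = phi_gt - phi_gt.mean()
    return (a * b).sum() / torch.sqrt((a ** 2).sum() * (b ** 2).sum())

def mae_phase(phi, phi_gt, alpha_test):
    """Normalized mean absolute phase error (Eq. 19); alpha_test * pi sets the
    dynamic range of the input phase contrast used for normalization."""
    return (phi_gt - phi).abs().mean() / (alpha_test * torch.pi)
```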

4.4.

Training Data Preparation and Other Implementation Details

For training our diffractive multiplane QPI processors, we assembled a data set of 110,000 images, which can be divided into two categories: (1) 55,000 handwritten digit images from the original MNIST training set, and (2) 55,000 custom-designed images featuring a variety of patterns, such as gratings, patches, and circles, each with unique spatial frequencies and orientations.66 In the training phase, we generated each set of input objects by randomly choosing M images from this data set, with each image encoded into the phase channel of one of the M object planes, thus creating an input object stack for the multiplane phase-imaging task.
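An illustrative sketch of this sampling procedure is given below; the function name and the $[0, \alpha_{\mathrm{train}}\pi]$ phase mapping are our assumptions, with `dataset` taken as a list of image tensors normalized to $[0, 1]$:

```python
import random
import torch

def sample_input_stack(dataset, M, alpha_train=1.0):
    """Assemble one training example: M images drawn at random from the combined
    MNIST + structured-pattern set, each encoded as the phase profile of one
    object plane in the input stack."""
    imgs = random.sample(dataset, M)                  # M distinct images, one per axial plane
    return [alpha_train * torch.pi * img for img in imgs]
```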

The numerical simulations and the training process for the diffractive multiplane QPI processors described in this study were carried out using Python (version 3.7.13) and PyTorch (version 2.5.0, Meta Platforms, Inc.). The Adam optimizer from PyTorch, with its default settings, was utilized. We set the learning rate at 0.001 and the batch size at 16. Our diffractive models were trained for 100 epochs on a workstation equipped with an Nvidia GeForce RTX 3090 GPU, an Intel Core i9-11900 CPU, and 128 GB of RAM. The training time for a 10-layer diffractive multiplane QPI processor design, as seen in Fig. S2 in the Supplementary Material, was roughly 12 days, which is a one-time design effort.

4.5.

Details of the Experimental Diffractive Multiplane QPI System

Our diffractive multiplane QPI design was tested using a terahertz continuous-wave (CW) system, as illustrated in Fig. S4 in the Supplementary Material. This setup involved a terahertz source comprising a Virginia Diode Inc. WR9.0M SGX/WR4.3x2 WR2.2 modular amplifier/multiplier chain (AMC), paired with a corresponding diagonal horn antenna (Virginia Diode Inc. WR2.2). At the AMC's input, a 10-dBm radio-frequency (RF) signal was introduced at 10.4166 or 11.1111 GHz ($f_{\mathrm{RF1}}$) and multiplied 36-fold, resulting in CW radiation at 0.375 or 0.4 THz, corresponding to an illumination wavelength of 0.8 or 0.75 mm, respectively. Additionally, for lock-in detection, the AMC's output was modulated with a 1-kHz square wave. The input aperture, 1.6 mm wide, was positioned 10 mm from the exit plane of the horn antenna. An XY positioning stage, comprising two Thorlabs NRT100 motorized stages, moved a single-pixel mixer (Virginia Diode Inc. WRI 2.2) to conduct a 2D scan of the output intensity distribution with a step size of 0.8 mm. The detector also received a 10-dBm RF signal at 11.1111 or 10.4166 GHz ($f_{\mathrm{RF2}}$) as the local oscillator, downconverting the output frequency to 1 GHz. This downconverted signal then passed through a low-noise amplifier (gain: 80 dB) and a KL Electronics 3C40-1000/T10-O/O bandpass filter centered at 1 GHz (±10 MHz), reducing noise from undesired frequency bands. After linear calibration with an HP 8495B tunable attenuator, the signal was relayed to a Mini-Circuits ZX47-60 low-noise power detector. A lock-in amplifier (Stanford Research SR830) then processed the detector's output voltage, using the 1-kHz square wave as a reference for linear-scale calibration.

For the fabrication of the diffractive multiplane QPI system depicted in Fig. 9(b), an Objet30 Pro 3D printer by Stratasys was employed to print the diffractive design and the input aperture. To ensure alignment with our optical forward model for the experimental diffractive design, a 3D-printed holder was fabricated using the same printer. This holder facilitated the precise positioning of both the input aperture and the printed diffractive layers, securing their accurate 3D assembly.

Disclosures

The authors declare no conflict of interest.

Code and Data Availability

All the data and methods needed to evaluate the conclusions of this work are presented in the main text and Supplementary Material. Additional data can be requested from the corresponding author. The codes used in this work use standard libraries and scripts that are publicly available in PyTorch.

Author Contributions

A.O. conceived and initiated the research. C.S. and J.L. conducted the numerical simulations and processed the resulting data. C.S., J.L., L.B., and Y.L. contributed to the PyTorch implementation of the diffractive processor. C.S. and T.G. conducted the experiments. All the authors contributed to the preparation of the manuscript. A.O. supervised the research.

Acknowledgments

This research was supported by the U.S. Department of Energy, Office of Basic Energy Sciences, Division of Materials Sciences and Engineering (Grant No. DE-SC0023088).

References

1. 

G. Popescu, Quantitative Phase Imaging of Cells and Tissues, McGraw-Hill Education (2011). Google Scholar

2. 

Y. Park, C. Depeursinge and G. Popescu, “Quantitative phase imaging in biomedicine,” Nat. Photonics, 12 578 –589 https://doi.org/10.1038/s41566-018-0253-x NPAHBY 1749-4885 (2018). Google Scholar

3. 

J. Park et al., “Artificial intelligence-enabled quantitative phase imaging methods for life sciences,” Nat. Methods, 20 1645 –1660 https://doi.org/10.1038/s41592-023-02041-4 1548-7091 (2023). Google Scholar

4. 

P. Marquet et al., “Digital holographic microscopy: a noninvasive contrast imaging technique allowing quantitative visualization of living cells with subwavelength axial accuracy,” Opt. Lett., 30 468 –470 https://doi.org/10.1364/OL.30.000468 OPLEDP 0146-9592 (2005). Google Scholar

5. 

T. Ikeda et al., “Hilbert phase microscopy for investigating fast dynamics in transparent systems,” Opt. Lett., 30 1165 –1167 https://doi.org/10.1364/OL.30.001165 OPLEDP 0146-9592 (2005). Google Scholar

6. 

G. A. Dunn and D. Zicha, “Phase-shifting interference microscopy applied to the analysis of cell behaviour,” in Symp. Soc. Exp. Biol., 91 –106 (1993). Google Scholar

7. 

S. K. Debnath and Y. Park, “Real-time quantitative phase imaging with a spatial phase-shifting algorithm,” Opt. Lett., 36 4677 –4679 https://doi.org/10.1364/OL.36.004677 OPLEDP 0146-9592 (2011). Google Scholar

8. 

G. Popescu et al., “Fourier phase microscopy for investigation of biological structures and dynamics,” Opt. Lett., 29 2503 –2505 https://doi.org/10.1364/OL.29.002503 OPLEDP 0146-9592 (2004). Google Scholar

9. 

B. Bhaduri et al., “Diffraction phase microscopy with white light,” Opt. Lett., 37 1094 –1096 https://doi.org/10.1364/OL.37.001094 OPLEDP 0146-9592 (2012). Google Scholar

10. 

G. Popescu, “Quantitative phase imaging of nanoscale cell structure and dynamics,” Methods Cell Biol., 90 87 –115 https://doi.org/10.1016/S0091-679X(08)00805-4 MCBLAG 0091-679X (2008). Google Scholar

11. 

R. Kasprowicz, R. Suman and P. O’Toole, “Characterising live cell behaviour: traditional label-free and quantitative phase imaging approaches,” Int. J. Biochem. Cell Biol., 84 89 –95 https://doi.org/10.1016/j.biocel.2017.01.004 IJBBFU 1357-2725 (2017). Google Scholar

12. 

P. Wang et al., “Nanoscale nuclear architecture for cancer diagnosis beyond pathology via spatial-domain low-coherence quantitative phase microscopy,” J. Biomed. Opt., 15 066028 https://doi.org/10.1117/1.3523618 JBOPFO 1083-3668 (2010). Google Scholar

13. 

R. Horstmeyer et al., “Digital pathology with Fourier ptychography,” Comput. Med. Imaging Graphics, 42 38 –43 https://doi.org/10.1016/j.compmedimag.2014.11.005 (2015). Google Scholar

14. 

Y. Rivenson et al., “PhaseStain: the digital staining of label-free quantitative phase microscopy images using deep learning,” Light Sci. Appl., 8 23 https://doi.org/10.1038/s41377-019-0129-y (2019). Google Scholar

15. 

K. C. M. Lee et al., “Quantitative phase imaging flow cytometry for ultra-large-scale single-cell biophysical phenotyping,” Cytometry A, 95 510 –520 https://doi.org/10.1002/cyto.a.23765 (2019). Google Scholar

16. 

G. Popescu et al., “Optical imaging of cell mass and growth dynamics,” Am. J. Physiol.-Cell Physiol., 295 C538 –C544 https://doi.org/10.1152/ajpcell.00121.2008 (2008). Google Scholar

17. 

L. Kastl et al., “Quantitative phase imaging for cell culture quality control,” Cytometry A, 91 470 –481 https://doi.org/10.1002/cyto.a.23082 (2017). Google Scholar

18. 

Z. El-Schich, A. Leida Mölder and A. Gjörloff Wingren, “Quantitative phase imaging for label-free analysis of cancer cells—focus on digital holographic microscopy,” Appl. Sci., 8 (7), 1027 https://doi.org/10.3390/app8071027 (2018). Google Scholar

19. 

D. Roitshtain et al., “Quantitative phase microscopy spatial signatures of cancer cells,” Cytometry A, 91 482 –493 https://doi.org/10.1002/cyto.a.23100 (2017). Google Scholar

20. 

H. Wang et al., “Early detection and classification of live bacteria using time-lapse coherent imaging and deep learning,” Light Sci. Appl., 9 118 https://doi.org/10.1038/s41377-020-00358-9 (2020). Google Scholar

21. 

T. Liu et al., “Rapid and stain-free quantification of viral plaque via lens-free holography and deep learning,” Nat. Biomed. Eng., 7 1040 –1052 https://doi.org/10.1038/s41551-023-01057-7 (2023). Google Scholar

22. 

K. C. Lee et al., “Multi-ATOM: ultrahigh-throughput single-cell quantitative phase imaging with subcellular resolution,” J. Biophotonics, 12 e201800479 https://doi.org/10.1002/jbio.201800479 (2019). Google Scholar

23. 

O. Mudanyali et al., “Wide-field optical detection of nanoparticles using on-chip microscopy and self-assembled nanolenses,” Nat. Photonics, 7 247 –254 https://doi.org/10.1038/nphoton.2012.337 NPAHBY 1749-4885 (2013). Google Scholar

24. 

L. Zhong et al., “Formation of monatomic metallic glasses through ultrafast liquid quenching,” Nature, 512 177 –180 https://doi.org/10.1038/nature13617 (2014). Google Scholar

25. 

R. Stoian and J.-P. Colombier, “Advances in ultrafast laser structuring of materials at the nanoscale,” Nanophotonics, 9 4665 https://doi.org/10.1515/nanoph-2020-0310 (2020). Google Scholar

26. 

Z. Wang et al., “Spatial light interference microscopy (SLIM),” Opt. Express, 19 1016 –1026 https://doi.org/10.1364/OE.19.001016 OPEXFF 1094-4087 (2011). Google Scholar

27. 

T. H. Nguyen et al., “Gradient light interference microscopy for 3D imaging of unlabeled specimens,” Nat. Commun., 8 210 https://doi.org/10.1038/s41467-017-00190-7 NCAOBW 2041-1723 (2017). Google Scholar

28. 

G. Zheng et al., “Concept, implementations and applications of Fourier ptychography,” Nat. Rev. Phys., 3 207 –223 https://doi.org/10.1038/s42254-021-00280-y (2021). Google Scholar

29. 

F. Charrière et al., “Cell refractive index tomography by digital holographic microscopy,” Opt. Lett., 31 178 –180 https://doi.org/10.1364/OL.31.000178 OPLEDP 0146-9592 (2006). Google Scholar

30. 

Y. Sung et al., “Optical diffraction tomography for high resolution live cell imaging,” Opt. Express, 17 266 –277 https://doi.org/10.1364/OE.17.000266 OPEXFF 1094-4087 (2009). Google Scholar

31. 

A. Matlock and L. Tian, “High-throughput, volumetric quantitative phase imaging with multiplexed intensity diffraction tomography,” Biomed. Opt. Express, 10 6432 –6448 https://doi.org/10.1364/BOE.10.006432 BOEICL 2156-7085 (2019). Google Scholar

32. 

M. H. Jenkins and T. K. Gaylord, “Three-dimensional quantitative phase imaging via tomographic deconvolution phase microscopy,” Appl. Opt., 54 9213 –9227 https://doi.org/10.1364/AO.54.009213 APOPAI 0003-6935 (2015). Google Scholar

33. 

Y. Rivenson et al., “Phase recovery and holographic image reconstruction using deep learning in neural networks,” Light Sci. Appl., 7 17141 https://doi.org/10.1038/lsa.2017.141 (2018). Google Scholar

34. 

Y. Jo et al., “Quantitative phase imaging and artificial intelligence: a review,” IEEE J. Sel. Top. Quantum Electron., 25 (1), 6800914 https://doi.org/10.1109/JSTQE.2018.2859234 IJSQEN 1077-260X (2018). Google Scholar

35. 

Y. Rivenson, Y. Wu and A. Ozcan, “Deep learning in holography and coherent imaging,” Light Sci. Appl., 8 85 https://doi.org/10.1038/s41377-019-0196-0 (2019). Google Scholar

36. 

Y. Jo et al., “Holographic deep learning for rapid optical screening of anthrax spores,” Sci. Adv., 3 e1700606 https://doi.org/10.1126/sciadv.1700606 STAMCV 1468-6996 (2017). Google Scholar

37. 

F. Yi, I. Moon and B. Javidi, “Automated red blood cells extraction from holographic images using fully convolutional neural networks,” Biomed. Opt. Express, 8 4466 –4479 https://doi.org/10.1364/BOE.8.004466 BOEICL 2156-7085 (2017). Google Scholar

38. 

V. Ayyappan et al., “Identification and staging of B-cell acute lymphoblastic leukemia using quantitative phase imaging and machine learning,” ACS Sens., 5 3281 –3289 https://doi.org/10.1021/acssensors.0c01811 (2020). Google Scholar

39. 

F. Wang et al., “Phase imaging with an untrained neural network,” Light Sci. Appl., 9 77 https://doi.org/10.1038/s41377-020-0302-3 (2020). Google Scholar

40. 

H. Chen et al., “Fourier Imager Network (FIN): a deep neural network for hologram reconstruction with superior external generalization,” Light Sci. Appl., 11 254 https://doi.org/10.1038/s41377-022-00949-8 (2022). Google Scholar

41. 

L. Huang et al., “Self-supervised learning of hologram reconstruction using physics consistency,” Nat. Mach. Intell., 5 895 –907 https://doi.org/10.1038/s42256-023-00704-7 (2023). Google Scholar

42. 

D. Pirone et al., “Speeding up reconstruction of 3D tomograms in holographic flow cytometry via deep learning,” Lab. Chip, 22 793 https://doi.org/10.1039/D1LC01087E LCAHAM 1473-0197 (2022). Google Scholar

43. 

H. Chen et al., “eFIN: enhanced Fourier imager network for generalizable autofocusing and pixel super-resolution in holographic imaging,” IEEE J. Sel. Top. Quantum Electron., 29 (4), 1 –10 https://doi.org/10.1109/JSTQE.2023.3248684 IJSQEN 1077-260X (2023). Google Scholar

44. 

D. Pirone et al., “Label-free liquid biopsy through the identification of tumor cells by machine learning-powered tomographic phase imaging flow cytometry,” Sci. Rep., 13 6042 https://doi.org/10.1038/s41598-023-32110-9 SRCEC3 2045-2322 (2023). Google Scholar

45. 

T. Nguyen et al., “Automatic phase aberration compensation for digital holographic microscopy based on deep learning background detection,” Opt. Express, 25 15043 –15057 https://doi.org/10.1364/OE.25.015043 OPEXFF 1094-4087 (2017). Google Scholar

46. 

V. K. Lam et al., “Quantitative assessment of cancer cell morphology and motility using telecentric digital holographic microscopy and machine learning,” Cytometry A, 93 334 –345 https://doi.org/10.1002/cyto.a.23316 (2018). Google Scholar

47. 

Y. Wu et al., “Extended depth-of-field in holographic imaging using deep-learning-based autofocusing and phase recovery,” Optica, 5 704 –710 https://doi.org/10.1364/OPTICA.5.000704 (2018). Google Scholar

48. 

H. Byeon, T. Go and S. J. Lee, “Deep learning-based digital in-line holographic microscopy for high resolution with extended field of view,” Opt. Laser Technol., 113 77 –86 https://doi.org/10.1016/j.optlastec.2018.12.014 OLTCAS 0030-3992 (2019). Google Scholar

49. 

Y. Wu et al., “Bright-field holography: cross-modality deep learning enables snapshot 3D imaging with bright-field contrast using a single hologram,” Light Sci. Appl., 8 25 https://doi.org/10.1038/s41377-019-0139-9 (2019). Google Scholar

50. 

A. Matlock, J. Zhu and L. Tian, “Multiple-scattering simulator-trained neural network for intensity diffraction tomography,” Opt. Express, 31 4094 –4107 https://doi.org/10.1364/OE.477396 OPEXFF 1094-4087 (2023). Google Scholar

51. 

I. Kang, A. Goy and G. Barbastathis, “Dynamical machine learning volumetric reconstruction of objects’ interiors from limited angular views,” Light Sci. Appl., 10 74 https://doi.org/10.1038/s41377-021-00512-x (2021). Google Scholar

52. 

R. Liu et al., “Recovery of continuous 3D refractive index maps from discrete intensity-only measurements using neural fields,” Nat. Mach. Intell., 4 781 –791 https://doi.org/10.1038/s42256-022-00530-3 (2022). Google Scholar

53. 

G. Choi et al., “Cycle-consistent deep learning approach to coherent noise reduction in optical diffraction tomography,” Opt. Express, 27 4927 –4943 https://doi.org/10.1364/OE.27.004927 OPEXFF 1094-4087 (2019). Google Scholar

54. 

X. Lin et al., “All-optical machine learning using diffractive deep neural networks,” Science, 361 1004 –1008 https://doi.org/10.1126/science.aat8084 SCIEAS 0036-8075 (2018). Google Scholar

55. 

Y. Luo et al., “Design of task-specific optical systems using broadband diffractive neural networks,” Light Sci. Appl., 8 112 https://doi.org/10.1038/s41377-019-0223-1 (2019). Google Scholar

56. 

D. Mengu et al., “Analysis of diffractive optical neural networks and their integration with electronic neural networks,” IEEE J. Sel. Top. Quantum Electron., 26 (1), 1 –14 https://doi.org/10.1109/JSTQE.2019.2921376 IJSQEN 1077-260X (2019). Google Scholar

57. 

D. Mengu et al., “Misalignment resilient diffractive optical networks,” Nanophotonics, 9 4207 –4219 https://doi.org/10.1515/nanoph-2020-0291 (2020). Google Scholar

58. 

O. Kulce et al., “All-optical information-processing capacity of diffractive surfaces,” Light Sci. Appl., 10 25 https://doi.org/10.1038/s41377-020-00439-9 (2021). Google Scholar

59. 

O. Kulce et al., “All-optical synthesis of an arbitrary linear transformation using diffractive surfaces,” Light Sci. Appl., 10 196 https://doi.org/10.1038/s41377-021-00623-5 (2021). Google Scholar

60. 

D. Mengu and A. Ozcan, “All-optical phase recovery: diffractive computing for quantitative phase imaging,” Adv. Opt. Mater., 10 2200281 https://doi.org/10.1002/adom.202200281 2195-1071 (2022). Google Scholar

61. 

B. Bai et al., “To image, or not to image: class-specific diffractive cameras with all-optical erasure of undesired objects,” eLight, 2 14 (2022). https://doi.org/10.1186/s43593-022-00021-3 Google Scholar

62. 

J. Li et al., “Massively parallel universal linear transformations using a wavelength-multiplexed diffractive optical network,” Adv. Photonics, 5 016003 https://doi.org/10.1117/1.AP.5.1.016003 AOPAC7 1943-8206 (2023). Google Scholar

63. 

J. Li et al., “Unidirectional imaging using deep learning–designed materials,” Sci. Adv., 9 eadg1505 https://doi.org/10.1126/sciadv.adg1505 STAMCV 1468-6996 (2023). Google Scholar

64. 

D. Mengu et al., “Snapshot multispectral imaging using a diffractive optical network,” Light Sci. Appl., 12 86 https://doi.org/10.1038/s41377-023-01135-0 (2023). Google Scholar

65. 

C.-Y. Shen et al., “Multispectral quantitative phase imaging using a diffractive optical network,” Adv. Intell. Syst., 5 2300300 https://doi.org/10.1002/aisy.202300300 (2023). Google Scholar

66. 

M. S. Sakib Rahman and A. Ozcan, “Computer-free, all-optical reconstruction of holograms using diffractive networks,” ACS Photonics, 8 3375 –3384 https://doi.org/10.1021/acsphotonics.1c01365 (2021). Google Scholar

67. 

J. Li et al., “Spectrally encoded single-pixel machine vision using diffractive networks,” Sci. Adv., 7 eabd7690 https://doi.org/10.1126/sciadv.abd7690 STAMCV 1468-6996 (2021). Google Scholar

68. 

Y. Li et al., “Quantitative phase imaging (QPI) through random diffusers using a diffractive optical network,” Light Adv. Manuf., 4 17 https://doi.org/10.37188/lam.2023.017 (2023). Google Scholar

69. 

S. Dottermusch et al., “Exposure-dependent refractive index of Nanoscribe IP-Dip photoresist layers,” Opt. Lett., 44 29 https://doi.org/10.1364/OL.44.000029 OPLEDP 0146-9592 (2019). Google Scholar

70. 

D. Mengu et al., “Snapshot multispectral imaging using a diffractive optical network,” Light Sci. Appl., 12 86 https://doi.org/10.1038/s41377-023-01135-0 (2023). Google Scholar

71. 

B. Chen and J. J. Stamnes, “Validity of diffraction tomography based on the first Born and the first Rytov approximations,” Appl. Opt., 37 2996 –3006 https://doi.org/10.1364/AO.37.002996 APOPAI 0003-6935 (1998). Google Scholar

72. 

Y. Li et al., “Universal polarization transformations: spatial programming of polarization scattering matrices using a deep learning-designed diffractive polarization transformer,” Adv. Mater., 35 2303395 https://doi.org/10.1002/adma.202303395 ADVMEW 0935-9648 (2023). Google Scholar

73. 

C.-Y. Shen et al., “All-optical phase conjugation using diffractive wavefront processing,” Nat. Commun., 15 4989 https://doi.org/10.1038/s41467-024-49304-y (2024). Google Scholar

74. 

M. S. S. Rahman et al., “Universal linear intensity transformations using spatially incoherent diffractive processors,” Light Sci. Appl., 12 195 https://doi.org/10.1038/s41377-023-01234-y (2023). Google Scholar

75. 

X. Yang et al., “Complex-valued universal linear transformations and image encryption using spatially incoherent diffractive networks,” Adv. Photonics Nexus, 3 016010 https://doi.org/10.1117/1.APN.3.1.016010 (2024). Google Scholar

76. 

E. Goi et al., “Nanoprinted high-neuron-density optical linear perceptrons performing near-infrared inference on a CMOS chip,” Light Sci. Appl., 10 40 https://doi.org/10.1038/s41377-021-00483-z (2021). Google Scholar

77. 

H. Chen et al., “Diffractive deep neural networks at visible wavelengths,” Engineering, 7 1483 –1491 https://doi.org/10.1016/j.eng.2020.07.032 ENGNA2 0013-7782 (2021). Google Scholar

78. 

B. Bai et al., “Data‐class‐specific all‐optical transformations and encryption,” Adv. Mater., 35 (31), e2212091 https://doi.org/10.1002/adma.202212091 (2023). Google Scholar

79. 

C. Zuo et al., “Lensless phase microscopy and diffraction tomography with multi-angle and multi-wavelength illuminations using a LED matrix,” Opt. Express, 23 14314 –14328 https://doi.org/10.1364/OE.23.014314 OPEXFF 1094-4087 (2015). Google Scholar

80. 

W. Luo et al., “Pixel super-resolution using wavelength scanning,” Light Sci. Appl., 5 e16060 https://doi.org/10.1038/lsa.2016.60 (2016). Google Scholar

81. 

X. Wu et al., “Wavelength-scanning pixel-super-resolved lens-free on-chip quantitative phase microscopy with a color image sensor,” APL Photonics, 9 016111 https://doi.org/10.1063/5.0175672 (2024). Google Scholar

Biography

Che-Yung Shen received his MS degree in optics and photonics from National Yang Ming Chiao Tung University, Taiwan. He is currently a PhD student in the Electrical and Computer Engineering Department at the University of California, Los Angeles (UCLA). His research interests include computational imaging, machine learning, and optics.

Jingxi Li received his PhD in the Electrical and Computer Engineering Department at UCLA. His work focuses on optical computing and information processing using diffractive networks and computational optical imaging for biomedical applications.

Yuhang Li received his BS degree in optical science and engineering from Zhejiang University, Hangzhou, China, in 2021. He is currently working toward his PhD in the Electrical and Computer Department at UCLA. His work focuses on the development of computational imaging, machine learning, and optics.

Tianyi Gan received his BS degree in physics from Peking University, Beijing, China, in 2021. He is currently a PhD student in the Electrical and Computer Engineering Department at UCLA. His research interests are terahertz source and imaging.

Langxing Bai is currently an undergraduate student in the Department of Computer Science at UCLA. His research interests are computational imaging and machine learning.

Mona Jarrahi is a professor and a Northrop Grumman Endowed chair in the Electrical and Computer Engineering Department at UCLA and the director of the Terahertz Electronics Laboratory. She has made significant contributions to the development of ultrafast electronic and optoelectronic devices and integrated systems for terahertz, infrared, and millimeter-wave sensing, imaging, computing, and communication systems by utilizing innovative materials, nanostructures, and quantum structures, as well as innovative plasmonic and optical concepts.

Aydogan Ozcan is the chancellor's professor and the Volgenau chair for engineering innovation at UCLA and an HHMI professor at the Howard Hughes Medical Institute. He is also the associate director of the California NanoSystems Institute. He is an elected fellow of the National Academy of Inventors and holds >75 issued/granted patents in microscopy, holography, computational imaging, sensing, mobile diagnostics, nonlinear optics, and fiber-optics. He is also the author of one book and the co-author of >1000 peer-reviewed publications in leading scientific journals/conferences. He is an elected fellow of Optica, AAAS, SPIE, IEEE, AIMBE, RSC, APS, and the Guggenheim Foundation and is a lifetime fellow member of Optica, NAI, AAAS, SPIE, and APS. He is also listed as a highly cited researcher by Web of Science, Clarivate.

CC BY: © The Authors. Published by SPIE and CLP under a Creative Commons Attribution 4.0 International License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Che-Yung Shen, Jingxi Li, Yuhang Li, Tianyi Gan, Langxing Bai, Mona Jarrahi, and Aydogan Ozcan "Multiplane quantitative phase imaging using a wavelength-multiplexed diffractive optical processor," Advanced Photonics 6(5), 056003 (25 July 2024). https://doi.org/10.1117/1.AP.6.5.056003
Received: 22 March 2024; Accepted: 2 July 2024; Published: 25 July 2024