Diffractive deep neural networks utilize successive, spatially-engineered diffractive surfaces trained via deep learning to all-optically process input optical fields based on a desired transformation. We present the design of a broadband diffractive network that can all-optically perform a large set of arbitrary complex-valued linear transformations, wherein the input/output data are encoded at W different wavelength channels, each assigned to a unique linear transformation, with W ranging from, e.g., >100 up to ~2000. This broadband diffractive processor may foster the development of all-optical visual processors with substantial data bandwidth and parallel computation capabilities, creating intelligent machine vision systems for all-optical processing of multi-color or hyperspectral objects/scenes.
We introduce a unidirectional imager that facilitates polarization-insensitive and broadband operation using isotropic, linear materials. This design comprises diffractive layers with hundreds of thousands of learnable phase features, trained using deep learning to enable power-efficient, high-fidelity imaging in the forward direction (A-to-B), while simultaneously inhibiting optical transmission and image formation in the reverse direction (B-to-A). We experimentally tested our designs using terahertz radiation, providing a good match with our simulations. Furthermore, we demonstrated a wavelength-selective unidirectional imager that performs unidirectional imaging along A-to-B at a predetermined wavelength, while at a second wavelength, the direction of unidirectional operation is reversed to B-to-A.
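To make the training objective behind such a unidirectional imager concrete, the following minimal PyTorch sketch combines a forward (A-to-B) image-fidelity term with a penalty on any power transmitted in the reverse (B-to-A) direction; the loss form, weights, and tensor shapes are illustrative assumptions (the designs described above also optimize power efficiency), not the authors' exact objective:

    import torch

    def unidirectional_loss(out_fwd, target, out_bwd, w_fwd=1.0, w_bwd=1.0):
        # hypothetical objective: reward A-to-B imaging fidelity, penalize B-to-A transmission
        fidelity = torch.mean((out_fwd - target) ** 2)  # forward image-formation error
        leakage = torch.mean(out_bwd)                   # mean transmitted power in the reverse direction
        return w_fwd * fidelity + w_bwd * leakage

    # toy usage with stand-in intensity patterns
    out_fwd = torch.rand(4, 64, 64)         # simulated A-to-B output intensities
    target = torch.rand(4, 64, 64)          # ground-truth images
    out_bwd = 0.01 * torch.rand(4, 64, 64)  # simulated B-to-A output intensities
    print(unidirectional_loss(out_fwd, target, out_bwd))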
We present a universal polarization transformer composed of diffractive layers and linear polarizer arrays, capable of all-optically synthesizing a large set of complex-valued polarization scattering matrices between the polarization states at different positions within its input and output fields-of-view. We numerically demonstrated that our deep learning-based design could synthesize 10,000 different spatially-encoded polarization scattering matrices within a single diffractive volume. Using wire-grid polarizers and 3D-printed diffractive layers, we also demonstrated an experimental proof-of-concept by achieving an all-optical polarization permutation operation with 16 polarization scattering matrices. This framework can inspire new devices with versatile polarization control capabilities in various fields.
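The target input-output behavior of such a polarization transformer can be summarized as one complex-valued 2x2 Jones (scattering) matrix per spatially-encoded position pair; the numpy sketch below, with randomly generated matrices standing in for the trained designs, illustrates only this mapping and not the diffractive/polarizer physics that realizes it:

    import numpy as np

    K = 10_000  # number of spatially-encoded 2x2 scattering matrices, per the abstract
    rng = np.random.default_rng(0)

    # stand-in bank of complex-valued 2x2 polarization scattering matrices, one per position pair
    S = rng.standard_normal((K, 2, 2)) + 1j * rng.standard_normal((K, 2, 2))

    # input Jones vectors (Ex, Ey) at K input positions
    E_in = rng.standard_normal((K, 2)) + 1j * rng.standard_normal((K, 2))

    # the trained diffractive volume would realize this per-position mapping all-optically
    E_out = np.einsum('kij,kj->ki', S, E_in)
    print(E_out.shape)  # (10000, 2)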
We present a diffractive deep neural network (D2NN) design that all-optically performs distinct transformations for different input data classes. This class-specific transformation D2NN processes the input optical field, generating an output optical field whose amplitude or intensity closely approximates the transformed/encrypted version of the input under a transformation matrix specific to the corresponding data class. The original information can be recovered only by applying the decryption key specific to that data class to the diffractive network's output field-of-view. The efficacy of the presented class-specific image encryption framework was validated both numerically and experimentally, tested at 1550 nm and 0.75 mm wavelengths.
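As an idealized numerical picture of this scheme (ignoring that the physical output encodes only amplitude or intensity), the numpy sketch below uses a random complex matrix per class as the encryption transform and its pseudo-inverse as the class-specific decryption key; the matrix sizes and class count are illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 16 * 16  # flattened field-of-view (hypothetical 16x16 pixels)

    # one hypothetical complex transformation (encryption) matrix per data class
    T = {c: rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)) for c in range(3)}

    x = rng.random(n)  # amplitude-encoded input image belonging to class 2
    y = T[2] @ x       # idealized encrypted output of the diffractive network

    # class-specific decryption keys: the pseudo-inverse of each class transformation
    x_good = np.linalg.pinv(T[2]) @ y  # matching key recovers the input
    x_bad = np.linalg.pinv(T[0]) @ y   # mismatched key yields garbage

    print(np.allclose(x_good.real, x, atol=1e-6))  # True, up to numerical error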
We directly transfer optical information around arbitrarily-shaped, fully-opaque occlusions that partially or entirely block the line-of-sight between the transmitter and receiver apertures. An electronic neural network (encoder) produces an encoded phase representation of the optical information to be transmitted. Despite being obstructed by the opaque occlusion, this phase-encoded wave is decoded by a diffractive optical network at the receiver. We experimentally validated our framework in the terahertz spectrum by communicating images around different opaque occlusions using a 3D-printed diffractive decoder. This scheme can operate at any wavelength and be adopted for various applications in emerging free-space communication systems.
KEYWORDS: Free space optics, Diffusers, Education and training, Deep learning, 3D modeling, Optical transmission, Neural networks, Mathematical optimization, Light sources and illumination, Image transmission
We report an optical diffractive decoder with an electronic encoder network to facilitate the accurate transmission of optical information of interest through unknown random phase diffusers along the optical path. This hybrid electronic-optical model, trained via supervised learning, comprises a convolutional neural network-based encoder jointly optimized with passive diffractive layers. After this joint training using deep learning, our hybrid model can accurately transfer optical information even in the presence of unknown phase diffusers, generalizing to new random diffusers never seen before. We experimentally validated this framework using a 3D-printed diffractive network, axially spanning <70λ, where λ = 0.75 mm is the illumination wavelength.
We present data class-specific transformation diffractive networks that all-optically perform different preassigned transformations for different input data classes. The visual information encoded in the amplitude, phase, or intensity channel of the input field is all-optically processed and transformed/encrypted by the diffractive network. The amplitude or intensity of the resulting field approximates the transformed/encrypted input information under the transformation matrix specifically assigned to that data class. We experimentally validated this class-specific transformation framework by designing and fabricating two diffractive networks operating at 1550 nm and 0.75 mm wavelengths. The presented framework provides a fast, secure, and energy-efficient solution for data encryption applications.
We present the first demonstration of unidirectional imaging that permits image formation along only one direction, from an input field-of-view to an output field-of-view, while eliminating optical transmission in the reverse direction. This unidirectional imager is formed by diffractive layers composed of isotropic linear materials spatially-coded with thousands of phase features optimized using deep learning. We experimentally tested our diffractive design using a terahertz setup and 3D-printed diffractive layers, which revealed a good agreement with our numerical simulations. The designs of these diffractive unidirectional imagers are compact and can be scaled to operate at different parts of the electromagnetic spectrum.
We explore the parallel information processing capacity of a broadband diffractive optical network and demonstrate that a single diffractive network can perform a large group of arbitrarily-selected, complex-valued linear transformations between its input and output fields-of-view at different wavelengths, accessed sequentially or simultaneously. Through deep learning-based training of the thickness values of its diffractive features, we demonstrate that a wavelength-multiplexed diffractive processor can implement W>180 complex-valued linear transformations with negligible error when its number of trainable diffractive features approaches 2W×I×O, where I and O refer to the number of input and output pixels, respectively.
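As a quick worked example of this feature budget (the pixel counts below are illustrative assumptions, not a specific configuration from the paper):

    # feature-count condition: N approaches 2*W*I*O trainable diffractive features
    W, I, O = 180, 64, 64  # 180 wavelength channels; 8x8-pixel input and output fields-of-view
    N_min = 2 * W * I * O
    print(f"{N_min:,} trainable diffractive features")  # 1,474,560 (~1.5 million)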
Free-space optical information transfer through diffusive media is critical in many applications, such as biomedical devices and optical communication, but remains challenging due to random, unknown perturbations in the optical path. We demonstrate an optical diffractive decoder with electronic encoding to accurately transfer the optical information of interest, corresponding to, e.g., any arbitrary input object or message, through unknown random phase diffusers along the optical path. This hybrid electronic-optical model, trained using supervised learning, comprises a convolutional neural network-based electronic encoder and successive passive diffractive layers that are jointly optimized. After their joint training using deep learning, our hybrid model can transfer optical information through unknown phase diffusers, demonstrating generalization to new random diffusers never seen before. The resulting electronic-encoder and optical-decoder model was experimentally validated using a 3D-printed diffractive network that axially spans <70λ, where λ = 0.75 mm is the illumination wavelength in the terahertz spectrum, carrying the desired optical information through random unknown diffusers. The presented framework can be physically scaled to operate at different parts of the electromagnetic spectrum, without retraining its components, and would offer low-power and compact solutions for optical information transfer in free space through unknown random diffusive media.
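A minimal PyTorch sketch of this joint-training idea is given below, assuming angular-spectrum free-space propagation, a small CNN encoder that outputs a phase pattern, learnable phase-only diffractive layers, and a fresh random phase diffuser drawn every iteration; all layer counts, grid sizes, and physical parameters are illustrative stand-ins for the authors' actual architecture:

    import torch
    import torch.nn as nn

    n, wl, dx, z = 64, 0.75e-3, 0.4e-3, 20e-3  # grid size, wavelength (0.75 mm), pixel pitch, spacing

    # angular-spectrum transfer function for free-space propagation over distance z
    fx = torch.fft.fftfreq(n, d=dx)
    FX, FY = torch.meshgrid(fx, fx, indexing="ij")
    arg = 1 - (wl * FX) ** 2 - (wl * FY) ** 2
    H = torch.exp(2j * torch.pi * z / wl * torch.sqrt(arg.clamp(min=0))) * (arg > 0)

    def propagate(field):
        return torch.fft.ifft2(torch.fft.fft2(field) * H)

    class Encoder(nn.Module):  # electronic CNN encoder: image -> phase-encoded field
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                                     nn.Conv2d(16, 1, 3, padding=1))
        def forward(self, img):
            phase = torch.pi * torch.tanh(self.net(img))  # bounded phase in [-pi, pi]
            return torch.exp(1j * phase.squeeze(1))       # unit-amplitude phase representation

    class DiffractiveDecoder(nn.Module):  # passive diffractive layers with learnable phase
        def __init__(self, num_layers=4):
            super().__init__()
            self.phases = nn.ParameterList([nn.Parameter(torch.zeros(n, n)) for _ in range(num_layers)])
        def forward(self, field):
            for p in self.phases:
                field = propagate(field) * torch.exp(1j * p)
            return propagate(field)

    enc, dec = Encoder(), DiffractiveDecoder()
    opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)

    for step in range(1000):
        img = torch.rand(4, 1, n, n)                            # stand-in training images
        field = enc(img)
        diffuser = torch.exp(2j * torch.pi * torch.rand(n, n))  # new unknown diffuser each step
        field = propagate(field) * diffuser                     # diffuser along the optical path
        out = dec(field)
        loss = torch.mean((out.abs() ** 2 - img.squeeze(1)) ** 2)  # recover the image intensity
        opt.zero_grad(); loss.backward(); opt.step()

Sampling a fresh diffuser at every iteration is what pushes the encoder-decoder pair toward diffuser-independent representations, which is the sketch's analog of the generalization to unseen diffusers reported above.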
We present a diffractive camera that performs class-specific imaging of target objects, while all-optically and instantaneously erasing the objects from other classes during light propagation through thin diffractive layers, maximizing privacy preservation. We experimentally validated this class-specific camera design by 3D-printing the resulting diffractive layers (optimized through deep learning) and selectively imaging MNIST handwritten digits using the assembled camera system under terahertz radiation. The presented object class-specific camera is passive and does not require external computing power, providing a data-efficient solution to task-specific and privacy-aware modern imaging applications.
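One way to read this design is as a class-conditional training objective: target-class objects must be imaged faithfully, while all other classes must leave near-zero intensity at the output. A minimal PyTorch sketch of such a per-sample loss follows; the weighting and tensor shapes are illustrative assumptions rather than the authors' exact objective:

    import torch

    def privacy_camera_loss(out, target, is_target_class, w_img=1.0, w_erase=1.0):
        # out, target: (batch, n, n) output/ground-truth intensities
        # is_target_class: (batch,) bool, True for objects the camera should image
        img_err = ((out - target) ** 2).mean(dim=(1, 2))  # per-sample imaging fidelity
        residual = (out ** 2).mean(dim=(1, 2))            # per-sample leftover output power
        per_sample = torch.where(is_target_class, w_img * img_err, w_erase * residual)
        return per_sample.mean()

    # toy usage: first two samples are target-class, last two should be erased
    out = torch.rand(4, 28, 28)
    target = torch.rand(4, 28, 28)
    flags = torch.tensor([True, True, False, False])
    print(privacy_camera_loss(out, target, flags))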
Large-scale linear operations are the cornerstone for performing complex computational tasks. Using optical computing to perform linear transformations offers potential advantages in terms of speed, parallelism, and scalability. Previously, the design of successive spatially engineered diffractive surfaces forming an optical network was demonstrated to perform statistical inference and compute an arbitrary complex-valued linear transformation using narrowband illumination. We report a deep-learning-based design of a massively parallel broadband diffractive neural network for all-optically performing a large group of arbitrarily selected, complex-valued linear transformations between an input and output field-of-view, each with Ni and No pixels, respectively. This broadband diffractive processor is composed of Nw wavelength channels, each of which is uniquely assigned to a distinct target transformation; a large set of arbitrarily selected linear transformations can be individually performed through the same diffractive network at different illumination wavelengths, either simultaneously or sequentially (wavelength scanning). We demonstrate that such a broadband diffractive network, regardless of its material dispersion, can successfully approximate Nw unique complex-valued linear transforms with a negligible error when the number of diffractive neurons (N) in its design is ≥2NwNiNo. We further report that the spectral multiplexing capability can be increased by increasing N; our numerical analyses confirm these conclusions for Nw > 180 and indicate that Nw can further increase to ∼2000, depending on the upper bound of the approximation error. Massively parallel, wavelength-multiplexed diffractive networks will be useful for designing high-throughput intelligent machine-vision systems and hyperspectral processors that can perform statistical inference and analyze objects/scenes with unique spectral properties.
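To illustrate why a single set of thickness maps can carry many wavelength channels, the numpy sketch below propagates a probe field through shared (here random, untrained) thickness maps at three wavelengths and shows that the resulting complex responses decorrelate across channels; the angular-spectrum model, refractive index n_refr = 1.7, and all grid/distance parameters are illustrative assumptions, not the paper's trained designs:

    import numpy as np

    def transfer_fn(npix, dx, z, wl):
        # angular-spectrum transfer function for free-space propagation over distance z
        fx = np.fft.fftfreq(npix, d=dx)
        FX, FY = np.meshgrid(fx, fx, indexing="ij")
        arg = 1 - (wl * FX) ** 2 - (wl * FY) ** 2
        return np.exp(2j * np.pi * z / wl * np.sqrt(np.maximum(arg, 0))) * (arg > 0)

    def forward(field, thicknesses, wl, dx=0.4e-3, z=20e-3, n_refr=1.7):
        H = transfer_fn(field.shape[0], dx, z, wl)
        for t in thicknesses:
            field = np.fft.ifft2(np.fft.fft2(field) * H)                # propagate to the next layer
            field = field * np.exp(2j * np.pi * (n_refr - 1) * t / wl)  # thickness -> wavelength-dependent phase
        return np.fft.ifft2(np.fft.fft2(field) * H)                     # propagate to the output plane

    npix = 32
    rng = np.random.default_rng(0)
    thicknesses = [rng.uniform(0, 1.0e-3, (npix, npix)) for _ in range(4)]  # shared thickness maps

    impulse = np.zeros((npix, npix), complex)
    impulse[npix // 2, npix // 2] = 1.0  # probe one input pixel (one column of each transform)
    outs = [forward(impulse, thicknesses, wl).ravel() for wl in (0.70e-3, 0.75e-3, 0.80e-3)]

    for a in range(3):
        for b in range(a + 1, 3):
            ov = abs(np.vdot(outs[a], outs[b])) / (np.linalg.norm(outs[a]) * np.linalg.norm(outs[b]))
            print(f"channel overlap ({a},{b}): {ov:.3f}")  # below 1 -> distinct responses per wavelength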