We report diffractive optical networks designed through a task-specific training process to classify and reconstruct spatially-overlapping phase images. Trained with ~550 million unique combinations of spatially-overlapping, phase-encoded handwritten digits (MNIST), the networks achieved >85.8% blind-testing accuracy for the all-optical, simultaneous classification of two overlapping phase images of new/unseen handwritten digits. We also demonstrate the reconstruction of these phase images using a shallow electronic neural network that takes as its input the highly-compressed optical signals synthesized by the diffractive network, containing ~20-65 times fewer pixels. This framework might find applications in computational imaging, on-chip microscopy and quantitative phase imaging.
We report a single-pixel machine vision framework based on deep learning-designed diffractive surfaces that perform a desired machine learning task. The object within the input field-of-view is illuminated with a broadband light source, and the subsequent diffractive surfaces are trained to encode the spatial information of the object features onto the power spectrum of the diffracted light, which is collected by a single-pixel detector in a single shot. We experimentally demonstrated the all-optical inference capabilities of this single-pixel machine vision platform by classifying handwritten digits using 3D-printed diffractive layers and a plasmonic nanoantenna-based time-domain spectroscopy setup operating at THz wavelengths.
We utilize diffractive optical networks to design small footprint, passive pulse engineering platforms, where an input terahertz pulse is shaped into a desired output waveform as it diffracts through spatially-engineered transmissive surfaces. Using 3D-printed diffractive networks designed by deep learning, various terahertz pulses with different temporal widths are experimentally synthesized by controlling the amplitude and phase of the input pulse over a wide range of frequencies. Pulse width tunability was also demonstrated by changing the layer-to-layer distance of a 3D-printed diffractive network or by physically replacing 1-2 layers of an existing network with newly trained and fabricated diffractive layers.
We present a diffractive network trained for pulse engineering, shaping input pulses into desired optical waveforms. The synthesis of square pulses with various widths was experimentally demonstrated with 3D-fabricated passive diffractive layers that control both the amplitude and phase profile of the input terahertz pulse across a wide range of frequencies. Pulse-width tunability was also demonstrated by altering the layer-to-layer distances of a diffractive network. Furthermore, the modularity of this framework was demonstrated by replacing part of an already-trained network with newly-trained layers to tune the width of the output terahertz pulse, presenting a Lego-like physical transfer learning approach.
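Conceptually, a passive pulse-engineering system such as the one described above acts as a linear spectral filter that reshapes both the amplitude and phase of the input pulse. The following sketch illustrates this view on a toy example; it is not the trained diffractive design itself, and the Gaussian pulses, the regularization constant, and the deconvolution step are all illustrative assumptions.

```python
import numpy as np

# Toy model (assumed, not the authors' design): a passive shaping system is a
# linear transfer function H(f) applied to the input pulse's spectrum. Here we
# derive H(f) that maps a short Gaussian input pulse to a wider target pulse.
n = 1024
t = np.linspace(-10.0, 10.0, n)             # arbitrary time units
x_in = np.exp(-t**2 / (2 * 0.2**2))         # short input pulse
x_target = np.exp(-t**2 / (2 * 1.0**2))     # desired wider output pulse

X_in = np.fft.fft(x_in)
X_target = np.fft.fft(x_target)

# Regularized deconvolution: avoids dividing by near-zero spectral bins.
eps = 1e-6 * np.max(np.abs(X_in))
H = X_target * np.conj(X_in) / (np.abs(X_in) ** 2 + eps**2)

# Pulse after the spectral filter closely matches the target waveform.
x_out = np.real(np.fft.ifft(H * X_in))
```

In the physical system, the role of `H` is played by the spatially-engineered transmissive layers, whose thickness profiles jointly impose the required amplitude and phase modulation over the pulse bandwidth.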
We report a broadband diffractive optical network that can simultaneously process a continuum of wavelengths. To demonstrate its success, we designed and experimentally validated a series of broadband networks to create single/dual passband spectral filters and a spatially-controlled wavelength de-multiplexer that are composed of deep learning-designed diffractive layers to spatially and spectrally engineer the output light. The resulting designs were 3D-printed and tested using a terahertz time-domain-spectroscopy system to demonstrate the match between their numerical and experimental output. Broadband diffractive networks diverge from intuitive/analytical designs, creating unique optical systems to perform deterministic tasks and statistical inference for machine learning applications.
Using deep learning-based training of diffractive layers, we designed single-pixel machine vision systems to all-optically classify images by maximizing the output power of the wavelength corresponding to the correct data class. We experimentally validated our diffractive designs using a plasmonic nanoantenna-based time-domain spectroscopy setup and 3D-printed diffractive layers to successfully classify handwritten digits using a single pixel and snapshot illumination. Furthermore, we trained a shallow electronic neural network as a decoder to reconstruct the images of the input objects solely from the power detected at ten distinct wavelengths, also demonstrating the success of this platform as a task-specific, single-pixel imager.
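The inference scheme described above can be illustrated with a minimal sketch: once each data class is assigned one wavelength and the diffractive layers are trained to route power accordingly, classification reduces to an argmax over the measured power spectrum, and a shallow decoder can map the same few measurements back to an image estimate. Everything below is a conceptual toy, with random stand-in decoder weights rather than trained ones.

```python
import numpy as np

def classify_from_spectrum(power: np.ndarray) -> int:
    """Predict the class as the index of the strongest spectral band."""
    return int(np.argmax(power))

# A shallow electronic decoder maps the 10 spectral power readings to a
# 28x28 image estimate; a single linear layer with random (hypothetical,
# untrained) weights stands in for it here.
rng = np.random.default_rng(0)
W = rng.standard_normal((28 * 28, 10)) * 0.01

def decode_image(power: np.ndarray) -> np.ndarray:
    """Reconstruct a 28x28 image estimate from 10 spectral measurements."""
    return (W @ power).reshape(28, 28)

# Example: a spectrum whose 3rd band dominates is classified as digit "3".
spectrum = np.full(10, 0.05)
spectrum[3] = 0.9
predicted = classify_from_spectrum(spectrum)  # -> 3
image_estimate = decode_image(spectrum)
```

The key design point is that the heavy lifting (feature encoding onto the spectrum) happens optically in the passive diffractive layers, so the electronic back-end can remain this shallow.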
We present a diffractive deep neural network-based framework that can simultaneously process a continuum of illumination wavelengths to perform the specific task it is trained for. Based on this framework, we designed and 3D-printed a series of optical systems, including single and double passband filters as well as a spatially-controlled wavelength de-multiplexing system using a broadband THz pulse as input, revealing an excellent match between our numerical designs and experimental results. The presented optical design framework based on diffractive neural networks can be adapted to other parts of the spectrum and extended to create task-specific metasurface designs.
The differences in the swimming behavior of sperm cell populations that carry the opposite sex chromosome have been an important topic of research, aiming to shed more light on the seemingly random process of sex determination at conception. Earlier studies on human sperm cells resulted in a misconception that Y-chromosome-bearing sperm cells swim faster than X-chromosome-bearing sperm cells because they carry a lighter payload. This has been clarified by more recent studies using modern computer-aided semen analysis (CASA) systems and improved sex-sorting techniques, showing that the velocity parameters of the two sperm populations exhibit similar values. However, CASA systems typically rely on conventional optical microscopes, where the trade-off between spatial resolution and field-of-view, together with poor depth resolution, necessitates confining the sperm cells into shallow chambers that limit their 3D motion. Alternatively, dual-view on-chip holographic imaging offers a unique capability to image free-swimming sperm cells in 3D across a large volume (~1.8 μl) and depth (~0.6 mm). Operating our platform at 300 fps, we comparatively analyzed the complete 3D motion characteristics of 235 X-sorted and 289 Y-sorted free-swimming bovine sperm cells, including the head translation and spin as well as the 3D flagellar beating. While there was no significant difference in the velocity parameters, we observed that the Y-sorted sperm had a stronger preference for helical trajectories, whereas the X-sorted sperm exhibited higher linearity. Comparatively studying the kinematic responses to the surrounding chemicals and ions could help better understand the reasons behind these observed differences.