1UCLA Samueli School of Engineering (United States) 2National Institute of Information and Communications Technology (Japan) 3Hamamatsu Photonics (Japan)
This PDF file contains the front matter associated with SPIE Proceedings Volume 12019, including the Title Page, Copyright information, Table of Contents and Conference Committee lists.
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks. You are receiving this notice because your organization may not have SPIE eBooks access.* *Shibboleth/Open Athens users: please sign in to access your institution's subscriptions. To obtain this item, you may purchase the complete book in print or electronic format on SPIE.org.
Within the aviation industry, there is considerable interest in minimizing maintenance expenses. In particular, the inspection of critical components such as aircraft engines is of significant relevance. Currently, many inspection processes are still performed manually using hand-held endoscopes to detect coating damage in confined spaces and therefore require a high level of individual expertise. Particularly due to the often poorly illuminated video data, these manual inspections are susceptible to uncertainty. This motivates automated defect detection, which provides defined and comparable results and also enables significant cost savings. For such a hand-held application with video data of poor quality, small and fast Convolutional Neural Networks (CNNs) for the segmentation of coating damage are suitable and are further examined in this work. Because image annotation requires high effort and broadly divergent image data are scarce (a domain gap), only a few expressive annotated images are available. This necessitates training methods that utilize unsupervised domains and further exploit the sparsely annotated data. We propose novel training methods that use Generative Adversarial Networks (GANs) to improve the training of segmentation networks by optimizing weights and by generating synthetic annotated RGB image data for further training. To this end, small individual encoder and decoder structures are designed to resemble the structures of the GANs. This enables an exchange of weights and optimizer states from the GANs to the segmentation networks, which improves both convergence certainty and accuracy in training. Using unsupervised domains in GAN training leads to better generalization of the networks and tackles the challenges caused by the domain gap.
Furthermore, a test series is presented that demonstrates the impact of these methods compared to standard supervised training and transfer learning methods based on common datasets. Finally, the developed CNNs are compared to larger state-of-the-art segmentation networks in terms of feed-forward computational time, accuracy and training duration.
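The weight-exchange idea described above can be sketched in a few lines: if the segmentation network's encoder mirrors the GAN's structure layer for layer, its weight tensors can simply be copied over before supervised fine-tuning. The sketch below uses plain NumPy dictionaries in place of real CNN layers; all names and shapes are illustrative, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_encoder():
    # Two conv-like weight tensors (out_ch, in_ch, k, k); shapes are illustrative.
    return {"conv1": rng.standard_normal((8, 3, 3, 3)),
            "conv2": rng.standard_normal((16, 8, 3, 3))}

# GAN network trained on unsupervised domains (random weights stand in for training).
gan_encoder = init_encoder()

# Segmentation network whose encoder mirrors the GAN encoder layer for layer.
seg_encoder = init_encoder()

# Weight exchange: copy matching tensors from the GAN into the segmentation encoder.
for name, w in gan_encoder.items():
    assert seg_encoder[name].shape == w.shape  # structures must resemble each other
    seg_encoder[name] = w.copy()

print(all(np.array_equal(seg_encoder[k], gan_encoder[k])
          for k in gan_encoder))  # True
```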
Single-shot two-dimensional (2D) optical imaging of transient scenes is indispensable for numerous areas of study. Among existing techniques, compressed optical-streaking ultrahigh-speed photography (COSUP) uses a cost-efficient design to endow off-the-shelf CCD and CMOS cameras with ultrahigh frame rates. Thus far, COSUP's application scope has been limited by the long processing time and unstable image quality of existing analytical-modeling-based video reconstruction. To overcome these problems, we have developed a snapshot-to-video autoencoder (S2V-AE), a new deep neural network that maps a compressively recorded 2D image to a movie. The S2V-AE preserves spatiotemporal coherence in reconstructed videos and presents a flexible structure that tolerates changes in input data. Implemented in compressed ultrahigh-speed imaging, the S2V-AE enables single-shot machine-learning-assisted real-time (SMART) COSUP, which features a reconstruction time of 60 ms and a large sequence depth of 100 frames. SMART COSUP is applied to wide-field multiple-particle tracking at 20 thousand frames per second. As a universal computational framework, the S2V-AE is readily adaptable to other modalities in high-dimensional compressed sensing. SMART COSUP is also expected to find wide applications in applied and fundamental sciences.
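The compressive recording that the S2V-AE inverts can be illustrated with a toy forward model: in optical streaking, each temporal frame is shifted along the streaking axis and all frames are integrated on the sensor in a single exposure. This idealized NumPy model is for intuition only; the shearing step and sensor details of the actual COSUP system differ.

```python
import numpy as np

def streak_and_sum(video):
    """Compressively record a (T, H, W) video as one 2D snapshot:
    frame t is shifted t pixels along the streaking axis, then all
    frames are integrated on the sensor (an idealized COSUP model)."""
    T, H, W = video.shape
    snapshot = np.zeros((H, W + T - 1))
    for t in range(T):
        snapshot[:, t:t + W] += video[t]
    return snapshot

# Tiny example: a bright dot moving one pixel per frame.
video = np.zeros((3, 4, 4))
for t in range(3):
    video[t, 1, t] = 1.0

snap = streak_and_sum(video)
print(snap.shape)              # (4, 6)
print(snap[1, :].tolist())     # [1.0, 0.0, 1.0, 0.0, 1.0, 0.0]
```

Reconstruction is the inverse problem, recovering the (T, H, W) video from the single snapshot, which is what the S2V-AE learns to do.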
The aeronautics industry has pioneered safety, from digital checklists to moving maps that improve pilot situational awareness and support safe ground movements. Today, pilots deal with increasingly complex cockpit environments and air-traffic densification. Here we present an intelligent vision system that allows real-time human-machine interaction in the cockpit to reduce the pilot's workload. The challenges for such a vision system include extreme changes in background light intensity, a large field of view, and variable working distances. Adapted hardware and the use of state-of-the-art computer vision techniques and machine learning algorithms for eye-gaze detection allow a smooth and accurate real-time feedback system. The current system has been over-specified to explore optimized solutions for different use cases. The algorithmic pipeline for eye-gaze tracking was developed and iteratively optimized to obtain the speed and accuracy required for the aviation use cases. The pipeline, a combination of data-driven and analytical approaches, runs in real time at 60 fps with a latency of about 32 ms. The eye-gaze estimation error was evaluated as the point-of-regard distance error with respect to the 3D point location. An average error of less than 1.1 cm was achieved over 28 gaze points representing the cockpit instruments, placed at about 80-110 cm from the participants' eyes. The angular gaze deviation goes down to less than 1° for the panels for which accurate eye gaze was required by the use cases.
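The two reported error metrics are related through viewing distance: a point-of-regard distance error translates into an angular deviation between the true and estimated gaze rays. A minimal sketch (the eye position and points are hypothetical):

```python
import numpy as np

def angular_error_deg(eye, gaze_point, estimated_point):
    """Angle (degrees) between the true and estimated gaze rays,
    drawn from the eye position to two 3D points of regard."""
    v1 = np.asarray(gaze_point, dtype=float) - np.asarray(eye, dtype=float)
    v2 = np.asarray(estimated_point, dtype=float) - np.asarray(eye, dtype=float)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# A 1.1 cm point-of-regard error at ~80 cm viewing distance (units: cm)
# corresponds to a sub-degree angular deviation.
err = angular_error_deg(eye=[0, 0, 0],
                        gaze_point=[0, 0, 80],
                        estimated_point=[1.1, 0, 80])
print(round(err, 2))  # ~0.79
```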
Augmented Reality (AR) near-eye displays promise new human-computer interactions that can positively impact people's lives. However, the current generation of AR near-eye displays fails to provide ergonomic solutions that counter design trade-offs such as form factor, weight, computational requirements, and battery life. Unfortunately, these design trade-offs are significant obstacles on the path towards an all-day-usable near-eye display. We argue that a new way of designing AR near-eye displays that removes active components from the near-eye display could be key to solving trade-off-related issues. We propose the beaming display,1 a new near-eye display system that uses a projector and an all-passive wearable headset. In our proposal, we project images from a distance onto a passive wearable near-eye display while tracking the location of that near-eye display. This presentation will present the latest version of our prototype and discuss potential future directions for beaming displays.
Metasurfaces are arrangements of artificially fabricated nanoantennas that can control light-scattering characteristics in a compact manner. Thanks to their versatile functionalities, metasurfaces have been studied as replacements for various optical devices, including imaging and AR/VR devices. In this talk, we introduce our research on metasurface holograms and metalenses for imaging and AR/VR. First, a metasurface that combines the Pancharatnam-Berry phase and the generalized Kerker effect is not limited to controlling either the transmission or the reflection side but allows light control over the entire space. Utilizing this platform, independent hologram images and beam deflections for transmission and reflection are demonstrated. Also, a quadrumer structure that vertically transmits light incident at a specific angle is designed. We present a device that reproduces different holograms depending on the angle of incidence by encoding four multiplexed phase profiles with the detour-phase principle. Next, in the doublet metalens scheme, one side corrects chromatic and monochromatic aberrations, and the other side performs focusing and filtering of the three primary colors in the visible spectrum. This doublet metalens corrects the aberrations of the targeted colors while maintaining a high numerical aperture (NA). Finally, a metalens eyepiece with a high numerical aperture can realize a compact system that combines a real scene and a virtual image. In addition, our metalens shows a wide field of view, which can overcome the flaws of existing AR devices. These metasurface applications would be outstanding solutions for optical display technology.
High-dimensional entanglement with larger Hilbert spaces enables the encoding of more bits per photon and thus promises increased communication capacity over quantum channels. Quantum frequency combs, which are intrinsically multimode in the temporal and frequency degrees of freedom within a single spatial mode, naturally facilitate the generation and measurement of high-dimensional entanglement. Current challenges include the extension of well-known methods for two qubits to high-dimensional quantum systems and their application in entanglement experiments with photons. More specifically, the major challenge is the certification of high-dimensional entanglement with a number of accessible experimental measurements. In this paper, we increase the Hilbert-space dimensionality and provide versatile tools for quantifying and certifying high-dimensional entanglement in a biphoton frequency comb. We quantify a time-binned Schmidt number of up to 18 and certify an entanglement of formation of 1.89 ebits. We have demonstrated a 648-dimensional Hilbert space with time-frequency entanglement in a biphoton frequency comb, enabling a computational space of up to 13 photonic qubits and a classical information capacity of 6.28 bits/photon. These high-dimensional time-frequency multimode quantum states of the biphoton frequency comb significantly boost the photon information capacity that is critical for large-scale quantum information processing. The biphoton frequency comb has indeed proven an attractive and powerful approach towards this fundamental goal, with applications in high-dimensional quantum information processing, time-frequency cluster-state quantum computation, high-dimensional encoding in quantum networks, and high-dimensional quantum simulations.
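The Schmidt number quantifying the time-binned entanglement can be computed from the singular values of the joint two-photon amplitude matrix. A small NumPy sketch, using a maximally entangled toy state rather than experimental data:

```python
import numpy as np

def schmidt_number(amplitude):
    """Schmidt number K = 1 / sum(lambda_i^2) for a bipartite state whose
    joint amplitude matrix has Schmidt coefficients from its SVD."""
    s = np.linalg.svd(amplitude, compute_uv=False)
    lam = (s / np.linalg.norm(s)) ** 2   # normalized Schmidt eigenvalues
    return 1.0 / np.sum(lam ** 2)

# A maximally entangled state over d time bins has Schmidt number K = d,
# matching the K = 18 reported above when d = 18.
d = 18
psi = np.eye(d) / np.sqrt(d)
print(round(schmidt_number(psi), 6))  # 18.0
```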
New applications for single-photon imaging detectors are currently challenging algorithmic signal-processing approaches due to increasing photon event rates. This research explores machine learning (ML) algorithms as a potential solution for data analysis and imaging with single-photon timing detectors with 16 × 16 pixels and 60 ps timing resolution. This novel ML approach will accelerate the data-processing pipeline, which must handle huge volumes of data, up to 10 Gbps per detector, with hundreds of detectors in certain applications. The ML model processes the photon-detector output, applying spatial/temporal clustering to improve the detector's spatial resolution within a time constraint of 10 µs.
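The spatial/temporal clustering step can be sketched as greedy centroiding of photon events within the stated 10 µs time constraint; the merging rule and radius below are illustrative assumptions, not the detector's actual ML pipeline:

```python
import numpy as np

def cluster_events(events, time_window=10e-6, radius=1.5):
    """Greedy spatio-temporal clustering of photon events (x, y, t):
    events within `radius` pixels and `time_window` seconds of an open
    cluster are merged, and cluster centroids are returned, improving
    spatial resolution beyond the 16 x 16 pixel grid."""
    events = sorted(events, key=lambda e: e[2])   # sort by timestamp
    clusters = []                                  # [cx, cy, last_t, count]
    for x, y, t in events:
        for c in clusters:
            cx, cy, ct, n = c
            if t - ct <= time_window and np.hypot(x - cx, y - cy) <= radius:
                c[0] = (cx * n + x) / (n + 1)      # running centroid update
                c[1] = (cy * n + y) / (n + 1)
                c[2] = t
                c[3] = n + 1
                break
        else:
            clusters.append([x, y, t, 1])
    return [(c[0], c[1]) for c in clusters]

# Two photon bursts on a 16x16 grid: 3 events around (4, 5), 1 at (12, 2).
events = [(4, 5, 0.0), (5, 5, 1e-6), (4, 6, 2e-6), (12, 2, 3e-6)]
centroids = cluster_events(events)
print(centroids)
```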
Chalcogenide phase-change materials (PCMs) are a class of alloys exhibiting gigantic optical property contrast upon the structural transition from an amorphous to a crystalline state. The transition is also nonvolatile and does not require a constant power supply to maintain its optical state. These unique behaviors qualify PCMs as novel functional materials enabling various on-chip and free-space re-programmable optical computing network architectures. Here we present the monolithic integration of PCMs with integrated photonics and metasurface optics leveraging standard silicon foundry facilities, and the demonstration of electrically programmable photonic devices for on-chip optical routing, memory, and computing functions.
A central goal of neuroscience is to link the synaptic connectivity of neural circuits to the dynamics and computations they produce. Anatomical and functional connectivity within neural systems is asymmetric, which upon linearization gives rise to non-normal dynamics. Particular linear combinations of neurons involved in circuit function are canonically identified in systems neuroscience via PCA, which seeks subspaces that maximize variance. We have recently proposed Dynamical Components Analysis (DCA), which seeks subspaces of activity in which the mutual information between past and future activity (i.e., the 'dynamic memory') is largest. Here, we show that the presence of non-normality leads to a divergence between these subspaces and, consequently, in the importance of the single neurons identified by each method. Applied to in-vivo electrophysiology recordings from diverse brain areas, subspaces of past-future mutual information predict animal and human behavior better than subspaces of high variance. Finally, we discuss possible consequences of non-normality for the training and function of in-silico recurrent neural networks.
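Non-normal dynamics (where the connectivity matrix does not commute with its transpose) can transiently amplify activity even when every eigenvalue is stable, which is one reason variance-maximizing and memory-maximizing subspaces can diverge. A minimal illustration with an illustrative two-neuron system:

```python
import numpy as np

# A non-normal (asymmetric) connectivity matrix: both eigenvalues are
# stable (-0.1 and -0.2), but the strong feedforward coupling makes
# A A^T != A^T A, i.e. the linearized dynamics are non-normal.
A = np.array([[-0.1, 5.0],
              [ 0.0, -0.2]])
assert not np.allclose(A @ A.T, A.T @ A)

# Euler-integrate dx/dt = A x from a unit-norm initial state.
x = np.array([0.0, 1.0])
dt = 0.1
norms = []
for _ in range(200):
    x = x + dt * (A @ x)
    norms.append(np.linalg.norm(x))

# Transient amplification followed by decay, despite stable eigenvalues.
print(max(norms) > 1.0, norms[-1] < max(norms))  # True True
```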
Photonic Hardware Accelerators and Optical Computing I
Low-latency, high-throughput inference on Convolutional Neural Networks (CNNs) remains a challenge, especially for applications requiring large input or large kernel sizes. 4F optics provides a solution that can potentially accelerate CNN inference with Fourier optics and the well-known convolution theorem. However, existing 4F CNN accelerators suffer from various limitations that make the implementation of a multi-channel, multi-layer CNN unscalable or even impractical. In this paper, we discuss the limitations of 4F CNN accelerators, including positive-only sensor readout, intensity-only modulation, and slow modulation frequency, and methods to address them. We also propose the channel tiling method, which can address an important throughput and precision bottleneck of high-speed, massively parallel optical 4F computing systems without requiring any additional optical hardware.
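The convolution theorem underlying 4F acceleration states that convolution in the spatial domain equals pointwise multiplication in the Fourier domain, which a 4F system performs optically with two lenses. A NumPy check of the equivalence:

```python
import numpy as np

def conv2d_4f(image, kernel):
    """Full 2D convolution via the convolution theorem, the operation a
    4F system performs optically: FT, multiply in the Fourier plane, FT back."""
    H, W = image.shape
    kh, kw = kernel.shape
    shape = (H + kh - 1, W + kw - 1)          # linear-convolution size
    F = np.fft.rfft2(image, shape) * np.fft.rfft2(kernel, shape)
    return np.fft.irfft2(F, shape)

def conv2d_direct(image, kernel):
    """Reference spatial-domain convolution for comparison."""
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.zeros((H + kh - 1, W + kw - 1))
    for i in range(kh):
        for j in range(kw):
            out[i:i + H, j:j + W] += kernel[i, j] * image
    return out

rng = np.random.default_rng(1)
img = rng.standard_normal((16, 16))
ker = rng.standard_normal((3, 3))
print(np.allclose(conv2d_4f(img, ker), conv2d_direct(img, ker)))  # True
```

The Fourier-domain route costs O(N log N) regardless of kernel size, which is why large kernels favor the 4F approach.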
We overview recent progress in reservoir computing and decision making using complex laser dynamics. We demonstrate reservoir computing using a photonic integrated circuit with optical feedback. We introduce a scheme with multiple semiconductor lasers placed in parallel to improve the performance of reservoir computing. We also perform decision making for solving the multi-armed bandit problem using lag synchronization of chaos in mutually coupled semiconductor lasers.
Conventional multiport interferometers based on Mach-Zehnder interferometer (MZI) meshes suffer from component imperfections, which limit their scaling. We introduce two new designs that overcome this limitation: a 3-splitter MZI for generic errors and a broadband MZI+Crossing design for more realistic correlated errors. These architectures, motivated by the correspondence between SU(2) and the Riemann sphere, are more error-tolerant than the standard MZI mesh and support progressive self-configuration. Numerical simulations reveal orders-of-magnitude error reductions compared to the standard MZI mesh; moreover, the mesh is asymptotically perfect: the matrix error decreases with mesh size.
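For context, the building block of such meshes is a standard MZI whose 2x2 transfer matrix is a product of two 50:50 splitters and two phase shifters; its cross-coupling magnitude is set by the internal phase alone. A sketch of the ideal (error-free) case, with the splitter convention below taken as an assumption:

```python
import numpy as np

def mzi(theta, phi):
    """2x2 transfer matrix of a standard MZI: external phase phi,
    50:50 splitter, internal phase theta, 50:50 splitter."""
    bs = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)     # ideal 50:50 coupler
    inner = np.diag([np.exp(1j * theta), 1.0])
    outer = np.diag([np.exp(1j * phi), 1.0])
    return bs @ inner @ bs @ outer

T = mzi(0.7, 1.3)
# The lossless MZI is unitary for any phase settings.
print(np.allclose(T.conj().T @ T, np.eye(2)))          # True
# Cross-coupling magnitude depends only on the internal phase theta.
print(np.isclose(abs(T[1, 0]), abs(np.cos(0.7 / 2))))  # True
```

Component imperfections perturb the splitter matrices away from 50:50, which is exactly what the 3-splitter and MZI+Crossing designs above are built to tolerate.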
A Visible Light Communication (VLC) cooperative system that supports guidance services and uses an edge/fog-based architecture for wayfinding is presented. The integrated dynamic navigation system consists of multiple transmitters (luminaires) that transmit the map information and path messages necessary for wayfinding. The luminaires used for downlink transmission are equipped with one of two types of controllers, mesh controllers or cellular controllers, which, respectively, forward messages to other devices in the vicinity or to the central manager. Mobile optical receivers collect the data, extract their location to perform positioning and, concomitantly, decode the transmitted data from each transmitter. Each luminaire, through VLC, reports its geographic position and specific information to the users, making it available for use. Bidirectional communication is implemented and the best route to navigate through the venue is calculated. Buddy wayfinding services are also considered. Results indicate that the system is able to perform not just self-localization, but also to infer the travel direction and interact with it, optimizing the route to a static or dynamic destination.
We demonstrate a novel closed-loop input-design technique for the detection of particles in an imaging system such as a fluorescence microscope. The probability of misdiagnosis is minimized while the input energy is constrained so that, for instance, phototoxicity is reduced. The key novelty of the closed-loop design is that each next input is designed based on the most recent information. Using updated hypothesis probabilities, the input energy distribution is optimized for detection such that unresolved pixels receive increased illumination in the next image acquisition. Compared to the conventional open-loop approach, the results show that (regions of) particles are diagnosed using less energy in the closed-loop approach. Besides being viable for the initialization of fluorescence microscopy measurements, the closed-loop approach is the next step towards sequential object segmentation for reliable and efficient product inspection in Industry 4.0.
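The closed-loop idea can be sketched as a Bayesian update loop in which each acquisition's energy budget is reallocated toward pixels whose particle hypothesis is still unresolved. The rates, budget, and noiseless expected counts below are illustrative assumptions, not the paper's model:

```python
import numpy as np

# Ground truth: which of 8 pixels contain a particle (hypothetical scene).
truth = np.array([1, 0, 0, 1, 0, 0, 0, 1])
p = np.full(8, 0.5)        # prior hypothesis probabilities
budget = 8.0               # total input energy per acquisition

for acquisition in range(5):
    # Closed loop: concentrate energy on pixels that are still unresolved.
    uncertainty = p * (1 - p)
    energy = budget * uncertainty / uncertainty.sum()
    # Noiseless sketch: expected photon counts (rate 20 if particle, 1 if not).
    counts = energy * np.where(truth == 1, 20.0, 1.0)
    # Poisson log-likelihood ratio log P(c|particle) - log P(c|empty).
    log_ratio = counts * np.log(20.0) - 19.0 * energy
    odds = p / (1 - p) * np.exp(np.clip(log_ratio, -50, 50))
    p = np.clip(odds / (1 + odds), 1e-9, 1 - 1e-9)

print((p > 0.5).astype(int).tolist())  # recovers the particle map
```

After the first acquisition, the already-resolved pixels receive almost no energy, so the remaining budget probes the uncertain ones, which is the energy saving the closed loop exploits.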
We present an integrated photonic reservoir computing scheme that relies on the use of passive optical filters inside waveguide loops. The filters-in-a-loop nodes aim to provide an alternative all-optical non-linear activation function, circumventing previous power-hungry or bandwidth-limiting approaches. The non-linear transformation of the incoming optical signal relies, firstly, on multiple spectral slicing and, secondly, on the non-linear mapping of phase information to the outputs' amplitude through resonant optical cavities. The efficiency of the approach has been evaluated numerically on two demanding applications: mitigation of transmission impairments of 224 Gbps PAM signals with better-than-HD-FEC performance for short-reach networks, and MHz-rate all-optical classification of time-stretched scans depicting objects with different spatial features.
Highly stable frequency and timing standards are essential for deep-space missions and radio science. At the NASA Deep Space Network (DSN), these standards are distributed through a network of underground fiber cables to support several Goldstone antennas. Independently developed frequency-measuring instruments generate tremendous quantities of data to monitor and validate the antennas' stringent frequency requirements. In this paper, we propose a lightweight processing tool capable of detecting disturbances in the frequency signal caused by DSN antenna motions. Our training data are sampled from the movement log of the antenna of interest and from the data generated by the fiber-optic metrology instrument linked to that antenna. We demonstrate that a convolutional neural network (CNN) model can achieve high accuracy in classifying instances of antenna movement and is an effective predictor when used iteratively on longer, variable stretches of metrology data. The simplicity, low training cost, and high accuracy of our model strongly suggest its efficacy in identifying and troubleshooting frequency disturbances caused by the antenna.
Speckle is a granular pattern observed at the end of a multimode fiber due to the interference of propagating modes. The speckle pattern changes under any perturbation at any point along the fiber and thus forms the basis of highly sensitive fiber specklegram sensors. Early researchers developed speckle interferometry and imaging tools to extract information from the speckle. However, general drawbacks of these approaches are the difficulty of extracting information from non-linear speckle patterns and the associated high computational costs. For this reason, we propose a machine learning-based solution to analyze speckle patterns in gold-coated multimode fibers for temperature sensing. We build a temperature-controlled environment on a 4 cm length of a copper rod using a heater with proportional-integral-derivative control. We coat an approximately 10 nm gold layer on a 4 cm stripped region of a 50/125 μm multimode optical fiber using DC sputtering. The gold-coated fiber is then attached to the copper rod. We find that a thin layer of gold on the fiber surface enhances the overall temperature-sensing sensitivity. We record speckle-pattern images with a CCD camera over a range of 25.9-28.9 °C, with roughly 150 images per temperature setting. Using time-efficient and straightforward machine learning models such as k-nearest neighbors and ridge regression, we achieve temperature-sensing accuracy as high as 96% and mean squared error as low as 0.000844 °C. We anticipate that this work can pave the way for efficient sensing applications in photo-acoustics, strain, and pressure measurements.
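Ridge regression, one of the two models used, has a closed form that is cheap enough for real-time specklegram readout. The sketch below fits it on synthetic stand-ins for flattened speckle images; the data generation is purely illustrative and not the paper's dataset:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in for flattened speckle images: temperature linearly
# shifts a latent pattern, plus measurement noise (illustrative only).
temps = np.repeat(np.linspace(25.9, 28.9, 7), 20)
basis = rng.standard_normal((2, 64))
X = (np.outer(temps - 27.4, basis[0]) + basis[1]
     + 0.05 * rng.standard_normal((temps.size, 64)))

def ridge_fit(X, y, lam=1e-2):
    """Closed-form ridge regression: w = (X^T X + lam I)^-1 X^T y,
    with a bias column appended to X."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    A = Xb.T @ Xb + lam * np.eye(Xb.shape[1])
    return np.linalg.solve(A, Xb.T @ y)

w = ridge_fit(X, temps)
pred = np.hstack([X, np.ones((X.shape[0], 1))]) @ w
mse = np.mean((pred - temps) ** 2)
print(mse < 0.01)  # True: low reconstruction error on this toy data
```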
Neural networks have enabled many applications in artificial intelligence and neuromorphic computing, ranging from scientific computing and intelligent communications to security. Neural networks implemented on digital platforms are limited in speed and energy efficiency. Neuromorphic (i.e., neuron-isomorphic) photonics aims to build processors in which optical hardware mimics neural networks in the brain. These processors promise orders-of-magnitude improvements in both speed and energy efficiency over purely digital electronic approaches. However, integrated optical neural networks are much smaller (hundreds of neurons) than electronic implementations (tens of millions of neurons). This raises a question: what are the applications where sub-nanosecond latencies and energy efficiency trump the sheer size of the processor? We provide an overview of neuromorphic photonic systems and their real-world applications to machine learning and neuromorphic computing.
Our work applies neural networks to solving forward and inverse problems in diffuse reflectance spectroscopy. First, a neural-network forward model is trained with Monte Carlo data to predict diffuse reflectance from given optical parameters. Second, an inverse model based on the neural-network forward model is built to solve for optical parameters from diffuse reflectance, modified from the traditional Monte Carlo-based inverse model. Validation of our inverse model on experimentally measured phantom data is investigated.
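The inverse step can be sketched as gradient descent through the trained forward model: the optical parameters are adjusted until the predicted reflectance matches the measurement. The closed-form toy forward model below is a hypothetical stand-in for the trained network, and the parameter names are illustrative:

```python
import numpy as np

# Hypothetical differentiable forward model standing in for the trained
# network: maps two optical parameters (mu_a, mu_s') to a reflectance value.
def forward(params):
    mua, mus = params
    return np.exp(-2.0 * mua) * mus / (1.0 + mus)

def invert(measured, params0, lr=0.5, steps=500, eps=1e-6):
    """Inverse model: gradient descent on 0.5 * (forward(p) - measured)^2,
    using central finite differences for the gradient."""
    p = np.array(params0, dtype=float)
    for _ in range(steps):
        grad = np.zeros_like(p)
        for i in range(p.size):
            dp = np.zeros_like(p)
            dp[i] = eps
            grad[i] = (forward(p + dp) - forward(p - dp)) / (2 * eps)
        residual = forward(p) - measured
        p -= lr * residual * grad
    return p

true_params = np.array([0.1, 1.5])
measured = forward(true_params)
recovered = invert(measured, params0=[0.3, 1.0])
print(np.isclose(forward(recovered), measured, atol=1e-6))  # True
```

With one measurement and two parameters the toy problem is underdetermined, so the fit matches the reflectance rather than the exact parameters; real inverse models use multi-wavelength spectra to pin down both.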
The Artificial Intelligent Pixel (aiPixel), a novel in-pixel neural network based on the deconstruction of traditional sensor pixels, is presented for the first time. It performs detection, processing, and digitization at the pixel, resulting in overall data reduction, background suppression, a reduction of the tremendous amount of data generated by current sensors and cameras, and an implementation of third-wave artificial intelligence (AI). The aiPixel is based on detector-pixel innovations that enable vector-matrix processing of detected optical signals, in contrast to the current arithmetic processing of photocurrents. Preliminary aiPixel architectures that utilize hardware elements based on UCF's optoelectronic synapse technology, together with emulations, show the capability for high accuracy with a high number of neurons or subpixels.
Microwave photonics and neuromorphic photonics are two parallel research areas that have simultaneously emerged at the forefront of next-generation processors. These fields, while initially independent, are naturally converging onto a combined silicon photonic platform. An optical processing approach yields wide bandwidth, low latency, and dense interconnection, and such photonic systems can support applications previously unfeasible, such as photonic cancellers, photonic blind source separation, photonic recurrent neural networks for RF fingerprinting, and photonic neural networks for nonlinear dispersion compensation. This paper focuses on the convergence of microwave photonics and neuromorphic photonics towards an RF-optimized machine learning solution. Additionally, this paper investigates the RF noise performance of a neuromorphic photonic front-end. The results indicate poor RF performance, motivating the proposal of a balanced linear front-end for noise-figure reduction.
Machine learning offers the potential to revolutionize the inverse design of complex nanophotonic components. In addition to celebrated numerical techniques such as finite-element and finite-difference methods, machine learning can predict the scattering properties of complex optical components using artificial neural networks. A benefit of this neural-network-based approach is that it is especially suited for inverse design, whose goal is to obtain an optimal optical component that closely matches a desired optical response. The process consists of two steps. The first step trains a neural network to predict the response of an optical system based on its input parameters, such as material and geometric parameters. In the second step, the neural network is used to optimize these input parameters to obtain the desired optical response. A subtle problem can arise in this second step: in resonant systems, optimizing the input parameters amounts to gradient descent in a highly oscillatory loss landscape containing many local minima in which the descent can get stuck, leading to a sub-optimal optical design. To address this problem, we propose a physics-inspired algorithm that adds the Fourier transform of the desired spectrum to the optimization procedure. The additional Fourier transform provides a way to differentiate between minima so that the global minimum can be found. We investigate our approach on the transmission and reflection spectra of Fabry-Perot resonators and Bragg reflectors and show that our method successfully finds optimal optical designs.
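The role of the Fourier term can be illustrated with a toy surrogate in place of the trained neural network; the `spectrum` function and the weighting `alpha` below are hypothetical stand-ins, not the authors' model:

```python
import numpy as np

def spectrum(thickness, wavelengths):
    # Toy Fabry-Perot-like transmission whose resonances depend on the
    # optical thickness (stand-in for the neural-network surrogate).
    return 0.5 * (1 + np.cos(4 * np.pi * thickness / wavelengths))

wavelengths = np.linspace(0.4, 0.8, 256)
target = spectrum(1.55, wavelengths)          # desired optical response

def loss(thickness, alpha=1e-3):
    s = spectrum(thickness, wavelengths)
    direct = np.mean((s - target) ** 2)       # oscillatory direct loss
    # Fourier term: comparing FFT magnitudes helps separate minima that
    # the direct loss alone cannot distinguish.
    fourier = np.mean((np.abs(np.fft.rfft(s)) - np.abs(np.fft.rfft(target))) ** 2)
    return direct + alpha * fourier
```

At the true design parameter both terms vanish, while away from it the combined loss stays strictly positive; in a gradient-descent loop the Fourier term supplies the extra discrimination between near-degenerate local minima.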
Micro-structured films with surface riblets are used to reduce aerodynamic drag. This is especially relevant on fast and large objects such as aircraft wings, where the films are installed to increase efficiency (e.g., to reduce fuel consumption). Their fuel-saving efficiency depends directly on the structural integrity of the films. We therefore propose a photometric inspection tool, comprising a hardware setup and tailored analysis algorithms, which detects typical defects of riblet micro-structures occurring during their operational lifetime. We propose two inspection approaches to analyze the micro-structures: (i) a statistical data-processing method and (ii) a machine learning algorithm based on convolutional autoencoders. We tested both approaches on rendered and real-world data of riblet films on airplane elements, carbon-fiber parts of race cars, and wind turbine blades.
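The autoencoder principle behind approach (ii) can be sketched with a linear (PCA-based) encoder-decoder in place of the paper's convolutional one; the synthetic "riblet" patches below are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(2)
# Stand-in for riblet image patches: intact patches share a periodic
# ridge profile plus small sensor noise.
base = np.sin(np.linspace(0, 8 * np.pi, 64))
normal = base + 0.05 * rng.standard_normal((200, 64))

# "Train": learn a low-dimensional code from defect-free patches only.
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
code = vt[:4]                                  # 4-dimensional latent space

def recon_error(patch):
    z = (patch - mean) @ code.T                # encode
    rec = z @ code + mean                      # decode
    return np.mean((patch - rec) ** 2)

good = base + 0.05 * rng.standard_normal(64)
defect = good.copy()
defect[20:30] = 0.0                            # simulated scratch in the riblet

# A defective patch reconstructs poorly, so a large error flags the defect.
```

The same logic carries over to the convolutional case: the network only learns to reconstruct intact riblet texture, so defects stand out as regions of high reconstruction error.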
As an emerging machine learning mechanism, the optical diffractive deep neural network (OD2NN) has been studied intensively in recent years due to its unmatched advantages in speed and power efficiency. However, training an OD2NN with the traditional back-propagation (BP) method is time-consuming. Here, we introduce biologically plausible, feedback-free training methods to accelerate the training of the hybrid OD2NN. Direct feedback alignment (DFA), error-sign-based DFA (sDFA), and direct random target projection (DRTP) are each applied and evaluated in the training of the hybrid OD2NN. For a hybrid OD2NN with 20 diffractive layers, accelerations of about 160× (DFA; CPU), 30× (DFA; GPU), 170× (sDFA; CPU), 32× (sDFA; GPU), 158× (DRTP; CPU), and 32× (DRTP; GPU) are achieved without significant loss of accuracy, compared with BP training on a CPU or GPU.
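The feedback-free idea behind DFA can be sketched on a tiny dense network standing in for the hybrid OD2NN; the layer sizes, data, and learning rate below are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.standard_normal((64, 10))              # toy inputs
y = rng.standard_normal((64, 1))               # toy targets
W1 = 0.1 * rng.standard_normal((10, 32))       # hidden layer (stands in for a diffractive layer)
W2 = 0.1 * rng.standard_normal((32, 1))        # output layer
B = rng.standard_normal((1, 32))               # FIXED random feedback matrix

def mse():
    return float(np.mean((np.tanh(x @ W1) @ W2 - y) ** 2))

loss_before = mse()
lr = 0.05
for _ in range(500):
    h = np.tanh(x @ W1)
    e = h @ W2 - y                             # output error
    W2 -= lr * h.T @ e / len(x)                # usual gradient step for the last layer
    # DFA: the error reaches W1 through the fixed random B, not through
    # W2.T, so no backward pass through the layers is required.
    dh = (e @ B) * (1 - h ** 2)
    W1 -= lr * x.T @ dh / len(x)
loss_after = mse()
```

Because the feedback path is fixed and random, every layer can be updated as soon as the output error is known, which is what removes the sequential backward pass and yields the reported speedups.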
Physics-informed neural networks (PINNs) solve supervised learning tasks by incorporating the partial differential equations describing the governing physics. We use a PINN based on Maxwell's equations in the frequency domain to predict the electrical permittivity parameters, and hence the electric fields, of circular split-ring resonator-based metamaterials, thereby bypassing full-wave solutions based on finite-element methods. We demonstrate the inverse prediction of the electrical permittivity of a circular split-ring resonator metamaterial given the spatial E-field distributions at the resonant frequency. Our results validate the PINN framework for the inverse retrieval of permittivities from field distributions.
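The frequency-domain physics residual at the heart of such a PINN can be illustrated in one dimension, where Maxwell's equations reduce to a Helmholtz equation. The permittivity value below is a hypothetical stand-in, and the residual is evaluated by finite differences rather than the automatic differentiation a PINN would use:

```python
import numpy as np

# 1D frequency-domain "physics loss": in one dimension Maxwell's
# equations reduce to the Helmholtz equation
#   d^2 E / dx^2 + k0^2 * eps * E = 0,
# and a PINN is trained so this residual vanishes alongside a data term.
x = np.linspace(0.0, 1.0, 401)
dx = x[1] - x[0]
k0, eps = 2 * np.pi, 4.0                       # eps is a hypothetical permittivity
k = k0 * np.sqrt(eps)

def physics_residual(E):
    lap = (E[2:] - 2 * E[1:-1] + E[:-2]) / dx ** 2   # finite-difference Laplacian
    return lap + k0 ** 2 * eps * E[1:-1]

E_exact = np.cos(k * x)                        # satisfies the Helmholtz equation
E_wrong = np.cos(0.5 * k * x)                  # field implied by a wrong permittivity

loss_exact = np.mean(physics_residual(E_exact) ** 2)
loss_wrong = np.mean(physics_residual(E_wrong) ** 2)
```

Minimizing this residual over the permittivity, given measured field distributions, is the inverse-retrieval step the abstract describes.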
Time-of-flight (ToF) sensors are widely used to measure 3D depth. However, conventional ToF cameras have relatively low resolution compared to RGB cameras. To utilize such low-resolution depth images effectively in various research fields, their resolution must be increased. In addition, ToF sensors suffer from saturated and missing pixels. In this paper, a novel depth completion algorithm is proposed to improve the 3D depth image of a ToF camera with respect to both image resolution and abnormal pixels. Specifically, low-resolution depth images and relatively high-resolution RGB images are fused in a machine learning architecture. The performance of the proposed depth completion algorithm is demonstrated under various experimental conditions.
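One classical ingredient of such RGB-guided fusion can be sketched without the learned architecture: invalid ToF pixels are filled from valid neighbours, weighted by RGB similarity (a joint-bilateral-style filter; the window size and `sigma` below are arbitrary illustrative choices):

```python
import numpy as np

def fill_invalid(depth, rgb, sigma=10.0):
    """Fill zero-valued (missing/saturated) depth pixels from valid
    neighbours, weighting neighbours whose guide-image intensity is
    similar to the target pixel more strongly."""
    out = depth.copy()
    h, w = depth.shape
    for i, j in zip(*np.where(depth == 0)):           # invalid pixels
        i0, i1 = max(i - 2, 0), min(i + 3, h)
        j0, j1 = max(j - 2, 0), min(j + 3, w)
        d = depth[i0:i1, j0:j1]
        c = rgb[i0:i1, j0:j1]
        valid = d > 0
        wgt = np.exp(-((c - rgb[i, j]) ** 2) / (2 * sigma ** 2)) * valid
        if wgt.sum() > 0:
            out[i, j] = (wgt * d).sum() / wgt.sum()
    return out

depth = np.full((8, 8), 2.0)
depth[4, 4] = 0.0                                     # a missing/saturated pixel
rgb = np.full((8, 8), 100.0)                          # grayscale guide image
filled = fill_invalid(depth, rgb)
```

A learned architecture replaces this hand-crafted weighting with features extracted from both modalities, but the underlying idea, letting the high-resolution RGB image guide where depth values may be propagated, is the same.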