Giovanni Volpe (1), Joana B. Pereira (2), Daniel Brunner (3), Aydogan Ozcan (4)
(1) Göteborgs Univ. (Sweden); (2) Karolinska Institute (Sweden); (3) Institut Franche-Comte Electronique Mecanique Thermique et Optique (France); (4) Univ. of California, Los Angeles (United States)
This PDF file contains the front matter associated with SPIE Proceedings Volume 12204, including the Title Page, Copyright information, Table of Contents, and Conference Committee Page.
Adversarial attacks rely on an instability phenomenon that appears in general for all inverse problems, e.g., image classification and reconstruction, independently of the computational scheme or method used to solve the problem. We mathematically prove and empirically show that machine learning denoisers (MLDs) are not excluded: adversarial attacks exist in the form of noise patterns that drive the MLD into instability, i.e., the MLD amplifies the noise instead of suppressing it. We further demonstrate that neither adversarial retraining nor classic filtering provides an exit strategy from this dilemma. Instead, we show that adversarial attacks can be inferred by polynomial regression. Removing the inferred polynomial component from the total noise yields an efficient technique for building robust MLDs, which in turn makes downstream computer vision tasks such as image segmentation or classification more reliable.
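As an illustration of this sanitization idea, the hedged Python sketch below regresses a low-order polynomial on an estimated noise field and subtracts the fit before denoising; the signal, noise levels, and polynomial degree are assumptions for demonstration, not the paper's actual model.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 256
    x = np.linspace(-1.0, 1.0, n)
    clean = np.sin(4 * np.pi * x)                       # hypothetical clean signal
    gaussian_noise = 0.1 * rng.standard_normal(n)       # benign measurement noise
    adversarial = 0.4 * x**3 - 0.2 * x                  # assumed smooth (polynomial) attack
    noisy = clean + gaussian_noise + adversarial

    # In practice the total noise would be estimated, e.g., as the input minus a
    # reference denoiser output; here it is known by construction for clarity.
    total_noise_estimate = gaussian_noise + adversarial

    # Polynomial regression infers the smooth adversarial component ...
    coeffs = np.polyfit(x, total_noise_estimate, deg=3)
    inferred_attack = np.polyval(coeffs, x)

    # ... which is removed before the data reaches the machine learning denoiser.
    sanitized = noisy - inferred_attack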
Historically, many options to improve image quality exist, each derived from the same raw ultrasound sensor data; however, none of them combines multiple contributions in a single image formation step. This invited contribution discusses novel alternatives to beamforming raw ultrasound sensor data that improve image quality, delivery speed, and feature detection by learning from the physics of sound wave propagation. Applications include cyst detection, coherence-based beamforming, and COVID-19 feature detection. A new resource for the entire community to standardize and accelerate research at the intersection of ultrasound beamforming and deep learning is summarized (https://cubdl.jhu.edu). The connection to optics through the integration of ultrasound hardware and software is also discussed from the perspective of photoacoustic source detection, reflection artifact removal, and resolution improvements. These innovations demonstrate outstanding potential to combine multiple outputs and benefits in a single signal processing step with the assistance of deep learning.
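For context, the sketch below implements a plain delay-and-sum beamformer, the conventional single-purpose image formation step that learned, physics-informed approaches aim to improve upon; the array geometry, sampling rate, and random channel data are illustrative assumptions, not the CUBDL datasets.

    import numpy as np

    c = 1540.0                      # speed of sound in tissue [m/s]
    fs = 40e6                       # sampling rate [Hz]
    n_elements, n_samples = 64, 2048
    element_x = (np.arange(n_elements) - n_elements / 2) * 0.3e-3   # element positions [m]
    rf = np.random.randn(n_elements, n_samples)                     # stand-in for channel data

    def das_pixel(rf, px, pz):
        """Beamform one pixel at lateral px, depth pz by summing delayed channel data."""
        dist = np.sqrt((element_x - px) ** 2 + pz ** 2) + pz        # approximate two-way path
        idx = np.clip(np.round(dist / c * fs).astype(int), 0, n_samples - 1)
        return rf[np.arange(n_elements), idx].sum()

    value = das_pixel(rf, px=0.0, pz=20e-3)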
Low-loss single-mode optical coupling is a fundamental tool for most photonic networks, in both classical and quantum settings. Adiabatic coupling achieves highly efficient and broadband single-mode coupling using tapered waveguides and is a widespread design in current 2D photonic integrated circuit technology. Optical power transfer between a tapered input and inversely tapered output waveguides is achieved through evanescent coupling: the optical mode leaks adiabatically from the input core through the cladding into the output waveguide cores. We have recently shown that unlocking the third dimension for integration is essential for advantageous scaling of photonic networks. Two-photon polymerization (TPP) is a promising tool allowing dynamic and precise 3D printing of submicrometric optical components. Here, we leverage rapid fabrication by constructing the entire 3D photonic chip with the (3+1)D flash-TPP lithography configuration, which combines one-photon polymerization (OPP) and TPP and saves up to ≈90% of the printing time compared to full TPP fabrication. The additional photo-polymerization step provides auxiliary matrix stability for complex structures, a sufficient refractive index contrast of ∆n ≈ 5×10−3 between waveguide core and cladding, and propagation losses of 1.3 dB/mm for single-mode propagation. Overall, we compare different tapering strategies and reduce total losses below ∼0.2 dB by tailoring the coupling and waveguide geometry. Furthermore, we demonstrate adiabatic broadband functionality from 520 nm to 980 nm and adiabatic couplers with one input and up to four outputs. The scalability of output ports addressed here can only be achieved by exploiting all three spatial dimensions; such an adiabatic implementation is impossible in 2D.
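A hedged coupled-mode sketch of the adiabatic transfer mechanism described above: two waveguides whose tapers sweep the propagation-constant mismatch through zero exchange power almost completely when the sweep is slow compared to the evanescent coupling rate. The coupling strength, length, and mismatch below are illustrative values, not the printed devices' parameters.

    import numpy as np
    from scipy.integrate import solve_ivp

    kappa = 0.5          # evanescent coupling strength [1/mm] (assumed)
    L = 40.0             # coupler length [mm] (assumed)
    delta0 = 5.0         # propagation-constant mismatch swept by the tapers [1/mm]

    def coupled_modes(z, A):
        a1, a2 = A
        delta = delta0 * (2 * z / L - 1)      # tapers sweep the mismatch through zero
        da1 = -1j * (delta * a1 + kappa * a2)
        da2 = -1j * (-delta * a2 + kappa * a1)
        return [da1, da2]

    sol = solve_ivp(coupled_modes, (0, L), [1 + 0j, 0 + 0j], max_step=0.01)
    P_out = np.abs(sol.y[:, -1]) ** 2
    print(f"power left in input: {P_out[0]:.3f}, transferred to output: {P_out[1]:.3f}")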
Light detection and ranging (LiDAR) has been emerging as a powerful tool for applications with accurate and reliable perception requirements, e.g., autonomous driving, which needs a combination of long range and high spatial resolution together with real-time performance. Processing the raw LiDAR data, a high-dimensional unstructured 3D point cloud, is computationally costly due to the nature of the algorithms used to process point clouds. In particular, the neural networks employed for LiDAR data processing comprise several layers, each of which requires multiplications of large matrices. In this case, graphics processing units (GPUs) cannot be used as real-time standalone devices for hardware acceleration, because their dependency on a central processing unit (CPU) for data offloading and for scheduling the execution of the point-cloud processing algorithms leads to high execution times. To address these challenges, we propose an efficient co-design of an analog neural network (ANN) and a hybrid CMOS-photonics platform for LiDAR systems. The proposed architecture exploits the high bandwidth and low latency of optical computation to significantly improve computational efficiency. In particular, a CMOS control chip integrated with a photonic broadcast-and-weight architecture is interfaced with the LiDAR to perform real-time data processing and high-dimensional matrix multiplications. Moreover, by processing the raw LiDAR data in the analog domain, the proposed hybrid electro-optic computing platform minimizes the number of data converters in LiDAR systems.
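A minimal numerical sketch of the broadcast-and-weight principle invoked above: each input value rides on its own wavelength, a bank of tunable filters applies signed weights (emulating balanced photodetection), and the summed photocurrent realizes one row of the matrix-vector product. The dimensions, weights, and tanh nonlinearity are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(1)
    n_inputs, n_neurons = 16, 8
    x = rng.random(n_inputs)                           # optical power on each wavelength
    W = rng.uniform(-1.0, 1.0, (n_neurons, n_inputs))  # per-wavelength tunable weights

    # Balanced photodetection of all weighted wavelengths = analog matrix-vector product
    photocurrents = W @ x
    activations = np.tanh(photocurrents)               # assumed electro-optic nonlinearity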
The inclusion of prohibitively large micro-ring resonators, interconnected waveguide crossbar arrays, and multi-port multi-mode interferometer components demonstrated in the latest integrated photonic neural network hardware accelerators poses a significant scaling issue for implementing the large number of neural connections required to accurately represent the cognitive functions of a biological brain. In this investigation, we utilize the phase-change chalcogenide material GST to realize a non-volatile synaptic weight with built-in memory functionality by employing metamaterial design principles for wavelength-division-multiplexed photonic architectures. The transmission response of the optimized GST metamaterial gives rise to contrast ratios of 6 dB for both positive and negative weighting values.
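As a back-of-the-envelope illustration, the sketch below converts the reported 6 dB transmission contrast into a signed weight range by referencing transmissions to a mid-level state; this mapping is an assumed calibration for demonstration, not the paper's device model.

    import numpy as np

    contrast_db = 6.0
    t_max = 1.0
    t_min = t_max / 10 ** (contrast_db / 10)      # ~0.25: 6 dB is roughly a factor of 4 in power
    t_mid = 0.5 * (t_max + t_min)

    def weight_from_transmission(t):
        """Map a transmission level to a signed synaptic weight in [-1, 1]."""
        return 2 * (t - t_mid) / (t_max - t_min)

    print(weight_from_transmission(t_max), weight_from_transmission(t_min))  # ~ +1.0, -1.0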
Photonic tensor core circuits have been widely explored as possible hardware accelerators for the next generation of machine learning applications, owing to the large bandwidth, low latency, and energy savings that light offers. Many architectures have been presented, especially exploiting photonic integrated circuits. However, most of the proposed solutions lack some features, such as integration, scalability, or energy efficiency. In this paper, we review the major achievements of recent years, showing how high integration can lead to better performance but may also limit the scalability of the overall system.
Photonic reservoir computing is an emerging topic due to the possibility of realizing very fast devices with minimal training effort. We will discuss the reservoir computing performance of memory cells, focusing on the impact of delay lines and the interplay between coupling topology and performance for various benchmark tasks. We will further show that an additional delayed input can be beneficial for reservoir computing setups in general, as it provides an easy tuning parameter that can improve the performance of a reservoir on a range of tasks.
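The hedged echo-state sketch below illustrates the delayed-input idea in software: the reservoir receives both u(t) and u(t − delay) as inputs, and a ridge readout is trained on the collected states. The toy target task, reservoir size, and delay are illustrative assumptions rather than the benchmarks discussed in the talk.

    import numpy as np

    rng = np.random.default_rng(2)
    T, N, delay = 2000, 100, 5

    u = rng.uniform(0, 0.5, T)                       # input sequence
    y_target = np.roll(u, 3) * np.roll(u, 7)         # toy memory-dependent target

    W_in = rng.uniform(-0.5, 0.5, (N, 2))            # two input channels: u(t) and u(t - delay)
    W = rng.standard_normal((N, N))
    W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # scale spectral radius below 1

    states = np.zeros((T, N))
    x = np.zeros(N)
    for t in range(T):
        u_del = u[t - delay] if t >= delay else 0.0
        x = np.tanh(W @ x + W_in @ np.array([u[t], u_del]))
        states[t] = x

    # Ridge-regression readout on the second half of the run (first half as washout)
    S, Y = states[T // 2:], y_target[T // 2:]
    W_out = np.linalg.solve(S.T @ S + 1e-6 * np.eye(N), S.T @ Y)
    nmse = np.mean((S @ W_out - Y) ** 2) / np.var(Y)
    print(f"NMSE: {nmse:.3f}")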
Decision-making through artificial neural networks with minimal latency is critical for numerous applications such as navigation, tracking, and real-time machine-action systems. This requires machine learning hardware that processes multidimensional data at high throughput. Unfortunately, convolution operations, the primary computational tool for data classification tasks, obey challenging runtime-complexity scaling laws. However, homomorphically implementing the convolution theorem in a Fourier-optics display light processor can achieve a non-iterative O(1) runtime complexity for data inputs beyond 1,000 × 1,000 matrices. Following this approach, here we demonstrate data-streaming multi-kernel image batching using a Fourier convolutional neural network (FCNN) accelerator. We show image batch processing of large-scale matrices as 2 million dot-product multiplications performed by a digital light processing module in the Fourier domain. Furthermore, we parallelize this optical FCNN system by exploiting multiple spatially parallel diffraction orders, achieving a 98× throughput improvement over state-of-the-art FCNN accelerators. A comprehensive discussion of the practical challenges associated with working at the edge of system capabilities highlights the problem of crosstalk and resolution scaling laws in the Fourier domain. Accelerating convolution by exploiting massive parallelism in display technology brings non-von Neumann machine learning acceleration.
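The convolution theorem underpinning the O(1) optical runtime can be stated in two lines of NumPy: a (circular) spatial convolution equals an element-wise product of Fourier transforms. In the optical accelerator, the transforms are performed passively by lenses; the array sizes here are illustrative.

    import numpy as np

    rng = np.random.default_rng(3)
    image = rng.random((1024, 1024))
    kernel = rng.random((1024, 1024))     # kernel zero-padded to the image size in practice

    # Convolution theorem: spatial convolution = element-wise product in the Fourier domain
    conv_fourier = np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(kernel)))

    # Computed directly, the same convolution scales poorly with kernel size; optically,
    # the two Fourier transforms come "for free" from propagation through a 4f lens system.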
An efficient photonic hardware integration of neural networks can benefit from the inherent parallelism, high-speed data processing, and potentially low energy consumption of photonics. In conventional artificial neural networks (ANNs), neurons are static, with single continuous-valued outputs. In contrast, information transmission and computation in biological neurons occur through spikes, where spike timing and rate play a significant role. Spiking neural networks (SNNs) are thereby more biologically relevant and bring additional benefits in terms of hardware friendliness and energy efficiency. Considering these advantages, we designed a photonic reservoir computer (RC) based on a recurrent photonic spiking neural network, i.e., a liquid state machine. It is a scalable proof-of-concept experiment comprising more than 30,000 neurons. This system presents an excellent testbed for demonstrating next-generation bio-inspired learning in photonic systems.
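A hedged software sketch of a liquid state machine built from leaky integrate-and-fire neurons, orders of magnitude smaller than the >30,000-neuron photonic experiment; connectivity, time constants, and the input spike train are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(4)
    N, T = 200, 500
    tau, v_th = 20.0, 1.0                      # membrane time constant [steps], spike threshold

    W = 0.05 * rng.standard_normal((N, N)) * (rng.random((N, N)) < 0.1)  # sparse recurrent weights
    w_in = 0.5 * rng.standard_normal(N)

    u = (rng.random(T) < 0.05).astype(float)   # sparse input spike train
    v = np.zeros(N)
    spikes = np.zeros((T, N))
    for t in range(T):
        v += (-v + W @ spikes[t - 1] + w_in * u[t]) / tau   # leaky integration of recurrent + input drive
        fired = v >= v_th
        spikes[t] = fired
        v[fired] = 0.0                         # reset membrane potential after spiking

    # A linear readout trained on low-pass-filtered spike trains would complete the LSM.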
In fluorescence microscopy, an external source of excitation light is required for photon emission and thereby sample visualization. Even though fluorescence imaging has provided a paradigm shift for cell biology and other disciplines, the sample may suffer from high excitation light intensities and from spurious signals originating from autofluorescence. Bioluminescence imaging, on the contrary, does not need an external light source for photon emission and visualization, bypassing the effects of autofluorescence, phototoxicity, and photobleaching. This renders bioluminescence microscopy an ideal tool for long-term imaging. A major limitation of bioluminescence compared to fluorescence imaging is the low quantum yield of bioluminescent proteins, which requires long exposure times and large collecting wells. Here, we work towards universal tools to overcome the main limitation of bioluminescence imaging: low signal-to-noise ratio (SNR). To enhance spatiotemporal resolution, we have designed an optimized setup that boosts optical efficiency, and we combine the photon-starved, low-SNR output with deep-learning-based content-aware reconstruction methods. We trained a U-Net neural network with augmented experimental fluorescence data to denoise low-SNR bioluminescent images. In addition, we trained a subpixel convolutional network with synthetic light-field data to perform 3D reconstruction from a single photographic exposure without the presence of autofluorescence. Furthermore, we compare the reconstruction time and quality improvement against classical deconvolution methods.
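A hedged PyTorch sketch of a small U-Net-style denoiser of the kind described above (one encoder/decoder level only). The depth, channel counts, and random placeholder batch are illustrative assumptions, not the trained network.

    import torch
    import torch.nn as nn

    def block(c_in, c_out):
        return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(),
                             nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU())

    class TinyUNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.enc, self.down = block(1, 16), nn.MaxPool2d(2)
            self.mid = block(16, 32)
            self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
            self.dec, self.out = block(32, 16), nn.Conv2d(16, 1, 1)   # 32 = 16 skip + 16 upsampled

        def forward(self, x):
            e = self.enc(x)                            # encoder features, kept for the skip connection
            u = self.up(self.mid(self.down(e)))        # bottleneck followed by upsampling
            return self.out(self.dec(torch.cat([e, u], dim=1)))

    # One optimization step on a placeholder batch of noisy input / higher-SNR target pairs
    model = TinyUNet()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    noisy, target = torch.rand(4, 1, 64, 64), torch.rand(4, 1, 64, 64)
    loss = nn.functional.mse_loss(model(noisy), target)
    loss.backward()
    opt.step()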
Technical innovations in image acquisition methods with higher spatial, temporal, and functional resolution have increased imaging data production rates tremendously. This poses a significant challenge to radiologists, who are limited by the image processing and understanding capabilities of the human brain. Hence, the critical bottleneck for medical diagnosis is no longer the acquisition of images but their timely and accurate interpretation. Facing this challenge, artificial intelligence (AI) will have a major impact on how we practice radiology in the future: AI will revolutionize imaging interpretation and protocoling, reduce radiation exposure and contrast agent dosage, streamline patient scheduling, and support efficient communication of clinically meaningful imaging information to referring physicians and their patients. Hence, it is evident that AI will redefine all aspects of the radiology profession by making radiologists better and faster at what they do. Here, innovative approaches to machine learning on large multidimensional spatiotemporal data play a key role in improving image understanding in biomedicine. Current accomplishments in the presenter's research projects address scientific challenges in machine learning, including explainable artificial intelligence, machine-supported image annotation, human-machine complementarity, and unsupervised exploratory data visualization. Clinical applications include automatic analysis of chest radiographs, imaging biomarkers for breast cancer and brain tumor diagnosis, non-invasive bone stability prediction in osteoporosis, novel methods for imaging of neuroinflammatory disease, and recent breakthroughs in brain network connectivity analysis for the diagnosis of neurologic disorders. An outlook on clinical deployment and quantitative evaluation of artificial intelligence solutions in radiology confirms the broad applicability of the presented methods.
INTRODUCTION: Early-Onset Alzheimer's Disease (EOAD, <65 years) and Frontotemporal Dementia (FTD) are common forms of early-onset dementia; there is therefore a need to establish an accurate diagnosis and to obtain markers for disease tracking. We combined supervised and unsupervised machine learning (ML) to discriminate between EOAD and FTD patients.
METHODS: We included 3T-T1 MRI of 203 subjects under 65 years old: 66 healthy controls (CTR, age: 55.0 ± 8.4 years), 85 EOAD patients (age: 57.3 ± 6.1 years) and 52 FTD patients (age: 57.9 ± 4.8 years). We obtained subcortical gray matter volumes and cortical thickness (CTh) regional measures using FreeSurfer. For ML, we performed a Principal Component Analysis (PCA) of all volumes and CTh values. Then, the first principal component (PC) was introduced into a Support Vector Machine (SVM). Overall performance was assessed using k-fold cross-validation.
RESULTS: Our algorithm had an accuracy of 87.2 ± 14.2% for the CTR vs EOAD classification, 80.8 ± 20.4% for CTR vs FTD, 66.5 ± 12.9% for EOAD vs FTD, and 65.2 ± 10.6% when discriminating the three groups. We used the weights of the first PC to create disease-specific patterns.
CONCLUSION: By using a single feature that combines information from CTh and subcortical volumes, our algorithm classifies CTR, EOAD and FTD with good accuracy. We suggest that this approach can be used as a feature reduction strategy in ML algorithms while providing interpretable atrophy patterns.
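A hedged scikit-learn sketch of the described pipeline for one pairwise comparison (CTR vs EOAD): standardize FreeSurfer-derived features, keep the first principal component, classify with an SVM, and score with k-fold cross-validation. The random feature matrix and number of features are placeholders, not the study data.

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(5)
    n_features = 100                                     # subcortical volumes + CTh measures (assumed)
    X = rng.standard_normal((151, n_features))           # placeholder for 66 CTR + 85 EOAD subjects
    y = np.array([0] * 66 + [1] * 85)

    clf = make_pipeline(StandardScaler(), PCA(n_components=1), SVC(kernel="linear"))
    scores = cross_val_score(clf, X, y, cv=5)            # k-fold cross-validation
    print(f"CTR vs EOAD accuracy: {scores.mean():.2f} ± {scores.std():.2f}")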
Diffuse infiltrative gliomas are considered a systemic brain disorder and produce alterations in cerebral functional and structural integrity beyond the tumor location. These alterations are the result of the dynamic interplay between large-scale neural circuits. Describing the nature of these interactions has been a challenging task, yet one that is important for understanding glioma disease evolution. Modern dynamic graph network theory and control theory applied to these structural and functional networks open a new research avenue for understanding the dynamical properties of, and differences between, healthy controls and glioma patients. It has been shown that controllability is relevant for providing a mechanistic explanation of how the brain navigates between cognitive states. We believe it is also relevant for describing the connectomic alterations in glioma and the differences among subtypes and healthy controls. The nodes that are needed to control these networks and steer them to any state are called driver nodes. We determined the driver nodes of the Default-Mode Network (DMN) for resting-state functional connectivity (FC) and diffusion-MRI-based structural connectivity (SC) networks (comprising edge weight (EW) and fractional anisotropy (FA)) in isocitrate dehydrogenase-mutated (IDHmut) and wild-type (IDHwt) patients and healthy controls. Our results show that healthy controls have better controllability for both FC and SC, and that structural connectomic dynamical aberrations are more pronounced in glioma patients than functional connectomic alterations.
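A hedged sketch of one common way to quantify candidate driver nodes: average controllability, i.e., the trace of the controllability Gramian of a linear network model when input enters at a single node. The toy symmetric connectivity matrix and horizon below are illustrative assumptions, not the DMN connectomes analyzed in the study.

    import numpy as np

    rng = np.random.default_rng(6)
    n = 10
    A = rng.random((n, n))
    A = (A + A.T) / 2                                       # toy symmetric connectome
    np.fill_diagonal(A, 0)
    A = A / (1 + np.max(np.abs(np.linalg.eigvals(A))))      # stabilize (spectral radius < 1)

    def average_controllability(A, node, horizon=50):
        """Trace of the controllability Gramian when only `node` receives input."""
        n = A.shape[0]
        B = np.zeros((n, 1))
        B[node] = 1.0
        G = np.zeros((n, n))
        Ak = np.eye(n)
        for _ in range(horizon):
            G += Ak @ B @ B.T @ Ak.T
            Ak = A @ Ak
        return np.trace(G)

    ranking = sorted(range(n), key=lambda i: -average_controllability(A, i))
    print("candidate driver nodes (most controllable first):", ranking)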
Artificial microswimmers are active particles designed to mimic the behavior of living microorganisms. The adaptive behavior of the latter is based on the experience they gain through interactions with the environment. At these length scales, microswimmers are also subject to Brownian motion, which randomizes their position and propulsion direction and thus becomes a key factor in the adaptation process. However, artificial systems are limited in their ability to adapt to such noise and environmental stimuli. In this work, we combine artificial microswimmers with a reinforcement learning algorithm to explore their adaptive behavior in a noisy environment. These self-thermophoretic active particles are propelled and steered by generating thermal gradients on their surface with a tightly focused laser beam, and they are imaged under a microscope in real time to monitor their dynamics. With such a versatile platform capable of real-time control and monitoring, we demonstrated the solution of a standard navigation problem under the inevitable influence of Brownian motion by introducing deep reinforcement learning, specifically deep Q-learning. We also identified the different noise sources in the system and how they affect the learning speed and the navigation strategies picked up by the microswimmer.
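As a simplified stand-in for the deep Q-learning used in the experiment, the sketch below runs tabular Q-learning on a 1D navigation task with random "Brownian" kicks; the states, actions, rewards, and noise probabilities are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(7)
    n_states, goal, actions = 21, 20, (-1, +1)
    Q = np.zeros((n_states, len(actions)))
    alpha, gamma, eps = 0.1, 0.95, 0.1          # learning rate, discount, exploration

    for episode in range(2000):
        s = 0
        for _ in range(200):
            a = rng.integers(2) if rng.random() < eps else int(np.argmax(Q[s]))
            kick = rng.choice((-1, 0, 1), p=(0.15, 0.7, 0.15))    # Brownian displacement
            s_next = int(np.clip(s + actions[a] + kick, 0, n_states - 1))
            r = 1.0 if s_next == goal else -0.01
            Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
            s = s_next
            if s == goal:
                break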
Adversarial sensing is a self-supervised, learning-based approach for solving inverse problems with stochastic forward models. The basic idea behind adversarial sensing is that one can use a discriminator to compare the distributions of predicted and observed measurements. The feedback from the discriminator thus allows one to reconstruct a signal from observations produced by stochastic forward models without solving for any of the forward model's unknown latent variables. While adversarial sensing requires no training data, it can be modified to incorporate pretrained deep generative models as priors. This paper highlights some of our recent work on applying adversarial sensing to imaging through turbulence and to long-range sub-diffraction-limited imaging with Fourier ptychography. For a longer and more detailed discussion of our methods, please see Ref. 1.
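A hedged PyTorch sketch of the adversarial-sensing loop: a discriminator is trained to distinguish observed measurements from measurements simulated by pushing the current signal estimate through a stochastic forward model, and the estimate is updated to fool it. The forward model, network sizes, and optimization settings are illustrative assumptions, not those of Ref. 1.

    import torch
    import torch.nn as nn

    def forward_model(x, n_meas=32):
        """Toy stochastic forward model: fresh random mixing each call, then magnitude."""
        A = torch.randn(n_meas, x.numel())            # unknown latent randomness per observation
        return (A @ x.flatten()).abs()

    x_est = torch.zeros(16, 16, requires_grad=True)   # signal being reconstructed
    D = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1))
    opt_x = torch.optim.Adam([x_est], lr=1e-2)
    opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCEWithLogitsLoss()

    x_true = torch.rand(16, 16)                       # stands in for the unknown scene
    for step in range(200):
        y_obs = forward_model(x_true)                 # observed measurement
        y_sim = forward_model(x_est)                  # simulated measurement
        # Discriminator step: tell observed from simulated measurement distributions
        d_loss = bce(D(y_obs), torch.ones(1)) + bce(D(y_sim.detach()), torch.zeros(1))
        opt_D.zero_grad()
        d_loss.backward()
        opt_D.step()
        # Signal step: update the estimate so its simulated measurements fool the discriminator
        g_loss = bce(D(forward_model(x_est)), torch.ones(1))
        opt_x.zero_grad()
        g_loss.backward()
        opt_x.step()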