This PDF file contains the front matter associated with SPIE Proceedings Volume 12390, including the Title Page, Copyright information, Table of Contents, and Conference Committee information.
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks. You are receiving this notice because your organization may not have SPIE eBooks access. Shibboleth/OpenAthens users: please sign in to access your institution's subscriptions. To obtain this item, you may purchase the complete book in print or electronic format on SPIE.org.
We propose a new method, Speckle Flow SIM, to super-resolve every frame of a dynamic scene from a sequence of diffraction-limited images at different acquisition timepoints. Speckle Flow SIM uses a fixed speckle-structured illumination to encode the super-resolved information into measurements and a neural space-time model to exploit the temporal redundancy for the dynamic scene reconstruction. We built a simple, inexpensive setup for Speckle Flow SIM and experimentally demonstrated 1.88× super-resolution for a dynamic scene.
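As a conceptual illustration only (not the authors' implementation), a neural space-time model represents the dynamic scene as a continuous function f(x, y, t) parameterized by a small multilayer perceptron that can be queried at any spatial coordinate and acquisition time. The sketch below, with hypothetical layer sizes, shows just the evaluation of such a coordinate network:

```python
import numpy as np

rng = np.random.default_rng(1)

def init_mlp(sizes):
    """Random weights/biases for a fully connected net with the given layer sizes."""
    return [(rng.normal(0.0, 0.5, (m, n)), np.zeros(n)) for m, n in zip(sizes, sizes[1:])]

def mlp(params, xyt):
    """Evaluate the coordinate network: rows of xyt are (x, y, t) queries."""
    h = xyt
    for W, b in params[:-1]:
        h = np.tanh(h @ W + b)
    W, b = params[-1]
    return h @ W + b  # predicted scene intensity at each (x, y, t)

# Query the same spatial point at two acquisition timepoints.
params = init_mlp([3, 32, 32, 1])  # hypothetical architecture
coords = np.array([[0.1, 0.2, 0.0], [0.1, 0.2, 0.5]])
print(mlp(params, coords).shape)  # one intensity per query
```

In the actual method such a network would be fit so that its output, multiplied by the fixed speckle pattern and blurred by the diffraction-limited optics, reproduces the measured image sequence.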
We present a recent study that combines a multi-armed bandit algorithm from reinforcement learning with spontaneous Raman measurements to accelerate acquisition, designing and generating the optimal illumination pattern on the fly during the measurement while preserving diagnostic accuracy. Here, accurate diagnosis means that a user can set an allowable error rate δ a priori to ensure that the diagnosis is accomplished with probability greater than (1 − δ) × 100%. We present our algorithm and simulation studies using Raman images for the diagnosis of follicular thyroid carcinoma, and show that this protocol yields accurate diagnoses faster than point-scanning Raman microscopy, which requires a full detailed scan over all pixels. On-the-fly Raman image microscopy is the first Raman microscope design to accelerate measurements using a reinforcement-learning technique, the multi-armed bandit algorithm employed in the Monte Carlo tree search of AlphaGo. Given a descriptor based on Raman signals that quantifies a predefined quantity of interest, e.g., the degree of cancer, anomaly, or material defects, the on-the-fly Raman image microscope evaluates the upper and lower confidence bounds, in addition to the sample average, of that quantity from a finite set of point illuminations; the bandit algorithm then feeds back the desired illumination pattern to the microscope during the measurement to accelerate detection of the anomaly. The realization of the programmable illumination microscope using a spatial light modulator will be presented.
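As a minimal sketch of the bandit principle described above (hypothetical descriptor values and parameters, not the authors' implementation), an upper-confidence-bound rule concentrates illumination on the regions whose Raman descriptor is most likely anomalous:

```python
import math
import random

def ucb_select(counts, means, t, c=1.0):
    """Pick the illumination position (arm) with the highest upper confidence bound.

    counts[i]: number of times position i has been illuminated
    means[i]:  running mean of the Raman descriptor at position i
    """
    best, best_ucb = None, -float("inf")
    for i, (n, mu) in enumerate(zip(counts, means)):
        if n == 0:
            return i  # illuminate unmeasured positions first
        ucb = mu + c * math.sqrt(2.0 * math.log(t) / n)
        if ucb > best_ucb:
            best, best_ucb = i, ucb
    return best

# Toy run: position 2 has the highest true descriptor (e.g., a cancerous region).
random.seed(0)
true_signal = [0.1, 0.2, 0.9, 0.15]
counts = [0] * 4
means = [0.0] * 4
for t in range(1, 501):
    i = ucb_select(counts, means, t)
    x = true_signal[i] + random.gauss(0.0, 0.05)  # noisy descriptor reading
    counts[i] += 1
    means[i] += (x - means[i]) / counts[i]       # incremental mean update
print(counts.index(max(counts)))  # most-illuminated position
```

The measurement budget concentrates on the anomalous region while the confidence bounds at the other positions shrink enough to rule them out at the chosen error rate.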
We report the use of a conditional generative adversarial network (cGAN) for restoring undersampled images captured in free-space angular-chirp-enhanced delay (FACED) microscopy. We show that this deep-learning approach allows a wider imaging field of view (FOV) along the FACED axis without substantially sacrificing imaging resolution, photon budget, or speed, even with a lower density of scanning foci. This study shows the potential of extending the applicability of FACED imaging to a wider range of biological applications that require extended-FOV imaging.
We present a generative deep learning model (named beGAN) for reconstructing batch-effect-free quantitative phase images (QPI). Employing a high-throughput microfluidic multimodal imaging flow cytometry platform (multi-ATOM), our model demonstrates robust QPI prediction from brightfield images on various lung cancer cell lines (>800,000 cells). With batch-free QPI, the biophysical phenotypes of cells are unified across batches, and classification accuracy on cross-batch cancer cell lines improves significantly, from 33.61% to 91.34%. This work unveils an avenue for overcoming batch effects with deep learning at the single-cell imaging level.
The functional meaning associated with neuronal activity in the mammalian brain and sensory systems remains to be fully understood. Exploring this area of neuroscience requires high-speed 3D imaging at >1 kHz volumetric scan rates with sub-cellular resolution, as neuronal signals propagate on sub-millisecond time scales. Additionally, since these studies must be performed in vivo, care must be taken to avoid invasive or damaging methods. Multi-photon imaging allows for non-invasive studies that penetrate deep into brain tissue, but has traditionally been limited to volumetric imaging between 10 and 100 Hz. We propose an improvement upon these systems with the novel imaging modality 2-photon Line Excitation and Array Detection (2p-LEAD) microscopy. 2p-LEAD builds on our previous work, in which we developed single-photon LEAD microscopy operating at 0.8 million FPS for 3D flow cytometry. In 2p-LEAD, we scan a 1035 nm excitation line of 2.4 μm × 220 μm (1/e² beam intensity diameter) at the focal plane. The resulting fluorescence is collected by a 16-channel linear PMT array. With a scanning mirror, we scan the line over a 140 μm × 160 μm FOV at 3,000 FPS, creating a frame of 16 × 320 pixels. Here we present the design and imaging capabilities of our current 2p-LEAD instrument. This system lays the groundwork for higher-speed imaging at 125 kHz frame rates, with an acousto-optic deflector replacing the scanning mirror. When combined with vertical scanning, we will be able to image volumetrically at sub-millisecond time scales, allowing for in vivo calcium imaging of the visual cortex.
The system presented here is an evolution of the recently introduced spectro-temporal laser imaging by diffractive excitation (SLIDE) (1) microscopy technique. To excite endogenous fluorescence, a new flexible, fiber-based laser source at 780 nm was developed. The fiber-based FDML-MOPA (2) was amplified to high peak and average powers by rare-earth erbium fiber amplifiers, followed by broadband quasi-phase-matched frequency doubling in a fan-out PPLN crystal. The output is a 10 nm wide swept, pulsed laser around 780 nm with a pulse peak power of 150 W, a 44 ps pulse duration, and a pulse repetition rate of 82 MHz (250 pulses at a 347 kHz sweep rate). The wavelength sweep is converted to line scans by a diffraction grating and sent to a microscope for two-photon excitation of UV-excited dyes or endogenous autofluorescence. For detection, the signal is captured with a 4 GS/s high-speed digitization card, enabling 2 kHz fluorescence lifetime imaging (FLIM) acquisition.
In this work, we present the first images obtained with 780 nm SLIDE at a 2 kHz frame rate. Through the additional use of a piezo objective scanner, we are able to perform 3D imaging at a 20 Hz volume rate. We have also used this novel system for high-speed LiDAR imaging at a frequency of 2 kHz, using the recently introduced SLIDE-based time-stretch LiDAR approach.
Visualization of the spatiotemporal dynamics of propagation is fundamental to understanding dynamic processes ranging from action potentials to electromagnetic pulses, the two ultrafast processes in biology and physics, respectively. Here, we demonstrate differentially enhanced compressed ultrafast photography to directly visualize propagations of passive current flows at approximately 100 m/s along internodes from Xenopus laevis sciatic nerves and of electromagnetic pulses at approximately 5×107 m/s through lithium niobate. The spatiotemporal dynamics of both propagation processes are consistent with the results from computational models, demonstrating that our method can span these two extreme timescales while maintaining high phase sensitivity.
Coherent anti-Stokes Raman scattering is an extremely powerful nonlinear optical (NLO) microscopy technique for label-free vibrational imaging, allowing for chemical characterization of biological samples in their native state. We introduce here video-rate wide-field signal generation and acquisition over a large field of view (tens of micrometers), allowing for real-time investigation of fast biological dynamics. To this aim, our approach employs an amplified femtosecond ytterbium laser source delivering high-energy (≈ µJ) pulses in the near-infrared at a 1035-nm central wavelength and 2-MHz repetition rate, from which the pump and Stokes beams are generated. Narrowband pump pulses with ≈1.1 nm bandwidth (10 cm−1) guarantee sufficient spectral resolution for the Lorentzian vibrational peaks. Broadband Stokes pulses in the 1100 to 1500 nm range are produced via supercontinuum generation in a 10-mm YAG crystal and amplified in a non-collinear optical parametric amplifier (NOPA). This allows us to acquire hypercubes that cover the entire fingerprint region of the molecular vibrational spectrum, the richest in chemical information. Our results pave the way for future clinical diagnostic applications with video-rate imaging capabilities.
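The quoted spectral resolution can be checked with the standard bandwidth conversion Δν̃ = Δλ/λ² (with both lengths expressed in cm); a quick sketch:

```python
def bandwidth_nm_to_wavenumber(delta_lambda_nm, center_nm):
    """Convert a wavelength bandwidth (nm) at a given center wavelength (nm)
    to a wavenumber bandwidth (cm^-1) via dv = dl / l^2, with l, dl in cm."""
    dl_cm = delta_lambda_nm * 1e-7
    l_cm = center_nm * 1e-7
    return dl_cm / l_cm**2

# 1.1 nm bandwidth at the 1035 nm pump wavelength:
print(round(bandwidth_nm_to_wavenumber(1.1, 1035), 1))  # ≈ 10.3 cm^-1
```

This reproduces the ≈10 cm−1 figure quoted for the narrowband pump pulses.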
Vascular stenosis caused by atherosclerosis can lead to platelet activation and aggregation in thrombosis. However, the efficacy of antiplatelet drugs under stenosis is not well understood due to the lack of analytical tools. Here we demonstrate a new method combining optofluidic time-stretch quantitative phase microscopy with a 3D stenosis chip to enable high-speed, high-resolution, label-free imaging of circulating platelet aggregates under atherogenic flow conditions. Our findings indicate that the proposed high-speed on-chip optofluidic imaging is a powerful tool for studying platelet biology, screening antiplatelet drugs, and developing therapeutic strategies for patients with atherosclerotic diseases.
Imaging flow cytometry (IFC) is widely accepted as a generic method for population analysis of even very large particle collections. Combined with tailored optical setups, on-chip sample management, and task-specific data analysis, it applies modern imaging modalities to high-throughput applications. In our work, we demonstrate the capabilities of a generic microfluidic chip concept that can be easily customized for applications in conventional and 3D imaging flow cytometry, particle sorting, and Brownian motion analysis of nanoparticles.
Photoluminescence lifetime imaging of upconverting nanoparticles features increasingly in recent progress in optical thermometry. Despite remarkable advances in photoluminescent temperature indicators, existing optical instruments lack the ability to perform wide-field photoluminescence lifetime imaging in real time, and thus fall short in dynamic temperature mapping. Here, we have developed single-shot photoluminescence lifetime imaging thermometry (SPLIT), built on a compressed-sensing ultrahigh-speed imaging paradigm. Using core/shell NaGdF4:Er3+,Yb3+/NaGdF4 upconverting nanoparticles as lifetime-based temperature indicators, we apply SPLIT to longitudinal wide-field temperature monitoring beneath a thin scattering medium. SPLIT also enables video-rate temperature mapping of a moving biological sample at single-cell resolution.
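Lifetime-based thermometry rests on extracting a per-pixel decay constant that is then mapped to temperature via a calibration curve. As a simplified sketch of the lifetime-extraction step only (assuming a single-exponential decay and a log-linear fit, not the SPLIT compressed-sensing reconstruction itself):

```python
import numpy as np

def lifetime_from_decay(t, intensity):
    """Estimate a single-exponential lifetime tau from a decay trace
    I(t) = I0 * exp(-t / tau) via a log-linear least-squares fit."""
    slope, _ = np.polyfit(t, np.log(intensity), 1)
    return -1.0 / slope

# Synthetic decay with tau = 0.5 ms (upconversion lifetimes are typically
# on the microsecond-to-millisecond scale).
t = np.linspace(0.0, 2.0, 200)        # time axis, ms
trace = 100.0 * np.exp(-t / 0.5)      # noiseless decay trace
print(round(lifetime_from_decay(t, trace), 3))  # → 0.5
```

In a real instrument this fit (or an equivalent estimator) is applied at every pixel of the recovered time-resolved datacube to produce the lifetime, and hence temperature, map.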
Neurosurgery is challenged by the difficulty of determining brain tumor boundaries during excisions. Optical coherence tomography is investigated as an imaging modality for providing a viable contrast channel. Our MHz-OCT technology enables rapid volumetric imaging suitable for surgical workflows. We present a surgical-microscope-integrated MHz-OCT imaging system, which is used to collect in-vivo images of human brains for use in machine learning systems trained to identify and classify tumorous tissue.
We present an ultrafast swept source for optical coherence tomography (OCT) at a central wavelength of 1050 nm with an 86 nm bandwidth. Based on low-noise supercontinuum dynamics and time stretch, this akinetic swept source operates at 10 MHz. The fast sweep rate and low noise of the source enable high-speed imaging at a wavelength suitable for biological tissue (eye, skin). Such a light source offers significant potential for achieving a large bandwidth beyond the limitations of current high-speed swept-source technologies.
The blood glucose (BG) level is one of the cardiovascular indicators that must be kept within a certain range. The importance of BG management is rising, as clinical studies have recently reported that a vicious cycle of spikes and drops in BG levels, called a BG roller coaster, causes vascular dysfunction, increasing the risk of cardiovascular diseases. To investigate how elevated BG affects blood vessels, we photoacoustically monitored time-dependent changes in vascular diameters after intraperitoneal glucose injection in normal rats. Every five minutes, vascular networks were visualized using high-speed photoacoustic microscopy, vascular diameters were calculated, and the changes in vascular diameter were quantified. Arterioles constricted as the BG level increased, and then recovered as the BG level saturated; meanwhile, venules maintained their diameters. These results show that a sudden transition to a hyperglycemic state may cause arterioles to constrict. This first direct observation of arterioles' rapid vasoconstriction due to acute hyperglycemia and their spontaneous recovery may serve as meaningful evidence for studying the effects of BG level on the cardiovascular system.
Fluorescence molecular tomography (FMT) has gained prominence in recent years as a viable optical imaging technique for non-invasive, high-sensitivity, tomographic imaging of the brain. While optical imaging methods have demonstrated promising results for quantitative imaging of functional changes in the brain, they are still limited in their ability to achieve high spatial and temporal resolution. To address these challenges, we present here a deep learning solution for FMT reconstruction, which implements a neural network with our novel asymptotic sparse function from our previously introduced sensitivity-equation-based non-iterative sparse optical reconstruction (SENSOR) code to achieve high-resolution, sparse reconstructions using only learned parameters. We evaluate the proposed network through numerical phantom experiments. Furthermore, once the network is trained, the total reconstruction time is independent of the number of sources and wavelengths used.
Learning-based compressed sensing algorithms are widely used for recovering the underlying datacube in snapshot compressive temporal imaging (SCTI), a novel technique for recording temporal data in a single exposure. Despite providing fast processing and high reconstruction performance, most deep-learning approaches are merely considered a substitute for analytical-model-based reconstruction methods. In addition, these methods often presume ideal behavior of the optical instruments, neglecting any deviation in the encoding and shearing processes. Consequently, these approaches provide little feedback for evaluating SCTI's hardware performance, which limits the quality and robustness of reconstruction. To overcome these limitations, we develop a new end-to-end convolutional neural network, termed the deep high-dimensional adaptive net (D-HAN), that provides multi-faceted process-aware supervision to an SCTI system. The D-HAN comprises three joint stages: four dense layers for shearing estimation, a set of parallel layers emulating the closed-form solution of SCTI's inverse problem, and a U-net structure that serves as a filtering step. In system design, the D-HAN optimizes the coded aperture and establishes SCTI's sensing geometry. In image reconstruction, the D-HAN senses the shearing operation and retrieves the three-dimensional scene. D-HAN-supervised SCTI is experimentally validated using compressed optical-streaking ultrahigh-speed photography to image a rotating spinner at an imaging speed of 20,000 frames per second. The D-HAN is expected to improve the reliability and stability of a variety of snapshot compressive imaging systems.
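The encoding-and-shearing forward model that such networks emulate can be sketched in simplified form (binary coded aperture, one-pixel shear per temporal frame; all dimensions are illustrative, not the instrument's):

```python
import numpy as np

def scti_forward(frames, mask):
    """Simplified SCTI forward model: each temporal frame is masked by the
    coded aperture, sheared by its time index, and all sheared frames are
    summed into a single 2D snapshot measurement."""
    T, H, W = frames.shape
    snapshot = np.zeros((H, W + T - 1))
    for t in range(T):
        snapshot[:, t:t + W] += frames[t] * mask  # encode, shear, integrate
    return snapshot

rng = np.random.default_rng(0)
frames = rng.random((8, 16, 16))                    # (time, height, width) datacube
mask = (rng.random((16, 16)) > 0.5).astype(float)   # binary coded aperture
y = scti_forward(frames, mask)
print(y.shape)  # snapshot widened by T - 1 sheared pixels → (16, 23)
```

Reconstruction inverts this mapping; a process-aware network additionally estimates deviations in the mask and shear rather than assuming the ideal model above.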
The rapid growth of visual dataset generation has driven recent advances in deep neural networks (DNNs). Application domains like biomedical imaging require a high level of precision, which is suitably achieved using convolutional neural networks (CNNs) at the expense of increased computation, hardware, and power resources. The implementation of such CNN architectures is constrained by currently available resource-limited embedded and application-specific integrated circuit (ASIC) systems. In this work, a field-programmable gate array (FPGA) based hardware accelerator with a generalized architecture for convolution and fully connected (FC) layers is presented that exploits a massive level of intra-layer parallelism. Compute-intensive convolution layers are replaced by depthwise separable (DS) convolution layers, which reduced the number of computations and memory accesses by 7.8× and 10×, respectively, for the VGG8 network after detailed design space exploration. Furthermore, parallel computation of arithmetic tasks reduced the compute bound of the proposed architecture. Reduced-precision data types for both inputs and weights resulted in an overall reduction in latency and resource utilization. FPGA implementation results of the proposed CNN accelerator for classifiers trained on subsets of the MedMNIST dataset show a balance between high performance (214.5 GOP/s for the DS convolution layer) and low resource utilization.
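The computational saving of depthwise separable convolution can be sketched by counting multiply-accumulate (MAC) operations; the layer sizes below are illustrative, not the paper's VGG8 configuration:

```python
def conv_macs(h, w, cin, cout, k):
    """MACs for a standard k x k convolution (stride 1, 'same' padding)."""
    return h * w * cin * cout * k * k

def ds_conv_macs(h, w, cin, cout, k):
    """MACs for a depthwise separable conv: depthwise k x k + pointwise 1 x 1."""
    return h * w * cin * k * k + h * w * cin * cout

# Hypothetical layer: 32x32 feature map, 64 -> 128 channels, 3x3 kernel.
h = w = 32
cin, cout, k = 64, 128, 3
standard = conv_macs(h, w, cin, cout, k)
separable = ds_conv_macs(h, w, cin, cout, k)
print(round(standard / separable, 1))  # reduction factor ≈ 1 / (1/cout + 1/k²)
```

For this layer the reduction factor is about 8.4×, consistent in magnitude with the 7.8× computation reduction reported for the full VGG8 network.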