We present DeepVIDv2, a resolution-improved, self-supervised denoising approach for voltage imaging that achieves higher spatial resolution while preserving fast neuronal dynamics. Existing methods enhance the signal-to-noise ratio (SNR) but compromise spatial resolution and produce blurry outputs. By disentangling spatial and temporal performance into two separate parameters, DeepVIDv2 overcomes the tradeoff faced by its predecessor. This advance enables more effective analysis of high-speed, large-population voltage imaging data.
Significance: Fluorescence head-mounted microscopes, i.e., miniscopes, have emerged as powerful tools to analyze in-vivo neural populations but exhibit a limited depth-of-field (DoF) due to the use of high numerical aperture (NA) gradient refractive index (GRIN) objective lenses.
Aim: We present the extended depth-of-field (EDoF) miniscope, which integrates an optimized thin and lightweight binary diffractive optical element (DOE) onto the GRIN lens of a miniscope to extend the DoF by 2.8× between twin foci in fixed scattering samples.
Approach: We use a genetic algorithm that considers the GRIN lens's aberration and intensity loss from scattering in a Fourier-optics forward model to optimize a DOE, and we manufacture the DOE through single-step photolithography. We integrate the DOE into EDoF-Miniscope with a lateral accuracy of 70 μm to produce high-contrast signals without compromising the speed, spatial resolution, size, or weight.
Results: We characterize the performance of EDoF-Miniscope across 5- and 10-μm fluorescent beads embedded in scattering phantoms and demonstrate that EDoF-Miniscope facilitates deeper interrogation of neuronal populations in a 100-μm-thick mouse brain sample and of vessels in a whole mouse brain sample.
Conclusions: Built from off-the-shelf components and augmented by a customizable DOE, we expect that this low-cost EDoF-Miniscope may find utility in a wide range of neural recording applications.
We demonstrate an extended-depth-of-field miniscope (EDoF-Miniscope) that uses an optimized binary diffractive optical element (DOE), integrated at the pupil plane, to achieve a 2.8× axial elongation between twin foci. We optimize the DOE with a genetic algorithm built on a Fourier-optics forward model that accounts for the native aberrations of the primary gradient refractive index (GRIN) lens, the optical properties of the submersion media, the geometric effects of the target fluorescent sources, and the axial intensity loss from tissue scattering, yielding a robust EDoF. We demonstrate that our platform achieves high-contrast signals, recoverable with a simple filter, across 5-μm and 10-μm beads embedded in scattering phantoms and in fixed mouse brain samples.
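The genetic-algorithm search over binary DOE patterns can be illustrated compactly. The sketch below is a simplified, hypothetical version: it scores a binary 0/π pupil mask by the on-axis intensity it maintains across a set of defocus planes under a scalar Fourier-optics model, then evolves a population by crossover and mutation. The grid size, wavelength, NA, defocus range, population size, and mutation rate are illustrative placeholders, not values from the paper, and the model omits the GRIN-aberration and scattering-loss terms the authors include.

```python
import numpy as np

# Simplified scalar Fourier-optics forward model: a binary 0/pi phase mask
# modulates a circular pupil; we evaluate the on-axis intensity at several
# defocus planes. All parameters here are illustrative placeholders.
N = 64                                    # pupil grid size
wavelength = 0.53e-6                      # emission wavelength [m]
NA = 0.45                                 # numerical aperture (assumed)
defocus = np.linspace(-60e-6, 60e-6, 7)   # axial planes to hold in focus

x = np.linspace(-1, 1, N)
XX, YY = np.meshgrid(x, x)
R2 = XX**2 + YY**2
pupil = (R2 <= 1.0).astype(float)         # circular aperture

def axial_response(mask_bits):
    """On-axis intensity at each defocus plane for a binary 0/pi mask."""
    phase = np.pi * mask_bits.reshape(N, N)
    intensities = []
    for dz in defocus:
        # Quadratic defocus phase in normalized pupil coordinates.
        defocus_phase = (np.pi * dz * NA**2 / wavelength) * R2
        field = pupil * np.exp(1j * (phase + defocus_phase))
        psf = np.abs(np.fft.fftshift(np.fft.fft2(field)))**2
        intensities.append(psf[N // 2, N // 2])
    return np.array(intensities)

def fitness(mask_bits):
    """Reward masks whose weakest plane is still bright (uniform EDoF)."""
    return axial_response(mask_bits).min()

# Plain genetic algorithm: selection, one-point crossover, bit-flip mutation.
rng = np.random.default_rng(0)
pop = rng.integers(0, 2, size=(40, N * N))    # 40 random binary masks

for gen in range(100):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[::-1][:10]]   # keep the 10 best
    children = []
    while len(children) < len(pop) - len(parents):
        a, b = parents[rng.integers(0, 10, 2)]
        cut = rng.integers(1, N * N)
        child = np.concatenate([a[:cut], b[cut:]])  # one-point crossover
        flip = rng.random(N * N) < 0.01             # 1% mutation rate
        children.append(np.where(flip, 1 - child, child))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
```

In the actual pipeline, the fitness would additionally fold in the measured GRIN aberration and a depth-dependent scattering attenuation; the selection/crossover/mutation loop itself is unchanged by those terms.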
High-speed, low-light two-photon voltage imaging is an emerging tool for simultaneously monitoring neuronal activity across large numbers of neurons. However, shot noise dominates pixel-wise measurements, and neuronal signals are difficult to identify in single raw frames. We developed DeepVID, a self-supervised deep-learning framework for voltage imaging denoising that requires no high-SNR measurements. DeepVID infers the underlying fluorescence signal from the independent temporal and spatial statistics of the measurement that are attributable to shot noise. DeepVID achieved a 15-fold improvement in SNR between denoised and raw image data.
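The self-supervised principle can be sketched in a few lines: because shot noise is independent from frame to frame while the fluorescence signal is temporally correlated, a network trained to predict one noisy frame from its temporal neighbors converges toward the underlying signal rather than the noise. The toy PyTorch example below illustrates that idea only; the architecture (TinyDenoiser) and all hyperparameters are hypothetical and are not the DeepVID network.

```python
import torch
import torch.nn as nn

# Toy self-supervised denoiser: predict the center frame of a noisy movie
# from its temporal neighbors. Shot noise is independent across frames, so
# the network cannot reproduce it and tends toward the clean signal.
# Architecture and hyperparameters are illustrative, not from the paper.
class TinyDenoiser(nn.Module):
    def __init__(self, n_neighbors=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_neighbors, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, neighbors):           # (B, n_neighbors, H, W)
        return self.net(neighbors)          # (B, 1, H, W)

def train_step(model, optimizer, movie, t, n=2):
    """One step: the n frames on each side of frame t are the input,
    the (noisy) frame t itself is the regression target."""
    idx = [t + d for d in range(-n, n + 1) if d != 0]
    neighbors = movie[:, idx]               # exclude the target frame
    target = movie[:, t:t + 1]
    pred = model(neighbors)
    loss = nn.functional.mse_loss(pred, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage with a synthetic shot-noise-limited movie of shape (B, T, H, W):
model = TinyDenoiser(n_neighbors=4)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
movie = torch.poisson(torch.rand(1, 11, 64, 64) * 20) / 20.0
for t in range(2, 9):
    train_step(model, optimizer, movie, t)
denoised = model(movie[:, [3, 4, 6, 7]])    # neighbors of frame t=5
```

DeepVID itself combines this temporal cue with spatial (blind-spot) statistics within the frame; the sketch keeps only the temporal half to stay minimal.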
We present a Computational Miniature Mesoscope that enables 3D fluorescence imaging across an 8-mm field-of-view and 2.5-mm depth-of-field in a single shot, achieving 7-micrometer lateral resolution and better than 200-micrometer axial resolution. The mesoscope has a compact design that integrates a microlens array for imaging and an LED array for excitation on a single platform. Its expanded imaging capability is enabled by computational imaging. We experimentally validate the mesoscopic 3D imaging capability on volumetrically distributed fluorescent beads and fibers. We further quantify the effects of bulk scattering and background fluorescence in phantom experiments.
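Single-shot 3D recovery of this kind rests on a computational reconstruction step. As a generic illustration only, the sketch below runs Richardson-Lucy deconvolution against a per-depth PSF stack, assuming shift-invariance within each depth plane; it is a standard textbook method, not the mesoscope's actual reconstruction pipeline, and psf_stack, n_iter, and the forward model are assumptions.

```python
import numpy as np

def richardson_lucy_3d(measurement, psf_stack, n_iter=30):
    """
    Recover a 3D volume from a single 2D measurement, modeling the image
    as the sum over depth of each plane convolved with its depth PSF.
    Generic sketch under a per-plane shift-invariance assumption; not the
    CM2 reconstruction pipeline.

    measurement : (H, W) single-shot image
    psf_stack   : (D, H, W) PSF for each depth plane
    returns     : (D, H, W) nonnegative volume estimate
    """
    D, H, W = psf_stack.shape
    # Precompute the OTF of each depth plane (PSFs centered at the origin).
    otf = np.fft.fft2(np.fft.ifftshift(psf_stack, axes=(1, 2)))
    volume = np.ones((D, H, W))

    def forward(v):                        # sum of per-plane convolutions
        return np.real(np.fft.ifft2(np.fft.fft2(v) * otf)).sum(axis=0)

    def adjoint(img):                      # correlate with each depth PSF
        return np.real(np.fft.ifft2(np.fft.fft2(img)[None] * np.conj(otf)))

    norm = adjoint(np.ones((H, W))) + 1e-12   # RL normalization term
    for _ in range(n_iter):
        est = forward(volume) + 1e-12
        ratio = measurement / est
        volume *= adjoint(ratio) / norm       # multiplicative RL update
        volume = np.clip(volume, 0, None)     # enforce nonnegativity
    return volume
```

The multiplicative update preserves nonnegativity by construction, which suits fluorescence intensities; a real pipeline over an 8-mm field would additionally handle the field-varying PSF of the microlens array.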