KEYWORDS: Denoising, Signal to noise ratio, Video, Spatial resolution, Neurophotonics, Education and training, Spatial learning, Performance modeling, Data modeling, Neuroimaging
Significance: Voltage imaging is a powerful tool for studying the dynamics of neuronal activities in the brain. However, voltage imaging data are fundamentally corrupted by severe Poisson noise in the low-photon regime, which hinders the accurate extraction of neuronal activities. Self-supervised deep learning denoising methods have shown great potential in addressing the challenges in low-photon voltage imaging without the need for ground truth, but they usually suffer from a trade-off between spatial and temporal performance.
Aim: We present DeepVID v2, a self-supervised denoising framework with decoupled spatial and temporal enhancement capability that significantly augments low-photon voltage imaging.
Approach: DeepVID v2 is built on our original DeepVID framework, which performs frame-based denoising by utilizing a sequence of frames around the central frame targeted for denoising, leveraging temporal information and ensuring consistency. Like DeepVID, the network further integrates multiple blind pixels in the central frame to enrich the learning of local spatial information. In addition, DeepVID v2 introduces a new spatial prior extraction branch that captures fine structural details to learn high-spatial-resolution information. Two variants of DeepVID v2 are introduced to meet specific denoising needs: an online version tailored for real-time inference with a limited number of frames and an offline version designed to leverage the full dataset, achieving optimal temporal and spatial performance.
Results: We demonstrate that DeepVID v2 is able to overcome the trade-off between spatial and temporal performance and achieve superior denoising capability in resolving both high-resolution spatial structures and rapid temporal neuronal activities. We further show that DeepVID v2 can generalize to different imaging conditions, including time-series measurements with various signal-to-noise ratios and extreme low-photon conditions.
Conclusions: Our results underscore DeepVID v2 as a promising tool for enhancing voltage imaging. This framework has the potential to generalize to other low-photon imaging modalities and to greatly facilitate the study of neuronal activities in the brain.
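The blind-pixel, temporal-window training input described in the Approach section can be sketched as follows. This is a minimal illustration of the general idea only: the window size, blind-pixel fraction, masking strategy, and function name are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def make_training_input(frames, center, window=7, blind_frac=0.01, rng=None):
    """Build one self-supervised training pair in the DeepVID style:
    a temporal window of frames around the central frame, with a random
    subset of central-frame pixels masked ("blind") so the network must
    infer them from spatiotemporal context rather than copy them.
    Window size and blind fraction are illustrative choices."""
    rng = np.random.default_rng(rng)
    half = window // 2
    # Temporal window: (window, H, W) stack centered on the target frame
    stack = frames[center - half : center + half + 1].copy()
    target = stack[half].copy()  # unmasked central frame is the target
    # Choose blind pixels and hide their true values in the network input
    mask = rng.random(target.shape) < blind_frac
    noisy_center = stack[half]
    noisy_center[mask] = np.roll(noisy_center, 1, axis=0)[mask]
    return stack, target, mask

# Toy usage on synthetic shot-noise-limited frames
frames = np.random.poisson(5.0, size=(32, 16, 16)).astype(np.float32)
stack, target, mask = make_training_input(frames, center=16)
```

A loss computed only at the blind pixels (comparing network output against `target[mask]`) prevents the network from learning the identity mapping, which is the core idea behind blind-spot self-supervised denoising.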
We present DeepVID v2, a resolution-improved self-supervised voltage imaging denoising approach that achieves higher spatial resolution while preserving fast neuronal dynamics. While existing methods enhance the signal-to-noise ratio (SNR), they compromise spatial resolution and produce blurry outputs. By disentangling spatial and temporal performance into two separate parameters, DeepVID v2 overcomes the trade-off faced by its predecessor. This advancement enables more effective analysis of high-speed, large-population voltage imaging data.
High-speed, low-light two-photon voltage imaging is an emerging tool for simultaneously monitoring neuronal activity from a large number of neurons. However, shot noise dominates the pixel-wise measurements, and the neuronal signals are difficult to identify in single-frame raw measurements. We developed DeepVID, a self-supervised deep learning framework for voltage imaging denoising that requires no high-SNR measurements. DeepVID infers the underlying fluorescence signal based on the independent temporal and spatial statistics of the measurement that are attributable to shot noise. DeepVID achieved a 15-fold improvement in SNR when comparing denoised and raw image data.
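As a toy illustration of the shot-noise regime described above (not the paper's evaluation method), the SNR of a Poisson-limited measurement scales with the square root of the photon count, so averaging N frames improves SNR by roughly sqrt(N); a 15-fold SNR gain therefore corresponds to pooling information from on the order of 225 measurements. The photon rate and frame counts below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
true_rate = 4.0            # mean photons per pixel per frame (low-photon regime)
n_frames, n_pix = 225, 100_000

# Simulate shot-noise-limited measurements: each pixel is a Poisson draw
frames = rng.poisson(true_rate, size=(n_frames, n_pix)).astype(np.float64)

def snr(x):
    """Mean divided by standard deviation across pixels."""
    return x.mean() / x.std()

snr_raw = snr(frames[0])            # single raw frame: SNR ~ sqrt(true_rate)
snr_avg = snr(frames.mean(axis=0))  # 225-frame average: SNR ~ 15x higher
improvement = snr_avg / snr_raw     # close to sqrt(225) = 15
```

A learned denoiser achieves a comparable gain without literal frame averaging, which would destroy the fast temporal dynamics that voltage imaging is meant to capture.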