The classification of brain tissue plays a key role in brain tumor diagnosis and treatment. Diagnosis currently relies on post-surgical histochemical staining, which is time-consuming and delays follow-up treatment. Identifying tumor borders during resection is essential for efficient therapy, minimizing the amount of healthy tissue removed while maximizing the amount of tumor tissue removed. The approaches currently in use are either expensive and time-consuming or limited to certain tumor types. We propose a real-time, in vivo, label-free classification approach that addresses both demands. Because tissue types differ in their autofluorescence properties, they can be distinguished without labeling. To exploit this, a multicore fiber (MCF) based endoscope is designed both to fit into the biopsy needles used during diagnosis and to serve as a handheld probe during tumor resection. Illumination and imaging through the same MCF keep the endoscope at a submillimeter diameter. Autofluorescence images are currently not used in pathology, so medical doctors cannot interpret them; we bridge this gap by using a neural network for diagnosis. A common problem with neural networks in medical applications is the availability of training data, so different techniques are investigated to maximize classification performance with a limited training dataset. Cascaded neural networks combined with digital twins improve the results while reducing the required training dataset size. Preliminary data indicate that our technology could lead to a paradigm shift in brain tumor diagnosis and therapy thanks to its accurate results, versatile design, and low cost.
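To make the cascaded idea concrete, the sketch below shows how a two-stage classifier on autofluorescence image patches might be wired in PyTorch. The layer sizes, the stage split (tissue vs. background, then tumor vs. healthy), the decision thresholds, and the random patch standing in for a digital-twin sample are all illustrative assumptions, not the architecture used in this work.

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """Compact convolutional classifier, sized for small training sets."""
    def __init__(self, n_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# Cascade (hypothetical split): stage 1 separates informative tissue from
# background; stage 2 discriminates tumor from healthy tissue only on
# patches that passed stage 1. Thresholds of 0.5 are assumptions.
stage1 = SmallCNN(n_classes=2)   # tissue vs. background
stage2 = SmallCNN(n_classes=2)   # tumor vs. healthy

def classify_patch(patch: torch.Tensor) -> str:
    """patch: (1, 1, H, W) autofluorescence image patch, values in [0, 1]."""
    with torch.no_grad():
        if stage1(patch).softmax(-1)[0, 1] < 0.5:
            return "background"
        is_tumor = stage2(patch).softmax(-1)[0, 1] >= 0.5
        return "tumor" if is_tumor else "healthy"

# Example: a random 64x64 patch stands in for a simulated digital-twin
# sample that would normally augment the scarce experimental data.
print(classify_patch(torch.rand(1, 1, 64, 64)))
```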
Significance: Deep learning enables label-free all-optical biopsies and automated tissue classification. Endoscopic systems provide intraoperative diagnostics of deep tissue and speed up treatment without harmful tissue removal. However, conventional multi-core fiber (MCF) endoscopes suffer from low resolution and artifacts, which hinder tumor diagnostics.
Aim: We introduce a method that enables unpixelated, high-resolution tumor imaging through a given MCF with a diameter of around 0.65 mm, an arbitrary core arrangement, and inhomogeneous transmissivity.
Approach: Image reconstruction is based on deep learning and the digital-twin concept of a single-reference-based simulation of the MCF's inhomogeneous optical properties, followed by transfer learning on a small experimental dataset of biological tissue. The reference provides physical information about the MCF during training.
Results: For the simulated data, hallucination caused by the MCF inhomogeneity was eliminated, and the averaged peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) increased from 11.2 dB and 0.20 to 23.4 dB and 0.74, respectively. With transfer learning, the metrics on independent test images acquired experimentally on glioblastoma tissue ex vivo reach up to 31.6 dB and 0.97 at a computing speed of 14 fps.
Conclusions: With the proposed approach, only a single reference image is required in the pre-training stage, and the laborious acquisition of training data is bypassed. Validation on glioblastoma cryosections with transfer learning on only 50 image pairs demonstrates the capability for high-resolution deep-tissue retrieval and high clinical feasibility.
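For reference, the PSNR and SSIM figures reported above can be computed with standard library routines. The snippet below is a minimal example using scikit-image; the noisy copy of a random image is only a stand-in for a real reconstruction/ground-truth pair.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def reconstruction_metrics(ground_truth: np.ndarray, reconstruction: np.ndarray):
    """Return (PSNR in dB, SSIM) for a pair of grayscale images in [0, 1]."""
    psnr = peak_signal_noise_ratio(ground_truth, reconstruction, data_range=1.0)
    ssim = structural_similarity(ground_truth, reconstruction, data_range=1.0)
    return psnr, ssim

# Toy example: a noisy copy stands in for a network reconstruction of a
# pixelated MCF image (real evaluation would use paired test images).
rng = np.random.default_rng(0)
truth = rng.random((256, 256))
recon = np.clip(truth + 0.05 * rng.standard_normal(truth.shape), 0.0, 1.0)
print("PSNR: %.1f dB, SSIM: %.2f" % reconstruction_metrics(truth, recon))
```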
An end-to-end tumor diagnosis framework comprising resolution enhancement and tumor classification is proposed. The U-Net + EDSR network significantly improves the PSNR and enhances the resolution beyond the physical limitation, and the subsequent tumor discrimination benefits from this enhancement. Using multiple images as network input and advanced models such as generative adversarial networks are expected to bring further improvements to the imaging. Our proposed method realizes, for the first time, intraoperative lensless coherent fiber bundle (CFB) imaging with high resolution in the near field. The technique builds a bridge to techniques such as optical biopsies, multi-modal imaging, virtual staining, and computer-assisted disease diagnostics for neuron signal monitoring as well as neurosurgery.
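The sketch below illustrates how such a two-stage pipeline could be composed: an EDSR-style enhancement network followed by a small classification head. The layer sizes, the 2x upsampling factor, and the classifier head are assumptions for illustration and do not reproduce the U-Net + EDSR model described here.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """EDSR-style residual block (no batch norm), a common super-resolution building block."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)

class EnhancementNet(nn.Module):
    """Stand-in for the resolution-enhancement stage; a shallow EDSR-like
    network with 2x upsampling, not the U-Net + EDSR model of the paper."""
    def __init__(self, channels: int = 64, n_blocks: int = 4):
        super().__init__()
        self.head = nn.Conv2d(1, channels, 3, padding=1)
        self.blocks = nn.Sequential(*[ResidualBlock(channels) for _ in range(n_blocks)])
        self.upsample = nn.Sequential(
            nn.Conv2d(channels, channels * 4, 3, padding=1),
            nn.PixelShuffle(2),                  # 2x spatial upsampling
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.upsample(self.blocks(self.head(x)))

# Minimal downstream discriminator on the enhanced image (assumed head).
classifier = nn.Sequential(
    nn.AdaptiveAvgPool2d(8), nn.Flatten(), nn.Linear(64, 2)  # tumor vs. healthy logits
)

# End-to-end inference: raw fiber-bundle image -> enhanced image -> class logits.
enhancer = EnhancementNet()
raw = torch.rand(1, 1, 128, 128)                 # placeholder for a pixelated CFB image
with torch.no_grad():
    logits = classifier(enhancer(raw))
print(logits.softmax(-1))
```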