Virtual staining creates H&E-like images with minimal tissue processing. Typically, two channels are used, but single-channel staining is attractive for techniques such as reflectance confocal microscopy (RCM). Our study trains a deep learning model to generate H&E images from single-channel RCM using pixel-level registration. Porcine skin was stained with acridine orange, SR101, and aluminum chloride, and confocal microscopy images were acquired. Using pix2pixGAN, we trained the model on grayscale RCM images, producing virtually stained images that closely resembled the ground truth. We present example model outputs and evaluate model performance with image assessment metrics. This technique has potential for in vivo surgical applications, eliminating the need for image registration.
Significance: Raman spectroscopy (RS) provides an automated approach for assisting Mohs micrographic surgery for skin cancer diagnosis; however, the specificity of RS is limited by the high spectral similarity between tumors and normal tissue structures. Reflectance confocal microscopy (RCM) provides morphological and cytological details by which many features of the epidermis and hair follicles can be readily identified. Combining RS with deep-learning-aided RCM has the potential to improve the diagnostic accuracy of RS in an automated fashion, without requiring additional input from the clinician.
Aim: The aim of this study is to improve the specificity of RS for detecting basal cell carcinoma (BCC) using an artificial neural network trained on RCM images to identify false positive normal skin structures (hair follicles and epidermis).
Approach: Our approach was to build a two-step classification model. In the first step, a Raman biophysical model developed in prior work classified BCC tumors from normal tissue structures with high sensitivity. In the second step, 191 RCM images were collected from the same sites as the Raman data and served as inputs for two ResNet50 networks. The networks identified hair-follicle and epidermis images, respectively, with high specificity among all images corresponding to the positive predictions of the Raman biophysical model. The specificity of the BCC biophysical model was improved by moving the Raman spectra corresponding to these selected images from false positive to true negative.
Results: Deep learning models trained on RCM images removed 52% of the false positive predictions from the Raman biophysical model while maintaining a sensitivity of 100%. The specificity was improved from 84.2% using Raman spectra alone to 92.4% by integrating Raman spectra with RCM images.
Conclusions: Combining RS with deep-learning-aided RCM imaging is a promising tool for guiding tumor resection surgery.
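The two-step scheme described above can be illustrated with a minimal, hypothetical sketch in pure Python: a high-sensitivity Raman model flags sites as positive, and RCM networks then reassign sites recognized as normal structures (hair follicles, epidermis) to negative, raising specificity without sacrificing true positives. The function name and the toy labels below are illustrative assumptions, not the authors' implementation.

```python
def combine_raman_rcm(raman_pred, rcm_is_normal_structure, truth):
    """raman_pred, truth: lists of 0/1 labels (1 = BCC).
    rcm_is_normal_structure: 1 if an RCM network labels the site as a
    hair follicle or epidermis (a known source of Raman false positives).
    Returns (sensitivity, specificity) of the combined classifier."""
    # Step 2: demote Raman positives that RCM identifies as normal structures.
    final = [bool(p) and not n for p, n in zip(raman_pred, rcm_is_normal_structure)]
    tp = sum(1 for f, t in zip(final, truth) if f and t)
    tn = sum(1 for f, t in zip(final, truth) if not f and not t)
    fp = sum(1 for f, t in zip(final, truth) if f and not t)
    fn = sum(1 for f, t in zip(final, truth) if not f and t)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    return sensitivity, specificity

# Toy data: 2 true BCC sites, 4 normal sites; Raman flags both BCC sites
# plus 2 normal sites, and RCM recognizes one false positive as hair follicle.
truth = [1, 1, 0, 0, 0, 0]
raman = [1, 1, 1, 1, 0, 0]
rcm   = [0, 0, 1, 0, 0, 0]
sens, spec = combine_raman_rcm(raman, rcm, truth)  # sens = 1.0, spec = 0.75
```

Because RCM only ever moves positives to negatives, sensitivity can drop only if RCM mislabels a true tumor site; in the study's data it did not, so sensitivity stayed at 100% while specificity rose.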
Significance: Sub-diffuse optical properties may serve as useful cancer biomarkers, and wide-field heatmaps of these properties could aid physicians in identifying cancerous tissue. Sub-diffuse spatial frequency domain imaging (sd-SFDI) can reveal such wide-field maps, but the current time cost of experimentally validated methods for rendering these heatmaps precludes this technology from potential real-time applications.
Aim: Our study renders heatmaps of sub-diffuse optical properties from experimental sd-SFDI images in real time and reports these properties for cancerous and normal skin tissue subtypes.
Approach: A phase function sampling method was used to simulate sd-SFDI spectra over a wide range of optical properties. A machine learning model trained on these simulations and tested on tissue phantoms was used to render sub-diffuse optical property heatmaps from sd-SFDI images of cancerous and normal skin tissue.
Results: The model accurately rendered heatmaps from experimental sd-SFDI images in real time. In addition, heatmaps of a small number of tissue samples are presented to inform hypotheses on sub-diffuse optical property differences across skin tissue subtypes.
Conclusion: These results bring the overall process of sd-SFDI a fundamental step closer to real-time speeds and set a foundation for future real-time medical applications of sd-SFDI, such as image-guided surgery.
Analyzing spatial frequency domain imaging (SFDI) data of tissue in the sub-diffuse domain can reveal optical properties (μs′, γ) related to the tissue's microstructural composition and shows potential for use in image-guided cancer removal. However, the determination of sub-diffuse optical properties is currently too slow for real-time applications. Recent research has demonstrated real-time determination of these properties from experimental measurements using machine learning models, but the γ range of these models falls short of the full spectrum of γ values seen in biological tissue, limited by the range of the simulated datasets used to train them. The Gegenbauer kernel has previously been employed in SFDI simulations and has been shown to allow simulations across an expanded γ range; models trained on these simulations have performed well in simulation. We present a novel method that translates γ into analogous parameters of the Gegenbauer kernel and uses this kernel to simulate datasets over an expanded range of γ values. We train a machine learning model on these datasets and use it to render sub-diffuse optical property heat maps from experimental data of tissue-simulating phantoms and ex vivo skin surgical samples across a full range of γ values in real time. We compare this method against the current nonlinear-fit method and show a significant increase in speed with comparable accuracy. These findings enable real-time rendering of sub-diffuse SFDI for potential use within an image-guided surgery system.
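The link between the Gegenbauer kernel and γ can be sketched numerically. Assuming the Gegenbauer kernel (Reynolds–McCormick) phase function p(u) ∝ (1 + g² − 2gu)^−(α+1) with u = cos θ, the parameter γ = (1 − g₂)/(1 − g₁) follows from the first two Legendre moments g₁, g₂. The sketch below (not the authors' code) computes these moments by simple quadrature; the special case α = 0.5 recovers Henyey–Greenstein, where g₁ = g and g₂ = g², providing a sanity check.

```python
def gk_gamma(g, alpha, n=200_000):
    """Compute (g1, g2, gamma) for the Gegenbauer-kernel phase function
    p(u) proportional to (1 + g^2 - 2*g*u)^-(alpha + 1), u = cos(theta),
    by trapezoidal quadrature over u in [-1, 1]."""
    w0 = w1 = w2 = 0.0
    for i in range(n + 1):
        u = -1.0 + 2.0 * i / n
        wt = 0.5 if i in (0, n) else 1.0            # trapezoid end weights
        p = (1.0 + g * g - 2.0 * g * u) ** (-(alpha + 1.0))
        w0 += wt * p                                 # normalization integral
        w1 += wt * p * u                             # first Legendre moment
        w2 += wt * p * 0.5 * (3.0 * u * u - 1.0)     # second moment, P2(u)
    g1, g2 = w1 / w0, w2 / w0
    return g1, g2, (1.0 - g2) / (1.0 - g1)

# Sanity check: alpha = 0.5 is Henyey-Greenstein, so for g = 0.8 we expect
# g1 = 0.8, g2 = 0.64, and gamma = (1 - 0.64) / (1 - 0.8) = 1.8.
g1, g2, gamma = gk_gamma(0.8, 0.5)
```

Varying α away from 0.5 at fixed g changes g₂ relative to g₁, which is what lets the Gegenbauer kernel cover a wider γ range than Henyey–Greenstein alone.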
Adequate tumor margin delineation is crucial to maximizing positive patient outcomes in molecular-guided surgery. Raman spectroscopy is highly specific in detecting tumor margins based on the differences in molecular composition between tumor and normal tissue; however, one major technical hurdle to its adoption is its slow acquisition speed. Previously, we described a "superpixel" acquisition approach that can expedite acquisition by up to 10,000× compared to point-by-point scanning while covering the entire surface area. We detected human basal cell carcinoma in Mohs surgical resection margins from eight patients and demonstrated that superpixel acquisition had diagnostic performance consistent with point-by-point scanning. In this work, we further demonstrate examples of raster-scanned superpixel Raman classification images of positive and negative margins from three new patients. The performance of three superpixel sizes was evaluated: 25 × 25 μm², 50 × 50 μm², and 100 × 100 μm². A previously established biophysical inverse model was applied to extract the biochemical composition of each superpixel, and a prior classification model was employed to generate the tumor heatmap. The classification result was then compared with the histopathological image. Our results show that superpixel Raman imaging can overcome the speed limitation of traditional Raman imaging, allowing for rapid tumor margin assessment.
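The superpixel idea can be illustrated with a software analogy (the actual method acquires one spectrum per tile optically, not by post-hoc binning): averaging each block × block tile of point spectra into a single "superpixel" spectrum reduces the number of spectra to classify by the tile area, e.g. a 4× reduction for 2 × 2 tiles. The sketch below is a hypothetical illustration, not the authors' acquisition code.

```python
def bin_superpixels(spectra, block):
    """spectra: 2D grid (list of rows) of per-point spectra (lists of band
    intensities). Averages each block x block tile into one superpixel
    spectrum, mirroring the speedup of acquiring one spectrum per tile."""
    rows, cols = len(spectra), len(spectra[0])
    nbands = len(spectra[0][0])
    out = []
    for r0 in range(0, rows, block):
        out_row = []
        for c0 in range(0, cols, block):
            acc = [0.0] * nbands
            npts = 0
            for r in range(r0, min(r0 + block, rows)):
                for c in range(c0, min(c0 + block, cols)):
                    for k in range(nbands):
                        acc[k] += spectra[r][c][k]
                    npts += 1
            out_row.append([v / npts for v in acc])
        out.append(out_row)
    return out

# 4x4 grid of 2-band spectra -> 2x2 grid of superpixels (4x fewer spectra).
grid = [[[float(r), float(c)] for c in range(4)] for r in range(4)]
sp = bin_superpixels(grid, 2)
```

Each superpixel spectrum would then be fed to the biophysical inverse model and classifier exactly as a point spectrum would, which is why diagnostic performance can remain comparable while acquisition time drops with the tile area.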