Up to 30% of breast-conserving surgery patients require secondary surgery to remove cancerous tissue missed in the initial intervention. We hypothesize that tracked tissue sensing can improve the success rate of breast-conserving surgery by allowing the surgeon to intraoperatively scan the tumor bed for leftover cancerous tissue. In this study, we characterize the performance of our tracked optical scanning testbed using an experimental pipeline, assessing its Dice similarity coefficient, accuracy, and latency.
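The Dice similarity coefficient mentioned above measures overlap between a scanned region and a ground-truth region. A minimal sketch for binary masks (the function name and NumPy formulation are ours, not taken from the study):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks: 2|A∩B| / (|A|+|B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    # Convention: two empty masks are treated as a perfect match
    return 1.0 if total == 0 else 2.0 * intersection / total
```

A Dice value of 1 indicates perfect overlap and 0 indicates disjoint regions, which makes it a natural summary metric for scan coverage.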
Up to 40% of Breast Conserving Surgery (BCS) patients must undergo repeat surgery because cancer is left behind in the resection cavity. The mobility of the breast resection cavity makes it difficult to localize residual cancer and, therefore, cavity shaving is a common technique for cancer removal. Cavity shaving involves removing an additional layer of tissue from the entire resection cavity, often resulting in unnecessary healthy tissue loss. In this study, we demonstrated a navigation system and open-source software module that facilitates visualization of the breast resection cavity for targeted localization of residual cancer.
PURPOSE: Over 30% of breast conserving surgery patients must undergo repeat surgery to address incomplete tumor resection. We hypothesize that the addition of a robotic cavity scanning system can improve the success rates of these procedures by performing additional, intraoperative imaging to detect left-over cancer cells. In this study, we assess the feasibility of a combined optical and acoustic imaging approach for this cavity scanning system. METHODS: Dual-layer tissue phantoms are imaged with both broadband transmission spectroscopy and an endocavity ultrasound probe. The absorbance and transmittance of the incident light from the broadband source are used to characterize each tissue sample optically. Additionally, a temporally enhanced ultrasound approach is used to distinguish the heterogeneity of the tissue sample by classifying individual pixels in the ultrasound image with a support vector machine. The goal of this combined approach is to use optical characterization to classify the tissue surface, and acoustic characterization to classify the sample heterogeneity. RESULTS: Both optical and acoustic characterization demonstrated promising preliminary results. The class of each tissue sample is distinctly separable based on the transmittance and absorption of the broadband light. Additionally, an SVM trained on the temporally enhanced ultrasound signals for each tissue type showed 82% linear separability of labelled temporally enhanced ultrasound sequences in our test set. CONCLUSIONS: By combining broadband and ultrasound imaging, we demonstrate a potential non-destructive imaging approach for this robotic cavity scanning system. With this approach, our system can detect both surface level tissue characteristics and depth information. Applying this to breast conserving surgery can help inform the surgeon about the tissue composition of the resection cavity after initial tumor resection.
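The per-pixel SVM step above treats each pixel's temporal intensity sequence as a feature vector. As an illustration only, a minimal linear SVM trained with Pegasos-style subgradient descent (the study's kernel, features, and training procedure are not specified here; all names are ours):

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Pegasos-style subgradient descent for a linear SVM.
    X: (n, d) feature matrix, e.g. per-pixel temporal intensity sequences.
    y: labels in {-1, +1}."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)          # decaying step size
            margin = y[i] * (X[i] @ w + b)
            w *= (1.0 - eta * lam)         # shrink weights (regularization)
            if margin < 1:                 # hinge-loss subgradient step
                w += eta * y[i] * X[i]
                b += eta * y[i]
    return w, b

def predict(X, w, b):
    """Classify samples by the sign of the decision function."""
    return np.sign(X @ w + b)
```

In practice each labelled ultrasound sequence would be flattened into a row of `X`, and the trained decision function applied pixel-by-pixel to map sample heterogeneity.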
PURPOSE: Basal Cell Carcinoma (BCC) is the most common cancer in the world. Surgery is the standard treatment and margin assessment is used to evaluate the outcome. The presence of cancerous cells at the edge of resected tissue, i.e., a positive margin, can negatively impact patient outcomes and increase the probability of cancer recurrence. Novel mass spectrometry technologies paired with machine learning can provide surgeons with real-time feedback about margins to eliminate the need for repeat surgery. To our knowledge, this is the first study to report the performance of cancer detection using Graph Convolutional Networks (GCN) on mass spectrometry data from resected BCC samples. METHODS: The dataset used in this study is a subset of an ongoing clinical dataset acquired by our group and annotated with the help of a trained pathologist. The dataset contains a total of 190 spectra, including 127 normal and 63 BCC samples. We propose single-layer and multi-layer conversion methods to represent each mass spectrum as a structured graph. The graph classifier is developed based on the deep GCN structure to distinguish between cancer and normal spectra. The results are compared with the state of the art in mass spectra analysis. RESULTS: The classification performance of GCN with multi-layer representation without any data augmentation is comparable to previous studies that have used augmentation. CONCLUSION: The results indicate the capability of the proposed graph-based analysis of mass spectrometry data for tissue characterization or real-time margin assessment during cancer surgery.
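The building blocks of such a pipeline can be sketched in NumPy. Note that the study's actual spectrum-to-graph conversions are not specified here; the chain-graph construction below (adjacent m/z bins connected) is a stand-in assumption, and the layer follows the standard GCN propagation rule ReLU(Â H W) with a symmetrically normalized adjacency:

```python
import numpy as np

def spectrum_to_chain_graph(spectrum):
    """Toy single-layer conversion: each m/z bin is a node; adjacent bins are connected."""
    n = len(spectrum)
    A = np.zeros((n, n))
    idx = np.arange(n - 1)
    A[idx, idx + 1] = 1.0
    A[idx + 1, idx] = 1.0
    H = np.asarray(spectrum, dtype=float).reshape(n, 1)  # node features: intensities
    return A, H

def normalized_adjacency(A):
    """Symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def gcn_layer(A_norm, H, W):
    """One GCN propagation step: ReLU(Â H W)."""
    return np.maximum(0.0, A_norm @ H @ W)
```

Stacking several such layers and pooling the node features yields a graph-level embedding that a final classifier maps to cancer vs. normal.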
PURPOSE: Raman spectroscopy is an optical imaging technique used to characterize tissue via molecular analysis. The use of Raman spectroscopy for real-time intraoperative tissue classification requires fast analysis with minimal human intervention. In order to have accurate predictions and classifications, a large and reliable database of tissue classifications with spectra results is required. We have developed a system that generates an efficient scanning path for robotic scanning of tissues using Raman spectroscopy. METHODS: A camera mounted to a robotic controller is used to take an image of a tissue slide. The corners of the tissue slide within the sample image are identified, and the size of the slide is calculated. The image is cropped to fit the size of the slide and then processed to identify the tissue contour. A grid fitted to the size of the tissue is calculated and a grid scanning pattern is generated. A masked image of the tissue contour is used to create a scanning pattern containing only the tissue. The tissue scanning pattern points are transformed to the robot controller coordinate system and used for robotic tissue scanning. The pattern is validated using spectroscopic scans of the tissue sample. The run time of the tissue scan pattern is compared to that of a region-of-interest scanning pattern encapsulating the tissue using the robotic controller. RESULTS: The average scanning time for the tissue scanning pattern was reduced by 4 minutes and 58 seconds compared to region-of-interest scanning. CONCLUSION: This method reduced the number of points used for automated robotic scanning, and can be used to reduce scanning time and unusable data points to improve data collection efficiency.
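The core idea of the scanning-pattern step can be sketched as follows: lay a grid over the tissue's bounding box and keep only grid points that fall on the tissue mask. This is a simplified, image-space version (the study's contour extraction and robot-coordinate transform are omitted, and the function name is ours):

```python
import numpy as np

def tissue_scan_points(mask, step):
    """Generate grid scan points restricted to a binary tissue mask.
    mask: 2D bool array (True = tissue); step: grid spacing in pixels."""
    ys, xs = np.nonzero(mask)
    y0, y1 = ys.min(), ys.max()            # bounding box of the tissue
    x0, x1 = xs.min(), xs.max()
    points = []
    for y in range(y0, y1 + 1, step):
        for x in range(x0, x1 + 1, step):
            if mask[y, x]:                 # keep only on-tissue grid points
                points.append((y, x))
    return points
```

Discarding off-tissue grid points is exactly what shortens the scan: every dropped point is one fewer spectroscopic acquisition and one fewer unusable spectrum.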
PURPOSE: The iKnife is a new surgical tool designed to aid in tumor resection procedures by providing enriched chemical feedback about the tumor resection cavity from electrosurgical vapors. We build and compare machine learning classifiers that are capable of distinguishing primary cancer from surrounding tissue at different stages of tumor progression. In developing our classification framework, we implement feature reduction and recognition tools that will assist in the translation of xenograft studies to clinical application and compare these tools to standard linear methods that have been previously demonstrated. METHODS: Two cohorts (n=6 each) of 12-week-old female immunocompromised (Rag2−/−;Il2rg−/−) mice were injected with the same human breast adenocarcinoma (MDA-MB-231) cell line. At 4 and 6 weeks after cell injection, mice in each cohort were respectively euthanized, followed by iKnife burns performed on tumors and tissues prior to sample collection for future studies. A feature reduction technique that uses a neural network is compared to traditional linear analysis. For each method, we fit a classifier to distinguish primary cancer from surrounding tissue. RESULTS: Both classifiers can distinguish primary cancer from metastasis and surrounding tissue. The classifier that uses a neural network achieves an accuracy of 96.8% and the classifier without the neural network achieves an accuracy of 96%. CONCLUSIONS: The performance of these classifiers indicates that this device has the potential to offer real-time, intraoperative classification of tissue. This technology may be used to assist in intraoperative margin detection and inform surgical decisions to offer a better standard of care for cancer patients.
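A common linear feature-reduction baseline for high-dimensional spectra is PCA; the abstract does not name the specific linear method used, so the sketch below is an assumption for illustration (the neural-network reduction is not reproduced here):

```python
import numpy as np

def pca_reduce(X, k):
    """Linear feature reduction via PCA: project centered data onto the
    top-k principal components.
    X: (n, d) matrix of spectra; returns (n, k) scores and (k, d) components."""
    Xc = X - X.mean(axis=0)
    # SVD of the centered data; rows of Vt are principal directions,
    # ordered by decreasing singular value (i.e. decreasing variance)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T, Vt[:k]
```

The reduced scores would then feed a downstream classifier, mirroring the "linear reduction + classifier" arm of the comparison.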
PURPOSE: Raman Spectroscopy is amongst several optical imaging techniques that can characterize tissue non-invasively. To use these technologies for intraoperative tissue classification, fast and efficient analysis of optical data is required with minimal operator intervention. Additionally, there is a need for a reliable database of optical signatures to account for variable conditions. We developed a software system with an inexpensive, flexible mechanical framework to facilitate automated scanning of tissue and validate spectroscopic scans with histologic ground truths. This system will be used, in the future, to train a machine learning algorithm to distinguish between different tissue types using Raman Spectroscopy. METHODS: A sample of chicken breast tissue is mounted to a microscope slide following a biopsy of fresh frozen tissue. Landmarks for registration and evaluation are marked on the specimen using a material that is recognizable in both spectroscopic and histologic analysis. The slides are optically analyzed using our software. The landmark locations are extracted from the spectroscopic scan of the specimen using our software. This information is then compared to the landmark locations extracted from images of the slide using ImageJ. RESULTS: Target registration error of our system in comparison to ImageJ was found to be within 1.1 mm in both x and y directions. CONCLUSION: We demonstrated a system that can perform accurate spectroscopic scans of fixed tissue samples. This system can be used to spectroscopically scan tissue and validate the results with histology images in the future.
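The per-axis registration error reported above compares corresponding landmark positions from the two sources. A minimal sketch of that comparison (function name and exact error definitions are our assumptions; the study reports per-axis error within 1.1 mm):

```python
import numpy as np

def per_axis_registration_error(measured, reference):
    """Mean absolute landmark error along each axis, plus mean Euclidean TRE.
    measured, reference: (n, 2) arrays of corresponding landmark positions (mm)."""
    diff = np.asarray(measured, dtype=float) - np.asarray(reference, dtype=float)
    per_axis = np.abs(diff).mean(axis=0)           # mean |error| in x and y
    tre = np.sqrt((diff ** 2).sum(axis=1)).mean()  # mean Euclidean distance
    return per_axis, tre
```

Here `measured` would hold landmark coordinates recovered from the spectroscopic scan and `reference` the corresponding coordinates measured in ImageJ.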