This PDF file contains the front matter associated with SPIE Proceedings Volume 12368, including the Title Page, Copyright information, Table of Contents and Conference Committee lists.
Augmented reality (AR) image tracking may be used in AR-guided surgical applications for real-time guidance and quantitative feedback. Because AR-guided applications offer broader accessibility than the specialized systems used in traditional surgical image guidance, we evaluated the measurement errors of monocular AR image tracking against the current gold standards, infrared optical and electromagnetic (EM) tracking. A measurement stylus was designed and 3D printed, allowing for monocular AR image tracking using a Logitech C920 camera, infrared optical tracking with a Northern Digital Inc. (NDI) Vicra, and EM tracking with an NDI Aurora through corresponding sensor attachments. A measurement phantom was also designed and 3D printed, consisting of 3 measurement planes with 81 measurement points each, totaling 243 measurement points across a 16 cm x 16 cm x 18 cm measurement volume. Pivot calibration was performed using random sample consensus (RANSAC) sphere fitting to calculate the offset between each sensor attachment and the stylus tip for each tracking system. Measurements of the stylus tip were collected across the measurement phantom for each tracking system. Each system's fiducial registration error was quantified by rigidly registering the collected tip positions to the designed phantom points from CAD. Fiducial registration errors were 1.19 mm, 0.59 mm, and 0.51 mm for monocular AR, infrared optical, and EM tracking, respectively. Monocular AR image tracking presents a cost-effective and accessible solution for surgical guidance applications. Errors close to 1 mm may be suitable for scenarios such as surgical simulators in competency-based education and AR-based planning.
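The fiducial registration error above is obtained by rigidly registering measured tip positions to the CAD phantom points. A minimal sketch of that step using the standard SVD-based (Kabsch) closed-form solution; this illustrates the computation, not the authors' actual implementation:

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid (rotation + translation) alignment of two
    corresponding Nx3 point sets via SVD (Kabsch algorithm)."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                       # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the least-squares solution
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

def fiducial_registration_error(src, dst):
    """RMS distance between the registered source points and targets."""
    R, t = rigid_register(src, dst)
    residuals = (src @ R.T + t) - dst
    return np.sqrt(np.mean(np.sum(residuals**2, axis=1)))
```

With noise-free correspondences the error is numerically zero; tracker noise and calibration error are what produce the sub-millimeter to millimeter values reported.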
In-situ identification of glioma subtype can enable modifications of clinical and surgical strategies. In particular, astrocytomas benefit from more aggressive resection than oligodendrogliomas, which have a more favorable response to post-surgical chemotherapy. Preoperative MRI and intraoperative histology cannot accurately determine glioma subtype, so there is a need for real-time identification of adult-type diffuse glioma subtypes to aid the neurosurgeon's decision-making during resection surgery. Fluorescence lifetime imaging (FLIm), in which tissue autofluorescence can be used as an indicator to distinguish among brain tumor tissue types in real time, could aid this process. Here, we report the use of label-free FLIm in distinguishing IDH-mutant glioma subtypes (astrocytoma and oligodendroglioma). The FLIm system (excitation: 355 nm; emission bands: 390/40 nm, 470/28 nm, 542/50 nm) was used to scan brain tissue from the resection margins of glioma patients during tumor resection. Fluorescence lifetimes were extracted and analyzed by constrained least-squares deconvolution with the Laguerre expansion method. FLIm data were validated against histopathology of collected biopsies. Current results show that FLIm provides optical contrast between tumor and healthy white matter, and between IDH-mutant astrocytoma (N=7 patients) and oligodendroglioma (N=5 patients). Tumors showed shorter lifetime values (470-nm: 3.6 ± 0.6 ns; 542-nm: 3.3 ± 0.7 ns) than healthy white matter (470-nm: 4.6 ± 0.4 ns; 542-nm: 4.3 ± 0.5 ns, p<0.01). Oligodendroglioma had shorter lifetimes in the 470-nm (3.3 ± 0.1 ns) and 542-nm (2.8 ± 0.2 ns) channels, which are associated with NAD(P)H and FAD fluorescence respectively, when compared with IDH-mutant astrocytoma (470-nm: 4.1 ± 0.1 ns; 542-nm: 3.9 ± 0.2 ns, p<0.01). Together, these results demonstrate the feasibility of using FLIm as an intraoperative tool in glioma diagnosis.
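The contrast reported above rests on extracting a lifetime from each measured decay. The paper uses constrained least-squares deconvolution with a Laguerre basis; as a much simpler sketch that ignores the instrument response, a mono-exponential lifetime can be estimated with a log-linear fit:

```python
import numpy as np

def fit_lifetime(t, decay):
    """Log-linear least-squares estimate of the lifetime tau of a
    mono-exponential decay I(t) = A * exp(-t / tau).
    (A simplified stand-in for the Laguerre deconvolution used in the paper.)"""
    slope, _ = np.polyfit(t, np.log(decay), 1)
    return -1.0 / slope

# Illustrative decays echoing the reported trend: tumor decays faster
# (shorter lifetime) than healthy white matter. Values are examples only.
t = np.linspace(0.0, 20.0, 200)                 # ns
tumor_tau = fit_lifetime(t, np.exp(-t / 3.3))   # ~3.3 ns
wm_tau = fit_lifetime(t, np.exp(-t / 4.3))      # ~4.3 ns
```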
Laser speckle contrast imaging (LSCI) has emerged as a promising imaging modality that offers full-field, real-time, continuous and agent-free monitoring of cerebral blood flow during neurosurgery. Since LSCI does not require the injection of a contrast agent, it has the potential to complement fluorescence-based modalities by providing continuous and dynamic changes in blood flow during critical moments of neurosurgery. We performed a clinical study with LSCI to investigate the clinical utility of the technique intraoperatively. A commercially available Zeiss Pentero 900® microscope was equipped with a λ = 785 nm laser diode attached to a customized mount. The backscattered laser light was collected by the microscope, producing a laser speckle image on an external camera mounted on the microscope side-port. Custom software collected the laser speckle images, computed the speckle contrast images, and displayed them in real time on the operating room monitors throughout the surgery, with custom color maps and thresholding. The robust integration of the technology into the surgical workflow enabled the investigation of the need for dynamic vessel-flow characterization in 20 patients at the Inselspital in Bern, Switzerland. We assessed vessel flow at key time points in the surgery and provided real-time and continuous measurements to the surgeon.
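Spatial speckle contrast is conventionally defined as K = σ/μ of the raw speckle intensity over a small sliding window (higher flow blurs the speckle and lowers K). A minimal numpy sketch of that per-pixel computation; the window size is illustrative and the clinical software is custom and not described here:

```python
import numpy as np

def _box_mean(img, win):
    """Sliding-window mean via 2D cumulative sums (odd win, 'same' size)."""
    pad = win // 2
    p = np.pad(img, pad, mode="reflect")
    c = np.cumsum(np.cumsum(p, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))             # zero border for differencing
    s = c[win:, win:] - c[:-win, win:] - c[win:, :-win] + c[:-win, :-win]
    return s / win**2

def speckle_contrast(img, win=7):
    """Spatial speckle contrast K = sigma / mean in each local window."""
    m = _box_mean(img, win)
    m2 = _box_mean(img**2, win)
    var = np.maximum(m2 - m**2, 0.0)
    return np.sqrt(var) / np.maximum(m, 1e-12)
```

A uniform (fully blurred) region yields K = 0, while a fully developed static speckle pattern approaches K = 1.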
The ill-defined tumor borders of glioblastoma multiforme pose a major challenge for the surgeon during tumor resection, since the goal of the resection is complete removal while sparing as much healthy brain tissue as possible. In recent years, optical coherence tomography (OCT) was successfully used by several research groups to distinguish white matter from tumor-infiltrated white matter. Motivated by these results, a dataset was created consisting of sets of corresponding ex vivo OCT images acquired by two OCT systems with different properties (e.g., wavelength and resolution). Each image was annotated with semantic labels differentiating between white and gray matter and three different stages of tumor infiltration. The data from both systems not only allowed a comparison of each system's ability to identify the different tissue types present during tumor resection, but also enabled a multimodal tissue analysis evaluating corresponding OCT images from the two systems simultaneously. A convolutional neural network with a Dirichlet prior was trained, allowing the uncertainty of a prediction to be captured. The approach increased the sensitivity of identifying tumor infiltration from 58 % to 78 % for data with low prediction uncertainty, compared to a previous monomodal approach.
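In evidential deep learning, a network with a Dirichlet prior outputs non-negative per-class evidence, and predictive uncertainty shrinks as total evidence grows. A sketch of how class probabilities and uncertainty are typically derived from such outputs, following the common alpha = evidence + 1 parameterization (the paper's exact formulation may differ):

```python
import numpy as np

def dirichlet_prediction(evidence):
    """Class probabilities and predictive uncertainty from Dirichlet
    parameters alpha = evidence + 1 (evidential deep learning).
    Uncertainty u = K / S, where S is the total Dirichlet strength."""
    alpha = np.asarray(evidence, dtype=float) + 1.0
    S = alpha.sum()
    probs = alpha / S          # expected class probabilities
    u = alpha.size / S         # u -> 1 with no evidence, -> 0 with much
    return probs, u
```

Predictions could then be filtered by a threshold on u, matching the paper's reporting of sensitivity on low-uncertainty data.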
Positive margin status after breast-conserving surgery (BCS) is a predictor of higher rates of local recurrence. Intraoperative margin assessment aims to achieve negative surgical margin status at the first operation, thus reducing re-excision rates, which are usually associated with potential surgical complications, increased medical costs, and mental pressure on patients. Microscopy with ultraviolet surface excitation (MUSE) can rapidly image tissue surfaces with subcellular resolution and sharp contrast by exploiting the thin optical sectioning of deep ultraviolet light. We have previously imaged 66 fresh human breast specimens, topically stained with propidium iodide and eosin Y, using a customized MUSE system. To achieve objective and automated assessment of MUSE images, a machine learning model was developed for binary (tumor vs. normal) classification of the obtained MUSE images. Features extracted by texture analysis and by pre-trained convolutional neural networks (CNN) were investigated as sample descriptors. Sensitivity, specificity, and accuracy better than 90% were achieved for detecting tumorous specimens. These results suggest the potential of MUSE with machine learning for intraoperative margin assessment during BCS.
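Texture analysis of the kind mentioned above commonly starts from a gray-level co-occurrence matrix (GLCM). A minimal numpy sketch for a single horizontal pixel offset; the quantization level and the two descriptors shown are illustrative, since the paper does not specify its exact texture features:

```python
import numpy as np

def glcm_features(img, levels=8):
    """GLCM for the (0, 1) horizontal neighbor offset on an image with
    values in [0, 1], plus two common descriptors: contrast, homogeneity."""
    q = np.minimum((img * levels).astype(int), levels - 1)  # quantize
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)
    glcm /= glcm.sum()                                      # joint probability
    i, j = np.indices(glcm.shape)
    contrast = np.sum(glcm * (i - j) ** 2)
    homogeneity = np.sum(glcm / (1.0 + np.abs(i - j)))
    return contrast, homogeneity
```

Such descriptors (alongside CNN embeddings) would then feed the binary tumor-vs-normal classifier.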
We propose a handheld single photon avalanche diode (SPAD) micro-camera probe for wide-field in-vivo fluorescence lifetime imaging (FLIm) applications. The presented probe includes a novel 3D-stacked 1.4 mm × 1.4 mm SPAD array, an integrated excitation light source, and imaging optics. The spatial and temporal performance of the integrated system was characterised using a USAF test target and a range of fluorescence lifetime beads.
Fluorescence lifetime imaging (FLIM) is a valuable technique which can be used to provide label-free contrast between different tissue types and information about their molecular makeup and local environment. FLIM systems based on single photon avalanche diode (SPAD) arrays are increasingly being used in applications such as medical imaging due to their high sensitivity and excellent temporal resolution [1]. SPAD arrays are also commonly employed for time-of-flight (ToF) imaging techniques such as light detection and ranging (LiDAR) [2]. Here we demonstrate a system which combines both of these modalities in a single instrument, allowing us to acquire both depth and widefield FLIM images simultaneously using a single 32 x 32 pixel SPAD array operating in time-correlated single-photon counting (TCSPC) mode with 50 ps temporal resolution. Initial results show that we can correctly measure depths and distances of sample objects with < 1 cm resolution while maintaining excellent and consistent fluorescence contrast. Lifetime is consistent over a distance of 10 cm with a standard deviation of < 0.5 ns, showing that it is possible to decouple depth and lifetime data. We believe this work is the first demonstration of a widefield FLIM system capable of 3D imaging. The next step will be the addition of a miniaturized system [1], and future applications for this technology include fields such as surgical guidance, endoscopy and diagnostic imaging.
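The decoupling of depth and lifetime can be illustrated on a single pixel's TCSPC histogram: the arrival time of the histogram peak encodes round-trip distance, while the decay of the tail encodes lifetime. A simplified sketch assuming an ideal impulse-like excitation (a real system would also deconvolve the instrument response):

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def depth_and_lifetime(t, hist):
    """From one pixel's TCSPC histogram (t in seconds, measured from the
    excitation pulse): depth from the round-trip time of the histogram
    peak, lifetime from a log-linear fit of the decay tail."""
    peak = int(np.argmax(hist))
    depth = 0.5 * C * t[peak]            # round trip -> one-way distance
    keep = hist[peak:] > 0
    slope, _ = np.polyfit(t[peak:][keep], np.log(hist[peak:][keep]), 1)
    return depth, -1.0 / slope           # (m, s)
```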
During thyroid surgery, parathyroid glands may be accidentally extracted because their shape and color are similar to those of the surrounding tissues (lymph nodes, fat, and thyroid tissue). To avoid damaging or resecting these vulnerable glands, we aim to help surgeons better identify the parathyroids with real-time bounding boxes on a screen available in operating rooms. Parathyroids are autofluorescent when excited with near-infrared (NIR) light; therefore, videos recorded simultaneously in NIR and RGB color formats can be used to train a deep learning model for robust object detection and localization without the need for expert annotation. The use of NIR images facilitates the generation of the ground truth dataset. We collected videos from 16 patients during total thyroidectomy. The videos were first decomposed into a series of images sampled every 10 frames. An intensity threshold was then applied to the NIR images, creating images in which the parathyroid can be easily selected; from these, ground truth bounding boxes were generated. Our ground truth database contained over 600 images, of which 540 contained parathyroid glands and 66 did not. We ran Faster R-CNN twice: first to perform localization using only the images with parathyroids, and second to perform classification using the entire dataset. For the first method, we achieved an average intersection over union of 85%, and for the second, we obtained a precision of 98% and a recall of 100%. Given the limited dataset, these results are very encouraging.
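Two steps of the pipeline above are simple to state concretely: deriving a ground-truth box from a thresholded NIR image, and scoring a detection with intersection over union (IoU). A minimal sketch (the threshold value and box convention are illustrative):

```python
import numpy as np

def nir_bbox(nir, thresh):
    """Ground-truth bounding box from a thresholded NIR autofluorescence
    image: the tightest (x0, y0, x1, y1) box around bright pixels."""
    ys, xs = np.nonzero(nir > thresh)
    if ys.size == 0:
        return None                       # frame without a parathyroid
    return (xs.min(), ys.min(), xs.max(), ys.max())

def iou(a, b):
    """Intersection over union of two inclusive (x0, y0, x1, y1) boxes."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0 + 1) * max(0, iy1 - iy0 + 1)
    area = lambda r: (r[2] - r[0] + 1) * (r[3] - r[1] + 1)
    return inter / (area(a) + area(b) - inter)
```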
Osteoporosis is a disease that weakens bones, increasing the risk of bone fracture. The gold standard for diagnosing osteoporosis is measuring bone mineral density (BMD). Since BMD only partly determines the strength of the bone, more information on chemical composition and microstructure is needed. Here, we implemented a novel dual-wavelength inverse spatially offset Raman spectroscopy (SORS) system to characterize tissue chemical composition covering both the fingerprint and high-wavenumber regions. This system provides a greater probing depth while keeping the spectrometer settings constant. The results from a hydroxyapatite (HA) and water phantom demonstrate the potential of the Raman system to assess bone mineral and matrix quality in-vivo.
In this study, we aimed to develop a new optical biopsy technique for the aganglionosis of Hirschsprung disease (HSCR), and we evaluated a custom-designed Raman optical biopsy system combined with deep learning based on convolutional neural networks (CNNs). Surgical specimens of formalin-fixed tissue from HSCR patients were used in this study. As a result, we achieved more than 90% classification accuracy between the normal and lesional segments of mucosa. This study shows that CNNs are useful for discriminating Raman spectra of the human gastrointestinal wall.
The aim of the current study is to evaluate the classification accuracy and provide a corresponding biological interpretation of four classification methods applied to autofluorescence (AF) and diffuse reflectance (DR) spectra acquired in vivo on healthy human skin across different phototypes and civil and apparent age groups. Spectroscopic data were acquired on 91 patients using the SpectroLive device. This spatially and spectrally resolved device features four source-to-detector distances (D1-D4) and six excitation light sources: five peaks for AF and one broadband white light for DR. For all patients, spectra were acquired on two healthy skin sites, i.e., the hand palm and the inner wrist, chosen for their low sun exposure. Four classification methods were tested: Support Vector Machine, K-Nearest Neighbors, Linear Discriminant Analysis and Artificial Neural Network. All combinations of excitation wavelengths, distances and skin site acquisitions were tested to find the best classification results, following a training step on 67 % of the dataset and a validation step on 33 % of the dataset. Classification accuracies were compared using Principal Components Analysis and statistical features. For discriminating civil and biological skin age groups, the best classification results (70 % and 76 % respectively) were obtained when combining autofluorescence spectral features from three excitation wavelengths (385, 395 and 405 nm), all acquired at the shortest distance (400 µm) on the hand palm. The combination of AF, the inner wrist and the longest distance (1 mm) gave the best classification results (76 %) for phototype group discrimination.
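The evaluation protocol above (67 % training / 33 % validation, several classifiers) can be sketched with one of the four methods, K-Nearest Neighbors, implemented in plain numpy; the synthetic data, split seed and k are illustrative:

```python
import numpy as np

def split_67_33(X, y, seed=0):
    """Random 67 % training / 33 % validation split, as in the study."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(X))
    cut = int(0.67 * len(X))
    tr, va = order[:cut], order[cut:]
    return X[tr], y[tr], X[va], y[va]

def knn_predict(train_X, train_y, test_X, k=3):
    """Minimal k-nearest-neighbors classifier (Euclidean distance)
    predicting a class label for each test spectrum by majority vote."""
    d = np.linalg.norm(test_X[:, None, :] - train_X[None, :, :], axis=2)
    idx = np.argsort(d, axis=1)[:, :k]
    return np.array([np.bincount(v).argmax() for v in train_y[idx]])
```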
Label-free tissue identification is the new frontier of image-guided surgery. One of the most promising modalities is hyperspectral imaging (HSI). Until now, however, the use of HSI has been limited by the challenges of integration into the existing clinical workflow. Research to reduce the implementation effort and simplify the clinical approval procedure is ongoing, especially for the acquisition of feasibility datasets to evaluate HSI methods for specific clinical applications. Here, we successfully demonstrate how an HSI system can interface with a clinically approved surgical microscope, making use of the microscope's existing optics. We outline the HSI system adaptations and the data pre-processing methods, and perform a spectral and functional system-level validation and integration into the clinical workflow. Data were acquired using an imec snapscan VNIR 150 camera enabling hyperspectral measurement in 150 channels in the 470-900 nm range, assembled on a ZEISS OPMI Pentero 900 surgical microscope. The spectral range of the camera was adapted to match the intrinsic illumination of the microscope, resulting in 104 channels in the range of 470-787 nm. The system's spectral performance was validated using reflectance wavelength calibration standards. We integrated the HSI system into the clinical workflow of brain surgery, specifically for resections of low-grade gliomas (LGG). During the study, though out of the scope of this paper, the acquired dataset was used to train an AI algorithm to successfully detect LGG in unseen data. Furthermore, dominant spectral channels were identified, enabling the future development of a real-time surgical guidance system.
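A standard pre-processing step for hyperspectral cubes validated against reflectance standards is flat-field calibration against white-reference and dark-current frames. A minimal sketch of this common computation (R = (raw - dark) / (white - dark)); the paper does not detail its exact pre-processing chain:

```python
import numpy as np

def calibrate_reflectance(raw, white, dark):
    """Flat-field reflectance calibration of a hyperspectral cube
    (H x W x bands) against white-reference and dark-current frames:
        R = (raw - dark) / (white - dark), clipped to [0, 1]."""
    denom = np.clip(white - dark, 1e-9, None)   # avoid divide-by-zero
    return np.clip((raw - dark) / denom, 0.0, 1.0)
```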
Particles in biopharmaceutical products present high risks due to their detrimental impacts on product quality and safety. Quantification and identification of particles in drug products are important for understanding particle formation mechanisms, which could inform control strategies for particle formation during formulation development and the manufacturing process. However, existing analytical techniques such as MFI and HIAC lack the sensitivity and resolution to detect particles smaller than 2 μm. More importantly, they provide no chemical information for determining particle content. In this work, we develop a stimulated Raman scattering (SRS) microscopy technique that overcomes these challenges by monitoring the C-H Raman stretching modes of the proteinaceous particles and a common contaminant (silicone oil). By comparing the relative signal intensity and spectral features of each component, most particles can be classified as protein, protein-silicone oil, or silicone oil. Our method has the capacity to quantify aggregation in protein therapeutics with chemical and spatial information in a label-free manner, potentially allowing high-throughput screening or investigation of aggregation mechanisms.
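The three-way classification by relative signal intensity could be sketched as a ratio test between a protein C-H channel and a silicone-oil channel. The thresholds below are purely illustrative, not taken from the paper:

```python
def classify_particle(protein_ch, oil_ch, ratio_lo=0.5, ratio_hi=2.0):
    """Classify a particle from its mean SRS signal in a protein C-H
    stretching channel versus a silicone-oil channel.
    Thresholds are illustrative; the paper's decision rule is not shown."""
    r = protein_ch / max(oil_ch, 1e-12)
    if r > ratio_hi:
        return "protein"
    if r < ratio_lo:
        return "silicone oil"
    return "protein-silicone oil"
```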
Optical coherence elastography (OCE) offers the possibility of probing the mechanical behavior of a tissue. When combined with non-contact mechanical excitation, it mimics palpation without interobserver variability. One of the most frequently used techniques is phase-sensitive OCE. Depending on the system, depth-resolved changes in the sub-µm to nm range can be detected and visualized volumetrically. Such an approach is used in this work to investigate and detect transitions between healthy and tumorous brain tissue, as well as inhomogeneities in the tumor itself, to assist the operating surgeon during tumor resection in the future. We present time-resolved, phase-sensitive OCE measurements on various ex vivo brain tumor samples using an ultra-fast 3.2 MHz swept-source optical coherence tomography (SS-OCT) system with a frame rate of 2.45 kHz. 4 mm line scans are acquired, which, in combination with the high imaging speed, allow monitoring and investigation of the sample's behavior in response to the mechanical load. For this, an air-jet system applies a short 200 ms air pulse to the sample, whose non-contact nature facilitates future in vivo measurements. Since we can temporally resolve the response of the sample over the entire acquisition time, the mechanical properties are evaluated at different time points with depth resolution. This is done by unwrapping the phase data and performing a subsequent assessment. Systematic ex vivo brain tumor measurements were conducted and visualized as distribution maps. The study outcomes are supported by histological analyses and examined in detail.
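The phase-unwrapping step maps to displacement via the standard phase-sensitive OCT relation dz = λ·Δφ / (4π·n) for double-pass reflection in tissue. A minimal sketch; the wavelength and refractive index below are illustrative, not the system's specified values:

```python
import numpy as np

def phase_to_displacement(phase, wavelength=1.3e-6, n=1.38):
    """Axial displacement from phase-sensitive OCE phase samples:
        dz = lambda * dphi / (4 * pi * n)
    (double pass through tissue of refractive index n). The phase is
    unwrapped first to remove 2*pi jumps. Values of lambda and n are
    illustrative assumptions."""
    dphi = np.unwrap(np.asarray(phase, dtype=float))
    return wavelength * dphi / (4.0 * np.pi * n)
```

With these numbers, one radian of phase corresponds to roughly 75 nm of displacement, which is why sub-µm to nm changes are resolvable.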
Corneal topographic imaging using terahertz (THz) radiation is a novel diagnostic tool for several ophthalmological conditions. While techniques such as OCT are excellent at measuring corneal thickness, they do not provide hydration information. Because THz spectroscopy is highly sensitive to water absorption, however, it is an ideal candidate for topographical mapping of tissue hydration gradients. For imaging a corneal sphere with an 8 mm radius of curvature, previous studies have used a collocated system and raster-scanned a collimated THz beam along the aperture of an off-axis parabolic mirror (OAPM) to focus the beam with normal incidence onto a hemispherical target. Aside from alignment difficulty, an OAPM provides an asymmetric field of view (FOV), and scanning the collimated beam over the large aperture takes several minutes. Here, we propose a new double hyperbolic-elliptical-lens imaging system to achieve a larger and symmetrical FOV in a significantly shorter scan time. Using Nelder-Mead optimization and ray-tracing simulations to determine the aspheric surfaces of the lenses, a large FOV of 9 mm can be achieved on an 8 mm radius target with a high degree of phase-front matching, ensuring normal incidence on the entire curved surface of the target. Additionally, we demonstrate a telecentric beam-steering system using a heliostat configuration which greatly reduces the imaging time to a few seconds (~4 seconds). The aspheric lenses were tested in a THz time-domain spectroscopy system, and the imaging performance characteristics, such as FOV, axial resolution and spot size, were determined and compared to simulations using reflective spherical targets.
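The design loop named above pairs a merit function (computed from ray tracing) with the derivative-free Nelder-Mead optimizer. A toy sketch of that pattern, fitting a conic surface's curvature and conic constant to a target sag profile; the surface formula is the standard conic sag, while the target and starting values are illustrative, not the paper's actual design:

```python
import numpy as np
from scipy.optimize import minimize

def sag(r, c, k):
    """Sag z(r) of a conic surface with curvature c and conic constant k.
    The sqrt argument is floored to keep the toy merit finite everywhere."""
    s = np.maximum(1.0 - (1.0 + k) * (c * r) ** 2, 1e-9)
    return c * r**2 / (1.0 + np.sqrt(s))

# Toy merit: RMS deviation from a target profile (here a known conic,
# so the optimum is recoverable; a real design would trace rays and
# score angle-of-incidence deviation from normal across the cornea).
r = np.linspace(0.0, 4.0, 50)               # mm, semi-aperture samples
target = sag(r, c=1.0 / 12.0, k=-0.8)

merit = lambda p: np.sqrt(np.mean((sag(r, p[0], p[1]) - target) ** 2))
res = minimize(merit, x0=[0.1, 0.0], method="Nelder-Mead")
```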
Pancreatic surgery is a highly demanding and routinely applied procedure for the treatment of several pancreatic lesions. The outcome of patients with malignant entities crucially depends on the margin resection status of the tumor. In this study we describe the application of fiber-based attenuated total reflection infrared (ATR IR) spectroscopy for label-free discrimination of normal pancreatic, tumorous and pancreatitis tissue. The method was applied to unprocessed, freshly resected tissue samples from 40 patients, and a classification model for differentiating between the distinct tissue classes was established. The developed three-class classification model for tissue spectra allows the delineation of tumors from normal and pancreatitis tissues. The classification algorithm provides probability values for each sample to be assigned to the normal, tumor or pancreatitis class, and these probability values were mapped to a Red-Green-Blue (RGB) color plot. Subsequently, the method was translated into intraoperative application: fiber-optic ATR IR spectra were obtained from freshly resected pancreatic tissue directly in the operating room, and the spectroscopic findings could subsequently be confirmed by the histology gold standard. This study shows the possibility of applying fiber-based ATR IR spectroscopy in combination with a supervised classification model for rapid pancreatic tissue identification, with a high potential for transfer into intraoperative surgical diagnostics.
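The probability-to-RGB mapping can be stated in a few lines: each class probability drives one color channel, so mixed or uncertain samples appear as blended hues. A minimal sketch; the channel-to-class assignment here is illustrative, since the paper does not specify it:

```python
import numpy as np

def probs_to_rgb(p_normal, p_tumor, p_pancreatitis):
    """Map the three class probabilities to an RGB triplet.
    Channel assignment (tumor -> red, normal -> green,
    pancreatitis -> blue) is an illustrative choice."""
    p = np.array([p_tumor, p_normal, p_pancreatitis], dtype=float)
    p = p / p.sum()                       # renormalize defensively
    return tuple(np.round(255 * p).astype(int))
```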
Minimally invasive surgery uses small incisions and needle insertions so that operations can be conducted from outside the patient's body. An accurate, real-time map of the distribution of tissues is therefore critical to ensure patient safety. In this work, we explore all-optical sensing methods as simple, fast, and economic alternatives to commercial imaging modalities. Simulated tissues were prepared using gelatin for optical characterization experiments. Transmission and fluorescence spectra of homogeneous and heterogeneous gelatin samples with different concentrations are reported, with a focus on developing an optoelectronic technique for mapping tissue distribution. Finally, the technique is validated through a real-time needle-insertion experiment into a gelatin sample, tracking the spectral data of the tissue environments. This work could help track biological tissues, with spectral data helping surgeons visualize the needle-tissue environment in real time.
Surgery for chronic otitis media (COM) is a sensitive procedure whose success rate crucially depends on the surgeon and on the depth visibility of the surgical microscope. Additionally, video recordings have frequently been adopted for surgical guidance in operating theaters. While these approaches provide good views and assistance during surgery, it has proven more challenging to derive morphological and volumetric information on the subsurface layers affected by COM. To address this issue, an intra-surgical spectral-domain optical coherence tomography (OCT) microscope system with an extended working distance of 280 mm was developed, which provides augmented reality in the ocular eyepiece of a surgical microscope for more effective visualization of morphological structures during mastoidectomy or tympanoplasty. The cross-sectional OCT images help surgeons identify targeted regions more easily by displaying depth information in real time during surgery. Three patients with COM participated in this study, and the lesion conditions of the temporal bone were observed with pre-operative computed tomography (CT) before the surgery. Moreover, pure-tone audiogram examinations were performed to evaluate pre- and post-surgical conditions. The success of the surgical procedure was confirmed through the intraoperative OCT images, and the post-surgical audiograms, showing a reduction of the air-bone gap (ABG), further confirmed the improvement of hearing. Hence, the integration of intra-surgical OCT and audiogram inspection methods revealed the potential merits of the proposed methodology.
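The ABG outcome measure is a simple arithmetic quantity: the difference between mean air-conduction and mean bone-conduction thresholds over a set of test frequencies. A sketch using the conventional four-frequency pure-tone average (the paper does not state which frequencies it averaged):

```python
import numpy as np

def air_bone_gap(air_db, bone_db, freqs=(500, 1000, 2000, 4000)):
    """Air-bone gap (dB HL): mean air-conduction threshold minus mean
    bone-conduction threshold over the given frequencies (Hz).
    The four-frequency set is the conventional pure-tone average."""
    air = np.mean([air_db[f] for f in freqs])
    bone = np.mean([bone_db[f] for f in freqs])
    return air - bone

# Illustrative pre/post thresholds (dB HL per frequency), not study data:
air_pre = {500: 50, 1000: 55, 2000: 50, 4000: 45}
bone_pre = {500: 20, 1000: 25, 2000: 20, 4000: 15}
air_post = {500: 30, 1000: 30, 2000: 25, 4000: 25}
```

A smaller post-surgical ABG indicates that the conductive component of the hearing loss was reduced.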
Because of the narrow viewing angle of the endoscope, it is difficult to grasp the entire digestive tract in a single view. Blind spots may therefore occur, depending on the shape of the gastrointestinal tract, which may result in missed lesions. Virtual endoscopy using CT is the current standard method for obtaining an overall view of the digestive tract. However, since virtual endoscopy detects surface irregularities, it cannot detect lesions without irregularities, including some early cancers. In our previous study, we proposed a method for acquiring an entire 3D endoscopic image of the digestive tract using a stereo endoscope. However, a stereo endoscope increases the burden on the patient due to its larger tube diameter. In this paper, we therefore propose a method for acquiring an entire 3D endoscopic image of the digestive tract using a monocular endoscope. The method proceeds as follows: 1) move the endoscope to capture a series of images of the digestive tract; 2) estimate the position of the endoscope in each frame by image analysis; 3) acquire 3D points from the consecutive images; and 4) reconstruct an entire image of the digestive tract from the 3D point cloud. To confirm the effectiveness of this method, an experiment was conducted using a pattern placed inside a straight tube. The results suggest that this monocular approach may be able to determine the 3D location of lesions such as early-stage esophageal cancer.
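Step 3 of the pipeline above (recovering 3D points from features matched across frames, given the camera poses estimated in step 2) is commonly done by linear triangulation. A minimal sketch under that assumption (the abstract does not specify the triangulation method; the toy camera setup below is hypothetical):

```python
# Sketch: linear (DLT) triangulation of one feature seen in two frames.
import numpy as np

def triangulate(P1, P2, x1, x2):
    """P1, P2: 3x4 camera projection matrices (from the pose estimates);
    x1, x2: (u, v) normalized image coordinates of the matched feature.
    Solves A X = 0 for the homogeneous 3D point via SVD."""
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # homogeneous -> Euclidean 3D point

# Toy check: two cameras, the second translated 1 unit along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
x1 = X_true[:2] / X_true[2]
x2 = (X_true + np.array([-1.0, 0.0, 0.0]))[:2] / X_true[2]
print(triangulate(P1, P2, x1, x2))  # ≈ [0.5, 0.2, 4.0]
```

Repeating this over all matched features in consecutive frames yields the point cloud that step 4 fuses into an overall image of the tract.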
In mitral valve reconstruction, an annuloplasty band is sutured to the patient's valve annulus to restore valve function. The size of the band is determined intraoperatively by inserting a measuring device. However, inserting the measuring device is difficult in minimally invasive surgery with small incisions, which has become popular in recent years. We previously proposed a system that superimposes a virtual image of a valve ring on an endoscopic image using augmented reality (AR) technology to assist in surgery. However, that system was limited to superimposing a virtual ring on an endoscopic image taken from a single viewpoint. In this study, we therefore develop a more accurate simulation that can be checked from various directions based on 3D shape data. First, the 3D information of the marker area is estimated using a stereo method. Next, the spectral reflectance is estimated from color images obtained from a stereo endoscope, and the marker areas are extracted. Finally, the annuloplasty band, automatically selected from the 3D positions of the markers, is displayed in the 3D simulation. To evaluate this method, we performed spectral reflectance estimation and 3D simulation using a pig heart marked with a surgical marker and confirmed its effectiveness. This system enables size selection without inserting measuring instruments during surgery and is expected to shorten the operation time.
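The final step, selecting a band size from the 3D marker positions, could be sketched as estimating the annulus diameter from the markers and snapping to the nearest catalogue size. The size table and selection rule below are hypothetical illustrations; the abstract does not specify the actual criterion:

```python
# Sketch: choosing an annuloplasty band size from 3D marker positions.
# BAND_SIZES_MM and the centroid-distance rule are hypothetical.
import numpy as np

BAND_SIZES_MM = [26, 28, 30, 32, 34, 36]  # hypothetical catalogue (diameters)

def select_band(markers_3d):
    """markers_3d: (N, 3) marker positions on the annulus, in mm."""
    pts = np.asarray(markers_3d, dtype=float)
    centroid = pts.mean(axis=0)
    # Mean distance from centroid approximates the annulus radius.
    radius = np.linalg.norm(pts - centroid, axis=1).mean()
    diameter = 2.0 * radius
    return min(BAND_SIZES_MM, key=lambda s: abs(s - diameter))

# Hypothetical markers placed roughly on a 30 mm circle:
theta = np.linspace(0, 2 * np.pi, 8, endpoint=False)
markers = np.c_[15 * np.cos(theta), 15 * np.sin(theta), np.zeros(8)]
print(select_band(markers))  # → 30
```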
Robot-assisted minimally invasive surgery is an emerging technology in which the incision needle is operated by a robot manipulator to assist surgeons in performing interventional procedures such as biopsy and brachytherapy. Most robotic systems previously designed for needle interventions are stand-alone and operate in a coplanar fashion, requiring external mechanisms such as robot arms to align the needle with the target tissue plane. In this work, we design a portable and lightweight needle steering platform that connects as an end-effector to a 6 degree-of-freedom industrial robot arm such as a FANUC robot. Standard FANUC operating functions are used to control the motion of the end-effector and insert needles into the target tissues. Simulated gelatin tissues are used to perform needle insertions, and the performance of the end-effector is tested by changing the position and orientation of the tissue platforms. Finally, the proposed system will be tested for scalability by integration with other industrial robot arms such as those from Yaskawa.
Two-photon microscopy provides subcellular-resolution imaging deep into animal tissues, and is frequently used for the study of biological and disease processes. However, its applicability in vivo is complicated by physiological movement. We demonstrate treatment with dexmedetomidine (DEX), an already FDA-approved alpha-2 adrenergic receptor agonist, to reduce tissue oscillation frequency and amplitude in the livers of anesthetized mice. Fluorescence intensity and focal quality fluctuations were found to improve for 30 minutes after administration, indicating that dexmedetomidine may be used to improve imaging quality in two-photon intravital microscopy studies. These results will likely generalize to other imaging modalities, target tissues, and animal models.
Oral cancer is one of the most malignant cancers in the world. Early-stage diagnosis of oral cancer is a complex process due to the multifocal, unspecific development of non-malignant lesions into cancer and the impossibility of biopsying every lesion. The aim of this study is to develop a screening method for early-stage oral cancer diagnosis using surface-enhanced Raman spectroscopy (SERS) and to validate the performance of a multimodal system combining Raman spectroscopy (RS) and diffuse reflectance spectroscopy (DRS) for oral cancer diagnosis and accurate margin detection. The study will involve the identification and integration of spectral biomarkers of the carcinogenesis process from the different modalities. Each modality (SERS, RS, and DRS) is calibrated and standardized individually. Patients suffering from oral squamous cell carcinoma and other malignant diseases who are undergoing biopsy or histopathological examination are enrolled in this study. The ex vivo study involves SERS analysis of saliva specimens, and the in vivo analysis will involve measurements on various tissue types, including malignant tissue and the healthy contralateral site, to evaluate reproducibility and signal-to-noise ratio using fiber-optic probes for the Raman and DRS systems. Feature selection methods and machine learning tools will then be used to discriminate between healthy, benign, and cancerous lesions based on the spectral information and to identify important biomarkers. After data collection, the clinician will perform a standard biopsy procedure and histopathological analysis, which will serve as the gold standard for determining the sensitivity and specificity of the spectroscopic techniques.
Background: Cystoscopy is a common urological procedure for the evaluation and treatment of diseases of the lower urinary tract. Cystoscopic recognition of cancerous and benign lesions of the bladder can be challenging given the wide spectrum of pathology and the varying experience of urologists. Although computer-aided detection tools hold the potential to improve the urologist's performance, compiling a comprehensive dataset to train such tools remains a challenge. An educational cystoscopy atlas represents an alternative strategy to overcome this challenge. Materials and Methods: Mimicking human learning behavior, we utilized an educational cystoscopy atlas to develop deep learning models for lesion detection. We used a total of 312 images representing 7 major urologic findings from a cystoscopy atlas. Random image augmentation was applied to these images, including color-contrast manipulation, image flipping, and shearing. We utilized a neural architecture search to determine the optimal model design. The models were used to classify frames according to cancer presence status on 68 cystoscopy videos; for each case, we examined the area under the receiver operating characteristic curve (AUROC), sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV). Results: The median per-case AUROC for frame classification was 0.680, with a median specificity of 0.312 and a median sensitivity of 0.867. At the frame level, the median per-case PPV was 0.347 and the median per-case NPV was 0.837. All lesions were correctly flagged by our model.
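The per-case evaluation described above can be sketched as computing a frame-level AUROC per video and aggregating by the median. The labels and scores below are hypothetical toy data, and the rank-based AUROC shown here is one common way to compute it (the abstract does not state which implementation was used):

```python
# Sketch: median per-case frame-level AUROC, with hypothetical data.
import statistics

def auroc(labels, scores):
    """Rank-based AUROC: probability that a positive frame outscores a
    negative one, with ties counted as half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical per-case frame labels (cancer present) and model scores:
cases = [
    ([1, 1, 0, 0, 0], [0.9, 0.4, 0.5, 0.2, 0.1]),
    ([1, 0, 0, 1], [0.8, 0.3, 0.6, 0.7]),
]
per_case = [auroc(y, s) for y, s in cases]
print(statistics.median(per_case))
```

Aggregating per case rather than pooling all frames prevents long videos from dominating the reported metric, which matches the "median per-case" reporting in the results.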