Positive margin status after breast-conserving surgery (BCS) is a predictor of higher rates of local recurrence. Intraoperative margin assessment aims to achieve negative surgical margins at the first operation, thus reducing re-excision rates, which are associated with potential surgical complications, increased medical costs, and psychological stress for patients. Microscopy with ultraviolet surface excitation (MUSE) can rapidly image tissue surfaces with subcellular resolution and sharp contrast by exploiting the thin optical sectioning provided by deep-ultraviolet light. We have previously imaged 66 fresh human breast specimens, topically stained with propidium iodide and eosin Y, using a customized MUSE system. To achieve objective and automated assessment of MUSE images, a machine learning model was developed for binary (tumor vs. normal) classification of the obtained images. Features extracted by texture analysis and by pre-trained convolutional neural networks (CNNs) were investigated as sample descriptors. Sensitivity, specificity, and accuracy better than 90% were achieved for detecting tumorous specimens. These results suggest the potential of MUSE combined with machine learning for intraoperative margin assessment during BCS.
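As an illustrative sketch only (not the authors' exact pipeline), the code below shows how features from a pre-trained CNN can be extracted from MUSE image patches and fed to a binary tumor/normal classifier. The ResNet-18 backbone, the 224x224 input size, and the SVM classifier are assumptions, and load_muse_patches is a hypothetical data loader.

```python
# Sketch: pre-trained CNN features + classical classifier for tumor/normal patches.
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Pre-trained backbone with the final classification layer removed.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval().to(device)

preprocess = T.Compose([
    T.ToTensor(),
    T.Resize((224, 224)),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def cnn_features(patches):
    """Return an (N, 512) feature matrix for a list of RGB patches (H, W, 3 uint8)."""
    feats = []
    with torch.no_grad():
        for p in patches:
            x = preprocess(p).unsqueeze(0).to(device)
            feats.append(backbone(x).squeeze(0).cpu().numpy())
    return np.stack(feats)

# Hypothetical training data: lists of patches and 0/1 labels (normal/tumor).
# train_patches, train_labels, test_patches, test_labels = load_muse_patches(...)
# X_train, X_test = cnn_features(train_patches), cnn_features(test_patches)
# clf = SVC(kernel="rbf", C=1.0).fit(X_train, train_labels)
# print("accuracy:", accuracy_score(test_labels, clf.predict(X_test)))
```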
Deep learning-based image denoising and reconstruction methods have shown promising results for low-dose CT. When high-quality reference images are not available for training, Noise2Noise offers a powerful and effective alternative that trains the network on paired data with independent noise. However, paired CT scans with independent noise (e.g., from two scans) are uncommon in practice. In this paper, a method is proposed to generate such paired data for deep learning training by simultaneously simulating, from a single CT scan, a low-dose image at an arbitrary dose level and an image with independent noise. Their independence is investigated both analytically and numerically. In the numerical study, a Shepp-Logan phantom was used in MATLAB to generate the ground-truth, normal-dose, and low-dose reference images. Noise images were obtained for analysis by subtracting the ground truth from the noisy images, including the normal-dose and low-dose images and the paired products of the proposed method. The numerical study matches the analytical results very well, showing that the paired images are uncorrelated; under the additional assumption that they follow a bivariate normal distribution, they are also independent. The proposed method can produce a series of paired images at arbitrary dose levels from one CT scan, providing a powerful new way to enrich the diversity of low-dose data for deep learning.
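One well-known way to obtain two statistically independent noisy realizations from a single measurement is binomial thinning of the Poisson photon counts; the sketch below illustrates that idea and is not necessarily the paper's method. The incident flux I0 and the split fraction p are assumed values.

```python
# Illustrative sketch: splitting one sinogram into two independent lower-dose realizations.
import numpy as np

rng = np.random.default_rng(0)

def split_sinogram(line_integrals, I0=1e5, p=0.5):
    """Simulate photon counts for a noise-free sinogram (line integrals) and split
    them into two independent Poisson realizations via binomial thinning."""
    expected = I0 * np.exp(-line_integrals)          # Beer-Lambert expected counts
    counts = rng.poisson(expected)                   # single "measured" scan
    counts_a = rng.binomial(counts, p)               # keep each photon with prob. p
    counts_b = counts - counts_a                     # the complementary photons
    # counts_a ~ Poisson(p*expected), counts_b ~ Poisson((1-p)*expected), independent.
    sino_a = -np.log(np.maximum(counts_a, 1) / (p * I0))
    sino_b = -np.log(np.maximum(counts_b, 1) / ((1 - p) * I0))
    return sino_a, sino_b

# Quick numerical check of uncorrelatedness on a flat "phantom" sinogram.
sino = np.full((180, 256), 2.0)                      # constant line integrals
a, b = split_sinogram(sino)
na, nb = a - 2.0, b - 2.0                            # noise components
print("correlation:", np.corrcoef(na.ravel(), nb.ravel())[0, 1])  # approx. 0
```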
Liver vessel segmentation is important in diagnosing and treating liver diseases. Iodine-based contrast agents are typically used to improve liver vessel segmentation by enhancing vascular contrast. However, conventional computed tomography (CT) remains limited by low contrast because of its energy-integrating detectors. Photon counting detector-based computed tomography (PCD-CT) provides higher vascular contrast by exploiting multi-energy information, thereby enabling more accurate liver vessel segmentation. In this paper, we propose a deep learning-based liver vessel segmentation method that takes advantage of the multi-energy information from PCD-CT. We develop a 3D UNet that segments vascular structures within the liver from four energy-bin images in which the iodine contrast agent is separated. Experimental results on a simulated abdominal phantom dataset demonstrate that the proposed method for PCD-CT outperforms the standard deep learning segmentation method applied to conventional CT in terms of Dice overlap score and 3D vascular structure visualization.
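The sketch below shows a minimal 3D U-Net-style network that accepts the four energy-bin volumes as input channels and is trained with a Dice loss; the channel widths, network depth, and loss formulation are assumptions rather than the authors' exact architecture.

```python
# Minimal 3D U-Net sketch with 4 energy-bin input channels and a Dice loss.
import torch
import torch.nn as nn

def block(cin, cout):
    return nn.Sequential(
        nn.Conv3d(cin, cout, 3, padding=1), nn.InstanceNorm3d(cout), nn.ReLU(inplace=True),
        nn.Conv3d(cout, cout, 3, padding=1), nn.InstanceNorm3d(cout), nn.ReLU(inplace=True),
    )

class UNet3D(nn.Module):
    def __init__(self, in_ch=4, out_ch=1, base=16):
        super().__init__()
        self.enc1, self.enc2 = block(in_ch, base), block(base, base * 2)
        self.bott = block(base * 2, base * 4)
        self.pool = nn.MaxPool3d(2)
        self.up2 = nn.ConvTranspose3d(base * 4, base * 2, 2, stride=2)
        self.dec2 = block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose3d(base * 2, base, 2, stride=2)
        self.dec1 = block(base * 2, base)
        self.head = nn.Conv3d(base, out_ch, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bott(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)

def dice_loss(logits, target, eps=1e-6):
    p = torch.sigmoid(logits)
    inter = (p * target).sum()
    return 1 - (2 * inter + eps) / (p.sum() + target.sum() + eps)

# Example: one 4-bin patch of size 64^3 (batch, energy bins, D, H, W).
x = torch.randn(1, 4, 64, 64, 64)
y = (torch.rand(1, 1, 64, 64, 64) > 0.95).float()
print(dice_loss(UNet3D()(x), y).item())
```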
Accurately segmenting organs in abdominal computed tomography (CT) scans is crucial for clinical applications such as pre-operative planning and dose estimation. With the recent advent of deep learning algorithms, many robust frameworks have been proposed for organ segmentation in abdominal CT images. However, many of these frameworks require large amounts of training data to achieve high segmentation accuracy. Pediatric abdominal CT images containing reproductive organs are particularly hard to obtain because these organs are extremely sensitive to ionizing radiation, making it very challenging to train automatic segmentation algorithms for organs such as the uterus and the prostate. To address these issues, we propose a novel segmentation network with a built-in auxiliary classifier generative adversarial network (ACGAN) that conditionally generates additional features during training. The proposed CFG-SegNet (conditional feature generation segmentation network) is trained with a single loss function that combines adversarial, reconstruction, auxiliary classifier, and segmentation losses. 2.5D segmentation experiments are performed on a custom dataset of 24 female CT volumes containing the uterus and 40 male CT volumes containing the prostate. CFG-SegNet achieves an average segmentation accuracy of 0.929 DSC (Dice similarity coefficient) on the prostate and 0.724 DSC on the uterus with 4-fold cross-validation. The results show that our network is high-performing and has the potential to precisely segment difficult organs with few available training images.
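The following is a hedged sketch of the kind of single combined objective described above; the specific loss terms (binary cross-entropy segmentation, generator-side adversarial, L1 reconstruction, cross-entropy auxiliary) and the weighting factors are assumptions, not the paper's values.

```python
# Sketch of a combined adversarial + reconstruction + auxiliary + segmentation loss.
import torch
import torch.nn.functional as F

def cfg_segnet_loss(seg_logits, seg_target,      # segmentation prediction and mask
                    disc_fake,                   # discriminator score on generated features
                    recon, recon_target,         # reconstruction and its target
                    aux_logits, aux_labels,      # auxiliary class prediction and labels
                    lam_adv=0.1, lam_rec=1.0, lam_aux=0.5):
    """Single combined objective: segmentation + adversarial + reconstruction + auxiliary."""
    seg = F.binary_cross_entropy_with_logits(seg_logits, seg_target)
    adv = F.binary_cross_entropy_with_logits(disc_fake, torch.ones_like(disc_fake))
    rec = F.l1_loss(recon, recon_target)
    aux = F.cross_entropy(aux_logits, aux_labels)
    return seg + lam_adv * adv + lam_rec * rec + lam_aux * aux

# Example with dummy tensors (shapes are illustrative only).
loss = cfg_segnet_loss(
    seg_logits=torch.randn(2, 1, 64, 64), seg_target=torch.rand(2, 1, 64, 64).round(),
    disc_fake=torch.randn(2, 1),
    recon=torch.randn(2, 1, 64, 64), recon_target=torch.randn(2, 1, 64, 64),
    aux_logits=torch.randn(2, 2), aux_labels=torch.tensor([0, 1]),
)
print(loss.item())
```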
Computer-aided classification of breast cancer using histopathological images can play a significant role in clinical practice by identifying the distinct type of malignant or benign tumor. However, currently proposed deep learning models developed on the BreakHis dataset only perform binary classification between benign and malignant tumors and are also scale-dependent. This study uses a ResNet-50 implementation to transform images from the four magnification factors so that all images can be used to train the deep neural network, yielding a larger training set that is also scale-independent. We adopt a dual-step approach: the first pass performs binary (benign vs. malignant) classification, and the second pass performs multi-class classification of malignant tumor subtypes, which offers higher clinical utility.
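The snippet below sketches the two-pass inference described above; the model interfaces and the 0.5 decision threshold are illustrative assumptions, while the subtype list follows the four malignant classes of the BreakHis dataset.

```python
# Sketch of dual-step inference: binary benign/malignant, then malignant subtype.
import torch

MALIGNANT_SUBTYPES = ["ductal_carcinoma", "lobular_carcinoma",
                      "mucinous_carcinoma", "papillary_carcinoma"]

def classify_patch(image, binary_model, subtype_model, threshold=0.5):
    """First pass: benign vs. malignant. Second pass: malignant subtype."""
    with torch.no_grad():
        p_malignant = torch.sigmoid(binary_model(image)).item()
        if p_malignant < threshold:
            return {"label": "benign", "p_malignant": p_malignant}
        subtype_idx = subtype_model(image).argmax(dim=1).item()
        return {"label": MALIGNANT_SUBTYPES[subtype_idx], "p_malignant": p_malignant}
```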
In general X-ray radiography, inconsistent brightness and contrast in the initial presentation is a common complaint from radiologists. These inconsistencies, which may result from variations in patient positioning, dose, protocol selection, and implants, can lead to additional work for technologists and radiologists in adjusting the images. To tackle this challenge posed by the conventional histogram-based display approach, an AI-Based Brightness Contrast (AI BC) algorithm is proposed to improve presentation consistency, using a residual neural network trained to classify X-ray images over an N-by-M grid of brightness and contrast combinations. More than 30,000 unique images from sites in the US, Ireland, and Sweden, covering 31 anatomy/view combinations, were used for training. The model achieved an average test accuracy of 99.2% on a set of 2,700 images. The AI BC algorithm uses the model to classify and adjust images to a reference look and then applies further adjustments to match user preference. Quantitative evaluation using ROI-based metrics on a set of twelve wrist images showed a 53% reduction in mean pixel intensity variation and a 39% reduction in bone-tissue contrast variation. In a study with application specialists adjusting the presentation of 30 images covering three anatomies (foot, abdomen, and knee), the specialists took about 20 minutes on average to adjust the conventional set versus about 10 minutes for the AI BC set. The proposed approach demonstrates the feasibility of using a deep learning technique to reduce inconsistency in the initial display presentation and improve user workflow.
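As a conceptual sketch of the adjustment step only (not the product algorithm), the code below maps a predicted grid class to a brightness offset and contrast gain and inverts them to reach a reference presentation; the 5x5 grid, the offset/gain ranges, and the linear remapping are all assumptions.

```python
# Conceptual sketch: undo a predicted brightness/contrast deviation on a normalized image.
import numpy as np

N_BRIGHTNESS, M_CONTRAST = 5, 5                      # hypothetical N x M grid
brightness_offsets = np.linspace(-0.2, 0.2, N_BRIGHTNESS)
contrast_gains = np.linspace(0.7, 1.3, M_CONTRAST)

def adjust_to_reference(image, grid_class):
    """Invert the predicted brightness/contrast deviation so the image matches a
    reference presentation. `grid_class` is the CNN's predicted index in [0, N*M)."""
    b_idx, c_idx = divmod(grid_class, M_CONTRAST)
    b, c = brightness_offsets[b_idx], contrast_gains[c_idx]
    mean = image.mean()
    corrected = (image - mean - b) / c + mean        # invert gain/offset about the mean
    return np.clip(corrected, 0.0, 1.0)

# Example on a synthetic normalized radiograph; the center class implies no net change.
img = np.random.rand(256, 256).astype(np.float32)
out = adjust_to_reference(img, grid_class=12)
```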
Significance: Re-excision rates for women with invasive breast cancer undergoing breast conserving surgery (or lumpectomy) have decreased in the past decade but remain substantial. This is mainly due to the inability to assess the entire surface of an excised lumpectomy specimen efficiently and accurately during surgery.
Aim: The goal of this study was to develop a deep-ultraviolet scanning fluorescence microscope (DUV-FSM) that can be used to accurately and rapidly detect cancer cells on the surface of excised breast tissue.
Approach: A DUV-FSM was used to image the surfaces of 47 (31 malignant and 16 normal/benign) fresh breast tissue samples stained in propidium iodide and eosin Y solutions. A set of fluorescence images was obtained from each sample using low magnification (4×) and fully automated scanning, and the images were stitched to form a color image. Three nonmedical evaluators were trained to interpret and assess the fluorescence images. The nuclear-cytoplasm ratio (N/C) was calculated and used for tissue classification.
Results: The DUV-FSM images a breast sample with subcellular resolution at a speed of 1.0 min/cm². Fluorescence images show excellent visual contrast in color, tissue texture, cell density, and shape between invasive carcinomas and their normal counterparts. Visual interpretation of the fluorescence images by nonmedical evaluators distinguished invasive carcinoma from normal samples with high sensitivity (97.62%) and specificity (92.86%). The N/C ratio alone differentiated patch-level invasive carcinoma from normal breast tissue with reasonable sensitivity (81.5%) and specificity (78.5%).
Conclusions: DUV-FSM achieved a good balance between imaging speed and spatial resolution with excellent contrast, which allows either visual or quantitative detection of invasive cancer cells on the surfaces of a breast surgical specimen.
Breast cancer is the most commonly diagnosed cancer among women. Positive margin status after breast-conserving surgery (BCS) is a predictor of higher rates of local recurrence, and intraoperative margin detection helps complete tumor excision at the first operation. A margin tool capable of imaging all six margins of large lumpectomy specimens with both high resolution and fast speed (within 20 min) has yet to be developed. Deep-UV light allows simultaneous excitation of multiple fluorophores and generation of surface fluorescence images. We have developed a deep-UV fluorescence scanning microscope (DUV-FSM) for slide-free, high-resolution, and rapid examination of tumor specimens during BCS. The DUV-FSM uses a deep-UV LED for oblique back-illumination of freshly excised breast tissues stained with propidium iodide and eosin Y, and motorized XY stages for mosaic scanning; fluorescence images are captured by a color CCD camera. Both invasive lobular carcinoma (ILC) and invasive ductal carcinoma (IDC) images showed excellent contrast with normal cells in color, tissue texture, and cell density and shape. This contrast has been consistently observed in all samples (n = 20) imaged so far. Statistical analysis showed a significant difference (p < 0.0001) in nucleus-to-cytoplasm (N/C) ratio between normal and invasive tissues. The contrast may thus be exploited either visually by a trained individual or quantitatively by an algorithm to detect positive margins of lumpectomy specimens intraoperatively.
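A rough sketch of how a nucleus-to-cytoplasm ratio might be computed from such images is given below; the channel assignments (propidium iodide nuclei dominant in the red channel, eosin cytoplasm in the green channel) and the Otsu thresholding are assumptions about one plausible implementation, not the authors' exact procedure.

```python
# Sketch: area-based nucleus-to-cytoplasm (N/C) ratio from a stained RGB patch.
import numpy as np
from skimage.filters import threshold_otsu

def nc_ratio(rgb_patch):
    """Return the ratio of nuclear area to cytoplasmic area in an RGB patch."""
    red = rgb_patch[..., 0].astype(float)
    green = rgb_patch[..., 1].astype(float)
    nuclei = red > threshold_otsu(red)                      # PI-stained nuclei
    cytoplasm = (green > threshold_otsu(green)) & ~nuclei   # eosin-stained cytoplasm
    return nuclei.sum() / max(cytoplasm.sum(), 1)

# A patch could then be labeled invasive when nc_ratio(patch) exceeds a cutoff
# chosen on a training set (the exact threshold is not specified here).
```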
Model-based iterative reconstruction (MBIR) algorithms have significantly improved CT image quality by increasing resolution and reducing noise and artifacts. In diagnostic protocols, radiologists often need a high-resolution reconstruction of a limited region of interest (ROI). Such ROI reconstruction is complicated for MBIR, which must reconstruct an image over the full field of view (FOV) given the full sinogram measurements. Multi-resolution approaches are widely used for MBIR ROI reconstruction: the full-FOV image is reconstructed at low resolution, and the forward projection of the non-ROI region is subtracted from the original sinogram measurements before the high-resolution ROI reconstruction. However, a low-resolution reconstruction of the full FOV can be susceptible to streaking and blurring artifacts, which can propagate into the subsequent high-resolution ROI reconstruction. To tackle this challenge, we use a coupled dictionary representation model between low- and high-resolution training datasets for artifact removal and super-resolution of the low-resolution full-FOV reconstruction. Experimental results on phantom data show that the full-FOV reconstruction restored via coupled dictionary learning significantly improves the image quality of the high-resolution ROI reconstruction for MBIR.
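The following schematic sketch illustrates the non-ROI subtraction step of the multi-resolution approach, with scikit-image's parallel-beam radon/iradon (filtered back-projection) standing in for the real scanner geometry and the MBIR solver; the ROI location, the resolution factor, and the FOV mask are assumptions used only for illustration.

```python
# Schematic sketch of multi-resolution ROI reconstruction: reconstruct full FOV at
# low resolution, forward-project the non-ROI content, subtract it from the sinogram,
# and reconstruct the ROI from the residual.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, resize

theta = np.linspace(0.0, 180.0, 180, endpoint=False)
phantom = shepp_logan_phantom()                       # 400 x 400 "object"
sinogram = radon(phantom, theta=theta)                # full sinogram measurements

# 1) Low-resolution full-FOV reconstruction (FBP here as a stand-in for MBIR).
low_res = iradon(resize(sinogram, (sinogram.shape[0] // 2, len(theta))), theta=theta)
yy, xx = np.ogrid[:phantom.shape[0], :phantom.shape[1]]
circle = (yy - 200) ** 2 + (xx - 200) ** 2 < 195 ** 2  # keep values inside the FOV circle
full_fov = resize(low_res, phantom.shape) * circle

# 2) Zero out the ROI and forward-project the remaining (non-ROI) content.
roi = np.zeros(phantom.shape, dtype=bool)
roi[150:250, 150:250] = True                          # hypothetical ROI
non_roi_image = np.where(roi, 0.0, full_fov)
sino_non_roi = radon(non_roi_image, theta=theta)

# 3) Subtract the non-ROI contribution; the residual sinogram drives the
#    high-resolution reconstruction restricted to the ROI.
sino_roi = sinogram - sino_non_roi
roi_recon = iradon(sino_roi, theta=theta)[150:250, 150:250]
```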
This paper describes a fast multi-scale vessel enhancement filter for 3D medical images. For efficient review of vascular information, clinicians need to render the 3D vascular data as a 2D image. The maximum intensity projection (MIP) is a useful and widely used technique for producing a 2D image from 3D vascular data; however, MIP reduces the conspicuity of small and faint vessels owing to the overlap of non-vascular structures. To overcome this limitation, researchers have examined multi-scale vessel enhancement filters based on combinations of the eigenvalues of the 3D Hessian matrix. Such filters produce higher vessel contrast but are time-consuming and computationally expensive because of the large data volume and the complex 3D convolutions. For fast vessel enhancement, we propose a novel multi-scale vessel enhancement filter using 3D integral images and a 3D approximated Gaussian kernel. The approximated kernel is roughly cubic but not an exact cube: each layer approximates a 2D Gaussian second-order derivative by dividing it into three rectangular regions with integer weights, and the 3D kernel is a stack of these 2D box kernels normalized by the Frobenius norm. The kernel size is matched to the vessel width to better visualize small vessels. The proposed method is approximately five times faster than the previous multi-scale vessel enhancement filter and produces comparable results.
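The core primitive behind the speed-up, a 3D integral image that makes any axis-aligned box sum (and hence a box-approximated Gaussian derivative response) a constant-time operation, is sketched below; the paper's specific kernel design is not reproduced.

```python
# Sketch: 3D integral image and constant-time box sums via inclusion-exclusion.
import numpy as np

def integral_image_3d(volume):
    """Cumulative sum along all three axes, zero-padded for easy corner lookups."""
    ii = volume.cumsum(0).cumsum(1).cumsum(2)
    return np.pad(ii, ((1, 0), (1, 0), (1, 0)))

def box_sum(ii, z0, y0, x0, z1, y1, x1):
    """Sum of volume[z0:z1, y0:y1, x0:x1] using 8 integral-image lookups."""
    return (ii[z1, y1, x1] - ii[z0, y1, x1] - ii[z1, y0, x1] - ii[z1, y1, x0]
            + ii[z0, y0, x1] + ii[z0, y1, x0] + ii[z1, y0, x0] - ii[z0, y0, x0])

vol = np.random.rand(32, 32, 32)
ii = integral_image_3d(vol)
assert np.isclose(box_sum(ii, 2, 3, 4, 10, 12, 14), vol[2:10, 3:12, 4:14].sum())
```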