Proceedings Volume Medical Imaging 2019: Imaging Informatics for Healthcare, Research, and Applications, 1095401 (2019) https://doi.org/10.1117/12.2533352
This PDF file contains the front matter associated with SPIE Proceedings Volume 10954, including the Title Page, Copyright information, Table of Contents, Introduction, and Conference Committee listing.
PACS and Clinical Multimedia Data for Non-radiology Images
Proceedings Volume Medical Imaging 2019: Imaging Informatics for Healthcare, Research, and Applications, 1095402 (2019) https://doi.org/10.1117/12.2512458
The rise of deep learning (DL) frameworks and their application to object recognition could benefit image-based medical diagnosis. Since the eye is believed to be a window into human health, applying DL to differentiate abnormal ophthalmic photographs (OPs) can help ophthalmologists reduce their disease-screening workload. In our previous work, we employed ResNet-50 to construct a classification model for diabetic retinopathy (DR) within the PACS. In this study, we implemented the latest DL object detection and semantic segmentation frameworks to extend the eye-PACS. The Mask R-CNN framework was selected for object detection and instance segmentation of the optic disc (OD) and the macula. Furthermore, the U-Net framework was utilized for semantic segmentation of retinal vessel pixels from OPs. Both frameworks achieved state-of-the-art segmentation performance, and the segmented results were transmitted to the PACS as grayscale softcopy presentation state (GSPS) files. We also developed a prototype for quantitative OP analysis. We believe that applying DL frameworks to object recognition and analysis of OPs is meaningful and worth further investigation.
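As a rough illustration of the semantic-segmentation component described in this abstract, the sketch below builds a minimal U-Net-style encoder-decoder in Keras for binary retinal-vessel masks. It is not the authors' network: the depth, filter counts, input size (512x512 single-channel patches), and training settings are placeholder assumptions.

```python
# Minimal U-Net-style model for retinal vessel segmentation (illustrative only).
# Assumptions: 512x512 single-channel fundus patches, binary vessel masks.
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    # Two 3x3 convolutions with ReLU, as in the original U-Net design.
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

def build_unet(input_shape=(512, 512, 1)):
    inputs = layers.Input(input_shape)
    # Encoder: two downsampling stages (the real model may use more).
    c1 = conv_block(inputs, 32)
    p1 = layers.MaxPooling2D()(c1)
    c2 = conv_block(p1, 64)
    p2 = layers.MaxPooling2D()(c2)
    # Bottleneck.
    b = conv_block(p2, 128)
    # Decoder: upsample and concatenate the encoder skip connections.
    u2 = layers.Concatenate()([layers.UpSampling2D()(b), c2])
    c3 = conv_block(u2, 64)
    u1 = layers.Concatenate()([layers.UpSampling2D()(c3), c1])
    c4 = conv_block(u1, 32)
    # Per-pixel vessel probability.
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(c4)
    return Model(inputs, outputs)

model = build_unet()
model.compile(optimizer="adam", loss="binary_crossentropy")
# model.fit(train_images, train_masks, ...) would be called with real OP data.
```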
Proceedings Volume Medical Imaging 2019: Imaging Informatics for Healthcare, Research, and Applications, 1095403 (2019) https://doi.org/10.1117/12.2512784
In radiography and fluoroscopy, the dose-area product (DAP) is used for dose documentation and for evaluating whether the applied dose is too high, adequate, or too low. In dose management systems (applied in fluoroscopy and radiography), a mean DAP over a number of consecutive examinations is calculated and compared to the diagnostic reference levels of the different examination types. This shows whether the dose level is too high on average, but it does not work for an individual patient. To achieve a radiograph of adequate image quality, the required DAP for a slender patient is significantly lower than for a standard patient, and vice versa for obese patients. Without knowledge of patient thickness, there is therefore no way to judge whether the dose level for an individual is appropriate. To overcome this problem, an estimate of the patient size was calculated from information in the DICOM header of the images. By extracting the dose at the detector, the DAP, the examination type, the beam quality of the radiation used (spectrum), and the exposed area of the detector, an estimate of the water-equivalent patient thickness can be determined. Monte Carlo simulations and measurements with varying thicknesses of a water phantom were in excellent agreement. The accuracy of the estimate was better than 1 cm. Further clinical experiments with patients undergoing an examination of the lumbar spine showed that an accuracy better than 20% and a standard deviation of 10% are achievable. An automatic estimate of patient thickness in fluoroscopy and radiography is therefore feasible and facilitates a computer-based judgement of whether the dose for an individual patient is adequate.
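A minimal sketch of the thickness-estimation idea, under the simplifying assumption that attenuation in water follows a single effective exponential law: the entrance dose is approximated as DAP divided by the exposed area, and the water-equivalent thickness follows from the ratio of entrance to detector dose. The effective attenuation coefficient mu_eff would in practice come from the Monte Carlo and phantom calibration described in the abstract; the value and numbers used here are placeholders, not the authors' model.

```python
# Illustrative water-equivalent thickness estimate from radiographic dose data.
# Assumes a single effective exponential attenuation law; mu_eff is a placeholder
# that would be calibrated per beam quality (kV, filtration) as in the abstract.
import math

def water_equivalent_thickness_cm(dap_gy_cm2, exposed_area_cm2,
                                  detector_dose_gy, mu_eff_per_cm=0.2):
    """Estimate patient water-equivalent thickness in cm.

    dap_gy_cm2       : dose-area product reported by the system
    exposed_area_cm2 : exposed area of the detector
    detector_dose_gy : dose measured at the detector
    mu_eff_per_cm    : effective linear attenuation coefficient of water for this spectrum
    """
    entrance_dose_gy = dap_gy_cm2 / exposed_area_cm2      # crude entrance-dose estimate
    attenuation = entrance_dose_gy / detector_dose_gy     # factor removed by the patient
    return math.log(attenuation) / mu_eff_per_cm          # invert exp(-mu * t)

# Example with made-up numbers: DAP 1.2 Gy*cm^2 over 600 cm^2, 4e-4 Gy at the detector.
print(round(water_equivalent_thickness_cm(1.2, 600.0, 4e-4), 1), "cm")
```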
Proceedings Volume Medical Imaging 2019: Imaging Informatics for Healthcare, Research, and Applications, 1095404 (2019) https://doi.org/10.1117/12.2513774
Research in the area of sports science and performance enhancement revolves around collecting multimedia data (e.g., video, images, and waveforms), processing them, and quantifying the results, which gives insight to help athletes improve their technique. For example, in the long jump in track and field, the processed output of video with force vector overlays and force calculations allows coaches to view specific stages of the hop, step, and jump, and identify how each stage can be improved to increase jump distance. Outputs also provide insight into how athletes can better maneuver to prevent injury. Currently, each data collection site collects and stores data with its own methods, without a standard. Different research groups store multimedia files and quantified results in different formats, structures, and locations. Imaging informatics-based principles were adopted to develop a platform for multiple institutions that promotes the standardization of sports performance data. The system will provide user authentication and privacy as in clinical trials, with specific user access rights. Data collected from different field sites will be standardized into specified formats before database storage, similar to field sites in clinical imaging-based trials. Quantified results from image-processing algorithms are stored similarly to CAD algorithm results. The system will streamline the current sports performance data workflow and provide a user interface for different users, including researchers, athletes, and coaches, to view results of individual collections and longitudinally across different collections. The developed data viewer will allow for easy access and review of data to improve sports performance and prevent injury.
Proceedings Volume Medical Imaging 2019: Imaging Informatics for Healthcare, Research, and Applications, 1095405 (2019) https://doi.org/10.1117/12.2513046
With advances in data acquisition devices, current rehabilitation research involves extensive experimental trials to identify the underlying cause or long-term consequences of certain pathologies and to improve motor functions by examining the movement patterns of affected individuals. For research focused on movement analysis, a high volume of multimedia data, such as high-speed video, is acquired along with other data types such as surveys, spreadsheets, and force recordings from instrumented surfaces. These datasets are recorded from various standalone sources; however, multimedia data in rehabilitation research often requires analysis in the context of the subject's medical history, treatment plan, or progress. In the current research workflow, no pathways or data handling protocols are defined to achieve comprehensive integration of multimedia data; for the various data types collected at different stages of a clinical trial, different data handling protocols need to be defined and designed. This presentation focuses on multimedia data only, such as high-speed video files collected during a clinical trial. Multimedia data collected during rehabilitation research often ends up residing in an isolated space. Our aim is to design and evaluate data handling steps for successful integration, storage, and retrieval of multimedia data, and to establish tools that can be used between the data acquisition and data storage steps for media files. We will present a method for metadata creation for multimedia data based on the electronic patient record (ePR) model design, as well as two common medical imaging standards, DICOM and PACS, to form an effective data handling protocol. For evaluation, a dataset collected for a wheelchair movement analysis study at Rancho Los Amigos National Rehabilitation Center in Downey, California, will be used. The broader aim of this paper is to present the development of standards and protocols for multimedia data handling in a clinical workflow based on medical imaging informatics.
Proceedings Volume Medical Imaging 2019: Imaging Informatics for Healthcare, Research, and Applications, 1095406 (2019) https://doi.org/10.1117/12.2512068
In this paper, we present the development and deployment of the Fundus Analysis Software Tool (FAST), which enables the analysis of different anatomical features and pathologies within fundus images over time, and demonstrate its usefulness with three use cases. First, we utilized FAST to acquire 616 fundus images from a remote clinic in a HIPAA-compliant manner. An ophthalmologist at the clinic then used FAST to annotate 190 fundus images containing exudates at the pixel level in a time-efficient manner. In comparison with publicly available datasets, our dataset constitutes the largest pixelwise-labeled collection of images and the first exudate segmentation dataset with eye-matched pairs of images for a given patient. Second, we developed an optic disk CAD segmentation algorithm, which achieved a mean intersection over union of 0.930, comparable to the disagreement between ophthalmologist annotations. We deployed this algorithm into FAST, where it segments and displays the segmentation on screen while simultaneously filling out specified optic disk fields of a DICOM-SR report on the fundus image. Third, we integrated our software with the open-source EHR framework OpenMRS, so that it can upload both automatic and manual analyses of the fundus to a remote server using the HL7 FHIR standard and then retrieve historical reports for a patient chronologically. Finally, we discuss our design decisions in developing FAST, particularly those relating to its treatment of DICOM-SR reports based on fundus images and its usage of the FHIR standard, and its next steps towards enabling effective analyses of fundus images.
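The FHIR-based upload described above could look roughly like the sketch below, which posts an Observation resource summarizing an automated fundus analysis to a FHIR server over REST. The server URL, patient reference, coding, and value are hypothetical placeholders, not the endpoints, profiles, or codes used by FAST or OpenMRS.

```python
# Illustrative upload of a fundus-analysis result as a FHIR Observation (placeholder values).
import json
import requests

FHIR_BASE = "https://fhir.example.org/fhir"   # hypothetical FHIR endpoint

observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {"text": "Automated optic disk segmentation result"},  # free-text code for illustration
    "subject": {"reference": "Patient/example-123"},                # hypothetical patient id
    "valueQuantity": {"value": 0.93, "unit": "IoU"},                # e.g. mean intersection over union
}

resp = requests.post(
    f"{FHIR_BASE}/Observation",
    data=json.dumps(observation),
    headers={"Content-Type": "application/fhir+json"},
    timeout=10,
)
resp.raise_for_status()
print("Created:", resp.json().get("id"))

# Historical results for the same patient could then be retrieved chronologically, e.g.:
# requests.get(f"{FHIR_BASE}/Observation?subject=Patient/example-123&_sort=date")
```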
3-D Printing, Augmented Reality, and Virtual Reality for Medical Applications
Proceedings Volume Medical Imaging 2019: Imaging Informatics for Healthcare, Research, and Applications, 1095407 (2019) https://doi.org/10.1117/12.2511887
One of the most popular methods of 3D dataset visualization is direct volume rendering. This paper describes an algorithm that accelerates the antialiasing process in direct volume rendering. The new method works in two passes: the first is executed at the pixel level, the second at the subpixel level. In the first pass, rays are cast not through the pixel centers of the resulting image but with a half-pixel offset. The volume-integral values for these rays, including the values over a predefined set of intervals, are stored in a so-called G-buffer. In the second pass, if the resulting colors for adjacent rays are close, the color of the pixel between those rays is interpolated bilinearly; otherwise, several subpixels are processed for that pixel to obtain a more accurate color value. The per-interval color values from the G-buffer are used to accelerate the calculation of the volume integral over the subpixel rays. The more subpixels are used, the more efficient the approach becomes, and the speed also increases with growing dataset coherence. For example, for typical medical volume data, selective antialiasing with four subpixels accelerates rendering about three times compared with full-screen antialiased direct volume rendering.
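A schematic sketch of the adaptive second pass: if adjacent half-pixel-offset rays from the first pass have similar colors, the in-between pixel is interpolated; otherwise extra subpixel rays are cast. The cast_ray function, grid size, and the color-difference threshold are toy placeholders, and the reuse of per-interval integrals from the G-buffer that makes the real method fast is omitted here.

```python
# Schematic adaptive second pass over a grid of first-pass ray colors.
# cast_ray() is a toy stand-in for a full volume-integral ray evaluation.
import numpy as np

def cast_ray(x, y):
    # Toy "volume integral": a smooth synthetic color field with one sharp edge.
    v = 0.5 + 0.5 * np.sin(0.2 * x) * np.cos(0.2 * y)
    if x > 40:                      # artificial edge to exercise the subpixel branch
        v = 1.0 - v
    return np.array([v, v, v])

def color_distance(a, b):
    return float(np.max(np.abs(a - b)))

def second_pass(first_pass, threshold=0.05, n_sub=4):
    h, w, c = first_pass.shape
    out = np.zeros((h - 1, w - 1, c))
    for i in range(h - 1):
        for j in range(w - 1):
            corners = [first_pass[i, j], first_pass[i, j + 1],
                       first_pass[i + 1, j], first_pass[i + 1, j + 1]]
            if max(color_distance(corners[0], k) for k in corners[1:]) < threshold:
                # Smooth region: interpolate between the surrounding offset rays.
                out[i, j] = np.mean(corners, axis=0)
            else:
                # Edge region: cast extra subpixel rays for a more accurate color.
                subs = [cast_ray(j + (s + 0.5) / n_sub, i + 0.5) for s in range(n_sub)]
                out[i, j] = np.mean(subs, axis=0)
    return out

# First pass: rays cast with a half-pixel offset on a small 65x65 grid.
first = np.array([[cast_ray(j + 0.5, i + 0.5) for j in range(65)] for i in range(65)])
print(second_pass(first).shape)   # (64, 64, 3)
```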
Proceedings Volume Medical Imaging 2019: Imaging Informatics for Healthcare, Research, and Applications, 1095408 (2019) https://doi.org/10.1117/12.2512425
Screening training efficiency relies highly on appropriate interaction and feedback provided during training.1,2 A dedicated screening workstation and dedicated viewing software are de rigueur for UK breast cancer screener training. These workstations and software are mainly produced by leading international vendors, without the critical technical details divulged that would allow integration of third-party screener training solutions. A non-wearable AR approach has previously been developed and its accuracy quantitatively established. As a follow-up, this study refactors the previous approach on the wearable platform HoloLens. Wearable AR solutions offer considerably more freedom of movement and are seamlessly integrated, but they are less customisable. We are not aware of screening-suitable room-scale AR approaches that have been developed and adopted; however, wearable AR techniques offer relatively sophisticated apparatus developed for personal use. In this study, the HoloLens is adopted and the difficulties of employing wearable AR techniques for screening training are systematically addressed. It is found that the infrared sensors of wearable AR solutions cannot retrieve spatial data correctly when the detected object is a monitor screen or another object that interferes with infrared sensing. Moreover, the HoloLens has difficulty detecting large objects, and both its interaction range and its visible range are quite limited. An alternative method is therefore developed for the HoloLens that is fully functional for screening training.
Proceedings Volume Medical Imaging 2019: Imaging Informatics for Healthcare, Research, and Applications, 1095409 (2019) https://doi.org/10.1117/12.2512843
This study aims to demonstrate the ability and efficacy of 3D printing image-based, implantable biological scaffolds with varying properties. Scaffolds were printed using various ratios of hydroxyapatite (HA) to polycaprolactone (PCL) to display a spectrum of properties suitable for musculoskeletal scaffolds. As an initial application of this method, scaffolds were generated from a series of one hundred DICOM images of the proximal femur of a 60-year-old female. Additional structures, including a printed box and a circular lattice, were generated. These models were printed at HA-to-PCL ratios (m/m) of 1:9, 2:8, 3:7, 4:6, 5:5, 6:4, 7:3, 8:2, and 9:1. Post-printing analysis of the ratios was performed with scanning electron microscopy to observe the prints' microstructure, and also included a compression test to observe biomechanical properties and a cell culture on the prints to observe cellular viability and adhesion. The ratios showed vast microstructural differences, and the 6:4 sample had the surface-level microstructure most similar to that of human trabecular bone. The compression test revealed a positive correlation (R2 = 0.92) between HA concentration (%) and stiffness (N/mm). Cellular viability and adhesion were confirmed for 10 days after initial cell seeding.
Proceedings Volume Medical Imaging 2019: Imaging Informatics for Healthcare, Research, and Applications, 109540A (2019) https://doi.org/10.1117/12.2513411
Quantitative assessment is essential to ensure correct diagnosis and effective treatment of chronic wounds. So far, devices with depth cameras and infrared sensors have been used for the computer-aided diagnosis of cutaneous wounds. However, these devices have limited accessibility and usage. On the other hand, smartphones are commonly available, and three-dimensional (3D) reconstruction using smartphones can be an important tool for wound assessment. In this paper, we analyze various open-source libraries for smartphone-based 3D reconstruction of wounds. Point clouds are obtained from cutaneous wound regions using Google ARCore and Structure from Motion (SfM) libraries. These point clouds are subjected to de-noising filters to remove outliers and to improve point-cloud density. Subsequently, surface reconstruction is performed on the point cloud to generate a 3D model. Six different mesh-reconstruction algorithms, namely Delaunay triangulation, convex hull, point crust, Poisson surface reconstruction, alpha complex, and marching cubes, are considered. Performance is evaluated using quality metrics such as complexity, point-cloud density, accuracy of depth information, and the efficacy of the reconstruction algorithm. The results show that 3D reconstruction of wounds from these point clouds is feasible using open-source libraries. The point clouds obtained from SfM have higher density and accuracy compared to ARCore, and Poisson surface reconstruction is found to be the best algorithm for effective 3D reconstruction from the point clouds. However, research is still required on techniques to enhance the quality of point clouds obtained with smartphones and to reduce the computational cost associated with point-cloud-based 3D reconstruction.
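A minimal sketch of the point-cloud denoising and Poisson surface reconstruction steps using Open3D, one common open-source choice (not necessarily the libraries the authors evaluated). The file names, filter parameters, and Poisson octree depth are placeholder assumptions to be tuned per dataset.

```python
# Illustrative wound point-cloud cleanup and Poisson surface reconstruction with Open3D.
import open3d as o3d

# Load a point cloud exported from ARCore or an SfM pipeline (hypothetical file name).
pcd = o3d.io.read_point_cloud("wound_pointcloud.ply")

# Remove sparse outliers with a statistical filter.
pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

# Poisson reconstruction needs consistently oriented normals.
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.01, max_nn=30))
pcd.orient_normals_consistent_tangent_plane(k=15)

# Reconstruct a surface; densities can be used to trim low-support triangles afterwards.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)

o3d.io.write_triangle_mesh("wound_mesh.ply", mesh)
print("vertices:", len(mesh.vertices), "triangles:", len(mesh.triangles))
```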
Proceedings Volume Medical Imaging 2019: Imaging Informatics for Healthcare, Research, and Applications, 109540B (2019) https://doi.org/10.1117/12.2512779
Purpose: To develop a novel in vitro method of evaluating coronary artery ischemia using a combination of non-invasive coronary CT angiograms (CCTA) and 3D printing. Materials and Methods: Five patients with varying degrees of coronary artery disease who underwent non-invasive CCTA scans and invasive fractional flow reserve (FFR) of their left anterior descending coronary artery (LAD) were included in this study. The LAD artery was segmented and reconstructed using Mimics (Materialise). The segmented models were then 3D printed using Carbon (Carbon Inc.) and Objet260 Connex (Stratasys) printers with the urethane methacrylate (UMA) family of rigid resins and VeroClear, respectively. An in vitro flow circulation system representative of invasive measurements in a cardiac catheterization laboratory was developed to experimentally evaluate the hemodynamic parameters of pressure and flow. Physiological coronary circulation was modelled in vitro as flow-dependent stenosis resistance in series with variable downstream resistance. A range of physiological flow rates was applied by a peristaltic steady-flow pump and titrated by a flow sensor. The pressure drop and the pressure ratio (Pd/Pa) were assessed for patient-specific aortic pressure and differing flow rates to evaluate FFR in vitro. Results: For these five models, there was a good positive correlation (r = 0.78) between the in vitro and invasive FFR. The mean difference between in vitro and invasively measured FFR, as assessed by Bland-Altman analysis, was 0.01±0.1 (95% limits of agreement -0.1169 to 0.1369). Conclusions: 3D printed patient-specific models can be used in a non-invasive in vitro environment to quantify coronary artery ischemia as assessed by invasive FFR.
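The in vitro FFR estimate and its agreement with the invasive values can be computed with a few lines of NumPy, as sketched below with fabricated example numbers (not the study's data): FFR is the ratio of distal to aortic pressure, and Bland-Altman analysis reports the mean difference and its 95% limits of agreement.

```python
# FFR as the distal-to-aortic pressure ratio, plus Bland-Altman agreement statistics.
# The arrays below are made-up illustrative values, not the five patients in the study.
import numpy as np

pd_mmHg = np.array([72.0, 81.0, 60.0, 88.0, 77.0])   # distal pressure in the model
pa_mmHg = np.array([95.0, 92.0, 90.0, 96.0, 94.0])   # aortic (proximal) pressure
ffr_in_vitro = pd_mmHg / pa_mmHg

ffr_invasive = np.array([0.78, 0.86, 0.70, 0.90, 0.80])  # catheter-lab reference (fabricated)

diff = ffr_in_vitro - ffr_invasive
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)          # half-width of the 95% limits of agreement
r = np.corrcoef(ffr_in_vitro, ffr_invasive)[0, 1]

print(f"correlation r = {r:.2f}")
print(f"Bland-Altman bias = {bias:.3f}, limits of agreement = "
      f"[{bias - loa:.3f}, {bias + loa:.3f}]")
```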
Proceedings Volume Medical Imaging 2019: Imaging Informatics for Healthcare, Research, and Applications, 109540C (2019) https://doi.org/10.1117/12.2512528
Purpose: 3D printed (3DP) patient-specific vascular phantoms provide the ability to improve device testing and to aid in the course of treatment of vascular disease, while reducing the need for in-vivo experiments. In addition to accurate vascular geometric reproducibility, such phantoms could allow simulation of certain vascular mechanical properties. We investigated various 3DP designs to allow simulation of physiological transmural pressure on phantom vasculature. Materials and Methods: A transparent compliance chamber was created using an Eden260V printer (Stratasys) with VeroClear and acrylic to accommodate 3DP patient-specific vascular phantoms. The patient vascular geometries were acquired from a CT angiogram (Aquilion ONE, Canon Medical) and segmented using a Vitrea workstation (Vital Images). The segmented geometry was manipulated in Autodesk Meshmixer and 3D printed using Agilus. The phantom was integrated into the compliance chamber and connected to a pump that simulated physiologic pulsatile flow waveforms. Compliance of the vessels was varied by filling the chamber with various levels of liquid and air. This setup allowed controlled expansion of the 3DP arteries as a function of the liquid level, while a programmable pump simulated blood flow through the vascular network. The pressure within the vessels was measured for various compliance settings while physiological flow rates were simulated through the arteries. Results: A neurovascular phantom was placed in the chamber and the artery expansion diameter was controlled by changing the liquid level in the compliance chamber. Artery patency and contrast flow were demonstrated using x-ray angiography. The pressures in the left and right internal carotid arteries increased from 98 mmHg to 104 mmHg and from 96 mmHg to 102 mmHg, respectively, while maintaining the same flow rates. Conclusions: 3D printed patient-specific neurovascular phantoms can be manipulated using a compliance chamber to establish physiologically relevant hemodynamic conditions.
Proceedings Volume Medical Imaging 2019: Imaging Informatics for Healthcare, Research, and Applications, 109540E (2019) https://doi.org/10.1117/12.2513026
Deep learning can be used to classify images to verify or correct DICOM header information. One situation where this is useful is the classification of thoracic radiographs that were acquired anteroposteriorly (AP) or posteroanteriorly (PA). A convolutional neural network (CNN) was previously trained and showed strong performance in the task of classifying between AP and PA radiographs, giving a 0.97 ± 0.005 AUC on an independent test set. However, 81% of the AP training set and 24% of the AP independent test set consisted of images with imprinted labels. To evaluate the effect of labels on training and testing of a CNN, the labels on the training images were removed by cropping, and the CNN was retrained on the cropped images with the same training parameters as before. The retrained CNN was tested on the same independent test set and achieved a 0.95 ± 0.007 AUC in the task of classifying between AP and PA radiographs. The p-value between the AUCs of the two networks is 0.002, showing a statistically significant decrease in performance for the network trained on the cropped images. The decrease in performance may be due to the network previously having been trained to recognize imprinted labels, or due to relevant anatomy being cropped along with the label; however, the performance is still high and the network can be incorporated into the clinical workflow.
Proceedings Volume Medical Imaging 2019: Imaging Informatics for Healthcare, Research, and Applications, 109540F (2019) https://doi.org/10.1117/12.2513090
Breast magnetic resonance imaging (MRI) plays an important role in high-risk breast cancer screening, clinical problem-solving, and imaging-based outcome prediction. Breast tumor segmentation in MRI is an essential step for quantitative radiomics analysis, where automated and accurate tumor segmentation is needed but very challenging. Manual tumor annotation by radiologists requires medical knowledge and is time-consuming, subjective, prone to error, and subject to inter-user inconsistency. Several recent studies have shown the ability of deep-learning models in image segmentation. In this work, we investigated a deep-learning-based method to segment breast tumors in Dynamic Contrast-Enhanced MRI (DCE-MRI) scans in both 2D and 3D settings. We implemented our method and evaluated its performance on a dataset of 1,246 breast MR images by comparing the segmentations to manual annotations from expert radiologists. Experimental results showed that the deep-learning-based methods exhibit promising performance, with a best Dice coefficient of 0.92 ± 0.02.
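For reference, the Dice coefficient reported above compares a predicted and a manual tumor mask; the sketch below shows the standard computation for binary masks in NumPy (2D or 3D), with a small smoothing term to avoid division by zero on empty masks. The toy masks are placeholders.

```python
# Dice coefficient between a predicted and a reference binary segmentation mask.
import numpy as np

def dice_coefficient(pred, truth, smooth=1e-6):
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + smooth) / (pred.sum() + truth.sum() + smooth)

# Toy example: two overlapping square "tumors" on a 2D slice.
pred = np.zeros((128, 128), dtype=bool);  pred[40:80, 40:80] = True
truth = np.zeros((128, 128), dtype=bool); truth[50:90, 50:90] = True
print(f"Dice = {dice_coefficient(pred, truth):.3f}")
```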
Proceedings Volume Medical Imaging 2019: Imaging Informatics for Healthcare, Research, and Applications, 109540H (2019) https://doi.org/10.1117/12.2512798
Ophthalmologists use the cup-to-disc ratio as one factor in diagnosing glaucoma. The optic disc in fundus images is the area where blood vessels and optic nerve fibers enter the retina. A cup-to-disc ratio (the diameter of the cup divided by the diameter of the optic disc) greater than 0.3 is considered suggestive of glaucoma. We are therefore developing automatic methods to estimate the optic disc and cup areas and the cup-to-disc ratio. There are four steps: region of interest (ROI) detection from the fundus image (with the optic disc in the center), optic disc segmentation from the ROI, cup segmentation from the optic disc area, and cup-to-disc ratio estimation. This paper proposes an automated method to segment the optic disc from the ROI using deep learning. A Fully Convolutional Network (FCN) with a U-Net architecture is used for the segmentation. We use fundus images from the MESSIDOR dataset, a public dataset containing 1,200 fundus images. We divide the dataset into five equal subsets for training and independent testing (each fold uses four subsets for training and one for testing). The proposed method outperforms other existing algorithms, with a 0.94 Jaccard index, 0.98 sensitivity, 0.99 specificity, and 0.99 accuracy.
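The evaluation described above (five equal subsets, pixel-level metrics) can be expressed compactly with scikit-learn and NumPy, as in the sketch below. The mask arrays and the use of KFold over 1,200 image indices are illustrative assumptions; the metric definitions are the standard ones for binary segmentation.

```python
# Pixel-level Jaccard index, sensitivity, specificity, and accuracy for a binary
# optic-disc mask, plus a 5-fold split of image indices as described in the abstract.
import numpy as np
from sklearn.model_selection import KFold

def segmentation_metrics(pred, truth):
    pred, truth = np.asarray(pred, bool).ravel(), np.asarray(truth, bool).ravel()
    tp = np.sum(pred & truth); tn = np.sum(~pred & ~truth)
    fp = np.sum(pred & ~truth); fn = np.sum(~pred & truth)
    return {
        "jaccard": tp / (tp + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
    }

# Five folds over 1,200 image indices: four subsets train, one tests, in turn.
folds = KFold(n_splits=5, shuffle=True, random_state=0)
for fold, (train_idx, test_idx) in enumerate(folds.split(np.arange(1200))):
    print(f"fold {fold}: {len(train_idx)} training images, {len(test_idx)} test images")
```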
Proceedings Volume Medical Imaging 2019: Imaging Informatics for Healthcare, Research, and Applications, 109540I (2019) https://doi.org/10.1117/12.2512683
Radiomics has shown promising results in several medical studies, yet it suffers from limited discriminative and informative capability as well as high variation with, and correlation to, scanner type, CT (computed tomography) scanner manufacturer, pixel spacing, acquisition protocol, and reconstruction parameters. This paper introduces a new method to transform image features in order to improve their stability across scanner manufacturers and scanner models. The method is based on a two-layer neural network that can learn a non-linear standardization transformation of various types of features, including hand-crafted and deep features. A publicly available database of phantom images with ground truth is used, where the same physical phantom was scanned on 17 different CT scanners. Harmonizing features between scanner manufacturers and models in this setting ensures that remaining variations in extracted features are representative of true physio-pathological tissue changes in scanned patients. The recent success of radiomics studies has often been restricted to relatively controlled environments; to allow comparison of data from several hospitals produced with a larger variety of scanner manufacturers and models, as well as with several protocols, feature standardization seems necessary to keep results comparable.
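A minimal sketch of the two-layer standardization idea: a small fully connected network maps each scanner's feature vector toward the feature representation of the same phantom regions on a chosen reference scanner, so features become comparable across scanners. The shapes, layer sizes, activation, and reference target here are placeholder assumptions, not the paper's exact architecture or training setup.

```python
# Illustrative two-layer feature-standardization network in Keras.
# x_scanner: features extracted on a given scanner; x_reference: features of the same
# phantom regions on a chosen reference scanner (both are placeholder random data here).
import numpy as np
from tensorflow.keras import layers, Model

n_features = 64
x_scanner = np.random.rand(500, n_features).astype("float32")
x_reference = np.random.rand(500, n_features).astype("float32")

inputs = layers.Input(shape=(n_features,))
h = layers.Dense(128, activation="tanh")(inputs)   # layer 1: non-linear mapping
outputs = layers.Dense(n_features)(h)              # layer 2: back to feature space
standardizer = Model(inputs, outputs)

standardizer.compile(optimizer="adam", loss="mse")
standardizer.fit(x_scanner, x_reference, epochs=5, batch_size=32, verbose=0)

# At inference time, features from a scanner are pushed through the learned transform.
harmonized = standardizer.predict(x_scanner[:10])
print(harmonized.shape)   # (10, 64)
```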
Henghui Zhu, Ioannis Ch. Paschalidis, Amir M. Tahmasebi
Proceedings Volume Medical Imaging 2019: Imaging Informatics for Healthcare, Research, and Applications, 109540J (2019) https://doi.org/10.1117/12.2512103
Recurrent Neural Network (RNN) models have been widely used for sequence labeling applications in different domains. This paper presents an RNN-based sequence labeling approach with the ability to learn long-term labeling dependencies. The proposed model has been successfully used for a Named Entity Recognition challenge in healthcare: anatomical phrase labeling in radiology reports. The model was trained on labeled data from a radiology report corpus and tested on two independent datasets. The proposed model achieved promising performance in comparison with other state-of-the-art context-driven sequence labeling approaches.
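As a generic point of reference (not the authors' model, which adds long-term labeling dependencies), a standard RNN sequence-labeling baseline for anatomical-phrase tagging might look like the Keras sketch below: integer-encoded tokens in, one BIO-style label per token out. The vocabulary size, tag set, and sequence length are placeholder assumptions.

```python
# Baseline bidirectional-LSTM sequence tagger for radiology-report tokens (illustrative).
# Assumes integer-encoded tokens and BIO-style labels such as B-ANAT / I-ANAT / O.
from tensorflow.keras import layers, Model

vocab_size, max_len, n_tags = 20000, 100, 3   # placeholder sizes

inputs = layers.Input(shape=(max_len,))
x = layers.Embedding(vocab_size, 128, mask_zero=True)(inputs)   # 0 = padding token
x = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(x)
outputs = layers.TimeDistributed(layers.Dense(n_tags, activation="softmax"))(x)

tagger = Model(inputs, outputs)
tagger.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
tagger.summary()
# tagger.fit(token_ids, tag_ids, ...) would be called with a labeled report corpus.
```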
Proceedings Volume Medical Imaging 2019: Imaging Informatics for Healthcare, Research, and Applications, 109540L (2019) https://doi.org/10.1117/12.2513013
In this study, we propose a multi-space-enabled deep learning method for predicting Oncotype DX recurrence risk categories from digital mammogram images of breast cancer patients. Our study included 189 estrogen receptor-positive (ER+) and node-negative invasive breast cancer patients, all of whom had an Oncotype DX recurrence risk score available. Breast tumors were segmented manually by an expert radiologist. We built a 3-channel convolutional neural network (CNN) model that accepts three-space tumor data: the spatial intensity information and the phase and amplitude components in the frequency domain. We compared this multi-space model to a baseline model based solely on the intensity information. Classification accuracy is based on 5-fold cross-validation and the average area under the receiver operating characteristic curve (AUC). Our results showed that the 3-channel multi-space CNN model achieved a statistically significant improvement over the baseline model.
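The three-space input described above can be assembled from a single tumor patch with a 2D FFT, as in the sketch below: channel 1 is the spatial intensity, channels 2 and 3 are the frequency-domain amplitude and phase. The patch size, log compression, and per-channel normalization are placeholder choices, not necessarily the authors' preprocessing.

```python
# Build the 3-channel "multi-space" input (intensity, FFT amplitude, FFT phase)
# from a single mammogram tumor patch. The patch here is random placeholder data.
import numpy as np

def multi_space_channels(patch):
    spectrum = np.fft.fftshift(np.fft.fft2(patch))
    amplitude = np.log1p(np.abs(spectrum))            # log-compress the dynamic range
    phase = np.angle(spectrum)
    channels = np.stack([patch, amplitude, phase], axis=-1)
    # Normalize each channel to [0, 1] so the CNN sees comparable ranges.
    mins = channels.min(axis=(0, 1), keepdims=True)
    maxs = channels.max(axis=(0, 1), keepdims=True)
    return (channels - mins) / (maxs - mins + 1e-8)

patch = np.random.rand(128, 128).astype("float32")    # stand-in for a segmented tumor patch
x = multi_space_channels(patch)
print(x.shape)   # (128, 128, 3) -> fed to the 3-channel CNN
```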
Proceedings Volume Medical Imaging 2019: Imaging Informatics for Healthcare, Research, and Applications, 109540M (2019) https://doi.org/10.1117/12.2513098
Deep learning models based on Convolutional Neural Networks (CNN) are known as successful tools in many classification and segmentation studies. Although these tools can achieve impressive performance, we still lack effective means to interpret the models, the features, and the associated input data to understand how a model works well in a data-driven manner. In this paper, we propose a novel investigation to interpret a deep-learning-based model for breast cancer risk prediction using screening digital mammogram images. First, we build a CNN-based risk prediction model using normal screening mammogram images. We then develop two separate schemes to explore interpretability. In Scheme 1, we apply a sliding-window approach to modify the input images; that is, we keep only the sub-regional imaging data inside the sliding window, pad the other regions with zeros, and observe how such sub-regional input changes the model's performance. We generate heatmaps of the AUCs over all sliding windows and show that these heatmaps can help interpret a potential correlation between a given sliding window and the variation in model AUC. In Scheme 2, we follow a saliency-map-based approach to create a Contribution Map (CM), where the CM value of each pixel reflects how strongly that pixel contributes to the prediction of the output label. Over a CM, we then identify a bounding box around the most informative sub-area to interpret the corresponding region of the image as the region most predictive of risk. This preliminary study demonstrates a proof of concept for developing effective means to interpret deep learning CNN models.
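Scheme 1 can be paraphrased as an occlusion-style sweep: for each window position, everything outside the window is zeroed, the model is re-evaluated, and the resulting AUC is written into a heatmap. The sketch below assumes a trained binary-risk model with a Keras-style predict method; the window size, stride, and array shapes are placeholder assumptions.

```python
# Sliding-window AUC heatmap: keep only the pixels inside the window, zero the rest,
# and record the model's AUC at each window position (Scheme 1, paraphrased).
import numpy as np
from sklearn.metrics import roc_auc_score

def auc_heatmap(model, images, labels, window=64, stride=32):
    """images: (N, H, W, 1) float array; labels: (N,) binary risk labels."""
    n, h, w, _ = images.shape
    rows = (h - window) // stride + 1
    cols = (w - window) // stride + 1
    heatmap = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            y0, x0 = r * stride, c * stride
            masked = np.zeros_like(images)
            masked[:, y0:y0 + window, x0:x0 + window, :] = \
                images[:, y0:y0 + window, x0:x0 + window, :]
            scores = model.predict(masked, verbose=0).ravel()
            heatmap[r, c] = roc_auc_score(labels, scores)
    return heatmap   # high values flag sub-regions that alone support the prediction
```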
Economics, Regulations, and Practice Innovation in Medical Imaging
Proceedings Volume Medical Imaging 2019: Imaging Informatics for Healthcare, Research, and Applications, 109540N (2019) https://doi.org/10.1117/12.2509823
The purpose of this paper is to investigate automatic evaluation of the quality of patient positioning and field of view (FoV) in head CT scans. Studies have shown an elevated risk of radiation-induced cataract in patients undergoing head CT examinations. The American Association of Physicists in Medicine (AAPM) published a protocol for head CT scans including requirements linking the optimal scan angle to anatomic landmarks in the skull. To help sensitize staff to the need for correct patient positioning, a software-based tool detecting non-optimal patient positioning was developed. Our experiments were conducted on 209 head CT exams acquired at the University Medical Center Hamburg-Eppendorf (UKE). All of these examinations were done on the same Philips iCT scanner. Each exam contains a 3D volume with an in-plane voxel spacing of 0.44 mm x 0.44 mm and a slice distance of 1 mm. As ground truth, anatomic landmarks on the skull were annotated independently by three different readers. We applied an atlas registration technique to map CT scans to a probabilistic anatomical atlas. For a new CT scan, previously defined model landmarks are mapped back to the CT volume when registering it to the atlas, thus labelling new head CT scans. From the locations of the detected landmarks we derive the deviation of the actual head angulation and scan length from the optimal values. Furthermore, the presence of the eye lenses in the FoV is predicted. The median error of the estimated landmark positions, measured as the distance to the plane generated from the ground-truth landmark positions, is below 1 mm and comparable to the interobserver variability. A classifier for predicting the presence of the eye lenses in the FoV from the estimated landmark locations achieves a κ value of 0.74. Furthermore, there is moderate agreement between the estimated deviations from optimal head tilt and scan length and an expert's rating.
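The landmark error metric mentioned above (distance from an estimated landmark to the plane defined by ground-truth landmarks) is plain geometry; a small sketch, assuming three non-collinear ground-truth points in scanner coordinates:

```python
# Distance from an estimated landmark to the plane spanned by three ground-truth landmarks.
import numpy as np

def point_to_plane_distance(point, plane_points):
    p0, p1, p2 = (np.asarray(p, dtype=float) for p in plane_points)
    normal = np.cross(p1 - p0, p2 - p0)
    normal /= np.linalg.norm(normal)
    return abs(np.dot(np.asarray(point, dtype=float) - p0, normal))

# Toy example in mm: plane through three skull landmarks, estimate 0.8 mm off-plane.
gt = [(0.0, 0.0, 0.0), (50.0, 0.0, 0.0), (0.0, 40.0, 0.0)]
print(point_to_plane_distance((10.0, 10.0, 0.8), gt))   # -> 0.8
```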
John Stroud, Karl Stupic, Tucker Walsh, Zbigniew Celinski, Janusz H. Hankiewicz
Proceedings Volume Medical Imaging 2019: Imaging Informatics for Healthcare, Research, and Applications, 109540O (2019) https://doi.org/10.1117/12.2512903
It has previously been shown that heat induced in metallic objects by fast-switching magnetic gradients can pose a possible danger not only to patients undergoing MRI procedures but also to medical staff present. In this study, we investigate magnetic flux changes and the effects of local heating in metallic elements by eddy currents due to fast-switching magnetic gradients at several specific positions with respect to the isocenter of the gradient coils. Experiments were performed in a 3 T preclinical scanner with a 30 cm bore. To probe the induced electromotive force (EMF), which results in induced eddy currents, small pickup coils were used at various locations in the scanner. To investigate heating, metallic cylinders (12.5 mm diameter, 12.6 mm height) were prepared with a pinhole to accommodate miniature fiber-optic temperature sensors. A gradient echo axial imaging sequence was applied for 4 minutes after achieving thermal equilibrium with the magnet. Three different materials were used in this study: copper, aluminum, and Ti-6Al-4V, a titanium alloy commonly used in orthopedic implants. In these experiments, a direct dependence of the induced EMF on position and gradient strength was observed. As expected, the increase in temperature depended directly on distance and material composition. Heating can be a danger during imaging of patients with conductive implants that are placed away from the isocenter. Our findings are thus vital to patient safety and comfort during MRI procedures.
Proceedings Volume Medical Imaging 2019: Imaging Informatics for Healthcare, Research, and Applications, 109540P (2019) https://doi.org/10.1117/12.2512910
The purpose of this work was to investigate the feasibility of utilizing an electronic medical record system, Epic (Epic Systems Corporation, Madison, WI), for tracking patient radiation doses following interventional fluoroscopy procedures. Following an interventional procedure, before an examination is marked as completed, the new technologist workflow requires entering the cumulative air kerma and the dose/kerma-area product reported by the interventional system into Epic. Data reported by the technologists are stored in Epic's production database, then converted and stored in Epic's analytical database, and accessed via a customized SQL query. Selectrus was used as a report delivery tool for internal extracts. Patient-specific radiation dose audits are performed using Epic's front-facing graphical user interface, Hyperspace. Using this platform, periodic monitoring and auditing of radiation dose data has been successfully implemented at our institution for over a year. All cases where radiation dose estimates have exceeded recommended thresholds have been audited and followed up with the respective physicians. Limitations of this approach include manual data entry by the technologists following the procedure, which may be subject to human error. The most common error (<1% of reviewed cases) is switching the cumulative air kerma and kerma/dose-area product values; this error is easily identified because the kerma/dose-area product is greater than the cumulative air kerma. Tracking patient radiation doses following interventional fluoroscopy procedures using the Epic electronic medical record system is feasible, and this type of setup is configurable at other healthcare institutions that use Epic as their electronic medical record system.
Proceedings Volume Medical Imaging 2019: Imaging Informatics for Healthcare, Research, and Applications, 109540Q (2019) https://doi.org/10.1117/12.2512686
Combat casualty care is a subfield of emergency medicine that requires intense situational awareness, encyclopedic knowledge, split-second decision making, and high-performing technology. Training medics with these skills requires much time and effort, yet even with the best training, medics can still experience numerous challenges. Artificial intelligence (AI) could offer numerous positive benefits in combat casualty care, but also has significant drawbacks and pitfalls. As a result, there is a vast, multi-dimensional space of possible AI systems and implications to be investigated. Given this context, it would be beneficial to develop a system for guiding research and development efforts in this arena. This paper describes our initial efforts to build a decision-making framework for that purpose. This framework should benefit the field of combat casualty care in at least two ways. First, the framework will support a comprehensive and holistic view of AI applications. Second, it should help to prioritize areas and techniques for research investments.
Proceedings Volume Medical Imaging 2019: Imaging Informatics for Healthcare, Research, and Applications, 109540R (2019) https://doi.org/10.1117/12.2513050
Schizophrenia (SZ) is a chronic and severe mental disorder that affects how a person thinks, feels, and behaves. It has been proposed that this disorder is related to disrupted brain connectivity, which has been verified by many studies, but its underlying mechanism is still unclear. Recent advances have combined heterogeneous data including both medical images (e.g., fMRI) and genomic data (e.g., SNPs and DNA methylation), giving rise to a new perspective on SZ. In this paper, we aim to explore the associations between DNA methylation and various brain regions to shed light on the neuro-epigenetic interactions in SZ. We propose a joint Gaussian copula model, using the Gaussian copula to address the data integration issue and joint network estimation for different conditions (a case-control study). Unlike previous studies using methods such as CCA or ICA, the proposed method can provide not only the neuro-epigenetic interactions but also the brain connectivity and methylation self-interactions, all at the same time. The data we used were collected by the Mind Clinical Imaging Consortium (MCIC) and include fMRI images and epigenetic information such as methylation levels. The data were from 183 subjects: 79 SZ patients and 104 healthy controls. We have identified several hub brain regions and hub DNA methylation sites in the SZ patients and have also detected 10 methylation-brain-ROI interactions for SZ. Our analysis results are shown to be both statistically and biologically significant.
Proceedings Volume Medical Imaging 2019: Imaging Informatics for Healthcare, Research, and Applications, 109540S (2019) https://doi.org/10.1117/12.2512045
Radiomics analysis has been shown to have considerable potential for treatment assessment, cancer genetics analysis, and clinical decision support. A broad set of quantitative features extracted from medical images is expected to support a descriptive and predictive model relating the image features to phenotypes or gene-protein signatures. As a common wrapper strategy, the Backward Feature Elimination (BFE) algorithm is widely used to reduce the dimensionality of the feature space. In this paper, we propose an effective BFE algorithm utilizing Random Forest (RF) to automatically select the optimal feature subset and use it to predict EGFR mutations from CT images. First, the whole dataset was shuffled and the features were ranked by RF importance measures. Then, LASSO regression was used iteratively to perform both regularization and accuracy calculation in the BFE, ending when further removals no longer resulted in an improvement, to obtain a series of feature subsets. Lastly, we gathered all the feature subsets in a feature counter, and the final feature subset was determined by hard voting with equal weights. The dataset consists of 130 CT image series with EGFR-mutated lung adenocarcinoma harboring Ex19 (n=56) and Ex21 (n=74) mutations, and more than 2,000 radiomic features were extracted from each series. Seven features were selected to predict the EGFR mutation type, all of them derived from wavelet- and Gabor-filtered images. The best classification result (AUC 0.74; 95% CI, 0.67-0.84) was reached with a K-nearest neighbors (KNN) model.
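A simplified sketch of the RF-ranked backward elimination loop: features are ordered by random-forest importance, then dropped from least to most important while a cross-validated L1-regularized (LASSO-style) logistic model keeps improving. The synthetic data, fold count, and stopping rule are placeholders, and the paper's voting over repeated shuffles is omitted.

```python
# Simplified backward feature elimination guided by random-forest importances,
# scored with an L1-penalized (LASSO-style) logistic regression. Synthetic data only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=130, n_features=200, n_informative=10, random_state=0)

# Rank features once by random-forest importance (least important first).
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
order = np.argsort(rf.feature_importances_)

def cv_auc(feature_idx):
    lasso = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
    return cross_val_score(lasso, X[:, feature_idx], y, cv=5, scoring="roc_auc").mean()

keep = list(range(X.shape[1]))
best = cv_auc(keep)
for f in order:                       # try removing features, least important first
    candidate = [i for i in keep if i != f]
    score = cv_auc(candidate)
    if score >= best:                 # keep the removal only if it does not hurt
        keep, best = candidate, score

print(f"{len(keep)} features kept, CV AUC = {best:.3f}")
```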
Proceedings Volume Medical Imaging 2019: Imaging Informatics for Healthcare, Research, and Applications, 109540T (2019) https://doi.org/10.1117/12.2513018
The current health care approach to chronic conditions such as glaucoma has limitations in access to expert care and in meeting the growing needs of the larger population of older adults who will develop glaucoma. Computer-aided diagnosis (CAD) systems show great promise to fill this gap. Our purpose is to expand the initial fundus dataset called Retinal fundus Images for Glaucoma Analysis (RIGA) to develop collaborative image processing methods that automate quantitative optic nerve assessments from fundus photos. All subjects were women enrolled in an IRBMED protocol. The fundus photographs were taken using a Digital Retinography System (DRS), which is dedicated to diabetic retinopathy screening. Of the initial 245 photos, 166 met quality assurance metrics for analysis and serve as the RIGA2 dataset. Three glaucoma fellowship-trained ophthalmologists performed various tasks on these photos. In addition, the cup-to-disc ratio (CDR) and the neuroretinal rim thickness of the subjects were assessed by slit-lamp biomicroscopy and served as the gold standard measure. The RIGA2 dataset is an additional resource of 2D color disc photos and multiple extracted features that serves the research community as a form of crowd-sourced analytical power in the growing teleglaucoma field.
Proceedings Volume Medical Imaging 2019: Imaging Informatics for Healthcare, Research, and Applications, 109540U (2019) https://doi.org/10.1117/12.2512138
This paper proposes a new approach to designing a medical image-sharing service network based on professional medical imaging centers (PMICs). PMICs are known for advanced imaging modalities and expert resources. The network connects clinics, hospitals, and PMICs to provide collaborative diagnosis, consultation, mobile expert consulting, and medical imaging artificial intelligence (AI) analysis services through the Internet. It allows patients to be registered at a hospital and examined at a PMIC, and it allows patient examinations to be scheduled and viewed from mobile devices. It also provides AI analysis for some specific kinds of medical images, such as carotid plaque and breast cancer images, to help doctors reach accurate conclusions. The network uses a flexible three-layer architecture with secure messaging and data communication: data source, service cloud, and service provider. It has been deployed at the Guangzhou Huyun Medical Imaging Diagnosis Center since July 2018 to provide services for the First People's Hospital of Guangzhou.
Proceedings Volume Medical Imaging 2019: Imaging Informatics for Healthcare, Research, and Applications, 109540V (2019) https://doi.org/10.1117/12.2512758
3D dose reconstruction for radiotherapy (RT) is the estimation of the 3D radiation dose distribution patients received during RT. Large amounts of dose reconstruction data are needed to accurately model the relationship between the dose and the onset of adverse effects, to ultimately gain insights and improve today's treatments. Dose reconstruction is often performed by emulating the original RT plan on a surrogate anatomy for dose estimation. This is especially essential for historically treated patients with long-term follow-up, as solely 2D radiographs were used for RT planning and no 3D imaging was acquired for these patients. Performing dose reconstruction for a large group of patients requires a large amount of manual work, where the geometry of the original RT plan is emulated on the surrogate anatomy by visually comparing the latter with the original 2D radiograph of the patient. This is a labor-intensive process that needs to be automated for practical use. This work presents an image-processing pipeline to automatically emulate plans on surrogate computed tomography (CT) scans. The pipeline was designed for childhood cancer survivors who historically received abdominal RT with an anterior-to-posterior and posterior-to-anterior field set-up. First, anatomical landmarks are automatically identified on 2D radiographs. Next, these landmarks are used to derive the parameters needed to emulate the plan on a surrogate CT. Validation was performed by an experienced RT planner, who visually assessed 12 cases of automatic plan emulation. The automatic emulations were approved 11 out of 12 times. This work paves the way to effortless scaling of dose reconstruction data generation.
Proceedings Volume Medical Imaging 2019: Imaging Informatics for Healthcare, Research, and Applications, 109540W (2019) https://doi.org/10.1117/12.2512789
In this study, we analyzed baseline CT- and MRI-based image features of the salivary glands to predict radiation-induced xerostomia after head-and-neck cancer (HNC) radiotherapy. A retrospective analysis was performed on 216 HNC patients who were treated with radiotherapy at a single institution between 2009 and 2016. CT and T1 post-contrast MR images, along with the NCI-CTCAE xerostomia grade (3-month follow-up), were prospectively collected at our institution. Image features were extracted for the ipsilateral/contralateral parotid and submandibular glands relative to the location of the primary tumor. Dose-volume-histogram (DVH) parameters were also acquired. Features that were correlated with xerostomia (p<0.05) were further reduced using LASSO logistic regression. Generalized linear model (GLM) and support vector machine (SVM) classifiers were used to predict xerostomia under five conditions (DVH-only, CT-only, MR-only, CT+MR, and DVH+CT+MR) using ten-fold cross-validation. Prediction performance was determined using the area under the receiver operating characteristic curve (ROC-AUC), and DeLong's test was used to determine differences between the ROC curves. Among the extracted features, 13 CT, 6 MR, and 4 DVH features were selected. The ROC-AUC values for the GLM/SVM classifiers with DVH, CT, MR, CT+MR, and all features were 0.72±0.01/0.72±0.01, 0.73±0.01/0.68±0.01, 0.68±0.01/0.63±0.01, 0.74±0.01/0.75±0.01, and 0.78±0.01/0.79±0.01, respectively. DeLong's test demonstrated an improvement in AUC for both classifiers when all features were combined, compared to DVH, CT, and MR alone (p<0.05) and to the SVM CT+MR model (p=0.03). The integration of baseline image features into prediction models has the potential to improve xerostomia risk stratification, with the ultimate goal of personalized HNC radiotherapy.
Proceedings Volume Medical Imaging 2019: Imaging Informatics for Healthcare, Research, and Applications, 109540X (2019) https://doi.org/10.1117/12.2513809
Categorization of radiological images according to characteristics such as modality, scanner parameters, and body part is important for quality control, clinical efficiency, and research. The metadata associated with images stored in the DICOM format reliably captures scanner settings such as tube current in CT or echo time (TE) in MRI. Other parameters, such as image orientation, body part examined, and presence of intravenous contrast, however, are not inherent to the scanner settings and therefore require user input, which is prone to human error. There is a general need for automated approaches that will appropriately categorize images, even with parameters that are not inherent to the scanner settings. These approaches should be able to process both planar 2D images and full 3D scans. In this work, we present a deep learning based approach for automatically detecting one such parameter: the presence or absence of intravenous contrast in 3D MRI scans. Contrast is manually injected by radiology staff during the imaging examination, and its presence cannot be automatically recorded in the DICOM header by the scanner. Our classifier is a convolutional neural network (CNN) based on the ResNet architecture. Our data consisted of 1000 breast MRI scans (500 with and 500 without intravenous contrast), split 80%/20% for training and testing the CNN. The labels for the scans were obtained from the series descriptions created by certified radiological technologists. Preliminary results of our classifier are very promising, with an area under the ROC curve (AUC) of 0.98 and sensitivity and specificity of 1.0 and 0.9, respectively (at the optimal ROC cut-off point), demonstrating potential usefulness in both clinical and research settings.
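As a minimal, hedged sketch (not the authors' implementation), a torchvision ResNet can be adapted to a two-class contrast/non-contrast problem; the single-channel input, image size, and training step below are illustrative assumptions.

# Hedged sketch: ResNet adapted to binary classification of contrast presence.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=None)            # on older torchvision: pretrained=False
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)  # 1-channel MRI input
model.fc = nn.Linear(model.fc.in_features, 2)    # contrast vs. no contrast

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

x = torch.randn(4, 1, 224, 224)                  # placeholder MRI slices
y = torch.tensor([0, 1, 0, 1])
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()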
Precision Medicine, Correlative Analytics, and Translational Research
Proceedings Volume Medical Imaging 2019: Imaging Informatics for Healthcare, Research, and Applications, 109540Y (2019) https://doi.org/10.1117/12.2513075
Currently, the methods used to develop radiation therapy treatment plans for head and neck cancers rely on clinician experience and a small set of universal guidelines, which results in inconsistent and variable planning. Data-driven support can assist clinicians by reducing the inconsistency associated with treatment planning and by providing empirical estimates to minimize the radiation to healthy organs near the tumor. We created a database of DICOM RT objects that stores historical cases; when a new DICOM object is uploaded, the system returns a set of similar treatment plans to assist the clinician in creating the treatment plan for the current patient. The database works by first extracting features from the DICOM RT object to quantitatively compare and evaluate the similarity of cases, enabling the system to mine for cases with a defined similarity. The feature extraction methods are based on the spatial relationships between the tumors and organs at risk, which allows the generation of the overlap volume histogram and spatial target similarity measures that capture the volumetric and locational similarity between the organ at risk and the tumor. Finding cases with similar tumor anatomy is useful because this similarity translates to similarity in radiation dosage. The developed system was applied to three RT sites, University of California Los Angeles, the Technical University of Munich, and the State University of New York at Buffalo (Roswell Park), with a total of 247 cases, to evaluate the system for both inter- and intra-institutional best practices and results. A future roadmap for correlating outcome results with the decision support system will be discussed, which will enhance the overall performance and utilization of the decision support system in the RT workflow. Because this database returns historical cases similar to the current one, it could become a worthwhile decision support tool for clinicians as they create new radiation therapy treatment plans.
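One common way to compute an overlap volume histogram (OVH) of the kind described is via a signed distance transform of the target mask; the sketch below is an assumption-laden illustration (synthetic masks and spacing), not the system's code.

# Hedged sketch: OVH as the cumulative fraction of organ-at-risk (OAR) volume
# lying within a given distance of the target.
import numpy as np
from scipy.ndimage import distance_transform_edt

def overlap_volume_histogram(target_mask, oar_mask, spacing, distances):
    outside = distance_transform_edt(~target_mask, sampling=spacing)  # distance to target, outside
    inside = distance_transform_edt(target_mask, sampling=spacing)    # distance to surface, inside
    signed = outside - inside                                         # negative inside the target
    d_oar = signed[oar_mask]
    return [(d_oar <= t).mean() for t in distances]

target = np.zeros((60, 60, 60), dtype=bool); target[25:35, 25:35, 25:35] = True
oar = np.zeros_like(target); oar[10:25, 20:40, 20:40] = True
ovh = overlap_volume_histogram(target, oar, spacing=(2.0, 1.0, 1.0),
                               distances=np.arange(0, 40, 5))
print(ovh)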
Proceedings Volume Medical Imaging 2019: Imaging Informatics for Healthcare, Research, and Applications, 109540Z (2019) https://doi.org/10.1117/12.2515660
Case-based reasoning (CBR) with image retrieval can be used to implement a clinical decision support system for supporting the diagnosis of space-occupying lesions. We present a case-based image retrieval (CBIR) system to retrieve images with lesions similar to the input test image. Here we consider only glioblastoma and lung cancer lesions; the lung cancer lesions can be either nodules or cysts. A feature database has been created, and the processing of a query is conducted in real time. Using a bag of visual words (BOVW), a histogram of features is compared against the codebook to retrieve similar images. The experiments, performed at various levels, retrieved relevant and similar lesion images with a mean average precision of 0.85. The presented system is expected to aid and improve the effectiveness of diagnosis performed by radiologists.
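For readers unfamiliar with the bag-of-visual-words step, the sketch below shows one typical form of it (SIFT descriptors quantized against a learned codebook, query histograms compared by cosine similarity); the codebook size, descriptors, and database are synthetic placeholders, not the authors' setup.

# Hedged sketch of BOVW-based retrieval.
import numpy as np
import cv2
from sklearn.cluster import MiniBatchKMeans

def bovw_histogram(image, codebook):
    sift = cv2.SIFT_create()
    _, desc = sift.detectAndCompute(image, None)
    if desc is None:
        return np.zeros(codebook.n_clusters)
    words = codebook.predict(desc.astype(np.float32))
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / (hist.sum() + 1e-8)

# Codebook learned offline from descriptors pooled over the training images.
train_desc = np.random.rand(5000, 128).astype(np.float32)     # placeholder descriptors
codebook = MiniBatchKMeans(n_clusters=200, random_state=0).fit(train_desc)

query = (np.random.rand(256, 256) * 255).astype(np.uint8)     # placeholder slice
q = bovw_histogram(query, codebook)
db = np.random.rand(100, 200)                                 # stored database histograms
scores = db @ q / (np.linalg.norm(db, axis=1) * np.linalg.norm(q) + 1e-8)
top5 = np.argsort(scores)[::-1][:5]                           # indices of most similar cases
print(top5)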
Proceedings Volume Medical Imaging 2019: Imaging Informatics for Healthcare, Research, and Applications, 1095410 (2019) https://doi.org/10.1117/12.2515588
Advances in medical imaging technologies have led to the generation of large databases of high-resolution image volumes. To retrieve images with pathology similar to the one under examination, we propose a content-based image retrieval (CBIR) framework for medical image retrieval using a deep convolutional neural network (CNN). We present retrieval results for medical images using a pre-trained neural network, ResNet-18. A multi-modality dataset containing twenty-three classes and four modalities (computed tomography (CT), magnetic resonance imaging (MRI), mammography (MG), and positron emission tomography (PET)) is used to demonstrate our method. We obtain an average classification accuracy of 92% and a mean average precision of 0.90 for retrieval. The proposed method can assist in clinical diagnosis and in training radiologists.
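A minimal sketch of deep-feature retrieval with a pre-trained ResNet-18 is shown below; the preprocessing, database construction, and tensors are placeholders, and this is not presented as the authors' code.

# Hedged sketch: embed images with the penultimate ResNet-18 layer, retrieve by cosine similarity.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)  # older torchvision: pretrained=True
backbone.fc = nn.Identity()        # keep the 512-d global-pooled embedding
backbone.eval()

@torch.no_grad()
def embed(batch):                  # batch: (N, 3, 224, 224) preprocessed images
    feats = backbone(batch)
    return nn.functional.normalize(feats, dim=1)

database = embed(torch.randn(100, 3, 224, 224))   # placeholder image database
query = embed(torch.randn(1, 3, 224, 224))
similarity = database @ query.T                    # cosine similarity against all stored images
top_k = similarity.squeeze(1).topk(5).indices
print(top_k)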
Proceedings Volume Medical Imaging 2019: Imaging Informatics for Healthcare, Research, and Applications, 1095411 (2019) https://doi.org/10.1117/12.2513097
Breast cancer is one of the most common malignant tumors in females. Adjuvant chemotherapy is a common method of breast cancer treatment, but not all patients will benefit from the treatment. The purpose of this study is to predict the prognosis of breast cancer and stratify patients at high risk of recurrence by radiomic analysis based on dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) and gene expression data. We performed this study in three steps. First, a retrospective single-institution cohort of 61 patients with invasive breast cancer was enrolled. We extracted quantitative imaging features depicting tumor enhancement patterns and screened for those that were potentially prognostic for survival. Multivariate Cox regression analysis showed that the image feature of inverse difference was independently associated with recurrence-free survival (RFS), with a P value of 0.0371. Second, we built a regression model with a 74-gene signature from 87 patients whose MRI and gene expression data were available, associating the genes with the prognostic image feature identified in the first dataset. Finally, we validated the prognostic value of the established signature by applying it to publicly available genomic datasets. Using the 74-gene signature in the independent validation set, 1010 patients were divided into two groups, stratifying patients for RFS (log-rank P=0.011) and overall survival (OS, log-rank P=0.029) across survival risk levels. Our results showed that imaging features representing tumor biological characteristics are valuable in predicting the prognosis of breast cancer.
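For the multivariate Cox step, a sketch using the lifelines library (one common choice, not necessarily the authors' software) is shown below with a fully synthetic data frame.

# Hedged sketch: Cox proportional hazards model relating an imaging feature to RFS.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "inverse_difference": rng.normal(size=61),    # placeholder DCE-MRI texture feature
    "age": rng.normal(55, 10, size=61),
    "rfs_months": rng.exponential(40, size=61),   # recurrence-free survival time
    "recurred": rng.integers(0, 2, size=61),      # event indicator (1 = recurrence)
})

cph = CoxPHFitter()
cph.fit(df, duration_col="rfs_months", event_col="recurred")
cph.print_summary()   # hazard ratios and p-values per covariate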
Proceedings Volume Medical Imaging 2019: Imaging Informatics for Healthcare, Research, and Applications, 1095412 (2019) https://doi.org/10.1117/12.2511649
We are investigating the association of mammographic density with breast cancer occurrence. With IRB approval, we collected cases of women with screening-detected breast cancer and controls. A total of 2028 patients, including 329 cases, were collected from the screening cohort at our institution. An experienced MQSA radiologist retrospectively reviewed the earliest available digital mammograms (DMs) and assessed breast density in terms of BI-RADS categories and percent density (PD) estimated by interactive thresholding. Survival models were built based on BI-RADS categories and on strata of PD measures, respectively. Using the pairwise log-rank test, we observed a statistically significant difference at the 5% level between BI-RADS categories A and C, A and D, B and C, and B and D. Similarly, we found a significant difference between the curves for women with <10% density and with 20-34% density, between women with <10% density and with ≥35% density, and between women with 10-19% density and with ≥35% density. A multivariate Cox proportional hazards model was constructed using backwards variable selection with age, BI-RADS density, PD strata, and PD as independent factors. At the 5% level, the results indicated that age and PD had statistically significant influences on occurrence time. While age was a borderline protective factor (regression coefficient < 0, hazard ratio HR=0.99, p=.0506), PD was a risk factor (regression coefficient > 0, hazard ratio HR=1.02, p=.0001) for breast cancer occurrence. Our results showed that breast density plays an important role in the risk and occurrence of breast cancer.
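The pairwise log-rank comparison between density strata can be illustrated as below; the survival times, events, and group sizes are synthetic placeholders, and lifelines is shown only as one possible toolkit.

# Hedged sketch: Kaplan-Meier fit and pairwise log-rank test between two density strata.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)
time_a = rng.exponential(120, size=300)    # months to occurrence or censoring, stratum A
time_c = rng.exponential(80, size=300)     # stratum C
event_a = rng.integers(0, 2, size=300)     # 1 = cancer occurred
event_c = rng.integers(0, 2, size=300)

km = KaplanMeierFitter()
km.fit(time_a, event_observed=event_a, label="BI-RADS A")   # survival curve for one stratum

result = logrank_test(time_a, time_c,
                      event_observed_A=event_a, event_observed_B=event_c)
print("pairwise log-rank p =", result.p_value)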
Proceedings Volume Medical Imaging 2019: Imaging Informatics for Healthcare, Research, and Applications, 1095413 (2019) https://doi.org/10.1117/12.2512417
Recent developments in deep learning techniques have gained significant attention in medical image analysis. Deep learning techniques have been shown to give promising results in automating various medical image tasks, such as segmentation of organs, precise delineation of lesions, and automated disease diagnosis. We have demonstrated the utility of deep learning models for finding associations between brain imaging phenotypes and molecular subtype. In this study, magnetic resonance (MR) images of brains with glioblastoma multiforme (GBM) were used. The Cancer Genome Atlas (TCGA) has grouped GBM into four distinct subtypes, namely Mesenchymal, Neural, Proneural, and Classical. The subtype classification is defined by genomic characteristics, survival outcomes, patient age, and response to treatment. Identification of molecular subtype and its associated imaging phenotype could aid in developing precision medicine and personalized treatments for patients. The MR imaging data and molecular subtype information were retrospectively obtained from The Cancer Imaging Archive (TCIA) for patients with high-grade gliomas. From the TCIA, 123 patient cases were manually identified that had the following four MR sequences: a) T1-weighted (T1), b) post-contrast T1-weighted (T1c), c) T2-weighted (T2), and d) T2 fluid-attenuated inversion recovery (FLAIR). The MR dataset was split into 92 cases for training and 31 for testing. Pre-processing of the MR images involved skull stripping, co-registration of the MR sequences to T1c, re-sampling of the MR volumes to isotropic voxels, and segmentation of the brain lesion. The lesions in the MR volumes were automatically segmented using a convolutional neural network (CNN) trained on the BraTS 2017 segmentation challenge dataset. From the segmentation maps, 64×64×64 cube patches centered on the tumor were extracted from all four MR sequences, and a 3D convolutional neural network was trained for molecular subtype classification. On the held-out test set, our approach achieved a classification accuracy of 90%. These results on the TCIA dataset highlight the emerging role of deep learning in understanding molecular markers from non-invasive imaging phenotypes.
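The shape of such a 3D classification network can be sketched as below (four input channels for T1/T1c/T2/FLAIR, four output subtypes); the layer counts and widths are assumptions for illustration, not the paper's architecture.

# Hedged sketch: small 3D CNN over 64x64x64 multi-sequence patches.
import torch
import torch.nn as nn

class Subtype3DCNN(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(4, 16, 3, padding=1), nn.BatchNorm3d(16), nn.ReLU(),
            nn.MaxPool3d(2),                         # 64 -> 32
            nn.Conv3d(16, 32, 3, padding=1), nn.BatchNorm3d(32), nn.ReLU(),
            nn.MaxPool3d(2),                         # 32 -> 16
            nn.Conv3d(32, 64, 3, padding=1), nn.BatchNorm3d(64), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = Subtype3DCNN()
patch = torch.randn(2, 4, 64, 64, 64)    # T1, T1c, T2, FLAIR patches (placeholder)
print(model(patch).shape)                # (2, 4) subtype logits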
Proceedings Volume Medical Imaging 2019: Imaging Informatics for Healthcare, Research, and Applications, 1095414 (2019) https://doi.org/10.1117/12.2512742
In particle therapy, sub-millimeter-sized heterogeneities such as lung tissue cause a Bragg peak degradation, which should be considered in treatment planning to ensure an optimal dose distribution in tumor tissue. Extensive experiments could be carried out to determine the magnitude of this degradation; more convenient and reproducible, however, is the use of our mathematical model to describe the degradation properties of lung tissue and to design 3D-printable substitutes based on high-resolution CT images of human lung samples. High-resolution CT images of human lung samples (resolution: 4 μm) were used to create binary cubic datasets with voxels corresponding to either air or lung tissue. The number of tissue voxels is calculated along the z-axis for every lateral position; this represents the “tissue length” for all particle paths of a parallel beam through the dataset. The square-based lung substitute is divided into columns with different heights corresponding to the occurring tissue lengths. Each column’s lateral extent corresponds to how often its tissue length occurs in the dataset. The lung substitutes were validated by Monte Carlo simulations with the Monte Carlo toolkit TOPAS. The simulations showed that the depth dose distributions, and hence the Bragg peak degradations, of the lung substitutes mimic the degradation of the corresponding lung tissue samples.
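The tissue-length tabulation described can be illustrated with a few lines of array code; the binary volume below is a random placeholder, not a real lung sample.

# Hedged sketch: per-lateral-position tissue length along the beam (z) axis, and
# the frequency of each length, which sizes the substitute's columns.
import numpy as np

voxel_size_um = 4.0
volume = np.random.rand(256, 256, 512) > 0.7    # placeholder binary volume: True = tissue, False = air

tissue_length = volume.sum(axis=2) * voxel_size_um            # (x, y) map of tissue length in um
lengths, counts = np.unique(tissue_length, return_counts=True)
# Each unique tissue length becomes a column height; its count determines the
# column's lateral area in the 3D-printable substitute.
print(lengths[:5], counts[:5])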
Proceedings Volume Medical Imaging 2019: Imaging Informatics for Healthcare, Research, and Applications, 1095415 (2019) https://doi.org/10.1117/12.2513155
In this study, we present a 3D-printing-based realistic anthropomorphic dental phantom and its imaging evaluation. A real skull phantom was scanned with high-resolution CBCT, and image segmentation and 3D modeling were then carried out for bones, teeth, and soft tissue. This was followed by 3D printing of the bones and teeth in gypsum, with three additional teeth printed separately in metal. For the soft tissue, a negative mold was first printed in PMMA, and silicone gel was then cast into the mold with the printed bones and teeth set in place. The created phantom was scanned with an MDCT and a dental CBCT scanner for image quality evaluation of CT images, panoramic images, and metal artifacts. Our goal was for the phantom to mimic the real skull phantom. The proposed phantom’s CT and panoramic images look almost the same as those of the real skull phantom. The mean HU of bone was comparable between the 3D-printed and real skull phantoms (1860 vs. 1730), and the mean HU of soft tissue was 40 in the 3D-printed dental phantom. The image quality of the dental CT images, assessed by an expert, was comparable between the real skull and 3D-printed phantoms. In particular, the metal artifacts from the metal-printed teeth were rated as realistically mimicking real crowned teeth. Our study demonstrated the feasibility of making realistic anthropomorphic phantoms by 3D printing, which can be used in various dental imaging studies.
Posters: Artificial Intelligence and Deep Learning
Proceedings Volume Medical Imaging 2019: Imaging Informatics for Healthcare, Research, and Applications, 1095416 (2019) https://doi.org/10.1117/12.2515609
Background: Lung cancer is one of the most common cancers in the United States and the most fatal, with an estimated 142,670 deaths in 2019. Accurately determining tumor response is critical to clinical treatment decisions, ultimately impacting patient survival. To better differentiate between non-small cell lung cancer (NSCLC) responders and non-responders to therapy, radiomic analysis is emerging as a promising approach to identify associated imaging features undetectable by the human eye. However, the plethora of variables extracted from an image may actually undermine the performance of computer-aided prognostic assessment, a problem known as the curse of dimensionality. In the present study, we show that correlation-driven hierarchical clustering improves high-dimensional radiomics-based feature selection and dimensionality reduction, ultimately predicting overall survival in NSCLC patients. Methods: To select features from high-dimensional radiomics data, a correlation-incorporated hierarchical clustering algorithm automatically categorizes features into several groups. The truncation distance in the resulting dendrogram is used to control the categorization of the features, initiating low-rank dimensionality reduction in each cluster and providing descriptive features for Cox proportional hazards (CPH)-based survival analysis. Using a publicly available NSCLC radiogenomic dataset of 204 patients’ CT images, 429 established radiomics features were extracted. Low-rank dimensionality reduction via principal component analysis (PCA) was employed (k = 1, n < 1) to find the representative components of each cluster of features and calculate cluster robustness using the relative weighted consistency metric. Results: Hierarchical clustering categorized the radiomic features into several groups without prior initialization of the number of clusters, using the correlation distance metric to truncate the resulting dendrogram at different distances. The dimensionality was reduced from 429 to 67 features (for a truncation distance of 0.1). The robustness of the features within clusters varied from -1.12 to -30.02 for truncation distances of 0.1 to 1.8, respectively, indicating that robustness decreases with increasing truncation distance when a smaller number of feature classes (i.e., clusters) is selected. The best multivariate CPH survival model had a C-statistic of 0.71 for a truncation distance of 0.1, outperforming conventional PCA approaches by 0.04, even when the same number of principal components was considered for feature dimensionality. Conclusions: The truncation distance of the correlative hierarchical clustering algorithm is directly associated with the robustness of the selected feature clusters and can effectively reduce feature dimensionality while improving outcome prediction.
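The clustering-then-PCA reduction described can be illustrated as below; the feature matrix is synthetic, the linkage method is an assumption, and the robustness metric is omitted, so this is a sketch of the general technique rather than the study's algorithm.

# Hedged sketch: correlation-distance hierarchical clustering of radiomic features,
# truncated at a chosen dendrogram distance, with one principal component per cluster.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(204, 429))                  # placeholder patients x radiomic features

corr = np.corrcoef(X, rowvar=False)
dist = 1.0 - np.abs(corr)                        # correlation distance between features
Z = linkage(squareform(dist, checks=False), method="average")

truncation_distance = 0.1
labels = fcluster(Z, t=truncation_distance, criterion="distance")

reduced = []
for c in np.unique(labels):
    Xc = X[:, labels == c]
    reduced.append(PCA(n_components=1).fit_transform(Xc))   # representative component per cluster
X_reduced = np.hstack(reduced)
print(X_reduced.shape)                           # (204, number of clusters)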
Proceedings Volume Medical Imaging 2019: Imaging Informatics for Healthcare, Research, and Applications, 1095418 (2019) https://doi.org/10.1117/12.2512000
With the advent of computers and natural language processing, it is not surprising that humans are trying to use computers to answer questions. By the 1960s, systems had been implemented on the two major models of question answering, IR-based and knowledge-based, to answer questions about sports statistics and scientific facts. This paper reports on the development of a knowledge-based question answering system aimed at providing cognitive assistance to radiologists. Our system represents the question as a semantic query to a medical knowledge base. Evidence obtained from textual and imaging data associated with the question is then combined to arrive at an answer. This question answering system has three stages: i) question text and answer choices processing, ii) image processing, and iii) reasoning. Currently, the system can answer differential diagnosis and patient management questions; in the future, we can tackle a wider variety of question types by improving our medical knowledge coverage.
Proceedings Volume Medical Imaging 2019: Imaging Informatics for Healthcare, Research, and Applications, 1095419 (2019) https://doi.org/10.1117/12.2512466
We developed a novel 3D electronic cleansing (EC) method for CT colonography (CTC) based on a generative adversarial network (GAN). GANs are machine-learning algorithms that can be trained to translate an input image directly into a desired output image without using explicit manual annotations. A 3D-GAN EC scheme was developed by extending a 2D pix2pix GAN model to volumetric CTC datasets based on 3D convolutional kernels. To overcome the usual need for paired input-output training data, the 3D-GAN model was trained with a self-supervised learning scheme in which the training data were constructed iteratively as a combination of volumes of interest (VOIs) from paired anthropomorphic colon phantom CTC datasets and input VOIs from the unseen clinical input CTC dataset, for which the virtually cleansed output sample pairs were self-generated by a progressive cleansing method. Our preliminary evaluation with a clinical fecal-tagging CTC case showed that the 3D-GAN EC scheme can substantially reduce the processing time and EC image artifacts in comparison to our previous deep-learning EC scheme.
Proceedings Volume Medical Imaging 2019: Imaging Informatics for Healthcare, Research, and Applications, 109541A (2019) https://doi.org/10.1117/12.2512636
Automated segmentation of vertebral bone from diagnostic computed tomography (CT) images has become an important part of the clinical workflow today. There is an increasing need for computer-aided diagnosis applications for various spine disorders, including scoliosis, fracture detection, and even automated reporting. While model-based methods have been widely used, recent deep learning methods have shown great potential in this area. However, choosing the optimal network configuration to obtain the best segmentation performance is challenging. In this work, we explore the impact of different training and inference options, including dimensionality, activation function, batch normalization, kernel size, number of filters, patch size, and patch selection strategy in the U-Net architecture. Twenty publicly available CT spine datasets from the SpineWeb repository were used in this study, divided into training and test datasets. Training with different DL configurations was repeated on these datasets, the best weights for each configuration were used for inference on the independent test dataset, and the resulting test performances were compared. 3D models performed consistently better than 2D approaches. Overlapped patch-based inference had a large impact on improving accuracy. The selection of training patch size was also found to be crucial in improving model performance. Moreover, an effective balance of positive and negative training patches was found to be necessary. The best performance in our study was obtained by using overlapped patch inference and training with ReLU activation and batch normalization in a 3D U-Net architecture with a training patch size of 128×128×32, which resulted in average values of precision = 97%, sensitivity = 96%, and F1 (Dice) = 96% on the test dataset.
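The overlapped-patch inference idea can be sketched as a sliding window with averaging of overlapping predictions; the patch size, stride, and the stand-in prediction function below are illustrative assumptions, not the paper's trained U-Net.

# Hedged sketch: overlapped 3D patch inference with averaging.
import numpy as np

def predict_patch(patch):
    # Stand-in for the trained 3D U-Net; returns a per-voxel probability map.
    return np.full(patch.shape, 0.5, dtype=np.float32)

def sliding_window_inference(volume, patch=(128, 128, 32), stride=(64, 64, 16)):
    prob = np.zeros(volume.shape, dtype=np.float32)
    count = np.zeros(volume.shape, dtype=np.float32)
    for z in range(0, max(volume.shape[0] - patch[0], 0) + 1, stride[0]):
        for y in range(0, max(volume.shape[1] - patch[1], 0) + 1, stride[1]):
            for x in range(0, max(volume.shape[2] - patch[2], 0) + 1, stride[2]):
                sl = (slice(z, z + patch[0]), slice(y, y + patch[1]), slice(x, x + patch[2]))
                prob[sl] += predict_patch(volume[sl])   # accumulate overlapping predictions
                count[sl] += 1.0
    return prob / np.maximum(count, 1.0)                # average where patches overlap

ct = np.random.rand(256, 256, 64).astype(np.float32)    # placeholder CT volume
mask = sliding_window_inference(ct) > 0.5
print(mask.shape)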
Proceedings Volume Medical Imaging 2019: Imaging Informatics for Healthcare, Research, and Applications, 109541C (2019) https://doi.org/10.1117/12.2512671
In the U.S., breast cancer is diagnosed in about 12% of women during their lifetime and is the second leading cause of cancer death among women. Since early diagnosis can improve treatment outcomes and survival times for breast cancer patients, it is important to develop breast cancer detection techniques. A convolutional neural network (CNN) can extract features from images automatically and then perform classification. Training a CNN from scratch, however, requires a large number of labeled images, which is infeasible for some kinds of medical image data such as mammographic tumor images. In this paper, we propose two solutions to the lack of training images. 1) Generating synthetic mammographic images for training with a generative adversarial network (GAN). Adding GAN-generated images made it possible to train the CNN from scratch, and adding more GAN images improved the CNN’s best validation accuracy to 98.85%. 2) Applying transfer learning in the CNN. We used the pre-trained VGG-16 model to extract features from input mammograms and used these features to train a neural network (NN) classifier. The stable average validation accuracy converged at about 91.48% for classifying abnormal vs. normal cases in the DDSM database. We then combined the two deep-learning-based technologies, applying GAN for image augmentation and transfer learning in the CNN for breast cancer detection. On the training set including real and GAN-augmented images, the transfer learning model did not perform better than the CNN, but its training was about 10 times faster. Adding GAN images helps training avoid over-fitting, and image augmentation by GAN is necessary to train CNN classifiers from scratch; on the other hand, transfer learning is necessary when training on purely real images. Applying GAN to augment the training images for the CNN classifier obtained the best classification performance.
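The transfer-learning arm of such a study is commonly set up as below: frozen VGG-16 convolutional features feeding a small trainable classifier. The classifier width, dropout, and training step are assumptions for illustration, not the authors' exact configuration.

# Hedged sketch: frozen VGG-16 feature extractor + small NN classifier.
import torch
import torch.nn as nn
from torchvision import models

vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT)   # older torchvision: pretrained=True
for p in vgg.features.parameters():
    p.requires_grad = False                                 # freeze convolutional layers

feature_extractor = nn.Sequential(vgg.features, vgg.avgpool, nn.Flatten())
classifier = nn.Sequential(nn.Linear(512 * 7 * 7, 256), nn.ReLU(),
                           nn.Dropout(0.5), nn.Linear(256, 2))   # normal vs. abnormal

optimizer = torch.optim.Adam(classifier.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

x = torch.randn(8, 3, 224, 224)                 # placeholder mammogram patches
y = torch.randint(0, 2, (8,))
with torch.no_grad():
    feats = feature_extractor(x)                # fixed deep features
optimizer.zero_grad()
loss = criterion(classifier(feats), y)
loss.backward()
optimizer.step()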
Proceedings Volume Medical Imaging 2019: Imaging Informatics for Healthcare, Research, and Applications, 109541D (2019) https://doi.org/10.1117/12.2512887
Purpose: To accurately segment organs from 3D CT image volumes using a 2D, multi-channel SegNet model consisting of a deep convolutional neural network (CNN) encoder-decoder architecture. Method: We trained a SegNet model on the extended cardiac-torso (XCAT) dataset, which was previously constructed based on chest-abdomen-pelvis (CAP) computed tomography (CT) studies from 50 Duke patients. Each study consists of one low-resolution (5-mm section thickness) 3D CT image volume and its corresponding manually labeled 3D volume. To improve modeling in this small-sample-size regime, we performed median frequency class balancing weighting in the SegNet loss function, data normalization adjusting for the intensity coverage of the CT volumes, data transformation to harmonize voxel resolution, CT section extrapolation to virtually increase the number of transverse sections available as inputs to the 2D multi-channel model, and data augmentation to simulate mildly rotated volumes. To assess model performance, we calculated Dice coefficients on a held-out test set and performed qualitative evaluation of segmentations on high-resolution CTs. Further, we incorporated 50 patients’ high-resolution CTs with manually labeled kidney segmentation masks to quantitatively evaluate the performance of our XCAT-trained segmentation model. The entire study was conducted from raw, identifiable data within the Duke Protected Analytics Computing Environment (PACE). Result: We achieved median Dice coefficients over 0.8 for most organs and structures on XCAT test instances and observed good performance on additional images without manual segmentation labels, as qualitatively evaluated by Duke Radiology experts. Moreover, we achieved a median Dice coefficient of 0.89 for kidneys on high-resolution CTs. Conclusion: 2D, multi-channel models like SegNet are effective for organ segmentation of 3D CT image volumes, achieving high segmentation accuracies.
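Median frequency class balancing, mentioned in the method, weights each class by the median class frequency divided by that class's frequency; a short illustration with a synthetic label volume is shown below.

# Hedged sketch: median-frequency class balancing weights for a segmentation loss.
import numpy as np

labels = np.random.randint(0, 5, size=(32, 256, 256))   # placeholder label maps, 5 classes
counts = np.bincount(labels.ravel(), minlength=5).astype(float)
freq = counts / counts.sum()
weights = np.median(freq) / freq     # rare organs receive larger weights
print(weights)                       # pass as per-class weights to the cross-entropy loss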
Posters: Precision Medicine, Correlative Analytics, and Translational Research
Proceedings Volume Medical Imaging 2019: Imaging Informatics for Healthcare, Research, and Applications, 109541E (2019) https://doi.org/10.1117/12.2512058
We developed a novel survival analysis model for images, called pix2surv, based on a conditional generative adversarial network (cGAN). The performance of the model was evaluated in predicting the overall survival of patients with rheumatoid arthritis-associated interstitial lung disease (RA-ILD) based on the radiomic 4D-curvature of lung CT images. The architecture of the pix2surv model is based on that of a pix2pix cGAN, in which a generator is configured to generate an estimated survival-time image from an input radiomic image of a patient, and a discriminator attempts to differentiate the “fake pair” of the input radiomic image and a generated survival-time image from the “true pair” of the input radiomic image and the observed survival-time image of the patient. For evaluation, we retrospectively identified 71 RA-ILD patients with lung CT images and pulmonary function tests. The 4D-curvature images computed from the CT images were subjected to the pix2surv model to compare their predictive performance with that of an established clinical prognostic biomarker known as the GAP index. Principal-curvature images and average principal curvatures were also individually subjected, in place of the 4D-curvature images, to the pix2surv model for performance comparison. The evaluation was performed by bootstrapping, with the concordance index (C-index) and relative absolute error (RAE) as metrics of prediction performance. Preliminary results showed that the use of 4D-curvature images yielded C-index and RAE values that statistically significantly outperformed the use of the clinical biomarker as well as the other radiomic images and features, indicating the effectiveness of 4D-curvature images with pix2surv as a prognostic imaging biomarker for the survival of patients with RA-ILD.
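The bootstrapped C-index evaluation can be sketched as below; the survival times and model predictions are synthetic, and lifelines' concordance_index is shown only as one way to compute the metric.

# Hedged sketch: bootstrap estimate of the concordance index (C-index).
import numpy as np
from lifelines.utils import concordance_index

rng = np.random.default_rng(0)
observed_time = rng.exponential(60, size=71)                     # placeholder survival times (months)
event = rng.integers(0, 2, size=71)                              # 1 = death observed
predicted_time = observed_time * rng.normal(1.0, 0.3, size=71)   # placeholder model output

c_indices = []
for _ in range(1000):
    idx = rng.integers(0, 71, size=71)      # resample patients with replacement
    c_indices.append(concordance_index(observed_time[idx],
                                       predicted_time[idx],
                                       event_observed=event[idx]))
print(np.mean(c_indices), np.percentile(c_indices, [2.5, 97.5]))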
Proceedings Volume Medical Imaging 2019: Imaging Informatics for Healthcare, Research, and Applications, 109541F (2019) https://doi.org/10.1117/12.2512236
Currently, mammography is the only population-based breast cancer screening modality. To improve the efficacy of mammography and increase cancer detection yield, identifying new mammographic imaging markers and developing novel machine learning models to more accurately assess or predict short-term breast cancer risk have recently attracted extensive research interest. The objective of this study is to explore and test a new quantitative image marker based on frequency-domain correlation features that characterize the bilateral asymmetry of image characteristics, in order to predict the risk of women having or developing mammography-detectable cancer in the short term. For this purpose, we assembled an image dataset involving 1,042 sets of “prior” negative mammograms. In the subsequent “current” mammography screening, 402 cases were positive, with cancer detected and verified, while 642 cases remained negative. A special computer-aided detection (CAD) scheme was applied to pre-process the two bilateral mammograms of the left and right breasts, generate image maps in the frequency domain, compute image features, and apply a multi-feature fusion support vector machine classifier to predict short-term breast cancer risk. Using a 10-fold cross-validation method, this CAD-based risk model yielded a performance of AUC = 0.72±0.04 (area under the ROC curve) and an odds ratio of 5.92 with a 95% confidence interval of [4.32, 8.11]. This study presents a new type of mammographic imaging marker and machine learning prediction model and demonstrates the feasibility of helping predict short-term risk of developing breast cancer using a large and diverse image dataset.
Proceedings Volume Medical Imaging 2019: Imaging Informatics for Healthcare, Research, and Applications, 109541G (2019) https://doi.org/10.1117/12.2513095
Breast cancer histological grade and lymph node status are important in evaluating the prognosis of patients. This study aims to predict these factors by analyzing the heterogeneity of the tumor and its adjacent stroma based on dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) and diffusion-weighted imaging (DWI). A dataset of 172 patients with surgically verified lymph node status (positive lymph nodes, n=62; negative lymph nodes, n=110) who underwent preoperative DCE-MRI and DWI examination was collected. Among them, 144 cases had available histological grade information, including 56 cases of low grade (grades 1 and 2) and 88 cases of high grade (grade 3). We identified six tumor subregions on DCE-MRI, as well as the corresponding subregions on the ADC maps, according to their distances to the tumor boundary. Statistical and Haralick texture features were extracted from each subregion, and predictive models were built on them to predict histological grade and lymph node status in breast cancer. The area under the receiver operating characteristic curve (AUC) was computed with a leave-one-out cross-validation (LOOCV) method to assess each classifier’s performance. For histological grade prediction, the classifier using DCE-MRI features from the inner tumor achieved the best performance among all subregions, with an AUC of 0.859. For lymph node status, the classifier based on DCE-MRI features from the proximal peritumoral stromal shell obtained the highest AUC of 0.882 among all regions. Furthermore, when the predictions from DCE-MRI and DWI were fused, the AUC increased to 0.895 for discriminating histological grade. Our results demonstrate that DCE-MRI and ADC imaging features are complementary in predicting histological grade in breast cancer.
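Haralick-type texture features for one subregion can be illustrated with a gray-level co-occurrence matrix; the image, mask, quantization level, and property list below are placeholders, and scikit-image is shown only as one possible implementation.

# Hedged sketch: GLCM (Haralick-type) features from a single tumor subregion.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

image = (np.random.rand(128, 128) * 63).astype(np.uint8)    # placeholder quantized DCE-MRI slice
subregion = np.zeros_like(image, dtype=bool)
subregion[40:90, 40:90] = True                               # e.g. an inner-tumor shell

patch = np.where(subregion, image, 0)                        # zero out pixels outside the subregion
glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                    levels=64, symmetric=True, normed=True)
features = {prop: graycoprops(glcm, prop).mean()
            for prop in ["contrast", "homogeneity", "energy", "correlation"]}
print(features)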
Proceedings Volume Medical Imaging 2019: Imaging Informatics for Healthcare, Research, and Applications, 109541H (2019) https://doi.org/10.1117/12.2513126
Breast cancer is one of the most common malignant tumors in women. The purpose of this study was to predict the histological grade of breast cancer using features extracted from dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) and diffusion-weighted imaging (DWI). In this study, we collected 144 cases of invasive ductal carcinoma of the breast, consisting of 76 high-grade malignant (grade 3) and 68 intermediate-grade malignant (grade 2) breast cancers. Preoperative breast DW and DCE-MR examinations were performed using a 3T MR scanner. Breast tumor segmentation was performed on all image series. After that, texture, statistical, and morphological features of the breast tumor were extracted from both the DW and DCE-MR images. Classification models were established for each image type, and the single-parametric classifiers were fused for prediction. To evaluate classifier performance, the area under the receiver operating characteristic curve (AUC) was calculated in a leave-one-out cross-validation (LOOCV) analysis. The predictive model based on DCE-MRI generated an AUC of 0.829, with sensitivity and specificity of 0.868 and 0.676, respectively, while the model based on DWI generated an AUC of 0.783, with sensitivity and specificity of 0.842 and 0.676, respectively. After multi-classifier fusion using features from both DWI and DCE-MRI, the classification performance increased to an AUC of 0.844±0.067, with sensitivity and specificity of 0.908 and 0.735, respectively. Our results showed that, compared with each single parametric image alone, classifier performance could be improved by combining features of DCE-MRI and DWI.