KEYWORDS: Lung, Data modeling, Computed tomography, Scanners, 3D modeling, Computer simulations, Modulation transfer functions, Chest imaging, Medicine, Medical research
Virtual imaging trials (VITs) offer a computational alternative to clinical imaging trials, which are slow, costly, often short of definitive evidence, and expose participants to ionizing radiation. Our VIT platform simulates the essential components of the imaging chain: virtual patients, virtual scanners, and virtual readers. In this study, we aim to validate the platform by replicating the results of the National Lung Screening Trial (NLST) for lung cancer screening, emulating its low-dose computed tomography (CT) and chest radiography (CXR) arms. The methodology involves creating 66 unique computational phantoms, each with simulated lung nodules inserted. NLST CT imaging was replicated by modeling a scanner matched to its essential properties (Duke Legacy W20), with virtual acquisitions performed in DukeSim. A LUNA16-trained virtual reader, combining a 3D RetinaNet detection model (front end) with a ResNet-10 false-positive-reduction model (back end), evaluated the virtually imaged data. The back-end model achieved a sensitivity above 95% at fewer than 3 false positives per scan on both the clinical and the virtually imaged CTs; diameter-stratified analysis showed even higher sensitivity for nodules measuring 10 mm or more. In conclusion, this combination of computational phantoms, scanner simulation, and a virtual reader demonstrates promising sensitivity. To capture both arms of the trial, future work will compare virtual reader performance on CT against CXR. These results support the potential of virtual imaging trials as an efficient and ethically conscious approach to evidence-based medical research and development.
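The reported operating point (sensitivity above 95% at fewer than 3 false positives per scan) is an FROC-style summary. The sketch below shows one way such a point can be computed from detection outputs; the function and variable names are illustrative assumptions, not the paper's code.

```python
import numpy as np

def sensitivity_at_fp_rate(scores, labels, scan_ids, n_nodules, max_fp_per_scan=3.0):
    """Sweep the confidence threshold from high to low and report the
    sensitivity at the last operating point with at most max_fp_per_scan
    false positives per scan.

    scores:    detection confidences from the reader
    labels:    1 = detection matched a true nodule, 0 = false positive
    scan_ids:  scan of origin for each detection
    n_nodules: total ground-truth nodules (including ones never detected)
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    n_scans = len(set(scan_ids))
    tp = fp = 0
    sensitivity = 0.0
    for i in np.argsort(-scores):        # most confident detections first
        tp += labels[i]
        fp += 1 - labels[i]
        if fp / n_scans <= max_fp_per_scan:
            sensitivity = tp / n_nodules
    return sensitivity

# Toy example: 2 scans, 3 ground-truth nodules, 4 detections.
print(sensitivity_at_fp_rate(
    scores=[0.9, 0.8, 0.7, 0.4], labels=[1, 0, 1, 0],
    scan_ids=[0, 0, 1, 1], n_nodules=3))
```

The same sweep, repeated over a grid of false-positive rates, yields the full FROC curve from which diameter-stratified sensitivities can also be read off.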
We develop the XCAT series of computational phantoms for medical imaging research. The phantoms model individuals across a range of ages, heights, and weights, but a current limitation is that they lack small intestine variability: each phantom's small intestine derives from a common anatomical template, owing to the difficulty of fitting a regular tubular model to patient segmentations. Building on previous work, we develop a software pipeline that adds realistic variability by generating tubular small intestine surface models with randomized length and diameter, fitted within the constraints of a given intestine segmentation. The pipeline first alpha-wraps the segmentation into a 3D surface mesh. A random walk algorithm then constructs a path within that mesh, avoiding both path and surface collisions for a set diameter, with a start point and end point bounding the walk. The resulting path is smoothed by fitting a cubic spline curve to it, and a cubic NURBS cylinder is lofted along the smoothed path to create an initial model. The cylinder is then grown radially, avoiding self-intersection and bounded by the mesh surface, until a user-defined volume is reached. The pipeline was tested on 45 sets of patient CT data; the results show it can generate mean path lengths and mean diameters spanning realistic ranges for the general population.
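The path-smoothing step of such a pipeline can be sketched with a standard cubic B-spline fit; here is a minimal version, with the constrained random walk and collision checks stubbed out by a toy walk, so only the smoothing is shown.

```python
import numpy as np
from scipy.interpolate import splprep, splev

def smooth_path(points, n_samples=200, smoothing=1.0):
    """Fit a cubic spline through (N, 3) random-walk waypoints and
    resample it into a smooth centerline for lofting."""
    pts = np.asarray(points, dtype=float)
    tck, _ = splprep([pts[:, 0], pts[:, 1], pts[:, 2]], k=3, s=smoothing)
    u = np.linspace(0.0, 1.0, n_samples)
    x, y, z = splev(u, tck)
    return np.column_stack([x, y, z])

# Toy random walk standing in for the collision-aware walk inside the mesh.
rng = np.random.default_rng(0)
walk = np.cumsum(rng.normal(size=(30, 3)), axis=0)
centerline = smooth_path(walk)
```

In the full pipeline the resulting centerline would feed the NURBS lofting and radial-growth stages, which require a CAD/geometry kernel and are beyond this sketch.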
Deep learning methods outperform traditional methods at segmenting organs of interest from computed tomography (CT) images. However, trained models often fail to generalize at inference, and manual validation and correction are not feasible for large-scale studies. Automatic methods to detect segmentation failure are therefore crucial for computer-aided diagnosis systems. In this work, we present an automatic quality control method that can be used to reject poor segmentations. We register new test cases against a set of XCAT reference or training images; this "reverse classification accuracy" approach uses the similarity achieved by image registration to estimate segmentation quality. We validated the approach on two large public CT datasets, CT-ORG and AbdomenCT-1K, across multiple organs, and report empirical cutoffs on the predicted similarity coefficient for the organs of interest that can be applied to datasets where ground truth is not available.
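A minimal sketch of this registration-based quality check follows: warp a reference label map onto the test case and use its overlap with the model's segmentation as a predicted similarity score. The registration settings here are illustrative defaults, not the paper's exact configuration.

```python
import SimpleITK as sitk

def predicted_dice(test_img, ref_img, ref_seg, model_seg):
    """Estimate segmentation quality without ground truth by registering
    a reference image (e.g. an XCAT case) onto the test image and
    comparing its warped labels against the model's output."""
    init = sitk.CenteredTransformInitializer(
        test_img, ref_img, sitk.AffineTransform(3),
        sitk.CenteredTransformInitializerFilter.GEOMETRY)
    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=100)
    reg.SetInitialTransform(init)
    reg.SetInterpolator(sitk.sitkLinear)
    tx = reg.Execute(sitk.Cast(test_img, sitk.sitkFloat32),
                     sitk.Cast(ref_img, sitk.sitkFloat32))
    # Warp reference labels with nearest-neighbour interpolation.
    warped = sitk.Resample(ref_seg, test_img, tx,
                           sitk.sitkNearestNeighbor, 0, ref_seg.GetPixelID())
    overlap = sitk.LabelOverlapMeasuresImageFilter()
    overlap.Execute(warped, model_seg)
    return overlap.GetDiceCoefficient()

# Reject segmentations whose predicted coefficient falls below an
# empirical, organ-specific cutoff (the 0.7 here is a placeholder).
# if predicted_dice(test_img, ref_img, ref_seg, model_seg) < 0.7: flag_for_review()
```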
Many published studies use deep learning models to predict COVID-19 from chest x-ray (CXR) images, often reporting high performance. However, these models do not generalize well on independent external testing. Common limitations include the scarcity of medical imaging data and disease labels, which leads to training on small datasets or drawing different classes from different institutions. To address these concerns, we designed an external validation study of deep learning classifiers for COVID-19 in CXR images that also includes XCAT phantoms. We hypothesize that a simulated CXR dataset obtained from the XCAT phantom allows better control of the data, including pixel-level ground truth. This setup offers multiple advantages: first, we can validate publicly available models using simulated chest x-rays; second, we can address clinically relevant questions, such as the effect of dose level and of COVID-19 pneumonia size on the performance of a deep learning classifier.
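The dose-level question can be framed as a simple stratified evaluation. The sketch below assumes a pretrained classifier with a predict() method returning probabilities and a dose tag per simulated image; all names are placeholders, not a released API.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def auc_by_dose(model, images, labels, dose_levels):
    """Score a pretrained COVID-19 classifier on simulated XCAT CXRs and
    report AUC separately for each acquisition dose level.

    images:      array of simulated CXRs
    labels:      1 = COVID-19 pneumonia present, 0 = absent
    dose_levels: dose tag for each simulated acquisition
    """
    scores = np.asarray(model.predict(images))   # assumed to return probabilities
    labels = np.asarray(labels)
    dose_levels = np.asarray(dose_levels)
    results = {}
    for dose in sorted(set(dose_levels)):
        mask = dose_levels == dose
        results[dose] = roc_auc_score(labels[mask], scores[mask])
    return results
```

The same grouping applied to pneumonia size (available from the pixel-level ground truth) would quantify the second clinical question the study raises.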