KEYWORDS: Computed tomography, Data modeling, Lung, Education and training, Viruses, Pulmonary disorders, Deep learning, Medical imaging, Image segmentation, Image enhancement
Agile development of reliable and accurate segmentation models during an infectious disease outbreak has the potential to reduce the need for already-strained human expertise. Global research and data-sharing efforts during the COVID-19 pandemic have shown how rapidly Deep-Learning (DL) models can be developed when public datasets are available for training. However, these efforts have been rare, usually limited by the unavailability of Computed Tomography (CT) imaging datasets from patients in the clinical setting. In the absence of human data, animal models faithful to human disease are used to investigate the imaging phenotype of high-consequence and emerging pathogens. As simultaneous access to both human and Nonhuman Primate (NHP) data for the same respiratory infection is unusual, we were interested in whether the inclusion of NHP data might enhance DL image segmentation of lung lesions associated with severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection. Thus, we set out to evaluate DL performance and generalizability to a human test set. We found that combining human and NHP data and utilizing pretrained NHP models to initialize model training outperformed a model trained solely on human CT data. By studying the interaction between human and NHP CT imaging in developing these models, we can assess the potential value of NHP datasets for known or novel viruses that emerge in settings where medical imaging capacity is limited. Understanding and leveraging NHP datasets to improve the agility and quality of model development capabilities could better prepare us to respond to disease outbreaks in the human population.
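As a concrete illustration of the transfer-learning setup described above, the sketch below initializes a segmentation network from weights pretrained on NHP CT data and then fine-tunes it on a combined human and NHP training set. The architecture, checkpoint path, and dataset classes are hypothetical placeholders, not the authors' actual code.

```python
# Minimal sketch (assumptions throughout): initialize a lung-lesion
# segmentation model from NHP-pretrained weights, then fine-tune on a
# combined human + NHP CT training set.
import torch
from torch.utils.data import ConcatDataset, DataLoader

# Hypothetical U-Net implementation and CT dataset classes (not from the paper).
from unet import UNet
from datasets import HumanCTDataset, NHPCTDataset

model = UNet(in_channels=1, out_channels=1)

# Initialize from weights pretrained on NHP CT volumes (path is illustrative).
nhp_state = torch.load("nhp_pretrained_unet.pt", map_location="cpu")
model.load_state_dict(nhp_state)

# Fine-tune on the combined human + NHP training set.
train_set = ConcatDataset([HumanCTDataset("train"), NHPCTDataset("train")])
loader = DataLoader(train_set, batch_size=4, shuffle=True)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = torch.nn.BCEWithLogitsLoss()

model.train()
for epoch in range(10):
    for ct, mask in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(ct), mask)
        loss.backward()
        optimizer.step()
```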
Signal separability is an important factor in the differentiation of materials in spectral computed tomography. In this work, we evaluated the separability of two such materials, iodine and gadolinium (k-edges of 33.1 keV and 50.2 keV, respectively), on an investigational photon-counting CT scanner (Siemens, Germany). A 20 cm water-equivalent phantom containing vials of iodine and gadolinium was imaged. Two datasets were generated by varying either the amount of contrast (iodine: 0.125-10 mg/mL; gadolinium: 0.125-12 mg/mL) or the tube current (50-300 mAs). Regions of interest were drawn within the vials and used to construct multivariate Gaussian models of the signal. We evaluated three separation metrics on the Gaussian models: the area under the receiver operating characteristic curve (AUC), the mean Mahalanobis distance, and the Jaccard index. For the dataset with varying contrast, all three metrics showed similar trends, indicating higher separability when there was a large difference in signal magnitude between iodine and gadolinium. For the dataset with varying tube current, AUC showed the least variation with changing noise conditions and had a higher coefficient of determination (0.99, 0.97) than either the mean Mahalanobis distance (0.69, 0.62) or the Jaccard index (0.80, 0.75) when compared against material decomposition results for iodine and gadolinium, respectively.
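The three separability metrics can be computed directly from the fitted Gaussian models. The sketch below is one plausible implementation, assuming each ROI yields pixel vectors across the scanner's energy bins; the exact Mahalanobis and Jaccard definitions used in the study may differ from these.

```python
import numpy as np
from scipy.spatial.distance import mahalanobis
from scipy.stats import multivariate_normal
from sklearn.metrics import roc_auc_score

def fit_gaussian(roi_samples):
    """Fit a multivariate Gaussian to ROI pixel vectors (n_pixels x n_energy_bins)."""
    return roi_samples.mean(axis=0), np.cov(roi_samples, rowvar=False)

def separability_metrics(iodine_roi, gadolinium_roi):
    mu_i, cov_i = fit_gaussian(iodine_roi)
    mu_g, cov_g = fit_gaussian(gadolinium_roi)

    # AUC: score every pixel with the log-likelihood ratio of the two models.
    samples = np.vstack([iodine_roi, gadolinium_roi])
    labels = np.r_[np.zeros(len(iodine_roi)), np.ones(len(gadolinium_roi))]
    scores = (multivariate_normal.logpdf(samples, mu_g, cov_g)
              - multivariate_normal.logpdf(samples, mu_i, cov_i))
    auc = roc_auc_score(labels, scores)

    # Mean Mahalanobis distance of gadolinium pixels from the iodine model.
    inv_cov_i = np.linalg.inv(cov_i)
    d_mean = np.mean([mahalanobis(x, mu_i, inv_cov_i) for x in gadolinium_roi])

    # Jaccard index between the two fitted densities, estimated by Monte Carlo
    # importance sampling from the 50/50 mixture of the two Gaussians.
    rng = np.random.default_rng(0)
    draws = np.vstack([rng.multivariate_normal(mu_i, cov_i, 5000),
                       rng.multivariate_normal(mu_g, cov_g, 5000)])
    p_i = multivariate_normal.pdf(draws, mu_i, cov_i)
    p_g = multivariate_normal.pdf(draws, mu_g, cov_g)
    m = 0.5 * (p_i + p_g)
    jaccard = np.mean(np.minimum(p_i, p_g) / m) / np.mean(np.maximum(p_i, p_g) / m)

    return auc, d_mean, jaccard
```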
In this work, we define a theoretical approach to characterizing the signal-to-noise ratio (SNR) of multi-channeled systems such as spectral computed tomography image series. Spectral image datasets comprise multiple near-simultaneous acquisitions that share information. The conventional definition of SNR applies to a single image and thus does not account for the interaction of information between images in a series. We propose an extension of the conventional SNR definition into a multivariate space in which each image in the series is treated as a separate information channel, thus defining a spectral SNR matrix. We apply this to the specific case of the contrast-to-noise ratio (CNR). The resulting matrix captures the conventional CNR of each image in the series as well as a covariance-weighted CNR (Cov-CNR), which accounts for the covariance between two images in the series. We evaluate this experimentally with data from an investigational photon-counting CT scanner (Siemens).
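A minimal numerical sketch of this idea is given below, assuming ROI samples drawn from each image of the series. Here the diagonal of the matrix reproduces the conventional per-image CNR, while the off-diagonal entries weight the pairwise contrast by the full 2x2 noise covariance; this is one plausible form consistent with the description above, not necessarily the authors' exact definition of Cov-CNR.

```python
import numpy as np

def spectral_cnr(roi_signal, roi_background):
    """
    Illustrative sketch (an assumption, not the paper's exact formulation).
    roi_signal, roi_background: (n_pixels, n_channels) arrays sampling the
    same ROI in each image of the spectral series.
    """
    # Contrast vector: per-channel difference in mean signal.
    delta = roi_signal.mean(axis=0) - roi_background.mean(axis=0)

    # Noise covariance of the background ROI across channels.
    sigma = np.cov(roi_background, rowvar=False)

    # Conventional per-image CNR (ignores inter-image covariance).
    cnr = np.abs(delta) / np.sqrt(np.diag(sigma))

    # Covariance-weighted CNR for channel pair (i, j): the pairwise contrast
    # measured against the full 2x2 noise covariance (Mahalanobis-style).
    n = len(delta)
    cov_cnr = np.zeros((n, n))
    for i in range(n):
        cov_cnr[i, i] = cnr[i]
        for j in range(i + 1, n):
            idx = [i, j]
            sub_inv = np.linalg.inv(sigma[np.ix_(idx, idx)])
            d = delta[idx]
            cov_cnr[i, j] = cov_cnr[j, i] = np.sqrt(d @ sub_inv @ d)

    return cnr, cov_cnr
```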
This pilot study performs texture analysis on multiple magnetic resonance (MR) sequences of common renal masses for differentiation of renal cell carcinoma (RCC). A bounding box is drawn around each mass on one axial slice of the T1 delayed sequence and used for feature extraction and classification. All sequences (T1 delayed, venous, arterial, and pre-contrast phases; T2; and T2 fat-saturated) are co-registered, and texture features are extracted from each sequence simultaneously. A random forest is used to construct models that classify 96 normal regions, 87 clear cell RCCs, 8 papillary RCCs, and 21 renal oncocytomas; ground truths are verified through pathology reports.
The highest performance is seen in the random forest model when data from all sequences are used in conjunction, achieving an overall classification accuracy of 83.7%. When data from a single sequence are used, the overall accuracies for the T1 delayed, venous, arterial, and pre-contrast phases, T2, and T2 fat-saturated sequences are 79.1%, 70.5%, 56.2%, 61.0%, 60.0%, and 44.8%, respectively. These results demonstrate the promise of utilizing intensity information from multiple MR sequences for accurate classification of renal masses.
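A schematic version of this pipeline (co-registered multi-sequence patches, concatenated texture features, random-forest classification) might look like the sketch below. The feature set and helper names are illustrative assumptions rather than the study's exact texture features.

```python
# Illustrative multi-sequence texture-analysis sketch; feature choices and
# helper names are assumptions, not the authors' exact setup.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

SEQUENCES = ["t1_delayed", "venous", "arterial", "pre_contrast", "t2", "t2_fatsat"]

def texture_features(patch):
    """Simple intensity statistics (mean, spread, percentiles, third central moment)."""
    return [patch.mean(), patch.std(), np.percentile(patch, 10),
            np.percentile(patch, 90), ((patch - patch.mean()) ** 3).mean()]

def lesion_feature_vector(patches_by_sequence):
    """Concatenate features from the same bounding box in all co-registered sequences."""
    return np.concatenate([texture_features(patches_by_sequence[s]) for s in SEQUENCES])

# Usage (X: one row per lesion/region, y: normal, ccRCC, pRCC, or oncocytoma):
# X = np.stack([lesion_feature_vector(p) for p in lesion_patches])
# clf = RandomForestClassifier(n_estimators=500, random_state=0)
# print(cross_val_score(clf, X, y, cv=10).mean())
```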