Automated detection and aggressiveness classification of prostate cancer on Magnetic Resonance Imaging (MRI) can help standardize radiologist interpretations and guide MRI-ultrasound fusion biopsies. Existing automated methods rely on MRI features alone, disregarding histopathology image information. Histopathology images contain definitive information about the presence, extent, and aggressiveness of cancer. We present a two-step radiology-pathology fusion model, ArtHiFy (Artificial Histopathology-style Features for improving MRI-based prostate cancer detection), which leverages generative models in a multimodal co-learning strategy, enabling learning from resource-rich histopathology but prediction using resource-poor MRI alone. In the first step, ArtHiFy generates artificial low-resolution histopathology-style features from MRI using a modified Geometry-consistent Generative Adversarial Network (GcGAN). The generated low-resolution histopathology-style features emphasize cancer regions as having less texture variation, mimicking the densely packed nuclei in real histopathology images. In the second step, ArtHiFy uses these generated artificial histopathology-style features, in addition to MR images, in a convolutional neural network architecture to detect and localize aggressive and indolent prostate cancer on MRI. ArtHiFy does not require spatial alignment between MRI and histopathology images during training, and it does not require histopathology images at all during inference, making it clinically relevant for MRI-based prostate cancer diagnosis in new patients. We trained ArtHiFy on prostate cancer patients who underwent radical prostatectomy and evaluated it on patients with and without prostate cancer. Our experiments showed that ArtHiFy improved detection performance over existing top-performing prostate cancer detection models, with statistically significant differences.
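To make the two-step design concrete, a minimal PyTorch-style sketch is shown below. The module names, layer sizes, and plain convolutional stacks are illustrative assumptions, not the authors' GcGAN-based implementation; the sketch only shows how generated histopathology-style features could be concatenated with MRI for detection, with histopathology images never required at inference.

```python
# Minimal sketch (illustrative assumptions, not the ArtHiFy implementation).
import torch
import torch.nn as nn

class HistoStyleGenerator(nn.Module):
    """Maps an MRI slice to a low-resolution histopathology-style feature map."""
    def __init__(self, in_ch=1, out_ch=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, out_ch, 3, padding=1), nn.Tanh(),
        )

    def forward(self, mri):
        return self.net(mri)

class FusionDetector(nn.Module):
    """Detects cancer from MRI concatenated with generated histology-style features."""
    def __init__(self, in_ch=2, n_classes=3):  # e.g. normal / indolent / aggressive
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, n_classes, 1),  # per-pixel class logits
        )

    def forward(self, mri, histo_style):
        return self.net(torch.cat([mri, histo_style], dim=1))

# At inference only MRI is needed: the generator supplies the artificial
# histopathology-style features.
generator, detector = HistoStyleGenerator(), FusionDetector()
mri = torch.randn(1, 1, 224, 224)        # dummy T2-weighted slice
logits = detector(mri, generator(mri))   # per-pixel detection logits
```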
Automated detection of aggressive prostate cancer on Magnetic Resonance Imaging (MRI) can help guide targeted biopsies and reduce unnecessary invasive biopsies. However, automated methods of prostate cancer detection often have a sensitivity-specificity trade-off (high sensitivity with low specificity or vice versa), making them unsuitable for clinical use. Here, we study the utility of integrating prior information about the zonal distribution of prostate cancers with a radiology-pathology fusion model to reliably identify aggressive and indolent prostate cancers on MRI. Our approach has two steps: 1) training a radiology-pathology fusion model that learns pathomic MRI biomarkers (MRI features correlated with pathology features) and uses them to selectively identify aggressive and indolent cancers, and 2) post-processing the predictions using zonal priors in a novel optimized Bayes' decision framework. We compare this approach with other approaches that incorporate zonal priors during training. We use a cohort of 74 radical prostatectomy patients as our training set, and two cohorts of 30 radical prostatectomy patients and 53 biopsy patients as our test sets. Our rad-path-zonal fusion approach achieves cancer lesion-level sensitivities of 0.77±0.29 and 0.79±0.38, and specificities of 0.79±0.23 and 0.62±0.27 on the two test sets respectively, compared to baseline sensitivities of 0.91±0.27 and 0.94±0.21 and specificities of 0.39±0.33 and 0.14±0.19, verifying its utility in achieving balance between sensitivity and specificity of lesion detection.
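The post-processing step can be illustrated with a short sketch that rescales per-pixel cancer probabilities by zone-dependent priors via Bayes' rule. The prior values, the PZ/TZ encoding, and the final threshold below are hypothetical placeholders, not the paper's optimized decision framework.

```python
# Illustrative zonal-prior post-processing sketch (hypothetical parameters).
import numpy as np

def apply_zonal_prior(cancer_prob, zone_mask, prior_pz=0.7, prior_tz=0.3):
    """Rescale per-pixel cancer probabilities by the prior probability of
    cancer in the peripheral zone (PZ, zone_mask == 1) vs transition zone (TZ)."""
    prior = np.where(zone_mask == 1, prior_pz, prior_tz)
    # Bayes' rule with the model output treated as the evidence for "cancer".
    posterior = (cancer_prob * prior) / (
        cancer_prob * prior + (1.0 - cancer_prob) * (1.0 - prior)
    )
    return posterior

probs = np.random.rand(256, 256)            # dummy model output
zones = (np.random.rand(256, 256) > 0.5)    # dummy PZ/TZ mask
lesion_map = apply_zonal_prior(probs, zones) > 0.5
```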
Prostate MRI is increasingly used to help localize and target prostate cancer. Yet, the subtle differences in MRI appearance between cancer and normal tissue make MRI interpretation challenging. Deep learning methods hold promise in automating the detection of prostate cancer on MRI; however, such approaches require large, well-curated datasets. Although existing methods that employed fully convolutional neural networks have shown promising results, the lack of labeled data can reduce the generalization of these models. Self-supervised learning provides a promising avenue to learn semantic features from unlabeled data. In this study, we apply the self-supervised strategy of image context restoration to detect prostate cancer on MRI and show that it improves model performance for two different architectures (U-Net and Holistically Nested Edge Detector) compared to their purely supervised counterparts. We train our models on MRI exams from 381 men with biopsy-confirmed cancer. Our study showed that self-supervised models outperformed randomly initialized models on an independent test set in a variety of training settings. We performed three experiments, training with 5%, 25%, and 100% of our labeled data, and observed that the U-Net-based pre-training and downstream task outperformed the other models. The largest improvements occurred when training with 5% of the labeled training data, where our self-supervised U-Nets improved per-pixel Area Under the Curve (AUC, 0.71 vs 0.83) and Dice similarity coefficient (0.19 vs 0.53). When training with 100% of the data, our U-Net-based pre-training and detection achieved an AUC of 0.85 and a Dice similarity coefficient of 0.57.
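A minimal sketch of the image-context-restoration pretext task is shown below: patches within an unlabeled MRI slice are swapped and a network is trained to restore the original image, after which the learned weights seed the supervised detector. The patch size, number of swaps, tiny stand-in network, and L2 loss follow the general strategy and are assumptions, not the study's exact settings.

```python
# Image context restoration pretraining sketch (assumed hyperparameters).
import torch
import torch.nn as nn

def corrupt(image, patch=16, n_swaps=10):
    """Return a copy of `image` (C, H, W) with n_swaps pairs of patches swapped."""
    img = image.clone()
    _, h, w = img.shape
    for _ in range(n_swaps):
        y1, x1 = torch.randint(0, h - patch, (1,)).item(), torch.randint(0, w - patch, (1,)).item()
        y2, x2 = torch.randint(0, h - patch, (1,)).item(), torch.randint(0, w - patch, (1,)).item()
        p1 = img[:, y1:y1+patch, x1:x1+patch].clone()
        img[:, y1:y1+patch, x1:x1+patch] = img[:, y2:y2+patch, x2:x2+patch]
        img[:, y2:y2+patch, x2:x2+patch] = p1
    return img

restorer = nn.Sequential(                  # stand-in for a U-Net encoder-decoder
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
optimizer = torch.optim.Adam(restorer.parameters(), lr=1e-3)

mri = torch.randn(4, 1, 128, 128)                        # dummy unlabeled batch
corrupted = torch.stack([corrupt(x) for x in mri])
loss = nn.functional.mse_loss(restorer(corrupted), mri)  # restoration loss
loss.backward()
optimizer.step()
# The pretrained encoder weights are then reused for supervised cancer detection.
```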
The use of magnetic resonance-ultrasound fusion targeted biopsy improves diagnosis of aggressive prostate cancer. Fusion of ultrasound and magnetic resonance images (MRI) requires accurate prostate segmentations. In this paper, we developed a 2.5-dimensional deep learning model, ProGNet, to segment the prostate on T2-weighted MRI. ProGNet is an optimized U-Net model that weighs three adjacent slices in each MRI sequence to segment the prostate in a 2.5D context. We trained ProGNet on 529 cases in which experts annotated the whole gland (WG) on axial T2-weighted MRI prior to targeted prostate biopsy. In 132 cases, experts also annotated the central gland (CG) on MRI. After five-fold cross-validation, we found that for WG segmentation, ProGNet had a mean Dice similarity coefficient (DSC) of 0.91±0.02, sensitivity of 0.89±0.03, specificity of 0.97±0.00, and an accuracy of 0.95±0.01. For CG segmentation, ProGNet achieved a mean DSC of 0.86±0.01, sensitivity of 0.84±0.03, specificity of 0.99±0.01, and an accuracy of 0.96±0.01. We then tested the generalizability of the model on the 60-case NCI-ISBI 2013 challenge dataset and on a local, independent 61-case test set. We achieved DSCs of 0.81±0.02 and 0.72±0.02 for WG and CG segmentation on the NCI-ISBI 2013 challenge dataset, and 0.83±0.01 and 0.75±0.01 for WG and CG segmentation on the local dataset. Model performance was excellent, outperforming state-of-the-art U-Net and holistically-nested edge detector (HED) networks on all three datasets.
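The 2.5D idea can be sketched as follows: three adjacent T2-weighted slices are stacked as input channels so the network segments the middle slice with through-plane context. The tiny convolutional stand-in and tensor shapes below are illustrative; ProGNet itself is an optimized U-Net.

```python
# 2.5D slice-stacking sketch (illustrative stand-in, not ProGNet itself).
import torch
import torch.nn as nn

def make_2_5d_batch(volume):
    """volume: (S, H, W) MRI stack -> (S-2, 3, H, W) batch of slice triplets."""
    return torch.stack([volume[i - 1:i + 2] for i in range(1, volume.shape[0] - 1)])

segmenter = nn.Sequential(                 # stand-in for the optimized U-Net
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 1), nn.Sigmoid(),     # per-pixel prostate probability
)

volume = torch.randn(24, 256, 256)         # dummy T2-weighted volume
batch = make_2_5d_batch(volume)            # shape (22, 3, 256, 256)
masks = segmenter(batch) > 0.5             # predicted whole-gland masks
```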