28 January 2021 DeepAMO: a multi-slice, multi-view anthropomorphic model observer for visual detection tasks performed on volume images
Ye Li, Junyu Chen, Justin L. Brown, S. Ted Treves, Xinhua Cao, Frederic H. Fahey, George Sgouros, Wesley E. Bolch, Eric C. Frey
Funded by: National Institute of Biomedical Imaging and Bioengineering (NIBIB)
Abstract

Purpose: We propose a deep learning-based anthropomorphic model observer (DeepAMO) for image quality evaluation of multi-orientation, multi-slice image sets with respect to a clinically realistic 3D defect detection task.

Approach: The DeepAMO is developed based on a hypothetical model of the decision process of a human reader performing a detection task using a 3D volume. The DeepAMO comprises three sequential stages: defect segmentation, defect confirmation (DC), and rating value inference. The input to the DeepAMO is a composite image, typical of that used to view 3D volumes in clinical practice. The output is a rating value designed to reproduce a human observer's defect detection performance. In stages 2 and 3, we propose: (1) a projection-based DC block that confirms defect presence in two orthogonal 2D orientations and (2) a calibration method that "learns" the mapping from the features of stage 2 to the distribution of observer ratings from the human observer rating data (thus modeling inter- or intraobserver variability) using a mixture density network. We implemented and evaluated the DeepAMO in the context of Tc-99m DMSA SPECT imaging. A human observer study was conducted, with two medical imaging physics graduate students serving as observers. A 5 × 2-fold cross-validation experiment was conducted to test the statistical equivalence in defect detection performance between the DeepAMO and the human observer. We also compared the performance of the DeepAMO to an unoptimized implementation of a scanning linear discriminant observer (SLDO).
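The stage-3 calibration idea — mapping a feature vector to a full distribution over observer ratings rather than a point estimate — can be sketched with a minimal mixture density network head. This is an illustrative numpy-only forward pass, not the paper's implementation; the weight matrices, feature dimension, and rating scale are all hypothetical placeholders.

```python
import numpy as np

def mdn_rating_density(features, W_pi, W_mu, W_sigma, ratings):
    """Minimal mixture density network head (illustrative sketch).

    Maps a feature vector to a Gaussian mixture density evaluated at
    candidate rating values, so the model outputs a *distribution* of
    ratings (capturing inter-/intraobserver variability) rather than
    a single score. All weights here are hypothetical.
    """
    logits = features @ W_pi
    pi = np.exp(logits - logits.max())
    pi /= pi.sum()                                        # mixture weights (softmax)
    mu = features @ W_mu                                  # component means
    sigma = np.log1p(np.exp(features @ W_sigma)) + 1e-6   # softplus keeps stddevs positive
    # Evaluate each Gaussian component at each candidate rating value
    diff = ratings[:, None] - mu[None, :]
    comp = np.exp(-0.5 * (diff / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return comp @ pi                                      # mixture density per rating

# Toy usage with random weights: 8-dim features, 3 mixture components,
# density evaluated on a hypothetical 1-6 rating scale.
rng = np.random.default_rng(0)
feat = rng.normal(size=8)
dens = mdn_rating_density(feat,
                          rng.normal(size=(8, 3)),
                          rng.normal(size=(8, 3)),
                          rng.normal(size=(8, 3)),
                          np.linspace(1.0, 6.0, 6))
```

In training, such a head would be fit by maximizing the likelihood of the recorded human ratings under the predicted mixture, which is what lets it reproduce rating variability rather than a single deterministic output.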

Results: The results show that the DeepAMO's and human observer's performances on unseen images were statistically equivalent with a margin of difference (ΔAUC) of 0.0426 at p < 0.05, using 288 training images. A limited implementation of an SLDO had a substantially higher AUC (0.99) compared to the DeepAMO and human observer.
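The 5 × 2-fold cross-validation comparison can be illustrated with Dietterich's 5x2cv paired t statistic, computed from the per-fold AUC differences between the two observers. This is a generic sketch of that statistic, not the paper's exact equivalence-testing procedure; the AUC differences below are made-up example numbers.

```python
import numpy as np

def five_by_two_cv_t(diffs):
    """Dietterich's 5x2cv paired t statistic (illustrative sketch).

    diffs: shape (5, 2) array of performance differences
    (e.g., AUC_model - AUC_human) on the two folds of each of
    five random 2-fold splits. Returns a t statistic with ~5 dof.
    """
    diffs = np.asarray(diffs, dtype=float)
    p_bar = diffs.mean(axis=1, keepdims=True)       # mean difference per replication
    s2 = ((diffs - p_bar) ** 2).sum(axis=1)         # variance estimate per replication
    return diffs[0, 0] / np.sqrt(s2.mean())         # first-fold difference / pooled SE

# Hypothetical AUC differences across 5 replications x 2 folds
t = five_by_two_cv_t([[0.01, -0.02], [0.03, 0.00],
                      [-0.01, 0.02], [0.02, 0.01], [0.00, -0.03]])
```

A small |t| (relative to a t distribution with 5 degrees of freedom) is consistent with the two observers performing equivalently within the stated ΔAUC margin.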

Conclusion: The results show that the DeepAMO has the potential to reproduce the absolute performance, and not just the relative ranking, of human observers on a clinically realistic defect detection task, and that building conceptual components of the human reading process into deep learning-based models can allow training of these models in settings where limited training images are available.

© 2021 Society of Photo-Optical Instrumentation Engineers (SPIE) 2329-4302/2021/$28.00 © 2021 SPIE
Ye Li, Junyu Chen, Justin L. Brown, S. Ted Treves, Xinhua Cao, Frederic H. Fahey, George Sgouros, Wesley E. Bolch, and Eric C. Frey "DeepAMO: a multi-slice, multi-view anthropomorphic model observer for visual detection tasks performed on volume images," Journal of Medical Imaging 8(4), 041204 (28 January 2021). https://doi.org/10.1117/1.JMI.8.4.041204
Received: 2 June 2020; Accepted: 31 December 2020; Published: 28 January 2021
CITATIONS
Cited by 5 scholarly publications.
KEYWORDS
3D modeling
Performance modeling
Image segmentation
Data modeling
Defect detection
3D image processing
Computer simulations