Convolutional neural network (CNN)-based CT denoising methods have attracted great interest for improving the image quality of low-dose CT (LDCT) images. However, CNNs require a large amount of paired data consisting of normal-dose CT (NDCT) and LDCT images, which are generally not available. In this work, we aim to synthesize paired data from NDCT images with an accurate image-domain noise insertion technique and investigate its effect on the denoising performance of the CNN. Fan-beam CT images were reconstructed using extended cardiac-torso (XCAT) phantoms with Poisson noise added to the projection data to simulate NDCT and LDCT. We estimated local noise power spectra and a variance map from an NDCT image using information on photon statistics and reconstruction parameters. We then synthesized image-domain noise by filtering and scaling white Gaussian noise using the local noise power spectrum and variance map, respectively. The CNN architecture was U-Net, and the loss function was a weighted sum of mean squared error, perceptual loss, and adversarial loss. The CNN was trained with NDCT and LDCT pairs (CNN-Ideal) or NDCT and synthesized LDCT pairs (CNN-Proposed). To evaluate denoising performance, we measured the root mean squared error (RMSE), structural similarity index (SSIM), noise power spectrum (NPS), and modulation transfer function (MTF). The MTF was estimated from the edge spread function of a circular object with 12 mm diameter and 60 HU contrast. Denoising results from CNN-Ideal and CNN-Proposed show no significant difference in any metric, with both achieving favorable RMSE and SSIM relative to NDCT and NPS shapes similar to that of NDCT.
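As a rough illustration of the training objective described above, the sketch below combines MSE, perceptual, and adversarial terms into one weighted loss. The weights, feature extractor, and discriminator are hypothetical placeholders, not those used in this work.

```python
import torch
import torch.nn as nn

class CompositeDenoisingLoss(nn.Module):
    """Weighted sum of MSE, perceptual, and adversarial losses (illustrative weights only)."""

    def __init__(self, feature_extractor, discriminator,
                 w_mse=1.0, w_perc=0.1, w_adv=1e-3):
        super().__init__()
        self.feature_extractor = feature_extractor  # frozen feature network (assumed, e.g. VGG-style)
        self.discriminator = discriminator          # adversarial critic (assumed)
        self.w_mse, self.w_perc, self.w_adv = w_mse, w_perc, w_adv
        self.mse = nn.MSELoss()
        self.bce = nn.BCEWithLogitsLoss()

    def forward(self, denoised, target):
        loss_mse = self.mse(denoised, target)
        loss_perc = self.mse(self.feature_extractor(denoised),
                             self.feature_extractor(target))
        logits = self.discriminator(denoised)
        # Generator-side adversarial term: encourage the critic to label the output as "real".
        loss_adv = self.bce(logits, torch.ones_like(logits))
        return self.w_mse * loss_mse + self.w_perc * loss_perc + self.w_adv * loss_adv
```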
Quantitative perfusion maps of cerebral blood flow, cerebral blood volume, and transit time generated using dynamic imaging data enable physicians to evaluate and prescribe the optimal plan of care for stroke patients. Validation is needed to increase the accuracy and reproducibility of these data, which can vary depending on the scanning technique and post-processing algorithm. In this work, we expand the XCAT brain phantom to incorporate a realistic model of the contrast agent dynamics in the cerebral vasculature and establish the ground truth to which the perfusion maps can be compared. For a specific stroke case, each tissue region's flow demand is calculated and used to determine the feeding flow in the upstream arteries and the draining flow in the downstream veins. Using the flow values and a contrast agent injection curve, the model calculates the input and output concentration curves for each structure in the brain. The contrast agent concentration within each structure at each time point is then calculated from the difference between the total amount of contrast agent that has entered and exited the structure up to that time point. A dynamic simulation framework uses these curves to define the contrast agent concentration within the phantom at each time point and generates simulated CT perfusion imaging data sets compatible with commercially available post-processing software. This development provides a realistic set of ground truth test data that enables quantitative validation and optimization of perfusion imaging and post-processing methods for stroke assessment.
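A minimal sketch of the mass-balance step described above, assuming uniformly sampled time points, numpy arrays of equal length, and a known structure volume (the function and variable names are illustrative):

```python
import numpy as np

def structure_concentration(c_in, c_out, q_in, q_out, volume_ml, dt_s):
    """Contrast concentration in a structure from cumulative inflow minus cumulative outflow.

    c_in, c_out : inlet/outlet contrast concentrations over time (mg/mL)
    q_in, q_out : inlet/outlet flow rates over time (mL/s)
    volume_ml   : structure volume (mL)
    dt_s        : sampling interval (s)
    """
    c_in, c_out = np.asarray(c_in, float), np.asarray(c_out, float)
    q_in, q_out = np.asarray(q_in, float), np.asarray(q_out, float)
    entered = np.cumsum(c_in * q_in) * dt_s    # total mass of contrast that has entered (mg)
    exited  = np.cumsum(c_out * q_out) * dt_s  # total mass that has exited (mg)
    return (entered - exited) / volume_ml      # concentration within the structure (mg/mL)
```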
KEYWORDS: Digital breast tomosynthesis, Mammography, Image processing, Denoising, Clinical trials, Computer simulations, Breast, Gaussian filters, Medical imaging, Monte Carlo methods
Image processing algorithms based on deep learning techniques are being developed for a wide range of medical applications. Processed medical images are typically evaluated with the same kind of image similarity metrics used for natural scenes, disregarding the medical task for which the images are intended. We propose a computational framework to estimate the clinical performance of image processing algorithms using virtual clinical trials. The proposed framework may provide an alternative method for regulatory evaluation of non-linear image processing algorithms. To illustrate this application of virtual clinical trials, we evaluated three algorithms to compute synthetic mammograms from digital breast tomosynthesis (DBT) scans based on convolutional neural networks previously used for denoising low-dose computed tomography scans. The inputs to the networks were one or more noisy DBT projections, and the networks were trained to minimize the difference between the output and the corresponding high-dose mammogram. DBT and mammography images simulated with the Monte Carlo code MC-GPU using realistic breast phantoms were used for network training and validation. The denoising algorithms were tested in a virtual clinical trial by generating 3000 synthetic mammograms from the public VICTRE dataset of simulated DBT scans. The detectability of a calcification cluster and a spiculated mass present in the images was calculated using an ensemble of 30 computational channelized Hotelling observers. The signal detectability results, which took into account anatomic and image reader variability, showed that the visibility of the mass was not affected by the post-processing algorithm, but that the resulting slight blurring of the images severely impacted the visibility of the calcification cluster. The evaluation of the algorithms using the pixel-based metrics peak signal-to-noise ratio and structural similarity in image patches was not able to predict the reduction in performance in the detectability of calcifications. These two metrics are computed over the whole image and do not consider any particular task, and might not be adequate to estimate the diagnostic performance of the post-processed images.
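For context, the sketch below shows a generic channelized Hotelling observer detectability computation on flattened image patches with a user-supplied channel matrix; it is a simplified stand-in, not the 30-member observer ensemble used in the study.

```python
import numpy as np

def cho_detectability(signal_present, signal_absent, channels):
    """Channelized Hotelling observer detectability index d'.

    signal_present : (n_images, n_pixels) flattened signal-present patches
    signal_absent  : (n_images, n_pixels) flattened signal-absent patches
    channels       : (n_pixels, n_channels) channel matrix (e.g. Laguerre-Gauss channels)
    """
    v_sp = signal_present @ channels          # channel outputs, signal present
    v_sa = signal_absent @ channels           # channel outputs, signal absent
    delta = v_sp.mean(axis=0) - v_sa.mean(axis=0)
    # Pooled intra-class covariance of the channel outputs.
    cov = 0.5 * (np.cov(v_sp, rowvar=False) + np.cov(v_sa, rowvar=False))
    template = np.linalg.solve(cov, delta)    # Hotelling template in channel space
    return float(np.sqrt(delta @ template))   # detectability index d'
```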
Noise simulation methods for computed tomography (CT) scans are powerful tools for assessing image quality at a range of doses without compromising patient care. Current state-of-the-art methods to simulate lower-dose images from standard-dose images insert Poisson or Gaussian noise into the raw projection data; however, these methods are not always feasible. The objective of this work was to develop an efficient tool to insert realistic, spatially correlated, locally varying noise into CT images in the image domain, utilizing information from the image to estimate the local noise power spectrum (NPS) and variance map. In this approach, normally distributed noise is filtered using the inverse Fourier transform of the square root of the estimated NPS to generate noise with the appropriate spatial correlation. The noise is element-wise multiplied by the standard deviation map to produce locally varying noise and is added to the noiseless or high-dose image. Results comparing the insertion of noise in the projection domain versus the proposed insertion of noise in the image domain demonstrate excellent agreement. While this image-domain method is not intended to replace projection-domain methods, it shows promise as an alternative for tasks where projection-domain methods are not practical, such as large-scale studies utilizing hundreds of noise realizations or cases where the raw data are not available.
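A simplified sketch of this image-domain insertion, assuming a single NPS estimate with its DC component at the array center; in practice the NPS and standard deviation map would be estimated locally from the image as described above.

```python
import numpy as np

def insert_correlated_noise(image, nps, std_map, rng=None):
    """Add spatially correlated, locally varying noise to a noiseless or high-dose image."""
    rng = np.random.default_rng() if rng is None else rng
    white = rng.standard_normal(image.shape)
    # Shaping white noise by sqrt(NPS) in the frequency domain is equivalent to
    # convolving with the inverse Fourier transform of sqrt(NPS).
    shaping = np.sqrt(np.fft.ifftshift(nps))
    correlated = np.real(np.fft.ifft2(np.fft.fft2(white) * shaping))
    correlated /= correlated.std()            # normalize to unit variance
    return image + correlated * std_map       # scale locally and add to the image
```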
Driven in large part by concerns about radiation exposure from computed tomography (CT), iterative reconstruction (IR) has emerged as a popular technique for dose reduction. Although IR clearly reduces image noise and improves resolution, its ability to maintain or improve low-contrast detectability over (possibly post-processed) filtered backprojection (FBP) reconstructions is unclear. In this work, we scanned a low-contrast phantom encased in an acrylic oval using two vendors’ scanners at 120 kVp at three dose levels for axial and helical acquisitions with and without automatic exposure control. Using the local noise power spectra of the FBP and IR images to guide the filter design, we developed a two-dimensional, angularly dependent Gaussian filter in the frequency domain that can be optimized to minimize the root-mean-square error between the image-domain-filtered FBP and IR reconstructions. The filter is extended to three dimensions by applying a through-slice Gaussian filter in the image domain. Using this three-dimensional, non-isotropic filtering approach on data with non-uniform statistics from both scanners, we were able to process the FBP reconstructions to closely match the low-contrast performance of IR images reconstructed from the same raw data. From this, we conclude that most or all of the noise reduction and low-contrast performance benefits of advanced reconstruction can be achieved with adaptive linear filtering of FBP reconstructions in the image domain.
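The in-plane filtering step might look roughly like the sketch below, where the angular dependence of the Gaussian bandwidth is left as a user-supplied, vectorized function; the actual parameterization and the optimization against the IR images are not shown. A through-slice Gaussian could then be applied in the image domain, for example with scipy.ndimage.gaussian_filter1d along the slice axis.

```python
import numpy as np

def angular_gaussian_filter(image, sigma_of_theta):
    """Apply a 2D frequency-domain Gaussian filter whose width varies with angle.

    sigma_of_theta : vectorized callable mapping angle (rad) to a frequency-domain
                     sigma in cycles/pixel, e.g. lambda t: 0.12 + 0.04 * np.cos(2 * t)
    """
    ny, nx = image.shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    radius = np.hypot(fx, fy)                 # radial spatial frequency
    theta = np.arctan2(fy, fx)                # angular coordinate in frequency space
    transfer = np.exp(-0.5 * (radius / sigma_of_theta(theta)) ** 2)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * transfer))
```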
Computer simulation is a powerful tool in CT; however, the long simulation times of complex phantoms and systems, especially when modeling many physical aspects (e.g., spectrum, finite detector and source size), hinder the ability to realistically and efficiently evaluate and optimize CT techniques. Long simulation times primarily result from tracing hundreds of line integrals through each of the hundreds of geometrical shapes defined within the phantom. However, when the goal is to perform dynamic simulations or test many scan protocols using a particular phantom, traditional simulation methods inefficiently and repeatedly calculate line integrals through the same set of structures even though only a few parameters change in each new case. In this work, we have developed a new simulation framework that overcomes such inefficiencies by dividing the phantom into material-specific regions with the same time attenuation profiles, acquiring and storing monoenergetic projections of the regions, and subsequently scaling and combining the projections to create equivalent polyenergetic sinograms. The simulation framework is especially efficient for the validation and optimization of CT perfusion, which requires analysis of many stroke cases and testing of hundreds of scan protocols on a realistic and complex numerical brain phantom. Using this updated framework to conduct a 31-time-point simulation with 80 mm of z-coverage of a brain phantom on two 16-core Linux servers, we reduced the simulation time from 62 hours to under 2.6 hours, a 95% reduction.
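Conceptually, the scaling-and-combining step can be sketched as below: stored per-region monoenergetic line integrals are reweighted by each region's energy-dependent attenuation and summed over a normalized spectrum. The names and data layout are illustrative, not the framework's actual interface.

```python
import numpy as np

def polyenergetic_sinogram(region_projs, mu_scale, spectrum, energies):
    """Combine stored per-region line integrals into a polyenergetic sinogram.

    region_projs : dict {region: 2D array} line integrals through that region at a
                   reference energy, computed once and reused across cases
    mu_scale     : dict {region: callable} mapping energy (keV) to the ratio of the
                   region's attenuation at that energy to its reference-energy value
    spectrum     : 1D array of relative photon fluence per energy bin (sums to 1)
    energies     : 1D array of energy bin centers (keV)
    """
    shape = next(iter(region_projs.values())).shape
    intensity = np.zeros(shape)
    for weight, energy in zip(spectrum, energies):
        total = sum(mu_scale[r](energy) * p for r, p in region_projs.items())
        intensity += weight * np.exp(-total)  # detected fluence fraction in this bin
    return -np.log(intensity)                 # log-normalized polyenergetic sinogram
```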
Physicians rely on CT perfusion (CTP) images and quantitative image data, including cerebral blood flow, cerebral blood volume, and bolus arrival delay, to diagnose and treat stroke patients. However, the quantification of these metrics may vary depending on the computational method used. Therefore, we have developed a dynamic and realistic digital brain phantom upon which CTP scans can be simulated based on a set of ground truth scenarios. Building upon the previously developed 4D extended cardiac-torso (XCAT) phantom containing a highly detailed brain model, this work expands the intricate vasculature by semi-automatically segmenting existing MRA data and fitting nonuniform rational B-spline surfaces to the new vessels. Using user-supplied time attenuation curves as reference, the model dynamically varies the contrast enhancement in the vessels. At each time point, the iodine concentration in the arteries and veins is calculated from the curves, and the material composition of the blood changes to reflect the expected values. CatSim, a CT system simulator, generates simulated data sets of this dynamic digital phantom, which can be further analyzed to validate CTP studies and post-processing methods. The development of this dynamic and realistic digital phantom provides a valuable resource with which current uncertainties and controversies surrounding the quantitative computations generated from CTP data can be examined and resolved.
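A small sketch of the enhancement-to-concentration step, assuming a linear relationship between iodine concentration and CT number with a scanner- and kVp-dependent conversion factor; the factor is left as a parameter rather than a value taken from this work.

```python
def iodine_concentration(tac_hu, baseline_hu, hu_per_mg_ml):
    """Convert a vessel time attenuation curve (HU) to iodine concentration (mg/mL).

    tac_hu       : sequence of CT numbers over time for the vessel
    baseline_hu  : unenhanced CT number of blood
    hu_per_mg_ml : enhancement per unit iodine concentration (scanner/kVp dependent)
    """
    return [(hu - baseline_hu) / hu_per_mg_ml for hu in tac_hu]
```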