Whole-brain, high-temporal-resolution CT perfusion (CTP) is now feasible with wide-detector-row CT scanners, but the optimal distribution of dose across the dynamic images remains unknown. In this study, we investigated the accuracy of perfusion parameters estimated in digital perfusion phantoms generated at various temporal resolutions with a fixed total scan dose. In accordance with CTP guidelines, the simulated dose was set to give a time-density curve (TDC) noise of 10 HU at a sampling interval of 2.0 s over 60 s; the higher temporal resolutions of 1.0 and 0.5 s intervals were therefore investigated at 14 and 20 HU of noise, respectively. Monte Carlo simulations with known ground-truth perfusion were conducted to test the performance of model-independent and model-dependent deconvolution algorithms as a function of temporal resolution at isodose. Tissue TDCs were simulated by convolving gamma-variate, linear, or boxcar residue functions with a patient arterial TDC, adding Gaussian noise at the appropriate level, and then sampling at the investigated temporal resolutions. A digital brain perfusion phantom with physiological ground-truth perfusion was similarly investigated. Only cerebral blood flow (CBF) estimates obtained with the model-dependent algorithm improved marginally at higher temporal resolution, as indicated by the mean absolute error (MAE; 7.1±4.6 ml/min/100 g at 0.5 s vs. 9.6±6.0 ml/min/100 g at 2.0 s), but not those obtained with the model-independent algorithm (MAE: 11.6±11.4 ml/min/100 g at 0.5 s vs. 11.3±11.7 ml/min/100 g at 2.0 s). Higher temporal resolution did not improve parameter estimation in the brain perfusion phantom. For the investigated temporal resolutions at the simulated CTP dose, the effect of how dose is distributed across the dynamic images on perfusion parameter accuracy appears negligible.
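The simulation scheme described above (tissue TDC = convolution of an arterial TDC with a residue function, plus Gaussian noise whose level scales with the sampling interval to keep total dose fixed) can be sketched as follows. This is a minimal illustrative sketch, not the study's code: the gamma-variate AIF shape, the boxcar residue function, the MTT value, and all parameter defaults are assumptions introduced here for demonstration; only the noise levels (10/14/20 HU at 2.0/1.0/0.5 s) and the 60 s duration come from the abstract.

```python
import numpy as np

def gamma_variate_aif(t, t0=5.0, alpha=3.0, beta=1.5, peak=60.0):
    """Illustrative gamma-variate arterial input function (HU); assumed shape."""
    y = np.zeros_like(t)
    m = t > t0
    y[m] = ((t[m] - t0) ** alpha) * np.exp(-(t[m] - t0) / beta)
    return y / y.max() * peak

def residue_boxcar(t, mtt=4.0):
    """Boxcar residue function: all tracer stays for MTT seconds, then leaves."""
    return (t < mtt).astype(float)

def simulate_tissue_tdc(dt, duration=60.0, cbf=50.0, noise_at_2s=10.0, rng=None):
    """Tissue TDC = CBF * (AIF convolved with residue) + Gaussian noise.

    The noise standard deviation scales as 1/sqrt(dt) so that the total
    simulated dose over the fixed 60 s acquisition is the same at every
    sampling interval (the isodose condition in the abstract).
    """
    rng = rng or np.random.default_rng(0)
    t = np.arange(0.0, duration, dt)
    aif = gamma_variate_aif(t)
    r = residue_boxcar(t)
    # Discrete approximation of the continuous convolution integral;
    # CBF in ml/min/100 g is converted to ml/s/ml of tissue via /6000.
    tissue = (cbf / 6000.0) * np.convolve(aif, r)[: t.size] * dt
    sigma = noise_at_2s * np.sqrt(2.0 / dt)  # isodose noise scaling
    return t, tissue + rng.normal(0.0, sigma, t.size)

# The scaling reproduces the noise levels quoted in the abstract:
for dt in (2.0, 1.0, 0.5):
    print(f"dt={dt:.1f} s -> TDC noise = {10.0 * np.sqrt(2.0 / dt):.1f} HU")
```

Under this scaling, halving the sampling interval doubles the number of frames but raises per-frame noise by a factor of sqrt(2), so the three investigated protocols deposit the same total dose.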