Cardiac CT (CCT) is of vital importance in heart disease diagnosis but is conventionally limited by its complex workflow, which requires dedicated phase and bolus monitoring devices [e.g., electrocardiogram (ECG) gating]. Our previous work has demonstrated the possibility of replacing ECG devices with deep learning (DL)-based monitoring of continuously acquired pulsed-mode projections (PMPs, i.e., only a few sparsely sampled projections per gantry rotation). In this work, we report the development of a new projection-domain DL-based cardiac phase estimation method that uses ensemble learning [i.e., training multiple convolutional neural networks (CNNs) in parallel] to estimate and reduce DL uncertainty. The estimated DL uncertainty information was then used to drive an analytical regularizer in a principled, time-dependent manner (i.e., stronger regularization when DL uncertainty is higher). Combined with our previous work on PMP-based bolus curve estimation, the proposed method could potentially be used to achieve autonomous cardiac scanning in a robust (i.e., reduced-uncertainty) manner without ECG and bolus timing devices.
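The uncertainty-driven regularization described above can be illustrated with a minimal sketch: the spread across ensemble members serves as a per-time-point uncertainty proxy, which sets the weight given to an analytical prior. The function name, the linear uncertainty-to-weight mapping, and the simple convex blend are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def uncertainty_weighted_estimate(ensemble_preds, prior, lam_max=1.0):
    """Blend ensemble-mean DL phase estimates with an analytical prior,
    weighting the prior more strongly where ensemble disagreement
    (a proxy for DL uncertainty) is high. The linear mapping from
    uncertainty to regularization strength is an illustrative choice.

    ensemble_preds: (n_models, n_timepoints) array of phase predictions.
    prior:          (n_timepoints,) analytical phase estimate.
    """
    mean = ensemble_preds.mean(axis=0)           # per-time-point ensemble mean
    std = ensemble_preds.std(axis=0)             # per-time-point uncertainty
    lam = lam_max * std / (std.max() + 1e-12)    # stronger lam where std is high
    return (1 - lam) * mean + lam * prior, lam
```

When the ensemble members agree (low std), the DL estimate dominates; where they disagree, the output is pulled toward the analytical prior.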
Modern CT enables fast volumetric helical acquisition with collimations up to 80 mm. These fast acquisitions are desirable to reduce patient time in the scanner and thus the likelihood of motion during the scan, especially for pediatric patients. Traditional approximate analytic reconstruction methods produce cone-beam artifacts in wide-coverage helical scan modes. These artifacts limit the clinical utility of wide-coverage acquisitions, as lower collimations are often selected to reduce their influence. Here, we develop and demonstrate the merits of a fast and effective analytic method for helical cone-beam artifact reduction (CBAR) which is suitable for clinical CT reconstruction. The hybrid reconstruction method described here reconstructs two image volumes: one with lower image noise and one with lower levels of cone-beam artifact. The images are then combined using a two-dimensional Fourier blending approach. We demonstrate the method's effectiveness using phantoms (uniform and anthropomorphic) and clinical image data from head and neck exams, in which these artifacts are most visible. When compared with traditional weighting, helical CBAR exhibited a quantitative reduction in cone-beam artifacts, comparable noise values in uniform water phantoms, and a Likert-score increase from 2.8/5 to 4.1/5 (over a range of neurological CT scan types, N = 9). In conclusion, the frequency-blending hybrid reconstruction for helical CBAR has been demonstrated to be both fast and effective, providing higher diagnostic confidence when reading images from wide-coverage acquisitions and potentially enabling more frequent use of shorter-duration, wide-coverage helical scan modes such as 80 mm collimation.
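The two-dimensional Fourier blending step can be sketched as follows: low spatial frequencies (where cone-beam shading tends to live) are taken from the artifact-reduced volume, and high frequencies (noise texture and detail) from the low-noise volume. The Gaussian transition window and the `cutoff` parameter are illustrative assumptions, not the paper's exact blending weights.

```python
import numpy as np

def fourier_blend(img_low_noise, img_low_artifact, cutoff=0.1):
    """Blend two reconstructions of the same slice in 2-D Fourier space:
    keep low spatial frequencies from the artifact-reduced image and
    high frequencies from the low-noise image. `cutoff` is the Gaussian
    transition scale in cycles/pixel (an assumed parameterization)."""
    F_noise = np.fft.fft2(img_low_noise)
    F_art = np.fft.fft2(img_low_artifact)
    fy = np.fft.fftfreq(img_low_noise.shape[0])[:, None]
    fx = np.fft.fftfreq(img_low_noise.shape[1])[None, :]
    r2 = fx**2 + fy**2
    w_low = np.exp(-r2 / (2 * cutoff**2))        # ~1 near DC, ~0 at high freq
    blended = w_low * F_art + (1 - w_low) * F_noise
    return np.fft.ifft2(blended).real
```

Because the blend is a convex combination per frequency, regions where the two inputs agree are passed through unchanged.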
Cardiac CT plays an important role in diagnosing heart diseases but is conventionally limited by its complex workflow that requires dedicated phase and bolus tracking [e.g., electrocardiogram (ECG) gating]. This work reports initial progress towards robust and autonomous cardiac CT exams through deep learning (DL) analysis of pulsed-mode projections (PMPs). To this end, cardiac phase and its uncertainty were simultaneously estimated using a novel projection domain cardiac phase estimation network (PhaseNet), which utilizes a sliding-window multi-channel feature extraction approach and a long short-term memory (LSTM) block to extract temporal correlation between time-distributed PMPs. Monte-Carlo dropout layers were utilized to predict the uncertainty of deep learning-based cardiac phase prediction. The performance of the proposed phase estimation pipeline was evaluated using accurate physics-based emulated data.
PhaseNet demonstrated improved phase estimation accuracy compared to more standard methods in terms of RMSE (~43% improvement vs. a standard CNN-LSTM; ~17% improvement vs. a multi-channel residual network [ResNet]), achieving accurate phase estimation with <8% RMSE in cardiac phase (phase ranges from 0 to 100%). These findings suggest that the cardiac phase can be accurately estimated with the proposed projection domain approach. Combined with our previous work on PMP-based bolus curve estimation, the proposed method could potentially be used to achieve autonomous cardiac CT scanning without an ECG device or expert-in-the-loop bolus timing.
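The Monte-Carlo dropout uncertainty estimation mentioned above can be sketched generically: the network is evaluated repeatedly with dropout left active at inference time, and the spread of the predictions approximates the model's epistemic uncertainty. Here `predict_fn` is a stand-in for a stochastic forward pass through PhaseNet (an assumption for illustration; the real network and its dropout placement are not reproduced).

```python
import numpy as np

def mc_dropout_predict(predict_fn, x, n_samples=32, rng=None):
    """Monte-Carlo dropout at inference: run the (stochastic) network
    `n_samples` times and return the per-output mean prediction and
    standard deviation, the latter serving as an uncertainty estimate.

    predict_fn(x, rng) -> prediction array; must be stochastic
    (e.g., a forward pass with dropout layers kept active)."""
    rng = np.random.default_rng(rng)
    samples = np.stack([predict_fn(x, rng) for _ in range(n_samples)])
    return samples.mean(axis=0), samples.std(axis=0)
```

In a framework like PyTorch this corresponds to keeping dropout modules in training mode during inference while freezing everything else.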
Cardiac CT exams are some of the most complex CT exams due to the need to carefully time the scan to capture the heart during a quiescent cardiac phase and when the intravenous contrast bolus is at its peak concentration in the left and/or right heart. We are interested in developing a robust and autonomous cardiac CT exam, using deep learning approaches to extract contrast and cardiac phase timing directly from projections. In this paper, we present a new approach to estimate contrast bolus timing directly from a sparse set of CT projections: a deep learning method that estimates contrast agent concentration in the left and right sides of the heart directly from a set of projections. We use a virtual imaging framework, derived from real patient datasets, to generate training and test data. We finally combine this with a simple analytical approach to decide on the start of the cardiac CT exam.
Cardiac CT is a safe, accurate, non-invasive method widely employed for diagnosis of coronary artery disease (CAD) and planning therapeutic interventions. Even with state-of-the-art CT technology, calcium blooming artifacts may limit the accuracy of coronary stenosis assessment. A variety of solutions to reduce blooming artifacts have been proposed, including hardware improvements, protocol optimizations, and software deblooming techniques [1-6]. Hardware developments and clinical studies (for protocol optimization or training data generation) can be expensive, time-consuming, and impractical. Hence, there is an opportunity for a Virtual Clinical Trial (VCT) framework [7-8] to help researchers evaluate the impact of various solutions on calcium blooming and to create training datasets for developing deep learning solutions for deblooming. In this paper, we present a new VCT framework for generating cardiac CT images with calcium blooming across a variety of CT hardware parameters, CT scan protocols, and CT reconstruction kernels. As an example, we use the VCT framework to investigate the impact of three common scan and reconstruction parameters (X-ray tube voltage, focal spot size, and reconstruction kernel) on calcium blooming artifacts. We conclude that tube voltage and reconstruction kernel have the most direct impact on calcium blooming, which is consistent with earlier clinical reports [9-12].
Cardiac CT exams are some of the most complex CT exams due to the need to carefully time the scan to capture the heart during the quiescent cardiac phase and when the contrast bolus is at its peak concentration. We are interested in developing a robust and autonomous cardiac CT protocol, using deep learning approaches to extract contrast timing and cardiac phase timing directly from pulsed projections. In this paper, we present a new approach to generate large amounts of clinically realistic virtual data for training deep learning networks. We propose a five-dimensional cardiac model generated from 4D cardiac coronary CT angiography (CTA) data with synthetic contrast bolus dynamics and patient ECG profiles. We apply deep learning to segment seven heart compartments and simulate intravenous contrast propagation through each compartment to insert a contrast bolus. Additional augmentation techniques, randomizing the bolus curve, patient ECG profile, acquisition timing, and patient motion, are applied to increase the amount of data that can be generated. We demonstrate good performance of the deep learning segmentation network, examples of simulated bolus curves using a realistic protocol, and good correspondence between virtually generated projections and real projections from patient scans.
Coronary Artery Disease (CAD) is the leading cause of death globally [1]. Modern cardiac computed tomography angiography (CCTA) is highly effective at identifying and assessing coronary blockages associated with CAD. The diagnostic value of this anatomical information can be substantially increased in combination with a non-invasive, low-dose, correlative, quantitative measure of blood supply to the myocardium. While CT perfusion has shown promise of providing such indications of ischemia, artifacts due to motion, beam hardening, and other factors confound clinical findings and can limit quantitative accuracy. In this paper, we investigate the impact of applying a novel motion correction algorithm to correct for motion in the myocardium. This motion compensation algorithm (originally designed to correct for the motion of the coronary arteries in order to improve CCTA images) has been shown to provide substantial improvements in both overall image quality and diagnostic accuracy of CCTA. We have adapted this technique for application beyond the coronary arteries and present an assessment of its impact on image quality and quantitative accuracy within the context of dual-energy CT perfusion imaging. We conclude that motion correction is a promising technique that can help foster the routine clinical use of dual-energy CT perfusion. When combined, the anatomical information of CCTA and the hemodynamic information from dual-energy CT perfusion should facilitate better clinical decisions about which patients would benefit from treatments such as stent placement, drug therapy, or surgery and help other patients avoid the risks and costs associated with unnecessary, invasive, diagnostic coronary angiography procedures.
Computer simulation tools for X-ray CT are important for research efforts in developing reconstruction methods, designing new CT architectures, and improving X-ray source and detector technologies. In this paper, we propose a physics-based modeling method for X-ray CT measurements with energy-integrating detectors. It accurately accounts for the energy, depth, and spatial-location dependence of the X-ray detection process, which is either ignored or oversimplified in most existing CT simulation methods. Compared with methods based on Monte Carlo simulations, it is computationally much more efficient due to the use of a look-up table for optical collection efficiency. To model the CT measurements, the proposed model considers five separate effects: energy- and location-dependent absorption of the incident X-rays, conversion of the absorbed X-rays into optical photons emitted by the scintillator, location-dependent collection of the emitted optical photons, quantum efficiency of converting from optical photons to electrons, and electronic noise. We evaluated the proposed method by comparing the noise levels in the reconstructed images from measured data and simulations of a GE LightSpeed VCT system. Using the results of a 20 cm water phantom and a 35 cm polyethylene (PE) disk at various X-ray tube voltages (kVp) and currents (mA), we demonstrated that the proposed method produces realistic CT simulations. The difference in noise standard deviation between measurements and simulations is approximately 2% for the water phantom and 10% for the PE phantom.
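The five-effect measurement chain can be sketched for a single detector pixel as below. All parameter values (the photons-per-keV light yield, quantum efficiency, electronic-noise sigma) are illustrative placeholders, not the paper's calibrated model, and the collection-efficiency look-up table is reduced to a precomputed per-bin factor.

```python
import numpy as np

def simulate_measurement(incident_counts, energies_keV, absorb_frac,
                         collect_eff, qe=0.8, sigma_e=50.0, rng=None):
    """Single-pixel sketch of the five-effect energy-integrating model:
    (1) energy-dependent absorption with Poisson quantum noise,
    (2) conversion of deposited energy to optical photons,
    (3) look-up-table optical collection efficiency,
    (4) photodiode quantum efficiency, and
    (5) additive electronic noise.
    Arrays are per energy bin; all constants are illustrative."""
    rng = np.random.default_rng(rng)
    n_absorbed = rng.poisson(incident_counts * absorb_frac)   # effect 1
    optical = n_absorbed * energies_keV * 54.0                # effect 2: ~photons/keV
    collected = optical * collect_eff                         # effect 3: LUT factor
    electrons = collected * qe                                # effect 4
    return electrons.sum() + rng.normal(0.0, sigma_e)         # effect 5
```

A full simulator would evaluate this per detector cell and per view; the look-up table for effect 3 is what makes the method far cheaper than Monte Carlo optical transport.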
We present an analysis of a task-based simulation study of coronary artery imaging via computed tomography (CT). We evaluate standard filtered backprojection (FBP) reconstruction and motion-compensated reconstruction of a moving cylindrical vessel that contains a hyper-intense lesion. Multiple conditions are simulated, including varying rest times of the vessel and varying motion orientations. A reference image with no motion was used for all comparisons. The images were segmented, and quantitative metrics for accurate segmentation were compared. The motion-compensated images have error metrics consistent with the static case for all rest times. The FBP reconstructions were visually inferior for shorter rest times and had significantly inferior metrics. This is the first demonstration of equivalent performance for a given task when the rest times are reduced well below the temporal aperture of the acquisition, using either advanced algorithms or different data acquisition schemes such as multi-source geometries.
Dual energy CT cardiac imaging is challenging due to cardiac motion and the resolution requirements of clinical applications. In this paper we investigate dual energy CT imaging via fast kVp switching acquisitions of a novel dynamic cardiac phantom. The described cardiac phantom is realistic in appearance, with pneumatic motion control driven by an ECG waveform. In the reported experiments, the phantom is driven by a 60 beats-per-minute simulated ECG waveform and inserted into a phantom torso cavity. A fast kVp switching axial step-and-shoot acquisition is detailed. The axial scan time at each table position exceeds one heart cycle so as to enable retrospective gating, which is performed to mitigate the resolution impact of heart motion. Processing of the fast kVp data is overviewed, and the resulting kVp, material-decomposed density, and monochromatic reconstructions are presented. Imaging results are described in the context of potential clinical cardiac applications.
Recently there has been significant interest in dual energy CT imaging, with several acquisition methods being actively pursued. Here we investigate fast kVp switching, where the kVp alternates between low and high every view. Fast kVp switching enables fine temporal registration, helical and axial acquisitions, and full field of view. It also presents several processing challenges. The rise and fall of the kVp, which occurs during the view integration period, is not instantaneous and complicates the measurement of the effective spectrum for low and high kVp views. Further, if the detector data acquisition system (DAS) and generator clocks are not fully synchronous, jitter is introduced in the kVp waveform relative to the view period.

In this paper we develop a method for estimating the resulting spectrum for low and high kVp views. The method utilizes static kVp acquisitions of air with a small bowtie filter as a basis set. A fast kVp acquisition of air with a small bowtie filter is performed, and the effective spectrum is estimated as a linear combination of the basis vectors. The effectiveness of this method is demonstrated through the reconstruction of a water phantom acquired with a fast kVp acquisition. The impact of jitter due to the generator and detector DAS clocks is explored via simulation. The error is measured relative to spectrum variation and material decomposition accuracy.
Linear discriminant analysis (LDA) is applied to dual kVp CT and used for tissue characterization. The potential to quantitatively model both malignant and benign hypo-intense liver lesions is evaluated by analysis of portal-phase, intravenous CT scan data obtained on human patients. Masses with an a priori classification are mapped to a distribution of points in basis material space. The degree of localization of tissue types in the material basis space is related to both quantum noise and real compositional differences. The density maps are analyzed with LDA and studied with system simulations to differentiate these factors. The discriminant analysis is formulated so as to incorporate the known statistical properties of the data. Effective kVp separation and mAs relate to the precision of tissue localization. Bias in the material position is related to the degree of X-ray scatter and the partial-volume effect. Experimental data and simulations demonstrate that, for single energy (HU) imaging or image-based decomposition, pixel values of water-like tissues depend on proximity to other iodine-filled bodies. Beam-hardening errors cause a shift in image value on the scale of the difference sought between cancerous and cystic lesions. In contrast, projection-based decomposition, or its equivalent implemented on a carefully calibrated system, can provide accurate data. On such a system, LDA may provide novel quantitative capabilities for tissue characterization in dual energy CT.
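A discriminant analysis of this kind can be sketched as a Fisher linear discriminant in the two-dimensional basis-material space: the pooled within-class scatter matrix folds in the known noise statistics of the data, and the resulting direction is the one along which the two labeled tissue classes are best separated. This is a minimal sketch of the standard technique, not the paper's exact formulation.

```python
import numpy as np

def fisher_direction(X_a, X_b):
    """Fisher linear discriminant in basis-material space.

    X_a, X_b: (n_samples, 2) arrays of (e.g., water, iodine) density
    coordinates for two a-priori-labeled tissue classes. Returns the
    unit projection direction w maximizing between-class separation
    relative to pooled within-class scatter."""
    mu_a, mu_b = X_a.mean(axis=0), X_b.mean(axis=0)
    Sw = (np.cov(X_a, rowvar=False) * (len(X_a) - 1)
          + np.cov(X_b, rowvar=False) * (len(X_b) - 1))  # pooled scatter
    w = np.linalg.solve(Sw, mu_b - mu_a)
    return w / np.linalg.norm(w)
```

Projecting new lesion measurements onto `w` and thresholding gives the classification; the scatter-matrix weighting is what incorporates the known quantum-noise covariance.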
In a third-generation CT system, a single source projects the entire field of view (FOV) onto a large detector opposite the source. In multi-source CT imaging, a multitude of sources sequentially project a part of the FOV onto a much smaller detector. These sources may be distributed in both the trans-axial and axial directions in order to jointly cover the entire FOV. Scan data from multiple sources in the axial direction provide complementary information, which is not available in a conventional single-source CT system. In this work, an analytical 3D cone-beam reconstruction algorithm for multi-source CT is proposed. This approach has three distinctive features. First, multi-source data are rebinned transaxially to multiple offset third-generation datasets. Second, data points in sinograms from multiple source sets are either accepted or rejected for contribution to the backprojection of a given voxel. Third, instead of using a ramp filter, a Hilbert transform is combined with a parallel derivative to form the filtering mechanism. Phantom simulations are performed using a multi-source CT geometry and compared to a conventional third-generation CT geometry. We show that multi-source CT can extend the axial scan coverage to 120 mm without cone-beam artifacts, while a third-generation geometry results in compromised image quality at 60 mm of axial coverage. Moreover, given that the cone angle in the proposed geometry is limited to 7 degrees, there are no degrading effects such as the heel effect and scattered radiation, unlike in a third-generation geometry with comparable coverage. An additional benefit is the uniform flux profile, resulting in uniform image noise throughout the FOV and a uniform dose absorption profile.
We investigate how to achieve image reconstruction when the density map changes in time according to a known continuous motion field. We present an ART iterative algorithm with a projection operator and a backprojection operator that are matched to ensure fast convergence and are both computationally efficient. This algorithm applies to arbitrary continuous motion fields provided there is conservation of mass. Successful reconstruction results from computer-simulated data are presented. We also propose a method for evaluating the sufficiency of the data and predicting the image quality of the reconstruction based on both the acquired angular range and the known motion field. We suggest a method of selecting an angular acquisition range to deal with periodic motion and hypothesize that this method ensures data sufficiency. Simulation results are given to substantiate this hypothesis.