Energy-resolving CT (ErCT) with a photon-counting detector (PCD) can generate multi-energy data with high spatial resolution, which can be used to improve the contrast-to-noise ratio (CNR) of iodinated tissues and to reduce beam-hardening artifacts. In addition, ErCT allows for generating virtual mono-energetic CT images with improved CNR. However, most ErCT scanners are lab-built and rarely used in clinical research. Deep learning based methods can generate ErCT images from energy-integrating CT (EiCT) images via convolutional neural networks (CNNs) because of their capability in learning the features of EiCT and ErCT images. Nevertheless, current CNNs usually generate ErCT images at one energy bin at a time, leaving considerable room for improvement, such as generating ErCT images at multiple energy bins simultaneously. Therefore, in this work, we investigate leveraging a deep generative model (IuGAN-ErCT) to simultaneously generate ErCT images at multiple energy bins from existing EiCT images. Specifically, a unified generative adversarial network (GAN) is employed. With a single generator, the network learns the latent correlation between EiCT and ErCT images to estimate ErCT images from EiCT images. Moreover, to maintain the value accuracy of the different ErCT images, we introduce a fidelity loss function. In the experiment, 1384 abdomen and chest images collected from 22 patients were used to train the proposed IuGAN-ErCT method, and 130 slices were used for testing. Results show that the IuGAN-ErCT method generates more accurate ErCT images than the uGAN-ErCT method in both quantitative and qualitative evaluation.
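To make the single-generator idea concrete, below is a minimal PyTorch sketch of a generator that maps one EiCT slice to ErCT images at several energy bins in one pass, together with an L1-style fidelity term. The class name `UnifiedGenerator`, the layer sizes, and the exact form of the fidelity loss are illustrative assumptions, not the paper's configuration.

```python
# Minimal sketch, assuming an L1 fidelity term alongside the usual GAN loss.
import torch
import torch.nn as nn

class UnifiedGenerator(nn.Module):
    """Maps a 1-channel EiCT image to n_bins ErCT images in a single pass."""
    def __init__(self, n_bins: int = 4, width: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, n_bins, 3, padding=1),  # one output channel per energy bin
        )

    def forward(self, eict: torch.Tensor) -> torch.Tensor:
        return self.net(eict)

def fidelity_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    # L1 penalty keeping predicted CT values close to the reference ErCT
    # values in every bin; combined with the adversarial loss during training.
    return torch.mean(torch.abs(pred - target))

# Example: a batch of 2 EiCT slices -> 4 energy-bin images each.
g = UnifiedGenerator(n_bins=4)
ercts = g(torch.randn(2, 1, 256, 256))   # shape (2, 4, 256, 256)
```

Producing all bins from one generator lets the network share features across energy bins, which is the stated advantage over generating one bin at a time.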
Deep learning-based algorithms have been widely used in low-dose CT imaging and have achieved promising results. However, most of these algorithms only consider the information in the desired CT image itself, ignoring external information that can help improve imaging performance. Therefore, in this study, we present a convolutional neural network for low-dose CT reconstruction with non-local texture learning (NTL-CNN). Specifically, unlike traditional networks in CT imaging, the presented NTL-CNN approach takes into consideration the non-local features within adjacent slices of 3D CT images. Both the low-dose target CT images and the non-local features are then fed into a residual network to produce the desired high-quality CT images. Real patient datasets are used to evaluate the performance of the presented NTL-CNN. The corresponding experimental results demonstrate that the presented NTL-CNN approach obtains better CT images than the competing approaches, in terms of noise-induced artifact reduction and structural detail preservation.
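The following PyTorch sketch illustrates the general idea of conditioning a residual denoising CNN on neighboring slices of a 3D low-dose CT volume. The class name `NTLCNNSketch`, the plain convolutional backbone, and the choice of two neighboring slices are illustrative assumptions rather than the paper's actual architecture.

```python
# Minimal sketch, assuming adjacent slices are stacked as extra input channels.
import torch
import torch.nn as nn

class NTLCNNSketch(nn.Module):
    def __init__(self, n_neighbors: int = 2, width: int = 64, depth: int = 5):
        super().__init__()
        layers = [nn.Conv2d(1 + n_neighbors, width, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(width, 1, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, target_slice: torch.Tensor,
                neighbor_slices: torch.Tensor) -> torch.Tensor:
        # Stack the low-dose target slice with its adjacent slices so the
        # network can exploit texture that recurs across the volume.
        x = torch.cat([target_slice, neighbor_slices], dim=1)
        residual = self.body(x)
        return target_slice - residual  # residual learning: predict the noise

# Example: one 512x512 target slice plus its two adjacent slices.
net = NTLCNNSketch()
out = net(torch.randn(1, 1, 512, 512), torch.randn(1, 2, 512, 512))
```

Predicting the residual (noise) rather than the clean image directly is a common design choice for denoising CNNs, since the residual is typically easier to learn.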
With the development of deep learning (DL), many DL-based algorithms have been widely used in low-dose CT imaging and have achieved promising reconstruction performance. However, most DL-based algorithms need a large pre-collected set of low-dose/high-dose image pairs and train networks in a supervised end-to-end manner. In clinical practice, it is not feasible to obtain such a large amount of paired training data, especially the high-dose data. Therefore, in this work, we present a semi-supervised learned sinogram restoration network (SLSR-Net) for low-dose CT image reconstruction. The presented SLSR-Net consists of a supervised sub-network and an unsupervised sub-network. Specifically, unlike traditional supervised DL networks, which use only low-dose/high-dose sinogram pairs, the presented SLSR-Net can feed only a few supervised sinogram pairs together with massive unsupervised low-dose sinograms into the network training procedure. The supervised pairs are used to capture critical latent features (i.e., noise distribution and tissue characteristics) in a supervised way, and the unsupervised sub-network efficiently learns these features using a conventional weighted least-squares model with a regularization term. Moreover, another contribution of the presented SLSR-Net is to adaptively transfer the feature distribution learned by the supervised sub-network from the paired sinograms to the unsupervised sub-network with unlabeled low-dose sinograms, via a Kullback-Leibler divergence, to obtain high-fidelity sinograms. Finally, the filtered backprojection algorithm is used to reconstruct CT images from the restored sinograms. Real patient datasets are used to evaluate the performance of the presented SLSR-Net, and the corresponding experimental results show that, compared with the traditional supervised learning method, the presented SLSR-Net achieves competitive performance in terms of noise reduction and structure preservation in low-dose CT imaging.
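As a rough illustration of the distribution-transfer step, the sketch below matches the feature distribution of the unsupervised branch to that of the supervised branch with a KL divergence. Treating softmax-normalized flattened features as pseudo-distributions is an illustrative assumption; the paper's exact formulation may differ.

```python
# Minimal sketch, assuming features are normalized into distributions via softmax.
import torch
import torch.nn.functional as F

def kl_transfer_loss(sup_feat: torch.Tensor, unsup_feat: torch.Tensor) -> torch.Tensor:
    # Supervised branch acts as the "teacher": stop gradients into it so only
    # the unsupervised branch is pulled toward the learned distribution.
    p = F.softmax(sup_feat.detach().flatten(1), dim=1)
    log_q = F.log_softmax(unsup_feat.flatten(1), dim=1)
    # KL(p || q); F.kl_div expects log-probabilities as its first argument.
    return F.kl_div(log_q, p, reduction="batchmean")

# Example: features from the two sub-networks for a batch of 4 sinograms.
loss = kl_transfer_loss(torch.randn(4, 64, 32, 32), torch.randn(4, 64, 32, 32))
```

The total training objective would then combine this transfer term with the supervised restoration loss on the paired sinograms and the weighted least-squares model on the unlabeled ones.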
KEYWORDS: CT reconstruction, Computed tomography, Signal to noise ratio, 3D image processing, Tissues, 3D displays, 3D modeling, 3D image reconstruction, Visualization, Lithium
With an advanced photon counting detector, multi-energy computed tomography (MECT) can classify photons according to preset thresholds and then acquire CT measurements in multiple energy bins. However, the number of photons in one energy bin is limited compared with that in the conventional polychromatic spectrum, so the MECT images can suffer from noise-induced artifacts. To address this issue, in this work we present a MECT reconstruction scheme that incorporates low-rank tensor decomposition with spatial-spectral total variation (LRTD_SSTV) regularization. Additionally, prior information from the whole energy range, i.e., the average image of the MECT images, is introduced into the LRTD_SSTV regularization to further improve reconstruction performance. This reconstruction scheme is termed "LRTD_SSTVavi". Experimental results on a digital phantom demonstrate that the presented method produces better MECT images and more accurate basis images than the RPCA, TDL and LRTD_SSTV methods.
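To make the spatial-spectral total variation term concrete, here is a minimal NumPy sketch of an anisotropic SSTV penalty on a multi-energy image tensor. The function name `sstv`, the finite-difference form, and the equal weighting of the spectral and spatial terms are illustrative assumptions.

```python
# Minimal sketch, assuming X has shape (energy bins, height, width) and an
# anisotropic L1 penalty on first-order differences along each axis.
import numpy as np

def sstv(X: np.ndarray) -> float:
    d_spec = np.abs(np.diff(X, axis=0)).sum()  # differences across energy bins
    d_h    = np.abs(np.diff(X, axis=1)).sum()  # vertical spatial differences
    d_w    = np.abs(np.diff(X, axis=2)).sum()  # horizontal spatial differences
    return float(d_spec + d_h + d_w)

# Example: SSTV value of a 4-bin, 256x256 multi-energy image stack.
penalty = sstv(np.random.rand(4, 256, 256))
```

In the full reconstruction scheme, a penalty of this kind is minimized jointly with the low-rank tensor decomposition term and the data-fidelity term over the measured energy-bin sinograms.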