Deep learning (DL) is being extensively investigated for low-dose computed tomography (CT). The success of DL lies in the availability of big data, which enables convolutional neural networks to learn the non-linear mapping from low-dose CT to target images. However, due to the commercial confidentiality of CT vendors, very little raw projection data is publicly available for simulating paired training data, which greatly limits the generalization and performance of such networks. In this paper, we propose a dual-task learning network (DTNet) for simultaneous low-dose CT simulation and denoising at arbitrary dose levels. The DTNet integrates low-dose CT simulation and denoising into a unified optimization framework by learning the joint distribution of low-dose and normal-dose CT data. Specifically, in the simulation task, we train the simulation network to learn a mapping from normal-dose to low-dose images at different levels, where the dose level is continuously controlled by a noise factor. In the denoising task, we propose a multi-level low-dose CT learning strategy to train the denoising network, learning a many-to-one mapping. The experimental results demonstrate the effectiveness of our proposed method for low-dose CT simulation and denoising at arbitrary dose levels.
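
A minimal PyTorch sketch of the dual-task idea follows; the network bodies (SimNet, DenoiseNet), the channel-concatenation conditioning on the noise factor, and the loss terms are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class SimNet(nn.Module):
    """Maps a normal-dose image plus a scalar noise factor to a simulated low-dose image."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1))

    def forward(self, ndct, noise_factor):
        # Broadcast the continuous noise factor into a conditioning channel.
        cond = noise_factor.view(-1, 1, 1, 1).expand_as(ndct)
        return self.body(torch.cat([ndct, cond], dim=1))

class DenoiseNet(nn.Module):
    """Maps a low-dose image at any dose level back to normal dose (many-to-one)."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1))

    def forward(self, ldct):
        return self.body(ldct)

sim, den = SimNet(), DenoiseNet()
opt = torch.optim.Adam(list(sim.parameters()) + list(den.parameters()), lr=1e-4)
mse = nn.MSELoss()

# One joint optimization step on (normal-dose, measured low-dose, dose level) samples.
ndct = torch.rand(4, 1, 64, 64)            # normal-dose patches
ldct = torch.rand(4, 1, 64, 64)            # paired measured low-dose patches
factor = torch.rand(4)                     # continuous noise factors in (0, 1)

sim_ld = sim(ndct, factor)                 # simulation task: normal-dose -> low-dose
den_nd = den(sim_ld)                       # denoising task: simulated low-dose -> normal-dose
loss = mse(sim_ld, ldct) + mse(den_nd, ndct) + mse(den(ldct), ndct)
opt.zero_grad(); loss.backward(); opt.step()

In this sketch the two tasks share one loss, so the simulated low-dose images at many noise factors also serve as training inputs for the many-to-one denoising mapping.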
Sparse-view sampling is one of the most effective ways to reduce radiation dose in CT imaging. However, sparse-view filtered back projection reconstructions contain prominent artifacts and noise that must be removed effectively to maintain diagnostic accuracy. In this paper, we propose a novel sparse-view CT reconstruction framework that integrates the projection-to-image and image-to-projection mappings to build a dual-domain closed-loop learning network, termed the closed-loop learning reconstruction network (CLRecon). Specifically, the primal mapping (i.e., projection-to-image mapping) contains a projection domain network, a backward projection module, and an image domain network. The dual mapping (i.e., image-to-projection mapping) contains an image domain network and a forward projection module. All modules are trained simultaneously during the training stage, and only the primal mapping is used at inference, so neither the inference time nor the hardware requirements increase compared with conventional hybrid-domain networks. Experiments on low-dose CT data demonstrate that the proposed CLRecon model obtains promising results in terms of edge preservation, texture recovery, and reconstruction accuracy in the sparse-view CT reconstruction task.
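
The closed-loop layout can be sketched in PyTorch as below; the tiny domain networks, the fixed placeholder system matrix standing in for the forward/backward projection modules, and the choice of loss terms are assumptions for illustration only.

import torch
import torch.nn as nn

n_views, n_bins, img_size = 60, 96, 64
A = torch.rand(n_views * n_bins, img_size * img_size) * 0.01   # placeholder system matrix

def forward_project(img):          # image -> sinogram (dual-mapping module)
    return (img.flatten(1) @ A.T).view(-1, 1, n_views, n_bins)

def back_project(sino):            # sinogram -> image (primal-mapping module)
    return (sino.flatten(1) @ A).view(-1, 1, img_size, img_size)

def cnn():                         # tiny CNN template used for every domain network
    return nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(32, 1, 3, padding=1))

proj_net, img_net_primal, img_net_dual = cnn(), cnn(), cnn()
params = (list(proj_net.parameters()) + list(img_net_primal.parameters())
          + list(img_net_dual.parameters()))
opt, mse = torch.optim.Adam(params, lr=1e-4), nn.MSELoss()

sparse_sino = torch.rand(2, 1, n_views, n_bins)    # sparse-view measurement
full_img = torch.rand(2, 1, img_size, img_size)    # reference image

# Primal mapping (training and inference): projection net -> backprojection -> image net.
restored_sino = proj_net(sparse_sino)
recon = img_net_primal(back_project(restored_sino))

# Dual mapping (training only): image net -> forward projection, closing the loop.
resino = forward_project(img_net_dual(recon))

loss = mse(recon, full_img) + mse(resino, restored_sino)
opt.zero_grad(); loss.backward(); opt.step()

At inference only the primal branch (proj_net, back_project, img_net_primal) is evaluated, which is why the cost matches a conventional hybrid-domain network.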
Dynamic imaging (such as computed tomography (CT) perfusion, dynamic CT angiography, dynamic positron emission tomography, and four-dimensional CT) is widely used in the clinic. The multiple-scan mechanism of dynamic imaging greatly increases radiation dose and prolongs acquisition time. To deal with these problems, low-mAs or sparse-view protocols are usually adopted, which lead to noisy or incomplete data for each frame. To obtain high-quality images from the corrupted data, a popular strategy is to incorporate the composite image reconstructed from the full dataset into the iterative reconstruction procedure. Previous studies have enforced each frame to approach the composite image in each iteration, which, however, introduces mixed temporal information into every frame. In this paper, we propose an average consistency (AC) model for dynamic CT image reconstruction. The core idea of AC is to enforce the average of all frames, rather than each individual frame, to approach the composite image in each iteration, which preserves image edges and noise characteristics while avoiding the intrusion of mixed temporal information. Experiments on a dynamic phantom and a patient dataset for CT perfusion imaging show that the proposed method obtains the best qualitative and quantitative results. We conclude that the AC model is a general framework and a superior way of using the composite image for dynamic CT reconstruction.
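
To make the distinction concrete, one possible formulation (with assumed notation: $x_t$ the $T$ frames, $x_c$ the composite image, $D(x_t)$ the per-frame data-fidelity and regularization terms, and $\beta$ a weighting parameter) contrasts the previous per-frame constraint with the average-consistency constraint:

\min_{\{x_t\}} \sum_{t=1}^{T} D(x_t) + \beta \sum_{t=1}^{T} \left\| x_t - x_c \right\|_2^2 \quad \text{(per-frame consistency)}

\min_{\{x_t\}} \sum_{t=1}^{T} D(x_t) + \beta \left\| \frac{1}{T} \sum_{t=1}^{T} x_t - x_c \right\|_2^2 \quad \text{(average consistency)}

Under the second penalty only the temporal mean is pulled toward the composite image, so each individual frame is free to keep its own temporal information.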
For a very long time, low-dose computed tomography (CT) imaging techniques have relied on either preprocessing the projection data or regularizing the iterative reconstruction, while the conventional filtered backprojection (FBP) algorithm itself has rarely been studied. In this work, we show that the intermediate data produced during FBP possess some fascinating properties and can be readily processed to reduce noise and artifacts. The FBP algorithm can be decomposed into three steps: filtering, view-by-view backprojection, and summing. The data after view-by-view backprojection naturally form a tensor, which contains useful information that can be exploited in higher dimensionality. We introduce a sorting operation on this tensor along the angular direction based on pixel intensity, performed independently for each point in the image plane. Through the sorting operation, the structures of the object are explicitly encoded into the tensor and the artifacts are automatically driven into the top and bottom slices of the tensor. The sorted tensor also provides high-dimensional information and good low-rank properties, so any advanced processing method can be applied. In the experiments, we demonstrate that under the proposed scheme, even simple Gaussian smoothing can remove the streaking artifacts in the ultra-low-dose case with almost no compromise in image resolution. The scheme presented in this paper is a heuristic idea for developing new low-dose CT imaging algorithms.
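
The sorting scheme can be illustrated in a few lines of NumPy/SciPy; the per-view backprojection tensor is assumed to be already available (random data stands in for it here), and the smoothing parameters are arbitrary.

import numpy as np
from scipy.ndimage import gaussian_filter

n_views, H, W = 360, 128, 128
bp_tensor = np.random.rand(n_views, H, W)      # one filtered, backprojected slice per view

# Sort each pixel's values independently along the angular direction; outlier views
# (streaks) migrate toward the top and bottom slices of the sorted tensor.
sorted_tensor = np.sort(bp_tensor, axis=0)

# Any processing can now act on the sorted tensor; even plain Gaussian smoothing
# along the angular axis suppresses the outliers with little loss of resolution.
smoothed = gaussian_filter(sorted_tensor, sigma=(3, 0, 0))

# Summing over the view axis completes the (modified) filtered backprojection.
recon = smoothed.sum(axis=0)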