Photon-counting detectors in x-ray computed tomography (CT) are a major technological advancement: they provide additional energy information and improve the decomposition of the CT image into material images. This material decomposition problem is, however, a non-linear inverse problem that is difficult to solve, both in terms of computational expense and accuracy. The most widely accepted solution consists in defining an optimization problem based on a maximum likelihood (ML) estimate with Poisson statistics, a model-based approach that depends strongly on the considered forward model and the chosen optimization solver. As a consequence, the material decomposition result can be noisy and slow to compute. To incorporate a data-driven enhancement into the ML estimate, we propose a deep learning post-processing technique. Our approach is based on convolutional residual blocks that mimic the updates of an iterative optimization process and take the ML estimate as input. Our architecture therefore implicitly accounts for the physical models of the problem and consequently needs less training data and fewer parameters than other standard convolutional networks typically used in medical imaging. We have studied our deep learning post-processing in simulation, first on a set of 350 Shepp-Logan-based phantoms and then on 600 numerical human phantoms. Our approach has shown improved denoising over two different ray-wise decomposition methods: one based on Newton’s method to solve the ML estimation, and one based on a linear least-squares approximation of the ML expression. We believe this new deep learning post-processing approach is a promising technique to denoise material-decomposed sinograms in photon-counting CT.
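For reference, the ray-wise Poisson ML decomposition problem mentioned above typically takes the following standard form; the notation here ($y_b$, $S_b$, $\tau_m$, $a_m$) is ours for illustration and is not taken from the paper:

\[
  \bar{y}_b(a) \;=\; \int S_b(E)\, \exp\!\Big(-\sum_{m=1}^{M} a_m\, \tau_m(E)\Big)\, dE,
  \qquad
  \hat{a}_{\mathrm{ML}} \;=\; \arg\min_{a \ge 0} \sum_{b=1}^{B} \Big( \bar{y}_b(a) - y_b \ln \bar{y}_b(a) \Big),
\]

where $y_b$ is the measured photon count in energy bin $b$, $S_b(E)$ the effective bin spectrum, $\tau_m(E)$ the energy-dependent attenuation of basis material $m$, and $a_m$ the material line integrals that form the decomposed sinograms.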
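Purely as an illustration of the kind of architecture the abstract describes (not the authors' exact network), a chain of convolutional residual blocks post-processing the ML-decomposed sinogram might look as follows in PyTorch; all layer choices, channel counts, and sizes are assumptions:

```python
# Sketch only: residual post-processing of an ML material-decomposed
# sinogram, where each residual block plays the role of one additive
# update of an iterative solver. Names and hyperparameters are assumed.
import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    """Conv-BN-ReLU-Conv block whose output is added to its input,
    mimicking a single update step x_{k+1} = x_k + f_k(x_k)."""

    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.body(x)


class MLPostProcessor(nn.Module):
    """Maps a noisy ML-decomposed sinogram (one channel per basis
    material) to a denoised one via a chain of residual updates."""

    def __init__(self, num_materials: int = 2, features: int = 32,
                 num_blocks: int = 5):
        super().__init__()
        self.head = nn.Conv2d(num_materials, features, 3, padding=1)
        self.blocks = nn.Sequential(
            *[ResidualBlock(features) for _ in range(num_blocks)])
        self.tail = nn.Conv2d(features, num_materials, 3, padding=1)

    def forward(self, ml_sinogram: torch.Tensor) -> torch.Tensor:
        # Global skip: the network only learns a correction to the ML input.
        return ml_sinogram + self.tail(self.blocks(self.head(ml_sinogram)))


# Usage on a batch of 2-material sinograms (detector bins x views):
net = MLPostProcessor(num_materials=2)
noisy = torch.randn(4, 2, 128, 180)   # placeholder ML decompositions
denoised = net(noisy)
print(denoised.shape)                 # torch.Size([4, 2, 128, 180])
```

The global skip connection reflects the idea stated in the abstract: because the input is already a physics-based ML estimate, the network need only learn a residual correction, which is one reason such an architecture can get by with fewer parameters and less training data than a generic convolutional network.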