Most existing fully automatic or semi-automatic medical image segmentation methods start from reconstructed images. However, a framework for joint segmentation and image reconstruction can be beneficial because the two tasks are mutually dependent: better segmentation can improve image reconstruction and vice versa. We propose to perform joint PET image reconstruction and fully automatic PET image segmentation, using the CT image from a PET-CT scanner as a given input. Within a unified framework, our proposed method generates a PET image and a segmentation mask using two connected trained networks: 1) a denoising network that denoises the PET image using boundary information from the segmentation network, with the reconstruction algorithm exploiting the denoised image recovered from this network; and 2) a segmentation network that estimates the lesion and background (e.g., liver) masks from PET/CT information. A boundary indicator image is generated from the gradients of the segmentation masks. We simulated extremely low-count PET, typical of Y-90 imaging, where traditional segmentation and reconstruction methods tend to perform poorly. For PET reconstruction, the proposed method using the true boundary improves CNR (RMSE) by 28.9% (49.1%) compared with EM and by 16.8% (13.2%) compared with the proposed method without the boundary. For multi-modal segmentation, the proposed method improves the global Dice score for the tumor by 70.6% compared with the proposed segmentation framework using only CT information.
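To make the boundary-guidance step concrete, the following is a minimal sketch, not the paper's implementation, of forming a boundary indicator image as the normalized gradient magnitude of a segmentation mask; the function name, NumPy-based formulation, and 2D toy mask are assumptions for illustration only.

```python
import numpy as np

def boundary_indicator(mask, eps=1e-6):
    """Hypothetical boundary indicator: gradient magnitude of a (soft) segmentation
    mask, normalized to [0, 1]. High values mark lesion/background boundaries and
    could guide the denoising step during reconstruction."""
    gy, gx = np.gradient(mask.astype(float))   # spatial gradients of the mask
    g = np.sqrt(gx ** 2 + gy ** 2)             # gradient magnitude
    return g / (g.max() + eps)                 # normalize to [0, 1]

# Toy usage: a square "lesion" mask yields a ring-shaped boundary map.
mask = np.zeros((64, 64))
mask[20:40, 20:40] = 1.0
b = boundary_indicator(mask)
print(b.shape, float(b.max()))
```

In this sketch the indicator is computed once per segmentation update; how it is injected into the denoising network is specific to the proposed architecture and is not reproduced here.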