This study aims to simplify radiation therapy treatment planning by proposing an MRI-to-CT transformer-based denoising diffusion probabilistic model (CT-DDPM) to generate high-quality synthetic computed tomography (sCT) from magnetic resonance imaging (MRI). The goal is to reduce patient radiation dose and setup uncertainty by eliminating the need for CT simulation and image registration during treatment planning. The CT-DDPM uses a diffusion process with a shifted-window transformer network to transform MRI into sCT. The model comprises two processes: a forward process, which adds Gaussian noise to real CT scans to create noisy images, and a reverse process, which denoises the noisy CT scans using a V-shaped network (Vnet) conditioned on the corresponding MRI. With an optimally trained Swin-Vnet, the reverse process generates sCT scans matching the MRI anatomy. The method is evaluated using the mean absolute error (MAE) in Hounsfield units (HU), peak signal-to-noise ratio (PSNR), multi-scale structural similarity index (MS-SSIM), and normalized cross-correlation (NCC) between ground-truth CTs and sCTs. On the brain dataset, CT-DDPM achieved state-of-the-art quantitative results, with an MAE of 45.210±3.807 HU, a PSNR of 26.753±0.861 dB, an MS-SSIM of 0.964±0.005, and an NCC of 0.981±0.004. On the prostate dataset, the model also performed strongly, with an MAE of 55.492±8.281 HU, a PSNR of 28.912±2.591 dB, an MS-SSIM of 0.894±0.092, and an NCC of 0.945±0.054. Across both datasets, CT-DDPM significantly outperformed competing networks on most metrics, a finding corroborated by Student's paired t-test. The source code is available at https://github.com/shaoyanpan/Synthetic-CT-generation-from-MRI-using-3D-transformer-based-denoising-diffusion-model
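A minimal sketch of the forward and reverse diffusion processes described above, in PyTorch. The callable `model` is a hypothetical stand-in for the paper's Swin-Vnet, and the linear noise schedule, 1000 timesteps, and 2D slice shapes (B, C, H, W) are assumptions for illustration, not the authors' exact settings.

```python
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)      # assumed linear noise schedule
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)  # cumulative products, \bar{alpha}_t

def forward_process(ct, t, noise):
    """Forward process: add Gaussian noise to a real CT at timestep(s) t."""
    ab = alpha_bars[t].view(-1, 1, 1, 1)   # broadcast over (B, C, H, W)
    return ab.sqrt() * ct + (1.0 - ab).sqrt() * noise

@torch.no_grad()
def reverse_process(model, mri):
    """Reverse process: denoise pure noise into an sCT, conditioned on the MRI."""
    x = torch.randn_like(mri)
    for t in reversed(range(T)):
        t_batch = torch.full((x.shape[0],), t, dtype=torch.long)
        eps = model(x, t_batch, mri)       # network predicts the added noise
        ab, a = alpha_bars[t], alphas[t]
        # Posterior mean of x_{t-1} given the predicted noise (standard DDPM step).
        x = (x - (1.0 - a) / (1.0 - ab).sqrt() * eps) / a.sqrt()
        if t > 0:
            x = x + betas[t].sqrt() * torch.randn_like(x)
    return x
```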
Dual-energy cone-beam CT (DECBCT) has great potential for quantitative imaging tasks in CBCT-guided radiation therapy. However, the lack of a practical single-scan solution for data acquisition impedes the practice of CBCT-based adaptive radiotherapy (ART). In this work, we propose an efficient way to achieve DECBCT using primary-beam splitting in a single short scan or half-fan scan. To restore dual-energy sinograms complete enough for analytical image reconstruction, a conditional diffusion model is introduced to convert the acquired spectral-mixed sinogram into complete dual-energy sinograms via a data refinement strategy. The proposed method is compared with two other diffusion-model-based methods, and the preliminary results demonstrate its feasibility for quantitative imaging tasks, making it a promising solution for CBCT-based dose calculation and replanning.
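The abstract does not spell out the refinement strategy, so the following shows only one plausible form of diffusion-based data refinement for sinogram completion: at each reverse step, sinogram bins that were actually measured are overwritten with a noised copy of the measurement, so the model only synthesizes the missing bins. All names and shapes here are illustrative assumptions.

```python
import torch

def refine(x_t, measured, mask, t, alpha_bars):
    """Keep measured sinogram bins; let diffusion fill in the rest.

    x_t:      current reverse-diffusion estimate of a dual-energy sinogram
    measured: acquired spectral-mixed sinogram, zero-filled where missing
    mask:     1 where a bin was measured, 0 where it must be synthesized
    """
    ab = alpha_bars[t]
    # Noise the measurement to the same diffusion timestep as x_t.
    noised = ab.sqrt() * measured + (1.0 - ab).sqrt() * torch.randn_like(measured)
    return mask * noised + (1.0 - mask) * x_t
```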
Typical radiation therapy for head-and-neck (HN) cancer patients lasts more than a month. Anatomical variations often occur over the treatment course due to tumor shrinkage and weight loss, particularly for HN cancer patients. To maintain the accuracy of radiotherapy beam delivery, a weekly quality assurance (QA) CT is sometimes acquired to monitor the patient's anatomical changes and to re-plan the treatment if needed. However, re-planning is a labor-intensive and time-consuming process, so re-plan decisions are made cautiously. In this study, we aim to develop a deep learning-based method for automated multi-organ segmentation from HN CT images to rapidly evaluate anatomical variations. Our proposed method, named the detecting and boosting network, consists of a pre-trained fully convolutional one-stage object detector (FCOS) and two learnable subnetworks: a hierarchical block and a mask head. The FCOS extracts informative features from the CT and locates the volumes of interest (VOIs) of multiple organs. The hierarchical block enhances feature contrast around organ boundaries and thus improves organ classification. The mask head then segments each organ from the refined feature map within its VOI (see the sketch below). We conducted five-fold cross-validation on 35 patients who had multiple weekly CT scans (over 100 QA CTs) during radiotherapy. Eleven organs were segmented and compared with manual contours using several segmentation metrics. Mean Dice similarity coefficient (DSC) values of 0.82, 0.82, and 0.81 across all organs were achieved along the treatment course. These results demonstrate the feasibility and efficacy of our proposed method for multi-OAR segmentation from HN CT, which can be used to rapidly evaluate anatomical variations in HN radiation therapy.
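A schematic of the detect-then-segment pipeline described above. Here `detector`, `hierarchical_block`, and `mask_head` are hypothetical callables standing in for the paper's components (not the authors' released code), and the box format is an assumption.

```python
import torch

def segment_multi_organ(ct_volume, detector, hierarchical_block, mask_head):
    """Locate organ VOIs with FCOS, refine their features, then segment each VOI."""
    # Assumed contract: detector returns a feature volume plus a list of
    # ((z0, y0, x0, z1, y1, x1), organ_label) detections.
    features, detections = detector(ct_volume)
    labels = torch.zeros_like(ct_volume, dtype=torch.uint8)
    for (z0, y0, x0, z1, y1, x1), organ in detections:
        roi = features[..., z0:z1, y0:y1, x0:x1]
        roi = hierarchical_block(roi)          # boost contrast at organ boundaries
        mask = mask_head(roi)                  # per-voxel probability within the VOI
        labels[..., z0:z1, y0:y1, x0:x1][mask > 0.5] = organ
    return labels
```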
Radiation treatment for head-and-neck (HN) cancers requires accurate treatment planning based on 3D patient models derived from CT images. In clinical practice, the treatment volumes and organs-at-risk (OARs) are manually contoured by experienced physicians. This tedious and time-consuming procedure limits clinical workflow and resources. In this work, we propose to use a 3D Faster R-CNN to automatically detect the locations of head-and-neck organs and then apply a U-Net to segment the multi-organ contours; we call the combined method U-RCNN. The mean Dice similarity coefficients (DSCs) of the esophagus, larynx, mandible, oral cavity, left parotid, right parotid, pharynx, and spinal cord ranged from 79% to 89%, demonstrating the segmentation accuracy of the proposed U-RCNN. This segmentation technique could be a useful tool to facilitate routine clinical workflow in HN radiotherapy.
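For reference, the DSC reported in this and the surrounding abstracts is the standard overlap measure between a predicted and a manual contour. A minimal NumPy implementation for binary masks (function name and epsilon are illustrative):

```python
import numpy as np

def dice(pred, truth, eps=1e-8):
    """DSC = 2|A ∩ B| / (|A| + |B|); 1.0 means perfect overlap."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    return 2.0 * np.logical_and(pred, truth).sum() / (pred.sum() + truth.sum() + eps)
```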
In this study, we propose a synthetic CT (sCT)-aided MRI-CT deformable image registration method for head-and-neck radiotherapy. An image synthesis network, a cycle-consistent generative adversarial network (CycleGAN), was first trained using 25 pre-aligned CT-MRI image pairs. The trained CycleGAN then predicts sCT images from head-and-neck MR images, and these sCTs serve as the MRI's surrogate in MRI-CT registration. The Demons registration algorithm was used to perform sCT-CT registration on 5 separate datasets. For comparison, the original MRI and CT images were registered using mutual information as the similarity metric. Our results showed that the target registration errors after registration were on average 1.31 mm and 1.02 mm for MRI-CT and sCT-CT registration, respectively. The mean normalized cross-correlation between the sCT and CT after registration was 0.97, indicating that the proposed method is a viable way to perform MRI-CT image registration for head-and-neck patients.
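A minimal sketch of the sCT-to-CT Demons step using SimpleITK's built-in filter. The sCT is assumed to come from the trained CycleGAN (not shown), and the iteration count and field smoothing are illustrative values, not the paper's settings.

```python
import SimpleITK as sitk

def demons_register(sct_path, ct_path):
    """Deformably register the synthetic CT (moving) to the planning CT (fixed)."""
    fixed = sitk.ReadImage(ct_path, sitk.sitkFloat32)
    moving = sitk.ReadImage(sct_path, sitk.sitkFloat32)

    demons = sitk.DemonsRegistrationFilter()
    demons.SetNumberOfIterations(100)     # illustrative; tune per dataset
    demons.SetStandardDeviations(1.0)     # Gaussian smoothing of the field

    field = demons.Execute(fixed, moving)           # dense displacement field
    transform = sitk.DisplacementFieldTransform(field)
    return sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
```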
We propose a method to automatically segment multiple organs-at-risk (OARs) from routinely-acquired thorax CT images using a generative adversarial network (GAN). A multi-label U-Net was used as the generator to enable end-to-end segmentation. Esophagus and spinal cord location information was used to train the GAN on specific regions of interest (ROIs). For a new thorax CT, the trained network generates per-organ probability maps, which are fused to reconstruct the final contours. The proposed algorithm was evaluated using 20 patients' thorax CT images and manual contours. The mean Dice similarity coefficients (DSCs) for the esophagus, heart, left lung, right lung, and spinal cord were 0.73±0.04, 0.85±0.02, 0.96±0.01, 0.97±0.02, and 0.88±0.03, respectively. This deep-learning-based approach with the GAN strategy can automatically and accurately segment multiple OARs in thorax CT images and could be a useful tool to improve the efficiency of lung radiotherapy treatment planning.
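One simple way to realize the fusion step described above is a voxel-wise argmax over the per-organ probability maps; the organ list, ordering, and array shapes below are illustrative assumptions rather than the paper's exact configuration.

```python
import numpy as np

# Illustrative channel ordering for the probability maps.
ORGANS = ["background", "esophagus", "heart", "left_lung", "right_lung", "spinal_cord"]

def fuse_probability_maps(prob_maps):
    """prob_maps: array of shape (n_organs, D, H, W); returns an integer label volume."""
    return np.argmax(prob_maps, axis=0).astype(np.uint8)
```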