Owing to poor characterization of the implant and adjacent human tissues, the presence of metal implants has been shown to be a risk factor for clinical outcomes in proton therapy. In this project we developed a method for characterizing implant and human materials in terms of water-equivalent thickness (WET) and relative stopping power (RSP) using a novel proton-counting detector. We tracked each proton using a fast spectral imaging camera, the AdvaPIX-TPX3, which, operated in energy mode, measures the collected energy per voxel to derive the deposited energy along the particle track across the voxelated sensor. We considered three scenarios: sampling the WET of a CIRS M701 Adult Phantom (CMAP) at different locations; measuring energy perturbations in the CMAP implanted with metal rods; and sampling the WET of a more complex spine phantom. WET and RSP information were extracted from energy spectra at positions along the central axis using the shift of the most probable energy (MPE) from a reference energy (either the initial incident energy or the energy without a metal implant). Measurements were compared to TOPAS simulation results. The measured WET of the CMAP ranged from 18.63 to 25.23 cm depending on the sampling location, agreeing with TOPAS simulation results within 1.6%. The RSPs of metals determined from CMAP perturbation measurements were 1.97, 2.98, and 5.44 for Al, Ti, and CoCr, respectively, agreeing with TOPAS within 2.3%. RSPs for the materials of the more complex spine phantom were 1.096, 1.309, and 1.001 for acrylic, PEEK, and PVC, respectively. In summary, this work demonstrates a method to accurately characterize the RSPs of metal and human-tissue materials in a CMAP implanted with metals and in a complex spine phantom. Using the data obtained by the proposed method, it may be possible to validate RSP maps provided by conventional photon computed tomography techniques.
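The WET extraction described above relies on the shift of the most probable energy (MPE) from a reference energy. The abstract does not give the conversion it uses, but a minimal sketch can be written with the Bragg-Kleeman range-energy approximation for water (R = αE^p, α ≈ 0.0022 cm/MeV^p, p ≈ 1.77); the constants and function names below are illustrative assumptions, not the authors' implementation.

```python
# Sketch only: WET/RSP from an MPE shift, assuming the Bragg-Kleeman
# range-energy relation for protons in water, R(E) = ALPHA * E**P.
ALPHA = 0.0022  # cm / MeV^P, empirical constant for water (assumption)
P = 1.77        # dimensionless Bragg-Kleeman exponent (assumption)

def range_in_water_cm(energy_mev: float) -> float:
    """Approximate proton range in water at a given kinetic energy."""
    return ALPHA * energy_mev ** P

def wet_from_mpe_shift(e_incident_mev: float, e_residual_mev: float) -> float:
    """WET of traversed material = range at incident energy minus residual range."""
    return range_in_water_cm(e_incident_mev) - range_in_water_cm(e_residual_mev)

def rsp_from_wet(wet_cm: float, physical_thickness_cm: float) -> float:
    """Relative stopping power = water-equivalent thickness / physical thickness."""
    return wet_cm / physical_thickness_cm
```

For a 200 MeV proton this gives a range of roughly 26 cm of water, consistent with the WET scale reported for the CMAP measurements.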
Proton radiation therapy achieves a highly conformal distribution of the prescribed dose in the target with outstanding normal-tissue sparing, stemming from the steep dose gradient at the distal end of the beam. However, uncertainty in daily patient setup can lead to a discrepancy between the delivered and planned dose distributions. Cone-beam CT (CBCT) can be acquired daily before treatment to evaluate such inter-fraction setup error, but further evaluation of the resulting dose distribution error is currently not available. In this study, we developed a novel deep-learning-based method to predict relative stopping power (RSP) maps from daily CBCT images to allow for online dose calculation, a step towards adaptive proton radiation therapy. Twenty head-and-neck patients with CT and CBCT images were included for training and testing. Our CBCT-based RSP results were evaluated against RSP maps created from CT images as the ground truth. Across all 20 patients, the averaged mean absolute error between CT-based and CBCT-based RSP was 0.04±0.02, the averaged mean error was -0.01±0.03, and the averaged normalized correlation coefficient was 0.97±0.01. The proposed method provides sufficiently accurate RSP map generation from CBCT images, potentially allowing CBCT-guided adaptive treatment planning for proton radiation therapy.
Radiation treatment for head-and-neck (HN) cancers requires accurate treatment planning based on 3D patient models derived from CT images. In clinical practice, the treatment volumes and organs-at-risk (OARs) are manually contoured by experienced physicians; this tedious and time-consuming procedure limits clinical workflow and resources. In this work, we propose to use a 3D Faster R-CNN to automatically detect the locations of head-and-neck organs and then apply a U-Net to segment the multi-organ contours, a combination we call U-RCNN. The mean Dice similarity coefficients (DSC) of the esophagus, larynx, mandible, oral cavity, left parotid, right parotid, pharynx, and spinal cord ranged from 79% to 89%, demonstrating the segmentation accuracy of the proposed U-RCNN method. This segmentation technique could be a useful tool to facilitate routine clinical workflow in HN radiotherapy.
By exploiting the energy dependence of photoelectric and Compton interactions, dual-energy CT (DECT) can be used to derive a number of parametric maps based on physical properties, such as the relative stopping power map (RSPM). The accuracy of DECT-derived parametric maps relies on image noise levels and the severity of artifacts. Suboptimal image quality may degrade the accuracy of physics-based mapping techniques and affect subsequent processing for clinical applications. In this study, we propose a deep-learning-based method to accurately generate the RSPM from virtual monoenergetic images as an alternative to physics-based dual-energy approaches. For the training target of our deep-learning model, we manually segmented head-and-neck DECT images into brain, bone, fat, soft tissue, lung, and air, and then assigned RSP values to the corresponding tissue types to generate a reference RSPM. We integrated a residual-block concept into a cycle-consistent generative adversarial network (cycleGAN) framework to learn the nonlinear mapping between DECT 70 keV/140 keV monoenergetic image pairs and the reference RSPM. We evaluated the proposed method on 18 head-and-neck cancer patients. Mean absolute error (MAE) and mean error (ME) were used to quantify the differences between the generated and reference RSPM. The average MAE was 3.1±0.4% and the average ME was 1.5±0.5% across all patients. Compared to the physics-based method, the proposed method significantly improved RSPM accuracy and had comparable computational efficiency after training.
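The MAE and ME figures above are reported in percent, but the abstract does not state the normalization. A minimal sketch of the two metrics, assuming normalization by the mean of the reference RSPM (an assumption, not the paper's stated definition):

```python
import numpy as np

def mae_percent(generated: np.ndarray, reference: np.ndarray) -> float:
    """Mean absolute error, expressed as a percentage of the reference mean."""
    return 100.0 * np.mean(np.abs(generated - reference)) / np.mean(reference)

def me_percent(generated: np.ndarray, reference: np.ndarray) -> float:
    """Mean (signed) error, expressed as a percentage of the reference mean."""
    return 100.0 * np.mean(generated - reference) / np.mean(reference)
```

The signed ME reveals systematic over- or under-estimation that the MAE alone would hide.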
KEYWORDS: Computed tomography, Associative arrays, 3D image processing, Prostate, 3D modeling, Process modeling, Reconstruction algorithms, Ultrasonography, High dynamic range imaging, Visualization
Accurate and automatic multi-needle detection in three-dimensional (3D) ultrasound (US) is a key step of treatment planning for US-guided prostate high-dose-rate (HDR) brachytherapy. In this paper, we propose a workflow for multi-needle detection in 3D US images, with corresponding CT images used for supervision. Since the CT images do not exactly match the US images, we propose a novel sparse model, dubbed Bidirectional Convolutional Sparse Coding (BiCSC), to tackle this weakly supervised problem. BiCSC extracts latent features from US and CT and formulates a relationship between them in which the features learned from US conform to the features from CT. The resulting images allow for clear visualization of the needles while reducing image noise and artifacts. On the reconstructed US images, a clustering algorithm is employed to find the cluster centers that correspond to the true needle positions. Finally, the random sample consensus (RANSAC) algorithm is used to fit a needle model per ROI. Experiments were conducted on prostate image datasets from 10 patients. Visualization and quantitative results show the efficacy of the proposed workflow. This learning-based technique could provide accurate needle detection for US-guided HDR prostate brachytherapy and further enhance the clinical workflow for prostate HDR brachytherapy.
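The final step above fits a needle model per ROI with RANSAC. A self-contained sketch of such a 3D line fit on candidate cluster centers follows; the tolerance and iteration count are illustrative values, not the paper's parameters.

```python
import numpy as np

def fit_line_ransac(points, n_iters=200, inlier_tol=1.0, rng=None):
    """Fit a 3D line to candidate needle points with a simple RANSAC loop.

    points: (N, 3) array of cluster centers.
    Returns (point_on_line, unit_direction) refit on the best inlier set.
    """
    rng = np.random.default_rng(rng)
    best_inliers = None
    for _ in range(n_iters):
        # Sample a minimal set: two distinct points define a candidate line.
        p1, p2 = points[rng.choice(len(points), size=2, replace=False)]
        d = p2 - p1
        norm = np.linalg.norm(d)
        if norm == 0:
            continue
        d = d / norm
        # Perpendicular distance of every point to the candidate line.
        diff = points - p1
        dist = np.linalg.norm(diff - np.outer(diff @ d, d), axis=1)
        inliers = dist < inlier_tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refit a least-squares line on the inliers via SVD (principal direction).
    inlier_pts = points[best_inliers]
    centroid = inlier_pts.mean(axis=0)
    _, _, vt = np.linalg.svd(inlier_pts - centroid)
    return centroid, vt[0]
```

Because a needle is nearly straight, the two-point minimal model makes RANSAC robust to cluster centers produced by noise or artifacts.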
KEYWORDS: Image segmentation, Computed tomography, Medical imaging, Chemical vapor deposition, Angiography, Heart, 3D image processing, Magnetic resonance imaging, Prostate, Arteries
Cardiovascular diseases (CVD) are the leading cause of disability and death worldwide. Many parameters derived from the left ventricular myocardium (LVM), including left ventricular mass, left ventricular volume, and ejection fraction (EF), are widely used for disease diagnosis and prognosis prediction. To investigate the relationship between LVM-derived parameters and various heart diseases, it is crucial to segment the LVM in a fast and reproducible way. However, different diseases can alter the structure of the LVM, which increases the complexity of the already time-consuming manual segmentation work. In this work, we propose a 3D deep attention U-Net method to automatically segment the LVM contour in cardiac CT images. We tested the proposed method on cardiac CT images from 50 patients. The Dice similarity coefficient (DSC), sensitivity, specificity, and mean surface distance (MSD) were 87%±5%, 87%±4%, 92%±3%, and 0.68±0.15 mm, respectively, demonstrating the detection and segmentation accuracy of the proposed method.
Gated myocardial perfusion SPECT (MPS) is widely used to assess left ventricular (LV) function. Its performance relies on the accuracy of LV cavity segmentation. We propose a novel machine-learning-based method to automatically segment the LV cavity and measure its volume in gated MPS imaging. To perform end-to-end segmentation, a multi-label V-Net is used as the network architecture. The network produces a probability map for each heart contour (epicardium, endocardium, and myocardium). To evaluate segmentation accuracy, we retrospectively investigated gated MPS images from 32 patients. The LV cavity was automatically segmented by the proposed method and compared to manually outlined contours, which were taken as the ground truth. LV cavity volumes were extracted from both the ground truth and the results of the proposed method for comparison and evaluation. The mean DSC, sensitivity, and specificity of the contours delineated by our method are all above 0.9 across all 32 patients and 8 phases. The correlation coefficient of the LV cavity volume between the ground truth and the proposed method is 0.910±0.061, and the mean relative error of LV cavity volume across all patients and phases is -1.09±3.66%. These results indicate that the proposed method accurately quantifies the changes in LV cavity volume during the cardiac cycle. It also demonstrates the potential of learning-based segmentation methods in gated MPS imaging for clinical use.
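The DSC, sensitivity, and specificity used throughout these abstracts are standard overlap metrics on binary masks. A minimal sketch of their definitions (not the authors' code):

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|) for binary masks."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

def sensitivity(pred, truth):
    """True-positive rate: fraction of ground-truth voxels recovered."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    return np.logical_and(pred, truth).sum() / truth.sum()

def specificity(pred, truth):
    """True-negative rate: fraction of background voxels correctly excluded."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    return np.logical_and(~pred, ~truth).sum() / (~truth).sum()
```

Reporting all three together guards against degenerate cases, such as a segmentation that achieves high specificity simply by under-segmenting.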
We propose a method to generate patient-specific pseudo CT (pCT) from routinely acquired MRI based on a semantic-information-based random forest with auto-context refinement. An auto-context model with patch-based anatomical features is integrated into a classification forest to generate and improve semantic information. The concatenation of semantic information with anatomical features is then used to train a series of regression forests based on the auto-context model. The pCT of a newly acquired MRI is generated by extracting anatomical features and feeding them into the trained classification and regression forests for pCT prediction. The proposed algorithm was evaluated using data from 11 patients with brain MR and CT images. The mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and normalized cross correlation (NCC) were 57.45±8.45 HU, 28.33±1.68 dB, and 0.97±0.01, respectively. The Dice similarity coefficients (DSC) for air, soft tissue, and bone were 97.79±0.76%, 93.32±2.35%, and 84.49±5.50%, respectively. We have developed a novel machine-learning-based method to generate patient-specific pCT from routine anatomical MRI for MRI-only radiotherapy treatment planning. This pseudo-CT generation technique could be a useful tool for MRI-based radiation treatment planning and for MRI-based PET attenuation correction on PET/MRI scanners.
We propose a learning method to generate corrected CBCT (CCBCT) images with the goal of improving the image quality and clinical utility of on-board CBCT. The proposed method integrates a residual-block concept into a cycle-consistent generative adversarial network (cycle-GAN) framework, named Res-cycle GAN in this study. Compared with a GAN, a cycle-GAN includes an inverse transformation from CBCT to CT images, which further constrains the learning model. A fully convolutional network (FCN) with residual blocks is used in the generator to enable end-to-end transformation. An FCN is used in the discriminator to distinguish between planning CT (ground truth) and the corrected CBCT (CCBCT) produced by the generator. The proposed algorithm was evaluated using 12 sets of patient data with CBCT and CT images. The mean absolute error (MAE), peak signal-to-noise ratio (PSNR), normalized cross correlation (NCC), and spatial non-uniformity (SNU) in selected regions of interest (ROIs) were used to quantify the correction accuracy of the proposed algorithm. Overall, the MAE, PSNR, NCC, and SNU were 20.8±3.4 HU, 32.8±1.5 dB, 0.986±0.004, and 1.7±3.6%, respectively. We have developed a novel deep-learning-based method to generate CCBCT with high image quality. The proposed method increases on-board CBCT image quality, making it comparable to that of the planning CT. With further evaluation and clinical implementation, this method could enable quantitative adaptive radiotherapy.
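The PSNR and NCC indices reported above are standard image-comparison metrics. A sketch of both (the `data_range` parameter is an assumption; the abstracts do not state the dynamic range used for PSNR):

```python
import numpy as np

def psnr(corrected: np.ndarray, reference: np.ndarray, data_range: float) -> float:
    """Peak signal-to-noise ratio in dB for a given intensity dynamic range."""
    mse = np.mean((corrected.astype(float) - reference.astype(float)) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def ncc(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized cross correlation (Pearson correlation of intensities)."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))
```

NCC is invariant to linear intensity shifts and scaling, so it complements PSNR, which penalizes any absolute HU offset between CCBCT and planning CT.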