Purpose: Cone beam computed tomography (CBCT) is widely used for patient positioning and anatomical change monitoring in the clinic because of its lower radiation dose. It also has potential for structure segmentation and dose calculation in adaptive radiation therapy (ART), since it provides the most up-to-date anatomical information. However, its inaccurate Hounsfield unit (HU) values and severe artifacts limit the accuracy of its use in ART. Our group is exploring the conversion of CBCT to CT-quality images using deep learning methods. In this work, we propose a learned primal-dual reconstruction method that reconstructs CT-quality synthetic CT (sCT) images directly from CBCT projections.
Methods: The learned primal-dual reconstruction is based on the Chambolle-Pock algorithm, but the original primal and dual operators are replaced with parametrized operators whose parameters are learned from training data, yielding a learned reconstruction operator. The relationship between CBCT and CT HU values is learned at the same time through this learned reconstruction operator. We use 3D planning CT (pCT) images from real patients and their correspondingly simulated 3D CBCT projections: 17 cases for training, 1 for validation, and 4 for testing. Similarity measures between the sCT images reconstructed by the learned primal-dual algorithm and the pCT images are used for evaluation.
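To make the scheme concrete, the following is a minimal numpy sketch of the primal-dual iteration structure the method builds on. All names here are illustrative assumptions: `prox_dual` and `prox_primal` are handcrafted proximal operators for a least-squares data term (the classical Chambolle-Pock case), standing in where the learned method would substitute trained network blocks, and the tiny identity "projection" operator used in the usage example below replaces the actual CBCT forward projector.

```python
import numpy as np

def prox_dual(y, g, sigma):
    # Analytic proximal of the conjugate of the data term 0.5*||h - g||^2.
    # In the learned method, a trained network block would replace this.
    return (y - sigma * g) / (1.0 + sigma)

def prox_primal(f):
    # Trivial primal proximal (no regularizer); a trained network block
    # would replace this in the learned method.
    return f

def primal_dual(g, A, At, Lambda, Gamma, sigma=0.5, tau=0.5, n_iter=100):
    """Chambolle-Pock-style primal-dual iteration.

    g      : measured projection data
    A, At  : forward (projection) operator and its adjoint (back-projection)
    Lambda : dual-update operator (learned in the actual method)
    Gamma  : primal-update operator (learned in the actual method)
    """
    f = np.zeros_like(At(g))      # primal variable (image)
    f_bar = f.copy()              # over-relaxed primal variable
    h = np.zeros_like(g)          # dual variable (projection space)
    for _ in range(n_iter):
        h = Lambda(h + sigma * A(f_bar), g, sigma)  # dual step
        f_prev = f
        f = Gamma(f - tau * At(h))                  # primal step
        f_bar = 2.0 * f - f_prev                    # over-relaxation
    return f
```

With an identity operator for `A`/`At` (so the "reconstruction" should recover `g` itself), the iteration converges to the data, which is a quick sanity check that the update structure is correct; the condition `sigma * tau * ||A||^2 <= 1` guarantees convergence in the classical setting.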
Results: The average mean absolute error and average root-mean-square error between sCT and pCT images are 39.30 HU and 78.83 HU, respectively. The average peak signal-to-noise ratio and average structural similarity index between sCT and pCT images are 31.00 and 0.90, respectively.
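The first three metrics above can be computed as in the following numpy sketch (the function name and the choice of `data_range` are assumptions; for the structural similarity index one would typically use `skimage.metrics.structural_similarity` rather than reimplementing it):

```python
import numpy as np

def similarity_metrics(sct, pct, data_range):
    """MAE, RMSE, and PSNR between an sCT and a pCT volume (in HU).

    data_range : dynamic range used in the PSNR definition; the appropriate
                 value depends on the HU window chosen for evaluation.
    """
    diff = sct.astype(np.float64) - pct.astype(np.float64)
    mae = np.abs(diff).mean()                     # mean absolute error
    rmse = np.sqrt((diff ** 2).mean())            # root-mean-square error
    psnr = 20.0 * np.log10(data_range / rmse)     # peak signal-to-noise ratio
    return mae, rmse, psnr
```

For example, a volume offset from the reference by a constant 10 HU has MAE = RMSE = 10 HU, and with a 1000 HU data range a PSNR of 40 dB.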
Conclusion: The learned primal-dual reconstruction can accurately and efficiently reconstruct CT-quality images from CBCT projections.