
Improving CBCT Quality to CT Level Using Deep-Learning Method for Adaptive Radiation Therapy

Y Zhang1,2*, N Yue1, M Su2, Y Ding3, B Liu1, Y Zhang1, Y Zhou4, K Nie1, (1) Department of Radiation Oncology, Rutgers-Cancer Institute of New Jersey, Rutgers-Robert Wood Johnson Medical School, New Brunswick, NJ, (2) Department of Radiological Science, University of California Irvine, Irvine, CA, (3) Hubei Cancer Hospital, Wuhan, (4) Fudan University Zhongshan Hospital, Shanghai


(Sunday, 7/14/2019) 4:00 PM - 5:00 PM

Room: 225BCD

Purpose: To develop a deep-learning based approach to improve CBCT image quality for extended clinical applications.

Methods: Data from 150 pelvic patients with paired planning CT and CBCT were used in this study. All CT images were acquired on a GE LightSpeed VCT scanner, and CBCT images were acquired on a Varian TrueBeam. An unsupervised deep-learning method, a generative adversarial network (GAN), was used to learn translational functions from a source domain (CBCT) to a target domain (deep-learning based CBCT, or dCBCT). Image pre-processing included denoising and suppression of non-uniformity using a non-local means method. The planning CT was then deformed to the CBCT using Velocity and served as the ground-truth CT. A total of 10,800 slices were used for training and validating the GAN-based model, while 1,200 slices of CT and CBCT were used for testing. The resulting dCBCT images were compared to the ground-truth CT in terms of mean absolute error (MAE) in Hounsfield units (HU) and peak signal-to-noise ratio (PSNR).
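The pre-processing step above relies on non-local means filtering, which denoises each pixel by averaging similar patches found in a surrounding search window. The abstract does not give the authors' implementation; the following is a minimal, illustrative 2D sketch (the function name, patch/search sizes, and filtering parameter `h` are assumptions for demonstration, not the study's settings):

```python
import numpy as np

def nl_means_denoise(img, patch=1, search=3, h=0.15):
    """Simplified non-local means for a 2D image.

    For each pixel, pixels in a search window are averaged, weighted by
    the similarity of their surrounding patches: w = exp(-d^2 / h^2),
    where d^2 is the mean squared difference between patches.
    patch: half-width of comparison patch; search: half-width of window.
    """
    img = img.astype(np.float64)
    pad = patch + search
    padded = np.pad(img, pad, mode="reflect")
    out = np.zeros_like(img)
    rows, cols = img.shape
    for i in range(rows):
        for j in range(cols):
            ic, jc = i + pad, j + pad
            ref_patch = padded[ic - patch:ic + patch + 1,
                               jc - patch:jc + patch + 1]
            weight_sum = 0.0
            value = 0.0
            for di in range(-search, search + 1):
                for dj in range(-search, search + 1):
                    ii, jj = ic + di, jc + dj
                    cand = padded[ii - patch:ii + patch + 1,
                                  jj - patch:jj + patch + 1]
                    d2 = np.mean((ref_patch - cand) ** 2)
                    w = np.exp(-d2 / (h * h))
                    weight_sum += w
                    value += w * padded[ii, jj]
            out[i, j] = value / weight_sum
    return out
```

Production implementations (e.g., `skimage.restoration.denoise_nl_means`) use integral-image tricks to avoid these nested loops; the sketch trades speed for readability.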

Results: The deep-learning correction algorithm was evaluated using 10-fold cross-validation. Across all 1,200 testing slices, the mean MAE improved from 26.1±9.9 HU (CBCT vs. CT) to 8.1±1.3 HU (dCBCT vs. CT). The PSNR also improved from 16.7±10.2 (CBCT vs. CT) to 24.0±7.5 (dCBCT vs. CT). The network code was written in Python 3.5, and experiments were performed on a GPU-optimized workstation with a single NVIDIA GeForce GTX Titan X (12 GB, Maxwell architecture). Once the model was trained, it took 11-12 ms to process one slice and could generate a 3D dCBCT volume (80 slices) in under a second.
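The two metrics reported above are standard and easy to compute; as a sketch (the helper names and the choice of `data_range` are illustrative assumptions, not the study's exact definitions), they might look like:

```python
import numpy as np

def mae_hu(pred, ref):
    """Mean absolute error between two CT-like volumes, in Hounsfield units."""
    pred = pred.astype(np.float64)
    ref = ref.astype(np.float64)
    return float(np.mean(np.abs(pred - ref)))

def psnr(pred, ref, data_range=None):
    """Peak signal-to-noise ratio in dB.

    data_range is the peak signal value; here it defaults to the
    dynamic range of the reference volume.
    """
    pred = pred.astype(np.float64)
    ref = ref.astype(np.float64)
    if data_range is None:
        data_range = ref.max() - ref.min()
    mse = np.mean((pred - ref) ** 2)
    if mse == 0.0:
        return float("inf")
    return float(10.0 * np.log10(data_range ** 2 / mse))
```

In a cross-validation setup like the one described, these would be evaluated per test slice (or per volume) against the deformed planning CT, then averaged to give the mean±SD figures quoted.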

Conclusion: The deep-learning based algorithm presented here shows promise for improving CBCT image quality to near-CT level in a timely fashion, thus offering a path toward online CBCT-based adaptive radiotherapy.


Cone-beam CT, Image-guided Therapy


IM/TH- Image Analysis (Single modality or Multi-modality): Computer/machine vision
