
Unpaired Cone-Beam CT to CT Translation Using Cycle-Consistent Adversarial Networks

X Liang*, L Chen , D Nguyen , J Wang , S Jiang , UT Southwestern Medical Center, Dallas, TX


(Thursday, 8/2/2018) 1:00 PM - 3:00 PM

Room: Room 202

Purpose: Cone-beam computed tomography (CBCT) is widely used for image guidance in radiation therapy. However, using CBCT images for dose calculation and structure segmentation requires careful handling of their inaccurate HU (Hounsfield unit) numbers and other artifacts. In our group we are exploring the possibility of converting CBCT to CT images using deep learning. One challenge we face is the lack of precisely paired CBCT and CT images for supervised training. In this work we propose to use Cycle-GAN to correct HU values and reduce artifacts in CBCT images, generating high-quality synthesized CT images without requiring precisely paired CBCT and CT images for training.

Methods: We train a 2D Cycle-GAN model to generate synthesized CT from CBCT using unpaired CT-CBCT images. We use 12 H&N patients for training, 1 patient for validation, and 4 patients for testing; each patient has 80 CT slices and 80 CBCT slices. Image modality translation is evaluated by comparing synthesized CT, deformed real CT, and CBCT images.
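The unpaired training described above rests on Cycle-GAN's cycle-consistency loss: a CBCT slice mapped to synthesized CT and back should reproduce the original slice, so no paired ground truth is needed. Below is a minimal sketch of that loss term only, with toy affine HU shifts standing in for the two generator networks; the names `g_ab` and `g_ba` are illustrative assumptions, not the authors' code.

```python
import numpy as np

def cycle_consistency_loss(x, g_ab, g_ba):
    """L1 cycle loss ||G_BA(G_AB(x)) - x||_1, the term that enables unpaired training."""
    return np.mean(np.abs(g_ba(g_ab(x)) - x))

# Toy "generators": simple HU offsets in place of the CBCT->CT and CT->CBCT networks.
g_ab = lambda x: x + 100.0   # hypothetical CBCT -> synthesized CT mapping
g_ba = lambda x: x - 100.0   # hypothetical CT -> synthesized CBCT mapping

# Fake batch of 2D CBCT slices in HU-like units.
cbct = np.random.rand(4, 64, 64) * 1000.0
loss = cycle_consistency_loss(cbct, g_ab, g_ba)  # near zero: the maps invert each other
```

In the full model this term is combined with adversarial losses from two discriminators, one per image domain, so that the translated images also look realistic.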

Results: Evaluated against deformed real CT with several similarity measurements, synthesized CT images have higher structural similarity index (SSIM) and peak signal-to-noise ratio (PSNR), and lower mean absolute error (MAE) and mean square error (MSE), than CBCT images. Graphical methods comparing the probability distributions of synthesized CT and CBCT against real CT also show that synthesized CT is much more similar to real CT.
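The scalar metrics reported above can be computed directly from voxel arrays; a minimal sketch of MAE, MSE, and PSNR follows (SSIM is more involved and is typically computed with an existing implementation such as scikit-image's `structural_similarity`). The `data_range` handling is an assumption about how the comparison would be normalized, not taken from the abstract.

```python
import numpy as np

def mae(a, b):
    """Mean absolute error between an image and its reference."""
    return np.mean(np.abs(a - b))

def mse(a, b):
    """Mean square error between an image and its reference."""
    return np.mean((a - b) ** 2)

def psnr(a, b, data_range=None):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    if data_range is None:
        data_range = b.max() - b.min()  # assumed normalization choice
    return 10.0 * np.log10(data_range ** 2 / mse(a, b))
```

Here the reference `b` would be the deformed real CT, and `a` either the synthesized CT or the original CBCT, so the two can be ranked on the same footing.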

Conclusion: A deep learning model is developed to convert CBCT to CT, trained with unpaired images.
