Deep-Learning Based CBCT Image Correction for CBCT-Guided Adaptive Radiation Therapy

J Harms1*, Y Lei1 , T Wang1 , R Zhang1 , J Zhou1 , X Dong1 , P Patel1 , K Higgins1 , X Tang2 , W Curran1 , T Liu1 , X Yang1 , (1) Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, (2) Department of Radiology and Imaging Sciences and Winship Cancer Institute, Emory University, Atlanta, GA 30322

Presentations

(Thursday, 7/18/2019) 7:30 AM - 9:30 AM

Room: Stars at Night Ballroom 2-3

Purpose: Adaptive radiation therapy (ART), where dose delivery is changed daily based on patient setup and anatomy, has been a goal in most clinics since the introduction of cone-beam CT (CBCT) to the radiation therapy workflow. However, the large scatter-to-primary ratio typical of CBCT degrades image quality and causes a loss of quantitative information in CBCT images. In this work, we propose a deep-learning method to correct CBCT artifacts and restore HU values to levels typical of planning CT images.

Methods: The proposed method learns a mapping from a CBCT HU distribution to a planning CT HU distribution using the cycle-consistent generative adversarial network (cycle-GAN) framework. During training, a generator is continually optimized to produce corrected CBCT (CCBCT) images, while a discriminator is optimized to identify the differences between a CCBCT image and a planning CT image. Because these two networks are pitted against each other, convergence of the overall network optimization is improved. Compared with a standard GAN, a cycle-GAN also includes the inverse transformation from CT to CBCT images, which constrains the model by forcing calculation of both a CCBCT and a synthetic CBCT. The proposed algorithm was evaluated using 24 brain patient datasets and 20 pelvis patient datasets.
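As a rough illustration of the cycle-consistency objective described above, the sketch below shows how the generator loss of a cycle-GAN might be assembled in PyTorch. The network architectures, loss functions, weight values, and names used here are placeholders chosen for illustration and are not taken from this work.

    import torch
    import torch.nn as nn

    # Placeholder 3D networks; the study's actual architectures are not specified here.
    G_cbct2ct = nn.Sequential(nn.Conv3d(1, 1, 3, padding=1))  # CBCT -> corrected CBCT (CT-like)
    G_ct2cbct = nn.Sequential(nn.Conv3d(1, 1, 3, padding=1))  # CT -> synthetic CBCT
    D_ct      = nn.Sequential(nn.Conv3d(1, 1, 3, padding=1))  # real CT vs. corrected CBCT
    D_cbct    = nn.Sequential(nn.Conv3d(1, 1, 3, padding=1))  # real CBCT vs. synthetic CBCT

    adv_loss   = nn.MSELoss()  # least-squares adversarial loss (an assumption)
    cyc_loss   = nn.L1Loss()   # cycle-consistency loss
    lambda_cyc = 10.0          # cycle-loss weight (an assumption)

    def generator_loss(cbct, ct):
        """Generator objective for one CBCT/CT patch pair."""
        ccbct = G_cbct2ct(cbct)   # corrected CBCT
        scbct = G_ct2cbct(ct)     # synthetic CBCT
        # Adversarial terms: generators try to fool both discriminators.
        adv = adv_loss(D_ct(ccbct), torch.ones_like(D_ct(ccbct))) + \
              adv_loss(D_cbct(scbct), torch.ones_like(D_cbct(scbct)))
        # Cycle-consistency terms: mapping forward then back should recover the input.
        cyc = cyc_loss(G_ct2cbct(ccbct), cbct) + cyc_loss(G_cbct2ct(scbct), ct)
        return adv + lambda_cyc * cyc

Here cbct and ct would be 5-D tensors of shape (batch, 1, depth, height, width); the cycle term is what forces the model to compute both a CCBCT and a synthetic CBCT during training.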

Results: Overall, mean absolute error, peak signal-to-noise ratio, normalized cross-correlation, and spatial non-uniformity were 18 HU, 37.18 dB, 0.99, and 0.05 for the proposed method, improvements of 45%, 12%, 1%, and 65%, respectively, relative to the original CBCT images. The proposed method showed superior image quality compared to a conventional scatter correction method, reducing noise and artifact severity.
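For reference, the first three metrics quoted above can be computed from a corrected CBCT volume and its registered planning CT in a few lines of NumPy. The definitions below are standard ones and may differ in detail (e.g., choice of dynamic range) from those used in the study.

    import numpy as np

    def image_metrics(ccbct, ct):
        """MAE (HU), PSNR (dB), and normalized cross-correlation for two HU volumes."""
        ccbct = ccbct.astype(np.float64)
        ct = ct.astype(np.float64)
        mae = np.mean(np.abs(ccbct - ct))
        data_range = ct.max() - ct.min()  # assumed dynamic range for PSNR
        psnr = 10.0 * np.log10(data_range ** 2 / np.mean((ccbct - ct) ** 2))
        ncc = np.mean((ccbct - ccbct.mean()) * (ct - ct.mean())) / (ccbct.std() * ct.std())
        return mae, psnr, ncc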

Conclusion: The authors have developed a novel deep learning-based method to generate high-quality corrected CBCT images. With further evaluation and clinical implementation, this method could lead to quantitative adaptive radiation therapy.

Funding Support, Disclosures, and Conflict of Interest: This research is supported in part by the National Cancer Institute of the National Institutes of Health under Award Number R01CA215718 (XY).

Keywords

Not Applicable / None Entered.

Taxonomy

IM- Cone Beam CT: Machine learning, computer vision
