
Intelligent Synthetic CT Generation Based On CBCT Images Via Unsupervised Deep Learning

L Chen*, X Liang , C Shen , S Jiang , J Wang , UT Southwestern Medical Center, Dallas, TX

Presentations

(Monday, 7/15/2019) 4:30 PM - 5:30 PM

Room: Exhibit Hall | Forum 2

Purpose: An up-to-date computed tomography (CT) image is essential in adaptive radiation therapy (ART) to account for potential changes in patient anatomy. However, CT-on-rails systems are not readily available in every clinic. Cone-beam CT (CBCT), scanned on a daily or weekly basis for patient positioning, is commonly available in clinics, but the inaccuracy of its Hounsfield Unit (HU) values prevents its straightforward use in dose calculation and treatment planning. Hence, synthesizing a CT-like image from the on-treatment CBCT, with the same anatomical structure as the CBCT but accurate HU values, is warranted in ART.

Methods: We proposed an unsupervised deep U-net-based approach to generate synthetic CT (sCT) from on-treatment CBCT and planning CT (pCT). Unsupervised learning is desirable because exactly matched CBCT and CT pairs are rarely available for supervised learning, even when the two scans are acquired only minutes apart. In the proposed model, the CBCT and pCT inputs provide the anatomical structure and accurate HU information, respectively. The training objective simultaneously minimizes: 1) a contextual loss between sCT and CBCT, to preserve the content and structure of the CBCT in the sCT, and 2) a perceptual loss between sCT and pCT, to achieve pCT-like image quality in the sCT. CBCT and pCT images of 13 patients (1040 slices) were used to train and validate the model, and another four independent patient cases (320 slices) were used to test its effectiveness.
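
The abstract does not give implementation details, but the training objective can be illustrated with a minimal sketch, assuming PyTorch, a frozen VGG-19 feature extractor for both losses, and placeholder values for the contextual-loss bandwidth h and the loss weighting lam (none of these choices are stated above; intensities are assumed pre-normalized before feature extraction):

import torch
import torch.nn as nn
import torchvision.models as models


class VGGFeatures(nn.Module):
    """Frozen VGG-19 slice used as a feature extractor (assumed layer cut)."""
    def __init__(self, n_layers=16):
        super().__init__()
        vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features
        self.slice = nn.Sequential(*list(vgg.children())[:n_layers]).eval()
        for p in self.slice.parameters():
            p.requires_grad_(False)

    def forward(self, x):
        # CT/CBCT slices are single-channel; repeat to 3 channels for VGG.
        return self.slice(x.repeat(1, 3, 1, 1))


def contextual_loss(feat_x, feat_y, h=0.5, eps=1e-5):
    """Contextual loss (Mechrez et al. 2018) between two feature maps."""
    x = feat_x.flatten(2)                        # (B, C, Nx)
    y = feat_y.flatten(2)                        # (B, C, Ny)
    y_mu = y.mean(dim=2, keepdim=True)
    x, y = x - y_mu, y - y_mu                    # center on target statistics
    x = x / (x.norm(dim=1, keepdim=True) + eps)
    y = y / (y.norm(dim=1, keepdim=True) + eps)
    cos = torch.bmm(x.transpose(1, 2), y)        # (B, Nx, Ny) cosine similarity
    dist = 1.0 - cos
    dist = dist / (dist.min(dim=2, keepdim=True).values + eps)  # relative distance
    w = torch.exp((1.0 - dist) / h)
    cx = w / w.sum(dim=2, keepdim=True)          # normalized affinities
    cx = cx.max(dim=1).values.mean(dim=1)        # best match per target feature
    return (-torch.log(cx + eps)).mean()


def training_objective(sct, cbct, pct, vgg, lam=1.0):
    """L = L_contextual(sCT, CBCT) + lam * L_perceptual(sCT, pCT)."""
    f_sct, f_cbct, f_pct = vgg(sct), vgg(cbct), vgg(pct)
    l_ctx = contextual_loss(f_sct, f_cbct)           # keep CBCT anatomy
    l_per = nn.functional.l1_loss(f_sct, f_pct)      # match pCT image quality
    return l_ctx + lam * l_per

Here sct would be the U-net output for a given CBCT/pCT slice pair; in practice the two losses may be computed on different VGG layers, which the abstract does not specify.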

Results: We quantitatively compared the resulting sCT with the original CBCT, using deformed pCT images as reference. The proposed model improved the structural similarity index (SSIM) by ~10% and the peak signal-to-noise ratio (PSNR) by ~4 dB on average. Additionally, the error in HU values was reduced by ~42%.
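
For concreteness, a hedged sketch of this evaluation, assuming 2D slices stored as NumPy arrays in HU and scikit-image metric implementations; the data range (and any HU windowing) is an illustrative assumption, not the authors' protocol:

import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio


def evaluate(image, reference, data_range=2000.0):
    """Return SSIM, PSNR (dB), and mean absolute HU error vs. the reference."""
    ssim = structural_similarity(reference, image, data_range=data_range)
    psnr = peak_signal_noise_ratio(reference, image, data_range=data_range)
    mae_hu = float(np.mean(np.abs(reference - image)))
    return ssim, psnr, mae_hu


# Usage: score both the CBCT and the sCT against the deformed pCT slice and
# report the relative improvement, mirroring the gains summarized above.
# ssim_cbct, psnr_cbct, mae_cbct = evaluate(cbct_slice, deformed_pct_slice)
# ssim_sct,  psnr_sct,  mae_sct  = evaluate(sct_slice,  deformed_pct_slice)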

Conclusion: We have demonstrated the effectiveness of the proposed unsupervised learning model in synthesizing CT-quality images from CBCT and pCT. This potentially enables advanced applications of CBCT, such as adaptive treatment planning.

Funding Support, Disclosures, and Conflict of Interest: US National Institutes of Health (R01 EB020366)

Keywords

Cone-beam CT, Image-guided Therapy, Image Processing

Taxonomy

IM- Cone Beam CT: Machine learning, computer vision