
Robustness of Conditional Generative Adversarial Networks in Generating Head and Neck Synthetic CT Images for MR-Only Radiotherapy Treatment Planning

P Klages*, I Benslimane, S Riyahi, J Jiang, M Hunt, J Deasy, H Veeraraghavan, N Tyagi, Memorial Sloan Kettering Cancer Center, New York, NY

Presentations

(Tuesday, 7/16/2019) 7:30 AM - 9:30 AM

Room: Stars at Night Ballroom 2-3

Purpose: To generate dosimetrically accurate synthetic CT (sCT) images for MR-only radiotherapy treatment planning of head and neck (HN) cancer. Our aim was to study the robustness of conditional generative adversarial networks (conditional GANs) to previously unseen artifacts and features in MR images.

Methods: Twenty paired CT and dual FFE mDixon in-phase MR image sets from HN cancer patients treated at our institution were selected to train and test three conditional GANs. The paired images were deformably registered using Plastimatch and rescaled to isotropic voxel spacing. The dataset was split into two groups of ten patients each, with the training set drawn from patients with minimal imaging artifacts so that we could evaluate how the networks handle previously unseen artifacts. We developed 2.5D versions of the Pix2Pix and Cycle GAN conditional GANs, as well as a novel third network, a hybrid of the Pix2Pix and Cycle GAN networks, and voting-based methods for combining overlapping HU estimates. Accuracy and robustness were assessed by the mean absolute error (MAE) in Hounsfield units (HU) between the CT and sCT, and by the percent dose differences for clinically relevant structures between the CTs and sCTs.
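The voting-based combination of overlapping HU estimates from the 2.5D networks could be sketched as below; the abstract does not specify the exact voting scheme, so a voxel-wise median over the overlapping predictions is shown here purely as an illustrative assumption (function name and array shapes are hypothetical):

```python
import numpy as np

def combine_overlapping_estimates(estimates: np.ndarray) -> np.ndarray:
    """Fuse multiple overlapping sCT HU estimates into one volume.

    `estimates` has shape (n_estimates, D, H, W): each 2.5D slab pass
    produces an HU prediction per voxel, so overlapping slabs yield
    several estimates per voxel. A voxel-wise median is one simple
    voting-style combination (an assumption, not the authors' method).
    """
    return np.median(estimates, axis=0)

# Toy example: three overlapping estimates for a 2x2x2 volume.
rng = np.random.default_rng(0)
est = rng.normal(loc=40.0, scale=5.0, size=(3, 2, 2, 2))
fused = combine_overlapping_estimates(est)
print(fused.shape)  # (2, 2, 2)
```

A median is robust to a single outlier prediction per voxel, which is one plausible reason to prefer voting-style fusion over a plain mean when slab predictions disagree near artifacts.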

Results: Comparisons between CT and sCT images yielded MAEs of 92.4±13.6, 94.0±13.6, and 100.7±14.6 HU for the Pix2Pix, Hybrid, and Cycle GAN networks, respectively. Dosimetric results showed absolute percent dose difference agreement of ≤2% for all structures of interest for all three networks, including structures with large imaging artifacts.
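The two reported metrics are standard and can be sketched directly; the masking convention and function names below are illustrative assumptions, since the abstract does not state how the body region was delineated:

```python
import numpy as np

def masked_mae_hu(ct: np.ndarray, sct: np.ndarray, mask: np.ndarray) -> float:
    """Mean absolute error in HU between CT and sCT inside a body mask.

    `mask` is a boolean array (assumed here); restricting to the patient
    body avoids inflating or deflating the MAE with background air voxels.
    """
    return float(np.mean(np.abs(ct[mask] - sct[mask])))

def percent_dose_difference(dose_ct: float, dose_sct: float) -> float:
    """Absolute percent difference of a structure dose metric, relative to CT."""
    return abs(dose_sct - dose_ct) / dose_ct * 100.0

# Toy example on a tiny "volume".
ct = np.array([100.0, 200.0, -50.0])
sct = np.array([110.0, 190.0, -40.0])
mask = np.array([True, True, True])
print(masked_mae_hu(ct, sct, mask))        # 10.0
print(percent_dose_difference(50.0, 49.0))  # 2.0
```

With these definitions, the ≤2% agreement reported above corresponds to `percent_dose_difference` values of at most 2.0 for every structure of interest.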

Conclusion: This work is the first implementation of deep learning-based sCT generation for a full set of HN cancer cases, and was done without any exclusion criteria for implants, artifacts, or unusual anatomy in the test set. Our preliminary evaluation indicates that for well-paired image sets the Pix2Pix network achieves the best MAE and dosimetric accuracy, as expected given its paired-training constraints.

Funding Support, Disclosures, and Conflict of Interest: This research was supported by Philips Healthcare under a Master Research Agreement and partially supported by the NIH/NCI Cancer Center Support Grant/Core Grant (P30 CA008748).

Keywords

Radiation Therapy, MRI, Image Processing

Taxonomy

IM/TH- MRI in Radiation Therapy: MRI for treatment planning
