
Generating Synthetic CTs From Magnetic Resonance Images Using Generative Adversarial Networks

H Emami Gohari1*, S P Nejad-Davarani2, M Dong3, C Glide-Hurst4, (1) Department of Computer Science, Wayne State University, Detroit, MI, (2) Henry Ford Health System, Detroit, MI, (3) Department of Computer Science, Wayne State University, Detroit, MI, (4) Henry Ford Health System, Detroit, MI

Presentations

(Sunday, 7/29/2018) 1:00 PM - 1:55 PM

Room: Karl Dean Ballroom C

Purpose: While MR-only treatment planning using synthetic CTs (synCTs) offers potential for streamlining clinical workflow, efficient and automated synCT generation in the brain is needed to facilitate near real-time MR-only planning. This work describes a novel method for generating brain synCTs based on generative adversarial networks (GANs), a deep learning model that trains two competing networks simultaneously, and compares the GAN to a deep convolutional neural network (CNN).
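The idea of two competing networks can be sketched in a toy form: a generator G is updated to fool a discriminator D, while D is updated to separate real from generated samples. The following is a minimal, hypothetical 1-D illustration (linear generator, logistic discriminator, hand-derived gradients), not the ResNet/CNN architecture used in this work.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Discriminator D(x) = sigmoid(w*x + b); generator G(z) = a*z + c.
w, b = 0.0, 0.0
a, c = 1.0, 0.0
lr = 0.05
real_mean = 4.0  # "real" data distribution: N(4, 1)

for step in range(2000):
    x_real = rng.normal(real_mean, 1.0, size=64)
    z = rng.normal(0.0, 1.0, size=64)
    x_fake = a * z + c

    # Discriminator step: descend -log D(real) - log(1 - D(fake)).
    d_real = sigmoid(w * x_real + b)
    d_fake = sigmoid(w * x_fake + b)
    grad_w = np.mean(-(1 - d_real) * x_real + d_fake * x_fake)
    grad_b = np.mean(-(1 - d_real) + d_fake)
    w -= lr * grad_w
    b -= lr * grad_b

    # Generator step: descend -log D(fake) (non-saturating GAN loss).
    d_fake = sigmoid(w * x_fake + b)
    grad_a = np.mean(-(1 - d_fake) * w * z)
    grad_c = np.mean(-(1 - d_fake) * w)
    a -= lr * grad_a
    c -= lr * grad_c

# After training, the generator's offset c should have moved toward the
# real mean, i.e. G(z) approximates the real distribution.
print(f"generator offset c = {c:.2f} (real mean = {real_mean})")
```

In the paper's setting, G is a ResNet mapping a T1-weighted MR image to a synCT and D is a five-layer CNN classifier, but the alternating update scheme is the same.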

Methods: Post-gadolinium T1-weighted and CT-SIM images from fifteen brain cancer patients were retrospectively analyzed. The GAN model was developed to generate synCTs from T1-weighted MRI input images, with a residual network (ResNet) as the generator. The discriminator was a CNN with five convolutional layers that classified the input image as real or synthetic. Five-fold cross-validation was performed to validate our model. GAN performance was compared to the CNN using mean absolute error (MAE), structural similarity index (SSIM), and peak signal-to-noise ratio (PSNR) computed between the synCT and CT images.
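The three evaluation metrics can be written down compactly. Below is a hedged sketch of MAE, PSNR, and a simplified single-window SSIM (standard SSIM implementations average over local windows) applied to a hypothetical CT/synCT pair in Hounsfield units; the arrays and data range are illustrative, not the study's data.

```python
import numpy as np

def mae(ct, synct):
    """Mean absolute error, in the units of the input (e.g. HU)."""
    return np.mean(np.abs(ct - synct))

def psnr(ct, synct, data_range):
    """Peak signal-to-noise ratio in dB for a given dynamic range."""
    mse = np.mean((ct - synct) ** 2)
    return 20.0 * np.log10(data_range / np.sqrt(mse))

def ssim_global(ct, synct, data_range):
    """Single-window SSIM; full SSIM averages this over local windows."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mu_x, mu_y = ct.mean(), synct.mean()
    var_x, var_y = ct.var(), synct.var()
    cov = np.mean((ct - mu_x) * (synct - mu_y))
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

# Toy example: a 'CT' slice and a noisy 'synCT' in HU (illustrative only).
rng = np.random.default_rng(1)
ct = rng.uniform(-1000, 1000, size=(64, 64))
synct = ct + rng.normal(0, 50, size=(64, 64))
print(mae(ct, synct), psnr(ct, synct, 2000.0), ssim_global(ct, synct, 2000.0))
```

In cross-validation, these metrics would be computed per held-out patient and then aggregated as mean ± standard deviation across folds, matching the form of the results reported below.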

Results: GAN training took ~11 hours, with a synCT generation time of 5.7±0.6 seconds. For GAN, MAEs between synCT and CT-SIM were 89.3±10.3 HU across the entire FOV and 41.9±8.6 HU in soft tissue. However, MAE in bone and air was, on average, ~240-255 HU. By comparison, the CNN model had a higher full-FOV MAE of 102.4±11.1 HU. For GAN, the mean PSNR was 26.6±1.2 dB and SSIM was 0.83±0.03. GAN synCTs preserved details better than CNN, and regions of abnormal anatomy were well represented on GAN synCTs.

Conclusion: We developed and validated a GAN model using a single T1-weighted MR image as the input that generates robust, high quality synCTs in seconds. Our method offers strong potential for supporting near real-time MR-only treatment planning in the brain while preserving features necessary for high precision applications.

Funding Support, Disclosures, and Conflict of Interest: The submitting institution holds research agreements with Philips Healthcare, ViewRay, Inc., and Modus Medical. Research supported by the National Cancer Institute of the National Institutes of Health under Award Number R01CA204189.

Keywords

Not Applicable / None Entered.

Taxonomy

Not Applicable / None Entered.
