Room: Exhibit Hall | Forum 2
Purpose: Recent developments in deep learning, including conditional generative adversarial networks (cGANs), have enabled translation of medical images from one modality to another. Successful results in generating synthetic CT (sCT) show the potential of these techniques in radiotherapy planning, including MR-only treatment planning. The aim of this study was to investigate how closely sCTs resemble planning CTs, and hence whether they can be used in large-scale radiomic analyses to supplant missing or artifact-corrupted images.
Methods: A Pix2Pix cGAN was built to generate sCT images from T1 gradient-echo in-phase MR images. Our model extended the original Pix2Pix with two improvements: the use of three orthogonal reformatted views and epoch-specific data augmentation. The model was trained on 10 patients and then tested on 8 patients to generate sCT images. For each organ used in treatment planning, delineated on the original CT, 61 radiomics features were computed on both sCT and CT using the open-source CERR software. The concordance of these features was assessed using Bland-Altman analysis. Only structures with a volume exceeding 3 cc were evaluated.
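The Bland-Altman comparison described above can be sketched as follows. This is a minimal illustration, not the study's actual analysis code: it assumes paired per-structure feature values measured on CT and sCT, and the function name `bland_altman` is hypothetical.

```python
import numpy as np

def bland_altman(ct_values, sct_values, z=1.96):
    """Bland-Altman agreement between a radiomics feature measured
    on CT and on sCT over the same set of structures.

    ct_values, sct_values: paired per-structure feature values.
    Returns the mean difference (bias), a 95% CI for the bias,
    and the limits of agreement (bias +/- 1.96 * SD of differences).
    """
    ct = np.asarray(ct_values, dtype=float)
    sct = np.asarray(sct_values, dtype=float)
    diff = ct - sct
    bias = diff.mean()
    sd = diff.std(ddof=1)
    se = sd / np.sqrt(len(diff))          # standard error of the bias
    ci = (bias - z * se, bias + z * se)   # 95% CI of the mean difference
    loa = (bias - z * sd, bias + z * sd)  # limits of agreement
    return bias, ci, loa
```

With the study's reported values, the bias would correspond to the 144 HU mean difference and the CI to the 78-211 HU interval.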
Results: The mean HU values within all structures were significantly lower on sCT (p=0.007). The Bland-Altman analysis showed a mean difference of 144 HU (95% CI 78-211 HU). All intensity-histogram metrics had a critical difference higher than 100%, while 37% of the texture-based features had a critical difference below 100%.
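The percentage critical difference used as a threshold above can be illustrated as follows. The abstract does not state the exact formula, so this sketch assumes the common repeatability-style definition, CD% = 1.96 * sqrt(2) * SD(differences) / mean(CT) * 100; the function name is hypothetical.

```python
import numpy as np

def critical_difference_pct(ct_values, sct_values):
    """Critical difference between paired CT/sCT feature values,
    expressed as a percentage of the mean CT value.

    Assumes CD% = 1.96 * sqrt(2) * SD(differences) / mean(CT) * 100;
    the abstract does not specify the exact definition used.
    """
    ct = np.asarray(ct_values, dtype=float)
    sct = np.asarray(sct_values, dtype=float)
    diff = ct - sct
    cd = 1.96 * np.sqrt(2.0) * diff.std(ddof=1)
    return 100.0 * cd / abs(ct.mean())
```

Under this definition, a feature with CD% below 100% varies between CT and sCT by less than its typical magnitude, which is the criterion met by 37% of the texture-based features.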
Conclusion: Our results demonstrate that sCT images produced with the Pix2Pix method, as implemented here, cannot fully recover radiomics properties. However, that these results were obtained by a model without any optimization for the radiomics task highlights the feasibility of producing synthetic images for radiomics analysis. Investigating models that take into account multiple MR contrast sources and texture-specific loss functions should allow better recovery of radiomics properties.