Room: ePoster Forums
Purpose: Generative adversarial networks (GANs) are a deep learning framework in which a generator is trained against a discriminator to produce realistic images. This study inter-translated different MRI imaging modalities using GANs, comparing the synthesized configurations of the tumor target and organs at risk for MRI-only radiation therapy planning.
Methods: Four MRI imaging modalities per patient (T1-weighted (T1), T1 with gadolinium contrast enhancement (T1c), T2-weighted (T2), and FLAIR) from the BRATS database were used for training, validation, and testing. The database contains MRI images of 210 patients with high-grade glioma, of which 70% were used for training, 20% for validation, and 10% for testing. The original conditional GAN formulation for image-to-image translation (Pix2pix), implemented in TensorFlow, was adapted. Each 2-D image and its ground truth were preprocessed and joined together to meet the Pix2pix input requirement. The output was quantitatively assessed against the ground truth using the mean squared error (MSE), the peak signal-to-noise ratio (PSNR), and the structural similarity index (SSIM).
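The evaluation pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' code: MSE and PSNR follow their standard definitions, SSIM is shown in a simplified single-window (global) form rather than the usual sliding-window variant, and the side-by-side pairing of source and target slices mirrors the input format expected by the original Pix2pix implementation. All array names are hypothetical.

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two images."""
    return float(np.mean((a - b) ** 2))

def psnr(a, b, data_range=1.0):
    """Peak signal-to-noise ratio in dB for images scaled to [0, data_range]."""
    return float(10 * np.log10(data_range ** 2 / mse(a, b)))

def ssim_global(a, b, data_range=1.0):
    """Simplified SSIM computed over the whole image (one global window)."""
    c1 = (0.01 * data_range) ** 2  # stabilizing constants from the SSIM paper
    c2 = (0.03 * data_range) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = np.mean((a - mu_a) * (b - mu_b))
    return float(((2 * mu_a * mu_b + c1) * (2 * cov + c2)) /
                 ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2)))

# Pix2pix expects the source slice and its ground truth joined
# side-by-side into one image; t1_slice / t2_slice are placeholders
# for normalized 2-D slices of the two modalities.
t1_slice = np.zeros((256, 256))
t2_slice = np.ones((256, 256)) * 0.5
pair = np.concatenate([t1_slice, t2_slice], axis=1)  # shape (256, 512)
```

In practice the sliding-window SSIM (e.g. `skimage.metrics.structural_similarity`) would be preferred; the global form above only conveys the structure of the metric.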
Results: The preliminary study suggests that the different MRI modalities can be inter-translated to obtain the synthetic target region, albeit at distinct grey levels. Differences in the outer contour of the target exist among the MRI imaging modalities. Translating T1 images into T2 images, and vice versa, gave the closest results.
Conclusion: Deep learning-based generative adversarial networks are able to generate convincing synthetic target images through MRI image-to-image translation. Accurate target segmentation could draw on the synthetic target region for additional information that may not be accessible in any individual modality.