
Progressively Grown GAN with Learned Fusion Operation for Hetero-Modal Synthesis of MRI Sequences

D Gourdeau1*, S Duchesne2, L Archambault3, (1) Universite Laval, Quebec, QC, CA, (2) Centre de recherche CERVO, Quebec, QC, CA, (3) CHUQ Pavillon Hotel-Dieu de Quebec, Quebec, QC, CA

Presentations

(Sunday, 7/12/2020) 4:30 PM - 5:30 PM [Eastern Time (GMT-4)]

Room: Track 2

Purpose: Magnetic resonance imaging (MRI) is the prime imaging modality for soft tissue. Multiple pulse sequences can be acquired to obtain different contrasts. However, missing pulse sequences and imaging artifacts are a problem for data analysis pipelines that depend on the presence of specific sequences. Hence, selective synthesis of a desired sequence and automatic completion of these heterogeneous datasets are desirable.

Methods: Traditional multi-modal image synthesis methods can create high-quality images, but they are limited in practical settings because they cannot handle missing inputs. In this work, we present a hetero-modal image synthesis approach that can synthesize any modality given only a subset of the available modalities. To do so, each input modality is encoded into a 3D modality-specific multi-resolution representation. Previous works have fused these representations using arithmetic operations such as the mean, maximum, or variance. One downside of these fusion methods is that they require at least two input modalities to define the variance. In this work, we propose to fuse the representations using a 3D attention network that learns to optimally combine them to synthesize the desired output. We incorporate recent advances in generative adversarial networks (GANs), namely the progressive growing of GANs and the Wasserstein adversarial loss. We test the method by synthesizing every MRI sequence in the BraTS 2018 dataset, which contains T1, T2, FLAIR, and contrast-enhanced T1 sequences.
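To illustrate the fusion step, the following is a minimal PyTorch sketch (hypothetical module and variable names; this is not the authors' released code). Each available modality contributes a 3D feature map; a learned scoring network produces per-modality weights normalized with a softmax, so any subset of inputs, even a single modality, can be fused. The arithmetic baselines are included for comparison, showing why variance fusion needs at least two inputs.

```python
import torch
import torch.nn as nn


class AttentionFusion(nn.Module):
    """Learned fusion of modality-specific 3D feature maps (sketch)."""

    def __init__(self, channels: int):
        super().__init__()
        # Scores each modality's features independently with a 1x1x1
        # convolution; weights are normalized across modalities.
        self.score = nn.Conv3d(channels, 1, kernel_size=1)

    def forward(self, features: list[torch.Tensor]) -> torch.Tensor:
        # features: list of (B, C, D, H, W) tensors, one per available input.
        stacked = torch.stack(features, dim=1)            # (B, M, C, D, H, W)
        b, m, c, d, h, w = stacked.shape
        scores = self.score(stacked.view(b * m, c, d, h, w))
        scores = scores.view(b, m, 1, d, h, w)
        weights = torch.softmax(scores, dim=1)            # sums to 1 over modalities
        return (weights * stacked).sum(dim=1)             # (B, C, D, H, W)


def arithmetic_fusion(features: list[torch.Tensor], mode: str = "mean") -> torch.Tensor:
    """Baseline fusions from prior work: mean, maximum, variance."""
    stacked = torch.stack(features, dim=1)
    if mode == "mean":
        return stacked.mean(dim=1)
    if mode == "max":
        return stacked.max(dim=1).values
    if mode == "var":
        # Sample variance is undefined for a single input modality.
        assert stacked.shape[1] >= 2, "variance fusion requires >= 2 modalities"
        return stacked.var(dim=1)
    raise ValueError(mode)
```

Because the softmax weights are defined voxel-wise for however many feature maps are present, the attention fusion degrades gracefully to an identity-like pass-through when only one modality is available, which the variance baseline cannot do.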

Results: Our fusion using attention outperforms the maximum/variance fusion by an average of 0.017 points of structural similarity (SSIM) in all synthesis scenarios, while being able to synthesize images using only a single input modality. Additionally, the attention module brings interpretability by highlighting the most informative locations in the input modalities.
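For reference, the reported metric can be computed as below. This is an assumed evaluation setup using scikit-image, not the paper's exact pipeline; the file paths are hypothetical placeholders for co-registered real and synthesized volumes.

```python
import numpy as np
from skimage.metrics import structural_similarity

# Hypothetical paths to (D, H, W) volumes of the same sequence.
real = np.load("t2_real.npy")
synth = np.load("t2_synth.npy")

# SSIM over the full 3D volume; data_range fixes the intensity scale.
score = structural_similarity(real, synth, data_range=real.max() - real.min())
print(f"SSIM: {score:.3f}")
```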

Conclusion: We demonstrate a novel feature fusion method enabling the flexible synthesis of MRI pulse sequences.

Funding Support, Disclosures, and Conflict of Interest: We acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC), [funding reference number 534769]

Keywords

MRI, Image Fusion, Image Processing

Taxonomy

IM- Dataset analysis/biomathematics: Machine learning
