Room: Exhibit Hall | Forum 8
Purpose: To develop a prediction framework based on a conditional adversarial network, comprising a generator and a discriminator, for translation between multi-contrast MRI images.
Methods: The data set consisted of a total of 2024 brain T1- and T2-weighted MRI images from 104 patients. The data were split into two sets: 1514 images (70 patients) for model training and 510 images (34 patients) for model testing. The prediction framework is based on a conditional adversarial network with a generator and a discriminator. Given an input image, the generator learns to generate an image of the same anatomy in a target contrast, while the discriminator learns to discriminate between synthesized and real pairs of multi-contrast images. Both subnetworks are trained simultaneously: the generator aims to minimize a pixel-wise loss and an adversarial loss, while the discriminator tries to maximize the adversarial loss. Prediction frameworks were created for T1-to-T2 and T2-to-T1 weighted image translation. Image translation was evaluated in terms of mean absolute error (MAE), peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and mutual information (MI) using an in-house automated analysis system written in Python.
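The alternating objectives described above can be illustrated with a minimal training-step sketch in PyTorch. This is not the authors' implementation: the placeholder networks, learning rates, and the pixel-loss weight `lambda_pix` are hypothetical, and a pix2pix-style conditional setup (the discriminator scores source/target image pairs) is assumed.

```python
import torch
import torch.nn as nn

# Hypothetical placeholder networks so the snippet runs standalone;
# a real framework would use deeper generator/discriminator architectures.
generator = nn.Sequential(nn.Conv2d(1, 1, 3, padding=1))      # source -> target contrast
discriminator = nn.Sequential(nn.Conv2d(2, 1, 3, padding=1))  # scores (source, target) pairs

adv_loss = nn.BCEWithLogitsLoss()  # adversarial loss
pix_loss = nn.L1Loss()             # pixel-wise loss
lambda_pix = 100.0                 # assumed pixel-loss weight

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(source, target):
    """One simultaneous update of discriminator and generator on paired slices."""
    # Discriminator: maximize the adversarial loss (minimize its negative),
    # i.e., score real pairs as 1 and synthesized pairs as 0.
    fake = generator(source).detach()
    d_real = discriminator(torch.cat([source, target], dim=1))
    d_fake = discriminator(torch.cat([source, fake], dim=1))
    d_loss = (adv_loss(d_real, torch.ones_like(d_real))
              + adv_loss(d_fake, torch.zeros_like(d_fake)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator: minimize the adversarial loss plus the weighted pixel-wise loss.
    fake = generator(source)
    d_fake = discriminator(torch.cat([source, fake], dim=1))
    g_loss = (adv_loss(d_fake, torch.ones_like(d_fake))
              + lambda_pix * pix_loss(fake, target))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()

# Usage with dummy batches of single-channel slices
# (e.g., T1-weighted sources and matched T2-weighted targets).
src = torch.randn(4, 1, 64, 64)
tgt = torch.randn(4, 1, 64, 64)
print(train_step(src, tgt))
```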
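The abstract does not describe how the in-house Python system computes the four metrics; the following is one plausible sketch using NumPy, scikit-image, and scikit-learn. The histogram bin count, intensity range, and the joint-histogram estimator for MI are all assumptions for illustration.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity
from sklearn.metrics import mutual_info_score

def evaluate_translation(pred, ref, data_range=255.0, bins=64):
    """Compute MAE, PSNR, SSIM, and MI between predicted and reference images."""
    mae = np.mean(np.abs(pred.astype(float) - ref.astype(float)))
    psnr = peak_signal_noise_ratio(ref, pred, data_range=data_range)
    ssim = structural_similarity(ref, pred, data_range=data_range)
    # MI estimated from the joint intensity histogram of the two images.
    joint, _, _ = np.histogram2d(ref.ravel(), pred.ravel(), bins=bins)
    mi = mutual_info_score(None, None, contingency=joint)
    return {"MAE": mae, "PSNR": psnr, "SSIM": ssim, "MI": mi}

# Usage with dummy 8-bit images; real inputs would be matched MRI slices.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(256, 256)).astype(np.uint8)
pred = np.clip(ref + rng.normal(0, 10, ref.shape), 0, 255).astype(np.uint8)
print(evaluate_translation(pred, ref))
```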
Results: Average values for T1-to-T2 and T2-to-T1 weighted image translation were 27.53 and 25.81 for MAE, 24.67 and 26.53 for PSNR, 0.83 and 0.87 for SSIM, and 1.18 and 1.34 for MI, respectively.
Conclusion: Our prediction framework can help improve the quality and versatility of multi-contrast MRI exams without the need for prolonged examinations.