
Accelerating MRI Acquisition Using Cascaded Attention UNet with Prior Information

V Agarwal*, J Balter, Y Cao, Univ Michigan, Ann Arbor, MI


(Wednesday, 7/15/2020) 4:30 PM - 5:30 PM [Eastern Time (GMT-4)]

Room: Track 1

MR image acquisition, particularly T2-weighted scanning, is slow. The use of faster T1-weighted imaging as a prior, combined with a cascaded deep learning framework with attention mechanisms, was explored to accelerate T2-weighted image acquisition through under-sampling.

A cascade of 3 Residual U-Nets with channel-wise attention mechanisms to capture non-local context information (CAR-UNet) was developed. Each U-Net was followed by a two-step data consistency module. This network was trained with T1-weighted images as priors and under-sampled T2-FLAIR images (under-sampling factor R=5) as inputs to generate images equivalent to fully sampled T2-FLAIR as output. Training and testing were performed on 7800 slices from 130 patients with gliomas, with a 9:1:3 split for training, validation and testing. A loss function consisting of a weighted sum of multiscale SSIM (MS-SSIM) and L1 was applied to the T2 images. Reconstructed T2 images from under-sampled data with R=3, 5, 6 and 8 were evaluated by structural similarity (SSIM), peak signal-to-noise ratio (PSNR), and root mean square error (RMSE). The impact of the T1-weighted prior was further examined.
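The abstract does not detail the two-step data consistency module; a minimal sketch of single-step hard data consistency (re-imposing the measured k-space samples on the network's image estimate), with hypothetical function and argument names, could look like:

```python
import numpy as np

def data_consistency(recon, measured_k, mask):
    """Hard data consistency: keep the network's k-space values only
    where no measurement exists, and re-impose measured samples.

    recon      : complex image estimate from the network, shape (H, W)
    measured_k : under-sampled k-space measurements, shape (H, W)
    mask       : boolean sampling mask, True where k-space was acquired
    """
    k = np.fft.fft2(recon)             # transform the estimate to k-space
    k = np.where(mask, measured_k, k)  # enforce the acquired samples
    return np.fft.ifft2(k)             # back to image space
```

Applying such a module after each U-Net in the cascade keeps every intermediate reconstruction consistent with the acquired under-sampled data.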

The network achieved SSIM = 0.977 and PSNR = 37.97 using T1-weighted priors at R=5. These results were better than those of the same network without T1 priors (SSIM = 0.955, PSNR = 34.61), the baseline Cascaded U-Net (SSIM = 0.940, PSNR = 34.31), Deep Cascade CNN (SSIM = 0.951, PSNR = 34.48) and Recursive Dilated Network (SSIM = 0.948, PSNR = 34.62), all without T1 priors, especially in preserving tumor details. Reconstructions from under-sampled T2 data with R=8 achieved SSIM = 0.945 and PSNR = 32.97.

The proposed framework improves T2-FLAIR image reconstruction at data under-sampling factors up to 8x. Channel-wise attention helps prevent the network from overfitting by capturing non-local context information, while T1 prior data and residual learning help improve structural recovery.
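As a hedged illustration (not the authors' implementation), channel-wise attention in the common squeeze-and-excitation style, where each channel is rescaled by a gate computed from global channel statistics, can be sketched as:

```python
import numpy as np

def channel_attention(feat, w1, w2):
    """Channel-wise attention, squeeze-and-excitation style (a sketch).

    feat : feature map of shape (C, H, W)
    w1   : bottleneck weights, shape (C // r, C), r = reduction ratio
    w2   : expansion weights, shape (C, C // r)
    """
    squeeze = feat.mean(axis=(1, 2))              # global average pool -> (C,)
    hidden = np.maximum(w1 @ squeeze, 0.0)        # ReLU bottleneck
    gates = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))  # sigmoid gates in (0, 1)
    return feat * gates[:, None, None]            # rescale each channel
```

Because the gates are computed from per-channel averages over the whole spatial extent, each output channel depends on global (non-local) context rather than only on a local receptive field.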

Funding Support, Disclosures, and Conflict of Interest: NIH R01 EB016079


Reconstruction, MRI, Computer Vision


IM-MRI: Machine learning, computer vision
