
Accelerating MR Image Acquisition with Sparse Sampling And Integration of Self-Attention Into a Deep Convolutional Neural Network

Y Wu1*, Y Ma2, J Du2, J Liu3, D Capaldi1, L Xing1, (1) Stanford Univ School of Medicine, Stanford, CA, (2) University of California San Diego, San Diego, CA, (3) University of California San Francisco, San Francisco, CA

Presentations

(Thursday, 7/18/2019) 7:30 AM - 9:30 AM

Room: 303

Purpose: To accelerate MRI acquisition, advanced image reconstruction techniques for sparsely sampled MRI have been extensively investigated. In this regard, recent developments in deep convolutional neural networks have achieved promising results. However, convolution is inherently a local operator. Accordingly, we propose to incorporate global information into image reconstruction by integrating the self-attention mechanism into a deep convolutional neural network.
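The retrospective pseudo-random undersampling described in the Methods can be illustrated with a minimal NumPy sketch. The abstract does not specify the sampling pattern, so the uniform-random Cartesian line mask, the `accel` parameter, and the zero-filled inverse-FFT input below are illustrative assumptions, not the authors' protocol.

```python
import numpy as np

def undersample(image, accel=6, seed=0):
    """Sketch of retrospective pseudo-random Cartesian undersampling:
    keep roughly 1/accel of the phase-encode lines at random and
    return the zero-filled (aliased) reconstruction used as network
    input. Pattern and density are assumptions for illustration."""
    k = np.fft.fftshift(np.fft.fft2(image))          # fully sampled k-space
    rng = np.random.default_rng(seed)
    mask = rng.random(image.shape[0]) < 1.0 / accel  # random line mask
    mask[image.shape[0] // 2] = True                 # always keep the DC line
    k_sparse = k * mask[:, None]                     # zero out unsampled lines
    return np.abs(np.fft.ifft2(np.fft.ifftshift(k_sparse)))
```

Pairs of such aliased inputs and the corresponding fully sampled images would form the training data for the end-to-end mapping.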

Methods: Three hundred sixty three-dimensional (3D) images were acquired and retrospectively under-sampled pseudo-randomly. A deep convolutional neural network was developed to provide an end-to-end mapping from under-sampled images to fully sampled images. The network had a hierarchical architecture composed of an encoder and a decoder. Global shortcut connections were established between the two paths to compensate for details lost in down-sampling, and local shortcut connections were employed within the same level of a single path to facilitate residual learning. Throughout the network, volumetric processing was adopted to exploit spatial continuity in the three-dimensional domain. A self-attention layer was integrated into every convolutional block to make use of global information spread across widely separated regions of a feature map. The attention value at a position was obtained by attending to all positions in the preceding feature map. Residual learning was employed within the self-attention model. Therefore, the output of a convolutional block contained both local information provided by feature maps and global information given by self-attention maps.
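The self-attention step above (each position attending to all positions in the preceding feature map, combined with a residual connection) can be sketched in NumPy. The projection matrices `Wq`, `Wk`, `Wv` and the scaling factor `gamma` are hypothetical names; the actual layer dimensions and parameterization are not given in the abstract.

```python
import numpy as np

def self_attention(x, Wq, Wk, Wv, gamma=1.0):
    """Sketch of a self-attention layer over a feature map x of shape
    (N, C): N flattened spatial positions, C channels. The attention
    value at each position is a weighted sum over ALL positions
    (global information), and a residual connection adds the result
    back to the local feature map."""
    q = x @ Wq                                    # queries, (N, Ck)
    k = x @ Wk                                    # keys,    (N, Ck)
    v = x @ Wv                                    # values,  (N, C)
    scores = (q @ k.T) / np.sqrt(q.shape[1])      # (N, N) pairwise affinities
    scores -= scores.max(axis=1, keepdims=True)   # numerically stable softmax
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)       # rows sum to 1
    # residual learning: global attention output plus the input feature map
    return gamma * (attn @ v) + x
```

In the described network, a block's output would combine this global term with the convolutional (local) features; for 3D volumes the positions would be flattened voxels rather than pixels.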

Results: Using the proposed network, a high acceleration factor of six was achieved without apparent degradation in image quality. Losses of micro-structures caused by sparse sampling were substantially recovered. In particular, incorporating the self-attention mechanism significantly improved image quality.

Conclusion: In this study, a self-attention convolutional neural network was developed to provide improved image reconstruction for sparsely sampled MRI.

Keywords

Not Applicable / None Entered.

Taxonomy

IM- MRI : Image Reconstruction
