
High-Resolution Ultrasound Imaging Reconstruction Using Deep Attention Generative Adversarial Networks

Y Lei , X He*, Y Liu , Z Tian , T Wang , W Curran , T Liu , X Yang , Emory Univ, Atlanta, GA

Presentations

(Wednesday, 7/17/2019) 4:30 PM - 6:00 PM

Room: 303

Purpose: A routine 3D transrectal ultrasound (TRUS) volume is usually captured with a large slice thickness (e.g., 2–5 mm). The resulting low out-of-plane resolution hampers contouring and needle/seed detection in prostate brachytherapy. The purpose of this study is to develop a deep-learning-based method to reconstruct high-resolution images from routinely captured prostate ultrasound images for brachytherapy.

Methods: We propose to integrate a deeply supervised attention model into a Generative Adversarial Network (GAN)-based framework to improve ultrasound image resolution. During the training stage, we first upsample the low-resolution images to the size of the high-resolution images by bicubic interpolation and extract 3D patches from them. Deep attention GANs enable end-to-end encoding-and-decoding learning: an attention model retrieves the most relevant information from the encoder, and a residual network learns the difference between low- and high-resolution images. Convolutional-neural-network-based discriminators judge how realistic the decoder output is relative to the true high-resolution image. During the reconstruction stage, we extract low-resolution patches from a new TRUS image and feed them into the trained networks to generate high-resolution patches; the enhanced TRUS image is then reconstructed by patch-wise fusion. The technique was validated on 20 patients using leave-one-out cross-validation: high-resolution TRUS images reconstructed from downsampled images were compared with the original images to quantitatively evaluate performance.
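The pre- and post-processing steps around the network (bicubic upsampling along the slice axis, 3D patch extraction, and patch-wise fusion of the generated patches) can be sketched as below. This is an illustrative sketch, not the authors' implementation: the patch size, stride, and averaging-based fusion are assumptions, and the trained generator itself is omitted.

```python
import numpy as np
from scipy.ndimage import zoom

def upsample_bicubic(volume, factor_z):
    # Upsample along the slice (out-of-plane) axis with cubic spline
    # interpolation (order=3), leaving the in-plane size unchanged.
    return zoom(volume, (factor_z, 1.0, 1.0), order=3)

def extract_patches(volume, patch=(16, 32, 32), stride=(8, 16, 16)):
    # Slide a 3D window over the volume and collect overlapping patches
    # together with their corner coordinates (sizes are assumptions).
    pz, py, px = patch
    sz, sy, sx = stride
    Z, Y, X = volume.shape
    patches, coords = [], []
    for z in range(0, Z - pz + 1, sz):
        for y in range(0, Y - py + 1, sy):
            for x in range(0, X - px + 1, sx):
                patches.append(volume[z:z + pz, y:y + py, x:x + px])
                coords.append((z, y, x))
    return np.stack(patches), coords

def fuse_patches(patches, coords, shape, patch=(16, 32, 32)):
    # Patch-wise fusion: accumulate (network-enhanced) patches back into
    # the volume and average each voxel by its overlap count.
    pz, py, px = patch
    acc = np.zeros(shape, dtype=np.float64)
    cnt = np.zeros(shape, dtype=np.float64)
    for p, (z, y, x) in zip(patches, coords):
        acc[z:z + pz, y:y + py, x:x + px] += p
        cnt[z:z + pz, y:y + py, x:x + px] += 1.0
    return acc / np.maximum(cnt, 1.0)
```

In the full pipeline, the trained generator would be applied to each extracted patch between `extract_patches` and `fuse_patches`; averaging in overlap regions suppresses seam artifacts at patch borders.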

Results: The mean absolute intensity error (MAIE) and peak signal-to-noise ratio (PSNR) between reconstructed and original images were 6.5±0.5 HU and 38.0±2.4 dB, respectively, demonstrating the accuracy of the proposed method.
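The two evaluation metrics reported above can be computed as follows. This is a minimal sketch assuming the standard definitions of MAIE and PSNR; the intensity peak value used for PSNR is an assumption (8-bit here), since the abstract does not state it.

```python
import numpy as np

def maie(recon, ref):
    # Mean absolute intensity error between two volumes.
    r = recon.astype(np.float64)
    g = ref.astype(np.float64)
    return float(np.mean(np.abs(r - g)))

def psnr(recon, ref, peak=255.0):
    # Peak signal-to-noise ratio in dB; `peak` is the maximum possible
    # intensity (8-bit assumed here, an illustrative choice).
    r = recon.astype(np.float64)
    g = ref.astype(np.float64)
    mse = np.mean((r - g) ** 2)
    if mse == 0:
        return float("inf")
    return float(10.0 * np.log10(peak ** 2 / mse))
```

Both metrics are computed voxel-wise between the reconstructed volume and the original high-resolution volume held out by the cross-validation fold.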

Conclusion: We have investigated a novel deep-learning-based approach to improve the image resolution of routine TRUS and demonstrated its reliability. The proposed method has great potential in facilitating routine brachytherapy workflow and improving radiation dose calculation and delivery accuracy.

Funding Support, Disclosures, and Conflict of Interest: NIH R01 CA215718

Keywords

Not Applicable / None Entered.

Taxonomy

IM- Ultrasound : Quantitative imaging/analysis
