
RmU-Net: A Generalizable Deep Learning Approach for Automatic Prostate Segmentation in 3D Ultrasound Images

N Orlando1,2*, D Gillies1,2, I Gyackov2, C Romagnoli3,5, D D'Souza4,5, A Fenster1-4, (1) Department of Medical Biophysics, Western University, London, ON, CA, (2) Robarts Research Institute, Western University, London, ON, CA, (3) Department of Medical Imaging, Western University, London, ON, CA, (4) Department of Oncology, Western University, London, ON, CA, (5) London Health Sciences Centre, London, ON, CA

Presentations

(Monday, 7/13/2020) 1:00 PM - 3:00 PM [Eastern Time (GMT-4)]

Room: Track 1

Purpose: To develop a robust and generalizable deep learning-based approach for automatic prostate segmentation in 3D transrectal ultrasound (TRUS) images acquired during prostate biopsy and brachytherapy procedures, with the potential to reduce procedure time and anesthesia risk by eliminating lengthy manual segmentation.

Methods: Our training dataset consisted of 206 3D TRUS patient images with corresponding manual segmentations, acquired during two procedure types (biopsy and brachytherapy), with two acquisition geometries (end-fire and side-fire) and four transducers used with three different ultrasound systems. The 3D images were resliced at random planes, yielding 6,773 2D images used to train a modified U-Net. Our proposed 3D reconstructed modified U-Net (rmU-Net) performs deep learning predictions on 2D radial slices, which are then reconstructed into a 3D surface. To compare performance against a standard network, we trained an unmodified 3D V-Net on the same dataset. Network performance was evaluated using 20 end-fire and 20 side-fire 3D TRUS images unseen by the networks during training.
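For intuition, the sketch below shows one way the radial reslice-and-reconstruct step could be implemented in Python with NumPy/SciPy. It is a minimal sketch, assuming an isotropic volume rotated about its central z-axis; the function names, slice count, and majority-vote reconstruction are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: radial 2D reslicing of a 3D volume and
# reconstruction of per-slice predictions into a 3D mask. Assumes an
# isotropic volume rotated about its central z-axis; not the authors' code.
import numpy as np
from scipy.ndimage import map_coordinates


def radial_slices(volume, n_slices=30):
    """Extract n_slices 2D planes rotated about the central z-axis."""
    nz, ny, nx = volume.shape
    zc, yc, xc = (np.array(volume.shape) - 1) / 2.0
    r = np.arange(nx) - xc                      # in-plane radial coordinate
    zz, rr = np.meshgrid(np.arange(nz), r, indexing="ij")
    slices = []
    for theta in np.linspace(0.0, np.pi, n_slices, endpoint=False):
        ys = yc + rr * np.sin(theta)
        xs = xc + rr * np.cos(theta)
        coords = np.stack([zz, ys, xs])         # (3, nz, nx) sampling grid
        slices.append(map_coordinates(volume, coords, order=1))
    return slices                               # 2D inputs for the 2D network


def reconstruct(masks, shape):
    """Majority-vote accumulation of 2D predictions back into 3D."""
    nz, ny, nx = shape
    zc, yc, xc = (np.array(shape) - 1) / 2.0
    r = np.arange(nx) - xc
    zz, rr = np.meshgrid(np.arange(nz), r, indexing="ij")
    votes = np.zeros(shape)
    hits = np.zeros(shape)
    thetas = np.linspace(0.0, np.pi, len(masks), endpoint=False)
    for mask, theta in zip(masks, thetas):
        ys = np.clip(np.rint(yc + rr * np.sin(theta)).astype(int), 0, ny - 1)
        xs = np.clip(np.rint(xc + rr * np.cos(theta)).astype(int), 0, nx - 1)
        np.add.at(votes, (zz, ys, xs), mask.astype(float))
        np.add.at(hits, (zz, ys, xs), 1.0)
    return votes > 0.5 * np.maximum(hits, 1.0)  # binary 3D segmentation
```

In this sketch, any binary per-slice masks of matching shape can be passed to the reconstruction step; in the described method, those predictions would come from the modified U-Net applied to each radial slice.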

Results: Our proposed method performed with a median [Q1, Q3] Dice similarity coefficient, recall, precision, mean surface distance, and Hausdorff distance of 94.1 [92.6, 94.9]%, 96.0 [93.1, 98.5]%, 93.2 [88.8, 95.4]%, 0.89 [0.73, 1.09] mm, and 2.89 [2.37, 4.35] mm, respectively. Compared with the standard 3D V-Net and state-of-the-art segmentation algorithms, our proposed method greatly improved performance on nearly all metrics. Segmentation time was <0.7 s per 3D image, a substantial improvement over manual segmentation, which can take up to 30 minutes.
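For reference, the five reported metrics can be computed from binary 3D masks as sketched below. This is a minimal sketch assuming 1 mm isotropic voxels; the distance-transform approach to mean surface distance (MSD) and Hausdorff distance is one common choice, not necessarily the authors' evaluation code.

```python
# Illustrative sketch only: the five reported metrics computed from binary
# 3D masks. Assumes 1 mm isotropic voxels; the distance-transform approach
# to surface distances is one common choice, not necessarily the authors'.
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt


def overlap_metrics(pred, gt):
    """Dice similarity coefficient, recall, and precision."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    dice = 2.0 * tp / (pred.sum() + gt.sum())
    recall = tp / gt.sum()                   # fraction of ground truth recovered
    precision = tp / pred.sum()              # fraction of prediction that is correct
    return dice, recall, precision


def surface_metrics(pred, gt):
    """Symmetric mean surface distance and Hausdorff distance (in voxels)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    ps = pred ^ binary_erosion(pred)         # surface voxels of the prediction
    gs = gt ^ binary_erosion(gt)             # surface voxels of the ground truth
    d_to_gt = distance_transform_edt(~gs)    # distance map to the GT surface
    d_to_pr = distance_transform_edt(~ps)    # distance map to the predicted surface
    d = np.concatenate([d_to_gt[ps], d_to_pr[gs]])
    return d.mean(), d.max()                 # MSD, Hausdorff distance
```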

Conclusion: Our proposed algorithm provided fast and accurate 3D segmentations across clinically diverse 3D TRUS images, offering a generalizable and robust intraoperative solution for needle-based prostate cancer procedures that can be readily translated into the clinic. This method has the potential to improve workflow efficiency and patient throughput by decreasing physician burden and procedure times, supporting the increasing interest in needle-based procedures for prostate cancer diagnosis and treatment.

Funding Support, Disclosures, and Conflict of Interest: This research was supported by the Ontario Institute for Cancer Research (OICR), the Canadian Institutes of Health Research (CIHR), and the Natural Sciences and Engineering Research Council of Canada (NSERC). N Orlando was supported in part by the Translational Breast Cancer Research Unit.

Keywords

Segmentation, Ultrasonics, Image Processing

Taxonomy

IM/TH- Image Segmentation Techniques: Modality: Ultrasound
