
3D Ultrasound Prostate Segmentation Using 3D Deeply Supervised V-Net

X Yang*, Y Lei , S Tian , T Wang , A Jani , W Curran , P Patel , T Liu , Emory University, Atlanta, GA

Presentations

(Tuesday, 7/31/2018) 9:30 AM - 10:00 AM

Room: Exhibit Hall | Forum 5

Purpose: Transrectal ultrasound is a versatile, real-time modality that is commonly used in image-guided prostate-cancer interventions (e.g., biopsy and brachytherapy). Accurate segmentation of the prostate is key to biopsy needle placement, brachytherapy treatment planning, and motion management. Manual segmentation during these interventions is time-consuming and observer-dependent. To address these drawbacks, we have developed a deep learning-based method that integrates deep supervision into a 3D patch-based fully convolutional neural network (V-Net) for prostate segmentation.

Methods: A 3D patch-based V-Net was developed that enables voxel-wise error back-propagation. The proposed V-Net produces dense predictions, with the prediction patch having the same size as the input patch. A 3D deep supervision mechanism is then integrated into the hidden layers of the V-Net to address the optimization difficulties of training a deep network with limited training data. Finally, we combine the negative log-likelihood loss and the batch-based Dice loss of the dense predictions into the overall loss function for deeply supervised training. During the segmentation stage, patches extracted from a newly acquired ultrasound image are fed to the trained network, which adaptively labels the prostate tissue. The final segmented prostate volume is reconstructed by patch fusion and refined by contour refinement.
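The following is a minimal sketch (PyTorch assumed) of one way the deeply supervised loss described above could be formed: each auxiliary output from a hidden layer is upsampled to the input patch size and penalized with a negative log-likelihood term plus a batch-based Dice term, alongside the main output. Function names, the auxiliary weight, and tensor shapes are illustrative assumptions, not the authors' exact implementation.

import torch
import torch.nn.functional as F

def dice_loss(probs, target, eps=1e-6):
    # Batch-based soft Dice loss on the foreground (prostate) channel.
    p = probs[:, 1]                      # foreground probability map, [N, D, H, W]
    t = (target == 1).float()            # binary ground-truth mask, [N, D, H, W]
    inter = (p * t).sum()
    return 1.0 - (2.0 * inter + eps) / (p.sum() + t.sum() + eps)

def deeply_supervised_loss(main_logits, aux_logits_list, target, aux_weight=0.3):
    # Combine voxel-wise NLL (cross-entropy) and batch Dice on the main output
    # and on each auxiliary hidden-layer output (upsampled to patch size).
    def term(logits):
        nll = F.cross_entropy(logits, target)
        dice = dice_loss(F.softmax(logits, dim=1), target)
        return nll + dice

    loss = term(main_logits)
    for aux in aux_logits_list:
        aux = F.interpolate(aux, size=target.shape[1:], mode='trilinear',
                            align_corners=False)
        loss = loss + aux_weight * term(aux)
    return loss

In this sketch the auxiliary terms act as extra gradient paths into the hidden layers, which is the mechanism deep supervision uses to ease optimization when training data are limited.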

Results: Our deep learning-based segmentation technique was clinically validated using data from 22 patients. The accuracy of our approach was assessed against manual segmentation (ground truth). The mean volume Dice similarity coefficient between the deep learning-based and manually segmented volumes was 90.18±0.90%. The mean surface distance between the two segmented volumes was 0.36±0.07 mm.
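As a point of reference, the two reported metrics can be computed from binary segmentation volumes as in the sketch below (NumPy/SciPy assumed); the array names and voxel spacing are illustrative and not taken from the study.

import numpy as np
from scipy import ndimage

def dice_coefficient(seg, gt):
    # Volume Dice similarity coefficient between two binary masks.
    seg, gt = seg.astype(bool), gt.astype(bool)
    inter = np.logical_and(seg, gt).sum()
    return 2.0 * inter / (seg.sum() + gt.sum())

def mean_surface_distance(seg, gt, spacing=(1.0, 1.0, 1.0)):
    # Symmetric mean surface distance; result is in mm if `spacing`
    # is the voxel size in mm.
    def surface(mask):
        eroded = ndimage.binary_erosion(mask)
        return np.logical_and(mask, np.logical_not(eroded))

    seg, gt = seg.astype(bool), gt.astype(bool)
    # Distance of every voxel to the nearest surface voxel of each mask.
    d_to_gt = ndimage.distance_transform_edt(np.logical_not(surface(gt)), sampling=spacing)
    d_to_seg = ndimage.distance_transform_edt(np.logical_not(surface(seg)), sampling=spacing)
    dists = np.concatenate([d_to_gt[surface(seg)], d_to_seg[surface(gt)]])
    return dists.mean()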

Conclusion: We have developed a novel prostate segmentation approach based on the 3D deeply supervised convolutional neural network framework and validated its accuracy using manual segmentation. This segmentation technique could be a useful tool in ultrasound-guided interventions for prostate-cancer diagnosis and treatment.

Keywords

Not Applicable / None Entered.

Taxonomy

IM/TH- image segmentation: Ultrasound
