Room: AAPM ePoster Library
While machine learning-based auto-segmentation has achieved clinically acceptable results for many structures, it typically does not report a spatial confidence level that could direct a human expert's attention to specific regions for review and revision. Such feedback is especially important for structures that are difficult to contour, such as the prostate bed, which is often referred to as an "invisible" target. In this work, we propose a Bayesian U-net that achieves fast and accurate prostate bed segmentation while providing an uncertainty map showing the confidence level of the predicted contours.
The proposed Bayesian U-net was derived from the standard U-net by adding Monte Carlo (MC) dropout layers to the middle convolutional blocks. These MC dropout layers remain active in both the training and testing phases, randomly dropping half of the input nodes. By performing T forward passes through the Bayesian U-net, we obtain T different predictions from the same input image. The final segmentation and its uncertainty map are given by the mean and standard deviation of the T predictions, respectively. A clinical dataset of 186 post-prostatectomy planning CTs was used for model training and evaluation with a five-fold cross-validation strategy.
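The MC dropout procedure above can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: a single linear layer with a sigmoid stands in for the U-net, and the input sizes, weights, and T=20 are arbitrary assumptions chosen for illustration. The key points it demonstrates are that dropout stays active at test time, and that the mean and standard deviation over T stochastic passes give the segmentation and the uncertainty map.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_dropout_forward(x, w, drop_rate=0.5):
    # Dropout remains active at test time: randomly zero about half of the
    # input nodes, scaling the survivors by 1/(1 - drop_rate) (inverted dropout).
    mask = rng.random(x.shape) > drop_rate
    h = (x * mask) / (1.0 - drop_rate)
    # Toy stand-in for the network: one linear layer plus a sigmoid.
    return 1.0 / (1.0 + np.exp(-h @ w))

x = rng.random(16)          # toy input "image" (hypothetical size)
w = rng.random((16, 4))     # toy weights mapping to 4 output "pixels"

T = 20                      # number of stochastic forward passes (assumed)
preds = np.stack([mc_dropout_forward(x, w) for _ in range(T)])

segmentation = preds.mean(axis=0)  # final prediction: mean over T passes
uncertainty = preds.std(axis=0)    # confidence map: std. dev. over T passes
```

Because each pass samples a different dropout mask, voxels whose prediction varies strongly across passes receive a high standard deviation, which is exactly what the uncertainty map visualizes.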
The global Dice similarity coefficient (DSC) and average symmetric surface distance (ASD) of the proposed method on the prostate bed are 74.23±7.39% and 2.61±1.27 mm, respectively. The average processing time is 10.8 seconds per CT image. In addition, our method generates an uncertainty map alongside the segmentation result, indicating the spatial confidence level of the predictions.
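For reference, the DSC reported above measures volumetric overlap between the predicted and ground-truth masks, DSC = 2|A ∩ B| / (|A| + |B|). A minimal sketch on a toy 2D mask (the 4×4 arrays are invented for illustration, not data from the study):

```python
import numpy as np

def dice(pred, gt):
    # Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|)
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True  # 4 foreground pixels
b = np.zeros((4, 4), dtype=bool); b[1:3, 1:4] = True  # 6 pixels, overlap of 4
print(dice(a, b))  # 2*4 / (4 + 6) = 0.8
```

The ASD is computed analogously but over contour surfaces rather than volumes, averaging the distances from each surface point of one mask to the nearest surface point of the other, in both directions.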
We developed a Bayesian U-net for accurate segmentation of the prostate bed on CT images. Moreover, the method generates an uncertainty map of the predicted contour, giving clinicians additional reference information for manual modification.
Funding Support, Disclosures, and Conflict of Interest: The research is in part supported by NIH grant 1R01CA206100.