Purpose: Segmentation of the prostate gland on CT is challenging due to the low image contrast between the prostate and the surrounding tissue. Accurate delineation of the prostate boundary is essential for radiation therapy treatment planning. Despite the improved soft-tissue contrast provided by MRI, CT remains the modality of choice for prostate segmentation prior to radiotherapy. Convolutional neural networks such as Unets have been used successfully for many medical image segmentation tasks. We applied a novel three-phase transfer learning approach to fully automatic CT prostate volume segmentation.
Methods: A total of 93 CT prostate cases were manually segmented by a radiologist (81 for training and validation, 12 for testing). Instead of using a single slice to predict the segmentation, we used the adjacent slices above and below the slice of interest as multichannel input during training and testing. This 2.5D technique allows the network to incorporate anatomical information from adjacent slices, similar to a 3D network, while requiring less computational power and training time. Network training was performed over three phases: phase I on MR images and synthetic CT, phase II on synthetic CT, and phase III on real CT images to generate masks from CT.
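The 2.5D multichannel input described above can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function name `make_25d_input` and the boundary-clamping behavior at the volume edges are assumptions for the sketch.

```python
import numpy as np

def make_25d_input(volume, k, n_adjacent):
    """Stack the slice of interest with n_adjacent slices above and
    below it as input channels, clamping indices at volume boundaries.

    volume: 3D array of shape (num_slices, H, W)
    Returns an array of shape (2 * n_adjacent + 1, H, W).
    """
    num_slices = volume.shape[0]
    # Slice indices k-n .. k+n, repeated at the first/last slice if out of range
    idx = [min(max(k + d, 0), num_slices - 1)
           for d in range(-n_adjacent, n_adjacent + 1)]
    return np.stack([volume[i] for i in idx], axis=0)

# Toy volume: 5 slices of 4x4 pixels
vol = np.arange(5 * 4 * 4, dtype=np.float32).reshape(5, 4, 4)
x0 = make_25d_input(vol, k=2, n_adjacent=0)  # single slice (0): 1 channel
x1 = make_25d_input(vol, k=2, n_adjacent=1)  # one adjacent slice (+/-1): 3 channels
x2 = make_25d_input(vol, k=2, n_adjacent=2)  # two adjacent slices (+/-2): 5 channels
```

The stacked array feeds the network as a multichannel image, so only the first convolution's input-channel count changes relative to a single-slice 2D Unet.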
Results: For the 12 test patients we achieved Dice scores of 0.7364±0.2113, 0.7550±0.1965, and 0.8158±0.1849 for the single-slice (0), one-adjacent-slice (±1), and two-adjacent-slice (±2) inputs, respectively.
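For reference, the Dice similarity coefficient reported above is typically computed from binary masks as shown in this minimal sketch (the function name and the small smoothing constant `eps` are assumptions, not from the source):

```python
import numpy as np

def dice_score(pred, gt, eps=1e-7):
    """Dice similarity coefficient: 2|A & B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    # eps guards against division by zero when both masks are empty
    return 2.0 * intersection / (pred.sum() + gt.sum() + eps)

# Toy example: predicted mask overlaps ground truth in 1 of 2 labeled pixels
pred = np.array([[1, 1], [0, 0]])
gt = np.array([[1, 0], [0, 0]])
d = dice_score(pred, gt)  # 2*1 / (2 + 1) = 0.667
```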
Conclusion: We developed a CT prostate segmentation algorithm with a high Dice score using transfer learning on a 2.5D Unet with short skip connections. The network achieved good performance with our multi-phase transfer learning approach. Future work includes a quantitative comparison against a 2.5D Unet trained without transfer learning and deriving prostate gland volume based on these segmentations.