Room: Karl Dean Ballroom C
Purpose: In our group we have used a 2D U-Net to segment organs of the male pelvis on CT images and achieved improved results compared to existing work. In this work we explore the possibility of further improvement using a 3D deep learning architecture.
Methods: The architecture consists of a 2D localization U-Net followed by a 3D segmentation U-Net for volumetric segmentation. As a first step, we train a 2D U-Net to determine the organ locations and then select the 3D volume of interest (VOI) as the input for the 3D U-Net. The models were trained on a pelvic CT dataset comprising 176 patients. 80% of the data was used for training (15% of which was used for validation) and 20% was used for testing. The initial 2D localization U-Net is shallow, enabling very fast localization. Both networks include multiple convolution layers, dropout layers, and ReLU activations, with multiple filters per layer. The 3D U-Net makes use of aggregated residual blocks (ResNeXt).
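The cropping step of this cascade can be illustrated with a short sketch: given the per-slice binary masks produced by the 2D localization network, compute a 3D bounding box and extract the volume of interest that is fed to the 3D segmentation network. The function name, the margin parameter, and the use of NumPy are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def crop_voi(volume, slice_masks, margin=2):
    """Extract the 3D volume of interest around an organ.

    volume      -- 3D CT array, shape (slices, rows, cols)
    slice_masks -- binary localization masks from the 2D U-Net, same shape
    margin      -- extra voxels padded around the bounding box (assumed value)

    Hypothetical helper sketching the 2D-to-3D hand-off described above.
    """
    zs, ys, xs = np.nonzero(slice_masks)
    # Bounding box of all positive localization voxels, clipped to the volume.
    z0, z1 = max(zs.min() - margin, 0), min(zs.max() + margin + 1, volume.shape[0])
    y0, y1 = max(ys.min() - margin, 0), min(ys.max() + margin + 1, volume.shape[1])
    x0, x1 = max(xs.min() - margin, 0), min(xs.max() + margin + 1, volume.shape[2])
    return volume[z0:z1, y0:y1, x0:x1]
```

Cropping to the VOI keeps the 3D network's input small, which is what makes full volumetric segmentation tractable in memory and fast at inference time.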
Results: Test results show that 3D U-Net based segmentation achieves Dice scores over 0.90 for the prostate (SD = 0.07) with computation times below a few seconds per volume. The left and right femoral head segmentations have mean (±SD) Dice scores of 0.96 (±0.02) and 0.95 (±0.02), while bladder and rectum have 0.95 (±0.03) and 0.85 (±0.04), respectively.
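For reference, the Dice similarity coefficient reported above is 2|A∩B| / (|A| + |B|) for a predicted mask A and ground-truth mask B. A minimal NumPy sketch (function name and empty-mask convention are assumptions, not from the abstract):

```python
import numpy as np

def dice_score(pred, gt):
    """Dice similarity coefficient between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        # Convention assumed here: two empty masks agree perfectly.
        return 1.0
    return 2.0 * np.logical_and(pred, gt).sum() / denom
```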
Conclusion: Most automated techniques for prostate segmentation require the physician to manually specify the first and last slices of the prostate in the image space. This 2D-3D hybrid network requires only the CT images as input (no manual intervention) and not only achieves superior segmentation performance (i.e., higher Dice scores) compared with state-of-the-art methods (including our own 2D model), but also demonstrates its capability in dealing with irregular prostates and rectums.
Funding Support, Disclosures, and Conflict of Interest: Cancer Prevention and Research Institute of Texas (CPRIT) (IIRA RP150485) grant