Room: Room 207
Purpose: Accurate segmentation of organs-at-risk is a key step in radiation treatment planning for head and neck cancers. Deep convolutional neural networks (DCNNs) have shown promise in many medical image segmentation applications, but many existing methods operate on 2D slices or extracted patches, which carry limited contextual information. The purpose of this study was to develop and validate a 3D DCNN method for segmentation of head and neck CT images.
Methods: 3D CT images of 40 subjects with ground-truth contours for the brainstem, chiasm, mandible, left and right optic nerves, and left and right parotid glands were used in this study; 26 subjects were randomly selected for training and 14 for testing. To reduce variability in scanning parameters, the images were resampled to a common voxel spacing and cropped to a common size before being fed into the 3D U-Net-based network. A non-uniformly weighted cross-entropy loss was used to reduce the effect of label imbalance, and random affine transformations were applied during training to reduce overfitting. Dice scores, mean surface distance, and 95% Hausdorff distance were calculated to evaluate segmentation performance.
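The class-weighting idea behind the loss can be sketched in a few lines of numpy. This is a minimal illustration, not the authors' implementation; the inverse-frequency weighting scheme and the function names are assumptions chosen for clarity.

```python
import numpy as np

def class_weights(labels, n_classes):
    """Inverse-frequency class weights to counter label imbalance
    (background voxels vastly outnumber small ROIs such as the chiasm).
    Illustrative scheme; the abstract does not specify the exact weights."""
    counts = np.bincount(labels.ravel(), minlength=n_classes).astype(float)
    counts = np.maximum(counts, 1.0)           # guard against empty classes
    return counts.sum() / (n_classes * counts)  # rarer class -> larger weight

def weighted_cross_entropy(probs, labels, weights):
    """Mean voxel-wise cross entropy, each voxel scaled by its class weight.
    `probs` has shape (..., n_classes) and sums to 1 along the last axis."""
    eps = 1e-7  # numerical floor inside the log
    p_true = np.take_along_axis(probs, labels[..., None], axis=-1)[..., 0]
    return float(np.mean(-weights[labels] * np.log(p_true + eps)))

# Toy example: class 1 (a small organ) is rare relative to background.
labels = np.array([0, 0, 0, 0, 0, 0, 1, 1])
weights = class_weights(labels, n_classes=2)        # rare class gets weight 2.0
probs = np.full((8, 2), 0.5)                        # maximally uncertain prediction
loss = weighted_cross_entropy(probs, labels, weights)
```

With uniform predictions, the weighting makes the two rare-class voxels contribute as much to the loss as the six background voxels, which is the intended effect on small structures.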
Results: The mean Dice scores for the ROIs were: brainstem, 0.85; chiasm, 0.59; mandible, 0.93; left optic nerve, 0.64; right optic nerve, 0.65; left parotid, 0.84; right parotid, 0.83. The mean surface distance was 1.5 mm and the mean 95% Hausdorff distance was 4.23 mm. As the mandible is the only bony structure with strong image contrast, the model performed best on it. Smaller structures such as the chiasm and optic nerves had lower Dice scores; however, the small distance errors showed that the model could still accurately locate these ROIs.
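The evaluation metrics reported above are standard and can be sketched directly. The following is a minimal numpy illustration of the Dice score and a brute-force 95% Hausdorff distance on point sets, not the evaluation code used in the study; function names and the tie-breaking convention for empty masks are assumptions.

```python
import numpy as np

def dice_score(pred, gt):
    """Dice similarity coefficient between two binary masks:
    2 * |A & B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: treated as perfect agreement (convention)
    return 2.0 * np.logical_and(pred, gt).sum() / denom

def hd95(pred_pts, gt_pts):
    """Symmetric 95th-percentile Hausdorff distance between two sets of
    surface-voxel coordinates, computed by brute force (fine for small sets)."""
    d = np.linalg.norm(pred_pts[:, None, :] - gt_pts[None, :, :], axis=-1)
    d_ab = d.min(axis=1)  # nearest-neighbor distances, pred -> gt
    d_ba = d.min(axis=0)  # nearest-neighbor distances, gt -> pred
    return max(np.percentile(d_ab, 95), np.percentile(d_ba, 95))
```

Taking the 95th percentile rather than the maximum makes the metric robust to a few outlier surface voxels, which is why HD95 is preferred over the plain Hausdorff distance for contour evaluation.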
Conclusion: A 3D deep convolutional neural network was developed and validated for segmentation of head and neck organs-at-risk. The results are a promising step toward segmentation with minimal or no human interaction.
Image Processing, Segmentation