Room: ePoster Forums
Purpose: Deep learning models for auto-segmentation of regions in CT images are usually trained separately for target volumes and OARs, so no high-level information (context, relative position, shape, etc.) is exploited in the process. In this paper, we propose independently training multiple stacked FCN networks in which the output of the front-end network, encoded via an alpha channel or color-channel coding, is fed as input to the subsequent network to improve auto-segmentation accuracy.
Methods: Stacked FCNs containing several stages of FCN modules were trained in Caffe. Instead of composing a separate input matrix, we directly superimpose the output of the previous network onto the original input CT image using an alpha overlay or color-channel information. The intermediate information passed through the stacked network encodes both coarse-to-fine inclusion relationships and exclusion relationships (e.g., the relationship between the CTV and organs). Fifty cervical cancer patients who received IMRT at the West China Hospital Cancer Center from January 2017 to May 2018 were enrolled to create 3D CT scan datasets with delineations of the CTV and OARs; 4/5 of the data were used for training and 1/5 for validation. The Dice similarity coefficient (DSC) was used to quantitatively assess segmentation accuracy.
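The alpha-overlay fusion described above could be sketched as follows. This is a minimal illustrative example, not the authors' implementation: the function name, the red color coding, and the assumption that the CT slice is normalized to [0, 1] are all choices made here for clarity.

```python
import numpy as np

def fuse_prediction(ct_slice, mask, color=(1.0, 0.0, 0.0), alpha=0.3):
    """Alpha-blend a color-coded binary mask from the previous FCN stage
    onto a grayscale CT slice, producing the RGB input of the next stage.

    ct_slice: (H, W) float array, assumed normalized to [0, 1]
    mask:     (H, W) binary array (previous stage's segmentation output)
    """
    rgb = np.repeat(ct_slice[..., None], 3, axis=2)   # grayscale -> RGB
    overlay = np.zeros_like(rgb)
    overlay[mask.astype(bool)] = color                # color-code the mask
    # Blend only inside the mask; leave the rest of the CT image untouched
    inside = mask[..., None].astype(bool)
    return np.where(inside, (1 - alpha) * rgb + alpha * overlay, rgb)

# Usage: fuse a toy 4x4 "CT slice" with a mask covering one pixel
ct = np.full((4, 4), 0.5)
m = np.zeros((4, 4))
m[0, 0] = 1
fused = fuse_prediction(ct, m, alpha=0.3)
```

Pixels inside the mask become a weighted mix of the CT intensity and the mask color (weights 1-alpha and alpha), while pixels outside the mask retain the original CT values, so the next-stage network sees both the raw image and the previous stage's prediction in one input.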
Results: Based on evaluation against the OARs, the average validation DSC of the auto-delineation for the small intestine was 88.44% with no fusing, 88.47% with fusing (alpha=0.3, color, without small intestine), and 93.68% with fusing (alpha=0.3, color, with small intestine), respectively. Meanwhile, based on evaluation against the OARs, the final DSC of the auto-delineation for the CTV improved by nearly 1% with fusing (alpha=0.3, color).
Conclusion: Our results show that known high-level context information can be injected into a deep learning network to improve auto-segmentation accuracy.
Contour Extraction, Image Processing