Room: Exhibit Hall | Forum 2
Purpose: Manual liver segmentation is time-consuming and tedious. The goal of this work is to develop an accurate and efficient automated liver segmentation algorithm.
Methods: An automatic liver segmentation model based on a 3D anisotropic convolutional neural network (CNN) was developed. The network is an encoder-decoder CNN built on dilated convolutions, residual connections, and an Atrous Spatial Pyramid Pooling (ASPP) module, and was applied to the liver segmentation task. After segmenting the CT data on the axial, coronal, and sagittal views, we integrated the three results with the anisotropic network to generate the final binary segmentation. To prevent the randomly initialized parameters from destroying the pretrained parameters during training, we proposed a Paced Transfer Learning (PTL) algorithm. Unlike conventional training schemes, PTL trains different convolutional layers in stages: in the initialization stage, the parameters transferred from the pretrained model are frozen and only the randomly initialized parameters are trained; once model performance stabilizes and the model approaches a local optimum, all parameters are released and trained jointly. We evaluated the model on the public Liver Tumor Segmentation Challenge (LiTS) dataset, using standard segmentation metrics such as mean Intersection over Union (mIoU) and the Dice coefficient.
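The staged schedule of Paced Transfer Learning can be sketched with a toy example: pretrained parameters stay frozen while the randomly initialized ones train, and all parameters are released once the first stage finishes. The model below is a hypothetical one-dimensional linear fit standing in for the actual 3D CNN; the parameter names, learning rate, and step counts are illustrative assumptions, not details from the abstract.

```python
def sgd_step(params, grads, frozen, lr=0.1):
    """One gradient step, updating only parameters not listed in `frozen`."""
    return {k: (v if k in frozen else v - lr * grads[k]) for k, v in params.items()}

def grads_mse(params, xs, ys):
    """Gradients of mean squared error for the toy model y ~ w*x + b."""
    n = len(xs)
    gw = sum(2 * (params["w"] * x + params["b"] - y) * x for x, y in zip(xs, ys)) / n
    gb = sum(2 * (params["w"] * x + params["b"] - y) for x, y in zip(xs, ys)) / n
    return {"w": gw, "b": gb}

def paced_transfer_learning(xs, ys, pretrained_w=1.0, stage1_steps=20, stage2_steps=20):
    # "w" plays the role of a pretrained (transferred) layer,
    # "b" the role of a randomly initialized new layer.
    params = {"w": pretrained_w, "b": 0.0}
    # Stage 1: the pretrained parameter is frozen; only the new one trains,
    # so random initialization cannot disturb the transferred weights.
    for _ in range(stage1_steps):
        params = sgd_step(params, grads_mse(params, xs, ys), frozen={"w"})
    # Stage 2: all parameters are released and trained jointly.
    for _ in range(stage2_steps):
        params = sgd_step(params, grads_mse(params, xs, ys), frozen=set())
    return params
```

In a deep-learning framework the same schedule would correspond to disabling gradients for the transferred layers in stage 1 and re-enabling them in stage 2.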
Results: The proposed algorithm achieved a Dice score of 0.951 and an mIoU of 0.945, surpassing the other algorithms reported in the published literature and suggesting that it can effectively segment the liver from CT images with state-of-the-art performance.
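For readers unfamiliar with the two reported metrics, a minimal sketch of how Dice and mIoU are computed on binary masks follows. The flat-list mask format and the convention of averaging foreground and background IoU for the binary mIoU are assumptions for illustration, not details from the abstract.

```python
def dice_and_miou(pred, truth):
    """Dice coefficient and mean IoU for flat binary masks (lists of 0/1).

    Assumes both foreground and background occur in the union of the masks,
    so no denominator is zero. mIoU here averages the IoU of the foreground
    (liver) and background classes, a common binary-segmentation convention.
    """
    tp = sum(1 for p, t in zip(pred, truth) if p and t)          # true positives
    fp = sum(1 for p, t in zip(pred, truth) if p and not t)      # false positives
    fn = sum(1 for p, t in zip(pred, truth) if not p and t)      # false negatives
    tn = sum(1 for p, t in zip(pred, truth) if not p and not t)  # true negatives
    dice = 2 * tp / (2 * tp + fp + fn)
    iou_fg = tp / (tp + fp + fn)
    iou_bg = tn / (tn + fp + fn)
    return dice, (iou_fg + iou_bg) / 2
```

A perfect prediction yields 1.0 for both metrics; Dice weights true positives more heavily than IoU, so it is always at least as large as the foreground IoU.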
Conclusion: The 3D anisotropic deep CNN model achieved excellent performance for automated liver segmentation on CT. The proposed model could also be applied to other imaging modalities and anatomic sites.
Funding Support, Disclosures, and Conflict of Interest: This work was supported by the National Natural Science Foundation of China (Grant No. 61702414) and the special funds for key disciplines in Shaanxi universities and colleges.