
A Method to Improve Organ Segmentation Between Medical Centers Using a Small Amount of Training Data

K Men1, J Zhu1, X Chen1, Y Yang2, J Zhang2, J Yi1, M Chen2, J Dai1*, 1. National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences, Beijing, China 2. Zhejiang Cancer Hospital, University of Chinese Academy of Sciences, Hangzhou, China

Presentations

(Sunday, 7/12/2020)   [Eastern Time (GMT-4)]

Room: AAPM ePoster Library

Purpose: Convolutional neural networks (CNNs) offer a promising approach to automating organ segmentation in radiotherapy. However, variations in imaging data between medical centers mean that a CNN model trained at one center may not generalize well to others. Here, we investigated whether transfer learning could improve the generalizability of a CNN segmentation model.

Methods: Patient data included 300 cases (S_Train) from one institution (the source center) and 60 cases from another (the target center); the target-center cases were divided into a training set of 50 cases (T_Train) and a test set of 10 cases (T_Test). A CNN with 103 convolutional layers was trained to segment the parotid gland in computed tomography images. We first trained Model_S and Model_T from scratch with the datasets S_Train and T_Train, respectively. Transfer learning was then used to train Model_ST by fine-tuning Model_S with images from T_Train. We also investigated the effect of the number of re-trained layers and the number of training cases on the performance of Model_ST.
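
As a rough illustration of this fine-tuning step, the sketch below uses PyTorch with a small placeholder network; the architecture (SegCNN), checkpoint name (model_s.pt), hyperparameters, and dummy tensors are assumptions for illustration, not the authors' implementation of the 103-layer model.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical stand-in for the segmentation CNN; the abstract's model
# has 103 convolutional layers, which is not reproduced here.
class SegCNN(nn.Module):
    def __init__(self, in_ch=1, n_classes=2, width=16, depth=6):
        super().__init__()
        layers, ch = [], in_ch
        for _ in range(depth):
            layers += [nn.Conv2d(ch, width, 3, padding=1), nn.ReLU(inplace=True)]
            ch = width
        self.features = nn.Sequential(*layers)
        self.head = nn.Conv2d(ch, n_classes, 1)  # per-pixel class scores

    def forward(self, x):
        return self.head(self.features(x))

# Model_S: in practice, weights trained on S_Train would be loaded here, e.g.
# model.load_state_dict(torch.load("model_s.pt"))
model = SegCNN()

# Fine-tune on the target-center set (T_Train); the tensors below are dummies
# standing in for CT slices and parotid-gland masks.
images = torch.randn(8, 1, 64, 64)
masks = torch.randint(0, 2, (8, 64, 64))
loader = DataLoader(TensorDataset(images, masks), batch_size=4, shuffle=True)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # small LR for fine-tuning

model.train()
for epoch in range(2):
    for x, y in loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
```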

Results: Model_S achieved a Dice similarity coefficient (DSC) of 0.855 ± 0.031 when applied to data from the source center. When Model_S, Model_T, and Model_ST were applied to the T_Test dataset, the DSCs were 0.816 ± 0.037, 0.829 ± 0.036, and 0.853 ± 0.033, respectively. Transfer learning using only 10 training cases achieved performance comparable to that of Model_T trained from scratch with all 50 cases (DSC, 0.825 ± 0.033 vs. 0.829 ± 0.036). The performance of Model_ST improved as the number of re-trained layers and the number of training cases increased. For the network with 11 re-trained layers, 40 to 50 cases were enough to achieve good results, and training time was reduced by up to 33%.
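
The re-trained-layer experiment can be mimicked by freezing all but the last k layers before fine-tuning. The snippet below is a minimal sketch of that idea, assuming a generic stack of convolutions and a hypothetical helper set_retrained_layers; it is not the authors' code.

```python
import torch
import torch.nn as nn

# Hypothetical small stand-in network; the abstract's model has 103 conv layers.
convs = [nn.Conv2d(1 if i == 0 else 16, 16, 3, padding=1) for i in range(20)]
model = nn.Sequential(*convs)

def set_retrained_layers(net: nn.Sequential, k: int) -> None:
    """Freeze all layers except the last k, which remain trainable."""
    layers = list(net)
    for layer in layers[:-k]:
        for p in layer.parameters():
            p.requires_grad = False
    for layer in layers[-k:]:
        for p in layer.parameters():
            p.requires_grad = True

# Example: re-train only the last 11 layers, the setting highlighted above.
set_retrained_layers(model, 11)
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(trainable, lr=1e-4)  # optimize only unfrozen layers
print(sum(p.numel() for p in trainable), "trainable parameters")
```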

Conclusion: Transfer learning can improve segmentation by adapting a previously trained CNN model to a new image domain, greatly reducing training time and sparing physicians from labeling a large number of contours.

Funding Support, Disclosures, and Conflict of Interest: This work was supported by the Beijing Hope Run Special Fund of Cancer Foundation of China (LC2019B06, LC2018A14), the Beijing Municipal Science & Technology Commission (Z181100001918002), and the National Natural Science Foundation of China (11975313).

Keywords

Segmentation

Taxonomy

IM/TH- Image Segmentation Techniques: Segmentation Method - other
