Purpose: To develop a novel deep learning-based technique for tumor and multi-organ segmentation in CT for radiation therapy after breast-conserving surgery
Methods: CT images and RT structure files (RS) of 400 patients who underwent radiation therapy after breast-conserving surgery were obtained as pairs under IRB approval. Data from 320 patients were used in the training stage, and data from the remaining 80 patients were used in the test stage. Each 3D segmentation map (target) was generated from contours of the breast tumor and three organs at risk (OARs), namely the left lung, right lung, and heart, acquired from the RS files and independently reviewed by four radiation oncologists. For the auto-segmentation model, we implemented a published 3D convolutional neural network architecture (SCNAS-Net) optimized by Scalable Neural Architecture Search (SCNAS), a neural architecture search (NAS) framework for automated machine learning (AutoML). In the training stage, data from 64 of the 320 training patients were randomly set aside as a validation set. After training the SCNAS-Net, Dice coefficients (DICE) between the output of the trained SCNAS-Net and the target were calculated to evaluate segmentation performance (a minimal sketch of the split and evaluation follows below).
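The abstract describes the protocol but not its implementation. As a minimal sketch, assuming integer-labeled 3D volumes in which label i+1 marks the i-th structure, the random 64-patient validation split and the per-structure DICE evaluation described above could be written as follows; all names (dice_coefficient, evaluate_case, STRUCTURES) and the fixed seed are illustrative assumptions, not the authors' code.

import numpy as np

# Random 64-patient validation split out of the 320 training patients.
rng = np.random.default_rng(0)  # seed is an assumption, chosen only for reproducibility
indices = rng.permutation(320)
val_ids, train_ids = indices[:64], indices[64:]

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice coefficient 2*|A ∩ B| / (|A| + |B|) between two binary 3D masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)

# The four structures evaluated in the abstract.
STRUCTURES = ["breast_tumor", "left_lung", "right_lung", "heart"]

def evaluate_case(pred_map: np.ndarray, target_map: np.ndarray) -> dict:
    """Per-structure DICE for one patient; label i+1 marks STRUCTURES[i]."""
    return {name: dice_coefficient(pred_map == i + 1, target_map == i + 1)
            for i, name in enumerate(STRUCTURES)}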
Results: The trained SCNAS-Net showed strong segmentation performance for all regions. DICEs for the breast tumor, left lung, right lung, and heart were 0.8327, 0.9771, 0.981, and 0.9351, respectively. Notably, the DICE for the breast tumor remained high despite the difficulty of segmenting it, as the shape, volume, and location of the breast vary more across patients than those of the other organs.
Conclusion: A novel tumor and multi-organ segmentation technique in CT was successfully developed. It could significantly reduce the time needed for manual contouring of the tumor and OARs, which is one of the key steps in radiation therapy planning. This study indicates the potential for the proposed SCNAS-Net to be applied in the clinic as an auto-segmentation system in the near future.
Funding Support, Disclosures, and Conflict of Interest: This work was supported by a National Research Foundation of Korea (NRF) grant funded by the Korean Government (Ministry of Science and ICT) under Grant NRF-2019R1C1C1008562.