Room: Track 2
Purpose: To develop a novel deep learning-based automatic lung tumor segmentation method that utilizes both CT and PET images to reduce interobserver variability and improve clinical efficiency in radiotherapy planning.
Methods: Our segmentation network was constructed from 2D U-Nets. In two parallel convolution arms, features extracted separately from the planning CT and PET images at multiple resolution levels were concatenated and fed into a single deconvolution path at the corresponding resolution. The network produced a tumor probability map, which was thresholded to obtain the tumor mask. The CT and PET images were rigidly registered during preprocessing. The network was trained and validated on a dataset of 213 lung cancer patients (116 SBRT and 97 conventionally fractionated cases) with manual physician contours as the ground truth. The dataset was split into training/validation/test sets in a 3:1:1 ratio. Our method was compared, via the Dice similarity coefficient (DSC), with a previous study that proposed two independent U-Nets followed by a feature fusion convolutional network taking the element-wise sum of the U-Net outputs as its input.
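The post-processing and evaluation steps described above (thresholding the probability map, then scoring against the manual contour with the DSC) can be sketched as follows. This is a minimal illustration, not the authors' pipeline: the toy arrays, the 0.5 threshold, and the function names are assumptions; only the DSC formula, 2|A∩B| / (|A| + |B|), is standard.

```python
# Sketch: binarize a tumor probability map, then score it with the
# Dice similarity coefficient (DSC = 2|A∩B| / (|A| + |B|)).
# Toy data and the 0.5 threshold are illustrative assumptions.

def threshold_mask(prob_map, threshold=0.5):
    """Binarize a flat probability map into a 0/1 tumor mask."""
    return [1 if p >= threshold else 0 for p in prob_map]

def dice_coefficient(pred, truth):
    """DSC between two binary masks given as flat lists of 0/1."""
    intersection = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    # Convention: two empty masks agree perfectly.
    return 1.0 if total == 0 else 2.0 * intersection / total

# Toy example: an 8-voxel probability map vs. a manual contour.
prob_map = [0.9, 0.8, 0.6, 0.4, 0.2, 0.7, 0.1, 0.3]
manual   = [1,   1,   1,   0,   0,   0,   0,   0]
pred = threshold_mask(prob_map)   # -> [1, 1, 1, 0, 0, 1, 0, 0]
print(round(dice_coefficient(pred, manual), 3))   # prints 0.857
```

In practice the masks would be 2D/3D image arrays rather than flat lists, but the metric reduces to the same overlap count.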
Results: Our initial model achieved a mean DSC of 0.772 ± 0.096, higher than the previously proposed feature fusion network (DSC = 0.756 ± 0.075). In particular, we observed better detection of multiple masses and more accurate segmentation of tumors with significant PET heterogeneity. Motivated by the low DSC observed in some cases with small tumors, we stratified the dataset by tumor volume. Small tumors below 25 mL (typically treated with SBRT) benefited from being trained and validated separately from the large tumors; stratification improved the overall DSC to 0.809 ± 0.065.
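The volume-based stratification described above can be sketched as a simple split of the patient cohort at the 25 mL cutoff quoted in the text, with each stratum then trained and validated separately. The case records and field names below are hypothetical; only the cutoff comes from the abstract.

```python
# Sketch: stratify cases by gross tumor volume before training separate
# models for small (SBRT-sized) and large tumors. Records and field
# names are illustrative assumptions; 25 mL is the cutoff from the text.

SMALL_TUMOR_CUTOFF_ML = 25.0

def stratify_by_volume(cases, cutoff_ml=SMALL_TUMOR_CUTOFF_ML):
    """Split cases into small (< cutoff) and large (>= cutoff) cohorts."""
    small = [c for c in cases if c["volume_ml"] < cutoff_ml]
    large = [c for c in cases if c["volume_ml"] >= cutoff_ml]
    return small, large

cases = [
    {"id": "pt01", "volume_ml": 4.2},   # typical SBRT-sized lesion
    {"id": "pt02", "volume_ml": 61.0},
    {"id": "pt03", "volume_ml": 18.7},
]
small, large = stratify_by_volume(cases)
print([c["id"] for c in small])   # prints ['pt01', 'pt03']
print([c["id"] for c in large])   # prints ['pt02']
```

Each cohort would then get its own 3:1:1 train/validation/test split, keeping small and large tumors from mixing across strata.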
Conclusion: By combining PET and CT image information, the proposed multi-modality deep learning network contours lung tumors automatically and more accurately than the previous method. Dataset stratification can further improve its performance.
Funding Support, Disclosures, and Conflict of Interest: This work was not supported by any funding. R Mahon and E Weiss receive research grant support from Varian and the NIH. E Weiss receives royalties from UpToDate. S Wang and L Yuan have no disclosures.