
Cross-Modality (MR-CT) Educed Deep Learning (CMEDL) for Segmenting Lung Tumors On CBCT

H Veeraraghavan*, J Jiang, P Zhang, A Rimner, J Deasy, Memorial Sloan Kettering Cancer Center, New York, NY


(Monday, 7/13/2020) 4:30 PM - 5:30 PM [Eastern Time (GMT-4)]

Room: Track 2

Purpose: To develop automated segmentation of non-small cell lung cancers (NSCLC) on CBCT images by leveraging the superior soft-tissue contrast of MRI, without requiring access to paired multi-modality datasets.
Method: We developed and validated a cross-modality deep learning approach that distills high soft-tissue-contrast information from MRI to improve CBCT segmentation. Our approach combines a generative adversarial network (GAN) with CBCT and MRI segmentation networks to perform unpaired cross-modality distillation. All networks are trained end-to-end, so the segmentation and image-to-image (I2I) translation losses jointly regularize I2I translation, cross-modality distillation (MRI to CBCT), and segmentation. GAN training is regularized by a contextual similarity loss, which computes errors in features extracted from the whole images (CBCT and transformed MRI), together with structure-specific shape and segmentation feature losses. Cross-modality distillation is implemented by matching the high-level segmentation features computed from the last two layers of the MRI and CBCT segmentation (U-Net) networks. Independent training (CBCT: 10 from TCIA and 33 from our institution; T2w MR: 81 from our institution), validation (CBCT: 10), and testing (CBCT: 10) sets were used in the analysis. Accuracy was evaluated against expert delineations using the Dice similarity coefficient (DSC) and the Hausdorff distance at the 95th percentile (HD95). Performance was compared against CBCT-only segmentation.
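The distillation step described above matches high-level features between the MRI and CBCT segmentation networks. A minimal NumPy sketch of such a feature-matching loss is given below; this is an illustration under the assumption of a mean-squared-error match over corresponding feature maps, not the authors' actual implementation (the function name `distillation_loss` and its interface are hypothetical).

```python
import numpy as np

def distillation_loss(mr_feats, cbct_feats):
    """Hypothetical cross-modality distillation loss:
    mean squared error between corresponding high-level feature maps
    (e.g., from the last two layers) of the MRI and CBCT networks."""
    assert len(mr_feats) == len(cbct_feats)
    loss = 0.0
    for f_mr, f_cbct in zip(mr_feats, cbct_feats):
        # Each pair is assumed to have identical shape (same layer, same size).
        loss += np.mean((f_mr - f_cbct) ** 2)
    return loss / len(mr_feats)
```

In the paper's end-to-end setup, a term of this form would be added to the segmentation and I2I translation losses so that the CBCT network is pulled toward the MRI network's more informative representations.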
Result: Our approach produced significantly more accurate segmentations (DSC 0.71 ± 0.20, HD95 7.53 ± 5.74 mm) than CBCT-only segmentation (DSC 0.64 ± 0.21, HD95 13.32 ± 13.89 mm) on both metrics (P < 0.001) on the test set.
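The two reported metrics can be sketched in NumPy as follows; this is a simplified illustration (HD95 is computed here over all foreground voxels rather than surface points, and isotropic spacing is assumed), not the authors' evaluation code.

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hd95(a, b, spacing=1.0):
    """95th-percentile Hausdorff distance between two non-empty binary
    masks, using all foreground voxels (simplification of a surface-based
    computation) and assuming isotropic voxel spacing."""
    pa = np.argwhere(a).astype(float) * spacing
    pb = np.argwhere(b).astype(float) * spacing
    # Pairwise Euclidean distances between the two point sets.
    d = np.sqrt(((pa[:, None, :] - pb[None, :, :]) ** 2).sum(-1))
    d_ab = d.min(axis=1)  # each point of a to its nearest point of b
    d_ba = d.min(axis=0)  # each point of b to its nearest point of a
    return np.percentile(np.hstack([d_ab, d_ba]), 95)
```

Higher DSC (toward 1) and lower HD95 (toward 0 mm) both indicate closer agreement with the expert delineation.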
Conclusion: Preliminary results show that using cross-modality information learned from the more informative MRI to train CBCT segmentation yields a significant performance improvement over CBCT alone, without requiring simultaneously acquired multi-modality datasets for training or testing.

