
BEST IN PHYSICS (JOINT IMAGING-THERAPY): Deep Learning Mapping of CT to MRI for Longitudinal Tracking of Lung Tumors for MRI-Guided Radiotherapy

J Jiang*, Y Hu, N Tyagi, P Zhang, A Rimner, J Deasy, G Mageras, H Veeraraghavan, Memorial Sloan-Kettering Cancer Center, New York, NY


(Monday, 7/30/2018) 4:30 PM - 6:00 PM

Room: Karl Dean Ballroom C

Purpose: To train a deep neural network to autosegment and longitudinally track lung tumor volume changes in response to stereotactic body radiation therapy (SBRT) from magnetic resonance images (MRI) using limited MRI datasets.

Methods: We developed an innovative deep learning approach to automatically segment and track lung tumor volumes from MRI that (i) employs unsupervised cross-domain adaptation to synthesize a large number of MRIs from CT images of unrelated patients, and (ii) combines the synthesized MRIs with a small number of real MRIs using semi-supervised learning to generate tumor segmentations from MRI. We introduced a novel tumor-aware loss for unsupervised cross-domain adaptation that helps preserve tumors on MRIs synthesized from CT that would otherwise be lost when using state-of-the-art domain adaptation networks. For MRI synthesis, we combined labeled CT images from 377 patients with non-small cell lung cancer, obtained from The Cancer Imaging Archive, with unlabeled T2-weighted (T2w) MRIs from 6 patients scanned before and during treatment (n = 36) at our institution. Semi-supervised tumor segmentation was trained using six labeled pre-treatment T2w MRIs and the synthesized MRIs.
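The abstract does not specify the exact form of the tumor-aware loss. As an illustration only, such a loss is commonly built by adding a tumor-preservation term (e.g., a Dice loss between a tumor mask predicted on the synthesized MRI and the known CT tumor mask) to the standard CycleGAN generator objective. A minimal NumPy sketch, where the function names and the weights `lambda_cycle` and `lambda_tumor` are all assumptions, not the authors' implementation:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss between a predicted and a reference tumor mask."""
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def tumor_aware_generator_loss(adv_loss, cycle_loss,
                               tumor_pred_on_synth, tumor_mask_ct,
                               lambda_cycle=10.0, lambda_tumor=1.0):
    """Illustrative total generator loss: adversarial term + weighted
    cycle-consistency term + tumor-aware term that penalizes losing
    the CT tumor on the synthesized MRI."""
    tumor_term = dice_loss(tumor_pred_on_synth, tumor_mask_ct)
    return adv_loss + lambda_cycle * cycle_loss + lambda_tumor * tumor_term
```

During training, the tumor-aware term would be computed with a segmentation sub-network applied to each synthesized MRI, steering the generator away from solutions that erase the tumor.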

Results: On validation, the state-of-the-art cycle GAN achieved a Dice similarity coefficient (DSC) of 0.66 ± 0.16 and a 95th-percentile Hausdorff distance (HD95) of 11.91 ± 4.44 mm, whereas our method achieved a DSC of 0.80 ± 0.08 and an HD95 of 7.14 ± 4.52 mm. For longitudinally tracking tumor volume changes in patients imaged during radiation therapy, our method showed no significant difference from manual delineation (p = 0.25).
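The two reported metrics are standard; for reference, a generic NumPy sketch of the Dice similarity coefficient for binary masks and the symmetric 95th-percentile Hausdorff distance for boundary point sets (this is not the authors' evaluation code):

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hd95(points_a, points_b):
    """Symmetric 95th-percentile Hausdorff distance between two
    point sets of shape (N, d), e.g. contour points in mm."""
    d = np.linalg.norm(points_a[:, None, :] - points_b[None, :, :], axis=-1)
    return max(np.percentile(d.min(axis=1), 95),
               np.percentile(d.min(axis=0), 95))
```

In practice the boundary points would be extracted from the predicted and manual tumor contours and scaled by the voxel spacing so HD95 is reported in millimeters.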

Conclusion: We introduced an adversarial domain adaptation approach with a structure-specific, tumor-aware loss that generates tumor segmentations from MRI despite extremely limited data by combining information from unrelated CT images. Our approach surpassed state-of-the-art methods in performance, and preliminary results suggest it is feasible to automatically track tumor volumes during radiotherapy.

Funding Support, Disclosures, and Conflict of Interest: This work was supported by Varian Medical Systems.


Segmentation, Radiation Therapy, Pattern Recognition


IM/TH- image segmentation: MRI
