
Cross-Modality (MR-CT) Educed Deep Learning (CMEDL) for Segmentation of Lung Tumors On CT

J Jiang1*, N Tyagi1, Y Hu1, A Rimner1, S Berry1, J Deasy1, H Veeraraghavan1, (1) Memorial Sloan Kettering Cancer Center, New York, NY

Presentations

(Sunday, 7/14/2019) 1:00 PM - 2:00 PM

Room: 303

Purpose: To develop an accurate approach for segmenting non-small cell lung cancers (NSCLC) on CT by leveraging the superior soft-tissue contrast of MRI, without requiring multi-modality (MR-CT) datasets.

Methods: We developed and validated a cross-modality deep learning approach that improves segmentation accuracy on CT by leveraging learned cross-modality priors modeling the soft-tissue anatomical relationships between CT and MRI. These priors augment the information available on CT by incorporating learned MR information. Our approach jointly optimizes cross-modality prior modeling for pseudo-MR (pMR) generation and U-Net-based segmentation that combines CT with pMR. The novel components include a contextual similarity loss for accurately modeling CT-MR tissue relationships from unrelated CT and MR datasets lacking spatial correspondence, and modality integration through direct modality concatenation, weighted concatenation, and indirect combination via modality feature matching. The weighted combination employs a sub-network that determines per-case modality weights based on pMR generation accuracy. Independent training (CT = 377 from TCIA, T2w MR = 81 from an internal dataset), validation (CT = 304), and test (CT = 333) sets were used to evaluate lung tumor segmentation. Accuracy was evaluated against expert delineations using the Dice similarity coefficient (DSC) and the Hausdorff distance at the 95th percentile (HD95).
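The joint design described above can be illustrated with a minimal, hypothetical PyTorch sketch (not the authors' implementation): a CT-to-pMR generator paired with CT and pMR segmentation branches whose deep features are matched, standing in for the "indirect combination through modality feature matching". All class names, layer sizes, and the loss weight lam are illustrative assumptions.

```python
# Hedged sketch of a CMEDL-style training step: pseudo-MR generation plus
# CT/pMR segmentation branches regularized by feature matching.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvBlock(nn.Module):
    """Two 3x3 conv + ReLU layers, the basic U-Net building block."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))
    def forward(self, x):
        return self.net(x)

class TinySegNet(nn.Module):
    """Toy encoder-decoder standing in for the U-Net segmentor."""
    def __init__(self, in_ch=1, n_classes=2):
        super().__init__()
        self.enc1 = ConvBlock(in_ch, 16)
        self.enc2 = ConvBlock(16, 32)
        self.dec = ConvBlock(32 + 16, 16)
        self.head = nn.Conv2d(16, n_classes, 1)
    def forward(self, x):
        e1 = self.enc1(x)                    # high-resolution features
        e2 = self.enc2(F.max_pool2d(e1, 2))  # deeper, low-resolution features
        up = F.interpolate(e2, scale_factor=2, mode='bilinear', align_corners=False)
        d = self.dec(torch.cat([up, e1], dim=1))
        return self.head(d), e2              # logits + deep features

class PseudoMRGenerator(nn.Module):
    """Stand-in for the CT-to-pMR translation network."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(ConvBlock(1, 16), nn.Conv2d(16, 1, 1), nn.Tanh())
    def forward(self, ct):
        return self.net(ct)

def cmedl_step(ct, label, gen, seg_ct, seg_pmr, lam=0.1):
    """Joint loss: segmentation on both branches plus a feature-matching term
    that pulls MR-derived information into the CT branch (indirect combination)."""
    pmr = gen(ct)
    logits_ct, feat_ct = seg_ct(ct)
    logits_pmr, feat_pmr = seg_pmr(pmr)
    seg_loss = F.cross_entropy(logits_ct, label) + F.cross_entropy(logits_pmr, label)
    match_loss = F.mse_loss(feat_ct, feat_pmr.detach())
    return seg_loss + lam * match_loss

# Example forward/backward on a toy batch:
gen, seg_ct, seg_pmr = PseudoMRGenerator(), TinySegNet(), TinySegNet()
ct = torch.randn(2, 1, 64, 64)
lbl = torch.randint(0, 2, (2, 64, 64))
loss = cmedl_step(ct, lbl, gen, seg_ct, seg_pmr)
loss.backward()
```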

Results: On the test set, the indirect modality combination produced significantly more accurate segmentations (DSC 0.72 ± 0.14, HD95 8.22 ± 6.89 mm) than the CT-only model (DSC 0.68 ± 0.17, HD95 9.35 ± 7.08 mm) by both metrics (P < 0.001).
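For completeness, a hedged NumPy/SciPy sketch of the two reported metrics, DSC and HD95, computed on binary masks; the surface extraction and spacing handling are simplified relative to typical evaluation tooling, and the toy masks are purely illustrative.

```python
# Hedged sketch: Dice similarity coefficient (DSC) and 95th-percentile
# Hausdorff distance (HD95) on binary segmentation masks.
import numpy as np
from scipy import ndimage

def dice(pred, gt):
    """DSC = 2|A∩B| / (|A| + |B|) for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0

def hd95(pred, gt, spacing=(1.0, 1.0)):
    """95th-percentile symmetric surface distance (mm) between two masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    # Surface voxels: mask minus its erosion.
    surf_p = pred ^ ndimage.binary_erosion(pred)
    surf_g = gt ^ ndimage.binary_erosion(gt)
    # Distance maps to each surface (in mm via the voxel spacing).
    dt_g = ndimage.distance_transform_edt(~surf_g, sampling=spacing)
    dt_p = ndimage.distance_transform_edt(~surf_p, sampling=spacing)
    dists = np.concatenate([dt_g[surf_p], dt_p[surf_g]])
    return np.percentile(dists, 95)

# Example on toy 2D masks:
gt = np.zeros((64, 64), dtype=np.uint8); gt[20:40, 20:40] = 1
pr = np.zeros((64, 64), dtype=np.uint8); pr[22:42, 21:41] = 1
print(f"DSC={dice(pr, gt):.3f}, HD95={hd95(pr, gt):.2f} mm")
```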

Conclusion: Our novel cross-modality educed segmentation approach showed significant improvement over the CT-only method in segmenting a reasonably large number of lung tumors. Our results show that the low soft-tissue contrast of CT can be overcome through such cross-modality priors, and that more accurate CT segmentations can be achieved despite not having access to multi-modality datasets.

Funding Support, Disclosures, and Conflict of Interest: Sean Berry, Jue Jiang, Harini Veeraraghavan, and Yu-Chi Hu received grants from Varian Medical Systems; Andreas Rimner received funding from Varian Medical Systems; Jue Jiang, Harini Veeraraghavan, and Joseph O Deasy were partially supported by NCI R01 CA198121.

Keywords

Segmentation, Lung

Taxonomy

IM/TH- image segmentation: CT

Contact Email