
Longitudinal Segmentation of Parotid Gland Changes From MRI Through Unsupervised Cross-Modality Deep Learning With Structure-Specific Appearance Constraints

J Jiang*, H Um, Y Hu, N Tyagi, C Wang, N Lee, J Deasy, S Berry, H Veeraraghavan, Memorial Sloan Kettering Cancer Center, New York, NY

Presentations

(Tuesday, 7/16/2019) 1:45 PM - 3:45 PM

Room: Stars at Night Ballroom 2-3

Purpose: To develop an automatic method for segmenting and tracking parotid gland volumes from MRI during radiation therapy, without requiring expert-segmented MR training sets.

Methods: We developed and validated a new approach for automatic volumetric segmentation and tracking of parotid gland volumes from T2-weighted fat-suppressed (T2wFS) MRI. Our approach learns to segment parotid glands on MRI without any expert-segmented MRI, using only clinical segmentations on CT datasets from unrelated patients. We introduced a new loss, the Structure Appearance Constrained Segmentation (SACS) loss, which overcomes a limitation of the frequently used CycleGAN method by preserving both the spatial structure and the textural appearance of the structures of interest on the adversarially translated images. This is achieved by iteratively focusing the discriminator on regions corresponding to the structures of interest, indicated by a voxel-wise structure probability map. We trained our approach on 48 internal T2wFS MRI datasets and 48 CT datasets from the external PDDCA, and performed independent validation and testing on 30 and 27 patients, respectively, each with 3 to 8 weekly MR scans acquired during treatment. Segmentation accuracy was evaluated against clinical delineations using the Dice similarity coefficient (DSC) and the 95th-percentile Hausdorff distance (HD95). We benchmarked the performance of our method against CycleGAN.
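As a concrete illustration of the structure-focusing idea described above, the following is a minimal PyTorch sketch of a structure-weighted adversarial (least-squares GAN) loss in which the discriminator's per-patch output is re-weighted by the voxel-wise structure probability map. The function name sacs_adversarial_loss, the tensor shapes, and the 0.1 background floor are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F


def sacs_adversarial_loss(disc_logits: torch.Tensor,
                          struct_prob: torch.Tensor,
                          target_is_real: bool) -> torch.Tensor:
    """Structure-weighted least-squares GAN loss (illustrative sketch only).

    disc_logits: patch-wise discriminator outputs, shape (B, 1, H, W).
    struct_prob: voxel-wise probability of the structure of interest on the
                 translated image, shape (B, 1, h, w), values in [0, 1].
    """
    target = (torch.ones_like(disc_logits) if target_is_real
              else torch.zeros_like(disc_logits))
    # Per-voxel least-squares GAN penalty.
    per_voxel = (disc_logits - target) ** 2
    # Match the structure probability map to the discriminator output resolution.
    weights = F.interpolate(struct_prob, size=disc_logits.shape[-2:],
                            mode="bilinear", align_corners=False)
    # Emphasize voxels likely to belong to the parotid glands; keep a small
    # uniform floor so the rest of the image is not ignored entirely.
    weights = 0.1 + 0.9 * weights
    return (weights * per_voxel).sum() / weights.sum()
```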

Results: Our method produced significantly more accurate segmentations than CycleGAN (P < 0.001), with DSC 0.78±0.07 and HD95 3.39±1.08 mm on the test set. Volume changes relative to the baseline volume were comparable to those measured from clinical delineations (P = 0.0427).
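For reference, the two reported metrics can be computed from 3-D binary masks as sketched below with NumPy/SciPy; the function names, the surface extraction via binary erosion, and the voxel-spacing argument are illustrative assumptions rather than the evaluation code used in this work.

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt


def dice(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    inter = np.logical_and(pred, ref).sum()
    return 2.0 * inter / (pred.sum() + ref.sum())


def hd95(pred: np.ndarray, ref: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """95th-percentile symmetric Hausdorff distance (mm) between mask surfaces."""
    def surface(mask):
        # Surface voxels = voxels removed by a one-voxel erosion.
        return np.logical_xor(mask, binary_erosion(mask))

    def surface_distances(a, b):
        # Distance from every surface voxel of a to the nearest surface voxel of b.
        dist_to_b = distance_transform_edt(~surface(b), sampling=spacing)
        return dist_to_b[surface(a)]

    pred, ref = pred.astype(bool), ref.astype(bool)
    d = np.concatenate([surface_distances(pred, ref),
                        surface_distances(ref, pred)])
    return float(np.percentile(d, 95))
```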

Conclusion: We developed a new approach for unsupervised training to segment and track parotid gland volumes during radiation therapy. Our approach shows the potential to achieve reliable segmentations without needing expert-segmented datasets on every imaging modality for training deep networks.

Funding Support, Disclosures, and Conflict of Interest: Sean Berry, Jue Jiang, Harini Veeraraghavan, and Yu-Chi Hu received grants from Varian Medical Systems; Jue Jiang, Harini Veeraraghavan, and Joseph O Deasy were partially supported by NCI R01 CA198121.

Keywords

Segmentation, MRI

Taxonomy

IM- MRI : Multi-modality MRI-CT
