
Prediction of Gleason Grade Group of Prostate Cancer On Multiparametric MRI Using Deep Learning, Transfer Learning, Representation Learning and Prototype Network

W Zong*, M Pantelic, E Mohamed, N Wen, Henry Ford Health System, Detroit, MI

Presentations

(Wednesday, 7/15/2020) 4:30 PM - 5:30 PM [Eastern Time (GMT-4)]

Room: Track 1

Purpose: We aim to develop a deep learning (DL) algorithm to predict Gleason Grade Group (GG) using multiparametric magnetic resonance images (mp-MRI).

Methods: We trained on cohort A, consisting of 201 patients and 320 lesions from the SPIE-AAPM-NCI PROSTATEx Challenge; 98 of these patients, with 110 lesions, had GG available from biopsy. The number of lesions in each GG subgroup was 36, 39, 20, 8, and 7 for GG 1-5, respectively. Three b-values were acquired (50, 400, and 800 s/mm2). Image rotation and scaling were used to increase the sample size and re-balance the number of lesions across GG subgroups. Cohort B, from our own institution, was used to test the model's robustness on an independent dataset and consisted of 40 patients and 99 lesion patches (45, 22, 9, 8, and 15 for GG 1-5). Three b-values (0, 1,000, and 1,500 s/mm2) were acquired for this cohort.
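For concreteness, the following is a minimal sketch of the kind of rotation/scaling augmentation described above; the angle and scale ranges, the per-group target count, and the patch handling are illustrative assumptions, not the authors' implementation.

```python
# Sketch: oversample minority GG subgroups with random rotations/scalings.
# Ranges and target counts below are assumptions for illustration.
import numpy as np
from scipy.ndimage import rotate, zoom

def augment_patch(patch: np.ndarray, angle: float, scale: float) -> np.ndarray:
    """Rotate a 2D lesion patch in-plane and rescale it, keeping the original shape."""
    rotated = rotate(patch, angle, reshape=False, mode="nearest")
    zoomed = zoom(rotated, scale, mode="nearest")
    # Crop or zero-pad back to the original patch size (top-left anchored for simplicity).
    out = np.zeros_like(patch)
    h = min(patch.shape[0], zoomed.shape[0])
    w = min(patch.shape[1], zoomed.shape[1])
    out[:h, :w] = zoomed[:h, :w]
    return out

def rebalance(patches_by_gg: dict[int, list[np.ndarray]], target: int = 200) -> dict[int, list[np.ndarray]]:
    """Augment each GG subgroup until it reaches a common target count."""
    rng = np.random.default_rng(0)
    balanced = {}
    for gg, patches in patches_by_gg.items():
        augmented = list(patches)
        while len(augmented) < target:
            src = patches[rng.integers(len(patches))]
            angle = rng.uniform(-15, 15)   # degrees; assumed range
            scale = rng.uniform(0.9, 1.1)  # assumed range
            augmented.append(augment_patch(src, angle, scale))
        balanced[gg] = augmented
    return balanced
```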
A ResNet-50-based classifier was pretrained on a randomly selected 90% of the patients in cohort A to separate GG <=3 from GG 4-5. Lesion patches from both cohorts were then embedded using features extracted from the last convolutional layer of this classifier. A prototype for each GG was constructed by averaging these features over all lesions belonging to that group. The GGs for patients in cohort B were then predicted based on similarity to each GG prototype.
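The prototype step can be sketched as follows, assuming a PyTorch ResNet-50 whose classification head is replaced with an identity so the pooled last-convolutional-layer features (2048-dimensional) are exposed; the cosine similarity metric, tensor shapes, and function names are assumptions for illustration rather than the authors' code.

```python
# Sketch: build per-GG prototypes from ResNet-50 embeddings and classify by similarity.
import torch
import torch.nn.functional as F
from torchvision.models import resnet50

backbone = resnet50(weights=None)   # in practice: the classifier pretrained on cohort A
backbone.fc = torch.nn.Identity()   # expose the pooled last-conv features (2048-d)
backbone.eval()

@torch.no_grad()
def embed(patches: torch.Tensor) -> torch.Tensor:
    """Embed lesion patches (N, 3, H, W), e.g. three b-value channels, into 2048-d vectors."""
    return backbone(patches)

@torch.no_grad()
def build_prototypes(features: torch.Tensor, labels: torch.Tensor) -> dict[int, torch.Tensor]:
    """Average the embeddings of all cohort-A lesions within each GG subgroup."""
    return {int(g): features[labels == g].mean(dim=0) for g in labels.unique()}

@torch.no_grad()
def predict_gg(feature: torch.Tensor, prototypes: dict[int, torch.Tensor]) -> int:
    """Assign the GG whose prototype is most similar (cosine) to the lesion embedding."""
    sims = {g: F.cosine_similarity(feature, p, dim=0).item() for g, p in prototypes.items()}
    return max(sims, key=sims.get)
```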

Results: The classification accuracy on the validation set for separating GG <=3 from GG 4-5 was 90%. For cohort B, the sensitivity and positive predictive value were (0.70, 0.81) for GG 1-3 and (0.60, 0.36) for GG 4-5, respectively.

Conclusion: This work designed a DL architecture for GG prediction from mp-MRI that accommodates unbalanced training sample sizes. Patients' images were represented by features extracted from a deep learning classification model pretrained on a closely related task with more labelled data.

Funding Support, Disclosures, and Conflict of Interest: The work was supported by a Research Scholar Grant, RSG-15-137-01-CCE from the American Cancer Society.

Keywords

Not Applicable / None Entered.

Taxonomy

IM/TH- Image Analysis (Single Modality or Multi-Modality): Computer-aided decision support systems (detection, diagnosis, risk prediction, staging, treatment response assessment/monitoring, prognosis prediction)

Contact Email