
Improved Glioblastoma Survival Prediction Using Deep Learning-Based Radiomic Features From Preoperative Multimodal MR Images

J Fu*, K Singhrao , X Zhong , X Qi , Y Yang , D Ruan , J Lewis , UCLA School of Medicine, Los Angeles, CA


(Monday, 7/15/2019) 1:45 PM - 3:45 PM

Room: Stars at Night Ballroom 2-3

Purpose: To compare the performance of handcrafted and deep learning-based radiomic features extracted from preoperative multimodal MR images for survival prediction in glioblastoma multiforme (GBM) patients.

Methods: Data from 163 GBM patients with overall survival (OS) information were obtained from the BraTS 2018 dataset. Each patient had four preoperative MR scans with tumor contours approved by board-certified neuroradiologists. The cohort was randomly split into a training set of 120 patients and a testing set of 43 patients. Handcrafted features were extracted using conventional computer-aided approaches, and deep learning-based features were extracted using a pretrained convolutional neural network (VGG-19). Cox proportional hazards models with least absolute shrinkage and selection operator (LASSO) penalties were constructed on the training patients under a ten-fold cross-validation protocol. A radiomic signature, in the form of a linear combination of selected features, was generated for each feature group. Model performance was evaluated by computing the concordance index (C-index) on the testing set. A cutoff point on the signature score, derived from the training set, was used to stratify testing patients into high-risk and low-risk groups. Kaplan-Meier survival analysis was used to evaluate the statistical significance of OS differences between the two risk groups.
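The abstract does not include the authors' evaluation code; as an illustration only, the concordance index used to score each signature can be sketched in pure Python as Harrell's C-index, which counts the fraction of comparable patient pairs whose predicted risk ordering matches the observed survival ordering. All variable names and the toy data below are hypothetical, not from the study.

```python
def c_index(times, events, risks):
    """Harrell's concordance index.

    times  : observed survival times
    events : 1 if death observed, 0 if censored
    risks  : predicted risk scores (higher = shorter expected survival)

    A pair (i, j) is comparable when patient i has an observed event
    and dies before patient j; it is concordant when i's predicted
    risk is higher. Ties in risk score count as half-concordant.
    """
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        if not events[i]:            # censored patients cannot anchor a pair
            continue
        for j in range(n):
            if times[i] < times[j]:  # patient i died earlier -> comparable
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1.0
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable

# Hypothetical toy cohort: risk scores perfectly ordered with survival
times  = [5, 8, 12, 20]        # months
events = [1, 1, 1, 0]          # last patient censored
risks  = [2.1, 1.4, 0.9, 0.2]  # signature scores
print(c_index(times, events, risks))  # -> 1.0 (perfect concordance)
```

A C-index of 0.5 corresponds to random ordering, which is why values such as 0.545 indicate little predictive signal while 0.665 indicates a modest but real one.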

Results: The signature constructed using handcrafted features yielded a C-index of 0.545 (95% CI: 0.451-0.640), while the signature constructed using deep learning-based features achieved a C-index of 0.665 (95% CI: 0.590-0.740). A paired t-test on the C-indices indicated a significant difference (p=0.011). The handcrafted signature did not achieve significant stratification of testing patients (p=0.482, HR=1.242, 95% CI: 0.655-2.353), whereas the deep learning-based signature did (p<0.001, HR=3.260, 95% CI: 1.500-7.085).
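For readers unfamiliar with the stratification step, the Kaplan-Meier curve compared between the two risk groups can be sketched in a few lines: at each distinct event time, the survival estimate is multiplied by the fraction of at-risk patients who survive that time. This is a generic illustration with made-up numbers, not the study's data or code.

```python
def kaplan_meier(times, events):
    """Kaplan-Meier product-limit estimator.

    Returns [(t, S(t)), ...] at each distinct observed event time:
    S(t) is multiplied by (1 - deaths_at_t / at_risk_at_t).
    Censored patients (event = 0) leave the risk set without
    contributing a drop in the curve.
    """
    surv, curve = 1.0, []
    for t in sorted({t for t, e in zip(times, events) if e}):
        at_risk = sum(1 for ti in times if ti >= t)
        deaths = sum(1 for ti, e in zip(times, events) if ti == t and e)
        surv *= 1.0 - deaths / at_risk
        curve.append((t, surv))
    return curve

# Hypothetical risk group: one patient censored at month 10
print(kaplan_meier([6, 10, 14, 18], [1, 0, 1, 1]))
# -> [(6, 0.75), (14, 0.375), (18, 0.0)]
```

In practice the curves of the high-risk and low-risk groups would then be compared with a log-rank test, which yields the p-values and hazard ratios reported above.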

Conclusion: Deep learning-based features extracted from preoperative multimodal MR images achieved better OS prediction performance and patient stratification for GBM patients than handcrafted features. Our study demonstrates the potential of deep learning-based biomarkers for GBM preoperative care.

Funding Support, Disclosures, and Conflict of Interest: Varian master research agreement


Feature Extraction, MRI


IM/TH- Image Analysis (Single modality or Multi-modality): Imaging biomarkers and radiomics
