
Deep Learning for Automated Quantification of Tumor Phenotypes

A Hosny1*, T Coroller1 , P Grossmann1 , C Parmar1 , R Zeleznik1 , A Kumar1 , J Bussink2 , R Gillies3 , R Mak4 , H Aerts1 , (1) Department of Radiation Oncology, Dana-Farber Cancer Institute, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, (2) Radboud University, Nijmegen, The Netherlands, (3) Moffitt Cancer Center, Tampa, FL, (4) Brigham and Women's Hospital, Boston, MA

Presentations

(Wednesday, 8/1/2018) 10:15 AM - 12:15 PM

Room: Room 207

Purpose: Recent advances in artificial intelligence, deep learning in particular, have shown remarkable progress in many fields ranging from speech recognition to autonomous vehicles. Within radiology, deep learning has the potential to automatically learn and quantify radiographic characteristics of underlying tissues. In this study, we investigated the clinical utility of convolutional neural networks (CNNs) in quantifying the radiographic phenotype of non-small cell lung cancer (NSCLC) tumors in computed tomography data.

Methods: We performed an integrative analysis on seven independent cohorts totaling 1213 patients. We identified and independently validated prognostic signatures using 3D CNNs for patients treated with radiotherapy (n=777). We then employed a transfer learning approach to achieve the same for surgery patients (n=404). We benchmarked the CNNs' performance against state-of-the-art machine learning algorithms that rely on engineered features. We also tested the CNNs' stability within test-retest and inter-reader variability scenarios. To gain a better understanding of the characteristics captured by CNNs, we mapped salient image regions according to their contributions to predictions.
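The transfer-learning step described above can be sketched in code. The snippet below is a minimal illustration, not the authors' actual architecture: a small 3D CNN (the abstract does not specify layer counts or sizes, so `Tiny3DCNN`, its filter widths, and the 32-voxel input are all assumptions) whose convolutional feature extractor is frozen while a fresh prediction head is retrained for the second (surgery) cohort.

```python
# A minimal sketch (assumed architecture, NOT the authors' model) of a 3D CNN
# prognostic model with transfer learning: convolutional layers trained on the
# radiotherapy cohort are frozen, and only a new head is fit on the surgery cohort.
import torch
import torch.nn as nn

class Tiny3DCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1),  # single-channel CT sub-volume
            nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),  # global pooling -> fixed-size descriptor
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16, 1),  # logit for 2-year overall survival
        )

    def forward(self, x):
        return self.head(self.features(x))

model = Tiny3DCNN()  # stand-in for a network trained on the radiotherapy cohort

# Transfer learning: freeze the feature extractor, attach a fresh head
# to be fine-tuned on the smaller surgery cohort.
for p in model.features.parameters():
    p.requires_grad = False
model.head = nn.Sequential(nn.Flatten(), nn.Linear(16, 1))

x = torch.randn(2, 1, 32, 32, 32)  # batch of 2 illustrative CT sub-volumes
logits = model(x)
print(logits.shape)  # torch.Size([2, 1])
```

Freezing the shared convolutional filters and retraining only the head is one common way to adapt a model when the second cohort (here n=404) is too small to train a deep network from scratch.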

Results: We found that CNNs have strong prognostic power in predicting 2-year overall survival for patients treated with radiotherapy (AUC=0.70, p=1.13x10^-…) and surgery (AUC=0.71, p=3.02x10^-…). The engineered-feature models demonstrated lower performance than the CNNs for the radiotherapy (AUC=0.66, p=1.91x10^-…) and surgery (AUC=0.58, p=0.275) cohorts, respectively. We also demonstrated the networks' high stability against test-retest (ICC=0.91) and inter-reader (Spearman's rank-order correlation=0.88) variations. Furthermore, we found that areas both within and beyond the tumor volume were informative for predictions.
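The two stability metrics reported above can be reproduced on toy data. The sketch below is illustrative only (the abstract does not state which ICC form was used, so ICC(3,1), the cohort sizes, and the noise levels here are assumptions): test-retest agreement via a two-way mixed-effects intraclass correlation coefficient and inter-reader agreement via Spearman's rank-order correlation.

```python
# Hedged sketch of the stability analyses on synthetic data; the ICC variant
# (ICC(3,1), consistency) and all numbers below are illustrative assumptions.
import numpy as np
from scipy.stats import spearmanr

def icc_3_1(scores):
    """ICC(3,1): two-way mixed effects, single measurement, consistency.

    scores: (n_subjects, n_sessions) array, e.g. test vs. retest predictions.
    """
    n, k = scores.shape
    grand = scores.mean()
    ms_rows = k * ((scores.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # between-subject
    sse = ((scores - scores.mean(axis=1, keepdims=True)
                   - scores.mean(axis=0, keepdims=True) + grand) ** 2).sum()
    ms_err = sse / ((n - 1) * (k - 1))                                  # residual
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

rng = np.random.default_rng(0)

# Test-retest: a model's predictions on two scans of the same 50 patients.
test = rng.normal(size=50)
retest = test + rng.normal(scale=0.3, size=50)  # small perturbation
icc = icc_3_1(np.column_stack([test, retest]))

# Inter-reader: predictions from volumes delineated by two different readers.
reader_a = rng.normal(size=50)
reader_b = reader_a + rng.normal(scale=0.5, size=50)
rho, _ = spearmanr(reader_a, reader_b)

print(round(icc, 2), round(rho, 2))
```

High values of both statistics (near the reported ICC=0.91 and rho=0.88) indicate that the network's predictions change little under repeat imaging or differing tumor delineations.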

Conclusion: Our results highlight the improved performance of deep learning over its traditional counterparts and its robustness against variability. These results argue for the integration of deep learning approaches into clinical practice, given their ability to predict tumor clinical characteristics noninvasively using standard-of-care medical images.

Keywords

Image Analysis, CT, Computer Vision

Taxonomy

IM/TH- Image Analysis (Single modality or Multi-modality): Computer/machine vision
