
Deep Learning in the Classification of Thoracic Radiographic Views to Enable Accurate and Efficient Clinical Workflows

J Crosby*, T Rhines, F Li, H MacMahon, M Giger, University of Chicago, Chicago, IL


(Thursday, 8/2/2018) 10:00 AM - 12:00 PM

Room: Room 207

Purpose: DICOM header information is frequently used to classify image types within the clinical radiological workflow; however, if a header is missing fields or contains incorrect data, it can no longer be reliably used for classification. To expedite image transfer and interpretation, we trained a convolutional neural network to classify chest radiographs into two categories: PA/AP images, and all other associated views from the same study, such as laterals and the soft-tissue and bone images from dual-energy studies.

Methods: A set of 1,909 radiographs acquired between February 2006 and February 2017 was manually sorted into the two categories, yielding 818 AP/PA images and 1,091 other associated images; an attempt to classify the same 1,909 images using DICOM header information left 34% unsorted. Using TensorFlow, a network with the AlexNet architecture was trained from scratch, with 1,242 images (65%) used for training, 382 (20%) for validation, and 287 (15%) for testing.
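A train/validation/test partition of the kind described above can be sketched as follows; this is a minimal illustration, not the authors' code. The 65/20/15 proportions come from the abstract, while the shuffling seed and rounding details are assumptions (note that rounding here yields counts slightly different from the reported 1,242/382/287).

```python
import numpy as np

def split_indices(n_images, train_frac=0.65, val_frac=0.20, seed=0):
    """Shuffle image indices and partition them into disjoint
    train/validation/test subsets by the given fractions."""
    rng = np.random.default_rng(seed)  # seed is an illustrative assumption
    idx = rng.permutation(n_images)
    n_train = round(n_images * train_frac)
    n_val = round(n_images * val_frac)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

train_idx, val_idx, test_idx = split_indices(1909)
print(len(train_idx), len(val_idx), len(test_idx))
```

Because the three slices are taken from one permutation, every image lands in exactly one subset.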

Results: The trained model was applied to a clinical testing set of an additional 3,758 images (acquired between August 2007 and February 2017), independent of the training/validation/testing data set. Eighty-four images were misclassified: 40 PA/AP images classified as non-PA/AP and 44 non-PA/AP images classified as PA/AP. An ROC curve, generated from the percent likelihood output by the trained model for each classified image, yielded a clinical testing AUC of 0.9972 with a standard error of 0.0005.
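An AUC of this kind can be computed directly from per-image likelihood scores via the Mann-Whitney rank formulation: the probability that a randomly chosen positive (PA/AP) image scores higher than a randomly chosen negative one. The sketch below uses toy scores standing in for the model outputs (an assumption), not the study's actual data.

```python
import numpy as np

def roc_auc(labels, scores):
    """AUC as the Mann-Whitney U statistic: fraction of positive/negative
    pairs where the positive scores higher (ties count half)."""
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[labels], scores[~labels]
    # Pairwise comparison; O(n_pos * n_neg), fine for small illustrative sets.
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Toy example: likelihoods for 4 PA/AP (label 1) and 4 other-view images.
labels = [1, 1, 1, 1, 0, 0, 0, 0]
scores = [0.99, 0.95, 0.90, 0.40, 0.60, 0.20, 0.10, 0.05]
print(roc_auc(labels, scores))  # → 0.9375
```

One misranked positive among sixteen pairs gives 15/16 = 0.9375; a perfectly separated set would give 1.0.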

Conclusion: A trained convolutional neural network can classify chest radiographs with high accuracy and a high clinical testing set AUC, improving the effectiveness and efficiency of the clinical workflow. The model was trained in about 4 minutes and classified 3,758 images in 137 seconds; for comparison, an experienced manual sorter would need about 11.6 hours to sort the same 3,758 images.
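The reported timing figures imply roughly a 300-fold speedup over manual sorting; a quick arithmetic check (both the 137-second and 11.6-hour figures are taken from the abstract):

```python
n_images = 3758
model_seconds = 137.0
manual_seconds = 11.6 * 3600  # 11.6 hours expressed in seconds

print(f"model throughput: {n_images / model_seconds:.1f} images/s")  # ≈ 27.4 images/s
print(f"speedup: {manual_seconds / model_seconds:.0f}x")             # ≈ 305x
```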

Funding Support, Disclosures, and Conflict of Interest: Funded by NIH T32 EB002103, QIN U01CA195564, and a University of Chicago Radiomics grant. MLG is a stockholder in R2 Technology/Hologic, receives royalties from Hologic, GE Medical Systems, MEDIAN Technologies, Riverain Medical, Mitsubishi, and Toshiba, and is a cofounder of and stockholder in Quantitative Insights.

