
Effect of the Classifier Performance Criterion On Image Quality Assessment in MRI

R Gabr1*, C Williams2 , R He1 , P Narayana1 , (1) University of Texas Health Science Center at Houston, Houston, TX, (2) Bucknell University, Lewisburg, PA


(Sunday, 7/29/2018) 3:00 PM - 6:00 PM

Room: Exhibit Hall

Purpose: Assuring high image quality in magnetic resonance imaging (MRI) is crucial for confident diagnosis and for downstream quantitative analysis, such as radiomics/radiogenomics. Automated quality assessment is essential for handling the large datasets typically acquired in multi-center clinical trials. Current applications of machine learning algorithms in MRI show low sensitivity for the detection of suboptimal images. We seek to improve the sensitivity of automated machine learning algorithms, which is key for successful quality assurance (QA).

Methods: Because the majority of MR images tend to be of acceptable quality, automated classifiers favor labeling images "acceptable", which produces artificially high accuracy but very low sensitivity. Such classifiers are therefore of little practical value for QA. We hypothesize that using sensitivity as the performance metric could provide more favorable performance than the accuracy metric. To test this hypothesis, we compared sensitivity and accuracy as performance criteria for hyper-parameter selection and training of the popular random forest classifier, using two sets of image quality features. Structural brain images from a multi-center brain database were assessed using a leave-one-group-out cross-validation scheme to assure generalization of the model. The average and scatter of the accuracy, sensitivity, and specificity were computed.
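The selection scheme described above can be sketched with scikit-learn. This is a hypothetical illustration, not the authors' code: the synthetic features, group labels, and parameter grid are all placeholders; the key idea is switching the cross-validation scoring between sensitivity (recall) and accuracy under a leave-one-group-out split.

```python
# Sketch: hyper-parameter selection for a random forest QA classifier,
# comparing sensitivity (recall) vs. accuracy as the selection criterion.
# All data here are synthetic stand-ins for image-quality features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, LeaveOneGroupOut

rng = np.random.default_rng(0)
n = 600
X = rng.normal(size=(n, 8))            # stand-in image-quality features
groups = rng.integers(0, 6, size=n)    # site/group label per scan
# Imbalanced labels: a small minority of "suboptimal" (positive) scans,
# mimicking the class imbalance described in the abstract.
y = (X[:, 0] + 0.5 * rng.normal(size=n) > 1.2).astype(int)

param_grid = {"max_depth": [2, 4, 8], "n_estimators": [50, 100]}
scores = {}
for scoring in ("recall", "accuracy"):
    # Same model family, two selection criteria: recall favors catching
    # suboptimal scans; accuracy favors the majority "acceptable" class.
    search = GridSearchCV(
        RandomForestClassifier(random_state=0),
        param_grid,
        scoring=scoring,
        cv=LeaveOneGroupOut(),         # generalize across sites/groups
    )
    search.fit(X, y, groups=groups)
    scores[scoring] = search.best_score_
    print(scoring, search.best_params_, round(search.best_score_, 3))
```

With a leave-one-group-out split, each held-out fold is an entire site, so the selected hyper-parameters must generalize across acquisition sites rather than across randomly shuffled scans.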

Results: Using sensitivity as the performance criterion provided better sensitivity (16±27%) and accuracy (74±21%) than the accuracy criterion (7±24% and 69±28%, respectively). These improved rates came at a small cost in specificity (87±28% vs 94±22%). This trend was insensitive to the number of image quality features used in the model.

Conclusion: Using sensitivity as the performance criterion improves the detection rate of suboptimal images and may be a better approach for real-time QA.


Quality Control, Classifier Design, Cost Function


IM-MRI: Quality Control
