Purpose: Enabling automated pipelines and image analysis in cancer clinics requires a thorough understanding of the underlying data. Quality assurance steps for imaging data are currently performed manually; however, automated approaches could improve the efficiency of these steps by detecting possible data biases. In particular, in head and neck (H&N) cancer studies, dental artifacts (DA) affect the visualization of structures and the accuracy of Hounsfield units. This is problematic for image analysis, including radiomics, automated segmentation, and automated treatment planning, where poor image quality can lead to systematic biases (e.g. incorrect segmentations and assignment of tissue densities).
Methods: In this work, we present a three-dimensional convolutional neural network (CNN) to classify H&N images based on the presence of DAs. Imaging volumes of 1183 patients from public and private datasets were scored by a single observer as DA-positive or DA-negative. The CNN was trained on 576 image volumes from two datasets. All images were resized to a 96×96×96 grid to ensure uniform resolution, and the model was trained until an AUC of 1.0 was reached on the training set. The finalized model was validated on 609 patient image volumes from six different public and private datasets.
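The preprocessing step above (resampling each volume onto a fixed 96×96×96 grid) could be sketched as follows. This is a minimal illustration using nearest-neighbour index resampling in NumPy; the function name and input shape are hypothetical, and the paper's actual interpolation scheme is not specified in the abstract.

```python
import numpy as np

def resize_volume(vol, target=(96, 96, 96)):
    # Map each target coordinate back to a source index (nearest neighbour),
    # so any input volume lands on the same 96x96x96 grid.
    idx = [np.minimum((np.arange(t) * s // t).astype(int), s - 1)
           for t, s in zip(target, vol.shape)]
    return vol[np.ix_(idx[0], idx[1], idx[2])]

# Example: a CT volume of 120 axial slices at 512x512 in-plane resolution
vol = np.random.rand(120, 512, 512).astype(np.float32)
print(resize_volume(vol).shape)  # (96, 96, 96)
```

In practice a library routine such as `scipy.ndimage.zoom` with higher-order interpolation would typically be preferred; the sketch only shows why uniform resampling gives every patient volume an identical input shape for the CNN.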
Results: The macro-average AUC of the CNN was 0.86 (CI=0.84-0.88). Additionally, we determined that the CNN was sensitive to immobilization bite blocks present in one of the validation datasets; however, these devices are no longer used in routine practice, and when the affected scans were removed from the validation dataset, the macro-average AUC increased to 0.88 (CI=0.86-0.90).
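The macro-average AUC reported above weights each validation dataset equally rather than pooling all patients. A minimal sketch of that metric, using the Mann-Whitney formulation of AUC (the function names and example scores are illustrative, not the paper's code):

```python
def auc(labels, scores):
    # AUC as the probability that a random DA-positive case is scored
    # above a random DA-negative case (ties count as 0.5).
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def macro_auc(per_dataset):
    # Macro-average: mean of per-dataset AUCs, so each of the
    # validation datasets contributes equally regardless of size.
    return sum(auc(y, s) for y, s in per_dataset) / len(per_dataset)

print(auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```

Macro-averaging is the natural choice here because the six validation datasets differ in size and acquisition settings, and a pooled AUC would let the largest dataset dominate the reported performance.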
Conclusion: This work demonstrates the potential to automate specific quality assurance steps vital to treatment planning and image quantification pipelines through model development with public and private datasets.