Room: Stars at Night Ballroom 2-3
Purpose: Recent studies have successfully used deep learning for computer-aided diagnosis. However, research in the deep-learning field has revealed that the output of a deep neural network (DNN) can be misled by adding relatively small perturbations to the input, raising concerns about robustness. In this study, we investigate robustness in a representative deep-learning problem: lung nodule classification.
Methods: We trained a standard DNN to classify 3D lung nodule CT images as benign or malignant. The training used 858 lung nodule CT image volumes from The Cancer Imaging Archive and followed standard training steps. We evaluated the robustness of the trained DNN in three aspects. 1) We added random noise to the input images and computed the percentage of cases in which the output was altered. 2) We formulated an adversarial attack optimization problem that purposely searches for a noise signal to alter the DNN output, and computed the percentage of inputs that could be successfully attacked. 3) For a given input CT, we repeatedly solved the adversarial attack problem with different initial guesses and computed the successful attack rate.
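The abstract does not state the exact attack formulation or training framework. As a minimal sketch, assuming a PyTorch classifier `model` operating on CT volumes in Hounsfield units and an L-infinity-bounded projected-gradient attack, the three evaluations could look roughly as follows; the function names, noise amplitudes, step sizes, and trial counts are illustrative assumptions, not the authors' protocol.

```python
import torch
import torch.nn.functional as F


@torch.no_grad()
def random_noise_flip_rate(model, volumes, amplitude_hu=10.0, n_trials=20):
    """Percentage of volumes whose predicted class changes under uniform random
    noise bounded by `amplitude_hu` (Hounsfield units). Trial count is an
    illustrative assumption."""
    model.eval()
    clean_pred = model(volumes).argmax(dim=1)
    flipped = torch.zeros_like(clean_pred, dtype=torch.bool)
    for _ in range(n_trials):
        noise = (torch.rand_like(volumes) * 2.0 - 1.0) * amplitude_hu
        flipped |= model(volumes + noise).argmax(dim=1) != clean_pred
    return 100.0 * flipped.float().mean().item()


def adversarial_attack(model, volume, label, amplitude_hu=10.0,
                       steps=100, step_size=0.5, init_noise=None):
    """Projected-gradient search for an L-infinity-bounded perturbation that
    alters the predicted class of a single volume (shape [1, C, D, H, W]).
    Returns (success, perturbation)."""
    model.eval()
    if init_noise is None:  # a different initial guess yields a different attack
        init_noise = (torch.rand_like(volume) * 2.0 - 1.0) * amplitude_hu
    delta = init_noise.clone().requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(volume + delta), label)
        loss.backward()
        with torch.no_grad():
            delta += step_size * delta.grad.sign()     # ascend the loss
            delta.clamp_(-amplitude_hu, amplitude_hu)  # keep |noise| <= amplitude
            delta.grad.zero_()
            if (model(volume + delta).argmax(dim=1) != label).item():
                return True, delta.clone()
    return False, delta.detach()
```

Under these assumptions, evaluation 1) corresponds to `random_noise_flip_rate`, evaluation 2) runs `adversarial_attack` once per input, and evaluation 3) repeats `adversarial_attack` on the same volume with fresh random `init_noise` and reports the fraction of successful runs.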
Results: The trained DNN classified the lung nodule images accurately, with an accuracy of 95% on the training dataset and 85% on the testing dataset. Adding noise with an amplitude of 10 HU to the input images altered the DNN output for 1.5% of the data. After solving the adversarial attack optimization problem, 12.8% of the input data could be successfully attacked. The original and the attacked images were visually indistinguishable. Some data were highly vulnerable: the successful attack rate was 100% even with a small perturbation amplitude of 1 HU.
Conclusion: Although a DNN can be trained to accurately classify lung nodule CT images, its predictions may be vulnerable to noise in the input. Improving the robustness of DNN-based prediction approaches may be needed.