Room: Karl Dean Ballroom C
Purpose: Classification models using deep learning achieve superhuman accuracies in many applications, but only after they have been trained on large amounts of labeled images. Obtaining labeled images is very costly, especially in medicine. We present an initial effort to overcome this issue by using a semi-supervised generative adversarial network (GAN), which achieves high accuracy while using fewer labeled images. We successfully apply this model to the organ-labeling problem in radiation therapy.
Methods: To perform semi-supervised learning with a typical classifier, the GAN is modified as follows: for k possible classes, samples from the GAN generator are added to the dataset and labeled as an additional "fake" class k+1, and the classifier output is expanded from k to k+1 classes. The GAN then learns from labeled real images, unlabeled real images, and generated fake images. We applied this model to identify 29 critical organs in head and neck cancer patients' CT images and assigned the standardized labels recommended by AAPM Task Group 263. The organ masks and CT images were used to train the model, and we demonstrated its ability to extract critical features and identify the organs using fewer labeled images.
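The k+1-class modification described above can be sketched as follows. This is a minimal NumPy illustration of the discriminator's loss under the standard semi-supervised GAN formulation, not the authors' implementation; the network architectures, batch construction, and variable names are assumptions for the sake of the example.

```python
import numpy as np

def softmax(logits):
    """Row-wise softmax over the k+1 classifier outputs."""
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def semisup_discriminator_loss(logits_labeled, labels,
                               logits_unlabeled, logits_fake):
    """Discriminator loss for a k+1-class semi-supervised GAN.

    Columns 0..k-1 are the real organ classes; column k is the
    added "fake" class for generator samples.  The loss combines:
      - supervised cross-entropy on labeled real images,
      - a term pushing unlabeled real images away from class k,
      - a term pushing generated images toward class k.
    """
    eps = 1e-12
    fake_col = logits_labeled.shape[1] - 1  # index of class k+1

    # Supervised cross-entropy on labeled real images.
    p_lab = softmax(logits_labeled)
    sup = -np.mean(np.log(p_lab[np.arange(len(labels)), labels] + eps))

    # Unlabeled real images: should NOT be classified as fake.
    p_unl = softmax(logits_unlabeled)
    real_term = -np.mean(np.log(1.0 - p_unl[:, fake_col] + eps))

    # Generator samples: SHOULD be classified as fake (class k+1).
    p_fake = softmax(logits_fake)
    fake_term = -np.mean(np.log(p_fake[:, fake_col] + eps))

    return sup + real_term + fake_term
```

In training, this loss would be minimized with respect to the classifier's weights on each mini-batch, while the generator is trained adversarially against the fake-class term; the unlabeled images contribute to learning even though their organ labels are unknown.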
Results: The model was tested on CT images of 218 head and neck cancer patients, and its organ-identification accuracy was higher than that of the typical classifier. When all the labeled images were used for training, the accuracies were 89.8% (GAN) and 87.1% (typical classifier). When 50% of the labeled images were used for training, the accuracies were 87.6% and 83.68%, respectively. Notably, the 87.6% accuracy achieved by the model exceeded that of the typical classifier trained on 100% of the labeled images.
Conclusion: This work demonstrates the potential of semi-supervised GANs for classification tasks in clinical research projects, where generating large amounts of labeled data is very costly.
Image Processing, Feature Extraction, Convolution