Room: Track 1
Purpose: Transfer learning has demonstrated great utility in medical image classification, reducing the need for large labeled datasets and the corresponding training time. However, cross-institutional variation in imaging protocols reduces the generalizability of networks between institutions, even for the same task. This study evaluates the feasibility of using partial domain adaptation transfer learning to reduce the dataset shift between OCT datasets from different institutions for image classification.
Method and Materials: The example transfer network (ETN) was used to learn the features shared between the different source datasets. It consists of a feature extractor, a source classifier, a domain discriminator, an auxiliary classifier, and an auxiliary domain discriminator. The feature extractor comprises the convolutional layers of VGG-16, pre-trained on ImageNet. The method quantifies the contribution of each source example by multiplying its loss by a weight factor in the losses of the source classifier and the domain discriminator. The weight factor is inversely related to the output of the auxiliary domain discriminator, passed through a leaky-softmax activation. In this study, we used three publicly available OCT datasets from different institutions for two transfer tasks: dataset 1 to dataset 2, and dataset 1 to dataset 3. The three datasets have different class sets, with some classes shared among them. Dataset 1 consists of four classes: diabetic macular edema (DME), drusen, choroidal neovascularization (CNV), and normal. Dataset 2 contains DME, drusen, and normal, and dataset 3 includes DME and normal.
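The example weighting described above can be sketched as follows. This is a minimal illustration, assuming the leaky-softmax form and mean-normalization commonly used in the ETN formulation; the exact function names and normalization are not specified in this abstract.

```python
import numpy as np

def leaky_softmax(logits):
    """Hypothetical leaky-softmax: exp(z_j) / (C + sum_k exp(z_k)).

    Unlike the ordinary softmax, the outputs sum to less than 1, so the
    total activation can itself signal how confidently the auxiliary
    domain discriminator recognizes an example.
    """
    e = np.exp(logits)
    num_classes = logits.shape[1]
    return e / (num_classes + e.sum(axis=1, keepdims=True))

def example_weights(aux_logits):
    """Per-example weight, inversely related to the auxiliary
    domain discriminator's leaky-softmax output (assumed form).

    Source examples the auxiliary discriminator scores highly
    (i.e., likely outside the shared label space) are down-weighted
    in the source classifier and domain discriminator losses.
    """
    transferability = leaky_softmax(aux_logits).sum(axis=1)
    w = 1.0 - transferability
    return w / w.mean()  # normalize to mean 1 over the batch
```

For example, a source image with low auxiliary-discriminator activation receives a weight above the batch mean, while one the discriminator confidently flags is suppressed, which is how ETN filters out source classes absent from the target dataset.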
Results: Compared with VGG-16, ETN improved the classification accuracy from 81.67% to 92.97% for the task dataset 1 to dataset 2, and from 73.30% to 96.19% for the task dataset 1 to dataset 3.
Conclusion: Partial domain adaptation transfer learning can considerably improve classification accuracy on small but similar unlabeled datasets.
IM/TH- Image Analysis (Single Modality or Multi-Modality): Computer-aided decision support systems (detection, diagnosis, risk prediction, staging, treatment response assessment/monitoring, prognosis prediction)