Room: AAPM ePoster Library
Purpose: To improve deformable image registration (DIR) accuracy for head and neck cancer patients by fine-tuning a pre-trained deep learning model with patient-specific data.
Methods: The deep learning model comprises two scales of convolutional neural networks (CNNs), trained in an unsupervised manner by minimizing an intensity-based dissimilarity metric while encouraging smoothness of the deformation vector field (DVF). The model is first trained on data from a group of patients and then fine-tuned on a specific patient's data to build a patient-specific DIR model. In our study, the model was trained and evaluated on the public TCIA HNSCC-3DCT head and neck dataset: 66 planning CT pairs from 11 patients were used for training, and 12 CT pairs from another 2 patients for validation. The pre-trained model was then fine-tuned on a further patient's first two days of CT scans and tested on the third day's CT. Fine-tuning retrained only the middle layers of the second scale while freezing the earlier and final layers. DIR accuracy was assessed both qualitatively and quantitatively in the soft tissue regions of the head and neck using mutual information.
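The training objective and layer-freezing scheme described above can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the choice of MSE as the dissimilarity metric, the regularization weight `lam`, and the layer names are all assumptions, since the abstract does not specify them.

```python
import numpy as np

def dir_loss(fixed, warped, dvf, lam=0.01):
    """Unsupervised DIR objective: intensity dissimilarity between the fixed
    image and the warped moving image, plus a smoothness penalty on the DVF.
    MSE stands in for the unspecified intensity-based metric; lam is an
    assumed regularization weight. dvf has shape (3, D, H, W)."""
    dissim = np.mean((fixed - warped) ** 2)
    grads = np.gradient(dvf, axis=(1, 2, 3))   # spatial gradients of each DVF component
    smooth = sum(np.mean(g ** 2) for g in grads)
    return dissim + lam * smooth

def trainable_layers(layer_names):
    """Patient-specific fine-tuning: keep only the middle layers of the
    second-scale network trainable, freezing everything else.
    The 'scale2.mid' naming convention is hypothetical."""
    return [n for n in layer_names if n.startswith("scale2.mid")]

layers = ["scale1.conv1", "scale2.early", "scale2.mid1", "scale2.mid2", "scale2.final"]
print(trainable_layers(layers))  # only the middle layers of scale 2 are retrained
```

In a deep learning framework this selection would typically be applied by disabling gradient updates for all parameters outside the selected layers before resuming training on the patient's own scans.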
Results: Qualitatively, the fine-tuned patient-specific model improved soft tissue registration accuracy compared with the pre-trained group-based model. Quantitatively, for the pre-trained group model, the average mutual information across 18 soft tissue ROIs from three test cases increased from 1.09 ± 0.16 before DIR to 1.68 ± 0.15 after DIR. The patient-specific model further improved the mutual information to 1.73 ± 0.18.
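As a rough illustration of the evaluation metric, mutual information between a fixed-image ROI and the corresponding deformed-image ROI can be estimated from a joint intensity histogram. This sketch is an assumption: the abstract does not state how MI was computed, and the bin count and function name here are hypothetical.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram-based mutual information estimate between two ROIs.
    The bin count is an assumed choice, not taken from the abstract."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()                    # joint intensity distribution
    px = pxy.sum(axis=1, keepdims=True)        # marginal of a
    py = pxy.sum(axis=0, keepdims=True)        # marginal of b
    nz = pxy > 0                               # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```

Higher MI between the fixed image and the deformed moving image indicates better intensity correspondence, which is how the before/after-DIR values in the results are compared.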
Conclusion: Fine-tuning a pre-trained group-based deep learning model into a patient-specific model improved DIR performance. The technique can be valuable for adaptive radiotherapy of head and neck patients.
Funding Support, Disclosures, and Conflict of Interest: This work is supported by the National Institutes of Health under Grant Nos. R01-CA184173 and R01-EB028324.