Room: Karl Dean Ballroom C
Purpose: To overcome the challenges of conventional deformable image registration (DIR) and supervised deep-learning DIR by using a novel unsupervised convolutional neural network (CNN) strategy for cone-beam CT (CBCT)-to-CT DIR.
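For intuition, a minimal sketch of an unsupervised CNN DIR training objective is shown below in PyTorch, in the spirit of VoxelMorph-style methods. The abstract does not describe the DCIGN internals, so the tiny network, the warping routine, and the loss weighting here are illustrative assumptions rather than the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyDVFNet(nn.Module):
    """Maps a stacked (CT, CBCT) pair to a 3-channel deformation vector field."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 3, 3, padding=1),  # stand-in for the (unspecified) DCIGN
        )

    def forward(self, fixed, moving):
        return self.net(torch.cat([fixed, moving], dim=1))

def warp(moving, dvf):
    """Warp `moving` (N, 1, D, H, W) by `dvf` (N, 3, D, H, W) given in voxels."""
    n, _, d, h, w = moving.shape
    # Identity sampling grid in normalized [-1, 1] coordinates.
    theta = torch.eye(3, 4, device=moving.device).unsqueeze(0).expand(n, -1, -1)
    base = F.affine_grid(theta, moving.shape, align_corners=True)
    # Convert the voxel-space DVF (x, y, z channels) to normalized offsets.
    scale = torch.tensor([2 / (w - 1), 2 / (h - 1), 2 / (d - 1)], device=dvf.device)
    offset = dvf.permute(0, 2, 3, 4, 1) * scale
    return F.grid_sample(moving, base + offset, align_corners=True)

def loss_fn(fixed, warped, dvf, lam=0.01):
    """Unsupervised loss: image similarity (MSE here) plus DVF smoothness."""
    smooth = sum(torch.mean(torch.abs(torch.diff(dvf, dim=k))) for k in (2, 3, 4))
    return F.mse_loss(warped, fixed) + lam * smooth

# Toy usage: one backward pass on a random CT/CBCT pair.
fixed, moving = torch.rand(1, 1, 32, 32, 32), torch.rand(1, 1, 32, 32, 32)
dvf = TinyDVFNet()(fixed, moving)
loss_fn(fixed, warp(moving, dvf), dvf).backward()

No ground-truth DVF appears in the loss; the network is supervised only by the image pair itself, which is what distinguishes this unsupervised strategy from supervised deep-learning DIR.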
Methods: This technique uses a deep convolutional inverse graphics network (DCIGN)-based DIR algorithm implemented on two Nvidia 1080 Ti graphics processing units. The model employs a distributed-learning CNN architecture and was trained, validated, and tested on 285 head-and-neck patients. The accuracy of the DCIGN algorithm was validated on 100 synthetic cases and 12 held-out test patient cases, and was assessed using the 95th-percentile deformation vector field (DVF) error for the synthetic cases and the normalized mutual information (NMI), feature similarity index metric (FSIM), and root-mean-squared error (RMSE) of the Canny edges between the CT and CBCT images for the held-out test patients.
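As a rough illustration of the evaluation, the sketch below computes the 95th-percentile DVF error, a histogram-based NMI, and the RMSE of slice-wise Canny edge maps using NumPy and scikit-image. Array shapes, the bin count, and the Canny sigma are assumptions; FSIM is omitted because it is not in scikit-image and would need a third-party implementation.

import numpy as np
from skimage.feature import canny

def dvf_error_p95(dvf_pred, dvf_true):
    """95th-percentile magnitude of the vector error between a predicted
    and a ground-truth DVF, each shaped (3, Z, Y, X)."""
    return np.percentile(np.linalg.norm(dvf_pred - dvf_true, axis=0), 95)

def nmi(a, b, bins=64):
    """Histogram-based NMI, (H(A) + H(B)) / H(A, B)."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    h = lambda p: -np.sum(p[p > 0] * np.log(p[p > 0]))
    return (h(px) + h(py)) / h(pxy)

def canny_edge_rmse(ct, cbct, sigma=2.0):
    """RMSE between slice-wise Canny edge maps of two aligned (Z, Y, X) volumes."""
    edges = lambda vol: np.stack([canny(s, sigma=sigma) for s in vol]).astype(float)
    return np.sqrt(np.mean((edges(ct) - edges(cbct)) ** 2))

Edge-based RMSE is a sensible proxy here because raw intensity differences between CT and CBCT are dominated by CBCT scatter and noise, whereas anatomical edges should align after an accurate registration.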
Results: DCIGN achieved a lower 95th-percentile DVF error than rigid registration, intensity-corrected Demons, and landmark-guided DIR on the synthetic cases. DCIGN also outperformed these methods on the held-out test patients in NMI, FSIM, and RMSE. DCIGN required ~14 hours to train and ~3.5 seconds to make a prediction on a 512 x 512 x 120 voxel image.
Conclusion: DCIGN maintains high registration accuracy in the presence of CBCT noise contamination while remaining computationally efficient.