Purpose: To achieve fast and accurate deformable image registration (DIR) in the pulmonary region by establishing a novel multi-scale framework with joint training of unsupervised deep learning models.
Methods: The multi-scale framework is composed of two models: one performs initial registration at a coarse resolution level to capture gross deformations, and the other registers the residual deformation at a fine resolution level. The coarse- and fine-registration models share the same unsupervised convolutional neural network (CNN) architecture, which is trained to register 3D image pairs based on image similarity and deformation vector field (DVF) smoothness, without supervision from ground-truth DVFs. The fine-registration model has more convolution kernels in each layer in order to register more complex, detailed deformations. Both models were first trained separately and then jointly trained within the multi-scale framework. The network was trained on the SPARE (Sparse-view Reconstruction Challenge for 4D-CBCT) dataset and evaluated on the DIR-Lab 4D-CT data with landmarks identified on the inspiratory and expiratory phases.
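A minimal sketch of the two ingredients described above: the unsupervised training objective (image similarity plus a DVF smoothness penalty) and the coarse-to-fine DVF composition. This is an illustrative NumPy reconstruction, not the authors' implementation; the smoothness weight `lam`, the use of MSE for similarity, and the factor-of-2 upsampling are all assumptions.

```python
import numpy as np

def unsupervised_dir_loss(fixed, warped, dvf, lam=0.01):
    """Unsupervised DIR loss: image similarity + DVF smoothness.

    fixed, warped : 3D image volumes (warped = moving image resampled by dvf).
    dvf           : displacement field of shape (3, D, H, W).
    lam           : hypothetical smoothness weight (not given in the abstract).
    """
    # Image similarity term (MSE chosen here as a simple stand-in)
    similarity = np.mean((fixed - warped) ** 2)
    # Smoothness term: mean squared spatial gradient of each DVF component
    grads = [g for c in range(3) for g in np.gradient(dvf[c])]
    smoothness = np.mean([np.mean(g ** 2) for g in grads])
    return similarity + lam * smoothness

def compose_multiscale(dvf_coarse, dvf_fine_residual):
    """Combine the two scales: upsample the coarse DVF by 2x in each spatial
    axis (nearest-neighbor, displacements scaled accordingly) and add the
    residual DVF predicted by the fine-level model."""
    up = dvf_coarse.repeat(2, axis=1).repeat(2, axis=2).repeat(2, axis=3) * 2.0
    return up + dvf_fine_residual
```

In this sketch the fine model only has to learn the residual left over after the coarse stage, which is what lets the joint training refine detailed lung structures without re-solving the gross deformation.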
Results: Qualitatively, the diaphragm, chest wall and lung structures were well matched. Quantitatively, the average registration error was reduced from 8.5±6.6 mm before DIR to 2.6±2.6 mm after DIR using the proposed network. Compared to a recently published multi-scale method with separate model training, our method achieved a comparable mean registration error and reduced the standard deviation of the registration error from 4.3 mm to 2.6 mm. Our method also substantially reduced the registration errors for a case with large deformations compared to the previous method. The total DVF prediction time for paired volumes of dimension 256×256×96 was about 1.4 seconds.
Conclusion: The proposed multi-scale network with joint training of unsupervised learning models is effective and efficient for DIR and requires no manual tuning of parameters during prediction, making it well suited for clinical tasks.
Funding Support, Disclosures, and Conflict of Interest: This work was supported by NIH grant R01 CA-184173.