Room: Exhibit Hall | Forum 6
Purpose: To investigate the feasibility of using deep learning for patient surface-image-guided setup in rectal cancer radiotherapy.
Methods: A total of 180 paired CT and CBCT images from 68 rectal cancer patients were utilized in this study. Surface images were extracted from the resampled CT and CBCT images, and the spatial shifts and rotations were acquired from volume registration. One hundred forty-four image pairs were used to train a convolutional neural network (CNN), and 36 pairs were used for verification. Data augmentation was performed by adding random translations and rotations to avoid model overfitting. The results were evaluated by 5-fold cross-validation. In the evaluation, the overall mean setup error, systematic setup error, and random setup error were calculated. Simulated setup errors were introduced to assess performance under large setup errors. To interpret the model, Gradient-weighted Class Activation Mapping (Grad-CAM) was computed to identify the image regions important for model reasoning.
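The abstract does not define how the overall mean, systematic, and random setup errors were computed. A minimal sketch, assuming the conventional population definitions (overall mean = mean of per-patient means, systematic error = standard deviation of per-patient means, random error = root-mean-square of per-patient standard deviations), might look like:

```python
import statistics

def setup_error_stats(residuals_by_patient):
    """Summarize residual setup errors for one degree of freedom.

    residuals_by_patient: list of lists, one list of residual errors
    (e.g. in mm or degrees) per patient across fractions.

    Returns (overall_mean, systematic, random), assuming the
    conventional population definitions:
      overall mean M     = mean of the per-patient means,
      systematic Sigma   = standard deviation of the per-patient means,
      random sigma       = RMS of the per-patient standard deviations.
    """
    patient_means = [statistics.mean(r) for r in residuals_by_patient]
    patient_sds = [statistics.stdev(r) for r in residuals_by_patient]
    overall_mean = statistics.mean(patient_means)
    systematic = statistics.stdev(patient_means)
    random_err = (sum(sd ** 2 for sd in patient_sds) / len(patient_sds)) ** 0.5
    return overall_mean, systematic, random_err
```

This helper and its exact formulas are an illustrative assumption, not the authors' published analysis code.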
Results: The 95% intervals of setup errors in roll, pitch, lateral, and longitudinal directions were narrowed by the CNN. The CNN reduced the overall mean error in all degrees of freedom by more than 50%. Apart from the longitudinal direction, the CNN corrected the systematic setup error in five degrees of freedom; no comparable improvement was observed for the random setup error. For large rotation errors, the model reduced roll and pitch errors to less than 1.5° even with a 10° setup error; however, a large residual yaw error remained when larger errors were introduced. For large translation errors, all three degrees of freedom were reduced by the model. Grad-CAM maps show that the CNN captures the key regions for setup error calculation, indicating that the model's inference is reasonable.
Conclusion: Deep learning can be used for surface registration to guide patient setup in rectal cancer radiotherapy.