Room: AAPM ePoster Library
Purpose: Adaptive radiotherapy (ART) is primarily used to address inter-fractional anatomical variations, but its benefit can be undermined by intra-fractional changes. MRI guidance, recently made available on the MR-Linac, provides real-time anatomical information and allows treatment plans to be updated as the anatomy changes. Dose prediction models based on anatomy have been developed successfully, but translating a 3D dose prediction into individual field segments is necessary to define a deliverable plan. This work aims to develop a real-time ART (RT-ART) planning technique using a deep convolutional conditional generative adversarial network (DCCGAN). We investigate the prediction of 2D fluence maps from 2D dose maps, with the final goal of extending the model to derive individual field segments from 3D dose distributions.
Methods: The DCCGAN architecture is built from two neural networks: a U-Net generator and a binary classification (discriminator) network. The training data were 896 paired images, each consisting of the 2D fluence map and 2D dose map of an individual beam from IMRT plans of three MR-Linac patients. The network was trained for 100 epochs, and a model checkpoint was saved every 10 epochs. A dataset of 15 validation and 35 test image pairs was held out during training for evaluation. The final model was selected by comparing predicted fluence maps to ground-truth fluence maps using the structural similarity index measure (SSIM).
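The abstract does not state the training objective explicitly; a generator/discriminator pairing of this kind is typically trained with a standard conditional-GAN loss, sketched below under that assumption (G is the U-Net mapping a dose map d to a fluence map, D the binary classifier; the L1 term and weight lambda are a common but here hypothetical addition):

```latex
\min_G \max_D \;
\mathbb{E}_{d,f}\big[\log D(d, f)\big]
+ \mathbb{E}_{d}\big[\log\big(1 - D(d, G(d))\big)\big]
+ \lambda \, \mathbb{E}_{d,f}\big[\lVert f - G(d) \rVert_1\big]
```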
Results: The best-performing DCCGAN checkpoint, judged on the validation dataset, was obtained after 30 epochs with an average SSIM of 0.89 ± 0.05. On the test dataset, the average SSIM was 0.90 ± 0.05. Prediction time was approximately 1 s.
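The SSIM-based checkpoint selection described above can be sketched as follows. This is a minimal, whole-image SSIM (library implementations such as scikit-image's `structural_similarity` use a sliding window); `mean_ssim` is a hypothetical helper illustrating how the checkpoint with the highest average validation score would be chosen:

```python
import numpy as np

def ssim_global(x, y, data_range=1.0, k1=0.01, k2=0.03):
    """Single-window SSIM over the whole image (simplified sketch)."""
    c1 = (k1 * data_range) ** 2
    c2 = (k2 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2)
    )

def mean_ssim(pairs):
    """Average SSIM over (ground-truth, predicted) fluence-map pairs."""
    return float(np.mean([ssim_global(gt, pred) for gt, pred in pairs]))
```

Model selection then reduces to evaluating `mean_ssim` on the held-out validation pairs for each saved checkpoint and keeping the checkpoint with the highest score.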
Conclusion: The DCCGAN architecture can rapidly translate a dose map into a fluence map, making it a promising avenue toward high-speed treatment planning for RT-ART.