Room: AAPM ePoster Library
Purpose: To investigate a novel real-time volumetric tumor tracking strategy that uses a deep learning model to infer 3D tumor position from a single kilovoltage (kV) X-ray image in image-guided radiation therapy (IGRT).
Methods: A deep learning model was trained to map a single-view 2D projection radiograph of a patient to a 3D segmentation of the patient's tumor. To generate a sufficient number of training 2D projection images, we synthesized diverse scenarios of patient position and/or anatomy by modifying the planning CT image. The modifications, including translations, rotations, and deformations, represent a wide range of possible anatomy variations during a course of radiation therapy (RT). We demonstrated the feasibility of the approach with a 4D thorax phantom using 2800 projection views. The predicted 3D tumor contours were compared against reference contours derived from the preset phantom; the Dice similarity coefficient and center-of-mass distance were evaluated.
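The augmentation idea described above can be sketched in a few lines: apply rigid transforms to the planning-CT volume, then form a 2D projection to pair with the known 3D tumor mask. This is a minimal illustration only; the function names are hypothetical, the study's deformation model is not specified in the abstract and is omitted here, and the projection is a simple line-integral stand-in for a full DRR ray-tracing projector.

```python
import numpy as np
from scipy import ndimage


def augment_ct(volume, shift_vox, angle_deg):
    """Apply a rigid translation (in voxels) and an in-plane rotation to a
    planning-CT volume. Hypothetical sketch; the actual study also used
    deformations, which would need a separate deformation model."""
    v = ndimage.shift(volume, shift_vox, order=1, mode="nearest")
    v = ndimage.rotate(v, angle_deg, axes=(1, 2), reshape=False,
                       order=1, mode="nearest")
    return v


def project_2d(volume, axis=0):
    """Simplified forward projection: line integrals along one axis,
    standing in for a kV projection radiograph (DRR)."""
    return volume.sum(axis=axis)


# Example: one synthetic training pair from a toy 8x8x8 "CT" volume.
ct = np.zeros((8, 8, 8))
ct[3:5, 3:5, 3:5] = 1.0                      # toy tumor
moved = augment_ct(ct, shift_vox=(1, 0, 0), angle_deg=0.0)
drr = project_2d(moved)                      # 2D network input, shape (8, 8)
```

In the study, many such transformed volumes (2800 projection views for the phantom experiment) would supply the projection/segmentation pairs used for training.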
Results: The mean Dice similarity coefficient between the predicted 3D tumor contours and the reference contours for the test data was 0.86 ± 0.06 (range, 0.68-0.95). The mean center-of-mass distance error was 1.26 ± 0.90 mm. Inference took less than 90 ms per projection, which is well suited to real-time tracking of the 3D tumor in IGRT.
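The two evaluation metrics reported above have standard definitions on binary masks. A minimal sketch, assuming predicted and reference contours are available as binary 3D arrays and an isotropic-or-not voxel size is known (both function names are illustrative, not from the study):

```python
import numpy as np


def dice_coefficient(pred, ref):
    """Dice similarity coefficient: 2|A∩B| / (|A|+|B|) on binary masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    inter = np.logical_and(pred, ref).sum()
    total = pred.sum() + ref.sum()
    return 2.0 * inter / total if total > 0 else 1.0


def com_distance_mm(pred, ref, voxel_size_mm=(1.0, 1.0, 1.0)):
    """Euclidean distance between the centers of mass of two masks, in mm."""
    def com(mask):
        return np.argwhere(mask).mean(axis=0)
    delta = (com(pred.astype(bool)) - com(ref.astype(bool)))
    return float(np.linalg.norm(delta * np.asarray(voxel_size_mm)))


# Example: a 6x6x6 reference cube vs. a prediction shifted by one voxel.
ref = np.zeros((10, 10, 10), bool)
ref[2:8, 2:8, 2:8] = True
pred = np.zeros((10, 10, 10), bool)
pred[3:9, 2:8, 2:8] = True
print(dice_coefficient(pred, ref))   # 5/6 ≈ 0.833
print(com_distance_mm(pred, ref))    # 1.0 mm for unit voxels
```

Per-case values of these two quantities, aggregated over the test projections, would yield summary statistics of the form reported in the Results.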
Conclusion: Real-time volumetric image guidance from a single projection view via deep learning could be useful in IGRT and might help simplify the hardware of on-board imaging systems.
Funding Support, Disclosures, and Conflict of Interest: This work was partially supported by NIH (1R01 CA176553 and R01CA227713) and a Faculty Research Award from Google Inc.