Room: AAPM ePoster Library
Purpose: Automatic treatment plan evaluation can help accelerate the plan optimization and approval process, especially for adaptive re-planning. Plan quality is subjective, varying with individual physicians' preferences, and is difficult to capture with a simple scoring algorithm such as PlanIQ. Here we propose a deep learning model that ranks treatment plans for a patient according to a physician's preferences.
Methods: With IRB approval, 630 treatment plans from 58 head-and-neck cases treated by one physician were retrospectively selected for this project. The model performs a pairwise comparison between two plans for a given patient. Its input is the dose and contours for the PTV and OARs, and its output is one of three classes: 1) the first plan is better; 2) the second plan is better; or 3) the two plans are similar. The model is first trained on the PlanIQ scores of all plans in the training set, so that it emulates an idealized virtual physician, "Dr. PlanIQ." The trained model is then adapted to the real physician's preferences through transfer learning. We randomly assigned 50 cases for training, 4 for validation, and 4 for testing.
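The three-way comparison described above can be sketched as a scoring function applied to each plan, with a similarity margin deciding the third class. The feature vector, weights, and margin below are hypothetical placeholders, not the study's actual model:

```python
import numpy as np

def pairwise_compare(score_fn, plan_a, plan_b, margin=0.05):
    """Three-way label: 0 = first plan better, 1 = second plan better,
    2 = plans are similar (score gap within the margin)."""
    sa, sb = score_fn(plan_a), score_fn(plan_b)
    if sa - sb > margin:
        return 0
    if sb - sa > margin:
        return 1
    return 2

# Toy linear score over dose/contour features (hypothetical weights);
# the actual work uses a deep network in place of this.
weights = np.array([0.6, -0.3, -0.1])
score = lambda feats: float(weights @ feats)

label = pairwise_compare(score,
                         np.array([0.9, 0.2, 0.1]),
                         np.array([0.5, 0.4, 0.3]))
print(label)  # 0: first plan scores higher by more than the margin
```

In the actual model, the scalar score would come from a learned network, first fit to PlanIQ scores and then fine-tuned to the physician's choices.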
Results: The trained "Dr. PlanIQ" model achieved 0.68 binary accuracy on the test data. To evaluate the feasibility of transfer learning, we compared PlanIQ scoring against the physician's preferences: for each patient, we computed the PlanIQ percentile ranking of the approved plan among all of that patient's plans, and the median ranking of the approved plan was 83.3%. This shows that the PlanIQ score and the physician's judgment are largely consistent, suggesting that transfer learning can achieve high accuracy on this task.
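The per-patient percentile ranking above can be computed as the fraction of a patient's plans scoring at or below the approved plan. The scores below are toy values for illustration, not the study data:

```python
def percentile_rank(approved_score, all_scores):
    """Percentile of the approved plan among all plans for one patient:
    100 * (number of plans scoring <= approved) / (total plans)."""
    at_or_below = sum(s <= approved_score for s in all_scores)
    return 100.0 * at_or_below / len(all_scores)

# Toy example: 10 candidate plans for one patient (hypothetical scores),
# where the plan scoring 75 was the one the physician approved.
scores = [55, 60, 62, 64, 66, 70, 72, 75, 78, 80]
print(percentile_rank(75, scores))  # 80.0
```

Taking the median of this value across patients gives the 83.3% figure reported for the approved plans.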
Conclusion: We proposed a deep learning model that automatically evaluates treatment plans according to a physician's preferences, which has the potential to improve planning efficiency, especially for adaptive therapy re-planning.