Room: AAPM ePoster Library
Purpose: To create synthetic digitally reconstructed radiographs (DRRs) from MR images that allow accurate fiducial visualization for MR-only treatment planning of robotic radiosurgery.
Methods: We developed a deep convolutional adversarial network to create synthetic CTs from pelvic MRs. Our dataset consisted of paired CT and MR images from 11 prostate cancer patients previously treated with robotic radiosurgery (CyberKnife). The MR images were standard T2-weighted images acquired for contouring of the prostate and urethra. Two training methods were tested: one using the original MR images and one using fiducial-intensity-modified images to simulate a recently developed fiducial-enhancing MR sequence. For each training method, two models were built, each trained on 9 of the available image pairs with the remaining 2 set aside for testing. These models were used to generate synthetic CTs, from which DRRs were computed at 45 and 315 degrees (as used for CyberKnife) and at 0 and 90 degrees (as used for a conventional linac). These synthetic DRRs were compared against the ground-truth DRRs both visually, by having five observers identify the fiducial centers, and quantitatively, using mean squared error (MSE) and peak signal-to-noise ratio (PSNR).
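The quantitative comparison described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names are assumptions, the DRRs at 0 and 90 degrees are approximated as simple parallel-beam ray sums along a volume axis (the oblique 45/315-degree CyberKnife views would additionally require rotating the volume, which is omitted here), and images are assumed scaled to [0, 1].

```python
import numpy as np

def make_drr(ct_volume, axis):
    """Parallel-beam DRR approximation: integrate the volume along one axis.

    For a volume indexed (slice, row, col), summing along axis 1 or 2
    roughly corresponds to the 0- and 90-degree projections.
    """
    return np.asarray(ct_volume).sum(axis=axis)

def mse(true_drr, synth_drr):
    """Mean squared error between two equally sized DRR images."""
    return float(np.mean((np.asarray(true_drr) - np.asarray(synth_drr)) ** 2))

def psnr(true_drr, synth_drr, max_val=1.0):
    """Peak signal-to-noise ratio in dB for images with peak value max_val."""
    err = mse(true_drr, synth_drr)
    return float("inf") if err == 0 else 10.0 * np.log10(max_val**2 / err)
```

For example, a true and a synthetic DRR normalized to [0, 1] with an MSE of 0.01 would yield a PSNR of 20 dB.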
Results: When the model was trained with the original MR images, the fiducials did not stand out on the synthetic DRRs. When the model was trained with the pre-processed MR images, the DRRs generated from the synthetic and true CTs gave similar visualization of the fiducial markers in the prostate. The five observers, on average, identified the fiducial centers to within 0.9 mm on the synthetic DRRs. The mean MSE between the true and synthetic DRRs was 1.2±1.5% and the mean PSNR was 22.8±5.7.
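The observer metric above (mean distance between identified and true fiducial centers) could be computed as in this sketch; the function name, the (row, col) coordinate convention, and the pixel-spacing parameter are our assumptions, not details given in the abstract.

```python
import numpy as np

def mean_center_error_mm(pred_px, true_px, pixel_spacing_mm):
    """Mean Euclidean distance (mm) between matched fiducial centers.

    pred_px, true_px: (N, 2) arrays of (row, col) pixel coordinates for
    N fiducials; pixel_spacing_mm converts pixel offsets to millimeters.
    """
    diff_mm = (np.asarray(pred_px, dtype=float)
               - np.asarray(true_px, dtype=float)) * pixel_spacing_mm
    return float(np.mean(np.linalg.norm(diff_mm, axis=1)))
```

Averaging this quantity over observers and test DRRs gives a single localization-accuracy figure comparable to the 0.9 mm reported above.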
Conclusion: This deep learning-based method provides a way to generate synthetic DRRs for accurate target localization in MR-only planning of fiducial-based robotic radiosurgery.
Target Localization, Radiosurgery, DRRs