Room: Exhibit Hall | Forum 6
Purpose: To improve the accuracy of deformation-driven CBCT reconstruction by using deep learning to map on-board projections to better match digitally reconstructed radiographs (DRRs) in intensity.
Methods: Deformation-driven techniques reconstruct new CBCT images by deforming a prior high-quality CT using deformation vector fields (DVFs). The DVFs are solved iteratively by intensity-matching DRRs of the deformed CT volume to the acquired on-board projections. In clinical applications, however, intensity mismatches between DRRs and on-board projections arise not only from deformation but also from degrading signals including scatter and noise, reducing CBCT reconstruction accuracy. To address these degrading signals, this study proposes a deep learning approach that establishes an intensity mapping scheme between registered cone-beam projections and DRRs. The scheme was subsequently applied to new cone-beam projections to generate 'DRR-like' projections with reduced degrading signals for CBCT reconstruction. The proposed scheme was evaluated using 39 liver patient cone-beam projection sets simulated from contrast-enhanced CTs by a Monte Carlo algorithm. Of these, 29 sets were used to train a deep learning network (U-net) to learn the projection-to-DRR mapping scheme. The trained network was applied to convert the 10 remaining sets for liver CBCT reconstruction. The reconstructed CBCTs were evaluated against the 'gold-standard' contrast-enhanced CTs using the DICE coefficient and center-of-mass error (COME) of deformed liver tumors. A linear-fitting-based projection-to-DRR mapping scheme was also evaluated for comparison.
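For illustration, the linear-fitting comparison baseline mentioned above can be sketched as a global least-squares intensity mapping from a registered projection/DRR pair, then applied to a new projection. This is a minimal NumPy sketch under assumed array shapes; the synthetic scale, offset, and noise stand in for the study's scatter and noise signals and are not taken from the abstract.

```python
import numpy as np

def fit_linear_mapping(projection, drr):
    """Least-squares fit of a global linear intensity mapping
    drr ~ a * projection + b, from one registered projection/DRR pair."""
    a, b = np.polyfit(projection.ravel(), drr.ravel(), deg=1)
    return a, b

def apply_mapping(projection, a, b):
    """Convert a new cone-beam projection into a 'DRR-like' projection."""
    return a * projection + b

# Synthetic example: the DRR is a scaled, offset, noisy version of the
# on-board projection (hypothetical stand-in for degrading signals).
rng = np.random.default_rng(0)
proj = rng.uniform(0.0, 1.0, size=(64, 64))
drr = 2.0 * proj + 0.5 + rng.normal(0.0, 0.01, size=proj.shape)

a, b = fit_linear_mapping(proj, drr)
drr_like = apply_mapping(proj, a, b)
```

A U-net, as used in the study, replaces this single global (a, b) pair with a learned, spatially varying nonlinear mapping, which is why it can better suppress structured signals such as scatter.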
Results: The average tumor DICE and COME between the prior and new images were 0.512 (±0.187) and 5.2 mm (±2.5 mm). Using 20 linearly-fitted projections for each reconstruction, the corresponding DICE and COME improved to 0.796 (±0.060) and 1.5 mm (±0.8 mm). Using 20 projections converted by the trained deep learning model, the values further improved to 0.844 (±0.071) and 1.1 mm (±0.9 mm).
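For context, the two evaluation metrics can be computed from binary tumor masks as in this minimal NumPy sketch; the spherical masks, grid size, and voxel spacing here are synthetic illustrations, not study data.

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """DICE = 2|A intersect B| / (|A| + |B|) for binary masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

def center_of_mass_error(mask_a, mask_b, voxel_size_mm=1.0):
    """Euclidean distance (mm) between the centers of mass of two masks."""
    com_a = np.array(np.nonzero(mask_a)).mean(axis=1)
    com_b = np.array(np.nonzero(mask_b)).mean(axis=1)
    return float(np.linalg.norm((com_a - com_b) * voxel_size_mm))

# Two overlapping synthetic "tumors": radius-8 spheres offset by 2 voxels.
grid = np.indices((40, 40, 40))

def sphere(center):
    c = np.array(center)[:, None, None, None]
    return np.sum((grid - c) ** 2, axis=0) <= 8 ** 2

gold, recon = sphere((20, 20, 20)), sphere((20, 20, 22))
dice = dice_coefficient(gold, recon)
come = center_of_mass_error(gold, recon, voxel_size_mm=1.0)
```

Higher DICE (toward 1.0) indicates better volumetric overlap with the gold-standard tumor, while lower COME indicates better localization; the reported results improve on both axes.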
Conclusion: Deep learning maps cone-beam projections to their corresponding DRRs in intensity more accurately than linear fitting, improving CBCT reconstruction accuracy for image-guided radiotherapy.
Funding Support, Disclosures, and Conflict of Interest: This work was supported by grants from the American Cancer Society (RSG-13-326-01-CCE), from the US National Institutes of Health (R01 EB020366), and from the Cancer Prevention and Research Institute of Texas (RP130109).