Purpose: To develop a custom application with a graphical user interface (GUI) for inter-fraction and intra-fraction patient motion verification and monitoring during intensity-modulated proton therapy, using real-time fluoroscopic frames and aberration-corrected 2D contour overlays.
Methods: A Windows GUI software tool was developed in .NET/C# to passively acquire flat-panel detector (FPD) Camera Link streams from our vendor’s stereoscopic kV x-ray image-guidance platform and to display the fluoroscopic images in real time. A custom calibration phantom and a numerical calibration pipeline (implemented with Python/C# runtime interop) were built to correct for rotational and translational errors in FPD positioning and to compute aberration-corrected 2D contour projections from the 3D structure set (RT-STRUCT) associated with a patient plan. Asynchronous programming keeps the user interface (UI) responsive while the application performs automatic frame-by-frame window/level adjustment and overlays the 2D contours on the fluoroscopic images. A DICOM exchange module (for sending plans and structures) is integrated into the treatment planning workflow through the Varian Eclipse Scripting API. Collected cine videos and individual frames are optionally archived on an institution-approved, HIPAA-compliant scalable cloud storage platform for future studies.
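As an illustration of the contour-projection step, the sketch below projects a 3D structure-set point onto the FPD under a pinhole geometry with a calibrated rigid correction. This is a minimal sketch, not the actual implementation: the class and member names (DetectorCalibration, Project, Correction) and the use of a single 4x4 correction matrix are assumptions for clarity.

```csharp
using System;
using System.Numerics;

public sealed class DetectorCalibration
{
    // Illustrative geometry parameters; the real system's values differ.
    public double Sad;            // source-to-axis distance (mm), assumed
    public double Sdd;            // source-to-detector distance (mm), assumed
    public double PixelPitch;     // detector pixel size (mm/px), assumed
    public Matrix4x4 Correction;  // rotation/translation correction from phantom calibration

    // Project a 3D structure-set point (mm, source-detector axis along Z) onto the FPD.
    public (double u, double v) Project(Vector3 p)
    {
        // Apply the calibrated rigid correction for FPD positioning errors.
        Vector3 q = Vector3.Transform(p, Correction);

        // Perspective magnification from the x-ray source toward the panel.
        double mag = Sdd / (Sad - q.Z);
        double u = q.X * mag / PixelPitch;  // detector column (px)
        double v = q.Y * mag / PixelPitch;  // detector row (px)
        return (u, v);
    }
}
```

Projecting every vertex of an RT-STRUCT contour this way yields the 2D polyline that is drawn over each fluoroscopic frame.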
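The asynchronous acquisition/display pattern can be sketched as follows, assuming a hypothetical IFrameGrabber interface wrapping the Camera Link stream; the percentile-based window/level heuristic shown here is one plausible realization of "automatic frame-by-frame window and level adjustment," not necessarily the method used in the tool.

```csharp
using System;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

public interface IFrameGrabber
{
    Task<ushort[]> GrabFrameAsync(CancellationToken ct);  // raw FPD frame (assumed API)
}

public sealed class FluoroViewer
{
    private readonly IFrameGrabber _grabber;
    public FluoroViewer(IFrameGrabber grabber) => _grabber = grabber;

    public async Task RunAsync(Action<byte[]> present, CancellationToken ct)
    {
        while (!ct.IsCancellationRequested)
        {
            // Await keeps the UI thread free while the next frame arrives.
            ushort[] raw = await _grabber.GrabFrameAsync(ct);

            // Automatic window/level: clip to the 1st-99th intensity percentiles.
            ushort[] sorted = (ushort[])raw.Clone();
            Array.Sort(sorted);
            double lo = sorted[(int)(0.01 * (sorted.Length - 1))];
            double hi = sorted[(int)(0.99 * (sorted.Length - 1))];
            if (hi <= lo) hi = lo + 1;  // guard against flat frames

            byte[] display = raw
                .Select(v => (byte)Math.Clamp((v - lo) / (hi - lo) * 255.0, 0, 255))
                .ToArray();

            present(display);  // UI thread draws the frame plus the contour overlays
        }
    }
}
```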
Results: We show that the calculated 2D coordinates of the stainless-steel BBs (2 mm in diameter) embedded in the phantom match their measured coordinates in live-captured fluoroscopic frames, with a mean square error of ~1.8 pixels on the FPD (or 0.017 mm at the isocenter). Contour overlay functionality is demonstrated on fluoroscopic frames simulated with the XCAT digital phantom and on frames collected during a preclinical comparative medicine study.
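For concreteness, the verification metric can be sketched as below: the error between calculated and measured BB centroids on the detector, with the detector-plane error demagnified to the isocenter plane by the imaging geometry. The pixel pitch and SAD/SDD defaults are placeholder assumptions, not the system's actual parameters.

```csharp
using System;

public static class BbVerification
{
    // Returns the mean square error (px^2) on the FPD and the RMS error (mm)
    // referred to the isocenter plane. Default geometry values are assumed.
    public static (double msePx, double rmseMmAtIso) Evaluate(
        (double u, double v)[] calc, (double u, double v)[] meas,
        double pixelPitchMm = 0.2, double sad = 1000.0, double sdd = 1500.0)
    {
        double sum = 0;
        for (int i = 0; i < calc.Length; i++)
        {
            double du = calc[i].u - meas[i].u;
            double dv = calc[i].v - meas[i].v;
            sum += du * du + dv * dv;
        }
        double msePx = sum / calc.Length;
        // Demagnify the detector-plane error back to the isocenter plane.
        double rmseMmAtIso = Math.Sqrt(msePx) * pixelPitchMm * (sad / sdd);
        return (msePx, rmseMmAtIso);
    }
}
```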
Conclusion: We developed a Windows GUI application for real-time fluoroscopic image guidance during proton treatment that has been seamlessly integrated into the existing clinical workflow; it also serves as a platform for future image-guidance research. We are currently implementing reliable fiducial tracking and markerless soft-tissue tracking using both conventional computer-vision methods and deep-learning techniques.