
Beams-Eye-View Tracking of Prostate Fiducial Markers During VMAT Treatments

A Mylonas1,2*, E Hewson1, P Keall1, J Booth3, D Nguyen1,2, (1) ACRF Image X Institute, Faculty of Medicine and Health, The University of Sydney, Sydney, NSW, Australia, (2) School of Biomedical Engineering, University of Technology Sydney, Sydney, NSW, Australia, (3) Northern Sydney Cancer Centre, Royal North Shore Hospital, Sydney, NSW, Australia

Presentations

(Sunday, 7/12/2020) [Eastern Time (GMT-4)]

Room: AAPM ePoster Library

Purpose: Tracking fiducial markers using beam’s-eye-view images is ideal as it eliminates the need for additional imaging equipment and provides target information in the most important frame of reference: the view of the treatment beam. However, accurate tracking is challenging during VMAT treatments due to the low contrast of MV images and the occlusion of markers by MLC leaves. Here, we present a novel beam’s-eye-view fiducial marker tracking system based on a convolutional neural network (CNN) classifier.

Methods: A real-time multiple object tracking system based on a CNN classifier was developed for intrafraction monitoring. Real-time performance was achieved by biasing the search region using the known 3D locations of the markers acquired from the patient’s planning CT. We trained the classifier on labelled MV images of prostate cancer patients with implanted fiducial markers undergoing VMAT treatments. The CNN was composed of four convolutional layers and one fully connected layer. The classifier was trained on images from 29 fractions of 7 patients and validated on unseen images from 78 fractions of 20 patients. Classifier performance was evaluated using a precision-recall curve. The tracking system was assessed on MV images from 15 fractions of 15 prostate cancer patients, and its accuracy was compared with manual marker identification.
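To illustrate the two components described above, the sketch below pairs a gantry-angle-dependent projection of a planning-CT marker position (used to bias the search region) with a small CNN of four convolutional layers and one fully connected layer, as stated in the Methods. This is a minimal Python/PyTorch sketch, not the authors' implementation: the patch size, layer widths, SAD/SID values, and the function and class names are all illustrative assumptions.

```python
import numpy as np
import torch
import torch.nn as nn

def search_region(marker_xyz, gantry_deg, sad=1000.0, sid=1600.0, half_width=20.0):
    """Bias the search to a window around the expected marker position.

    Projects a marker position from the planning CT (patient coordinates, mm)
    onto the MV panel for the current gantry angle, assuming a point source
    and a simple rotation about the superior/inferior axis. The SAD/SID
    defaults are typical linac values, assumed here for illustration.
    """
    g = np.deg2rad(gantry_deg)
    x, y, z = marker_xyz
    # Rotate into a beam-fixed frame (gantry rotates about the sup/inf axis).
    xb = x * np.cos(g) - z * np.sin(g)
    zb = x * np.sin(g) + z * np.cos(g)
    mag = sid / (sad - zb)                      # divergent-beam magnification
    u, v = xb * mag, y * mag                    # expected panel position (mm)
    return (u - half_width, v - half_width, u + half_width, v + half_width)

class MarkerClassifier(nn.Module):
    """Marker/background patch classifier with four convolutional layers and
    one fully connected layer. Layer widths and the 32x32 patch size are
    illustrative assumptions."""
    def __init__(self, in_channels=1, patch_size=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        )
        # Three 2x poolings reduce the patch by a factor of 8 per side.
        self.fc = nn.Linear(64 * (patch_size // 8) ** 2, 2)

    def forward(self, x):
        return self.fc(torch.flatten(self.features(x), start_dim=1))
```

In this kind of design, candidate patches need only be classified inside the projected search window rather than across the full MV image, which is what makes real-time operation plausible.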

Results: The tracking system had a mean error of -0.1 ± 0.5 mm and -0.1 ± 0.6 mm in the x- (lateral) and y- (superior/inferior) directions of the MV images, respectively. The [1st, 99th] percentiles of the error were [-1.6, 0.9] mm in the x-direction and [-2.0, 1.3] mm in the y-direction. The classifier had a sensitivity of 98.31% and a specificity of 99.87%.
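For readers reproducing this style of evaluation, the snippet below shows how the reported statistics could be computed from per-detection errors (tracked minus manual position) and classification counts. The arrays and counts are placeholders, not the study's data.

```python
import numpy as np

# Placeholder per-detection errors in one image axis (mm), tracked - manual.
errors_x = np.array([-0.3, 0.1, -0.2, 0.4, -0.1])

mean, sd = errors_x.mean(), errors_x.std(ddof=1)     # mean +/- SD, mm
p1, p99 = np.percentile(errors_x, [1, 99])           # [1st, 99th] percentiles
print(f"x error: {mean:.1f} +/- {sd:.1f} mm, [1st, 99th] = [{p1:.1f}, {p99:.1f}] mm")

# Hypothetical patch-classification counts for sensitivity/specificity.
tp, fn, tn, fp = 981, 17, 7992, 10
sensitivity = tp / (tp + fn)                         # fraction of markers found
specificity = tn / (tn + fp)                         # fraction of background rejected
```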

Conclusion: The high classification performance on unseen MV images demonstrates that the classifier can successfully identify fiducial markers during VMAT treatments. Furthermore, the sub-millimetre accuracy and precision of the tracking system demonstrate that it is feasible for real-time tracking.

Funding Support, Disclosures, and Conflict of Interest: D Nguyen is funded by an Early Career Research Fellowship from the Australian National Health and Medical Research Council (NHMRC) and the Cancer Institute of New South Wales. P Keall is funded by an NHMRC Senior Principal Research Fellowship.

Keywords

Image-guided Therapy, Segmentation

Taxonomy

IM/TH- Image Segmentation: X-ray
