
Development of An Automatic Deep Learning Framework for the Detection of Fiducial Markers in Intrafraction Kilovoltage Images

A Mylonas1*, P J Keall1, J T Booth2, T Eade2, D T Nguyen1, (1) ACRF Image X Institute, Sydney Medical School, University of Sydney, Camperdown, New South Wales, Australia, (2) Northern Sydney Cancer Centre, Royal North Shore Hospital, St Leonards, New South Wales, Australia

Presentations

(Sunday, 7/29/2018) 3:00 PM - 3:30 PM

Room: Exhibit Hall | Forum 6

Purpose: This work investigates a deep learning-based fiducial marker classifier to improve real-time tumour tracking in kilovoltage images using no prior patient-specific data.

Methods: The proposed method involved constructing convolutional neural network (CNN) models using Rando phantom kilovoltage intrafraction images with implanted gold fiducials. To do this, we trained a compact CNN (four layers with learnable weights). Additionally, we performed transfer learning using a pre-trained CNN (AlexNet, eight layers with learnable weights). Three training datasets were generated with up to 270,728 examples of fiducial markers and background images at various contrast-to-noise ratios (CNR; 2.8±0.21 to 0.33±0.12) by augmenting 344 phantom kilovoltage images. The trained CNNs were validated using 915,407 images generated from 10,114 unseen fluoroscopic images of four prostate cancer patients with implanted fiducials undergoing radiotherapy. The accuracy of each CNN was determined using a receiver operating characteristic curve and the area under the curve (AUC), with a successful detection defined as the correct classification of a marker or background image. A real-time multiple-object tracking system was developed based on the trained CNNs for intrafraction monitoring applications and was assessed using patient data.
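To illustrate the patch-classification step underlying such a tracking system, the sketch below slides a window over a synthetic kilovoltage image and reports the centres of patches scored as markers. This is a minimal, assumed example only: the trained CNN is replaced by a simple mean-intensity stand-in (`is_marker`), and the patch size, threshold, and image are invented for the demonstration; the abstract does not provide the actual model or parameters.

```python
import numpy as np

PATCH = 5          # assumed patch size in pixels (not from the abstract)
THRESHOLD = 0.5    # assumed decision threshold (not from the abstract)

def is_marker(patch):
    """Stand-in for the trained CNN classifier: returns a score in [0, 1].
    A real system would run the patch through the trained network instead."""
    return float(patch.mean())

def detect_markers(image, patch=PATCH, threshold=THRESHOLD):
    """Slide a window over the image and collect centre coordinates of
    patches whose classifier score exceeds the threshold."""
    h, w = image.shape
    hits = []
    for r in range(h - patch + 1):
        for c in range(w - patch + 1):
            score = is_marker(image[r:r + patch, c:c + patch])
            if score > threshold:
                hits.append((r + patch // 2, c + patch // 2))
    return hits

# Synthetic 20x20 "kV image": dark background with one bright 5x5 "marker".
img = np.zeros((20, 20))
img[8:13, 8:13] = 1.0
print(detect_markers(img))
```

Neighbouring windows that overlap the bright region also fire, so a real tracker would typically cluster nearby detections into a single marker position before passing it to the multiple-object tracking stage.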

Results: The fully trained CNN and the AlexNet transfer learning CNN using the smallest training dataset with high CNR (2.8±0.21) had the lowest accuracy, with AUCs of 0.9956 and 0.9865, respectively, compared to 0.9994 and 0.9993 for the largest dataset with low CNR (0.33±0.12). The fiducial markers were successfully tracked throughout all treatment fractions. The accuracy of the CNNs increased with the size of the training datasets for both training methods.
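The AUC values above summarise the receiver operating characteristic curve obtained by sweeping the classifier's decision threshold. A minimal sketch of that computation, assuming binary marker/background labels and per-image classifier scores (the actual scores are not given in the abstract), is:

```python
import numpy as np

def roc_auc(labels, scores):
    """Compute the ROC area under the curve by sweeping a decision
    threshold over the scores and integrating TPR vs FPR (trapezoidal rule)."""
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    # Sort by descending score so lowering the threshold admits one sample at a time.
    order = np.argsort(-scores)
    labels = labels[order]
    # Cumulative true/false positives as the threshold is lowered.
    tps = np.cumsum(labels)
    fps = np.cumsum(~labels)
    tpr = np.concatenate(([0.0], tps / labels.sum()))
    fpr = np.concatenate(([0.0], fps / (~labels).sum()))
    # Trapezoidal integration of the ROC curve.
    return float(np.sum((fpr[1:] - fpr[:-1]) * (tpr[1:] + tpr[:-1]) / 2.0))

# A perfectly separating classifier yields an AUC of 1.0.
print(roc_auc([1, 1, 0, 0], [0.9, 0.8, 0.2, 0.1]))  # → 1.0
```

An AUC near 1.0, as reported for the largest training dataset, indicates that almost every marker patch is scored above almost every background patch, regardless of the specific threshold chosen for deployment.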

Conclusion: The high classification accuracies demonstrate that CNNs trained using phantom data with low CNR generalise successfully to unseen patient intrafraction images. The new CNN tracking system requires no prior knowledge of the marker type, shape or size, and works with overlapping markers.

Funding Support, Disclosures, and Conflict of Interest: D T Nguyen is funded by an Early Career Research Fellowship from the Australian National Health and Medical Research Council (NHMRC) and the Cancer Institute of New South Wales. P J Keall is funded by a NHMRC Senior Principal Research Fellowship.

Keywords

Image-guided Therapy, Computer Vision, Fluoroscopy

Taxonomy

IM/TH- image segmentation: X-ray
