Room: Exhibit Hall | Forum 1
Purpose: Real-time lung tumor tracking remains an important goal in radiotherapy. In-room Cone-Beam CT (CBCT) scanners are now ubiquitous in radiotherapy departments and are commonly used to reduce patient set-up uncertainties. CBCT projection images can be acquired while the gantry rotates during treatment. Unfortunately, the image quality of raw CBCT projections can be poor, making it challenging to visualize a lung tumor directly. We propose tracking the diaphragm as a surrogate for lung tumor position, since the dense muscle tissue of the diaphragm is easier to localize on CBCT projections.
Methods: We collected 753 CBCT projections from 11 lung cancer patients. Each projection was segmented into a binary mask containing two regions, below diaphragm and above diaphragm, to use as ground truth labels. We trained a U-Net convolutional neural network (CNN) on the CBCT projections using a single NVIDIA Quadro K2200 GPU. The network used 3 downsampling/upsampling levels, with 64 convolutional filters at the input layer. Images were split 75%/25% into training and validation sets, and the network was trained with a batch size of 1 for 100,000 iterations. Images were first downscaled from a resolution of 1024x1024 to 256x256, and the data were augmented using mirroring and semi-randomized brightness and contrast scaling. A sketch of this training setup is shown below.
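The following is a minimal sketch of the described network and augmentation, assuming a Keras/TensorFlow environment; the optimizer, loss, augmentation ranges, and function names are illustrative assumptions, not the authors' exact configuration.

```python
# Hedged sketch of a 3-level U-Net with 64 filters at the input stage,
# plus mirroring and brightness/contrast augmentation. Assumed details:
# Adam optimizer, binary cross-entropy loss, augmentation magnitudes.
import tensorflow as tf
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    # Two 3x3 convolutions with ReLU, as in a standard U-Net stage.
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

def build_unet(input_shape=(256, 256, 1), base_filters=64, depth=3):
    # 3 downsampling/upsampling levels, 64 filters at the input to the network.
    inputs = layers.Input(shape=input_shape)
    skips, x = [], inputs
    for d in range(depth):
        x = conv_block(x, base_filters * 2 ** d)
        skips.append(x)
        x = layers.MaxPooling2D(2)(x)
    x = conv_block(x, base_filters * 2 ** depth)  # bottleneck
    for d in reversed(range(depth)):
        x = layers.Conv2DTranspose(base_filters * 2 ** d, 2,
                                   strides=2, padding="same")(x)
        x = layers.Concatenate()([x, skips[d]])
        x = conv_block(x, base_filters * 2 ** d)
    # Single-channel sigmoid output: below-diaphragm vs. above-diaphragm mask.
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(x)
    return Model(inputs, outputs)

def augment(image, mask):
    # Mirroring plus semi-randomized brightness and contrast scaling.
    if tf.random.uniform(()) > 0.5:
        image = tf.image.flip_left_right(image)
        mask = tf.image.flip_left_right(mask)
    image = tf.image.random_brightness(image, max_delta=0.2)
    image = tf.image.random_contrast(image, lower=0.8, upper=1.2)
    return image, mask

model = build_unet()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# Projections are downscaled from 1024x1024 to 256x256 and trained with a
# batch size of 1; loading of the 753 projections and the 75%/25% split
# is omitted here.
```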
Results: The network segmented the diaphragm with a mean per-class accuracy of 77%. Image dilation post-processing was then applied to remove outlying, erroneously detected regions.
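A minimal sketch of dilation-based post-processing, assuming SciPy; the structuring-element iterations and the largest-connected-component step are illustrative assumptions for discarding outlying regions, not the authors' exact procedure.

```python
# Hedged sketch: dilate the predicted binary mask, then keep only the
# largest connected component to suppress erroneously detected regions.
import numpy as np
from scipy import ndimage

def clean_mask(binary_mask, dilation_iters=3):
    # Dilate the predicted below-diaphragm mask to close small gaps.
    dilated = ndimage.binary_dilation(binary_mask, iterations=dilation_iters)
    # Label connected components and keep only the largest one.
    labeled, n = ndimage.label(dilated)
    if n == 0:
        return dilated
    sizes = ndimage.sum(dilated, labeled, index=range(1, n + 1))
    return labeled == (np.argmax(sizes) + 1)
```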
Conclusion: We successfully trained a CNN to segment the diaphragm on CBCT projection images of lung cancer patients.
Funding Support, Disclosures, and Conflict of Interest: This work was supported by an Elekta Research Grant.