
OARnet: Organs-At-Risk Delineation in Head and Neck CT Images

M H Soomro*, H Nourzadeh, V Leandro Alves, W Choi, J Siebers, University of Virginia, Charlottesville, VA

Presentations

(Monday, 7/13/2020) 4:30 PM - 5:30 PM [Eastern Time (GMT-4)]

Room: Track 2

Purpose: To develop a high-performance knowledge-based model to automatically delineate organs-at-risk (OARs) in head and neck (H&N) image datasets used in radiation therapy treatment planning.

Methods: A new compact 3D deep learning model architecture (OARnet) was developed and used to delineate 28 H&N OARs in CT datasets. OARnet uses a densely connected network to detect the OAR bounding box and then delineates the OAR within that box. It reuses information from each layer in all subsequent layers, and uses skip connections to combine information from different dense-block levels to progressively improve delineation accuracy. We used 235 publicly available H&N CTs, each with 28 OARs manually delineated by experienced radiation oncologists; 165 datasets were used for training and 70 for assessment. The Dice similarity coefficient (DC) and the 95th-percentile Hausdorff distance (HD95) were analyzed and compared with values from the published UaNet method (see the illustrative sketch below).
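The abstract describes dense connectivity (feature reuse across layers) only at a high level; the following minimal PyTorch sketch of a 3D dense block illustrates the idea. The layer count, growth rate, and channel sizes are illustrative assumptions, not the authors' OARnet implementation.

# Hypothetical sketch of a 3D dense block with feature reuse; hyperparameters are
# illustrative assumptions, not the published OARnet architecture.
import torch
import torch.nn as nn

class DenseBlock3D(nn.Module):
    """Each layer receives the concatenation of all preceding feature maps."""

    def __init__(self, in_channels: int, growth_rate: int = 16, n_layers: int = 4):
        super().__init__()
        self.layers = nn.ModuleList()
        for i in range(n_layers):
            self.layers.append(
                nn.Sequential(
                    nn.BatchNorm3d(in_channels + i * growth_rate),
                    nn.ReLU(inplace=True),
                    nn.Conv3d(in_channels + i * growth_rate, growth_rate,
                              kernel_size=3, padding=1, bias=False),
                )
            )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = [x]
        for layer in self.layers:
            # Reuse information from every earlier layer via channel concatenation.
            features.append(layer(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)

if __name__ == "__main__":
    block = DenseBlock3D(in_channels=8)
    volume = torch.randn(1, 8, 32, 64, 64)   # (batch, channels, depth, height, width)
    print(block(volume).shape)                # torch.Size([1, 72, 32, 64, 64])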

Results: OARnet improved the DC and HD95 geometric similarity metrics in 27 of 28 OARs. The average Dice coefficients (mean ± SD) of UaNet and OARnet were 0.78 ± 0.11 and 0.81 ± 0.10, respectively; the average HD95 values were 5.9 ± 2.5 mm and 5.1 ± 2.2 mm. In these 27 OARs, the median DC improved by up to 10% and the median HD95 improved by up to 2 mm. The improvements were mainly in the hypophysis, lens_L, lens_R, optic_chiasm, optic_nerve_L, optic_nerve_R, and thyroid.
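For reference, DC and HD95 are standard overlap and surface-distance metrics. The sketch below shows one generic way to compute them for binary masks with SciPy, with voxel spacing in mm supplied by the caller (default 1 mm isotropic); it is not the study's evaluation code.

# Generic DC and HD95 for binary masks; an assumed implementation, not the
# evaluation code used in the study.
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    pred, gt = pred.astype(bool), gt.astype(bool)
    return 2.0 * np.logical_and(pred, gt).sum() / (pred.sum() + gt.sum())

def hd95(pred: np.ndarray, gt: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    # Surface voxels = mask minus its erosion.
    pred_s = pred.astype(bool) & ~binary_erosion(pred.astype(bool))
    gt_s = gt.astype(bool) & ~binary_erosion(gt.astype(bool))
    # Distance from each surface voxel of one mask to the nearest surface voxel of the other.
    d_to_gt = distance_transform_edt(~gt_s, sampling=spacing)[pred_s]
    d_to_pred = distance_transform_edt(~pred_s, sampling=spacing)[gt_s]
    # 95th percentile of the pooled symmetric surface distances, in mm.
    return float(np.percentile(np.concatenate([d_to_gt, d_to_pred]), 95))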

Conclusion: OARnet accurately reproduces manually delineated OARs and outperforms the previously published UaNet method. OARnet's simple, compact architecture requires relatively few trainable parameters, which makes it less prone to overfitting.

Funding Support, Disclosures, and Conflict of Interest: This work was supported by NIH R01CA222216.

Keywords

Segmentation, Radiation Therapy, Contour Extraction

Taxonomy

IM- CT: Machine learning, computer vision
