Room: AAPM ePoster Library
Purpose: The trigeminal nerve is a complex neural structure near the skull base whose normal appearance varies significantly between subjects. Segmenting the trigeminal nerve for radiosurgery to treat trigeminal neuralgia is a difficult, time-consuming task with pronounced inter-operator variability. The purpose of this work is to develop and test a deep-learning-based model for robust automatic segmentation of the trigeminal nerve.
Methods: 1.5-Tesla preoperative T1- and T2-weighted MRI volumes were acquired from 150 patients who underwent stereotactic radiosurgery for trigeminal neuralgia. The T2-weighted image volumes were registered to the T1-weighted image volumes. The trigeminal nerves, extending from the pons through the trigeminal ganglion and including Meckel's cave, were contoured independently by three experts in neuroanatomy and then reviewed for consensus. A 3-D U-Net convolutional neural network was trained to perform automated segmentation. The Dice coefficient was used to gauge the effectiveness of the model, and binary cross-entropy was used to monitor the progress of model training. Of the 150 volumes, 120 were used for training, 20 for validation, and 10 for testing. In total, the model was trained for a maximum of 250 epochs over a span of 10 hours. Early-stopping checkpoints were used during training to prevent overfitting.
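The Dice coefficient used to evaluate the model measures the voxel-wise overlap between a predicted mask and a reference mask. A minimal sketch of that metric, assuming binary NumPy mask volumes (the function name and the toy example volumes are illustrative, not from the study):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice overlap between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    # eps guards against division by zero when both masks are empty.
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy 3-D example: two partially overlapping 4x4x4 cubes in a small volume.
a = np.zeros((8, 8, 8), dtype=np.uint8)
b = np.zeros((8, 8, 8), dtype=np.uint8)
a[2:6, 2:6, 2:6] = 1   # 64 voxels
b[3:7, 3:7, 3:7] = 1   # 64 voxels, 27 of them shared with a
print(round(dice_coefficient(a, b), 3))  # → 0.422 (= 2*27 / (64 + 64))
```

A Dice value of 1.0 indicates perfect overlap and 0.0 indicates none, so the scores reported below can be read directly as overlap quality.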
Results: The Dice coefficients for the training, validation, and test sets were 0.87, 0.82, and 0.81, respectively. The model produces masks in excellent visual agreement with the manually segmented volumes. Importantly, the model-generated masks do not overlap brainstem or temporal lobe tissue.
Conclusion: This work suggests that a deep-learning model could be used to perform automatic segmentation of complicated normal-appearing tissue structures near the skull base. Future work will explore the use of this model for automated treatment planning for radiosurgery and percutaneous procedures in trigeminal neuralgia.
Funding Support, Disclosures, and Conflict of Interest: This work is supported by NIH grants P41 EB015894 and P30 NS076408.
Segmentation, MRI, Convolution