Room: Stars at Night Ballroom 2-3
Purpose: We develop a novel multi-branch CNN framework that takes a single slice of a patient's PET/CT images as input and achieves state-of-the-art accuracy in predicting cancer progression.
Methods: Building on previous work, two CNNs of similar architecture were independently trained on a cohort of 300 head & neck squamous cell carcinoma patients to predict distant metastasis (DM), loco-regional control (LRC) and overall survival (OS). The first network ("CT branch") was trained solely on a single slice of the pre-treatment CT. The second network ("PET branch") was trained solely on a single slice of the pre-treatment PET. The training set (194 patients) and validation set (106 patients) are mutually independent and drawn from 4 institutions. After training, the convolutional portion of each CNN was merged into the final "PET/CT multi-branch" CNN: a multilayer perceptron was added onto the merged convolutional output and trained, with the weights of the convolutional portions frozen. The performance of the "PET/CT multi-branch" CNN was evaluated and compared against a benchmark study. Robustness was evaluated through several methods, most notably by using only a single slice of deliberately un-registered PET/CT images.
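The merge step above can be sketched as follows. This is a minimal toy illustration, not the authors' implementation: the layer sizes, kernel counts, and NumPy forward pass are all assumptions chosen for brevity. Each branch's frozen convolutional portion produces a feature vector; the two vectors are concatenated and only the MLP head on top would be trained.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_features(x, kernels):
    """Toy stand-in for one frozen convolutional branch:
    valid 2-D correlations + ReLU, flattened into a feature vector."""
    feats = []
    for k in kernels:
        kh, kw = k.shape
        h, w = x.shape[0] - kh + 1, x.shape[1] - kw + 1
        out = np.empty((h, w))
        for i in range(h):
            for j in range(w):
                out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
        feats.append(np.maximum(out, 0.0).ravel())
    return np.concatenate(feats)

def mlp_head(features, w1, b1, w2, b2):
    """Trainable multilayer perceptron applied to the merged features."""
    h = np.maximum(features @ w1 + b1, 0.0)   # hidden layer, ReLU
    logits = h @ w2 + b2                      # three outputs: DM, LRC, OS
    return 1.0 / (1.0 + np.exp(-logits))      # sigmoid per endpoint

# Illustrative single CT slice and single PET slice (16x16 is arbitrary).
ct_slice = rng.standard_normal((16, 16))
pet_slice = rng.standard_normal((16, 16))

# Frozen kernels stand in for each pre-trained branch's conv weights.
ct_kernels = [rng.standard_normal((3, 3)) for _ in range(2)]
pet_kernels = [rng.standard_normal((3, 3)) for _ in range(2)]

# Merge: concatenate the two branches' features, then apply the MLP head.
merged = np.concatenate([conv_features(ct_slice, ct_kernels),
                         conv_features(pet_slice, pet_kernels)])
w1 = rng.standard_normal((merged.size, 8)) * 0.1
b1 = np.zeros(8)
w2 = rng.standard_normal((8, 3)) * 0.1
b2 = np.zeros(3)
probs = mlp_head(merged, w1, b1, w2, b2)  # one probability per endpoint
```

In training, gradients would flow only into `w1`, `b1`, `w2`, `b2`, leaving both branches' convolutional weights untouched, which mirrors the freezing described above.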
Results: The multi-branch methodology results in AUCs of 0.91, 0.70 and 0.70 when predicting distant metastasis, loco-regional control and overall survival, respectively. We show that the precise choice of image slice and the lack of spatial image registration do not significantly influence the results. This flexibility in input data is deliberate, as it improves the robustness of the model.
Conclusion: This study introduces a novel approach to combining information from two of the most prominent medical imaging modalities: PET and CT. By independently training each network prior to merging them, our framework achieves accuracy comparable to other methods, including single-modality CNNs and traditional engineered-feature radiomics.