Room: Exhibit Hall
Purpose: To train a conditional generative adversarial network (cGAN) to generate synthetic MR (synMR) images of the prostate from CT data and evaluate the quality of the synMR images produced using various training datasets
Methods: Seventy-seven prostate patients with both MR and CT exams were used in this study. Seventy-three patients were randomly selected as the training set, and the remaining four patients were used as the testing set to evaluate cGAN performance. The cGAN was trained using registered axial MR and CT slices. In order to characterize the relationship between training data and synMR quality, the following three training sets were used: (1) entire scans of 25 patients, (2) entire scans of 73 patients, (3) 73 scans limited to slices ±1 cm around the prostate in the craniocaudal direction. Quality was evaluated by calculating the mean absolute error around the prostate, MAE(prostate±1cm), between synMR and true MR images in each testing case.
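The abstract does not specify how MAE(prostate±1cm) was implemented; the following is a minimal sketch of one way to compute it, assuming co-registered synMR and true MR volumes, a known axial slice spacing, and a known prostate-center slice index (all of these inputs are assumptions for illustration, not details stated above).

```python
import numpy as np

def mae_prostate_region(syn_mr, true_mr, slice_spacing_cm, prostate_center_slice,
                        margin_cm=1.0):
    """Mean absolute error between synMR and true MR, restricted to axial
    slices within +/- margin_cm of the prostate center (craniocaudal direction).

    syn_mr, true_mr       : co-registered 3D arrays, shape (n_slices, rows, cols)
    slice_spacing_cm      : axial slice spacing in cm
    prostate_center_slice : index of the slice through the prostate center
    """
    # Number of slices covering the +/- 1 cm craniocaudal margin
    n_margin = int(round(margin_cm / slice_spacing_cm))
    lo = max(prostate_center_slice - n_margin, 0)
    hi = min(prostate_center_slice + n_margin + 1, syn_mr.shape[0])
    diff = syn_mr[lo:hi].astype(float) - true_mr[lo:hi].astype(float)
    return float(np.mean(np.abs(diff)))
```

In this sketch the metric would be computed once per testing case, giving the per-patient MAE(prostate±1cm) values that are compared across the three training sets.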
Results: In 2/4 cases, the calculated MAE(prostate±1cm) was smallest using the 73 scans localized to the prostate. Although no consistent improvement was established between the two 73-patient training sets, qualitative improvements were observed in synMR images generated from the larger datasets when compared to the 25-patient training set. In one case, a discrepancy in patient position between the CT and true MR led to a relatively inaccurate synMR (with respect to the true MR).
Conclusion: Machine learning can be used to generate synMR from CT. The choice of training dataset has a significant effect on synMR quality, and a more sophisticated training set may yield more accurate results. Future work will train using multi-channel cGAN input data with marked anatomical structures and will employ a more robust image quality metric. These refinements can lead to significant improvements in overall synMR quality.