Purpose: To investigate the benefit of integrating radiotherapy-specific imaging prior knowledge into deep-network models for cranial pseudo-CT generation. Because such prior knowledge is absent in general image processing, it has rarely been examined in deep-learning research.
Methods: 14 patients received same-day MR and CT simulations for CyberKnife treatment, each with identical treatment and scanning positions. MR was rigidly registered to CT, and two-fold cross-validation was performed. Pseudo-CT was generated by a 13-layer convolutional network and compared against the original CT using the mean absolute HU difference (a sketch of this metric appears below). Four types of prior knowledge were considered for pseudo-CT generation: multi-echo MR sequence (ME); quantitative/qualitative normalization of CT/MR (QUA); multiple objective functions (MO); and image symmetry about the imaging isocenter (SYM). Additional pseudo-CTs were generated by discarding each type of prior knowledge in turn (see supporting document), illustrating typical deep-network performance when that prior knowledge was not considered.
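As a minimal sketch of the evaluation metric, the code below computes the mean absolute HU difference between a pseudo-CT and the registered original CT. The optional body mask and the NumPy-based implementation are assumptions; the abstract does not state how the comparison region was defined.

```python
import numpy as np

def mean_absolute_hu_difference(pseudo_ct, original_ct, mask=None):
    """Mean absolute HU difference, optionally restricted to a body mask."""
    diff = np.abs(pseudo_ct.astype(np.float64) - original_ct.astype(np.float64))
    if mask is not None:
        diff = diff[mask]  # assumed boolean mask; masking is not specified in the abstract
    return float(diff.mean())

# Toy example with synthetic volumes standing in for the registered pair.
rng = np.random.default_rng(0)
original = rng.normal(0.0, 300.0, size=(32, 32, 32))  # HU-like values
pseudo = original + rng.normal(0.0, 90.0, size=original.shape)
print(mean_absolute_hu_difference(pseudo, original))  # mean |error| in HU
```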
Results: Five sets of pseudo-CT were generated in total. The best result was obtained when all prior knowledge was included (88.32 HU difference), comparable to the state-of-the-art approach [1], and it significantly outperformed the variants with one type of prior knowledge removed: 95.24 (ME), 100.97 (QUA), 111.43 (MO) and 120.22 (SYM).
Conclusion: The proposed network delivered accurate pseudo-CT when radiotherapy-specific imaging prior knowledge was included, and including such prior knowledge is highly recommended. The prior knowledge was integrated through refined data-preparation steps (ME, SYM, QUA) and the objective function (MO), imposing no restriction on network design; it could therefore be seamlessly incorporated into any network and has great potential to enhance various deep-network models for medical applications. An additional benefit of integrating imaging prior knowledge is reduced data complexity, allowing a simple network (our network's parameter count was 0.3% of that of [1]) to perform complicated tasks effectively. It naturally avoided overfitting without relying on heuristic parameters such as weight regularization, dropout, batch size or patch size (our model used no regularization, no dropout, and one batch per training case). No heuristic-parameter tuning is required should different training data be used.
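The abstract does not specify which objectives composed the MO term. Purely as an illustrative sketch of how multiple objectives can be combined without constraining network design, the following PyTorch snippet sums a voxel-wise L1 term and an image-gradient L1 term; both the choice of terms and the weights w_voxel and w_grad are hypothetical.

```python
import torch

def mo_loss(pred, target, w_voxel=1.0, w_grad=0.5):
    """Hypothetical multiple-objective loss: weighted sum of voxel-wise L1
    and gradient-difference L1. The actual objectives and weights used in
    the study are not given in the abstract.

    pred, target: (N, 1, D, H, W) pseudo-CT / CT volumes in HU.
    """
    voxel_term = torch.mean(torch.abs(pred - target))
    grad_term = pred.new_zeros(())
    for dim in (2, 3, 4):  # the three spatial axes
        grad_term = grad_term + torch.mean(
            torch.abs(torch.diff(pred, dim=dim) - torch.diff(target, dim=dim)))
    return w_voxel * voxel_term + w_grad * grad_term

# Toy check with random volumes.
pred = torch.randn(1, 1, 8, 8, 8)
target = torch.randn(1, 1, 8, 8, 8)
print(mo_loss(pred, target))
```

Because such a loss touches only the network output, any architecture producing a pseudo-CT volume could adopt it unchanged, consistent with the claim that the priors impose no restriction on network design.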