Purpose: Deriving accurate attenuation maps for PET/MRI remains a challenging problem because MRI voxel intensities are not directly related to photon attenuation properties, and bone and air both yield low MR signal. This work presents a learning-based method to derive patient-specific computed tomography (CT) maps from routine T1-weighted MRI in its native space for attenuation correction of brain PET.
Methods: We developed a machine-learning-based method using a sequence of alternating random forests under the framework of an iterative refinement model. Anatomical feature selection is included in both training and prediction stages to achieve optimal performance. To evaluate its accuracy, we retrospectively investigated 17 patients, each of whom underwent both brain PET/CT and MR imaging. The PET images were corrected for attenuation using the CT images as ground truth, as well as using pseudo-CT (PCT) images generated from the MR images.
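The core of the method (patch-based anatomical signatures fed to a random forest regressor that maps MR intensities to CT Hounsfield units) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the toy volumes, patch radius, and the linear MR-to-CT relation are assumptions for demonstration only, and the real pipeline alternates several forests with iterative refinement and feature selection.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def extract_patches(img, radius=1):
    """Collect a flattened cubic patch around every voxel (edge-padded)."""
    pad = np.pad(img, radius, mode="edge")
    patches = []
    for idx in np.ndindex(img.shape):
        sl = tuple(slice(i, i + 2 * radius + 1) for i in idx)
        patches.append(pad[sl].ravel())
    return np.asarray(patches)

# Toy co-registered "MR" and "CT" volumes; real data would be patient scans.
rng = np.random.default_rng(0)
mr = rng.random((6, 6, 6))
ct = 2000.0 * mr - 1000.0  # hypothetical intensity-to-HU relation

X = extract_patches(mr)  # per-voxel patch signature (27 features here)
y = ct.ravel()           # target HU value per voxel

model = RandomForestRegressor(n_estimators=20, random_state=0).fit(X, y)
pct = model.predict(X).reshape(mr.shape)  # pseudo-CT in native MR space
```

In the iterative-refinement setting described above, a second forest would additionally take patches of the first forest's pseudo-CT as context features, and the process repeats until the prediction stabilizes.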
Results: The PCT images showed a mean absolute error of 66.4±8.1 HU, an average correlation coefficient of 0.974±0.018, and average Dice similarity coefficients larger than 0.85 for air, bone, and soft tissue. Side-by-side image comparisons and joint histograms demonstrated very good agreement between PET images corrected by PCT and by CT. The mean differences of voxel values in selected VOIs were less than 4%, the mean absolute difference over all active areas was around 2.5%, and the mean linear correlation coefficient between PET images corrected by CT and by PCT was 0.989±0.017.
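The evaluation metrics reported above (mean absolute error in HU, and Dice similarity per tissue class) can be computed as sketched below. The HU thresholds used to segment air, soft tissue, and bone are illustrative assumptions, not values taken from the study.

```python
import numpy as np

def mae_hu(ct, pct):
    """Mean absolute error between reference CT and pseudo-CT, in HU."""
    return float(np.mean(np.abs(ct.astype(float) - pct.astype(float))))

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

def tissue_mask(ct, tissue):
    """Threshold HU into tissue classes (illustrative cut-offs)."""
    if tissue == "air":
        return ct < -500
    if tissue == "bone":
        return ct > 300
    return (ct >= -500) & (ct <= 300)  # soft tissue

# Tiny example: identical volumes give MAE 0 and Dice 1 for each class.
ct = np.array([[-800.0, 40.0], [500.0, -600.0]])
pct = ct.copy()
err = mae_hu(ct, pct)
d_bone = dice(tissue_mask(ct, "bone"), tissue_mask(pct, "bone"))
```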
Conclusion: This work demonstrates a novel learning-based approach that automatically generates CT images from routine T1-weighted MR images using random forest regression with patch-based anatomical signatures, which effectively captures the relationship between the CT and MR images. PET images reconstructed with the PCT exhibit errors well below the accepted test/retest reliability criteria of PET/CT, indicating high quantitative equivalence.