Room: Exhibit Hall | Forum 2
Purpose: The intrinsically complex nature of MR images and the variability of the data acquisition process make it difficult to develop a robust MR-to-CT conversion procedure. This study explores a novel method based on deep neural networks (DNN) to synthesize the CT image space (sCT) from several types of MR image data.
Methods: A neural net and an automated image data pipeline based on TensorFlow/Python were designed to provide a supervised learning environment for sCT generation. The DNN architecture relies on a generator and a discriminator to synthesize and classify the output 3D image space. The generator has a U-net structure, and the discriminator operates as a learned cost function featuring a PatchGAN classifier. The two subcomponents were trained and operated in unison to build a meaningful image-feature database and optimize the sCT. The adversarial loss function was learned during training. The sCT environment was implemented on an NVIDIA Titan Xp GPU. The net was trained/validated using CT and MR (T1w/T2w from three scanners) data from 40 randomly selected patients diagnosed with oropharyngeal squamous cell carcinoma. The data were acquired with patients immobilized in masks and subsequently curated to ensure adequate MR-to-CT volumetric correspondence (registration/resampling). sCT accuracy was validated against CT via anatomical feature matching and RT plan dosimetry (gamma maps, target/OAR criteria).
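The generator/discriminator coupling described above follows the conditional-GAN (pix2pix-style) pattern, in which a U-net generator is trained against a PatchGAN discriminator that acts as a learned cost function. A minimal numpy sketch of the combined objective is given below; the L1 fidelity term and its weight are a common choice for this architecture and are illustrative assumptions, not the study's exact loss.

```python
import numpy as np

# Sketch of a pix2pix-style objective (U-net generator + PatchGAN
# discriminator) as described in the Methods. The L1 weight
# (lambda_l1) is an illustrative assumption, not a study value.

def bce(pred, target, eps=1e-7):
    """Binary cross-entropy over a PatchGAN probability map."""
    pred = np.clip(pred, eps, 1 - eps)
    return -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))

def generator_loss(d_on_fake, sct, ct, lambda_l1=100.0):
    """Adversarial term (fool the discriminator) plus L1 fidelity
    between the synthesized CT and the real CT."""
    adv = bce(d_on_fake, np.ones_like(d_on_fake))
    l1 = np.mean(np.abs(sct - ct))
    return adv + lambda_l1 * l1

def discriminator_loss(d_on_real, d_on_fake):
    """Classify real MR/CT pairs as 1 and MR/sCT pairs as 0."""
    return 0.5 * (bce(d_on_real, np.ones_like(d_on_real))
                  + bce(d_on_fake, np.zeros_like(d_on_fake)))

# Toy example: a 4x4 PatchGAN output map and 8x8 image patches.
rng = np.random.default_rng(0)
d_fake = rng.uniform(0.1, 0.9, (4, 4))
d_real = rng.uniform(0.1, 0.9, (4, 4))
sct = rng.normal(size=(8, 8))
ct = rng.normal(size=(8, 8))

g_loss = generator_loss(d_fake, sct, ct)
d_loss = discriminator_loss(d_real, d_fake)
```

During training the two losses are minimized alternately, which is how the two subcomponents "work in unison": the discriminator sharpens its classification of real vs. synthesized patches while the generator improves the sCT to defeat it.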
Results: The GPU implementation achieved a 10X acceleration over the CPU, allowing rapid model training and generation of full 3D sCT datasets in under 1 min. Model validation was first performed with a multimodality anthropomorphic pelvis phantom: anatomical feature differences were within the pixel size and dose differences within 1%. The H&N models were built using MR T1w, T2w, and combined T1w+T2w data. Overall dosimetric deviations were within 1-2%.
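The gamma-map dosimetric comparison used in the validation can be illustrated with a simplified 1D global gamma index (minimum combined dose-difference/distance-to-agreement metric). The 2%/2mm criteria, grid spacing, and global normalization below are illustrative assumptions, not the study's exact analysis settings.

```python
import numpy as np

def gamma_index_1d(dose_ref, dose_eval, spacing_mm,
                   dose_tol=0.02, dist_tol_mm=2.0):
    """Simplified global 1D gamma index: for each reference point,
    the minimum over evaluated points of
    sqrt((dose diff / dose criterion)^2 + (distance / DTA criterion)^2).
    The 2%/2mm criteria are illustrative."""
    n = len(dose_ref)
    x = np.arange(n) * spacing_mm          # point positions in mm
    d_max = dose_ref.max()                 # global normalization dose
    gammas = np.empty(n)
    for i in range(n):
        dd = (dose_eval - dose_ref[i]) / (dose_tol * d_max)
        dr = (x - x[i]) / dist_tol_mm
        gammas[i] = np.sqrt(dd**2 + dr**2).min()
    return gammas

def pass_rate(gammas):
    """Fraction of points meeting the criterion (gamma <= 1)."""
    return float(np.mean(gammas <= 1.0))

# Toy profile: identical CT-based and sCT-based dose profiles
# should pass everywhere with gamma = 0.
profile = np.linspace(0.0, 1.0, 11)
g = gamma_index_1d(profile, profile, spacing_mm=1.0)
```

A gamma pass rate near 100% under such criteria corresponds to the 1-2% dosimetric agreement reported above.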
Conclusion: A DNN architecture can be used to generate robust sCT data on demand from MR images.