Room: AAPM ePoster Library
Purpose: A review of the general literature on organ segmentation of anatomical structures in chest radiographs shows that segmented boundaries are typically delineated along regions of strong image contrast.
Although segmentation criteria differ from study to study, the anatomical validity of the organ regions segmented in CAD systems deserves scrutiny. Therefore, this study investigates a novel method for identifying the true anatomical extent of organs in chest radiographs using a supervised learning-based CNN.
Methods: A total of 1,175 synthetic X-ray images were used as training and validation data. 'Gold standard' labels were drawn on the axial planes of multi-detector CT (MDCT) volumes by radiation oncologists. Based on these boundary coordinates, the MDCT volumes were converted into synthetic X-ray images. Segmentation performance on the synthetic chest X-ray images was evaluated quantitatively with the Dice coefficient, while on clinical chest radiographs a qualitative evaluation was carried out using probability maps.
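The abstract does not specify how the MDCT volumes were collapsed into synthetic radiographs; a minimal sketch, assuming a simple parallel-beam mean-attenuation projection (the function name and axis convention below are illustrative, not the authors' implementation), could look like:

```python
import numpy as np

def synthetic_xray(ct_volume: np.ndarray, axis: int = 1) -> np.ndarray:
    """Collapse a CT volume (z, y, x) into a 2-D synthetic radiograph by
    averaging attenuation along one axis (parallel-beam approximation).

    Hypothetical sketch: the actual projection geometry used in the
    study (e.g. cone-beam ray casting) is not described in the abstract.
    """
    # Shift Hounsfield units so air (~ -1000 HU) maps to zero attenuation.
    attenuation = np.clip(ct_volume + 1000.0, 0.0, None)
    projection = attenuation.mean(axis=axis)
    # Normalize to [0, 1] for display or network input.
    rng = projection.max() - projection.min()
    return (projection - projection.min()) / rng if rng > 0 else projection

# Toy example: an "air" volume with a water-density block in the middle.
vol = np.full((8, 8, 8), -1000.0)
vol[2:6, 2:6, 2:6] = 0.0
img = synthetic_xray(vol)
print(img.shape)  # (8, 8)
```

The same per-pixel boundary coordinates drawn on the axial CT slices can then be projected with the identical geometry, so label and image stay aligned.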
Results: With CNN-based training, the mean Dice values of each organ at the optimal epoch were approximately 96% for the lungs and 90% for the heart. Qualitative visual evaluation of one patient's segmentation results over three years confirmed only small differences, with the error rate between each year's data below 3%.
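The Dice values reported above follow the standard overlap definition, 2|A∩B| / (|A| + |B|); a minimal sketch of how such scores can be computed from binary masks (the helper name and epsilon guard are illustrative assumptions, not from the study):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity between two binary masks: 2|A∩B| / (|A| + |B|).

    eps guards against division by zero when both masks are empty.
    """
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return float(2.0 * intersection / (pred.sum() + truth.sum() + eps))

# Two overlapping 4x4 masks: 4-pixel prediction vs. 9-pixel ground truth.
a = np.zeros((4, 4), dtype=int); a[1:3, 1:3] = 1
b = np.zeros((4, 4), dtype=int); b[1:4, 1:4] = 1
print(round(dice_coefficient(a, b), 3))  # 2*4 / (4+9) -> 0.615
```

A Dice value of ~0.96 for the lungs thus corresponds to near-complete overlap between the predicted mask and the gold-standard label.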
Conclusion: Organ segmentation in chest radiographs can be implemented with a CNN model trained on synthetic X-ray images generated from MDCT data. Moreover, the developed prediction model could segment the true spatial extent of organs in chest radiographs. It can be applied directly to the diagnosis of suspected organ abnormalities, and indirectly to improve the detection probability of hidden pulmonary nodules by including substantial obscured lung areas that would otherwise be missed.
Funding Support, Disclosures, and Conflict of Interest: This work was supported by the Technology Development Program (grant number: S2796565) funded by the Ministry of SMEs and Startups (MSS, Korea).