Room: AAPM ePoster Library
To automate the classification and segmentation of tumor cells in images of biopsy slides using deep learning to minimize manual labor, the time required, and human error. The segmented tumor cells and nuclei will be used for patient-specific microdosimetry studies.
A pathologist manually contoured, on a pixel-by-pixel basis, images of 57 pathology core biopsies in TIFF format, each 3750x3750 pixels at a resolution of 248 nm per pixel. The contoured pixels served as the ground truth for a three-dimensional deep convolutional neural network based on the U-Net architecture, implemented in Keras and TensorFlow. Forty-eight of the core images were used to train the model with data augmentation, using binary cross-entropy as the loss function, on a 120 GB GPU cluster for 12 hours. The remaining nine core images were reserved for testing, which was done by applying a 50% confidence threshold to the model's predictions and comparing the resulting masks with the manual contours.
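The abstract does not state which augmentations were applied; as an illustrative sketch only, a common label-preserving scheme for histopathology tiles is the eight flip/rotation symmetries of the square, applied identically to the image and its contour mask (the function name `augment` is hypothetical):

```python
import numpy as np

def augment(image, mask):
    """Yield the 8 dihedral (rotation + flip) variants of an image-mask pair.

    The same geometric transform is applied to the image and the mask so
    that the pixel-wise ground-truth labels stay aligned.
    """
    for k in range(4):
        img_r = np.rot90(image, k)   # rotate by k * 90 degrees
        msk_r = np.rot90(mask, k)
        yield img_r, msk_r
        # horizontal flip of each rotation gives the other 4 variants
        yield np.fliplr(img_r), np.fliplr(msk_r)
```

Each training tile then contributes eight image-mask pairs, which helps a model trained on only 48 cores generalize.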
The pathologist took an average of 20 minutes to contour one core image, whereas the model segmented three images per minute, with an accuracy of 90.9%, specificity of 91.2%, sensitivity of 90.0%, precision of 73.0%, and a Dice coefficient of 80.6%. The model's predictions were visually similar to the manual segmentations and were more confident in the centers of the tumor regions than at their edges.
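The reported metrics all follow from the confusion counts between the thresholded prediction and the manual mask. A minimal sketch, using the standard definitions (the function name and 50% default threshold mirror the procedure described above, but are illustrative):

```python
import numpy as np

def segmentation_metrics(pred_prob, truth, threshold=0.5):
    """Compute pixel-wise metrics for a binary segmentation.

    pred_prob: model confidence map in [0, 1]; truth: 0/1 manual mask.
    """
    pred = pred_prob >= threshold          # apply the confidence threshold
    truth = truth.astype(bool)
    tp = np.sum(pred & truth)              # tumor pixels correctly found
    tn = np.sum(~pred & ~truth)            # background correctly rejected
    fp = np.sum(pred & ~truth)             # false tumor pixels
    fn = np.sum(~pred & truth)             # missed tumor pixels
    return {
        "accuracy":    (tp + tn) / (tp + tn + fp + fn),
        "specificity": tn / (tn + fp),
        "sensitivity": tp / (tp + fn),
        "precision":   tp / (tp + fp),
        "dice":        2 * tp / (2 * tp + fp + fn),
    }
```

Note that a high accuracy with a lower precision, as reported here, is typical when background pixels dominate the image.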
The proposed model can closely and consistently replicate tumor cell contours made by a pathologist 60 times faster than manual contouring. It can autonomously and efficiently generate large amounts of contoured pathology data that can be used for further research, such as microdosimetry performed on patient-specific tumor nuclei and cells. Future studies will investigate the accuracy and consistency of the manually contoured data, which was used as the ground truth.
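The 60-fold speedup follows directly from the two timings reported above (20 minutes per image manually versus three images per minute for the model):

```python
# Worked arithmetic behind the "60 times faster" claim.
manual_min_per_image = 20.0          # pathologist: 20 min per core image
model_images_per_min = 3.0           # model: 3 images per minute
model_min_per_image = 1.0 / model_images_per_min
speedup = manual_min_per_image / model_min_per_image
```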
Funding Support, Disclosures, and Conflict of Interest: This research was conducted as part of the activities of the TransMedTech Institute, thanks in part to the financial support of the Fonds de recherche du Quebec. This research was enabled in part by Compute Canada and its Niagara GPU cluster (www.computecanada.ca).