In assessing endothelial cell density (ECD), a critical measure of corneal health, eye bank technicians rely on manual methods that are time-consuming and potentially inconsistent, typically analyzing only 100 to 300 of the nearly 1,000 endothelial cells captured per image. We introduce a self-supervised vision transformer model that accurately segments 100 to 1,263 cells per image and calculates ECD, with a mean difference of 9.74% and 87% agreement with eye-bank-determined ECD. Integrated into a robust software editor, our system offers an efficient approach to ECD analysis and a significant value proposition for eye banks.
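The abstract does not give the ECD formula, but the standard definition is the number of cells per square millimeter of analyzed endothelium. A minimal sketch of that computation, assuming an instance-labeled segmentation mask and a known lateral image resolution (`um_per_pixel` is a hypothetical parameter, not from the paper):

```python
import numpy as np

def endothelial_cell_density(labeled_mask: np.ndarray, um_per_pixel: float) -> float:
    """Estimate ECD (cells/mm^2) from an instance-labeled segmentation.

    labeled_mask: 2-D array where 0 is background/cell borders and each
                  segmented cell carries a unique positive integer label.
    um_per_pixel: lateral resolution of the specular image (assumed known).
    """
    cell_labels = np.unique(labeled_mask)
    n_cells = np.count_nonzero(cell_labels)        # ignore background label 0
    analyzed_px = np.count_nonzero(labeled_mask)   # pixels covered by segmented cells
    analyzed_mm2 = analyzed_px * (um_per_pixel / 1000.0) ** 2
    return n_cells / analyzed_mm2
```

Dividing by the segmented area rather than the full image area keeps the estimate valid when, as here, only part of the mosaic is analyzable.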
Purpose: To create Guided Correction Software for informed manual editing of automatically generated corneal endothelial cell (EC) segmentations and to apply it in an active learning paradigm to analyze a diverse set of post-keratoplasty EC images.

Approach: An original U-Net model trained on 130 manually labeled post-Descemet stripping automated endothelial keratoplasty (EK) images was applied to 841 post-Descemet membrane EK images, generating "uncorrected" cell-border segmentations. These segmentations were then manually edited with the Guided Correction Software to create corrected labels. The dataset was split into 741 training and 100 testing EC images. U-Net and DeepLabV3+ were trained on the EC images paired with either the uncorrected or the corrected labels, and model performance was evaluated in a cell-by-cell analysis. Evaluation metrics included the number of over-segmentations, under-segmentations, correctly identified new cells, and endothelial cell density (ECD).

Results: Training U-Net and DeepLabV3+ on the corrected segmentations improved their performance. The average number of over- and under-segmentations per image was reduced from 23 to 11 with the corrected training set. ECD values predicted by networks trained on the corrected labels were not significantly different from their ground-truth counterparts (p = 0.02, paired t-test). These models also correctly segmented a larger percentage of newly identified cells. The proposed Guided Correction Software and semi-automated approach reduced the time required to accurately segment an EC image from 15 to 30 min down to 5 min, an approximately 80% decrease compared with fully manual editing.

Conclusions: Guided Correction Software can efficiently label new training data for improved deep learning performance and generalization across EC datasets.
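The abstract does not define the cell-by-cell matching criterion behind the over- and under-segmentation counts. One plausible sketch, assuming overlap-based matching between instance-labeled masks (the `frac` threshold and both helper names are illustrative assumptions, not the authors' protocol):

```python
import numpy as np

def _split_count(ref: np.ndarray, other: np.ndarray, frac: float) -> int:
    """Count ref cells substantially covered by two or more cells in `other`."""
    n_split = 0
    for lab in np.unique(ref):
        if lab == 0:                      # skip background / border label
            continue
        region = ref == lab
        overlaps = other[region]
        overlaps = overlaps[overlaps > 0]
        if overlaps.size == 0:
            continue
        counts = np.bincount(overlaps)    # area of each overlapping cell
        if np.count_nonzero(counts >= frac * region.sum()) >= 2:
            n_split += 1
    return n_split

def count_over_under(gt: np.ndarray, pred: np.ndarray, frac: float = 0.2):
    over = _split_count(gt, pred, frac)   # one GT cell split by several predictions
    under = _split_count(pred, gt, frac)  # one prediction merging several GT cells
    return over, under
```

Under this convention a ground-truth cell covered by two or more sizeable predicted fragments counts as over-segmented, and the symmetric test on predicted cells flags merges as under-segmentation.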
The status of the donor tissue post-keratoplasty (post-transplant), whether full or partial thickness, is currently assessed for health, function, and complications via clinical evaluation, including detection of visible signs of graft rejection on slit lamp biomicroscopy such as keratic precipitates or edema. Corneal endothelial cell (EC) images are used to indirectly assess the health of the cornea post-keratoplasty, with evidence that morphometric changes may occur before clinical signs of rejection. We extracted over 190 novel quantitative features from EC images acquired 1 to 12 months before patients' rejection diagnosis dates and used random forest (RF) classifiers to predict future rejection. The cell borders of 171 EC images were segmented with a semi-automated approach: deep learning U-Net segmentation followed by guided manual correction. Following segmentation, we extracted novel quantitative features that robustly represent the cellular morphology in the EC images. We trained and tested an RF classifier using 5-fold cross-validation and minimal Redundancy Maximal Relevance (mRMR) feature selection. Across the 5 folds, we report an area under the receiver operating characteristic curve (AUC) of 0.87 ± 0.03, a sensitivity of 0.86 ± 0.12, and a specificity of 0.86 ± 0.10. These results suggest that a patient's future graft rejection can be accurately predicted 1 to 12 months before diagnosis, enabling clinicians to intervene earlier by modifying and/or instituting topical corticosteroid therapy, with the possibility of lowering graft failure from rejection. Success of this classifier could reduce health care costs, patient discomfort, vision loss, and the need for repeat keratoplasty.
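A minimal sketch of the cross-validated RF evaluation described above, using scikit-learn. As a stand-in for mRMR (which scikit-learn does not implement), mutual-information `SelectKBest` approximates the relevance term only; true mRMR also penalizes inter-feature redundancy. The number of selected features and the 0.5 decision threshold are assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold
from sklearn.pipeline import Pipeline

def cross_validate_rejection(X: np.ndarray, y: np.ndarray, k_features: int = 20):
    """5-fold CV of an RF on morphometric EC features (X: n x p, y: 0/1)."""
    aucs, sens, spec = [], [], []
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    for train_idx, test_idx in cv.split(X, y):
        # Pipeline keeps feature selection inside each fold, avoiding leakage.
        model = Pipeline([
            ("select", SelectKBest(mutual_info_classif, k=k_features)),
            ("rf", RandomForestClassifier(n_estimators=500, random_state=0)),
        ])
        model.fit(X[train_idx], y[train_idx])
        prob = model.predict_proba(X[test_idx])[:, 1]
        pred = (prob >= 0.5).astype(int)          # assumed decision threshold
        aucs.append(roc_auc_score(y[test_idx], prob))
        tp = np.sum((pred == 1) & (y[test_idx] == 1))
        fn = np.sum((pred == 0) & (y[test_idx] == 1))
        tn = np.sum((pred == 0) & (y[test_idx] == 0))
        fp = np.sum((pred == 1) & (y[test_idx] == 0))
        sens.append(tp / (tp + fn))
        spec.append(tn / (tn + fp))
    return np.mean(aucs), np.mean(sens), np.mean(spec)
```

Reporting the mean ± standard deviation across the five held-out folds yields summary statistics of the form quoted in the abstract.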
We are developing automated analysis of specular microscopy images of the corneal endothelial cell layer to determine quantitative biomarkers indicative of corneal health following corneal transplantation. On these images of varying quality in particular, commercial automated image analysis systems can give inaccurate results, and manual methods are very labor intensive. We developed a method to automatically segment endothelial cells with a process comprising image flattening, U-Net deep learning, and postprocessing to create individual cell segmentations. We used 130 corneal endothelial cell images following one type of corneal transplantation (Descemet stripping automated endothelial keratoplasty) with cell borders annotated by expert readers. We obtained very good pixel-wise segmentation performance (e.g., Dice coefficient = 0.87 ± 0.17 and Jaccard index = 0.80 ± 0.18 across 10 folds). The automated method segmented cells left unmarked by analysts and sometimes segmented cells differently than analysts did (e.g., one cell was split or two cells were merged). A clinically informative visual analysis of the held-out test set showed that 92% of cells within manually labeled regions were acceptably segmented and that, compared with manual segmentation, automation added 21% more correctly segmented cells. We speculate that automation could reduce 15 to 30 min of manual segmentation to 3 to 5 min of manual review and editing.
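The Dice and Jaccard figures above follow the standard pixel-wise overlap definitions; a minimal sketch for binary border masks (the function names are illustrative):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Pixel-wise Dice overlap: 2|A ∩ B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum())

def jaccard_index(pred: np.ndarray, truth: np.ndarray) -> float:
    """Pixel-wise Jaccard (IoU): |A ∩ B| / |A ∪ B|; note J = D / (2 - D)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return intersection / union
```

The identity J = D / (2 - D) is consistent with the reported pair of scores (0.87 Dice corresponds to roughly 0.77 Jaccard, within the quoted spread).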
Images of the endothelial cell layer of the cornea can be used to evaluate corneal health. Quantitative biomarkers extracted from these images, such as cell density, coefficient of variation of cell area, and cell hexagonality, are commonly used to evaluate the status of the endothelium. Fully automated endothelial image analysis systems currently in use often give inaccurate results, while semi-automated methods, which require trained image analysis readers to identify cells manually, are both challenging and time-consuming. We are investigating two deep learning methods for automatically segmenting cells in such images, comparing the performance of two deep neural networks, U-Net and SegNet. To train and test the classifiers, a dataset of 130 images was collected, each with expert-reader-annotated cell borders. We applied standard training and testing techniques to evaluate pixel-wise segmentation performance and report corresponding metrics such as the Dice and Jaccard coefficients. Visual evaluation showed that most pixel-wise errors of the U-Net were inconsequential. Results from the U-Net approach are being applied to create endothelial cell segmentations and quantify important morphological measurements for evaluating corneal health.
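For orientation, a minimal PyTorch sketch of the U-Net family of encoder-decoder networks referenced throughout these abstracts; the depth, channel widths, and class name here are illustrative assumptions, not the authors' configuration:

```python
import torch
import torch.nn as nn

def _block(c_in: int, c_out: int) -> nn.Sequential:
    """Two 3x3 conv + ReLU layers, the basic unit of the original U-Net."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
    )

class MiniUNet(nn.Module):
    """Two-level encoder-decoder with skip connections (illustrative depth)."""
    def __init__(self):
        super().__init__()
        self.enc1 = _block(1, 32)
        self.enc2 = _block(32, 64)
        self.bottom = _block(64, 128)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = _block(128, 64)           # 128 = upsampled 64 + skip 64
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = _block(64, 32)            # 64 = upsampled 32 + skip 32
        self.head = nn.Conv2d(32, 1, 1)       # one cell-border logit per pixel

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottom(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)                  # apply sigmoid + threshold downstream

# Example: one 256x256 grayscale specular image
logits = MiniUNet()(torch.randn(1, 1, 256, 256))
```

The skip connections are what distinguish U-Net from a plain encoder-decoder such as SegNet, preserving the fine border detail on which the pixel-wise metrics above depend.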