Open Access
Artificial Intelligence  |   November 2023
Monitoring the Progression of Clinically Suspected Microbial Keratitis Using Convolutional Neural Networks
Author Affiliations & Notes
  • Ming-Tse Kuo
    Department of Ophthalmology, Kaohsiung Chang Gung Memorial Hospital and Chang Gung University College of Medicine, Kaohsiung City, Taiwan
    School of Medicine, Chang Gung University, Taoyuan City, Taiwan
    School of Medicine, College of Medicine, National Sun Yat-sen University, Kaohsiung City, Taiwan
  • Benny Wei-Yun Hsu
    Department of Computer Science, National Yang Ming Chiao Tung University, Hsinchu, Taiwan
  • Yi Sheng Lin
    Department of Computer Science, National Yang Ming Chiao Tung University, Hsinchu, Taiwan
  • Po-Chiung Fang
    Department of Ophthalmology, Kaohsiung Chang Gung Memorial Hospital and Chang Gung University College of Medicine, Kaohsiung City, Taiwan
    School of Medicine, Chang Gung University, Taoyuan City, Taiwan
    School of Medicine, College of Medicine, National Sun Yat-sen University, Kaohsiung City, Taiwan
  • Hun-Ju Yu
    Department of Ophthalmology, Kaohsiung Chang Gung Memorial Hospital and Chang Gung University College of Medicine, Kaohsiung City, Taiwan
    School of Medicine, College of Medicine, National Sun Yat-sen University, Kaohsiung City, Taiwan
  • Yu-Ting Hsiao
    Department of Ophthalmology, Kaohsiung Chang Gung Memorial Hospital and Chang Gung University College of Medicine, Kaohsiung City, Taiwan
  • Vincent S. Tseng
    Department of Computer Science, National Yang Ming Chiao Tung University, Hsinchu, Taiwan
  • Correspondence: Ming-Tse Kuo, Department of Ophthalmology, Kaohsiung Chang Gung Memorial Hospital and Chang Gung University College of Medicine, No. 123, Dapi Rd., Niaosong Dist., Kaohsiung City 833, Taiwan (R.O.C.). e-mail: [email protected] 
  • Vincent S. Tseng, Department of Computer Science, National Yang Ming Chiao Tung University, Hsinchu, Taiwan, No. 1001, Daxue Rd., East Dist., Hsinchu City 300, Taiwan (R.O.C.). e-mail: [email protected] 
  • Footnotes
     MTK and BWYH contributed equally to this study.
Translational Vision Science & Technology November 2023, Vol.12, 1. doi:https://doi.org/10.1167/tvst.12.11.1
Abstract

Purpose: We aimed to determine whether a convolutional neural network (CNN)-based method, comprising a feature extractor and an identifier, can be applied to monitor the progression of keratitis during the management of suspected microbial keratitis (MK).

Methods: This multicenter longitudinal cohort study included patients with suspected MK undergoing serial external eye photography at the 5 branches of Chang Gung Memorial Hospital from August 20, 2000, to August 19, 2020. Data were primarily analyzed from January 1 to March 25, 2022. The CNN-based model was evaluated via F1 score and accuracy. The area under the receiver operating characteristic curve (AUROC) was used to measure the trade-off between sensitivity and specificity.

Results: The model was trained using 1456 image pairs from 468 patients. With 408 validation image pairs from 117 patients, models trained on both the identifier and the feature extractor (full training) showed statistically significantly higher accuracy (P < 0.05) than models in which only the identifier was trained. The full training EfficientNet b3-based model achieved F1 scores of 90.2% (getting better) and 82.1% (becoming worse), 87.3% accuracy, and 94.2% AUROC for 505 getting better and 272 becoming worse test image pairs from 452 patients.

Conclusions: A CNN-based deep learning approach applied to suspected MK can monitor progression or regression during treatment by comparing external eye image pairs.

Translational Relevance: The study bridges the gap between state-of-the-art CNN-based deep learning algorithms for ocular image analysis and the clinical care of patients with suspected MK.

Introduction
Microbial keratitis (MK) is a vision-threatening infection of the cornea.1 MK affects individuals of all ages worldwide.2–6 The major pathogens include bacteria, fungi, viruses, and parasites.3 According to the preferred practice pattern for bacterial keratitis (BK) in the United States,7 the majority of community-acquired BK cases are treated with empirical therapy without smear or culture. Laboratory detection of the infectious source is recommended only for patients with large central corneal infiltrates, multiple corneal infiltrates, chronic lesions with a poor response to broad-spectrum antibiotics, recent corneal surgery, or atypical clinical features suggestive of mycobacterial, fungal, or acanthamoebic keratitis.7 
However, the features of atypical MK are easily overlooked or not present at the initial visit,8–10 so many patients are initially treated as having typical BK or herpes keratitis. Moreover, not all MK can be confirmed via laboratory tests: the pathogenic microorganism can be identified in only about one third to two thirds of patients with suspected MK.10,11 Thus, most physicians treat their patients with MK based on the ocular manifestations and adjust management strategies by observing the clinical response to prior therapy. 
To reliably identify the clinical response to initial treatment for patients with MK, a clinician routinely takes external eye photographs and compares the shape, extent, and brightness of corneal infiltrates, as well as other relevant signs of ocular inflammation, between current and previous image records. However, cautiously and judiciously comparing images during limited outpatient hours places a considerable burden on eye doctors, and rushed subjective image judgments may lead to biased interpretations and erratic medical decisions. Such misjudgments may be rare for an experienced corneal expert, but they occur more often among less experienced ophthalmologists. Accordingly, there is an unmet need for an automatic and objective image review tool for monitoring and managing MK. 
The application of convolutional neural networks (CNNs) for diagnosing, staging, and predicting eye diseases through images is intensifying. Through deep learning, CNNs can identify keratitis,12 differentiate infectious from noninfectious keratitis,13 and diagnose BK and fungal keratitis (FK).14–17 This technique can also differentiate active corneal ulcers from healed scars in FK.18 To explore a CNN-based approach for monitoring ocular changes in patients with presumed MK, we aimed to develop and assess a model that rapidly and objectively identifies the progression of keratitis by comparing paired images from serial visits within the same episode. This deep learning system was expected to alert physicians to inadequate treatment or misdiagnosis in patients with presumed MK. 
Materials and Methods
Study Design and Subjects
This multicenter longitudinal cohort study included patients with clinically suspected MK (CSMK) undergoing serial external eye photography with a camera-mounted slit-lamp biomicroscope at the 5 branches of Chang Gung Memorial Hospital (CGMH) from August 20, 2000, to August 19, 2020. Slit-lamp photographs were collected if the corneal infiltrate was in focus with a clear margin, was not interrupted by reflected illumination light, and more than 90% of the cornea was exposed without being obscured by the eyelids. The collected images were qualified by two corneal specialists (authors P.-C.F. and H.-J.Y.). To protect patient privacy, the raw pictures obtained from the CGMH database were initially stored and qualified on a personal computer with internet access restricted. After group assignment and removal of identifying data from the qualified photographs, the models were trained and validated using these identification-delinked images. A total of 1037 subjects were enrolled. CSMK was defined as a clinical diagnosis of presumed MK made by the treating clinician based on clinical features (history and ocular examination).19,20 Patients were excluded if the initial diagnosis was autoimmune-related noninfectious keratitis, neurotrophic corneal ulcer, or corneal perforation.21 The subjects in the training and validation datasets were enrolled from Kaohsiung CGMH, whereas those in the final test dataset were from the other four branches (Keelung, Taipei, Linkou, and Chiayi). The training dataset comprised 1456 image pairs from 468 patients (201 women [43%] and 267 men [57%]; mean [SD] age = 47.6 [20.7] years). The validation dataset comprised 408 image pairs from 117 subjects (58 women [49%] and 59 men [51%]; mean [SD] age = 49.0 [17.9] years). The image pairs in both the training and validation datasets were equally divided into two kinds of image pairs (the worse presentation image compared with the better presentation image, and vice versa) without specifying the sequence of events. The final test dataset, restricted to the actual event sequence, consisted of 505 getting better image pairs and 272 becoming worse image pairs from 452 subjects (222 women [49%] and 230 men [51%]; mean [SD] age = 48.2 [20.7] years). All images were shot under white-light illumination without slit-beam enhancement and were confirmed by the consensus of three senior corneal subspecialists. The Institutional Review Board of Chang Gung Medical Foundation approved the investigation (approval code: 202001437B0), which adhered to the Declaration of Helsinki and the ARVO statement on human subjects. 
Image Preprocessing of Subjects’ External Eye Photographs
The image preprocessing procedure was consistent with our previous study.15 In brief, the shooting date and patient identification information in each photograph were automatically cropped out in batches with specially designed software. All contributing images were resized to 224 × 224 pixels. The red, green, and blue (RGB) pixel values of each image were normalized to the range 0 to 1 before entering the training process of the ImageNet-pretrained CNN-based model. 
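The preprocessing pipeline described above can be illustrated with a minimal sketch, assuming de-identified images are already on disk; the batch cropping software used in the study is not public, so only the resize and normalization steps are reproduced here.

```python
# Minimal sketch of the stated preprocessing: resize to 224 x 224 and
# scale RGB values to [0, 1]. The file path is a placeholder.
from PIL import Image
import torch
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),  # resize to 224 x 224 pixels
    transforms.ToTensor(),          # converts to a tensor with RGB values in [0, 1]
])

def load_image(path: str) -> torch.Tensor:
    """Load one external eye photograph as a (3, 224, 224) tensor."""
    return preprocess(Image.open(path).convert("RGB"))
```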
Establishment of the Model for Monitoring the Progression of Ocular Surface Conditions of MK
The framework shown in Figure 1 is the newly established deep learning method, KGressDx, for identifying the progression or regression of the ocular surface condition of MK by comparing a pair of external eye photographs. The training images were used to train an identifier (a multilayer perceptron) with or without freezing the pretrained weights of a feature extractor (a CNN-based algorithm), whereas the validation images were used to compare the performance of the trained models. The CNN-based feature extractors included the ResNet50, SEResNet50, EfficientNet-b0, and EfficientNet-b3 algorithms. We used pretrained versions of these CNN-based models as the backbones of the feature extractors for better initial weights.22 After randomization, each CNN-based model was trained either on the identifier only (partial training) or on both the identifier and the feature extractor (full training), targeting the greatest area under the receiver operating characteristic curve (AUROC). To generate the optimal model, we empirically tuned the hyperparameters of each model, including learning rate, weight decay, and batch size, according to the 5-fold cross-validation results. Grad-CAM++, a visual explanation approach, was used to compare these models. The models were implemented in PyTorch, and all experiments were run on NVIDIA GeForce GTX 1080 GPUs. 
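For illustration, a minimal PyTorch sketch of the paired-image architecture in Figure 1B is given below. A shared ResNet50 backbone stands in for the feature extractor G, and a two-layer perceptron stands in for the identifier I; the hidden-layer size and the direction of the feature vector subtraction are assumptions, as they are not specified in the text.

```python
import torch
import torch.nn as nn
from torchvision import models

class KGressDxSketch(nn.Module):
    """Illustrative reimplementation of Figure 1B: a shared CNN feature
    extractor G and an MLP identifier I fed with the subtraction of the
    two feature vectors. Layer sizes are assumptions."""

    def __init__(self, full_training: bool = True):
        super().__init__()
        backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
        backbone.fc = nn.Identity()          # keep the 2048-d feature vector
        self.extractor = backbone            # G in Figure 1B
        if not full_training:                # "partial training": freeze G
            for p in self.extractor.parameters():
                p.requires_grad = False
        self.identifier = nn.Sequential(     # I in Figure 1B (MLP)
            nn.Linear(2048, 256), nn.ReLU(),
            nn.Linear(256, 2),               # getting better vs. becoming worse
        )

    def forward(self, anterior: torch.Tensor, posterior: torch.Tensor):
        # Subtraction of the two feature vectors, as in Figure 1B; the
        # direction (posterior minus anterior) is an assumption.
        diff = self.extractor(posterior) - self.extractor(anterior)
        return self.identifier(diff)
```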
Figure 1.
 
The framework of the convolutional neural network-based model, KGressDx, for identifying the progression of keratitis toward amelioration or exacerbation in suspected microbial keratitis using external eye image pairs. (A) The dataset was split into two image pair sets for training and validation. After data preprocessing, we adopted four CNN-based models (ResNet50, SEResNet50, EfficientNet-b0, and EfficientNet-b3) as feature extractors, each trained with a multilayer perceptron as an identifier. During validation, performance on the validation dataset was used for hyperparameter tuning to optimize the models. Finally, the optimized model infers from testing data and generates Grad-CAM++ results for visual explanation. (B) The network architecture of the proposed KGressDx method. Feature vectors were extracted from the anterior and posterior images through the feature extractor G. The inputs of the identifier I were the subtractions of the feature vectors.
Diagnostic Validity
The validation dataset was used to compare the performance of trained models with or without simultaneous learning of the CNN-based algorithm together with the identifier. The testing dataset determined the average performance of the better-trained models. The performance indexes, including recall (better), recall (worse), precision (better), precision (worse), F1 score (better), F1 score (worse), and accuracy, were used to evaluate the models. Recall (better) is the sensitivity in correctly identifying an ordered image pair showing improvement, whereas recall (worse) is the sensitivity in recognizing an image pair showing deterioration. Precision is the positive predictive value of the getting better or becoming worse identification results. The F1 score, also known as the Dice similarity coefficient, is the harmonic mean of recall and precision. 
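As a worked example, the per-class performance indexes defined above can be computed with scikit-learn; the labels and scores below are hypothetical model outputs, not study data.

```python
# Hypothetical per-class recall, precision, F1, accuracy, and AUROC.
from sklearn.metrics import (precision_recall_fscore_support,
                             accuracy_score, roc_auc_score)

y_true  = [1, 1, 0, 1, 0, 0, 1]   # 1 = getting better, 0 = becoming worse
y_pred  = [1, 1, 0, 0, 0, 1, 1]
y_score = [0.9, 0.8, 0.2, 0.4, 0.1, 0.6, 0.7]  # probability of "better"

# labels=[1, 0] returns the "better" metrics first, then "worse"
prec, rec, f1, _ = precision_recall_fscore_support(y_true, y_pred, labels=[1, 0])
print(f"recall (better) = {rec[0]:.3f},  recall (worse) = {rec[1]:.3f}")
print(f"precision (better) = {prec[0]:.3f},  precision (worse) = {prec[1]:.3f}")
print(f"F1 (better) = {f1[0]:.3f},  F1 (worse) = {f1[1]:.3f}")
print(f"accuracy = {accuracy_score(y_true, y_pred):.3f}")
print(f"AUROC = {roc_auc_score(y_true, y_score):.3f}")
```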
Statistical Analysis
To determine the statistical differences between models, Fisher's exact test and the Z score test were used for pairwise comparison of the performance indexes and AUROC, respectively. Significance was established at P < 0.05, and analyses were performed with GraphPad Prism version 9.3.1 for Windows (GraphPad Software, San Diego, CA, USA). 
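For illustration, a pairwise Fisher's exact test on accuracy between two hypothetical models evaluated on the same 408 validation pairs might look as follows; the counts are invented for the example, and scipy is shown here, whereas the study used GraphPad Prism.

```python
# Hypothetical 2 x 2 contingency table for a pairwise accuracy comparison.
from scipy.stats import fisher_exact

# rows: model A vs. model B; columns: correct vs. incorrect pairs
table = [[356, 52],    # model A: 356/408 pairs correct (invented counts)
         [322, 86]]    # model B: 322/408 pairs correct (invented counts)
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, P = {p_value:.4f}")  # significant if P < 0.05
```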
Results
Comparison of the Partial and Full-Training KGressDx Models for Distinguishing the Progression of Keratitis
For all CNN-based algorithms, the partial training models showed statistically lower accuracy and AUROC than the full training models (Table 1, Fig. 2). For the ResNet50-based feature extractor, the partial training model had significantly lower recall (worse) and precision (better) than the full training model. Conversely, for the SEResNet50-based model, the partial training model revealed significantly lower recall (better) and precision (worse) than the full training model (see Table 1). For both the EfficientNet b0- and b3-based feature extractors, the full training model exhibited significantly better results on all performance indexes than the partial training model (see Table 1). 
Table 1.
 
Comparison of Deep Learning Models for Distinguishing Keratitis Presentations Getting Better or Worse With the Validation Dataset
Figure 2.
 
Comparison of partial and full training KGressDx models via area under the receiver operating characteristic curve for distinguishing keratitis presentations toward improvement or deterioration with the validation dataset. (A) ResNet50-based feature extractor, (B) SEResNet50-based feature extractor, (C) EfficientNet b0-based feature extractor, and (D) EfficientNet b3-based feature extractor. AUC, area under the receiver operating characteristic curve; fixed, only the multilayer perceptron identifier of the model was trained during the learning process; true positive rate, sensitivity; false positive rate, 1 – specificity.
Performance of the Full Training KGressDx Models for Identifying the Progression of Keratitis
The average performances of the 4 full training models for distinguishing the progression or regression of keratitis are shown in Table 2. The EfficientNet b3-based feature extractor had the highest average accuracy (87.3%) among the 4 models, with an average recall (better) of 85.9%, recall (worse) of 83.2%, precision (better) of 90.8%, precision (worse) of 81.1%, F1 score (better) of 90.2%, and F1 score (worse) of 82.1%. Statistically, this model had significantly higher recall (better) and precision (worse) than the ResNet50- and SEResNet50-based feature extractors. Moreover, the EfficientNet b3-based extractor showed significantly higher F1 score (better), F1 score (worse), and accuracy than the two non-EfficientNet extractors. However, we did not find a significant difference between the two EfficientNet models. The average AUROCs of the ResNet50-, SEResNet50-, EfficientNet b0-, and EfficientNet b3-based extractors were 92.3%, 91.6%, 92.7%, and 94.2%, respectively (Fig. 3). The difference was significant when comparing the SEResNet50- with the EfficientNet b3-based extractor (P = 0.0477). 
Table 2.
 
Performance of the KGressDx Models for Distinguishing Keratitis Presentations Getting Better or Worse Using the Test Dataset
Figure 3.
 
Comparison of four full training KGressDx models via area under the receiver operating characteristic curve for identifying the progression of keratitis toward regression or progression using the test dataset. (A) ResNet50-based feature extractor, (B) SEResNet50-based feature extractor, (C) EfficientNet b0-based feature extractor, and (D) EfficientNet b3-based feature extractor. C1–C5 represent 5 repeated runs per model. AUC, area under the receiver operating characteristic curve; true positive rate, sensitivity; false positive rate, 1 – specificity.
Discrepant Analysis of the Full Training Models for Differentiating the Progression of Keratitis
In correctly identified cases, we observed that the feature extractor successfully focused on vital inflammation signs, such as ulcers with dense infiltrates, in the worse images (Figs. 4A, 4B). The model focused on a much wider area when there were fewer apparent signs in the better photographs. However, the model could erroneously determine the progression of keratitis toward amelioration or deterioration. When ulcers with subtle infiltrates were shown on both the anterior and posterior photographs, the models might attend to the eyelids, the inflamed conjunctiva, or the tip of a cotton-tipped applicator (Figs. 4C, 4D). 
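A hedged sketch of generating such Grad-CAM++ heatmaps with the open-source pytorch-grad-cam package is shown below, reusing the KGressDxSketch class from the Methods sketch above. The paper does not name its Grad-CAM++ implementation, and the single-input wrapper (precomputing the anterior feature vector so the heatmap is produced for the posterior image), the choice of target layer, and the class index are all assumptions.

```python
import torch
import torch.nn as nn
from pytorch_grad_cam import GradCAMPlusPlus
from pytorch_grad_cam.utils.model_targets import ClassifierOutputTarget

class PosteriorWrapper(nn.Module):
    """Make the pair model single-input: the anterior feature vector is
    precomputed, so Grad-CAM++ sees one forward pass over the posterior
    image only. This wrapper is an illustrative assumption."""
    def __init__(self, pair_model, anterior):
        super().__init__()
        self.pair_model = pair_model
        with torch.no_grad():
            self.anterior_feat = pair_model.extractor(anterior)
    def forward(self, posterior):
        diff = self.pair_model.extractor(posterior) - self.anterior_feat
        return self.pair_model.identifier(diff)

model = KGressDxSketch()                 # from the Methods sketch above
anterior = torch.rand(1, 3, 224, 224)    # placeholder image pair
posterior = torch.rand(1, 3, 224, 224)
wrapped = PosteriorWrapper(model, anterior)
# Assumed target layer: the last ResNet50 convolutional block.
cam = GradCAMPlusPlus(model=wrapped, target_layers=[model.extractor.layer4[-1]])
heatmap = cam(input_tensor=posterior,
              targets=[ClassifierOutputTarget(1)])  # assume index 1 = getting better
```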
Figure 4.
 
Representative Grad-CAM++ results for analyzing correctly and incorrectly identified cases. (A) A case accurately identified as amelioration of keratitis. (B) A case accurately identified as deterioration of keratitis. (C) A case falsely detected as improvement of keratitis. (D) A case falsely detected as exacerbation of keratitis. \(P_i^A\) and \(P_i^P\) represent the anterior and posterior images of the image pair \(P_i\).
Discussion
Given that most patients with suspected MK are treated with empirical regimens and without microbial confirmation, owing to typical community-acquired presentations or failed microorganism identification in the laboratory, it is critical to observe the progression of keratitis by comparing current external eye photographs with previous ones during treatment. Even for patients with laboratory confirmation, it is prudent to adjust the dosage according to the alteration of the keratitis presentation on serial images. This study proposed a CNN-based KGressDx model to monitor ocular change via serial external eye image pairs in patients with suspected MK. After full training, the EfficientNet b3-based model demonstrated good performance (94.2% AUROC, 87.3% accuracy, and 86.2% average F1 score) in identifying the progression of keratitis toward amelioration or exacerbation. The results of this study may set a new benchmark for this field. 
The importance of carefully reviewing ocular changes in patients with suspected MK must be emphasized. To make an optimal medical decision, the clinician should assess the evolution of the characteristic external eye signs, including conjunctival injection, epitheliopathy, suppurative or nonsuppurative stromal inflammation, size and marginal pattern of the stromal infiltrate, locus of the stromal inflammation, existence of endothelial plaque, and amount of hypopyon. Serial external eye photography via slit-lamp camera, taken twice weekly at the active ulcer stage, once weekly at the healing phase, and once at the completely re-epithelialized scarring phase, may provide an objective basis for monitoring treatment response, adjusting intervention regimens, or referring the patient to a medical center for laboratory evaluation.23,24 With effective antimicrobial therapy, the progression of keratitis is halted, and stromal infiltrate consolidation, subsiding hypopyon, re-epithelialization, and stromal scarring can be observed. In addition to slit-lamp external eye photography, anterior segment optical coherence tomography (AS-OCT) provides an alternative objective measure of corneal infiltrate and scar size for monitoring corneal changes during the treatment of suspected MK.25 However, considering popularity and cost-effectiveness, external eye photography is more suitable than AS-OCT for most clinical offices. 
CNN-based methods using external eye images have shown promising performance in diagnosing MK. The accuracy of image diagnosis of BK and FK was around 70% with a deep learning approach using a pure CNN architecture.14,15 The performance can be improved to nearly 80% accuracy by an ensemble approach,17 hybrid learning,16 corneal segmentation before deep learning,26,27 and deep sequential feature learning.28 Recently, Tiwari et al. reported that a CNN classified corneal ulcers and scars with high accuracy (88%), and the CNN visualizations correlated with clinically relevant features, such as corneal infiltrate, hypopyon, and conjunctival injection.18 The authors concluded that the CNN focused on clinically relevant components and was an inexpensive diagnostic approach that may aid triage in communities with limited resources.18 In this study, we further demonstrated that a CNN-based model can objectively and longitudinally assist in monitoring the clinical response, toward progression or regression, in patients with suspected MK. 
However, the full training KGressDx models showed better performance in identifying the progression of keratitis toward amelioration than toward exacerbation (see Table 2). A possible reason is that the image pairs in the training and validation datasets were not restricted to the sequence of events. According to the actual event sequence, the getting better image pairs outnumbered the becoming worse pairs in these two datasets. Therefore, the feature detection capability of these CNN-based models for recognizing getting better image pairs may be stronger than that for distinguishing becoming worse image pairs, as revealed in the results on the test dataset, in which the image pairs were restricted to the actual event sequence (see Table 2). The performance of these models could be improved by including more becoming worse image pairs that follow the actual sequence of events. 
Based on the high performance of the KGressDx model in identifying the progression or regression of CSMK, a deep learning system for visual outcome prediction may be developed in the future. Enzor et al. adopted a machine learning approach to predict the prognosis of patients with Pseudomonas keratitis.29 They found that critical factors, including initial visual acuity, older age, larger infiltrate or epithelial defect at presentation, and greater maximal depth of stromal necrosis, can predict a poor visual outcome. Loo et al. used a deep learning-based auto-segmentation algorithm and a machine learning approach (support vector machine) to extract biomarkers of microbial keratitis (stromal infiltrate, white blood cell infiltration, corneal edema, hypopyon, and epithelial defect).30 They found a statistically significant correlation between best-corrected visual acuity and these biomarkers (Pearson's correlation coefficient r = 0.59–0.74), except for corneal edema (r = 0.38). However, they did not use these biomarkers to predict visual outcomes. Previous studies also show that prognostic factors include age, risk factors (contact lens wear, trauma, and ocular surface comorbidity), causative pathogens, antimicrobial resistance, topical steroid use, systemic immunosuppression, and urbanization.31–34 Therefore, we speculate that a multimodal deep learning model joining characteristic parameters (age, sex, socioeconomic information, geographic data, predisposing factors, underlying ocular and systemic conditions, topical and systemic medications used, and laboratory data) with slit-lamp photographs could achieve high clinical significance in predicting the visual outcome of a patient with CSMK; a sketch of such a late-fusion design follows. 
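A purely illustrative late-fusion sketch of such a multimodal model is given below; no such model was trained in this study, and the tabular feature count, layer sizes, and regression target are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

class MultimodalPrognosisSketch(nn.Module):
    """Hypothetical late-fusion model: an image branch for the slit-lamp
    photograph and a tabular branch for clinical parameters, concatenated
    before a regression head. All sizes are illustrative assumptions."""

    def __init__(self, n_tabular: int = 12):
        super().__init__()
        cnn = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
        cnn.fc = nn.Identity()
        self.image_branch = cnn                   # slit-lamp photograph
        self.tabular_branch = nn.Sequential(      # age, sex, risk factors, ...
            nn.Linear(n_tabular, 64), nn.ReLU())
        self.head = nn.Linear(2048 + 64, 1)       # e.g., final visual acuity

    def forward(self, image, tabular):
        fused = torch.cat([self.image_branch(image),
                           self.tabular_branch(tabular)], dim=1)
        return self.head(fused)

model = MultimodalPrognosisSketch()
pred = model(torch.rand(4, 3, 224, 224), torch.rand(4, 12))  # batch of 4
```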
The performance of the CNN-based model may also be affected by image quality, which depends on the technique of the external eye photographer and the compliance of the patient being photographed. Photographic brightness, focus, and booklet effect may influence image quality. If the quality of most training images is low, it is challenging to construct high-performance models. Although uniform preprocessing methods could enhance image quality, they may restrict the generalizability of modeling because real-world confounding factors are hard to resolve with these methods. Therefore, there is still room to improve the performance of the current KGressDx model via the continuous inclusion of high-quality image pairs obtained from a standardized photographing procedure or a future automatic slit-lamp camera. 
We should be aware that the clinical response varies depending on the responsible pathogen, duration of infection, predisposing factors, and host response.20 The clinical response to therapy may be difficult to appreciate through image assessment within the first few days, owing to increased inflammation from microorganism and host factors, the reaction to corneal scraping, and frequent topical antibiotics. Patients may present a short-term extravasated proteinaceous response on the corneal ulcer and an intensely increased bloodshot appearance of the ocular surface after such management, which could affect the judgment of keratitis progression. Therefore, the physician should consider all these factors when reading the report of the KGressDx model. We recommend comparing serial photographs at intervals of more than 3 days to minimize the influence of these confounding factors. 
Physicians can focus on the critical regions of the pictures to assess the progression of keratitis. For machines, however, the input images are complicated compositions, and deep learning models collect as much information as possible from their receptive fields to achieve accurate identification. Hence, after abundant training, deep learning models may learn unimportant features, such as eyelashes or cotton swabs (see Figs. 4C, 4D), mistaking them for the main target. Introducing human knowledge or adopting other learning techniques could make deep learning models concentrate on the key regions; for instance, cropping unnecessary parts of images and training a lesion segmentation model are potential solutions that would allow misclassified images to be recognized correctly. Although the current KGressDx model, like most clinical and laboratory tests, cannot guarantee 100% accuracy in determining the change of keratitis toward improvement or deterioration, it may provide an intuitive and convenient way for a clinician to monitor the response to treatment. 
In conclusion, the KGressDx model can identify the clinical progression of keratitis toward amelioration or exacerbation. The model demonstrates excellent generalization based on the results on the testing data. This model could lessen the physician's burden because it showed potential as an objective approach to monitoring keratitis progression and therapeutic response in patients with suspected MK. Therefore, this study fills the gap between the investigation of CNN-based algorithms for external eye image analysis and the care of patients with suspected MK during treatment. 
Acknowledgments
The authors thank the Microbial and Virus Bank, Kaohsiung Chang Gung Memorial Hospital, for microbial collections to confirm the molecular diagnosis of microbial keratitis. The authors also thank Miss Yu-Ting Huang for the support of the researchers in the organizational procedures between the two cooperating institutions. 
Supported by the Chang Gung Research Proposal (CMRPG8M0991 and CORPG8L0021) and the Ministry of Science and Technology (MOST 109-2314-B-182A-018-MY3). The sponsors or funding organizations had no role in the design or conduct of this research. 
Disclosure: M.-T. Kuo, None; B.W.-Y. Hsu, None; Y.S. Lin, None; P.-C. Fang, None; H.-J. Yu, None; Y.-T. Hsiao, None; V.S. Tseng, None 
References
Austin A, Lietman T, Rose-Nussbaumer J. Update on the management of infectious keratitis. Ophthalmology. 2017; 124(11): 1678–1689. [CrossRef] [PubMed]
Ting DSJ, Ho CS, Deshmukh R, Said DG, Dua HS. Infectious keratitis: an update on epidemiology, causative microorganisms, risk factors, and antimicrobial resistance. Eye (Lond). 2021; 35(4): 1084–1101. [CrossRef] [PubMed]
Ung L, Bispo PJM, Shanbhag SS, Gilmore MS, Chodosh J. The persistent dilemma of microbial keratitis: global burden, diagnosis, and antimicrobial resistance. Surv Ophthalmol. 2019; 64(3): 255–271. [CrossRef] [PubMed]
Khor WB, Prajna VN, Garg P, et al. The Asia Cornea Society Infectious Keratitis Study: a prospective multicenter study of infectious keratitis in Asia. Am J Ophthalmol. 2018; 195: 161–170. [CrossRef] [PubMed]
Lee YS, Tan HY, Yeh LK, et al. Pediatric microbial keratitis in Taiwan: clinical and microbiological profiles, 1998-2002 versus 2008-2012. Am J Ophthalmol. 2014; 157(5): 1090–1096. [CrossRef] [PubMed]
Chen CA, Hsu SL, Hsiao CH, et al. Comparison of fungal and bacterial keratitis between tropical and subtropical Taiwan: a prospective cohort study. Ann Clin Microbiol Antimicrob. 2020; 19(1): 11. [CrossRef] [PubMed]
Lin A, Rhee MK, Akpek EK, et al. Bacterial keratitis preferred practice pattern. Ophthalmology. 2019; 126(1): 1–55. [CrossRef]
Ong HS, Sharma N, Phee LM, Mehta JS. Atypical microbial keratitis. Ocul Surf. 2023; 28: 424–439. [CrossRef] [PubMed]
Shah YS, Stroh IG, Zafar S, et al. Delayed diagnoses of Acanthamoeba keratitis at a tertiary care medical centre. Acta Ophthalmol. 2021; 99(8): 916–921. [CrossRef] [PubMed]
Gopinathan U, Sharma S, Garg P, Rao GN. Review of epidemiological features, microbiological diagnosis and treatment outcome of microbial keratitis: experience of over a decade. Indian J Ophthalmol. 2009; 57(4): 273–279. [PubMed]
Tuft S, Bunce C, De S, Thomas J. Utility of investigation for suspected microbial keratitis: a diagnostic accuracy study. Eye (Lond). 2023; 37(3): 415–420. [CrossRef] [PubMed]
Li Z, Jiang J, Chen K, et al. Preventing corneal blindness caused by keratitis using artificial intelligence. Nat Commun. 2021; 12(1): 3738. [CrossRef] [PubMed]
Gu H, Guo Y, Gu L, et al. Deep learning for identifying corneal diseases from ocular surface slit-lamp photographs. Sci Rep. 2020; 10(1): 17851. [CrossRef] [PubMed]
Kuo MT, Hsu BW, Lin YS, et al. Comparisons of deep learning algorithms for diagnosing bacterial keratitis via external eye photographs. Sci Rep. 2021; 11(1): 24227. [CrossRef] [PubMed]
Kuo MT, Hsu BW, Yin YK, et al. A deep learning approach in diagnosing fungal keratitis based on corneal photographs. Sci Rep. 2020; 10(1): 14424. [CrossRef] [PubMed]
Koyama A, Miyazaki D, Nakagawa Y, et al. Determination of probability of causative pathogen in infectious keratitis using deep learning algorithm of slit-lamp images. Sci Rep. 2021; 11(1): 22642. [CrossRef] [PubMed]
Ghosh AK, Thammasudjarit R, Jongkhajornpong P, Attia J, Thakkinstian A. Deep learning for discrimination between fungal keratitis and bacterial keratitis: DeepKeratitis. Cornea. 2022; 41(5): 616–622. [CrossRef] [PubMed]
Tiwari M, Piech C, Baitemirova M, et al. Differentiation of active corneal infections from healed scars using deep learning. Ophthalmology. 2022; 129(2): 139–146. [CrossRef] [PubMed]
Cabrera-Aguas M, Khoo P, George CRR, Lahra MM, Watson SL. Predisposing factors, microbiological features and outcomes of patients with clinical presumed concomitant microbial and herpes simplex keratitis. Eye (Lond). 2022; 36(1): 86–94. [CrossRef] [PubMed]
Moussa G, Hodson J, Gooch N, et al. Calculating the economic burden of presumed microbial keratitis admissions at a tertiary referral centre in the UK. Eye (Lond). 2021; 35(8): 2146–2154. [CrossRef] [PubMed]
Cabrera-Aguas M, Khoo P, Watson SL. Presumed microbial keratitis cases resulting in evisceration and enucleation in Sydney, Australia. Ocul Immunol Inflamm. 2023; 31(1): 224–230. [CrossRef] [PubMed]
Poojary R, Pai A. Comparative study of model optimization techniques in fine-tuned CNN models. 2019 International Conference on Electrical and Computing Technologies and Applications. 2019;1–4.
O'Brien TP. Management of bacterial keratitis: beyond exorcism towards consideration of organism and host factors. Eye (Lond). 2003; 17(8): 957–974. [CrossRef] [PubMed]
Allan BD, Dart JK. Strategies for the management of microbial keratitis. Br J Ophthalmol. 1995; 79(8): 777–786. [CrossRef] [PubMed]
Abdelghany AA, D'Oria F, Alio Del Barrio J, Alio JL. The value of anterior segment optical coherence tomography in different types of corneal infections: an update. J Clin Med. 2021; 10(13): 2841. [CrossRef] [PubMed]
Hung N, Shih AK, Lin C, et al. Using slit-lamp images for deep learning-based identification of bacterial and fungal keratitis: model development and validation with different convolutional neural networks. Diagnostics (Basel). 2021; 11(7): 1246. [CrossRef] [PubMed]
Mayya V, Kamath Shevgoor S, Kulkarni U, Hazarika M, Barua PD, Acharya UR. Multi-scale convolutional neural network for accurate corneal segmentation in early detection of fungal keratitis. J Fungi (Basel). 2021; 7(10): 850. [CrossRef] [PubMed]
Xu Y, Kong M, Xie W, et al. Deep sequential feature learning in clinical image classification of infectious keratitis. Engineering. 2021; 7(7): 1002–1010. [CrossRef]
Enzor R, Bowers EMR, Perzia B, et al. Comparison of clinical features and treatment outcomes of Pseudomonas aeruginosa keratitis in contact lens and non-contact lens wearers. Am J Ophthalmol. 2021; 227: 1–11. [CrossRef] [PubMed]
Loo J, Woodward MA, Prajna V, et al. Open-source automatic biomarker measurement on slit-lamp photography to estimate visual acuity in microbial keratitis. Transl Vis Sci Technol. 2021; 10(12): 2. [CrossRef] [PubMed]
Stapleton F. The epidemiology of infectious keratitis. Ocul Surf. 2023; 28: 351–363. [CrossRef] [PubMed]
Hsu HY, Ernst B, Schmidt EJ, Parihar R, Horwood C, Edelstein SL. Laboratory results, epidemiologic features, and outcome analyses of microbial keratitis: a 15-year review from St. Louis. Am J Ophthalmol. 2019; 198: 54–62. [CrossRef] [PubMed]
Ting DSJ, Cairns J, Gopal BP, et al. Risk factors, clinical outcomes, and prognostic factors of bacterial keratitis: the Nottingham Infectious Keratitis Study. Front Med (Lausanne). 2021; 8: 715118. [CrossRef] [PubMed]
Prajna NV, Krishnan T, Mascarenhas J, et al. Predictors of outcome in fungal keratitis. Eye (Lond). 2012; 26(9): 1226–1231. [CrossRef] [PubMed]