Open Access
Articles  |   April 2022
Assessment of Anterior Uveitis Through Anterior-Segment Optical Coherence Tomography and Artificial Intelligence-Based Image Analyses
Author Affiliations & Notes
  • Martin Arman Sorkhabi
    Department of Ophthalmology, Rigshospitalet, Glostrup, University of Copenhagen, Copenhagen, Denmark
  • Ivan O. Potapenko
    Department of Ophthalmology, Rigshospitalet, Glostrup, University of Copenhagen, Copenhagen, Denmark
Department of Clinical Medicine, University of Copenhagen, Copenhagen, Denmark
  • Tomas Ilginis
    Department of Ophthalmology, Rigshospitalet, Glostrup, University of Copenhagen, Copenhagen, Denmark
  • Mark Alberti
    Department of Ophthalmology, Rigshospitalet, Glostrup, University of Copenhagen, Copenhagen, Denmark
  • Javier Cabrerizo
    Department of Ophthalmology, Rigshospitalet, Glostrup, University of Copenhagen, Copenhagen, Denmark
Department of Clinical Medicine, University of Copenhagen, Copenhagen, Denmark
    Copenhagen Eye Foundation, Copenhagen, Denmark
  • Correspondence: Martin Arman Sorkhabi, Department of Ophthalmology, Rigshospitalet, Glostrup, University of Copenhagen, Copenhagen, Denmark. e-mail: [email protected] 
  • Footnotes
    *  MAS and IOP contributed equally to this work.
Translational Vision Science & Technology April 2022, Vol.11, 7. doi:https://doi.org/10.1167/tvst.11.4.7
Abstract

Purpose: The purpose of this study was to develop an automated artificial intelligence (AI)-based method to quantify inflammation in the anterior chamber (AC) using anterior-segment optical coherence tomography (AS-OCT) and to explore the correlation between AI-assisted, AS-OCT-based inflammation analyses and clinical grading of anterior uveitis by the Standardization of Uveitis Nomenclature (SUN).

Methods: A prospective, double-blinded study of AS-OCT images of 32 eyes of 19 patients, acquired with the Tomey CASIA2. OCT images were analyzed with proprietary AI-based software. Anatomic boundaries of the AC were segmented automatically by the AI software, and Spearman's rank correlations between parameters related to AC cellular inflammation were calculated.

Results: No significant (p = 0.6602) differences were found in the analyzed AC areas across the different SUN gradings, suggesting accurate and unbiased border detection/AC segmentation. Segmented AC areas were processed by the AI software, and particles within the borders of the AC were automatically counted. Statistical analysis showed significant (p < 0.001) correlation between clinical SUN grading and the AI-detected particle count (Spearman ρ = 0.7077) and particle density (Spearman ρ = 0.7035). A significant (p < 0.001) correlation (Pearson's r = 0.9948) between manually and AI-detected particles was found. No significant (p = 0.8080) difference was found in the sizes of AI-detected particles between the different SUN groupings.

Conclusions: AI-based image analysis of AS-OCT scans shows a significant and independent correlation with clinical SUN assessment.

Translational Relevance: Automated AI-based AS-OCT image analysis offers a noninvasive and quantitative assessment of AC inflammation with clear potential application in the early detection and management of anterior uveitis.

Introduction
Anterior uveitis is a common condition in ophthalmological practice and can be present in ocular infections, autoimmune conditions, and after intraocular surgery.1 If chronic or untreated, it can trigger ocular tissue remodeling processes that can lead to permanent vision loss.2,3 Most uveitis-related complications can be managed or completely prevented if inflammation is detected and diagnosed early, enabling prompt treatment.4,5 Current medical management of noninfectious uveitis includes nonsteroidal anti-inflammatory drugs (NSAIDs), biologic agents, and corticosteroids, the latter being the treatment of choice to control postoperative inflammation in a wide spectrum of pathologies. Common steroid-related ocular side effects are elevated intraocular pressure (IOP), increased risk of infection, impairment of scleral or corneal wound healing, and lens opacification.6–8 Accordingly, efficient and accurate use of topical anti-inflammatory medication is of major importance for the management of these patients. 
Standardized numerical grading of cells and flare observed by the ophthalmologist during slit-lamp examination according to the Standardization of Uveitis Nomenclature (SUN) is the current gold standard method to assess severity and determine medical management in anterior uveitis.9 Although slit-lamp examination has been proven to provide crucial information in these conditions, it presents a set of limitations.10 First, slit-lamp examination is subjective and is affected by intra- and interobserver variability11,12 (interobserver kappa range = 0.34 to 0.43) and thus does not offer a standardized and fully comparable assessment of the level of inflammation.13 Second, the ability to accurately count particles has been shown to depend on instrument settings (i.e. level of illumination and width of the slit beam).14 Third, early detection and diagnosis might be difficult when the inflammatory cell count is low. Thus, subclinical levels of inflammation might be overlooked during slit-lamp examination, postponing the onset of anti-inflammatory therapy. 
These limitations underscore the need for a standardized, numerical method to detect and grade ocular inflammation, with the ability to detect early stages of the condition. 
In this study, we present an image analysis software (AEye Image Analyzer) capable of quantifying and categorizing intraocular inflammation in the anterior segment of the eye through anterior-segment optical coherence tomography (AS-OCT) image analysis. We assess the model's accuracy compared with the current gold standard method in clinical practice, slit-lamp-based SUN grading. 
Materials and Methods
We present a prospective, double-blinded study to determine the utility of an AS-OCT image-based artificial intelligence (AI) software to assess the activity of anterior uveitis. Informed consent was given by the patients and the control group. Ethical and data management approval for the study was granted by the responsible regulatory committees (the National Committee on Health Research Ethics; Region Hovedstadens Ethical Committee). 
SUN Grading
Slit-lamp-based SUN grading by an experienced ophthalmologist was carried out as part of a standard examination. The clinical degree of anterior chamber (AC) cells, SUN(cells), and AC flare was reported in accordance with the SUN grading guidelines.9 
AI Image Analysis
Proprietary AI-based software (Anterior Image OCT Analyzer [AI-OCT] version 1.0), developed internally by the research group, was used for segmentation of anterior chamber structures and particle detection. The software outputs the segmented area of the AC in pixels (pxl), the number of observed particles (ptl), and the particle density (ptl/pxl · 10⁶). 
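As a minimal illustration of the reported density metric (not the authors' implementation), the density output can be derived from the particle count and the segmented AC area; the values below are hypothetical.

```python
# Hypothetical illustration of the density metric: particles per million
# segmented AC pixels (ptl/pxl · 10^6). Values are examples, not study data.
segmented_area_px = 1_250_000   # pixels inside the AI-segmented AC
particle_count = 24             # particles detected inside that area

density = particle_count / segmented_area_px * 1e6  # particles per million pixels
print(f"{density:.2f} ptl/pxl·10^6")                 # -> 19.20 ptl/pxl·10^6
```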
Segmentation of the anterior chamber area was performed by a deep learning segmentation algorithm with a custom UNet architecture trained on a database of 844 manually segmented AS-OCT scans. The dataset was split into training, test, and validation subsets (80%, 10%, and 10%, respectively). The final model used an input size of 256 × 256 pixels, binary cross-entropy as the loss function, and the Adam optimizer with a learning rate of 0.001. Data augmentation was implemented using rotation, horizontal and vertical stretching, horizontal flipping, and zoom. Early stopping was implemented using the test subset. 
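The following is a minimal Keras sketch of this training setup, assuming a generic two-level U-Net-style encoder-decoder; the layer counts, filter widths, batch size, and early-stopping patience are illustrative assumptions and not the authors' actual architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_unet(input_shape=(256, 256, 1), base_filters=16):
    """Small U-Net-style encoder-decoder for binary AC segmentation (illustrative)."""
    inputs = layers.Input(shape=input_shape)

    # Encoder
    c1 = layers.Conv2D(base_filters, 3, padding="same", activation="relu")(inputs)
    p1 = layers.MaxPooling2D()(c1)
    c2 = layers.Conv2D(base_filters * 2, 3, padding="same", activation="relu")(p1)
    p2 = layers.MaxPooling2D()(c2)

    # Bottleneck
    b = layers.Conv2D(base_filters * 4, 3, padding="same", activation="relu")(p2)

    # Decoder with skip connections
    u2 = layers.Concatenate()([layers.UpSampling2D()(b), c2])
    c3 = layers.Conv2D(base_filters * 2, 3, padding="same", activation="relu")(u2)
    u1 = layers.Concatenate()([layers.UpSampling2D()(c3), c1])
    c4 = layers.Conv2D(base_filters, 3, padding="same", activation="relu")(u1)

    # Per-pixel probability of belonging to the anterior chamber
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(c4)
    return Model(inputs, outputs)

model = build_unet()
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),  # learning rate 0.001
    loss="binary_crossentropy",
)

# Early stopping monitored on the held-out subset, as described in the text.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=10, restore_best_weights=True
)

# Training call (train_x/train_y and test_x/test_y stand for the 80%/10% splits;
# augmentation by rotation, stretching, flipping, and zoom would be applied to
# the training images before or during this step):
# model.fit(train_x, train_y, validation_data=(test_x, test_y),
#           epochs=200, batch_size=8, callbacks=[early_stop])
```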
In addition, a separate deep learning particle segmentation algorithm was trained on a database of 1330 manually segmented AS-OCT scans. On each scan, the area deemed to represent each particle was marked. To avoid averaging out the small particles, frames of 256 × 256 pixels were extracted from each OCT scan at original resolution using a stride of 0.5, instead of scaling. Only frames with marked particles were used, yielding a dataset of 6006 individual frames. These were split scan-wise into training, test, and validation subsets (80%, 10%, and 10%, respectively). 
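A sketch of this frame-extraction step follows, under the assumption that a "stride of 0.5" means a step of half a frame width (128 pixels, i.e. 50% overlap); the function and variable names are hypothetical.

```python
import numpy as np

def extract_particle_frames(scan, particle_mask, frame=256, stride_fraction=0.5):
    """Cut overlapping 256 x 256 frames from a scan at original resolution and
    keep only those whose mask contains at least one marked particle pixel."""
    step = int(frame * stride_fraction)          # 128-pixel step = 50% overlap
    frames, targets = [], []
    h, w = scan.shape[:2]
    for y in range(0, h - frame + 1, step):
        for x in range(0, w - frame + 1, step):
            m = particle_mask[y:y + frame, x:x + frame]
            if m.any():                          # discard frames without particles
                frames.append(scan[y:y + frame, x:x + frame])
                targets.append(m)
    return np.array(frames), np.array(targets)
```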
A deep learning segmentation model with a custom UNet architecture was developed and trained using binary cross-entropy as the loss function and the Adam optimizer with a learning rate of 0.0001. Data augmentation was implemented using flipping only. Early stopping was implemented using the test subset. 
All deep learning models were trained with TensorFlow version 2.2.0, Keras version 2.3.0, and Python version 3.8.5 running on an Ubuntu 20.04 machine equipped with 16 GB RAM and an NVIDIA GTX 1070 with 8 GB of memory. 
OCT Imaging
Single-scan AS-OCT imaging was performed using the CASIA2 (Tomey Corporation, Nagoya, Japan) within 20 minutes after SUN assessment. The images used for analysis were not averaged, the scan range was 12 mm, and the highest definition setting of 2000 A-scans per line was used. 
Manual Particle Segmentation
Manual particle segmentation was performed on the single-scan OCT images by an experienced CASIA2 OCT user. 
Statistical Analyses
Correlation between the clinical SUN grading score and the AS-OCT cell count was measured by Spearman's rank correlation. Correlation between automated AI-based and manual particle segmentation counts was measured by Pearson's correlation. The Kolmogorov-Smirnov method was used to test for sample normality. Statistical analysis was performed using GraphPad Prism version 9.0.1 for Mac (GraphPad Software, San Diego, CA, USA, www.graphpad.com). 
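The analyses were run in GraphPad Prism; for illustration only, an equivalent computation in Python with SciPy might look as follows, using small hypothetical arrays rather than study data.

```python
import numpy as np
from scipy import stats

# Hypothetical per-eye values (not study data)
sun_grade    = np.array([0, 0, 0.5, 0.5, 1, 2, 2, 3])    # ordinal SUN(cells) grades
ai_count     = np.array([1, 3, 4, 6, 20, 25, 30, 110])   # AI-detected particle counts
manual_count = np.array([0, 2, 3, 5, 22, 27, 28, 120])   # manually counted particles

rho, p_rho = stats.spearmanr(sun_grade, ai_count)          # SUN grading vs. AI count
r, p_r     = stats.pearsonr(ai_count, manual_count)        # AI vs. manual counts
ks, p_ks   = stats.kstest(stats.zscore(ai_count), "norm")  # Kolmogorov-Smirnov normality check

print(f"Spearman rho = {rho:.2f} (p = {p_rho:.4f}); Pearson r = {r:.2f} (p = {p_r:.4f})")
```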
Results
Demographics
Thirty-two eyes of 19 patients (9 women and 10 men) with a diagnosis of anterior uveitis in at least one eye were enrolled between 2018 and 2019 at Rigshospitalet, Glostrup, Denmark. Patients' ages averaged 51 ± 4 years (mean ± standard error of the mean). A total of 10 eyes from 5 control subjects (3 women and 2 men) with no history of inflammatory ocular conditions and no previous ocular surgery were also enrolled. The average age of the control group was 33 ± 2 years (mean ± standard error of the mean). 
SUN Grading
In the patient cohort, 16 eyes were assigned a SUN grading of 0, 6 eyes a grading of 0.5+, 3 eyes 1+, 4 eyes 2+, and 3 eyes 3+. In the control group, all eyes had a SUN grading of 0 (Table; clinical evaluation). 
Table.
Clinical and AI Evaluation of Patient Cohort
Automatic Segmentation of the Anterior Chamber
The program accurately outlined the limits of the anterior chamber in all studied images: superiorly, the inner corneal curvature; inferiorly, the iris and lens plane; and laterally, the angle structures (Fig. 1). All analyzed scans were individually reviewed, and no abnormal AC architectures were observed. ANOVA showed no significant (p = 0.6602) difference in the size of the segmented area between the different AC SUN(cells) groupings, indicating unbiased segmentation by the AI software (Fig. 2). 
Figure 1.
 
Segmentation of AC in OCT image by AI (magenta box: magnified inserts). – Upper left: AS-OCT image of control eye with SUN(cells) 0 score. Upper right: AS-OCT image of eye with SUN(cells) 3 score. Lower left: AI-segmented (red line) area of AC in control eye with SUN(cells) 0 score. Lower right: AI-segmented (red line) area of AC and AI-detected particles (green dots) in image of eye with SUN(cells) 3 score.
Figure 2.
 
Segmented pixel area – AC area segmented by AI. Gray – control cohort 0 SUN(cells), blue – 0 SUN(cells), red – 0.5+ SUN(cells), green – 1+ SUN(cells), magenta – 2+ SUN(cells), orange – 3+ SUN(cells). Bars and whiskers denote mean and standard deviation of grading, respectively. n = number of eyes, x̅ = mean of group, σ = standard deviation of group.
Image Analyses
We evaluated the performance of the cell-counting AI software using a validation subset of 133 images. Segmentation performance in the validation subset was evaluated using the Jaccard (0.52) and Dice (0.40) coefficients. The model's particle-count performance was also evaluated in the validation subset. A flood-filling algorithm was used to locate each particle in both the manually segmented ground-truth images and the corresponding AI-segmented images. Specificity (true positives / [true positives + false positives]) and sensitivity (true positives / [true positives + false negatives]) rates for particle detection were scored as 0.68 and 0.78, respectively. Spearman's rank correlation (ρ = 0.77, p < 0.0001) was then used to establish the correlation between the manual and AI particle counts in the validation subset. 
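A simplified sketch of this counting and matching procedure is shown below, using connected-component labeling (a flood-fill equivalent) from SciPy; the overlap-based matching rule is an assumption for illustration, not necessarily the authors' exact criterion.

```python
import numpy as np
from scipy import ndimage

def count_particles(binary_mask):
    """Label connected hyperreflective regions (flood-fill style) and count them."""
    labeled, n = ndimage.label(binary_mask)
    return labeled, n

def match_particles(gt_mask, pred_mask):
    """Count true positives, false positives, and false negatives, treating a
    ground-truth particle as detected if any predicted pixel overlaps it."""
    gt_lab, n_gt = ndimage.label(gt_mask)
    pred_lab, n_pred = ndimage.label(pred_mask)
    tp = sum(1 for i in range(1, n_gt + 1) if pred_mask[gt_lab == i].any())
    fn = n_gt - tp
    fp = sum(1 for j in range(1, n_pred + 1) if not gt_mask[pred_lab == j].any())
    return tp, fp, fn
```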
The average number of particles (ptl) observed by the AI software in the patient cohort was 1.92 ± 0.46 ptl (mean ± standard error of the mean) for eyes with SUN(cells) grading 0, 4.00 ± 1.44 ptl for SUN(cells) grading 0.5+, 24.33 ± 7.86 ptl for SUN(cells) grading 1+, 22.75 ± 6.86 ptl for SUN(cells) grading 2+, and 117.33 ± 20.51 ptl for SUN(cells) grading 3+ (Table; Fig. 3A). Furthermore, the mean density of particles per million pixels (ptl/pxl · 10⁶) was 4.55 ± 1.09 for eyes with SUN(cells) grading 0, 8.60 ± 2.81 for SUN(cells) grading 0.5+, 54.53 ± 21.13 for SUN(cells) grading 1+, 45.35 ± 11.02 for SUN(cells) grading 2+, and 284.90 ± 50.94 for SUN(cells) grading 3+ (Table; Fig. 3B). 
Figure 3.
 
AI evaluation of patient cohort. – (A) Observed number of particles by AI in segmented AC area. (B) Particle density (ptl/pxl•106) in AC calculated by the AI. (C) Correlation between clinical SUN(cells) grading and number of particles observed by the AI and non-linear exponential fit. (D) Correlation between clinical SUN(cells) grading and particle density and non-linear exponential fit. Gray – Control cohort 0 SUN(cells), blue – 0 SUN(cells), red - 0.5+ SUN(cells), green - 1+ SUN(cells), magenta – 2+ SUN(cells), orange - 3+ SUN(cells). Bars and whiskers denote mean and standard deviation of grading, respectively. n = number of eyes, x̅ = mean of group, σ = standard deviation of group.
Spearman's rank correlation analysis showed a significant (p < 0.0001) correlation (ρ = 0.7077) between the number of particles observed in the AC by the AI software and the clinical AC SUN(cells) grading (Fig. 3C). Similarly, a significant (p < 0.0001) correlation (ρ = 0.7035) existed between the clinical SUN(cells) grading score and the AS-OCT image-based particle density (Fig. 3D). 
To identify any potential SUN(cells)-related bias in particle detection, we examined the average observed particle size across all groups (including controls) in which the software detected particles. ANOVA showed no significant difference (p = 0.8080) in average observed particle size between the different SUN(cells) groupings and control eyes, indicating that particle detection by the program is independent of SUN(cells) grade (Fig. 4). 
Figure 4.
 
Average particle size observed in AC by AI. Average particle size observed for each eye. Gray – Control cohort 0 SUN(cells), blue – 0 SUN(cells), red - 0.5+ SUN(cells), green - 1+ SUN(cells), magenta – 2+ SUN(cells), orange - 3+ SUN(cells). Bars and whiskers denote mean and standard deviation of grading, respectively. n = number of eyes, x̅ = mean of group, σ = standard deviation of group.
Manual Segmentation
Manual particle counting was also performed on the image scans (Fig. 5). The average number of particles (ptl) detected was 0.33 ± 0.19 ptl (mean ± standard error of the mean) for eyes with SUN(cells) grading 0, 2.5 ± 0.99 ptl for SUN(cells) grading 0.5+, 24 ± 11 ptl for SUN(cells) grading 1+, 28 ± 9.4 ptl for SUN(cells) grading 2+, and 128 ± 26 ptl for SUN(cells) grading 3+ (Fig. 5A). Spearman's rank correlation analysis showed a significant (p < 0.0001) correlation (ρ = 0.8264) between the number of manually segmented particles in the AC and the AC SUN(cells) grading (Fig. 5B). Furthermore, we measured Pearson's (linear) correlation between manual and automatic AI-based segmentation and computed a linear regression model between the two variables. Pearson's correlation analysis showed a significant (p < 0.0001) correlation (r = 0.9948) between the number of particles observed in the AC by the AI software and manual segmentation (Fig. 5C). Goodness of fit (R²) for the linear regression was 0.9897. 
Figure 5.
 
Manual evaluation of patient cohort. – (A) The number of manually segmented particles in AC. (B) Correlation between clinical SUN(cells) grading and number of manually segmented particles and nonlinear exponential fit. (C) Correlation between manually segmented particles and particles detected by the AI particle detection software and linear regression fit. Gray – Control cohort 0 SUN(cells), blue – 0 SUN(cells), red - 0.5+ SUN(cells), green - 1+ SUN(cells), magenta – 2+ SUN(cells), orange - 3+ SUN(cells). Bars denote mean of grading.
Discussion
Slit-lamp-based SUN grading aims to provide a standardized assessment of anterior uveitis. Although this method has widely demonstrated its utility in clinical practice, it is subject to biases and shortcomings. SUN grading is operator specific, and its reliability and validity depend on observer experience. Furthermore, other technical limitations, such as the slit lamp's resolution threshold, beam size calibration, light source, and patient cooperation, may jeopardize the quality of its findings. 
As consistency, accuracy, and reliability are key criteria for standardized results, a computational, image-based approach to the categorization of anterior uveitis could entail clear benefits. In our study, the image analysis software demonstrated significant correlation between the number of particles detected by AS-OCT imaging and clinical slit-lamp-based SUN grading while providing useful continuous, numerical information (see Fig. 3), supporting its potential as a novel application for the assessment of intraocular inflammation in the clinical setting. AS-OCT-based results are comparable and independent of operator experience and present clear advantages over the current gold standard. Thus, image analysis of AS-OCT has the potential to overcome the biases inherent to SUN grading, improving clinical decision making. 
AI and machine learning in diagnostic medical imaging are currently receiving substantial evaluation in different medical fields15 and, together with the latest advances in medical imaging, have the potential to revolutionize medical diagnostics. Automated, OCT-based tools for future clinical ophthalmology practice are currently under development.10,16–18 Previous groups have demonstrated promising results with OCT-based uveitis assessment: two groups have shown correlation between SUN-graded slit-lamp assessment and OCT-based uveitis assessment.19,20 Although these studies show the potential of OCT-based cell counting, they are still subject to many of the same biases as slit-lamp-based assessment, namely observer variability. Furthermore, manual assessment would require the experience of a trained ophthalmologist/OCT user, limiting availability in clinical settings. Automated OCT-based approaches have also been investigated: two groups were able to demonstrate significant correlation between their automated algorithms and clinical SUN grading.18,21 Both Li et al. and Sharma et al. developed image-analysis algorithms based on segmentation of portions of the AC and identification of hyperreflective spots, which are then used as representative of the whole AC. This method might share some limitations with slit-lamp-based assessment, as cell concentration in the AC might vary geographically and the correct threshold might vary between scans. 
Accordingly, we developed a neural network-based AI system that analyzes the AC in its entirety and efficiently detects particles in AS-OCT scans. Previous studies have shown very good correlation between manual and automatic segmentation of AC particles. The present study and AI particle segmentation model replicate these findings and additionally evaluate OCT-based automatic and manual AC inflammation grading. 
Although AI-based clinical diagnosis entails important ethical implications, it has the potential to outperform the clinician22 in providing unbiased and reproducible clinical results, becoming an important tool for future ophthalmology. Technical developments in optical tomography have provided ophthalmologists with high-definition, real-time images of the anterior segment structures.23 AI models allow analysis of these large collections of data, providing an accurate understanding of the underlying principles at hand. 
Although quantification of anterior chamber inflammation based on AS-OCT shows promising potential, current OCT resolution does not allow distinguishing between cells of similar sizes or between cell types, such as inflammatory cells and pigmentary cells. In the future, OCT-based particle size thresholding could help characterize cell groups, such as erythrocytes, pigmentary cells, and immunologic cells, which have different size ranges.21 
In cases of very low inflammation, where only a small number of cells are present in the chamber, our current method might share some limitations with SUN grading, stemming from the fact that both methods analyze only a portion of the anterior chamber. We are currently investigating increasing the number of B-scans, which could yield a more accurate count. 
AI-based diagnostics is still relatively novel and in a developmental phase. Its results and utilities require comprehensive and extensive validation work in the coming years before it can be used independently. Until then, it can be regarded as a novel and experimental part of the diagnostic toolbox in ophthalmic practice. 
During development, several custom non-AI particle detection algorithms were explored and evaluated in comparison with AI-based algorithms. The particle segmentation models shared similar limitations with regard to corneal artifacts being detected as particles. We compensated for this by slightly decreasing the model's segmented AC area, thereby reducing the artifacts present in the particle detection model's analyzed area. The models' performances were evaluated, resulting in the choice of an AI-based model. The AI model's performance was evaluated in a validation subset (85 images) using Jaccard (0.99) and Dice (0.98) coefficients. Furthermore, we evaluated the number of particles missed by the AI-based model's particle detection algorithm because of the smaller analyzed area. In 8.7% of the validation subset, particularly in heavily inflamed eyes, one or more particles were found outside the model's segmented AC. We speculate that such missed particles become inconsequential because the software outputs density metrics (particles/area). 
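The reduction of the segmented AC area could, for instance, be implemented as a small morphological erosion of the AC mask before particle detection; the sketch below is our illustrative assumption of such a step, with a hypothetical margin value.

```python
from scipy import ndimage

def shrink_ac_mask(ac_mask, margin_px=5):
    """Slightly shrink the segmented AC mask so that hyperreflective corneal
    artifacts near the boundary fall outside the area analyzed for particles.
    margin_px is a hypothetical value; each iteration erodes roughly one pixel."""
    return ndimage.binary_erosion(ac_mask, iterations=margin_px)
```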
The AI particle detection algorithm's specificity and sensitivity metrics were calculated as 0.68 and 0.78, respectively. We believe that false positive predictions are heavily affected by the signal-to-noise ratio; as OCT technology and image resolution improve, less background noise will be present in scans, potentially increasing the software's specificity. 
Spearman's rank correlation was used as the measure of correlation between clinical grading and both automatic and manual particle segmentation via OCT imaging. Spearman's rank correlation is a nonparametric measure of rank correlation. In our study, we aimed to assess the monotonic relationship between a categorical-ordinal variable (SUN grading's five increasing grades) and a numerical-continuous variable (OCT/AI particle count/density). It is therefore distinct from agreement estimates (such as the kappa agreement), which require exclusively ordinal data.24 As such, we used a correlation measure in lieu of an agreement coefficient. Moreover, Spearman's rank correlation is the metric most commonly used by other groups assessing the relationship between SUN grading and OCT particle count, as listed in the systematic review by Liu et al.10 We also chose to fit a nonlinear exponential regression model to the association between clinical grading and automatic and manual particle segmentation, based on the intrinsically exponential step-wise increments of the SUN gradings. However, the correlation between the two continuous variables of automatic and manual particle segmentation was measured using Pearson's (linear) correlation metric. Consequently, we chose a linear regression fit to model the association between automatic and manual segmentation. 
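For illustration, the two fits could be reproduced as follows, using the reported group means as stand-in data; the exponential model form and starting values are assumptions, and the per-eye data used in the actual analysis are not reproduced here.

```python
import numpy as np
from scipy import optimize, stats

sun     = np.array([0, 0.5, 1, 2, 3])                   # SUN(cells) grades
ai_mean = np.array([1.92, 4.00, 24.33, 22.75, 117.33])  # reported AI group means
manual  = np.array([0.33, 2.50, 24.00, 28.00, 128.00])  # reported manual group means

# Nonlinear exponential fit: particle count ~ a * exp(b * SUN grade)
(a, b), _ = optimize.curve_fit(lambda x, a, b: a * np.exp(b * x),
                               sun, ai_mean, p0=(1.0, 1.0))

# Linear regression between the two continuous variables (manual vs. AI counts)
slope, intercept, r, p, se = stats.linregress(manual, ai_mean)

print(f"exponential fit: a = {a:.2f}, b = {b:.2f}; linear fit R^2 = {r**2:.4f}")
```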
Our study and the proposed AI system have a number of limitations. First, our automatic system is currently unable to analyze multiple slides from a single patient's OCT scan. We are currently implementing this feature, and a prototype of the program is already in the works. Second, we analyzed OCT scans from a small number of enrolled patients. We believe an increase in the number of enrolled patients is necessary to evaluate the AI model's ability to distinguish cell numbers between grades. Consequently, we are currently expanding our study to include a larger patient cohort. In addition, the patient population is skewed toward less severe degrees of AC inflammation. We believe these data prompt further similar investigations in a patient population with more diverse degrees of inflammation. Third, the AI cell counter was not able to demonstrate significant differences between all groups. Our AI software recorded significant differences in observed particle counts between all SUN gradings except gradings 0 and 0.5+ (p = 0.0617) and gradings 1+ and 2+ (p = 0.8855). We believe the overlapping range could be due to a number of reasons. The intrinsically nonlinear and noncontinuous scale of SUN grading might play a role in the overlap. Furthermore, previous studies of SUN interobserver agreement show relatively low agreement among uveitis experts, with discrepancies occurring especially in the low spectrum of inflammation.11 Multiple research groups conducting similar OCT-to-clinical SUN comparisons have reported hyperreflective particles in OCT images of eyes clinically classified as SUN 0 on slit-lamp examination by an experienced ophthalmologist,19,25 suggesting that cells might go undetected during slit-lamp examination in cases of very low inflammation. The present study observed and reproduced similar events. Furthermore, we recorded a higher correlation and model fit between automatic and manual segmentation (Pearson's r = 0.9948, R² = 0.9897) than between automatic segmentation and SUN grading (ρ = 0.7077, R² = 0.8846) or manual segmentation and SUN grading (ρ = 0.8264, R² = 0.8698). Similar studies have suggested anterior segment OCT imaging as a promising technique for grading AC cells.18,20,25 Together with our findings, they might suggest that OCT imaging offers a more precise evaluation of anterior inflammation. 
As we are determined to develop a tool for ophthalmologists in a clinical framework, we seek to design the model in accordance with the challenges faced in such settings. Consequently, as diagnosis in low-grade SUN is more clinically relevant, particularly with regard to early treatment, we aim to improve on current practice. That being the case, we believe it is of the highest priority to include a more complete (cellular) validation of the inflammatory status of the patients' eyes (e.g. flow cytometric analysis of aqueous humour), which is the current ongoing effort of our group in the AI validation process. 
This study suggests that image analysis of AS-OCT in combination with AI could be used to detect and quantify anterior chamber inflammation in eyes with clinically graded anterior uveitis. We show that the number of particles and the particle density correlate with clinical SUN grading (Fig. 3), whereas the size of AI-detected particles is independent of clinical SUN grading (Fig. 4). The study highlights the potential of the method to provide a robust, fast, noninvasive, and observer-independent assessment of patients with different grades of anterior uveitis, and its potential to become a key tool in the eye clinic in the near future. 
Acknowledgments
Disclosure: M.A. Sorkhabi, None; I.O. Potapenko, None; T. Ilginis, None; M. Alberti, None; J. Cabrerizo, None 
References
Agrawal R, Murthy S, Sangwan V, Biswas J. Current approach in diagnosis and management of anterior uveitis. Indian J Ophthalmol. 2010; 58(1): 11. [CrossRef] [PubMed]
Agrawal R, Murthy S, Ganesh SK, Phaik CS, Sangwan V, Biswas J. Cataract surgery in uveitis. Int J Inflam. 2012; 2012: 548453. [PubMed]
Lee RW, Nicholson LB, Sen HN, et al. Autoimmune and autoinflammatory mechanisms in uveitis. Semin Immunopathol. 2014; 36(5): 581–594. [CrossRef] [PubMed]
Airody A, Heath G, Lightman S, Gale R. Non-Infectious Uveitis: Optimising the Therapeutic Response. Drugs. 2016; 76(1): 27–39. [CrossRef] [PubMed]
Keane PA, Karampelas M, Sim DA, et al. Objective measurement of vitreous inflammation using optical coherence tomography. Ophthalmology. 2014; 121(9): 1706–1714. [CrossRef] [PubMed]
Phulke S, Kaushik S, Kaur S, Pandav SS. Steroid-induced Glaucoma: An Avoidable Irreversible Blindness. J Curr Glaucoma Pract. 2017; 11(2): 67–72. [PubMed]
Youssef J, Novosad SA, Winthrop KL. Infection Risk and Safety of Corticosteroid Use. Rheum Dis Clin North Am. 2016; 42(1): 157–176. [CrossRef] [PubMed]
Eming SA, Martin P, Tomic-Canic M. Wound repair and regeneration: Mechanisms, signaling, and translation. Sci Transl Med. 2014; 6(265): 265sr6. [CrossRef]
Jabs DA, Nussenblatt RB, Rosenbaum JT: Standardization of Uveitis Nomenclature (SUN) Working Group. Standardization of Uveitis Nomenclature for Reporting Clinical Data. Results of the First International Workshop. Am J Ophthalmol. 2005; 140(3): 509–516. [PubMed]
Liu X, Solebo AL, Faes L, et al. Instrument-based Tests for Measuring Anterior Chamber Cells in Uveitis: A Systematic Review. Ocul Immunol Inflamm. 2020; 28(6): 898–907. [CrossRef] [PubMed]
Kempen JH, Ganesh SK, Sangwan VS, Rathinam SR. Interobserver agreement in grading activity and site of inflammation in eyes of patients with uveitis. Am J Ophthalmol. 2008; 146(6): 813–818.e1. [CrossRef] [PubMed]
Jabs DA, Dick A, Doucette JT, et al. Interobserver Agreement Among Uveitis Experts on Uveitic Diagnoses: The Standardization of Uveitis Nomenclature Experience. Am J Ophthalmol. 2018; 186: 19–24. [CrossRef] [PubMed]
McNeil R. Grading of ocular inflammation in uveitis: an overview. Eye News. 2016; 22(5): 1–4.
Wong IG, Nugent AK, Vargas-Martín F. The effect of biomicroscope illumination system on grading anterior chamber inflammation. Am J Ophthalmol. 2009; 148(4): 516–520.e2. [CrossRef] [PubMed]
Ahuja AS. The impact of artificial intelligence in medicine on the future role of the physician. PeerJ. 2019; 7: e7702. [CrossRef] [PubMed]
Bhatti U, Akbarali S, Solebo AL, Bryant W. 93 A machine learning approach to classification of uveitis in children using anterior segment OCT images of the eye. In: Abstracts. London, England, UK: BMJ Publishing Group Ltd and Royal College of Paediatrics and Child Health; 2019: A36.2–A37.
Fu H, Baskaran M, Xu Y, et al. A Deep Learning System for Automated Angle-Closure Detection in Anterior Segment Optical Coherence Tomography Images. Am J Ophthalmol. 2019; 203: 37–45. [CrossRef] [PubMed]
Sharma S, Lowder CY, Vasanji A, Baynes K, Kaiser PK, Srivastava SK. Automated Analysis of Anterior Chamber Inflammation by Spectral-Domain Optical Coherence Tomography. Ophthalmology. 2015; 122(7): 1464–1470. [CrossRef] [PubMed]
Igbre AO, Rico MC, Garg SJ. High-speed optical coherence tomography as a reliable adjuvant tool to grade ocular anterior chamber inflammation. Retina. 2014; 34(3): 504–508. [CrossRef] [PubMed]
Invernizzi A, Marchi S, Aldigeri R, et al. Objective Quantification of Anterior Chamber Inflammation. Ophthalmology. 2017; 124(11): 1670–1677. [CrossRef] [PubMed]
Li Y, Lowder C, Zhang X, Huang D. Anterior chamber cell grading by optical coherence tomography. Invest Ophthalmol Vis Sci. 2013; 54(1): 258–265. [CrossRef] [PubMed]
McKinney SM, Sieniek M, Godbole V, et al. International evaluation of an AI system for breast cancer screening. Nature. 2020; 577(7788): 89–94. [PubMed]
Kishi S. Impact of swept source optical coherence tomography on ophthalmology. Taiwan J Ophthalmol. 2016; 6(2): 58–68. [PubMed]
Schober P, Boer C, Schwarte LA. Correlation Coefficients: Appropriate Use and Interpretation. Anesth Analg. 2018; 126(5): 1763–1768.
Agarwal A, Ashokkumar D, Jacob S, Agarwal A, Saravanan Y. High-speed Optical Coherence Tomography for Imaging Anterior Chamber Inflammatory Reaction in Uveitis: Clinical Correlation and Grading. Am J Ophthalmol. 2009; 147(3): 413–416.e3. [PubMed]