May 2024 | Volume 13, Issue 5 | Open Access | Retina
Prediction of Visual Outcome After Rhegmatogenous Retinal Detachment Surgery Using Artificial Intelligence Techniques
Author Affiliations & Notes
  • Hui Guo
    Guangzhou Aier Eye Hospital, Jinan University, Guangzhou, China
    Guangzhou Panyu Aier Eye Hospital, Guangzhou, China
  • Chubin Ou
    Department of Radiology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
  • Guangyi Wang
    Department of Radiology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
  • Bingxing Lu
    Guangzhou Aier Eye Hospital, Jinan University, Guangzhou, China
  • Xinyu Li
    Guangzhou Aier Eye Hospital, Jinan University, Guangzhou, China
  • Tinghua Yang
    Guangzhou Aier Eye Hospital, Jinan University, Guangzhou, China
  • Jinglin Zhang
    Guangzhou Aier Eye Hospital, Jinan University, Guangzhou, China
  • Correspondence: Jinglin Zhang, Department of Retina, Guangzhou Aier Eye Hospital, Jinan University, 191 Huanshi Middle Road, Yuexiu District, Guangzhou 51000, China. e-mail: zhjinglin@126.com 
  • Footnotes
     HG and CO contributed equally to this work and were designated as co-first authors.
Translational Vision Science & Technology May 2024, Vol.13, 17. doi:https://doi.org/10.1167/tvst.13.5.17
      Hui Guo, Chubin Ou, Guangyi Wang, Bingxing Lu, Xinyu Li, Tinghua Yang, Jinglin Zhang; Prediction of Visual Outcome After Rhegmatogenous Retinal Detachment Surgery Using Artificial Intelligence Techniques. Trans. Vis. Sci. Tech. 2024;13(5):17. https://doi.org/10.1167/tvst.13.5.17.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

Purpose: This study aimed to develop artificial intelligence models for predicting postoperative functional outcomes in patients with rhegmatogenous retinal detachment (RRD).

Methods: A retrospective review and data extraction were conducted on 184 patients diagnosed with RRD who underwent pars plana vitrectomy (PPV) and gas tamponade. The primary outcome was the best-corrected visual acuity (BCVA) at three months after the surgery. Those with a BCVA of less than 6/18 Snellen acuity were classified into a vision impairment group. A deep learning model was developed using presurgical predictors, including ultra-widefield fundus images, structural optical coherence tomography (OCT) images of the macular region, age, gender, and preoperative BCVA. A fusion method was used to capture the interaction between different modalities during model construction.

Results: Among the participants, 74 (40%) still had vision impairment after the treatment. There were significant differences in age, gender, presurgical BCVA, intraocular pressure, macular detachment, and extension of retinal detachment between the vision impairment and vision non-impairment groups. The multimodal fusion model achieved a mean area under the curve (AUC) of 0.91, with a mean accuracy of 0.86, sensitivity of 0.94, and specificity of 0.80. Heatmaps revealed that the macular region was the most active area in both the OCT and ultra-widefield images.

Conclusions: This pilot study demonstrates that artificial intelligence techniques can achieve a high AUC for predicting functional outcomes after RRD surgery, even with a small sample size. Machine learning methods identified the macular region as the most active region.

Translational Relevance: Multimodal fusion models have the potential to assist clinicians in predicting postoperative visual outcomes before patients undergo PPV.

Introduction
Retinal detachment is a condition in which the neurosensory retina separates from the underlying retinal pigment epithelium. It can be classified into three types: rhegmatogenous retinal detachment (RRD), tractional retinal detachment, and exudative retinal detachment. RRD is the most common form.1 The incidence of RRD has been reported to range from 6.3 to 17.9 cases per 100,000 population in Europe, America, and Asia.2 In China, an estimated 9,000 to 10,000 new cases of RRD were diagnosed each year in 2001, with a likely increase in incidence over the past decade due to the rapid rise in myopia prevalence.3,4 
Treatment options for RRD include scleral buckling (SB), pars plana vitrectomy (PPV), a combination of PPV and SB, and pneumoretinopexy (applied for superior RRD).5 In the study by Xu et al.,6 the average three-month visual outcome of 92 eyes after PPV followed by air filling was 0.74 logMAR. Previous studies have identified several prognostic factors for worse postsurgery visual outcomes. These factors include older age,7 poor preoperative vision,8,9 longer duration of macular detachment,7,10 increased number of retinal breaks,11 larger retinal holes,7 larger retinal detachment size,8,10 presence of proliferative vitreoretinopathy,7 and higher degree of macular detachment.12 Some biomarkers captured by optical coherence tomography (OCT), such as integrity of the intermediate line, inner/outer segment junction or external limiting membrane,10,13 thickness of each layer of the neurosensory retina,14 outer retinal corrugation15 and distance between central fovea and the nearest undetached retina,16 have been reported to predict functional outcomes. 
Although previous studies have identified some clinical and fundus predictors for visual outcomes after RRD surgery, there are limitations and inconsistencies among them. Only a few studies have reported the performance of prediction models, and the predictors varied across studies. Recently, artificial intelligence (AI) has been applied to eye images for diagnosing and predicting clinical outcomes in various eye diseases, including cataracts, age-related macular degeneration, and diabetic retinopathy.17 However, limited research has applied AI to predict outcomes after RRD surgery. One study used AI to predict the probability of retinal reattachment, but it relied on manually drawn images, which may introduce inaccuracies in data entry.18 Therefore, the objective of this study was to apply AI to fundus images to predict visual outcomes after the successful repair of RRD. 
Methods
Study Design and Patients
The medical records of adult patients diagnosed with RRD in Guangzhou Aier Eye Hospital between October 1, 2017, and September 30, 2022, were retrospectively reviewed. Only those who underwent PPV followed by gas tamponade performed by Dr. Zhang, a retinal surgeon with 20 years of experience, were included. All the surgeries were performed within one month of the symptom onset, and patients who achieved complete retinal reattachment within three months after the treatment were included in the analyses. Exclusion criteria comprised open globe injuries, glaucoma, severe cataract, uveitis, strabismus, and congenital eye diseases. The study adhered to the principles of the Declaration of Helsinki. This research received approval from the hospital's ethical committee (GZAIER2018IRB06), and the requirement for informed consent was waived because of the retrospective nature of the study design. 
Surgical Procedures
All procedures were executed under regional anesthesia using an injection of 2 mL of 0.5% bupivacaine and 2 mL of 2% lidocaine. A regular three-port 25-gauge PPV was performed using the Alcon Constellation system, along with a non-contact wide viewing lens (LPU CLA 200). Triamcinolone acetonide was used to assist in the removal of both central and peripheral vitreous. A 360° scleral indentation was performed to shave the vitreous base and release vitreous traction around the retinal tear. Subsequently, a fluid-air exchange was initiated at a pressure of 45 mmHg, and subretinal fluid was drained using a flute needle through the primary retinal break. Endolaser was then used to create 3-4 rows of laser photocoagulation around the retinal break. Finally, the intraocular pressure was set to 35 mmHg for inferior breaks and 30 mmHg for superior breaks. The three-port scleral incisions were closed with 8-0 absorbable sutures. Patients were nursed in a face-down or lateral position to ensure that the air tamponade fully compressed the retinal breaks. 
Data Collections
Patients were classified as having vision impairment if their best corrected visual acuity (BCVA) three months after the initial surgery was below 6/18 Snellen acuity (0.5 logMAR),19 and the rest were defined as vision non-impairment. Presurgery spectral-domain OCT (Heidelberg Engineering, Heidelberg, Germany) images of the macula, captured at 0° and 90°, were selected. Additionally, images of the primary gaze were obtained using 200-degree retinal photography (Optos Daytona P200T, Scotland, UK). Other presurgery characteristics included age, sex, history of systemic disease, BCVA, intraocular pressure (measured with Topcon CT80; Topcon Optical Company, Tokyo, Japan), high myopia (spherical equivalent ≤ −5.00 D, measured with Topcon KR 800; Topcon Optical Company), lens status (pseudophakic/phakic), vitreous hemorrhage detected with slit-lamp fundus biomicroscopy, macular status (on/off) determined with OCT, number of retinal breaks, location of retinal breaks (up/down/both), and extent of retinal detachment (one to four quadrants). Eyes were dilated for examinations, including slit-lamp biomicroscopy, OCT, retinal photography, and autorefraction. Documentation of retinal breaks and detachment extent relied on ultra-widefield retinal images and slit-lamp fundus biomicroscopy. Ultra-widefield retinal images were available for all patients, whereas 31 individuals (16.8%) lacked OCT images because of severe retinal detachment. 
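The dichotomization above (BCVA below 6/18 Snellen, about 0.5 logMAR) can be sketched as follows; the function names are illustrative, not taken from the study:

```python
# Sketch of the study's labelling rule: a 3-month BCVA below 6/18 Snellen
# counts as vision impairment. 6/18 corresponds to logMAR log10(18/6) ≈ 0.48,
# rounded to 0.5 in the text. Function names are illustrative assumptions.
import math

def snellen_to_logmar(numerator: float, denominator: float) -> float:
    """Convert a Snellen fraction (e.g., 6/18) to logMAR."""
    return -math.log10(numerator / denominator)

def is_vision_impaired(bcva_logmar: float) -> bool:
    """BCVA worse (higher logMAR) than 6/18 counts as impairment."""
    return bcva_logmar > snellen_to_logmar(6, 18)

print(is_vision_impaired(0.7), is_vision_impaired(0.3))  # True False
```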
Deep Learning Model Development and Training
To capture the many factors influencing surgical prognosis, we developed a deep learning multimodal model that integrates diverse data sources: presurgery macular OCT images, ultra-widefield retinal photos, gender, age, and presurgery BCVA. As illustrated in Figure 1, our network features three input branches: two image branches accepting OCT and ultra-widefield retinal photos, and one meta-information branch taking gender, age, and presurgery BCVA as inputs. 
Figure 1.
 
Overall network architecture.
The image branches share a ResNet architecture, encompassing four stages of convolution and max pooling followed by global average pooling. The meta-information branch comprises a linear layer, batch normalization layer, and ReLU activation. To accommodate missing OCT data, we used a blank image of the same size filled with zero values. 
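The zero-filling strategy for missing OCT scans can be sketched as below; the image dimensions are illustrative assumptions, not the study's actual input size:

```python
# Sketch of the missing-OCT handling described above: eyes without an OCT
# scan receive a zero-filled image of the expected input size. The shape
# used here is an illustrative assumption.
import numpy as np

def oct_or_blank(image, shape=(496, 512)):
    """Return the OCT image, or a zero-filled placeholder when missing."""
    return image if image is not None else np.zeros(shape, dtype=np.float32)

blank = oct_or_blank(None)
print(blank.shape, float(blank.max()))  # (496, 512) 0.0
```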
Because conventional modality fusion concatenates feature vectors, potentially overlooking intermodality interactions, we instead introduced a multimodal fusion module. Building on the approach in Tensor Fusion Network,20 we appended a constant value of 1 to each feature embedding. Bimodal interaction was captured through the outer product of the corresponding feature embeddings, whereas trimodal interaction was calculated as the outer product of the three modalities, as described below:  
\begin{eqnarray*} {F^{oct}} \in \left[ {\begin{array}{@{}*{1}{c}@{}} {{F^{oct}}}\\ 1 \end{array}} \right],{\rm{\;}}{F^{wf}} \in \left[ {\begin{array}{@{}*{1}{c}@{}} {{F^{wf}}}\\ 1 \end{array}} \right],{\rm{\;}}{F^{meta}} \in \left[ {\begin{array}{@{}*{1}{c}@{}} {{F^{meta}}}\\ 1 \end{array}} \right], \end{eqnarray*}
 
\begin{eqnarray*} {F^m} = \left[ {\begin{array}{@{}*{1}{c}@{}} {{F^{oct}}}\\ 1 \end{array}} \right] \otimes \left[ {\begin{array}{@{}*{1}{c}@{}} {{F^{wf}}}\\ 1 \end{array}} \right] \otimes \left[ {\begin{array}{@{}*{1}{c}@{}} {{F^{meta}}}\\ 1 \end{array}} \right]. \end{eqnarray*}
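Under illustrative embedding sizes (made-up, not the study's actual dimensions), the padded outer-product fusion in the equations above can be sketched in NumPy:

```python
# Illustrative NumPy sketch of padded outer-product (tensor) fusion.
# Embedding sizes are made-up assumptions.
import numpy as np

f_oct = np.random.rand(8)   # OCT image embedding
f_wf = np.random.rand(8)    # ultra-widefield image embedding
f_meta = np.random.rand(4)  # age / gender / presurgery BCVA embedding

def pad(f):
    """Append the constant 1 so unimodal and bimodal terms are retained."""
    return np.concatenate([f, [1.0]])

# Trimodal interaction: three-way outer product of the padded embeddings.
fused = np.einsum('i,j,k->ijk', pad(f_oct), pad(f_wf), pad(f_meta))
print(fused.shape)  # (9, 9, 5) — dimension grows multiplicatively
```

The appended 1s mean the fused tensor also contains the original unimodal and bimodal terms as slices, which is the point of the padding.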
 
To mitigate the exponential growth in dimension and computational complexity associated with tensor fusion, we employed the low-rank fusion method.21 Each modality embedding was multiplied by a modality-specific factor, projecting it to a lower-dimensional embedding before the product operation, so the full high-dimensional fusion tensor is never explicitly formed. This substantially decreased the number of parameters in the network, minimizing the risk of overfitting while preserving the essential multimodal interactions:  
\begin{eqnarray*} {F^m} \cdot Z &=& \left( {\left[ {\begin{array}{@{}*{1}{c}@{}} {{F^{oct}}}\\ 1 \end{array}} \right] \cdot {z^{oct}}} \right) \circ \left( {\left[ {\begin{array}{@{}*{1}{c}@{}} {{F^{wf}}}\\ 1 \end{array}} \right] \cdot {z^{wf}}} \right)\\ && \circ \left( {\left[ {\begin{array}{@{}*{1}{c}@{}} {{F^{meta}}}\\ 1 \end{array}} \right] \cdot {z^{meta}}} \right). \end{eqnarray*}
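A minimal NumPy sketch of this low-rank factorization; the rank and all dimensions below are made-up assumptions:

```python
# Illustrative low-rank fusion: each (padded) modality embedding is
# projected by modality-specific factors, and the projections are combined
# by elementwise product, so the full outer-product tensor is never formed.
import numpy as np

rng = np.random.default_rng(0)
d_out, rank = 16, 4

# Embeddings already padded with the constant 1 (sizes 8+1, 8+1, 4+1).
f_oct, f_wf, f_meta = rng.random(9), rng.random(9), rng.random(5)

# Modality-specific factors: `rank` projection matrices per modality.
z_oct = rng.random((rank, 9, d_out))
z_wf = rng.random((rank, 9, d_out))
z_meta = rng.random((rank, 5, d_out))

# Project each modality, multiply elementwise, then sum over the rank axis.
fused = ((f_oct @ z_oct) * (f_wf @ z_wf) * (f_meta @ z_meta)).sum(axis=0)
print(fused.shape)  # (16,) — fixed output size, no exponential blow-up
```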
 
The fused feature was then passed into a classification head composed of a linear layer and sigmoid activation, forming the overall network architecture shown in Figure 1. Predictions were categorized into visual impairment (BCVA < 6/18 Snellen acuity) and visual non-impairment (BCVA ≥ 6/18 Snellen acuity). Because this is a binary outcome, a binary cross-entropy loss function was employed, where q represents the ground truth of the prognosis and p is the probability of a favorable prognosis predicted by the model. Training used the Adam optimizer with a learning rate of 0.0001 over 100 epochs:  
\begin{eqnarray*} {L_{bce}} = \sum - \left[ {q \cdot \log p + \left( {1 - q} \right) \cdot \log \left( {1 - p} \right)} \right]. \end{eqnarray*}
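The loss above can be written directly as a small Python function (a sketch, not the study's training code):

```python
# Minimal version of the summed binary cross-entropy loss, with q the
# ground-truth labels and p the predicted probabilities.
import math

def bce_loss(q, p, eps=1e-12):
    """Summed binary cross-entropy over a batch; eps guards log(0)."""
    return -sum(qi * math.log(pi + eps) + (1 - qi) * math.log(1 - pi + eps)
                for qi, pi in zip(q, p))

print(round(bce_loss([1, 0], [0.9, 0.1]), 4))  # 0.2107 — confident, correct predictions
```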
 
Deep Learning Model Evaluation
Given the relatively small dataset, we implemented a fivefold cross-validation strategy to mitigate the risk of overfitting.22 The entire dataset, consisting of 184 cases, was divided into five folds. One fold served as the test set, whereas the remaining four folds were used as training sets. The model underwent training on these sets and evaluation on the designated test set. This process iterated, with a different fold held out as the test set in each iteration, and the average performance across the cross-validation was reported. 
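The fivefold procedure can be sketched as below; the train/evaluate calls are hypothetical placeholders, not the study's code:

```python
# Sketch of fivefold cross-validation over the study's 184 enrolled cases:
# each fold serves once as the test set, and per-fold scores are averaged.
def k_fold_indices(n: int, k: int = 5):
    """Split range(n) into k contiguous folds of near-equal size."""
    folds, start = [], 0
    for i in range(k):
        size = n // k + (1 if i < n % k else 0)
        folds.append(list(range(start, start + size)))
        start += size
    return folds

folds = k_fold_indices(184)
for i, test_idx in enumerate(folds):
    train_idx = [j for f in folds[:i] + folds[i + 1:] for j in f]
    # model = train(train_idx); fold_auc = evaluate(model, test_idx)  # placeholders
print([len(f) for f in folds])  # [37, 37, 37, 37, 36]
```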
To assess the efficacy of our proposed method, we compared the performance of the model with and without the multimodal fusion module. Additionally, we examined the value added by including different combinations of modalities, comparing models using one or two modalities only. To further illustrate that our proposed method has effectively learned meaningful features, we used the Grad-CAM method to generate the activation map of the network.23 
Results
Patients Characteristics
Records from 189 patients were reviewed, and five were excluded because of a lack of follow-up between one and three months. The study ultimately enrolled 184 patients, consisting of 74 females and 110 males. Among them, 74 patients experienced vision impairment, with a postsurgery BCVA < 6/18 Snellen acuity, whereas the remaining 110 patients exhibited a BCVA ≥ 6/18 Snellen acuity. The vision impairment group had a significantly younger mean age (P = 0.015) and a higher percentage of females (P = 0.011) compared with the vision non-impairment group. Moreover, the average presurgery BCVA was significantly lower in the vision impairment group (P < 0.001). Before surgery, 88% of patients in the vision impairment group had macular detachment, compared with only 56% in the vision non-impairment group (P < 0.001). The number of retinal detachment quadrants differed significantly between the two groups (P < 0.001): almost 70% of the vision impairment patients had a retinal detachment of ≥3 quadrants, versus only 33% in the vision non-impairment group. The presence of diabetes mellitus, high myopia, pseudophakia, and vitreous hemorrhage showed comparable proportions between the two groups. Furthermore, the location and count of retinal breaks showed no significant difference between the two groups. A comprehensive overview of patient characteristics prior to surgery is provided in Table 1.
Table 1.
 
Patient Characteristics Before Surgery
Model Performance
Table 2 presents the average performance of various models. Models exclusively using OCT or ultra-widefield images achieved mean AUC values of 0.78 and 0.77, respectively. Incorporating age, gender, and preoperative BCVA through direct concatenation improved the mean AUC to 0.88. The performance was further elevated to 0.91 using the fusion method. 
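As a worked illustration of how such AUCs can be computed from predicted probabilities, here is a small rank-based (Mann-Whitney) AUC function; the labels and scores are made up, not the study's data:

```python
# Rank-based AUC: the probability that a randomly chosen positive case
# receives a higher score than a randomly chosen negative case.
def auc(labels, scores):
    pairs = sorted(zip(scores, labels))
    ranks = {}
    i = 0
    while i < len(pairs):  # assign average 1-based ranks, handling ties
        j = i
        while j < len(pairs) and pairs[j][0] == pairs[i][0]:
            j += 1
        for k in range(i, j):
            ranks[k] = (i + j + 1) / 2
        i = j
    pos_ranks = [ranks[k] for k, (_, lab) in enumerate(pairs) if lab == 1]
    n_pos, n_neg = len(pos_ranks), len(pairs) - len(pos_ranks)
    return (sum(pos_ranks) - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

print(auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```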
Table 2.
 
Performance of Different Models
Heatmaps
The heatmaps displayed in Figure 2 offer a visual depiction of the regions exhibiting neuron activity across various images. On OCT images, the model's focus is directed towards the macular region, where there is either a separation between the inner and outer retinal layers or noticeable variations in the thickness of the neurosensory retina. In the case of ultra-widefield retinal photos, the model emphasizes regions associated with retinal detachment. 
Figure 2.
 
Heatmaps generated with the Grad-CAM technique for OCT and Ultra-widefield images. In the OCT images, the highlighted regions of neural activity were centered around the retinal detachment on the fovea (A, B), as well as the thickening of the macular retina (C). The regions of heightened activity corresponded to the retinal detachment in ultra-widefield images (D, E, F).
Discussion
This study used a deep learning model to predict visual outcomes after retinal reattachment surgery. Our model incorporated fundus images, OCT, age, gender, and preoperative BCVA through a fusion module. The model achieved an average AUC of 0.91, accuracy of 0.86, sensitivity of 0.93, and specificity of 0.80. The active regions of heatmaps represent the most relevant areas.24 In our study, they included macular changes on the OCT images and retinal detachment on the ultra-widefield images. 
Central retinal involvement has been reported to significantly impact functional outcomes after retinal reattachment surgery, both in the short and long term.25,26 A greater height of macular detachment has been associated with worse postoperative vision.27 These findings align with the active regions depicted in the OCT heatmaps in our study. Furthermore, younger age and better preoperative visual acuity have been linked to better functional outcomes after retinal reattachment.11,28 These findings were consistent with our results, demonstrating improved predictive power when age and baseline BCVA were included in the deep learning model alongside OCT and ultra-widefield fundus images. In addition, compared with direct concatenation, the fusion method applied in this study boosted the predictive power. 
Previous studies primarily relied on information extracted by doctors to predict outcomes after retinal reattachment surgery. A meta-analysis identified several valuable OCT-related predictors, such as foveal detachment height, central macular thickness, breaks in the ellipsoid zone and/or external limiting membrane, presence of intraretinal cystic cavities, outer retinal corrugations, and macula-off status.29 These predictors align with the heatmaps generated in our study, which focused on the macular region and detached retinal lesions. In cases where multiple biomarkers point in different directions, such as the presence of cystic cavities and the absence of ellipsoid zone breaks, predicting outcomes becomes challenging. A study conducted by Yorston et al.7 included 2074 cases and used traditional multivariate regression to predict visual outcomes after retinal detachment surgery. The AUC (0.72) was relatively low even with such a large sample size. In contrast, the information extracted from OCT images using AI techniques in this study achieved an AUC of just below 0.8. This demonstrates that AI techniques can evaluate all the information as a whole, enabling more straightforward, quicker, and more accurate predictions. 
Using ultra-widefield fundus images combined with machine learning techniques has shown promising accuracy in detecting retinal detachment.30 Nevertheless, few studies have used these images to predict postoperative functional outcomes. In our study, models using ultra-widefield retinal images exhibited AUC values comparable to those using OCT images (0.77 vs. 0.78). However, it is possible that AI using OCT images could achieve higher accuracy with larger training datasets, as indicated by previous studies focusing on subtle changes in the affected retina. Although AI has been applied to diagnose and predict various ocular disorders,31 there is limited research on using AI to predict outcomes of retinal reattachment surgery. A pilot study by Fung et al.18 used simulated fundus photos generated by drawing tools to predict complete retinal reattachment after surgery, achieving an AUC of 0.94. Although our model yielded a slightly lower AUC, it has notable strengths: it produced an AUC of 0.91 with a significantly smaller sample size, and it directly used measurement data, including OCT and ultra-widefield fundus images, avoiding the potential errors introduced during the manual recording process in the study by Fung et al.18 
Although our study has shown promising outcomes in using AI to predict visual outcomes after retinal reattachment surgery, several limitations should be acknowledged. First, the small sample size of 184 cases limited the ability of AI to identify subtle retinal lesions that have been identified as biomarkers in previous studies; correspondingly, models based solely on OCT or ultra-widefield images yielded less promising AUC values (below 0.8), possibly because of the limited sample size or the algorithm used. Second, the data were derived from a single cohort treated at the same hospital by the same surgeon, which may limit the generalizability of the model to other settings and populations, although it also minimized the influence of confounding factors. Third, the visual outcomes were recorded only up to three months postsurgery, and predictions may differ over longer follow-up periods. Last, the models did not include the duration of macular detachment. According to the meta-analysis by van Bussel et al.,32 macular detachment lasting more than three to seven days significantly impacts visual outcomes. However, most retinal detachment patients in our hospital and other settings in China did not seek medical attention within the first week after symptom onset; instead, they often presented weeks later.33,34 For these reasons, our focus was on patients treated within one month of symptom onset. 
In conclusion, our study demonstrates that a deep learning multimodal model, primarily based on OCT and ultra-widefield fundus images, can accurately predict visual outcomes after PPV followed by gas tamponade for RRD, even with a relatively small sample size. The study also highlights the potential of AI in enhancing prognostic capabilities in this context. Further research should aim to include larger and more diverse cohorts with longer follow-up periods to validate and refine the model's predictive accuracy. 
Acknowledgments
The authors thank Cheng Lu of the Department of Radiology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, for thoroughly reviewing our manuscript and providing valuable feedback. 
Supported by the Science and Technology Program of Guangzhou, Guangdong Province, China (202201020075), the National Natural Science Fund (82302300), and the Guangdong Natural Science Foundation (2022A1515011252). 
Disclosure: H. Guo, None; C. Ou, None; G. Wang, None; B. Lu, None; X. Li, None; T. Yang, None; J. Zhang, None 
References
Feltgen N, Walter P. Rhegmatogenous retinal detachment–an ophthalmologic emergency. Dtsch Arztebl Int. 2014; 111(1–2): 12–21; quiz 2. [PubMed]
Mitry D, Charteris DG, Fleck BW, et al. The epidemiology of rhegmatogenous retinal detachment: geographical variation and clinical associations. Br J Ophthalmol. 2010; 94: 678–684. [CrossRef] [PubMed]
Li X. Incidence and epidemiological characteristics of rhegmatogenous retinal detachment in Beijing, China. Ophthalmology. 2003; 110: 2413–2417. [PubMed]
Tang Y, Chen A, Zou M, et al. Prevalence and time trends of refractive error in Chinese children: a systematic review and meta-analysis. J Glob Health. 2021; 11: 08006. [CrossRef] [PubMed]
Sultan ZN, Agorogiannis EI, Iannetta D, et al. Rhegmatogenous retinal detachment: a review of current practice in diagnosis and management. BMJ Open Ophthalmol. 2020; 5(1): e000474. [CrossRef] [PubMed]
Xu C, Wu J, Li Y, et al. Clinical characteristics of primary pars plana vitrectomy combined with air filling for rhegmatogenous retinal detachment. Sci Rep. 2022; 12(1): 7916. [CrossRef] [PubMed]
Yorston D, Donachie PHJ, Laidlaw DA, et al. Factors affecting visual recovery after successful repair of macula-off retinal detachments: findings from a large prospective UK cohort study. Eye (Lond). 2021; 35: 1431–1439. [CrossRef] [PubMed]
Gopal A, Starr M, Obeid A, et al. Predictors of vision loss after surgery for macula-sparing rhegmatogenous retinal detachment. Curr Eye Res. 2022; 47: 1209–1217. [CrossRef] [PubMed]
Doyle E, Herbert EN, Bunce C, et al. How effective is macula-off retinal detachment surgery. Might good outcome be predicted? Eye (Lond). 2007; 21: 534–540. [CrossRef] [PubMed]
Park DH, Choi KS, Sun HJ, Lee SJ. Factors associated with visual outcome after macula-off rhegmatogenous retinal detachment surgery. Retina. 2018; 38: 137–347. [CrossRef] [PubMed]
Geiger M, Smith JM, Lynch A, et al. Predictors for recovery of macular function after surgery for primary macula-off rhegmatogenous retinal detachment. Int Ophthalmol. 2020; 40: 609–616. [CrossRef] [PubMed]
Mowatt L, Tarin S, Nair RG, et al. Correlation of visual recovery with macular height in macula-off retinal detachments. Eye (Lond). 2010; 24: 323–327. [CrossRef] [PubMed]
Gharbiya M, Grandinetti F, Scavella V, et al. Correlation between spectral-domain optical coherence tomography findings and visual outcome after primary rhegmatogenous retinal detachment repair. Retina. 2012; 32: 43–53. [CrossRef] [PubMed]
Danese C, Lanzetta P. Optical coherence tomography findings in rhegmatogenous retinal detachment: a systematic review. J Clin Med. 2022; 11(19): 5819. [CrossRef] [PubMed]
Cho M, Witmer MT, Favarone G, et al. Optical coherence tomography predicts visual outcome in macula-involving rhegmatogenous retinal detachment. Clin Ophthalmol. 2012; 6: 91–96. [PubMed]
Lecleire-Collet A, Muraine M, Menard JF, Brasseur G. Predictive visual outcome after macula-off retinal detachment surgery using optical coherence tomography. Retina. 2005; 25: 44–53. [CrossRef] [PubMed]
Vujosevic S, Limoli C, Luzi L, Nucci P. Digital innovations for retinal care in diabetic retinopathy. Acta Diabetol. 2022; 59: 1521–1530. [CrossRef] [PubMed]
Fung THM, John N, Guillemaut JY, et al. Artificial intelligence using deep learning to predict the anatomical outcome of rhegmatogenous retinal detachment surgery: a pilot study. Graefes Arch Clin Exp Ophthalmol. 2023; 261: 715–721. [CrossRef] [PubMed]
Causes of blindness and vision impairment in 2020 and trends over 30 years, and prevalence of avoidable blindness in relation to VISION 2020: the Right to Sight: an analysis for the Global Burden of Disease Study. Lancet Glob Health. 2021; 9(2): e144–e160. [CrossRef] [PubMed]
Zadeh A, Chen M, Poria S, et al. Tensor fusion network for multimodal sentiment analysis. arXiv preprint arXiv:1707.07250, 2017.
Liu Z, Shen Y, Lakshminarasimhan VB, Liang PP, Zadeh A, Morency LP. Efficient low-rank multimodal fusion with modality-specific factors. arXiv preprint arXiv:1806.00064, 2018.
Molinaro AM, Simon R, Pfeiffer RM. Prediction error estimation: a comparison of resampling methods. Bioinformatics. 2005; 21: 3301–3307. [CrossRef] [PubMed]
Selvaraju RR, Cogswell M, Das A, et al. Grad-CAM: visual explanations from deep networks via gradient-based localization. Int J Comput Vis. 2020; 128: 336–359. [CrossRef]
Keel S, Wu J, Lee PY, et al. Visualizing deep learning models for the detection of referable diabetic retinopathy and glaucoma. JAMA Ophthalmol. 2019; 137: 288–292. [CrossRef] [PubMed]
Rezar S, Sacu S, Blum R, et al. Macula-on versus macula-off pseudophakic rhegmatogenous retinal detachment following primary 23-gauge vitrectomy plus endotamponade. Curr Eye Res. 2016; 41: 543–550. [PubMed]
Borowicz D, Nowomiejska K, Nowakowska D, et al. Functional and morphological results of treatment of macula-on and macula-off rhegmatogenous retinal detachment with pars plana vitrectomy and sulfur hexafluoride gas tamponade. BMC Ophthalmol. 2019; 19: 118. [CrossRef] [PubMed]
Ross W, Lavina A, Russell M, Maberley D. The correlation between height of macular detachment and visual outcome in macula-off retinal detachments of < or = 7 days' duration. Ophthalmology. 2005; 112: 1213–1217. [CrossRef] [PubMed]
Liu F, Meyer CH, Mennel S, et al. Visual recovery after scleral buckling surgery in macula-off rhegmatogenous retinal detachment. Ophthalmologica. 2006; 220: 174–180. [CrossRef] [PubMed]
Murtaza F, Goud R, Belhouari S, et al. Prognostic features of preoperative OCT in retinal detachments: a systematic review and meta-analysis. Ophthalmol Retina. 2023; 7: 383–397. [CrossRef] [PubMed]
Ohsugi H, Tabuchi H, Enno H, Ishitobi N. Accuracy of deep learning, a machine-learning technology, using ultra-wide-field fundus ophthalmoscopy for detecting rhegmatogenous retinal detachment. Sci Rep. 2017; 7(1): 9425. [CrossRef] [PubMed]
Hogarty DT, Mackey DA, Hewitt AW. Current state and future prospects of artificial intelligence in ophthalmology: a review. Clin Exp Ophthalmol. 2019; 47: 128–139. [CrossRef] [PubMed]
van Bussel EM, van der Valk R, Bijlsma WR, La Heij EC. Impact of duration of macula-off retinal detachment on visual outcome: a systematic review and meta-analysis of literature. Retina. 2014; 34: 1917–1925. [CrossRef] [PubMed]
Zheng L, Wu C, Luo M, et al. Air tamponade in vitrectomies for primary rhegmatogenous retinal detachment caused by superior breaks. Medicine (Baltimore). 2023; 102(43): e35546. [CrossRef] [PubMed]
Wang X, Zhang T, Tang WY, Huang X. Clinical manifestations and surgical outcomes of primary rhegmatogenous retinal detachment in patients < 30 years of age with high myopia. Biomed Environ Sci. 2023; 36: 644–648. [PubMed]