Open Access
Editorial  |   January 2020
Artificial Intelligence: Quo Vadis?
Translational Vision Science & Technology January 2020, Vol.9, 1. doi:https://doi.org/10.1167/tvst.9.2.1
Marco A. Zarbin
The discipline of artificial intelligence (AI) is undergoing exponential growth, as illustrated by the sequence of events presented in the Table.

Table. Brief History of AI
AI is a disruptive technology, and, as is the case with all such technologies, it creates risks and opportunities. Thus, some experts have expressed concerns that AI may create an I, Robot dystopia or a master–slave relationship between computers and humans. In this issue of Translational Vision Science and Technology, we explore the opportunities and risks that AI presents to the practice of ophthalmology and optometry. In contemplating the role of AI in vision science and medicine, it is important to recognize the differences between intelligence, computational capacity, and learning. 
What Is Intelligence?
Intelligence is an emergent property of unintelligent matter. An emergent property of a system is one that is not a property of any single component of the system but that arises from interactions among the component parts. Ocean waves, for example, are an emergent property of the effects of gravity and wind on water. Temperature, similarly, is an emergent property of the kinetic energy of many molecules (e.g., in steam).
What Is Computation?
Computation involves the transformation of one memory state into another.1 Computation is deterministic, that is, the same input always gives the same output. Computational capacity, therefore, is not the same as intelligence. Computations are task specific. It is trivial for a calculator to compute 314,159 × 271,828. For most of us, that calculation is difficult. Would anyone argue that a calculator is more intelligent than a human? In contrast, the human brain performs remarkably well at computations involving the highly complex task of image analysis. We can readily identify the authors of paintings (e.g., Picasso) by their distinctive style. We can even identify the nature of emotional interactions between subjects portrayed in the work (e.g., love between mother and child in Picasso's Mother and Child [1921]). So, computation is not intelligence, and computation is not learning, which is a recursive rearrangement of computational architecture.
Intelligence is associated with (1) an ability to acquire and apply knowledge and skills (learning), (2) using rules to reach conclusions (reasoning), and (3) self-correction.1 Intelligence is linked to computational capacity. This link is important and helps to explain why AI has become so pervasive at this point in human history. The computational capacity of computers has increased exponentially with the development of integrated circuits and microchips. 
What Is the Computational Limit of an Artificial Neural Network?
Tegmark1 has suggested that Moore's law (the number of transistors in a dense integrated circuit doubles approximately every 2 years) is irrelevant in this regard because integrated circuits are just the current substrate used for computation (preceded by punch cards [electromechanical], relay circuits, vacuum tubes, and transistors), whereas computations are substrate independent. One estimate is that the computational capacity of matter is 10³³ times beyond the current state of the art.2 The development of quantum computers seems likely to bring this potential to fruition. If this analysis is accurate, AI may be destined to play a major role in all aspects of medical practice, not just image analysis.3
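As a back-of-the-envelope illustration (not from the editorial itself), one can combine the two figures above, a doubling of capacity roughly every 2 years and a physical headroom of 10³³ over current hardware, to ask how long that headroom would last at a Moore's-law pace:

```python
import math

# Figures cited in the text: Moore's law doubles capacity roughly
# every 2 years, and the physical limit of computation is estimated
# at ~10^33 times the current state of the art (Lloyd, 2000).
doubling_period_years = 2
headroom_factor = 10**33

# Number of doublings needed to close a 10^33 gap is log2(10^33).
doublings = math.log2(headroom_factor)
years = doublings * doubling_period_years

print(f"{doublings:.0f} doublings ≈ {years:.0f} years at Moore's-law pace")
# → 110 doublings ≈ 219 years at Moore's-law pace
```

In other words, even at historical exponential rates, the estimated physical ceiling is centuries away, which is one way to read the claim that the current substrate, not physics, is the binding constraint.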
Why Will AI Influence Ophthalmology and Optometry?
Image analysis plays a central role in the diagnosis and management of the leading causes of blindness, namely, glaucoma, diabetic retinopathy, and age-related macular degeneration. Association of images with a differential diagnosis is something that AI may be able to do as well as or better than humans. We tend to collapse visual information, whereas computer vision uses information in each of the millions of pixels that may comprise an image. This fact may explain why AI-directed analysis of fundus images has been able to predict gender, refractive error, blood pressure, and risk of stroke remarkably accurately.4–10 Optical coherence tomography imaging may even support identification of undiagnosed cases of dementia.11–14 Thus, fundus images may enable us to diagnose a variety of systemic diseases, not just conditions such as diabetes mellitus, hypertension, and blood dyscrasias.
How Will AI Influence Ophthalmology and Optometry?
AI is likely to have an important direct impact on access to care, monitoring and treatment of chronic conditions, and clinical trial design. AI can identify common vision-threatening diseases with sensitivity and specificity comparable to those of experienced clinicians.15–24 AI may even predict diabetic retinopathy progression.25 If routine screening could be done outside of doctors' offices, then clinicians could spend more time treating rather than screening patients. Regarding trial design, greater analytical capacity may improve hypothesis generation and patient selection, as well as support more sensitive clinical outcome monitoring, thus enabling one to design clinical trials of shorter duration with fewer patients enrolled. Through these effects, AI may indirectly increase the amount of innovation in ophthalmology, optometry, and vision science by increasing the amount of time clinicians can dedicate to research, improving data analysis (e.g., in big datasets associated with basic as well as clinical research) and, via increased efficiency of clinical trial design and execution, increasing the working capital available to support more extensive efforts in research and product development.
What Are the Expectations and Limitations of AI?
AI might be able to revolutionize our interactions with the electronic health record and improve our analysis of complex datasets. Imagine, for example, that the electronic health record, using voice recognition, natural language processing, and machine learning paradigms, could introduce the patient to you as you enter the examination room and could list the relevant diagnoses, relevant changes in medical status, most recent treatment received by the patient, and current clinical impression based on an automated analysis of all the imaging data collected thus far during the visit. Or imagine that AI could guide a scientist to choose the most robust method of statistical analysis of the metabolome data generated in a preclinical experiment that tests the effects of mitochondrial rejuvenation on the progression of geographic atrophy. So, AI may enable us to complete our professional tasks more efficiently, but also at a higher level of competence. Despite these possibilities, AI seems unlikely to replace physicians or scientists in the near term any more than it can replace pilots in a cockpit. AI might be able to make an appropriate treatment recommendation for a patient, but at this time, it does not have the capacity to answer the wide variety of questions a patient may have regarding the selection of one among various effective treatment options (e.g., scleral buckle vs. vitrectomy for treating retinal detachment; intraocular vs. topical steroid for treating macular edema), nor can it explain the potential complications of treatment in a manner that is clear and also appropriate for a wide variety of situations (e.g., a cognitively impaired patient accompanied by family members). I imagine that, for the next decade at least, AI can serve as a highly competent partner in our various missions, a partner that will enable us to perform better, possibly at the highest level of which we are capable.
We are at an inflection point in human evolution that has few parallels (e.g., the development of farming, the development of the printing press, the industrial revolution). We do not always recognize change, however, even if it is monumental. Our perceptual apparatus, despite its many strengths, is limited. Few of us, for example, recognize that we are moving through space around the sun at 108,000 km/h. AI is becoming an integral part of our lives; for example, automated customer service representatives, voice recognition in mobile phones, robot vacuum cleaners, self-driving cars, and computer-assisted diagnostics are ubiquitous. 
This issue of Translational Vision Science and Technology explores various aspects of AI, including the computational architecture underlying this discipline, as well as some current applications to vision science and clinical care. Some of the articles are invited reviews and editorials by leading experts in the field, and others are original research. We hope that these reports will make the field of AI more accessible to clinicians and scientists engaged in vision research and will stimulate additional original contributions in this area. 
References
1. Tegmark M. Life 3.0: Being Human in the Age of Artificial Intelligence. New York: Alfred A. Knopf; 2017.
2. Lloyd S. Ultimate physical limits to computation. Nature. 2000; 406: 1047–1054.
3. Hinton G. Deep learning: a technology with the potential to transform health care. JAMA. 2018; 320: 1101–1102.
4. McGeechan K, Liew G, Macaskill P, et al. Prediction of incident stroke events based on retinal vessel caliber: a systematic review and individual-participant meta-analysis. Am J Epidemiol. 2009; 170: 1323–1332.
5. McGeechan K, Liew G, Macaskill P, et al. Meta-analysis: retinal vessel caliber and risk for coronary heart disease. Ann Intern Med. 2009; 151: 404–413.
6. Wong TY, Klein R, Couper DJ, et al. Retinal microvascular abnormalities and incident stroke: the Atherosclerosis Risk in Communities Study. Lancet. 2001; 358: 1134–1140.
7. Wong TY, Klein R, Sharrett AR, et al. Retinal arteriolar narrowing and risk of coronary heart disease in men and women. The Atherosclerosis Risk in Communities Study. JAMA. 2002; 287: 1153–1159.
8. Wong TY, Klein R, Sharrett AR, et al. The prevalence and risk factors of retinal microvascular abnormalities in older persons: the Cardiovascular Health Study. Ophthalmology. 2003; 110: 658–666.
9. Poplin R, Varadarajan AV, Blumer K, et al. Prediction of cardiovascular risk factors from retinal fundus photographs via deep learning. Nat Biomed Eng. 2018; 2: 158–164.
10. Varadarajan AV, Poplin R, Blumer K, et al. Deep learning for predicting refractive error from retinal fundus images. Invest Ophthalmol Vis Sci. 2018; 59: 2861–2868.
11. Thomson KL, Yeo JM, Waddell B, Cameron JR, Pal S. A systematic review and meta-analysis of retinal nerve fiber layer change in dementia, using optical coherence tomography. Alzheimers Dement (Amst). 2015; 1: 136–143.
12. Mutlu U, Colijn JM, Ikram MA, et al. Association of retinal neurodegeneration on optical coherence tomography with dementia: a population-based study. JAMA Neurol. 2018; 75: 1256–1263.
13. Ko F, Muthy ZA, Gallacher J, et al. Association of retinal nerve fiber layer thinning with current and future cognitive decline: a study using optical coherence tomography. JAMA Neurol. 2018; 75: 1198–1205.
14. Coppola G, Di Renzo A, Ziccardi L, et al. Optical coherence tomography in Alzheimer's disease: a meta-analysis. PLoS One. 2015; 10: e0134750.
15. Tufail A, Rudisill C, Egan C, et al. Automated diabetic retinopathy image assessment software: diagnostic accuracy and cost-effectiveness compared with human graders. Ophthalmology. 2017; 124: 343–351.
16. Christopher M, Belghith A, Bowd C, et al. Performance of deep learning architectures and transfer learning for detecting glaucomatous optic neuropathy in fundus photographs. Sci Rep. 2018; 8: 16685.
17. Kankanahalli S, Burlina PM, Wolfson Y, Freund DE, Bressler NM. Automated classification of severity of age-related macular degeneration from fundus photographs. Invest Ophthalmol Vis Sci. 2013; 54: 1789–1796.
18. Shi L, Wu H, Dong J, Jiang K, Lu X, Shi J. Telemedicine for detecting diabetic retinopathy: a systematic review and meta-analysis. Br J Ophthalmol. 2015; 99: 823–831.
19. Khouri AS, Szirth BC, Shahid KS, Fechtner RD. Software-assisted optic nerve assessment for glaucoma tele-screening. Telemed J E Health. 2008; 14: 261–265.
20. Aslam T, Fleck B, Patton N, Trucco M, Azegrouz H. Digital image analysis of plus disease in retinopathy of prematurity. Acta Ophthalmol. 2009; 87: 368–377.
21. Xia T, Patel SN, Szirth BC, Kolomeyer AM, Khouri AS. Software-assisted depth analysis of optic nerve stereoscopic images in telemedicine. Int J Telemed Appl. 2016; 2016: 7603507.
22. De Fauw J, Ledsam JR, Romera-Paredes B, et al. Clinically applicable deep learning for diagnosis and referral in retinal disease. Nat Med. 2018; 24: 1342–1350.
23. Chakravarthy U, Goldenberg D, Young G, et al. Automated identification of lesion activity in neovascular age-related macular degeneration. Ophthalmology. 2016; 123: 1731–1736.
24. Gulshan V, Peng L, Coram M, et al. Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA. 2016; 316: 2402–2410.
25. Arcadu F, Benmansour F, Maunz A, Willis J, Haskova Z, Prunotto M. Deep learning algorithm predicts diabetic retinopathy progression in individual patients. NPJ Digit Med. 2019; 2: 92.