Open Access
Special Issue  |   August 2020
Current Challenges and Barriers to Real-World Artificial Intelligence Adoption for the Healthcare System, Provider, and the Patient
Author Affiliations & Notes
  • Rishi P. Singh
    Cole Eye Institute, Cleveland Clinic, Cleveland, OH, USA
  • Grant L. Hom
    Case Western Reserve University School of Medicine, Cleveland, OH, USA
  • Michael D. Abramoff
    Retina Service, Department of Ophthalmology and Visual Sciences, University of Iowa, Iowa City, IA, USA
    Department of Electrical and Computer Engineering, University of Iowa, Iowa City, IA, USA
    Department of Biomedical Engineering, University of Iowa, Iowa City, IA, USA
    Digital Technologies Inc (formerly IDx), Coralville, IA, USA
  • J. Peter Campbell
    Department of Ophthalmology, Oregon Health & Science University, Portland, OR, USA
  • Michael F. Chiang
    Department of Ophthalmology, Oregon Health & Science University, Portland, OR, USA
    Department of Medical Informatics & Clinical Epidemiology, Casey Eye Institute, Oregon Health & Science University, Portland, OR, USA
  • Correspondence: Rishi P. Singh, 9500 Euclid Avenue, Desk i32, Cleveland, OH 44195, USA. e-mail: singhr@ccf.org 
Translational Vision Science & Technology, August 2020, Vol. 9, 45. https://doi.org/10.1167/tvst.9.2.45
Citation: Rishi P. Singh, Grant L. Hom, Michael D. Abramoff, J. Peter Campbell, Michael F. Chiang, on behalf of the AAO Task Force on Artificial Intelligence; Current Challenges and Barriers to Real-World Artificial Intelligence Adoption for the Healthcare System, Provider, and the Patient. Trans. Vis. Sci. Tech. 2020;9(2):45. https://doi.org/10.1167/tvst.9.2.45.

Introduction
Artificial intelligence (AI), the use of automated systems that can correctly interpret external data, learn from those data, and use what they learn to achieve specific goals, is an emerging technology with myriad implications for changing the way we interact with the world. Although this technology is already being used in many fields such as banking, retail, and education, AI has the potential to transform other fields, including healthcare. Within healthcare, ophthalmology is uniquely positioned to benefit from AI not only through clinical decision support technology but also through improved image processing innovations such as real-time segmentation, automated image quality improvements, and assisted or autonomous disease screening tools.1,2 Although there are now Food and Drug Administration–approved technologies within ophthalmology, such as IDx-DR (Coralville, IA, USA) for early diagnosis of diabetic retinopathy and diabetic macular edema, numerous challenges must still be overcome to realize the potentially transformative impact of these technologies in day-to-day practice. 
The need for the ophthalmology community to take a thoughtful approach to AI innovation and implementation is accentuated by the high stakes involved. The impact of misleading patients and clinicians about a health condition is far greater than that of a retail store misjudging the next book you might like to buy. As a result, we need to increase discussion about who, what, when, how, and why we might use AI in practice, including ethical and liability considerations, to determine how best to implement AI for all stakeholders, including practitioners, patients, practices/hospitals, and industry. This article aims to highlight the challenges and barriers to real-world AI adoption that affect the technology's utility. We examine the specific challenges facing healthcare organizations, providers, and patients. 
The Challenge for Healthcare Organizations
Healthcare organizations and medical practices will not only be adopters of industry-developed AI platforms but also innovators in the development of novel AI platforms. For example, health systems can create independent AI for many aspects of their organizations, such as billing. For clinical care, where regulatory approval is needed, health systems will play important research and development roles for new AI technology. Because health systems will be involved in both adopting and creating new AI platforms, organizations should consider the different challenges each option may present. 
Considerations for Adopting Existing AI Technology
To successfully implement industry-developed AI, collaboration and transparency with vendors will be critical because of the potential liability healthcare systems assume when using AI technology for patient care. Moreover, the AI business is rapidly evolving, and identifying leading AI vendors will be challenging early on. One challenge for individual organizations is determining how to assess different vendors of AI platforms. Notably, the lack of established AI suppliers may leave healthcare vulnerable to companies that exaggerate their offerings while having a limited understanding of how to apply AI's abilities to healthcare needs. Early AI offerings may lack features such as interoperability and integration with existing electronic infrastructures and electronic health record (EHR) systems.3 Furthermore, because of regulatory considerations, initial AI products will necessarily have narrow clinical utility (e.g., detection of referable diabetic retinopathy but not other retinal or ophthalmic disease), whereas a broader use case and product might most benefit the organization and society. Therefore, many opportunities exist for health systems and industry to codesign systems that are most clinically useful for providers. 
Inherently, some AI algorithms, such as convolutional neural networks, are “black boxes” in terms of which features they use to make decisions. Regulatory agencies focus on the safety and efficacy of these systems for a particular use case, but the healthcare community needs to carefully consider the relative risks and benefits of accepting an “uninterpretable” device compared with the current standard of care. It may be that we are willing to tolerate agnosticism about algorithm features if outcomes are improved in a meaningful way.4 On the other hand, the art of medicine has always allowed providers to use their judgment to tailor clinical care to an individual patient, which may not be possible with an algorithm. Even as we as a field consider the acceptability of black-box algorithms,4 advances in computer science and a push for interpretability from a regulatory perspective will likely lead to more explainable AI in the future. 
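As one illustration of what interpretability can mean in practice, the sketch below computes an occlusion-sensitivity map: each image patch is masked in turn and the resulting drop in the model's predicted probability is recorded, highlighting the regions the model relied on. The model object, image shape, and patch size here are hypothetical assumptions for illustration, not a description of any regulated product.

```python
import numpy as np

def occlusion_sensitivity(predict_proba, image, patch=16, baseline=0.0):
    """Occlusion map: probability drop when each image patch is masked.

    predict_proba: callable taking an (H, W) array and returning a
        probability of disease in [0, 1] (a stand-in for any classifier).
    image: 2D numpy array (a grayscale fundus image, for example).
    """
    h, w = image.shape
    reference = predict_proba(image)
    heatmap = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = baseline  # mask one patch
            heatmap[i // patch, j // patch] = reference - predict_proba(occluded)
    return heatmap  # larger values mark regions the model relied on most

# Toy usage with a dummy "model" that keys on mean brightness.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((64, 64))
    dummy_model = lambda x: float(x.mean())
    print(occlusion_sensitivity(dummy_model, img).round(3))
```

Maps of this kind do not make the underlying network interpretable, but they give clinicians a sanity check that the regions driving a prediction are anatomically plausible.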
In healthcare, very few technologies become commonplace without favorable financial reimbursement models, and it remains to be seen how this will work for AI. Telemedicine is a perfect example: despite the clear cost-effectiveness of remote care delivery models, it was not until the recent worldwide COVID-19 pandemic that reimbursement for telemedicine services encouraged widespread utilization. In a relative value unit (RVU)–driven reimbursement system, we need to advocate for reimbursements that appropriately incentivize AI technology that leads to cost savings through improved efficiency, outcomes, or access to care. The American Medical Association Digital Medicine Payment Advisory Group and the U.S. Congress have been working on payment models based on the CPT® system, but reimbursement solutions remain unclear. The uncertainty regarding financial reimbursement may affect whether organizations choose to be early adopters of these technologies. 
Considerations for Organizations Developing Their Own AI Technology
Developing organization-specific AI technology has a whole host of advantages, such as system customizability to serve an organization's unique needs and better interoperability with the organization's existing infrastructure. However, organizations that choose to develop their own AI systems should be aware of the greater technical complexities of AI compared with previous technological innovations. One key component of successfully developing AI systems independently is having the workforce to build, maintain, and improve them. Knowledgeable personnel are likely to be in short supply during the early development of new systems; Deloitte reports that 68% of United States information technology and related-services business leaders in its State of AI in the Enterprise survey are concerned about a moderate to extreme AI skills gap.5 Consequently, will an AI skills gap limit an organization's adoption and development of AI technology? Each organization can develop some implementations to address this, but a sizable, knowledgeable workforce is needed to develop these systems, to teach nonexperts (e.g., clinicians and support staff) how to use the technology, and to quickly address problems clinicians may have. 
Healthcare organizations will need to consider the human and material resources necessary for the development and implementation of intramural AI systems. (1) Data infrastructure and storage are complex and expensive. (2) Data labeling is a laborious process that currently requires significant resources for novel AI development; standard labeling protocols as part of clinical care may help, but compliance with these labels is often noisy, which may complicate AI training. (3) The training of AI systems and quality improvement take time; labeling errors, for example, can impede training, and built-in biases can affect external performance. (4) Incorporating feedback regarding errors in the system requires both time and material effort from clinicians and programmers, which need to be accounted for. In reality, many practices lack the resources (e.g., financial, time, expertise) to stay up to date on the large systems needed to successfully maintain and operate an independent AI platform. 
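As one illustration of the labeling burden, a common mitigation for noisy labels is to collect grades from several independent readers and accept only those images on which a minimum number agree, sending the rest for adjudication. The sketch below is a minimal, hypothetical example; the number of graders, image identifiers, and agreement threshold are assumptions for illustration, not a description of any specific institution's workflow.

```python
from collections import Counter

def consensus_label(grades, min_agreement=2):
    """Return the majority label if at least `min_agreement` graders agree,
    otherwise None (flag the image for adjudication).

    grades: list of labels from independent graders, e.g., [1, 1, 0].
    """
    label, count = Counter(grades).most_common(1)[0]
    return label if count >= min_agreement else None

# Example: three graders per image; disagreements go back for adjudication.
image_grades = {"img_001": [1, 1, 0], "img_002": [0, 1, 1], "img_003": [1, 0, None]}
labels = {k: consensus_label([g for g in v if g is not None]) for k, v in image_grades.items()}
print(labels)  # {'img_001': 1, 'img_002': 1, 'img_003': None}
```

Even a simple scheme like this multiplies the grading workload by the number of readers, which is part of why independent AI development is resource intensive.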
Data Testing and Quality Improvement
It remains to be seen how adopters of AI technology might play a role in refining and improving AI platforms. In healthcare, datasets used for training AI platforms are often limited in some way, whether by small numbers, biased demographics, or, in the case of radiology for example, institutional scanning parameters. Thus, models trained in one context may fail to generalize to other contexts. Pharmaceutical companies are required to conduct post-marketing (phase IV) surveillance; the regulatory landscape for evaluating AI technologies in an analogous manner after approval remains ambiguous, however, and there is a significant potential role for healthcare organizations to play here. The major limitations are regulations regarding patient privacy and data sharing, which will need to be addressed. Setting up a business associate agreement on data liability and ownership requires an individual agreement with each specific vendor involved. High-quality data sampling is the best proxy for data sharing, but challenges exist in collecting datasets representative enough to make predictions for diverse populations. Data sampling methods may reduce the amount of stored and shared data needed to run models, but methods to monitor data quality will be needed to ensure accurate outcomes.6 
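One practical form of post-approval surveillance that organizations could perform locally is monitoring whether incoming data still resemble the data on which a model was validated. The sketch below compares the distribution of a single feature between a reference (validation) set and recent production cases using a two-sample Kolmogorov-Smirnov test; the chosen feature, sample sizes, and alert threshold are illustrative assumptions only.

```python
import numpy as np
from scipy.stats import ks_2samp

def drift_alert(reference, recent, alpha=0.01):
    """Flag a feature whose recent distribution differs from the reference.

    reference, recent: 1D arrays of the same feature (e.g., patient age or
    image brightness) from the validation set and from recent clinic cases.
    """
    stat, p_value = ks_2samp(reference, recent)
    return {"statistic": float(stat), "p_value": float(p_value), "drift": p_value < alpha}

rng = np.random.default_rng(1)
validation_ages = rng.normal(60, 12, size=5000)   # population at validation
production_ages = rng.normal(52, 12, size=800)    # younger screening population
print(drift_alert(validation_ages, production_ages))  # expect drift=True
```

A drift flag of this kind does not prove the model is failing, but it identifies when performance should be re-audited on local data.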
The Role of “Company Culture” in Embracing AI
Some organizations may value technological improvements such as AI more than others. AI is a potential paradigm shift for many aspects of healthcare delivery, and systems will therefore need to adapt to how the technology disrupts the status quo. Organizations may question AI's value in their daily activities. Business leaders may lack understanding of how AI implementation can create value or may have business goals that do not align with an AI implementation strategy. A recent Harvard Business Review article cites integrating domain experts and business leaders with coders as the most commonly identified challenge in building a successful AI system.7 Consequently, building an AI implementation strategy that creates value for the organization requires a strategic approach: setting objectives, identifying key performance indicators, and tracking return on investment. Nevertheless, communicating that value to all staff members can be difficult if some feel that the value of AI does not align well with their goals. Understanding AI within the context of an organization's strategy takes time, planning, and clear communication that company culture needs to support. As a field, we lack clarity about what a successful AI implementation looks like. The ophthalmic community should discuss and develop concrete objectives and key results that should be met to track successful implementation. Individual organizations may focus more on financial and clinical efficacy measures, but ophthalmic practices should also measure and share how new technology affects our staff and our patients. 
The Challenge for Health Care Providers
The AI Learning Curve
Generally speaking, it is to the advantage of AI developers to make AI platforms user friendly.8 However, depending on the intended purpose of the AI, there may be challenges in integrating it into daily clinical practice. Will physicians need to develop individual systems to coordinate AI results with EHR charting? Because physicians already have varying levels of technology literacy, frustration may grow as they learn how to incorporate and use AI platforms while still struggling with existing technologies such as the EHR. Furthermore, taking the time to understand how AI algorithms operate may add responsibilities that exacerbate physician burnout. For example, clinicians will need to consider the opportunity costs of using AI technology to guide patient management versus seeing the patient in person. A world with autonomous AI clinical decision-making tools would likely include alert systems to advise the clinician of a problem. To minimize risk, however, these AI systems may take a cautious approach to alerting and err on the side of over-referral. Depending on how such a system is designed, physicians may be at risk for alert fatigue. Consequently, communication systems between AI platforms and providers need to be thoughtfully designed. 
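To make the over-referral trade-off concrete, one approach is to select the decision threshold that guarantees a target sensitivity and then examine the referral rate that choice implies for the clinic. The sketch below does this on synthetic scores; the target sensitivity, disease prevalence, and score distributions are assumptions for illustration and do not describe any deployed system.

```python
import numpy as np

def threshold_for_sensitivity(scores, labels, target_sensitivity=0.95):
    """Pick the highest score threshold whose sensitivity meets the target."""
    scores, labels = np.asarray(scores), np.asarray(labels)
    candidate_thresholds = np.sort(np.unique(scores))[::-1]
    for t in candidate_thresholds:
        referred = scores >= t
        sensitivity = referred[labels == 1].mean()
        if sensitivity >= target_sensitivity:
            return t, referred.mean()  # threshold and overall referral rate
    return candidate_thresholds[-1], 1.0

rng = np.random.default_rng(2)
labels = rng.random(10000) < 0.05                                   # 5% prevalence
scores = np.where(labels, rng.beta(5, 2, 10000), rng.beta(2, 5, 10000))
t, referral_rate = threshold_for_sensitivity(scores, labels.astype(int))
print(f"threshold={t:.2f}, referral rate={referral_rate:.1%}")
```

Pushing the target sensitivity higher drives the referral rate up, which is exactly the alert-fatigue pressure described above.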
Physicians may also have concerns about bias built into AI technology. AI platforms are limited by the principle of “what goes in is what comes out”: the algorithm is only as good as the data source teaching it. Consequently, depending on the condition the AI platform is intended to address, clinicians may be concerned that the platform does not account for racial, ethnic, gender, and other sociodemographic characteristics that may be important to consider, as has been seen in other domains.9 In the Framingham Heart Study, for example, cardiovascular event risk predictions for nonwhite patients were slightly biased.10 As with any clinical decision-making tool, including our own “clinical judgment,” clinicians need to learn to be conscious of hidden biases that may affect clinical decision making and outcomes. This could be particularly important in AI therapeutic models where the output tells the physician to inject but the physician disagrees. 
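A simple safeguard against hidden bias is to report a model's sensitivity and specificity separately for each demographic subgroup rather than only in aggregate, so that underperformance in any group is visible before and after deployment. The sketch below computes such a breakdown; the group labels and data are placeholders, not results from any actual system.

```python
import numpy as np

def subgroup_performance(y_true, y_pred, groups):
    """Sensitivity and specificity per subgroup (all inputs are 1D arrays)."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    report = {}
    for g in np.unique(groups):
        m = groups == g
        tp = np.sum((y_pred[m] == 1) & (y_true[m] == 1))
        tn = np.sum((y_pred[m] == 0) & (y_true[m] == 0))
        fn = np.sum((y_pred[m] == 0) & (y_true[m] == 1))
        fp = np.sum((y_pred[m] == 1) & (y_true[m] == 0))
        report[str(g)] = {
            "n": int(m.sum()),
            "sensitivity": float(tp / (tp + fn)) if tp + fn else None,
            "specificity": float(tn / (tn + fp)) if tn + fp else None,
        }
    return report

# Placeholder example: look for subgroups whose sensitivity lags the others.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
groups = ["A", "A", "A", "B", "B", "B", "B", "B"]
print(subgroup_performance(y_true, y_pred, groups))
```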
Building an AI Competent Physician Workforce
If the healthcare industry transitions to becoming AI reliant, then our training institutions have a responsibility to prepare current and future physicians and healthcare professionals to become AI competent. However, teaching approaches and the timing for adding AI-related material to medical school curricula are unclear, because medical schools do not know what a physician's job will look like in an AI world. This ambiguity creates uncertainty about what medical schools should embrace. In the current teaching environment, medical schools have different curricula around research and data science. Research and data skill sets will be important for future physicians to collaborate with AI developers and successfully implement new technologies, but what opportunity costs are we willing to accept? An article by Paranjape and colleagues11 suggests that future physicians will need to develop knowledge of mathematical concepts, AI fundamentals, data science, and the corresponding ethical and legal issues. However, current incentives for medical schools are not well aligned with building these skill sets, considering the already dense medical school curriculum and the limited number of medical faculty who are AI competent and capable of teaching how to incorporate AI into clinical practice. AI competency should focus on outcome expectations and risk assessment, with basic literacy developed at the medical school level and further developed in clinical training. 
Challenges for Patients
Will Patients Embrace This Technology?
For AI technology to be successfully implemented, patients must consent to the use of this technology in their care. AI has the potential to change the paradigm for how health diagnoses and treatment recommendations are delivered to patients. Will patients be willing to accept a computer diagnosis rather than one from a human because it saves time and money? In an autonomous diagnostic setting, will patients depend on nonexpert device operators for comfort and clarification? Coping with a diagnosis may be challenging before meeting with a provider who can answer questions and explain the context or relevance of the diagnosis to the patient. Above all else, humans can provide gentleness and compassion that machines cannot. 
Moreover, a limiting factor in patients embracing this technology is trust that data collection is safe and secure. Patients may distrust “impersonal” data collection software such as AI to hold diagnoses and treatment information. A recent survey on AI in the United States indicates that respondents consider data privacy the most important issue when thinking about this technology.12 How the ophthalmic community and healthcare industry implement AI may play a significant role in patients' perceptions. For example, implementation of AI at a physician's office versus a local drugstore may affect a patient's willingness to use the technology. There may be new concerns that AI platform developers will have access to patient data. Consequently, what trust and physician-patient confidentiality look like in an AI world merits consideration. 
Outlook for the Future
The purpose of this perspective was to sketch the barriers that need to be addressed for AI to become a success for healthcare organizations, providers, and patients. Within the realm of design, AI has been based upon maximally reducible characteristics aligned with the scientific knowledge of human clinician cognition, rather than proxy characteristics.13,14 With regard to appropriate data usage, AI creators must now collect data in compliance with regulations and legislation, maintain maximum traceability of the data pedigree, and steward the data accordingly.15,16 To maximize alignment among clinical workflow, evidence-based clinical standards of care, and practice patterns from quality-of-care organizations, professional medical societies and patient organizations are expressing their views and establishing preferred practice patterns.17 In the realm of validation of safety, efficacy, and equity, reference standards are being validated against clinical outcomes, or surrogate outcomes where appropriate, to avoid subjectivity and intraobserver and interobserver issues with physicians and other human experts.13,16,18,19 Finally, there are great examples of progress with the inclusion of AI systems in standards of care where appropriate validations of safety, efficacy, and equity exist, such as the inclusion of autonomous AI within the Standards of Medical Care in Diabetes stewarded by the American Diabetes Association.20 
The initial integration challenges of AI systems are being addressed by connecting them to existing medical record systems through industry standards such as DICOM, FHIR, and HL7, supported by legislation such as HITECH.21 Lastly, the assignment of liability or other protections is being defined based on the accountability principle for autonomous AI output commensurate with its indications.13 Last year, the AMA included in its AI policy that autonomous AI creators are responsible and liable if any harm is caused by the diagnostic system they create.13,22,23 This is most pertinent for autonomous AI, where it would be unreasonable to hold a provider liable for a diagnosis that is outside the scope and comfort level of their usual practice and expertise. 
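As a concrete illustration of standards-based integration, the sketch below assembles a minimal FHIR-style Observation resource that could carry an autonomous screening result into an EHR. The coding system URL, identifiers, and result codes are placeholders; a production interface would follow the specific FHIR profile and terminology agreed on with the EHR vendor.

```python
import json

def screening_result_observation(patient_id, device_id, result_text, result_code):
    """Build a minimal FHIR R4-style Observation for an AI screening result.

    All identifiers, codes, and the coding system URL are placeholders
    used only for illustration.
    """
    return {
        "resourceType": "Observation",
        "status": "final",
        "code": {"text": "Automated diabetic retinopathy screening result"},
        "subject": {"reference": f"Patient/{patient_id}"},
        "device": {"reference": f"Device/{device_id}"},
        "valueCodeableConcept": {
            "coding": [{"system": "http://example.org/ai-screening", "code": result_code}],
            "text": result_text,
        },
    }

obs = screening_result_observation(
    patient_id="12345",
    device_id="ai-screening-device-01",
    result_code="referable-dr-detected",
    result_text="More than mild diabetic retinopathy detected; refer to eye care",
)
print(json.dumps(obs, indent=2))  # payload an EHR interface could ingest via a FHIR API
```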
Conclusions
AI is poised to be a technology that dramatically shapes the future of ophthalmic practices in multiple ways. However, there will be both predictable and unforeseen challenges that arise with the implementation of AI in clinical medicine. In this article, we have discussed some challenges that the ophthalmic and healthcare communities need to consider as AI technology improves and becomes available for clinical use. Even if ophthalmic practices embrace AI technology, important unknown variables to successful AI adoption are the response of patients to this technology and the impact on the physician-patient relationship. As AI technology continues to be developed and adopted, we encourage continued collaboration among all stakeholders including providers, industry, and patients to best support each party's respective interests and to ensure optimal outcomes for our patients. 
Acknowledgments
Supported by Grants R01EY19474, K12EY27720, and P30EY10572 from the National Institutes of Health (Bethesda, MD), by grant SCH-1622679 from the National Science Foundation (Arlington, VA), and by unrestricted departmental funding and a Career Development Award from Research to Prevent Blindness for MFC and JPC. 
AAO Artificial Intelligence Task Force Members  
Michael F. Chiang, MD (Chair), Departments of Ophthalmology and Medical Informatics & Clinical Epidemiology, Casey Eye Institute, Oregon Health & Science University, Portland, OR, USA. 
Michael D. Abràmoff, MD, PhD, Retina Service, Departments of Ophthalmology and Visual Sciences, Electrical and Computer Engineering, and Biomedical Engineering, University of Iowa, Iowa City, IA, USA; IDx, Coralville, IA, USA. 
J. Peter Campbell, MD, MPH, Department of Ophthalmology, Oregon Health & Science University, Portland, OR, USA. 
Pearse A. Keane, MD, FRCOphth, Institute of Ophthalmology, University College London, UK; Medical Retina Service, Moorfields Eye Hospital NHS Foundation Trust, London, UK. 
Aaron Y. Lee, MD, MSCI, Department of Ophthalmology, University of Washington, Seattle, WA, USA. 
Flora C. Lum, MD, American Academy of Ophthalmology, San Francisco, CA, USA. 
Michael X. Repka, MD, MBA, Wilmer Eye Institute, Johns Hopkins University, Baltimore, MD, USA. 
Rishi P. Singh, MD, Cole Eye Institute, Cleveland Clinic, Cleveland, OH, USA. 
Daniel Ting, MD, PhD, Singapore National Eye Center, Duke-NUS Medical School, Singapore, Singapore. 
Disclosure: R.P. Singh, Alcon (C), Genentech (C), Novartis (C), Apellis (F), Bayer (C), Carl Zeiss Meditec (C), Aerie (F), Graybug (F), Regeneron (C); G.L. Hom, None; M.D. Abramoff, Digital Technologies (I, F, E, P, S), Alimera (F); J.P. Campbell, Genentech (F); M.F. Chiang, National Institutes of Health (F), National Science Foundation (F), Genentech (F), Novartis (C), InTel Retina, LLC (I); P.A. Keane, DeepMind Technologies (C), Roche (C), Novartis (C), Apellis (C), Bayer (F), Allergan (F), Topcon (F), Heidelberg Engineering (F); A.Y. Lee, US FDA (E), Genentech (C), Topcon (C), Verana Health (C), Santen (F), Novartis (F), Carl Zeiss Meditec (F); M.X. Repka, None; D.S.W. Ting, EyRIS (IP), Novartis (C), Ocutrx (I, C), Optomed (C); F.C. Lum, None 
References
1. Would you trust an algorithm to diagnose an illness? CNN. https://www.cnn.com/2019/07/15/business/artificial-intelligence-healthcare/index.html. Accessed May 15, 2020.
2. Google research shows how AI can make ophthalmologists more effective. EurekAlert! Science News. https://www.eurekalert.org/pub_releases/2019-03/aaoo-grs031819.php. Accessed May 15, 2020.
3. Lehne M, Sass J, Essenwanger A, Schepers J, Thun S. Why digital medicine depends on interoperability. npj Digit Med. 2019;2:1–5, doi:10.1038/s41746-019-0158-1.
4. Holm EA. In defense of the black box. Science. 2019;364:26–27, doi:10.1126/science.aax0162.
5. AI investment by country—survey. Deloitte Insights. https://www2.deloitte.com/us/en/insights/focus/cognitive-technologies/ai-investment-by-country.html. Accessed May 4, 2020.
6. Woo M. An AI boost for clinical trials. Nature. 2019;573:S100–S102, doi:10.1038/d41586-019-02871-3.
7. Moldoveanu M. Why AI underperforms and what companies can do about it. Harvard Business Review. https://hbr.org/2019/03/why-ai-underperforms-and-what-companies-can-do-about-it. Accessed May 4, 2020.
8. Lieberman H. User interface goals, AI opportunities. AI Mag. 2009;30:16–22, doi:10.1609/aimag.v30i4.2266.
9. Angwin J, Larson J, Mattu S, Kirchner L. Machine bias. ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing. Accessed May 4, 2020.
10. Gijsberts CM, Groenewegen KA, Hoefer IE, et al. Race/ethnic differences in the associations of the Framingham risk factors with carotid IMT and cardiovascular events. PLoS One. 2015;10, doi:10.1371/journal.pone.0132321.
11. Paranjape K, Schinkel M, Nannan Panday R, Car J, Nanayakkara P. Introducing artificial intelligence training in medical education. JMIR Med Educ. 2019;5:e16048, doi:10.2196/16048.
12. Zhang B, Dafoe A. U.S. public opinion on the governance of artificial intelligence. In: AIES 2020—Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society. New York, NY: Association for Computing Machinery; 2020:187–193, doi:10.1145/3375627.3375827.
13. Abramoff MD, Tobey D, Char DS. Lessons learned about autonomous AI: finding a safe, efficacious, and ethical path through the development process. Am J Ophthalmol. 2020;214:134–142, doi:10.1016/j.ajo.2020.02.022.
14. Finlayson SG, Bowers JD, Ito J, Zittrain JL, Beam AL, Kohane IS. Adversarial attacks on medical machine learning. Science. 2019;363:1287–1289, doi:10.1126/science.aaw4399.
15. Beauchamp TL, Childress JF. Principles of Biomedical Ethics. 8th ed. New York: Oxford University Press; 2019.
16. American Medical Association. Augmented Intelligence in Health Care Policy Report. Augmented Intelligence (AI) in Health Care, Annual Meeting 2018.
17. Abramoff MD, Lavin PT, Birch M, Shah N, Folk JC. Pivotal trial of an autonomous AI-based diagnostic system for detection of diabetic retinopathy in primary care offices. npj Digit Med. 2018;1:39, doi:10.1038/s41746-018-0040-6.
18. Char DS, Abramoff MD, Feudtner C. Identifying potential ethical concerns in the conceptualization, development, implementation, and evaluation of machine learning healthcare applications. Am J Bioeth. 2020 [in press].
19. Abramoff MD. The autonomous point of care diabetic retinopathy examination. In: Klonoff DC, Kerr D, Mulvaney SA, eds. Diabetes Digital Health. Amsterdam, Netherlands: Elsevier; 2020.
20. American Diabetes Association. 11. Microvascular complications and foot care: Standards of Medical Care in Diabetes—2020. Diabetes Care. 2020;43(Suppl 1):S135–S151.
21. Blumenthal D. Launching HITECH. N Engl J Med. 2010;362:382–385, doi:10.1056/NEJMp0912825.
22. Can an artificial intelligence algorithm be sued for malpractice? STAT. https://www.statnews.com/2020/03/09/can-you-sue-artificial-intelligence-algorithm-for-malpractice/. Accessed May 15, 2020.
23. Price WN, Gerke S, Cohen IG. Potential liability for physicians using artificial intelligence. JAMA. 2019;322:1765–1766, doi:10.1001/jama.2019.15064.