Retinal imaging tests are providing material to train and test decision support systems.
The opportunities for patients and healthcare professionals alike are great, says retina expert Konstantinos Balaskas, MB BS, MD, FEBO, MRCOphth, in this interview with Caroline Richards, editor of sister publication Ophthalmology Times Europe®. However, hurdles remain before AI can be fully integrated into real-life clinical practice: economic, ethical and data-privacy issues, to name but a few.
Well, I think AI is quite a broad term. The type of AI that has generated a lot of excitement in recent years is called ‘deep learning’. This is a process by which software programs learn to perform certain tasks by processing large quantities of data.
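That idea of learning from data can be shown in miniature. The sketch below (not from the interview; a single straight-line model stands in for the millions of parameters a real deep-learning system would tune) fits example data by gradient descent, the same basic adjust-to-reduce-error loop that underpins deep learning:

```python
# Toy illustration of "learning by processing data": gradient descent
# repeatedly nudges a model's parameters to reduce its error on examples.
data = [(1.0, 3.1), (2.0, 4.9), (3.0, 7.2), (4.0, 8.8)]  # (input, target) pairs

w, b = 0.0, 0.0   # model parameters, initially uninformed
lr = 0.02         # learning rate: how big each adjustment is

def mean_squared_error(w, b):
    return sum((w * x + b - y) ** 2 for x, y in data) / len(data)

initial_error = mean_squared_error(w, b)

for _ in range(5000):
    # Gradients of the error with respect to each parameter
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    # Adjust parameters slightly in the direction that reduces error
    w -= lr * grad_w
    b -= lr * grad_b

final_error = mean_squared_error(w, b)
```

After training, the model's error on the examples has shrunk dramatically and it has "learned" the pattern in the data; deep networks do the same thing, but with many layers of parameters and vastly larger datasets such as collections of retinal scans.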
Deep learning is what has made ophthalmology a pioneer in implementing AI in medicine, because we are increasingly reliant on imaging tests to monitor our patients. Particularly in my subspecialty of interest, medical retina, imaging tests such as optical coherence tomography (OCT) are performed very frequently and have provided the material to train, test and then apply AI decision support systems.
In retina particularly, some of the most common causes of visual loss in the Western world, such as age-related macular degeneration (AMD) and diabetic retinopathy, are conditions that require early detection, prompt initiation of treatment and regular monitoring in order to preserve vision. And that is where AI decision support systems can help to improve access to care and ensure optimal clinical outcomes for our patients.
An example is the AI decision support system that was developed in a collaboration between Moorfields Eye Hospital, where I am based, and Google DeepMind. It is able to read OCT scans, interpret them, provide a diagnosis and make management recommendations. The other area where AI shows promise is in the development of personalized treatment plans for patients by being able to predict their response to treatment and their visual outcomes over a period of time.
Referring again to those common conditions that threaten vision, such as AMD and diabetic retinopathy, AI decision support tools, once validated and once they have gained regulatory approval as medical devices, can help improve access to care. They can, for example, assist health practitioners in the community in diagnosing diseases early.
In the United Kingdom, where OCT scans are widely available in high street optician practices, an AI tool would be particularly useful in helping optometrists interpret scans correctly and identify disease at an early stage. Similarly, in diabetic retinopathy, where patients require regular screening and monitoring of their disease, AI tools can significantly increase the efficiency of screening programs.
Some such applications already exist, particularly for diabetic retinopathy screening programs, where they can be especially useful in under-resourced healthcare settings. Other indications for AI monitoring, such as AMD, are at advanced stages of development but have not yet been implemented in real life.
Definitely, there are quite a few. I have a personal academic interest in the field called implementation science, which looks at the gap between developing a medical device such as an AI decision support tool and actually deploying it in real-life clinical practice.1
The potential barriers that we need to overcome for the tool to be deployed in a meaningful way, so as to improve outcomes for our patients, go beyond testing and validation. These include economic evaluations: how would such an automated decision support model affect the finances of a healthcare system, and could it provide good value for money or achieve cost savings?
The next consideration is human factors, particularly how these models of care that rely on AI are perceived and accepted by patients and practitioners; what is the level of trust in these technologies? And what level of information and education of patients and the general public is required to build confidence in their use? Then there are considerations around training and the technical infrastructure to support these tools.
Ethical and data-privacy issues, as well as medico-legal considerations, are also important: who is responsible for decisions made by an AI algorithm rather than a human? How do these tools affect the way healthcare professionals diagnose and manage disease?
There is a phenomenon called automation bias, where practitioners are sometimes more likely to defer to the recommendation of the AI tool, even perhaps against their better judgment. And there is the issue of interpretability—the ‘black box’ phenomenon—in many instances, these AI tools are opaque in their functioning.
We do not fully understand how a specific recommendation is reached, whether that is a diagnosis or a management recommendation, and that lack of transparency can exacerbate the medical, legal and ethical issues that were mentioned earlier. In summary, there are several hurdles to overcome before AI tools can be deployed in real life in a way that is safe and will improve clinical outcomes.
This is an interesting question. I have an optimistic vision of AI in medical practice. Our field is becoming increasingly complex, and we need to process data from various sources when we are assessing our patients: data from the many imaging modalities, genetic data and the various types of omics, such as proteomics and the emerging field of oculomics, where features on eye examination can be indicative of problems with systemic health. Even data from home vision monitoring devices will become increasingly available.
Processing and making sense of all this data in order to develop a personalized treatment plan for each individual patient can be a daunting task. AI could become a very useful aid and, as described in the Topol review on AI commissioned by Health Education England, provide the gift of time to patients and practitioners, giving them the chance to discuss and decide together what the optimal treatment plan is, informed by the processing of high-dimensional complex data sources.