Can AI explain AMD to patients?

Modern Retina Digital Edition, July and August 2025; Volume 5, Issue 3

Patients weigh in on the value of provider communication.

Patient education remains a critical yet challenging aspect of managing age-related macular degeneration (AMD). This progressive retinal disease affects approximately 200 million individuals globally, with prevalence projected to rise to nearly 288 million by 2040.1 AMD manifests in 2 forms: dry (atrophic) and wet (neovascular). Importantly, these forms are not mutually exclusive; many patients present with dry AMD in one or both eyes and develop wet AMD concurrently.2 Because dry and wet are antonyms in everyday English, the idea that both forms can coexist in the same eye is inherently counterintuitive. This overlap often generates confusion and highlights the need for clear, accessible education about the underlying pathology, disease course, and potential coexistence of both forms.

Delivering effective education to patients with AMD is especially difficult. Most patients are older and have low vision, which complicates traditional approaches such as printed brochures. In fact, central vision loss from AMD creates an added barrier to health literacy.3 Paradoxically, written materials may be of limited use to patients with retinal conditions who cannot read them easily because of visual impairment. Moreover, the complexity of medical information often exceeds patients' reading levels. One analysis found that online AMD patient education materials require, on average, a ninth-grade reading level, far above the sixth-grade level recommended for health materials.4 This discrepancy not only limits comprehension but also risks widening the gap between diagnosis and understanding.
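The cited analysis scored readability with standard formulas. As an illustrative sketch only (the specific heuristic below is our assumption, not the cited study's method), the Flesch-Kincaid grade level of any draft patient handout can be estimated with a few lines of code:

```python
import re

def flesch_kincaid_grade(text: str) -> float:
    """Estimate the US school grade level needed to read `text`.

    Uses the standard Flesch-Kincaid formula with a rough syllable
    heuristic (counting vowel groups), so results are approximate.
    """
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    if not words:
        return 0.0
    # Approximate syllables per word as the number of vowel-letter runs.
    syllables = sum(
        max(1, len(re.findall(r"[aeiouy]+", w.lower()))) for w in words
    )
    return (
        0.39 * (len(words) / sentences)
        + 11.8 * (syllables / len(words))
        - 15.59
    )
```

A clinic could run each AI-generated summary through such a check and regenerate any output scoring above the recommended sixth-grade threshold.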

In a busy clinic, the educational component is often shortened. As physicians are expected to diagnose, educate, and deliver treatment within the allotted 10 minutes of face time with the patient, it becomes challenging to parse out what the patient comprehends.

The emergence of large language models, such as ChatGPT (OpenAI), offers new avenues for generating simplified medical explanations. Findings from recent studies in ophthalmology suggest that these models have the potential to provide accurate, layperson-friendly descriptions of ophthalmic diseases.5 However, it remains unclear how patients perceive and understand artificial intelligence (AI)–generated content, especially compared with explanations from their physician. We aimed to explore this by gathering patient feedback on an AI-generated explanation as opposed to a physician’s explanation of dry and wet AMD and their coexistence.

Methods

We conducted a qualitative pilot study involving 2 patients under active management for wet AMD; before the study, both had been diagnosed with concurrent dry and wet AMD.

Patients were intentionally selected to reflect differing levels of formal education and health literacy: One patient had limited educational attainment and lower health literacy, whereas the other possessed a doctoral degree (PhD) and demonstrated higher health literacy. At the start of their scheduled clinic visits, a vitreoretinal specialist gave each patient a verbal explanation of their disease. This explanation addressed the fundamental differences between dry and wet AMD, including etiology, pathological mechanisms, and prognostic implications.

Immediately following the physician-delivered explanation, each patient was presented with a ChatGPT-generated explanation. The model was explicitly prompted to generate a 1-page summary describing the causes, pathology, progression, and coexistence of dry and wet AMD, using plain, layperson-friendly language—specifically at a level appropriate for the average patient with retinal issues—avoiding all complex medical jargon and technical terminology. To address potential barriers related to visual impairment, the AI-generated text was read aloud to each patient. This method ensured comparability to the verbal physician interaction and circumvented the need for print literacy.
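The study generated its summary through the ChatGPT interface. As an illustrative sketch only, the prompt described above could be assembled and issued programmatically; the exact wording, the `openai` API usage, and the model name below are assumptions for illustration, not the study's protocol:

```python
# Illustrative sketch: the study used the ChatGPT web interface, and the
# prompt wording here is reconstructed from the Methods, not verbatim.

def build_amd_education_prompt() -> str:
    """Assemble a plain-language patient-education prompt for dry and wet AMD."""
    return (
        "Write a 1-page summary for patients describing the causes, "
        "pathology, progression, and possible coexistence of dry and wet "
        "age-related macular degeneration (AMD). Use plain, "
        "layperson-friendly language at a level appropriate for the average "
        "patient with retinal issues, and avoid all complex medical jargon "
        "and technical terminology."
    )

# Hypothetical API call (requires the `openai` package and an API key):
# from openai import OpenAI
# client = OpenAI()
# response = client.chat.completions.create(
#     model="gpt-4o",
#     messages=[{"role": "user", "content": build_amd_education_prompt()}],
# )
# print(response.choices[0].message.content)
```

Scripting the prompt this way would also make it easy to vary wording systematically, which the Discussion identifies as a question for future study.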

After hearing the AI explanation, patients participated in semistructured interviews, which elicited their impressions of both explanations. Questions probed comprehension, perceived credibility, emotional resonance, and overall preference for human vs AI educational interactions.

Results

Both patients successfully completed the study and provided in-depth, qualitative feedback, highlighting the distinct strengths and limitations of AI-generated education compared with physician explanations.

The patient with lower educational attainment and limited health literacy found the AI-generated explanation to be “fairly clear” and appreciated its logical structure, noting that it delineated the differences between dry and wet AMD and described the possibility of coexistence. However, when asked to compare the 2 experiences directly, he strongly preferred the explanation delivered by the physician. He emphasized the irreplaceable value of human connection, trust building, and personalized engagement, stating, “It’s not just what is said but who’s saying it.” Seeing and interacting with a physician, particularly when facing procedures as intimidating as intravitreal injections, was crucial in establishing his confidence in his care.

While this patient critiqued the AI-generated explanation for being overly dense, his partner provided additional insights, suggesting that the explanation should be condensed into a single page to better serve older patients. She also proposed adding a frequently asked questions (FAQ) section to address the common follow-up questions patients often have after reading complex material. As a potential compromise, his partner advocated for an intermediate solution between a purely text-based AI explanation and direct physician communication, such as an AI-generated educational video, to introduce a degree of interactivity while maintaining efficiency. All in all, both the patient and his partner strongly emphasized their preference for explanations delivered by other people.

In contrast, the second patient, who possessed a PhD and exhibited high health literacy, responded more favorably to the AI-generated explanation. He praised its depth, thoroughness, and methodical structure, noting that it provided more comprehensive background information than the physician’s initial verbal summary. He appreciated the ability to absorb the material at a relaxed pace, without feeling rushed by the time constraints often inherent to clinical visits. Throughout the discussion, his wife, who was also undergoing treatment for wet AMD, affirmed and elaborated on his observations. She agreed that the AI explanation offered a detailed and useful foundation but strongly voiced concerns about the potential for AI to be overused, particularly in sensitive medical contexts. Although she acknowledged the value of AI as a supplemental tool, she emphasized the importance of preserving human interaction as the primary mode of patient education and care.

Nonetheless, he too recognized limitations in the AI approach. Although he was able to ask clarifying questions during the live reading, he noted that an AI-only interaction would lack the dynamic, responsive dialogue necessary for optimal understanding. He envisioned AI as a valuable adjunct, providing detailed background material that could be reviewed independently, but firmly asserted that human interaction must remain central to medical education and decision-making processes.

In sum, across both cases, a common theme emerged: Although AI could enhance informational depth and consistency, the human element—empathy, trust, and real-time engagement—remained indispensable to patient education and satisfaction.

Discussion

Our preliminary findings reveal important insights into the evolving role of AI in patient education for managing retinal diseases.

First, patient health literacy and educational background significantly influenced patients’ receptiveness to AI-generated content. The patient with lower health literacy valued brevity, interpersonal connection, and real-time interaction, whereas the highly educated patient appreciated depth, structure, and supplementary information. This dichotomy reinforces the broader principle that patient education must be individualized, with content adapted based on the patient’s health literacy, educational level, and cognitive or emotional needs.6 Moreover, these results highlight the need for a more tailored approach to patient education, suggesting that future studies should prospectively stratify patients based on their education and health literacy levels to evaluate the effectiveness of AI-generated content across different patient subpopulations.

Second, the mode of information delivery remains paramount. Both patients, despite different educational backgrounds, underscored the critical importance of human presence in the educational process. Trust, empathy, and the ability to address immediate questions were consistently cited as indispensable attributes that AI cannot replicate on its own. These observations align with existing literature suggesting that perceived empathy strongly influences patient satisfaction and treatment adherence.7

Third, findings from this study highlighted the irony and limitations of using written educational materials in a visually impaired population. Even when AI-generated content is accessible in language, its visual format inherently disadvantages many patients with AMD.8 Future strategies should prioritize alternative delivery methods such as large-print documents, audio recordings, or interactive voice-based platforms.

Finally, although the AI-generated content was prompted to be concise and friendly to laypersons, the resultant output was relatively dense and frequently required real-time clarification from a staff member during its delivery. The patients consistently asked questions as the explanation was read aloud, highlighting that, despite simplified language, the material was not independently self-explanatory for this patient population. One likely explanation is a limitation of how large language models such as ChatGPT interpret and operationally define “layperson-friendly” content. Although our prompt explicitly requested language appropriate for an average patient with retinal issues, the content may have assumed a level of familiarity with biological and disease-related concepts that exceeded the comfort level of most patients, particularly older adults or those with limited formal education and lower health literacy.

This observation suggests that clinicians using AI in patient education must anticipate the need for active support and clarification during the educational encounter rather than assume that AI-generated explanations alone can provide optimal patient understanding. A prospective clinical trial should expand this investigation to larger, more diverse patient populations and explore variations in AI content formatting, such as bullet-point FAQs vs narrative explanations, to determine optimal strategies for different educational profiles. Such a study should also examine how prompt structure influences output complexity and consider integrating brief intake tools to help tailor educational content to each patient’s literacy level. Furthermore, it could be expanded to other common retinal pathologies.

Conclusion

AI-generated explanations of dry and wet AMD are a promising innovation in patient education but must be deployed judiciously. Although AI can enhance informational depth and consistency, human connection remains indispensable. Tailoring educational materials to individual literacy levels, ensuring accessibility for patients with low vision, and maintaining physician involvement will be essential for maximizing patient understanding, trust, and rapport. Ultimately, the integration of AI into retina practice should seek not to replace the physician-patient dialogue but to enrich and reinforce it, empowering patients with knowledge while sustaining the empathetic relationships at the heart of medical care.

References
  1. Wong WL, Su X, Li X, et al. Global prevalence of age-related macular degeneration and disease burden projection for 2020 and 2040: a systematic review and meta-analysis. Lancet Glob Health. 2014;2(2):e106-e116. doi:10.1016/s2214-109x(13)70145-1
  2. Types and stages of macular degeneration. American Macular Degeneration Foundation. 2025. Accessed April 30, 2025. https://www.macular.org/about-macular-degeneration/what-is-macular-degeneration/types
  3. Fortuna J, Riddering A, Shuster L, Lopez-Jeng C. Assessment of modified patient education materials for people with age-related macular degeneration. Open J Occup Ther. 2021;9(2):1-17. doi:10.15453/2168-6408.1787
  4. Cohen S, Brant A, Rayess N, et al. A Google Trends assisted analysis of the readability, accountability, and accessibility of online patient education materials for the treatment of age-related macular degeneration after FDA approval of pegcetacoplan (SYFOVRE). Invest Ophthalmol Vis Sci. 2024;65(7):1367. https://iovs.arvojournals.org/article.aspx?articleid=2793930
  5. Desideri LF, Roth J, Zinkernagel M, Anguita R. Application and accuracy of artificial intelligence-derived large language models in patients with age related macular degeneration. Int J Retina Vitreous. 2023;9(1):71. doi:10.1186/s40942-023-00511-7
  6. Rosdahl JA, Swamy L, Stinnett S, Muir KW. Patient education preferences in ophthalmic care. Patient Prefer Adherence. 2014;8:565-574. doi:10.2147/ppa.s61505
  7. Kim SS, Kaplowitz S, Johnston MV. The effects of physician empathy on patient satisfaction and compliance. Eval Health Prof. 2004;27(3):237-251. doi:10.1177/0163278704267037
  8. Binns AM, Bunce C, Dickinson C, et al. How effective is low vision service provision? a systematic review. Surv Ophthalmol. 2012;57(1):34-65. doi:10.1016/j.survophthal.2011.06.006