While the COVID-19 pandemic revealed the potential of artificial intelligence in screening for diabetic retinopathy, improvements are still needed because of the number of ungradable images.
Reviewed by Eric Kuklinski.
The value of artificial intelligence (AI) is increasing as the technology continues to develop, and the need for AI and teleophthalmology was on full display during the COVID-19 pandemic when in-person evaluations became next to impossible, according to Eric Kuklinski, a medical student, and colleagues at the Department of Ophthalmology and Visual Science, Rutgers New Jersey Medical School, Newark.
According to the investigators, the pandemic exposed the need for increased mobilization of teleophthalmology and AI resources, which can potentially provide screening for diabetic retinopathy (DR) and continuity of care for patients already diagnosed with the disease.
A great advantage of AI is the technology’s ability to assess numerous images in a short period of time and determine which patients need emergent care.
Kuklinski and colleagues conducted a retrospective study to assess the ability of AI software to detect DR compared with teleophthalmology and in-person examinations conducted by a retina specialist.
Eighty eyes of 40 patients (male, 45%; average age, 55.1 ± 10.9 years) were included in the analysis. Most of the patients were Hispanic (60%), and most (65%) had a history of type 2 diabetes. The investigators evaluated patient demographics; retinal photographs obtained with the Canon CR-2 Plus AF Retinal Imaging camera; and the diagnosis of DR, based on the International Clinical Diabetic Retinopathy (ICDR) classification scale, made during an in-person clinic visit at which a retina specialist performed a fundus examination.
AI software (EyeArt, Eyenuk) was used to grade the retinal photos, classifying each eye as normal, mild DR, or more-than-mild DR. In addition, a retina specialist remotely graded the retinal images via TeamViewer software using the ICDR classification scale (teleophthalmology). The investigators then assessed the agreement among teleophthalmology, AI, and the in-person diagnosis of DR.
A total of 33 eyes were diagnosed with no DR during an in-person evaluation, 5 with mild non-proliferative DR (NPDR), 9 with moderate NPDR, 3 with severe NPDR, 7 with proliferative diabetic retinopathy (PDR), and 23 with regressed PDR, the investigators reported.
Eleven eyes were ungradable via teleophthalmology, and AI could not grade 26 eyes. Agreement (Cohen’s kappa coefficient) between the in-person diagnosis and teleophthalmology was 0.859 ± 0.058 (p < 0.001); between the in-person diagnosis and AI, 0.751 ± 0.082 (p < 0.001); and between teleophthalmology and AI, 0.883 ± 0.063 (p < 0.001).
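For context, Cohen’s kappa measures how much two graders agree beyond what chance alone would produce: 0 means chance-level agreement and 1 means perfect agreement. A minimal sketch of the unweighted calculation follows, using hypothetical gradings on the study’s three-level scale (the actual per-eye data and the weighting scheme used in the study are not published here):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Unweighted Cohen's kappa between two raters' categorical gradings."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    # Observed agreement: fraction of eyes graded identically by both raters.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement, from each rater's marginal frequencies.
    count_a, count_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(count_a[c] * count_b[c] for c in categories) / n**2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical gradings: normal / mild / more-than-mild ("mtm") DR.
in_person = ["normal", "mild", "mtm", "normal", "mtm", "normal", "mild", "mtm"]
ai_grade  = ["normal", "mild", "mtm", "normal", "mild", "normal", "mild", "mtm"]
print(round(cohens_kappa(in_person, ai_grade), 3))  # → 0.814
```

Here the two graders agree on 7 of 8 eyes (observed agreement 0.875), but because chance agreement is already about 0.33, the kappa of roughly 0.81 lands at the lower edge of the “near-perfect” band commonly used to interpret the statistic.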
Kuklinski and colleagues also reported that AI showed substantial agreement with the in-person diagnosis and near-perfect agreement with teleophthalmology for detecting the presence of DR in this patient group.
The analysis also showed that teleophthalmology grading was in near-perfect agreement with the in-person diagnosis, indicating that teleophthalmology is a reliable tool for remotely screening patients whose images may be ungradable by AI.
The investigators offered the caveat that improvements are clearly needed because of the high number of images that are ungradable via teleophthalmology and AI. “Further studies should assess ways to reduce the number of ungradable images and create a trend analysis for multiple visits for a given patient,” they stated.