Primary modeling studies that are well structured and easy to use are needed for clinicians to reliably predict progression of diabetic retinopathy, need for treatment, and visual loss.
The worldwide incidence of diabetes has been increasing markedly, and by 2014 it had increased to 422 million individuals from 108 million only 34 years previously. Along with that explosion in affected patients, both the numbers of patients with diabetic retinopathy (DR) detected as a result of improved technology and those who need treatment services also have increased.
Different countries handle management of these patients in different ways. For example, in the United States, annual screening is recommended for all patients with type 2 diabetes from the time of diagnosis; for those with type 1 diabetes annual screening is also recommended beginning 5 years after diagnosis.
In the UK, the diabetic eye screening program annually screens all patients over 12 years of age; patients also are managed in hospital eye services. Patients with low risk of DR remain within the diabetic screening program for annual surveillance; if their risk increases to sight-threatening DR, the hospital eye service provides frequent monitoring and treatment.
In both settings, the demand for treatment services is huge; to optimize those services, prognostic prediction models of DR are being used. However, as one group of investigators pointed out, these models were intended initially to detect sight-threatening DR and are used mostly in DR screening services.
In light of this, Sajjad Haider, MBBS, FRCS, FRCOPHTH, MSc, and colleagues conducted a review study “…to summarise the characteristics and performance of existing models in predicting progression of retinopathy and their applicability for higher-risk DR patients under hospital care to predict need for treatment or loss of vision.”
The investigators, who are from the Institute of Applied Health Research, University of Birmingham, Birmingham, UK, and York Teaching Hospital NHS Foundation Trust, York, UK, reported their findings in Eye 2019;33:702-13.
They searched MEDLINE, EMBASE, and COCHRANE CENTRAL, conference abstracts, and reference lists for studies related to diabetes, DR, and prognostic models, and ultimately identified 22 articles that reported on 14 prognostic models (including four model updates) meeting their study criteria.
“Six models had both internal and external validation, five models performed only internal validation, and two were only validated in external datasets. One model lacked both internal and external validation. No studies assessing the impact of a model were identified. All studies were conducted during the last two decades,” the investigators reported.
The study sample sizes were large, ranging from 1,441 to 454,575 patients in the studies of primary development and from 200 to 206,050 in validation studies. The models included 78 different candidate predictors, of which the biochemical predictors were the most common.
“Hemoglobin A1c [HbA1c] was the most common predictor, followed by the duration of diabetes and three forms of age (age, age at diabetes diagnosis, and age at DR diagnosis). Eleven models used local predictors/ocular signs, and one model used only ocular signs for prediction. The baseline DR was categorized as R0 in both eyes, R1 in one eye, or R1 in both eyes,” they wrote.
The studies ranged from low to high risk of bias, with most having a high risk of bias and doubtful applicability; most models focused on patients with lower risk, the authors noted.
The authors identified three models1-3 with some applicability for patients at higher risk. These three models had moderate to low risk of bias and low concern regarding applicability, and they have already shown some impact in diabetic screening of lower-risk patients, the authors pointed out.
Use of the three models, the authors wrote, “…has clearly shown that individual patient’s risk assessment and prediction can be safely and effectively achieved through the use of routine data” in patients with sight-threatening retinopathy. The investigators believe that one of these models can be updated and tested in higher-risk hospital patients.
Across those three models there were 11 different final predictors; duration of diabetes and HbA1c appeared in all three, and two of the models used systolic blood pressure.
“Other predictors included in these three models were presence [of DR], grade of diabetic retinopathy, presence of background diabetic retinopathy in one or both eyes, gender, type of diabetes, age at diagnosis, and total serum cholesterol,” they reported.
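To make concrete how prognostic models of this kind typically combine such predictors, the sketch below shows a generic logistic-regression risk score. This is purely illustrative: the coefficients and the predictor set are invented placeholders, not the published coefficients of the Scanlon, Aspelund, or ISDR models.

```python
import math

def progression_risk(duration_years: float, hba1c_pct: float,
                     systolic_bp: float) -> float:
    """Toy logistic-regression risk score.

    All coefficients are invented for illustration; real prognostic
    models derive them from large cohort datasets.
    """
    # Linear predictor: intercept plus weighted predictors (placeholder weights)
    z = -6.0 + 0.05 * duration_years + 0.4 * hba1c_pct + 0.01 * systolic_bp
    # Logistic link maps the linear predictor to a probability in (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

# With fixed duration and blood pressure, higher HbA1c yields higher predicted risk
lower_risk = progression_risk(10, 6.5, 120)
higher_risk = progression_risk(10, 9.5, 120)
```

A model like this assigns each patient an individual probability of progression, which a screening or hospital service can then use to set recall intervals or prioritize treatment slots.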
Based on their findings in this review study, the authors emphasized the need for a model that can determine patients’ individual risk of progression to treatment stage/loss of vision. This knowledge will allow for more appropriate use of resources and further optimization of services, especially for patients with a higher risk of progression.
“Scanlon et al, Aspelund et al., and the ISDR model seem to be appropriate in terms of contemporary participant data, assessable predictors, and sound methodology, though they do not directly address the outcome of our interest. They need further external validation in diverse high-risk settings before being implemented into clinical practice,” they concluded.
The investigators had no financial interest in any aspect of this report.
1. Eleuteri A, Fisher AC, Broadbent DM, et al. Individualised variable-interval risk-based screening for sight-threatening diabetic retinopathy: the Liverpool Risk Calculation Engine. Diabetologia. 2017;60:2174-2182.
2. Aspelund T, Thorisdottir O, Olafsdottir E, et al. Individual risk assessment and information technology to optimise screening frequency for diabetic retinopathy. Diabetologia. 2011;54:2525-2532.
3. Scanlon PH, Aldington SJ, Leal J, et al. Development of a cost-effectiveness model for optimisation of the screening interval in diabetic retinopathy screening. Health Technol Assess. 2015;19:1-116.