Reviewed by Kate Anderton, B.Sc. (Editor) Mar 4 2020
An artificial intelligence (AI) device that has been fast-tracked for approval by the Food and Drug Administration may help identify newborns at risk for aggressive posterior retinopathy of prematurity (AP-ROP). AP-ROP is the most severe form of ROP and can be difficult to diagnose in time to save vision. The findings of the National Eye Institute-funded study were published online February 7 in Ophthalmology.
"Artificial intelligence has the potential to help us recognize babies with AP-ROP earlier. But it also provides the foundation for quantitative metrics to help us better understand AP-ROP pathophysiology, which is key for improving how we manage it." - J. Peter Campbell, M.D., M.P.H., the study's lead investigator, Casey Eye Institute, Oregon Health and Science University in Portland
Babies born prematurely are at risk for retinopathy. That is, they have fragile vessels in their eyes, which can leak blood and grow abnormally. If left untreated, vessel growth can worsen and cause scarring, which can pull on and cause detachment of the retina, the light-sensing tissue at the back of the eye. Retinal detachment is the main cause of vision loss from ROP. Each year, the incidence of ROP in the United States is approximately 0.17%. Most cases are mild and resolve without treatment.
Upon birth, the eyes of preemies are screened and closely watched for signs of retinopathy. But ROP-related changes occur along a spectrum of severity. AP-ROP can elude diagnosis because its features can be more subtle and harder to appreciate than typical ROP. AP-ROP was formally recognized as a diagnostic entity in 2005. Yet in everyday practice there's significant variation in how clinicians interpret whether fundus images taken of the inside of the eye show signs of AP-ROP. "Even the most highly experienced evaluators have been known to disagree about whether fundus images indicate AP-ROP," said Campbell.
In a previous study, deep learning, a type of AI used for image recognition, was more accurate than experts at detecting subtle patterns in fundus images and at classifying ROP. Using the automated deep learning ROP classifier, researchers devised a quantitative vascular severity score (1-9 scale) for evaluating newborns, monitoring disease progression and response to treatment. The study, however, did not specifically address AP-ROP detection.
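The article does not detail how the 1-9 vascular severity score is computed from the classifier's output. As a purely illustrative sketch, one common approach is to take the classifier's softmax probabilities over the standard ROP vascular categories (normal, pre-plus, plus disease) and map them to a continuous score with a probability-weighted average; the function name and the class weights (1, 5, 9) below are assumptions, not taken from the published study.

```python
def vascular_severity_score(p_normal: float, p_pre_plus: float, p_plus: float) -> float:
    """Map 3-class softmax probabilities to a continuous 1-9 severity score.

    Hypothetical sketch: weights 1 (normal), 5 (pre-plus), and 9 (plus)
    are illustrative anchors, not the published method. Probabilities
    must sum to 1.
    """
    total = p_normal + p_pre_plus + p_plus
    if abs(total - 1.0) > 1e-6:
        raise ValueError("class probabilities must sum to 1")
    # Expected value over the class weights yields a score in [1, 9].
    return 1.0 * p_normal + 5.0 * p_pre_plus + 9.0 * p_plus


# A confidently "normal" image maps near 1; a confident "plus" image near 9.
mild = vascular_severity_score(0.9, 0.08, 0.02)   # close to 1
severe = vascular_severity_score(0.02, 0.08, 0.9)  # close to 9
```

A continuous score like this lets clinicians track disease progression and response to treatment over repeated exams, rather than relying on a single categorical label per image.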
For the current study, researchers at nine neonatal care centers evaluated how well the deep learning system detected AP-ROP. The 947 newborns in the study were followed over time, and fundus images from a total of 5,945 eye examinations were analyzed both by the deep learning system and by a team of expert fundus image graders.