AI makes retinal imaging 100 times faster than the manual method
Key Takeaways
- Applying artificial intelligence (AI) to a technique that produces high-resolution images of cells in the eye makes imaging 100 times faster and improves image contrast 3.5-fold.
- According to the authors, the advance will provide researchers with a better tool to evaluate age-related macular degeneration (AMD) and other retinal diseases.
Johnny Tam, Ph.D., leader of the Clinical and Translational Imaging Section at the National Institutes of Health’s National Eye Institute, is developing a technology called adaptive optics (AO) to improve imaging devices based on optical coherence tomography (OCT). Like ultrasound, OCT is noninvasive, quick, painless, and standard equipment in most eye clinics. “Adaptive optics takes OCT-based imaging to the next level,” said Tam. “It’s like moving from a balcony seat to a front row seat to image the retina. With AO, we can reveal 3D retinal structures at cellular-scale resolution, enabling us to zoom in on very early signs of disease.” While adding AO to OCT provides a much better view of cells, processing AO-OCT images after they’ve been captured takes much longer than OCT without AO.
Tam’s latest work targets the retinal pigment epithelium (RPE), a layer of tissue behind the retina that supports the metabolically active retinal neurons, including the light-sensing photoreceptors. Scientists are interested in the RPE because many diseases of the retina occur when the RPE breaks down. Imaging RPE cells with AO-OCT comes with new challenges, including a phenomenon called speckle. Speckle interferes with AO-OCT the way clouds interfere with aerial photography: at any given moment, parts of the image may be obscured. To manage speckle, researchers repeatedly image cells over a long period of time. As time passes, the speckle shifts, allowing different parts of the cells to become visible. The scientists then undertake the laborious and time-consuming task of piecing together many images to create a speckle-free image of the RPE cells.

To address this problem, Tam and his team developed a novel deep learning method called a parallel discriminator generative adversarial network (P-GAN). By feeding P-GAN nearly 6,000 manually analyzed AO-OCT images of human RPE, each paired with its corresponding speckled original, the team trained the network to identify and recover speckle-obscured cellular features. When tested on new images, P-GAN successfully de-speckled the RPE images, recovering cellular details. From only one image capture, it generated results comparable to the manual method, which required capturing and averaging 120 images. The results were published in Communications Medicine.
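The release describes the training setup only at a high level: a generator learns to turn a single speckled frame into a despeckled one, judged by parallel discriminators, using pairs of speckled and manually averaged RPE images. The sketch below illustrates that general idea; the layer sizes, loss weights, and the specific role assigned to each discriminator are illustrative assumptions, not details of the published P-GAN architecture.

```python
# Minimal sketch (assumed architecture, not the published P-GAN): a despeckling
# generator trained against two parallel discriminators on paired
# speckled/averaged RPE patches.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a single speckled AO-OCT frame to a despeckled estimate."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Scores how much an image (or image pair) resembles a clean average."""
    def __init__(self, in_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1),
        )
    def forward(self, x):
        return self.net(x)

gen = Generator()
d_image = Discriminator(in_ch=1)   # assumed role: judges the despeckled image alone
d_pair = Discriminator(in_ch=2)    # assumed role: judges (speckled, despeckled) pairs
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
opt_g = torch.optim.Adam(gen.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(
    list(d_image.parameters()) + list(d_pair.parameters()), lr=2e-4
)

def train_step(speckled, averaged):
    """One update on a batch of paired speckled / manually averaged patches."""
    n = speckled.size(0)
    ones, zeros = torch.ones(n, 1), torch.zeros(n, 1)
    fake = gen(speckled)

    # Discriminator step: real (averaged) data labeled 1, generated data labeled 0.
    opt_d.zero_grad()
    loss_d = (
        bce(d_image(averaged), ones) + bce(d_image(fake.detach()), zeros)
        + bce(d_pair(torch.cat([speckled, averaged], 1)), ones)
        + bce(d_pair(torch.cat([speckled, fake.detach()], 1)), zeros)
    )
    loss_d.backward()
    opt_d.step()

    # Generator step: fool both discriminators while staying close to the target.
    opt_g.zero_grad()
    fake = gen(speckled)
    loss_g = (
        bce(d_image(fake), ones)
        + bce(d_pair(torch.cat([speckled, fake], 1)), ones)
        + 100.0 * l1(fake, averaged)   # assumed pix2pix-style L1 weighting
    )
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()

# Example usage with random tensors standing in for 1-channel RPE patches.
speckled = torch.rand(4, 1, 64, 64)
averaged = torch.rand(4, 1, 64, 64)
print(train_step(speckled, averaged))
```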
Vineeta Das, Ph.D., a postdoctoral fellow in the Clinical and Translational Imaging Section at NEI, estimates that P-GAN reduced image acquisition and processing time by about 100-fold. P-GAN also yielded greater contrast, about 3.5 times greater than before. By integrating AI with AO-OCT, Tam believes that a major obstacle for routine clinical imaging using AO-OCT has been overcome, especially for diseases that affect the RPE, which has traditionally been difficult to image.
Edited by Dawn Wilcox, BSN, RN and Miriam Kaplan, PhD
Source: National Institutes of Health News Release, April 10, 2024; see source article