Background
Age-related macular degeneration (AMD), a leading cause of vision loss in the aging population, is a progressive retinal condition characterized by drusen: deposits of extracellular material that accumulate between the retinal pigment epithelium and Bruch's membrane. Current AMD studies largely rely on fundus photography to identify visible drusen as a late-stage indicator of disease progression.
Fluorescence lifetime imaging ophthalmoscopy (FLIO) has been posited as a novel source of diagnostic and research data. By recording the decay of fundus autofluorescence over time, FLIO indicates the presence of AMD-related biochemical components and processes, and analysis of FLIO data has the potential to identify early stages of AMD development.
Goals
- Preprocess fundus and FLIO data into appropriate formats and similar sizes
- Create a paired autoencoder model with constrained feature spaces that takes data from these two sources and outputs a single image capturing information from both
- Test different model architectures and loss functions to optimize results
Methods
Preprocessing
Matched FLIO and fundus data for 37 AMD subjects were provided by Martin Hammer at the Department of Ophthalmology, University Hospital Jena.
- Fundus
- FLIO
To pretrain initial weights, approximately 90k additional fundus images were gathered from online sources, including Retinal Images for Glaucoma Analysis (RIGA), Digital Retinal Images for Vessel Extraction (DRIVE), Structured Analysis of the REtina (STARE), Automated Retinal Image Analyzer (ARIA), and the Kaggle Diabetic Retinopathy competition. Images were cropped into fovea-centered squares and resized. No additional FLIO data could be obtained.
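The exact preprocessing pipeline is not specified beyond cropping and resizing; as a minimal sketch, the fovea-centered square crop and resize could look like the following, assuming Pillow, a hypothetical fovea coordinate per image, and an illustrative 256x256 target size:

```python
# Sketch of preprocessing: crop a fundus photograph to a fovea-centered square
# and resize it to a fixed resolution. The fovea coordinates and 256x256 target
# size are illustrative assumptions, not values taken from the project.
from PIL import Image

TARGET_SIZE = 256  # assumed output resolution

def crop_fovea_centered(img: Image.Image, fovea_xy: tuple[int, int]) -> Image.Image:
    """Crop the largest square centered on the fovea that fits inside the image."""
    w, h = img.size
    cx, cy = fovea_xy
    half = min(cx, cy, w - cx, h - cy)  # largest half-width that stays in bounds
    return img.crop((cx - half, cy - half, cx + half, cy + half))

def preprocess(path: str, fovea_xy: tuple[int, int]) -> Image.Image:
    img = Image.open(path).convert("RGB")
    square = crop_fovea_centered(img, fovea_xy)
    return square.resize((TARGET_SIZE, TARGET_SIZE), Image.BILINEAR)

# Usage (hypothetical file and fovea location):
# out = preprocess("fundus_001.png", fovea_xy=(1520, 1104))
# out.save("fundus_001_crop.png")
```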
Utilizing a paired dataset comprising both imaging modalities, we generated fundus-like RGB images that qualitatively highlight the fluorescence lifetime component differences found in the matched FLIO data.
- Traditional autoencoder for image reconstruction
- Paired autoencoders joined at the feature space, with different architectures and loss functions (a minimal architecture sketch follows this list)
- Visualization
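The layer-by-layer architecture is not spelled out here; as a hedged illustration, each modality branch could be a small convolutional encoder/decoder pair along the following lines, where channel counts, latent size, and the number of FLIO parameter maps are all assumptions:

```python
import torch.nn as nn

class Encoder(nn.Module):
    """Small convolutional encoder; channel counts are illustrative."""
    def __init__(self, in_ch: int, latent_ch: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),  # H -> H/2
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),     # H/2 -> H/4
            nn.Conv2d(64, latent_ch, 4, stride=2, padding=1),                     # H/4 -> H/8
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Mirror-image decoder mapping the latent feature map back to an image."""
    def __init__(self, out_ch: int, latent_ch: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(latent_ch, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, out_ch, 4, stride=2, padding=1), nn.Sigmoid(),  # outputs in [0, 1]
        )
    def forward(self, z):
        return self.net(z)

# One branch per modality: 3-channel fundus RGB, and a multi-channel image of FLIO
# triexponential decay parameters (6 channels here is an assumed, not actual, count).
fundus_enc, fundus_dec = Encoder(in_ch=3), Decoder(out_ch=3)
flio_enc, flio_dec = Encoder(in_ch=6), Decoder(out_ch=6)
```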
We implemented paired autoencoders for fundus images and FLIO triexponential decay parameters; each autoencoder was trained with its own L1-regularized MS-SSIM reconstruction loss, while a cosine similarity loss linked the two feature spaces.
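Concretely, the combined objective described above could be assembled as in the sketch below, using the encoder/decoder branches from the previous snippet. The pytorch_msssim package and the relative weights (alpha, lam) are assumptions for illustration, not details taken from the project:

```python
import torch
import torch.nn.functional as F
from pytorch_msssim import ms_ssim  # assumed third-party MS-SSIM implementation

def reconstruction_loss(recon: torch.Tensor, target: torch.Tensor, alpha: float = 0.85) -> torch.Tensor:
    """Per-branch loss: MS-SSIM regularized with an L1 term (alpha is an assumed weight)."""
    # MS-SSIM here assumes inputs scaled to [0, 1] and images of at least ~161 px per side.
    ms_ssim_term = 1.0 - ms_ssim(recon, target, data_range=1.0)
    l1_term = F.l1_loss(recon, target)
    return alpha * ms_ssim_term + (1.0 - alpha) * l1_term

def paired_loss(fundus_enc, fundus_dec, flio_enc, flio_dec, fundus, flio, lam: float = 0.1) -> torch.Tensor:
    """Joint objective: each branch reconstructs its own modality, while a cosine
    similarity term pulls the two latent codes toward a shared representation."""
    z_fundus, z_flio = fundus_enc(fundus), flio_enc(flio)
    loss_fundus = reconstruction_loss(fundus_dec(z_fundus), fundus)
    loss_flio = reconstruction_loss(flio_dec(z_flio), flio)
    # 1 - mean cosine similarity of the flattened latents, so 0 means identical codes.
    cos = F.cosine_similarity(z_fundus.flatten(1), z_flio.flatten(1), dim=1).mean()
    return loss_fundus + loss_flio + lam * (1.0 - cos)
```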
Current results comprise decoded fundus images, constrained by FLIO data, that fuse information from both modalities but remain limited in resolution and interpretability. Replacing the cosine similarity term with an SSIM-variant or perceptual loss is a promising basis for future improvement.
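As one concrete variant of that idea, if the latent codes are kept as spatial feature maps, the cosine similarity term could be swapped for an SSIM-based structural comparison of the two codes. The sketch below shows that substitution under those assumptions, again relying on pytorch_msssim:

```python
import torch
from pytorch_msssim import ssim  # assumed third-party SSIM implementation

def feature_ssim_loss(z_fundus: torch.Tensor, z_flio: torch.Tensor) -> torch.Tensor:
    """SSIM-based agreement between the two latent feature maps, used in place of
    the cosine similarity term; assumes spatial latents of shape (B, C, H, W)."""
    def to_unit(z: torch.Tensor) -> torch.Tensor:
        # Rescale each latent to [0, 1] so SSIM's data_range assumption holds.
        z_min = z.amin(dim=(1, 2, 3), keepdim=True)
        z_max = z.amax(dim=(1, 2, 3), keepdim=True)
        return (z - z_min) / (z_max - z_min + 1e-8)
    return 1.0 - ssim(to_unit(z_fundus), to_unit(z_flio), data_range=1.0)
```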
With further refinement, this approach could eventually identify ambiguous or early AMD presentation before pathology is clear to the human eye, leading to timely diagnosis and intervention.
Packages and Technologies
- SimpleITK
- PyTorch
- NYU High Performance Computing Cluster