i-Sight: Into the Future of Ophthalmic AI
i-Sight incorporates labeled and unlabeled data to create holistic diagnoses of the eye, focusing primarily on retinal fundus and OCT images. i-Sight is the winning project of the Leidos 2022 Young Virginia STEM Research Award.
View the source code on GitHub: i-Sight. For more information, visit the official site: i-Sight.
i-Sight is the first network to create ophthalmic diagnoses from multiple points of data: fundus images, optical coherence tomography (OCT) images, and patient data. By combining a modified EfficientNet backbone, a bi-directional feature pyramid network (BiFPN), and a pyramid scene parsing network (PSP), i-Sight can localize lesions and segment landmarks and retinal layers to produce a holistic diagnosis of the health of a patient's retina.

However, while raw data is plentiful, labeled data for this purpose is scarce. Therefore, a novel semi-supervised learning (SSL) artificial intelligence (AI) training technique, Advanced Meta Pseudo Labels (AMPL), was created to train i-Sight on both labeled and unlabeled data.

In testing, i-Sight achieves the highest mean intersection over union (mIoU), 0.92, on the REFUGE dataset for glaucoma detection; a specificity of 0.98 and a sensitivity of 0.95 on the MESSIDOR2 dataset for diabetic retinopathy detection; a specificity of 0.90 and a sensitivity of 0.87 on the ADAM dataset for age-related macular degeneration detection; and a specificity of 1.00 and a sensitivity of 1.00 on the Data on OCT and Fundus Images dataset.

By leveraging unlabeled raw data through an SSL training method, i-Sight outperforms other state-of-the-art (SOTA) supervised training methods, making it the most accurate compound neural network for holistic ophthalmic diagnosis and the first pseudo-label-based method for retinal image analysis. i-Sight aims to improve access to eye care by reducing the cost of eye screenings, so that conditions are detected early and preventable blindness is averted.
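To give a rough sense of how these pieces fit together, the sketch below wires an EfficientNet-style backbone into a simplified BiFPN and a PSP-style segmentation head, with a small diagnosis head that also consumes tabular patient data. This is a minimal sketch under assumed sizes and class counts; `TinyBackbone`, `ISightSketch`, and all module dimensions are illustrative placeholders, not the actual i-Sight implementation.

```python
# Minimal sketch (not the i-Sight code) of the compound architecture:
# an EfficientNet-style backbone feeding a simplified BiFPN, a PSP-style
# segmentation head, and a diagnosis head that also uses patient data.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyBackbone(nn.Module):
    """Stand-in for the EfficientNet backbone: returns three feature scales."""
    def __init__(self, chans=(32, 64, 128)):
        super().__init__()
        self.stages, in_c = nn.ModuleList(), 3
        for c in chans:
            self.stages.append(nn.Sequential(
                nn.Conv2d(in_c, c, 3, stride=2, padding=1),
                nn.BatchNorm2d(c), nn.SiLU()))
            in_c = c

    def forward(self, x):
        feats = []
        for stage in self.stages:
            x = stage(x)
            feats.append(x)
        return feats  # fine-to-coarse feature pyramid


class BiFPNLayer(nn.Module):
    """One simplified BiFPN pass: top-down then bottom-up feature fusion."""
    def __init__(self, chans=(32, 64, 128), out_c=64):
        super().__init__()
        self.lateral = nn.ModuleList([nn.Conv2d(c, out_c, 1) for c in chans])
        self.smooth = nn.ModuleList(
            [nn.Conv2d(out_c, out_c, 3, padding=1) for _ in chans])

    def forward(self, feats):
        feats = [lat(f) for lat, f in zip(self.lateral, feats)]
        for i in range(len(feats) - 2, -1, -1):          # top-down pathway
            feats[i] = feats[i] + F.interpolate(
                feats[i + 1], size=feats[i].shape[-2:], mode="nearest")
        for i in range(1, len(feats)):                   # bottom-up pathway
            feats[i] = feats[i] + F.max_pool2d(feats[i - 1], 2)
        return [sm(f) for sm, f in zip(self.smooth, feats)]


class PSPHead(nn.Module):
    """PSP-style head: pool the finest map at several scales and fuse."""
    def __init__(self, in_c=64, n_classes=4, bins=(1, 2, 3, 6)):
        super().__init__()
        self.pools = nn.ModuleList([
            nn.Sequential(nn.AdaptiveAvgPool2d(b),
                          nn.Conv2d(in_c, in_c // len(bins), 1))
            for b in bins])
        self.classify = nn.Conv2d(2 * in_c, n_classes, 1)

    def forward(self, x):
        size = x.shape[-2:]
        pooled = [F.interpolate(p(x), size=size, mode="bilinear",
                                align_corners=False) for p in self.pools]
        return self.classify(torch.cat([x] + pooled, dim=1))


class ISightSketch(nn.Module):
    """Backbone -> BiFPN -> segmentation head + diagnosis head (sketch)."""
    def __init__(self, n_seg=4, n_diag=5, patient_dim=8):
        super().__init__()
        self.backbone = TinyBackbone()
        self.bifpn = BiFPNLayer()
        self.seg_head = PSPHead(64, n_seg)
        self.diag_head = nn.Linear(64 + patient_dim, n_diag)

    def forward(self, image, patient):
        feats = self.bifpn(self.backbone(image))
        seg = self.seg_head(feats[0])             # dense segmentation logits
        pooled = feats[-1].mean(dim=(-2, -1))     # global image descriptor
        diag = self.diag_head(torch.cat([pooled, patient], dim=1))
        return seg, diag


model = ISightSketch()
seg, diag = model(torch.randn(2, 3, 256, 256), torch.randn(2, 8))
print(seg.shape, diag.shape)  # torch.Size([2, 4, 128, 128]) torch.Size([2, 5])
```

The key design point this sketch tries to capture is that one shared backbone and BiFPN feed both a dense segmentation output (lesions, landmarks, retinal layers) and a global diagnosis output, so the two tasks reinforce each other instead of being trained as separate models.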
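The training side can be sketched in the same spirit. The snippet below shows one step of the base Meta Pseudo Labels procedure that AMPL builds on: the teacher pseudo-labels an unlabeled batch, the student learns from those pseudo labels, and the teacher is then updated from the student's resulting improvement on labeled data. The AMPL-specific modifications are not shown here, and `mpl_step` and its arguments are placeholders rather than the project's actual training loop.

```python
# Sketch of one Meta Pseudo Labels step (the base technique AMPL extends).
# teacher/student are classification nn.Modules; t_opt/s_opt their optimizers.
import torch
import torch.nn.functional as F


def mpl_step(teacher, student, t_opt, s_opt, x_lab, y_lab, x_unlab):
    # 1) teacher pseudo-labels the unlabeled batch
    with torch.no_grad():
        pseudo = teacher(x_unlab).argmax(dim=1)

    # student's labeled-data loss *before* its update, used as a baseline
    with torch.no_grad():
        loss_before = F.cross_entropy(student(x_lab), y_lab)

    # 2) student takes a gradient step on the pseudo-labeled batch
    s_loss = F.cross_entropy(student(x_unlab), pseudo)
    s_opt.zero_grad()
    s_loss.backward()
    s_opt.step()

    # 3) teacher feedback: did the student improve on real labeled data?
    with torch.no_grad():
        loss_after = F.cross_entropy(student(x_lab), y_lab)
    reward = (loss_before - loss_after).item()  # > 0 if pseudo labels helped

    # 4) teacher update: reinforce useful pseudo-label choices, plus an
    #    ordinary supervised loss on the labeled batch
    mpl_loss = reward * F.cross_entropy(teacher(x_unlab), pseudo)
    sup_loss = F.cross_entropy(teacher(x_lab), y_lab)
    t_opt.zero_grad()
    (mpl_loss + sup_loss).backward()
    t_opt.step()
    return s_loss.item(), (mpl_loss + sup_loss).item()
```

Because the teacher is rewarded only when its pseudo labels actually reduce the student's error on labeled data, the unlabeled fundus and OCT images contribute to training without their noise being absorbed blindly, which is what lets the method exploit the large pool of raw, unlabeled retinal data.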