Worldwide, glaucoma and age-related macular degeneration (AMD) cause 12.3% and 8.7%, respectively, of cases of blindness and/or vision loss. According to a 5-year study of Medicare beneficiaries, patients who undergo regular eye screening experience less decline in vision than those who have less-frequent examinations. Computer-based screening for retinopathies can be highly cost-effective and efficient; however, most auto-screening software addresses only one eye disease, limiting its clinical utility and cost-effectiveness. Therefore, we propose a computer-based retinopathy screening system for detection of AMD and glaucoma that integrates information from retinal fundus images and clinical data. First, retinal image analysis algorithms were developed using a transfer learning approach to determine the presence or absence of each eye disease. The clinical data were then utilized to improve disease-detection performance where the image-based algorithms provided sub-optimal classification. Binary detection (present/absent) results for AMD and glaucoma were compared with ground truth provided by a certified retinal reader. We applied the proposed method to a dataset of 304 retinal images with AMD, 299 retinal images with glaucoma, and 2,341 control retinal images. The algorithms demonstrated sensitivity/specificity of 100%/99.5% for detection of any AMD, 82%/70% for detection of referable AMD, and 75%/81% for detection of referable glaucoma. The automated detection results agree well with the ground truth, suggesting the system's potential in screening for AMD and glaucoma.
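The abstract does not specify how the clinical data are fused with the image-based scores. One plausible sketch, on fully synthetic data, combines a hypothetical image-based CNN score with invented clinical covariates (age and intraocular pressure here) in a logistic regression, and compares AUC with and without the clinical features; none of these variables or effect sizes come from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 500
y = rng.integers(0, 2, n)  # 1 = disease present (synthetic labels)

# Hypothetical image-based CNN score: informative but imperfect
img_score = y * 0.8 + rng.normal(0, 0.6, n)
# Hypothetical clinical covariates (synthetic; invented for illustration)
age = 55 + y * 6 + rng.normal(0, 8, n)
iop = 15 + y * 3 + rng.normal(0, 3, n)

X_img = img_score.reshape(-1, 1)
X_fused = np.column_stack([img_score, age, iop])

# In-sample AUC with and without the clinical features
auc_img = roc_auc_score(y, LogisticRegression().fit(X_img, y)
                        .predict_proba(X_img)[:, 1])
auc_fused = roc_auc_score(y, LogisticRegression().fit(X_fused, y)
                          .predict_proba(X_fused)[:, 1])
print(f"image-only AUC: {auc_img:.3f}, image+clinical AUC: {auc_fused:.3f}")
```

In this synthetic setup the fused model improves on the image-only score, which mirrors the paper's motivation for adding clinical data where image-based classification is sub-optimal.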
Adequate image quality is a necessary component of any retinal screening program, whether the cases are to be read by a human reader or processed by an artificial intelligence (AI) system. The need for expanded screening for retinal diseases has led to the adoption of low-cost, portable cameras that are ideal for reaching large underserved populations. However, low-cost cameras generally require a higher level of operator skill to produce high-quality images than more expensive table-top retinal cameras. This study, conducted at thirteen clinics in Monterrey, Mexico, compares unreadable rates between a table-top retinal camera (Canon CR2-AF) and a low-cost portable camera (Volk Pictor Plus) before and after implementation of automatic image quality assessment software, Image Quality Analyzer (IQA). The software determines in real time whether an image is of adequate quality to be read by a human or an AI system, identifies the image quality issues, and offers tips for fixing them. Results show a significant decrease in unreadable cases (9% to 0%) for the Pictor Plus after IQA implementation, bringing the percentage of rejected cases in line with that of the table-top camera (3% to 5%).
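The IQA software itself is not described in detail. As a rough, hypothetical stand-in for one kind of check such a tool might perform, the sketch below scores sharpness with a variance-of-Laplacian heuristic, a common blur measure; the threshold and feedback messages are invented for illustration and are not IQA's.

```python
import numpy as np

def laplacian_variance(img: np.ndarray) -> float:
    """Sharpness score: variance of a 3x3 Laplacian response.
    Higher values indicate more high-frequency detail (sharper focus)."""
    lap = (-4.0 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())

def quality_feedback(img: np.ndarray, blur_threshold: float = 0.01) -> str:
    # Placeholder threshold; a deployed system would calibrate it
    # against reader-graded images.
    if laplacian_variance(img) < blur_threshold:
        return "ungradable: image appears out of focus - refocus and retake"
    return "adequate"

rng = np.random.default_rng(1)
sharp = rng.random((64, 64))      # synthetic high-frequency image
blurry = np.full((64, 64), 0.5)   # synthetic featureless (defocused) image
print(quality_feedback(sharp), "|", quality_feedback(blurry))
```

Returning actionable feedback ("refocus and retake") rather than a bare pass/fail is what lets the photographer correct the problem while the patient is still present, which is the mechanism behind the drop in unreadable cases reported above.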
Diabetic retinopathy (DR) [1, 2] is a leading cause of blindness worldwide and is estimated to threaten the vision of nearly 200 million people by 2030 [3]. To cope with this ever-increasing population, the use of image processing algorithms to screen those at risk has been on the rise. Research-oriented solutions have proven effective at classifying images with or without DR, but often fail to address the true needs of the clinic: referring only those who need to be seen by a specialist, and reading every single case. In this work, we leverage an array of image preprocessing techniques, as well as transfer learning, to re-purpose an existing deep network for our tasks in DR. We train, test, and validate our system on 979 clinical cases, achieving a 95% area under the curve (AUC) for referring severe DR, with an equal-error sensitivity and specificity of 90%. Our system does not reject any images based on their quality, and is agnostic to eye side and field. These results show that general-purpose classifiers can, with the right type of input, have a major impact in clinical environments and for teams lacking access to large volumes of data or high-throughput supercomputers.
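The reported operating point, equal-error sensitivity and specificity of 90%, is the point on the ROC curve where TPR is approximately 1 - FPR. The sketch below locates that point on synthetic classifier scores (not the paper's model or data):

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

rng = np.random.default_rng(0)
# Hypothetical referral labels: 400 non-referable, 100 referable
y = np.r_[np.zeros(400), np.ones(100)].astype(int)
# Hypothetical scores: referable cases score higher on average
scores = np.r_[rng.normal(0.0, 1.0, 400), rng.normal(2.5, 1.0, 100)]

fpr, tpr, thresholds = roc_curve(y, scores)
print(f"AUC = {auc(fpr, tpr):.3f}")

# Equal-error operating point: threshold where sensitivity ~= specificity,
# i.e. where tpr ~= 1 - fpr.
i = np.argmin(np.abs(tpr - (1 - fpr)))
print(f"EER point: sensitivity {tpr[i]:.2f}, "
      f"specificity {1 - fpr[i]:.2f}, threshold {thresholds[i]:.2f}")
```

Reporting the equal-error point is one conventional way to summarize a classifier with a single sensitivity/specificity pair; a deployed screening system might instead pick a threshold that favors sensitivity.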
Retinal abnormalities associated with hypertensive retinopathy are useful in assessing the risk of cardiovascular disease, heart failure, and stroke. Assessing these risks as part of primary care can lead to a decrease in the incidence of cardiovascular disease-related deaths. Primary care is a resource-limited setting where low-cost retinal cameras may bring needed help without compromising care. We compared a low-cost handheld retinal camera to a traditional table-top retinal camera on their optical characteristics and performance in detecting hypertensive retinopathy. A retrospective dataset of N=40 subjects (28 with hypertensive retinopathy, 12 controls) was used from a clinical study conducted at a primary care clinic in Texas. Non-mydriatic retinal fundus images were acquired using a Pictor Plus handheld camera (Volk Optical Inc.) and a Canon CR1-Mark II table-top camera (Canon USA) during the same encounter. The images from each camera were graded by a licensed optometrist according to the universally accepted Keith-Wagener-Barker Hypertensive Retinopathy Classification System, three weeks apart to minimize memory bias. The sensitivity of the handheld camera in detecting any level of hypertensive retinopathy was 86% compared to the Canon. Insufficient photographer skill produced 70% of the false-negative cases. The other 30% were due to the handheld camera's insufficient spatial resolution to resolve vascular changes such as minor A/V nicking and copper wiring, but these were associated with non-referable disease. Physician evaluation of the handheld camera's performance indicates it is sufficient to provide high-risk patients with adequate follow-up and management.
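The 86% sensitivity figure is consistent with the handheld camera detecting 24 of the 28 hypertensive subjects (an inference, since the abstract reports only the rate). The sketch below shows the arithmetic; the true-negative and false-positive counts are hypothetical, as the abstract does not report specificity.

```python
def sensitivity_specificity(tp: int, fn: int, tn: int, fp: int):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# 24/28 detected gives the reported ~86% sensitivity (inferred counts).
# tn/fp are placeholders: specificity is not reported in the abstract.
sens, spec = sensitivity_specificity(tp=24, fn=4, tn=12, fp=0)
print(f"sensitivity {sens:.0%}, specificity {spec:.0%}")
```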
Diabetic retinopathy (DR) affects more than 4.4 million Americans age 40 and over. Automatic screening for DR has been shown to be an efficient and cost-effective way to lower the burden on the healthcare system by triaging diabetic patients and ensuring timely care for those presenting with DR. Several supervised algorithms have been developed to detect pathologies related to DR, but little work has been done on determining the training-set size that optimizes an algorithm's performance. In this paper we analyze the effect of training sample size on the performance of a top-down DR screening algorithm for different types of statistical classifiers. Results are based on partial least squares (PLS), support vector machine (SVM), k-nearest neighbor (k-NN), and naïve Bayes classifiers. Our dataset consisted of digital retinal images collected from a total of 745 cases (595 controls, 150 with DR). We varied the number of normal controls in the training set, while keeping the number of DR samples constant, and repeated the procedure 10 times using randomized training sets to avoid bias. Results show performance, in terms of area under the ROC curve (AUC), increasing with the size of the training set, with similar trends for each of the classifiers. Of these, PLS and k-NN had the highest average AUC. The lower standard deviation and flattening of the AUC curve give evidence that there is a limit to the learning ability of the classifiers and an optimal number of cases to train on.
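The experimental design above can be sketched as a learning-curve experiment: at each training-set size, average AUC over 10 randomized training subsets. The sketch below uses synthetic features in place of the retinal image descriptors and k-NN as one representative of the four classifiers; only the design mirrors the paper, not the data or the numbers.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the feature vectors; the class imbalance
# mimics the 595-control / 150-DR split, nothing else is real.
X, y = make_classification(n_samples=745, weights=[0.8, 0.2],
                           n_informative=6, random_state=0)
X_pool, X_test, y_pool, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

results = {}
for n_train in (100, 200, 400):
    aucs = []
    for rep in range(10):  # 10 randomized training sets to avoid bias
        idx = np.random.default_rng(rep).choice(len(X_pool), n_train,
                                                replace=False)
        clf = KNeighborsClassifier(n_neighbors=5).fit(X_pool[idx], y_pool[idx])
        aucs.append(roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))
    results[n_train] = np.mean(aucs)
    print(f"n_train={n_train}: mean AUC {results[n_train]:.3f} "
          f"+/- {np.std(aucs):.3f}")
```

Plotting mean AUC (and its standard deviation) against training-set size is what reveals the flattening the abstract describes: once the curve plateaus, adding further training cases buys little.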