A convolutional neural network (CNN) with multimodal fusion options was developed for artery-vein (AV) segmentation in OCT angiography (OCTA). We quantitatively evaluated multimodal architectures with early and late OCT-OCTA fusion, compared to unimodal architectures with OCT-only and OCTA-only inputs. The OCT-only architecture is limited to segmenting large AV branches. The OCTA-only, early OCT-OCTA fusion, and late OCT-OCTA fusion architectures all provide competitive AV segmentation with finer vascular detail. Compared to the OCTA-only architecture, the late fusion architecture performs slightly better, while the early fusion architecture performs slightly worse.
Early detection of diabetic retinopathy (DR) is an essential step to prevent vision loss. This study conducts a comparative optical coherence tomography (OCT) and OCT angiography (OCTA) analysis to identify quantitative features for robust detection of early DR. Five quantitative OCT features were derived to analyze the outer retinal band intensity in the central fovea, parafovea, and perifovea regions. Similarly, eight quantitative OCTA features were established to analyze the superficial and deep vascular plexuses. OCT and OCTA images of 21 eyes from healthy control subjects, 20 eyes from diabetic patients without retinopathy (NoDR), and 21 eyes from mild DR patients were used for this study. Comparative analysis revealed that the quantitative OCT features related to the inner segment ellipsoid (ISe) have the best sensitivity for objective differentiation of all cohorts.
KEYWORDS: Optical coherence tomography, Angiography, Veins, Arteries, RGB color model, Network architectures, Near infrared, Image segmentation, Eye, Control systems
Early disease diagnosis and effective treatment assessment are crucial to prevent vision loss. Retinal arteries and veins can be affected differently by different eye diseases, e.g., arterial narrowing and venous beading in diabetic retinopathy (DR). Therefore, differential artery-vein (AV) analysis can provide valuable information for early disease detection and better stage classification. However, manual or semi-automated methods for AV identification are inefficient in a clinical setting. This study demonstrates the use of deep learning for automated AV classification in optical coherence tomography angiography (OCTA). We present 'AV-Net', a fully convolutional network (CNN) based on a modified U-shaped architecture. The input to AV-Net is a 2-channel system that combines grayscale en face OCT and OCTA. The en face OCT is a near-infrared image, equivalent to a fundus image, which provides the vessel intensity profiles. In contrast, the OCTA contains information about blood flow strength and vessel geometric features. The output of AV-Net is an RGB (red-green-blue) image, with the R and B channels corresponding to arteries and veins, respectively, and the G channel representing the background. The dataset in this study comprises images from 50 individuals (20 controls and 30 DR patients). Transfer learning and regularization techniques, such as data augmentation and cross-validation, were employed during training to prevent overfitting. The results reveal robust vessel segmentation and AV classification. A fully automated platform is essential for fostering efficient clinical deployment of AI-based screening, diagnosis, and treatment evaluation.
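The 2-channel input and RGB output encodings described in this abstract can be sketched with NumPy. This is an illustrative sketch only; the function and array names are hypothetical and not taken from the AV-Net code:

```python
import numpy as np

def make_av_net_input(enface_oct, octa):
    """Stack grayscale en face OCT and OCTA maps into a 2-channel input."""
    assert enface_oct.shape == octa.shape, "modalities must be co-registered"
    return np.stack([enface_oct, octa], axis=-1)  # shape (H, W, 2)

def decode_av_map(rgb_pred):
    """Decode an RGB prediction: R = artery, G = background, B = vein."""
    labels = np.argmax(rgb_pred, axis=-1)  # per-pixel winning channel
    return labels == 0, labels == 2        # artery mask, vein mask
```

The per-pixel argmax turns the soft RGB prediction into binary artery and vein masks; the background (G) channel simply absorbs everything that is neither.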
Diabetic retinopathy (DR) is a leading cause of preventable blindness. Early detection and reliable stage classification are essential to ensure prompt medical intervention. Recent studies suggest that the outer retina, i.e., the photoreceptors, can be affected by early DR. We demonstrate here the potential of using quantitative OCT features in the outer retina for objective detection and stage classification of DR. Compared to retinal thickness and bandwidth, the OCT intensity change is observed to be the most sensitive to DR stage. It is also confirmed that relative intensity changes of the photoreceptor outer segment are more sensitive than those of the inner segment for DR classification.
Early detection of diabetic retinopathy (DR) is an essential step to prevent vision loss. This study is the first effort to explore convolutional neural networks (CNNs) for transfer-learning based optical coherence tomography angiography (OCTA) detection and classification of DR. We employed transfer learning using a pre-trained CNN, VGG16, trained on the ImageNet dataset, for classification of OCTA images. To prevent overfitting, data augmentation, e.g., rotations, flips, and zooming, and 5-fold cross-validation were implemented. A dataset comprising 131 OCTA images from 20 control, 17 diabetic patients without DR (NoDR), and 60 nonproliferative DR (NPDR) patients was used for preliminary validation. Best classification performance was achieved by fine-tuning nine layers of the sixteen-layer CNN model.
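The 5-fold cross-validation used in this abstract can be sketched as a shuffled index splitter. This is a generic sketch under assumed conventions, not the authors' code; their exact fold layout may differ:

```python
import numpy as np

def kfold_indices(n_samples, k=5, seed=0):
    """Yield (train_idx, val_idx) pairs for shuffled k-fold cross-validation.

    Each sample appears in exactly one validation fold, so every image is
    held out once across the k training runs.
    """
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(n_samples), k)
    for i in range(k):
        train = np.concatenate([f for j, f in enumerate(folds) if j != i])
        yield train, folds[i]
```

For the 131-image dataset above, this would produce five folds of 26 or 27 validation images each, with the remaining images used for fine-tuning in each run.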
Diabetic retinopathy (DR) is a major ocular manifestation of diabetes. DR can cause irreversible damage to the retina without timely intervention. Therefore, early detection and reliable classification are essential for effective management of DR. As DR progresses into the proliferative stage (PDR), manifestations of localized neovascularization and complex capillary meshes are observed in the retina. These vascular complex structures can be quantified as biomarkers of the transition of DR from the nonproliferative (NPDR) to the proliferative stage. This study investigates four optical coherence tomography angiography (OCTA) features, i.e., vessel complexity index (VCI), fractal dimension (FD), four-point crossover (FCO), and blood vessel tortuosity (BVT), to quantify vascular complexity and distinguish NPDR from PDR eyes. OCTA images from 20 control, 60 NPDR, and 56 PDR patients were analyzed. The univariate analysis showed that, with the progression of DR, all four complexity features increased with statistical significance (ANOVA, P < 0.05). A post-hoc study showed that only VCI and BVT were able to distinguish between NPDR and PDR. A multivariate logistic regression identified VCI and BVT as the most significant feature combination for NPDR vs. PDR classification.
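Of the four features, blood vessel tortuosity (BVT) has the simplest common definition: the arc-to-chord ratio of a vessel centerline. The sketch below assumes that standard definition, which may differ in detail from the exact metric used in the study:

```python
import numpy as np

def vessel_tortuosity(centerline):
    """Arc-to-chord ratio of an ordered (N, 2) vessel centerline.

    A perfectly straight vessel gives 1.0; higher values indicate a
    more tortuous (winding) vessel path.
    """
    pts = np.asarray(centerline, dtype=float)
    arc = np.linalg.norm(np.diff(pts, axis=0), axis=1).sum()   # path length
    chord = np.linalg.norm(pts[-1] - pts[0])                   # endpoint distance
    return arc / chord
```

Averaging this ratio over all skeletonized vessel segments in an OCTA image yields one scalar tortuosity feature per eye, which is the form a statistical comparison such as ANOVA would consume.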
Diabetic retinopathy (DR) and other eye diseases can affect arteries and veins differently. Therefore, differential artery-vein analysis can improve disease detection and treatment assessment. This study aims to establish color fundus image analysis guided artery-vein differentiation in OCTA, and to verify that differential artery-vein analysis can improve the sensitivity of OCTA detection and classification of DR. Briefly, optical density ratio (ODR) analysis and blood vessel tracking were combined to identify arteries and veins in color fundus images. The fundus artery-vein map was used to register arteries and veins in corresponding OCTA images. Based on the fundus image guided artery-vein differentiation, quantitative analysis of arteries and veins in control and NPDR OCTA images was performed. The sensitivities of traditional mean blood vessel caliber (m-BVC) and artery-vein ratio of BVC (AVR-BVC) were quantitatively compared for DR classification. One-way, multi-label analysis of variance (ANOVA) with Bonferroni's test and Student's t-test were employed for evaluating classification performance. Images from 20 eyes of 18 control subjects and 48 eyes of 35 NPDR patients (18 mild, 16 moderate, and 14 severe NPDR) were used for this study. Compared to m-BVC, AVR-BVC provided enhanced sensitivity in differentiating NPDR stages. AVR-BVC was able to differentiate among the control and three different NPDR groups. AVR-BVC could also differentiate control from mild NPDR, suggesting a unique OCTA biomarker for detecting early onset of NPDR.
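Optical density ratio (ODR) analysis is commonly computed from vessel and background intensities in two spectral channels of a color fundus image. The sketch below uses that common formulation; it is an assumed, simplified version and omits the vessel-tracking and registration steps of the full procedure:

```python
import numpy as np

def optical_density(vessel_intensity, background_intensity):
    """Optical density of a vessel relative to its local background."""
    return -np.log(vessel_intensity / background_intensity)

def optical_density_ratio(v_red, bg_red, v_green, bg_green):
    """ODR: optical density in the oxygen-sensitive (red) channel divided
    by optical density in the oxygen-insensitive (green) channel.

    Arteries and veins carry blood with different oxygen saturation, so
    their ODR values separate, which is what enables the classification.
    """
    return optical_density(v_red, bg_red) / optical_density(v_green, bg_green)
```

In practice the intensities would be sampled along each tracked vessel and its adjacent background, with the per-vessel ODR thresholded to assign an artery or vein label.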
It is known that retinopathies may affect arteries and veins differently. Therefore, reliable differentiation of arteries and veins is essential for computer-aided analysis of fundus images. The purpose of this study is to validate an automated method for robust classification of arteries and veins (A-V) in digital fundus images. We combine optical density ratio (ODR) analysis and a blood vessel tracking algorithm to classify arteries and veins. A matched filtering method is used to enhance retinal blood vessels. Bottom-hat filtering and global thresholding are used to segment the vessels and skeletonize individual blood vessels. The vessel tracking algorithm is used to locate the optic disk and to identify source nodes of blood vessels in the optic disk area. Each node can be identified as vein or artery using ODR information. Using the source nodes as starting points, each whole vessel trace is then tracked and classified as vein or artery using vessel curvature and angle information. 50 color fundus images from diabetic retinopathy patients were used to test the algorithm. Sensitivity, specificity, and accuracy metrics were measured to assess the validity of the proposed classification method against ground truths created by two independent observers. The algorithm demonstrated 97.52% accuracy in identifying blood vessels as vein or artery. A quantitative analysis based on the A-V classification showed that the average A-V width ratio for NPDR subjects with hypertension decreased significantly (43.13%).
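The tracking step, which propagates each source node's artery/vein label along its connected vessel trace, can be sketched as a breadth-first traversal of a vessel graph. This illustrates only the propagation idea; the full method additionally uses curvature and angle information to resolve crossings, and the graph names here are hypothetical:

```python
from collections import deque

def propagate_av_labels(adjacency, source_labels):
    """Spread artery/vein labels from optic-disk source nodes through a
    vessel graph by breadth-first traversal.

    adjacency: dict mapping node -> list of connected vessel nodes.
    source_labels: dict mapping source node -> 'artery' or 'vein'.
    """
    labels = dict(source_labels)
    queue = deque(source_labels)
    while queue:
        node = queue.popleft()
        for neighbor in adjacency.get(node, []):
            if neighbor not in labels:           # first label wins
                labels[neighbor] = labels[node]
                queue.append(neighbor)
    return labels
```

Each skeleton branch point becomes a graph node, so a single ODR decision at the optic disk is enough to label an entire downstream vessel tree.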