Age-related macular degeneration (AMD) is a common ophthalmic disease that mainly occurs in the elderly. Pigment epithelial detachment (PED) can subsequently lead to neuroepithelial detachment and subretinal fluid (SRF), and affected patients require follow-up treatment. Quantitative analysis of these two lesions is therefore important for clinical diagnosis. In this paper, we propose a new joint segmentation network that accurately segments PED and SRF. Our main contributions are: (1) a new multi-scale information selection module; and (2) a novel decoder branch, built on a U-shaped network, that captures boundary information, which is critical for segmentation. Experimental results show that our method achieves an average Dice similarity coefficient (DSC) of 72.97%, an average recall of 79.92%, and an average intersection over union (IoU) of 67.11%.
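The dual-decoder idea can be illustrated with a minimal PyTorch sketch: a shared U-shaped encoder feeds both a region decoder for the PED/SRF masks and an auxiliary decoder supervised with boundary maps. All module names, channel widths, and the omission of the multi-scale information selection module are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

class DualDecoderUNet(nn.Module):
    """Shared encoder; one decoder predicts region masks, one predicts boundaries."""
    def __init__(self, in_ch=1, n_classes=3):  # background, PED, SRF (assumed labels)
        super().__init__()
        self.enc1, self.enc2 = conv_block(in_ch, 32), conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(64, 128)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        # Region decoder
        self.dec2, self.dec1 = conv_block(128 + 64, 64), conv_block(64 + 32, 32)
        self.seg_head = nn.Conv2d(32, n_classes, 1)
        # Auxiliary boundary decoder, trained against edge maps of the masks
        self.bdec2, self.bdec1 = conv_block(128 + 64, 64), conv_block(64 + 32, 32)
        self.boundary_head = nn.Conv2d(32, 1, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up(d2), e1], dim=1))
        bd2 = self.bdec2(torch.cat([self.up(b), e2], dim=1))
        bd1 = self.bdec1(torch.cat([self.up(bd2), e1], dim=1))
        return self.seg_head(d1), self.boundary_head(bd1)

masks, edges = DualDecoderUNet()(torch.randn(1, 1, 256, 256))
```

In such designs the two decoders are typically trained jointly, e.g. a cross-entropy or Dice loss on the masks plus a weighted binary loss on the boundary map, so that the boundary branch regularizes the region branch.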
Glaucoma, a progressive optic neuropathy characterized by structural changes in the optic nerve head and visual field defects, is one of the major irreversible blinding eye diseases worldwide. Early screening and timely diagnosis of glaucoma are therefore of great importance. In recent years, multi-modal deep learning methods have shown clear advantages in image classification and segmentation tasks. In this paper, we propose a multi-modal glaucoma grading network with two main contributions: (1) to address the inherent shortage of multi-modal training data, a conditional generative adversarial network (CGAN) is used to generate synthetic images, extending the only available dataset; and (2) a multi-modality cross-attention (MMCA) module is proposed to further improve classification accuracy.
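Cross-attention between two modality branches can be sketched as follows: token sequences from one modality attend to those of the other, and the result is fused residually. The modality names, token counts, and embedding size are assumptions; the paper's MMCA module and the CGAN augmentation stage are not reproduced here.

```python
import torch
import torch.nn as nn

class CrossAttention(nn.Module):
    """Tokens of one modality attend to tokens of the other (illustrative)."""
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, q_tokens, kv_tokens):
        # Queries come from one modality; keys/values from the other.
        fused, _ = self.attn(q_tokens, kv_tokens, kv_tokens)
        return self.norm(q_tokens + fused)  # residual fusion

fundus = torch.randn(2, 196, 256)  # (batch, tokens, dim) from a fundus branch
oct_ = torch.randn(2, 196, 256)    # tokens from an OCT branch
fused = CrossAttention()(fundus, oct_)
```

Running the module in both directions (fundus attending to OCT and vice versa) and concatenating the outputs is one common way such bidirectional fusion is wired before the grading head.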
Retinopathy of prematurity (ROP) is the main cause of blindness in children worldwide. The severity of ROP is reflected by its stage, zone, and the presence of plus disease. In particular, some studies have shown that zone recognition is even more important than staging. However, owing to subjective factors, ophthalmologists are often inconsistent when identifying zones from fundus images, so automated ROP zone recognition is particularly important. In this paper, we propose a new ROP zone recognition network that takes a pre-trained DenseNet121 as its backbone and introduces a proposed attention block, the Spatial and Channel Attention Block (SACAB), together with a deep supervision strategy. Our main contributions are: (1) demonstrating that a 2D convolutional neural network pre-trained on natural images can be fine-tuned for automated ROP zone recognition; and (2) proposing, on top of the pre-trained DenseNet121, two improved schemes that effectively integrate the attention mechanism and deep supervision learning for ROP zoning. The proposed method was evaluated on 662 retinal fundus images (82 zone I, 299 zone II, 281 zone III) from 148 examinations using a 5-fold cross-validation strategy. The proposed network achieves an accuracy (ACC) of 0.8852, a weighted F1 score (W_F1) of 0.8850, and a kappa of 0.8699. These preliminary results show the effectiveness of the proposed method.
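A hedged PyTorch sketch of the backbone-plus-deep-supervision recipe follows: an ImageNet-pre-trained DenseNet121 is fine-tuned for three-way zoning, with an auxiliary classification head tapped from an intermediate feature map. The tap point (after denseblock3) and the omission of the SACAB block are assumptions made for illustration.

```python
import torch
import torch.nn as nn
from torchvision import models

class ROPZoneNet(nn.Module):
    def __init__(self, n_zones=3):  # zone I, II, III
        super().__init__()
        backbone = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)
        self.features = backbone.features
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.main_head = nn.Linear(1024, n_zones)
        # Deep supervision: auxiliary head on an intermediate feature map.
        # DenseNet121 produces 1024 channels after denseblock3 (assumed tap point).
        self.aux_head = nn.Linear(1024, n_zones)

    def forward(self, x):
        feats, aux_logits = x, None
        for name, layer in self.features.named_children():
            feats = layer(feats)
            if name == "denseblock3":
                aux = self.pool(torch.relu(feats)).flatten(1)
                aux_logits = self.aux_head(aux)
        out = self.pool(torch.relu(feats)).flatten(1)
        return self.main_head(out), aux_logits

model = ROPZoneNet()
main_logits, aux_logits = model(torch.randn(2, 3, 224, 224))
# Typical deep-supervision objective: loss = CE(main) + lambda * CE(aux)
```

At inference only the main head is used; the auxiliary loss simply encourages discriminative intermediate features during fine-tuning.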
Retinopathy of prematurity (ROP) is an ocular disease that occurs in premature infants and is considered one of the largest preventable causes of childhood blindness. However, too few ophthalmologists are qualified for ROP screening, especially in developing countries, so automated ROP screening is particularly important. In this paper, we propose a new ROP screening network that takes a pre-trained ResNet18 as its backbone and introduces a proposed attention block, the Complementary Residual Attention Block (CRAB), together with a Squeeze-and-Excitation (SE) block as a channel attention module. Our main contributions are: (1) demonstrating that a 2D convolutional neural network pre-trained on natural images can be fine-tuned for ROP screening; and (2) proposing, on top of the pre-trained ResNet18, an improved scheme that effectively integrates the attention mechanisms for ROP screening. The proposed classification network was evaluated on 9794 fundus images from 650 subjects; 8351 images were randomly selected as the training set by subject, and the remaining images formed the testing set. The proposed ROP screening network achieved 99.17% accuracy, 98.65% precision, 98.31% recall, 98.48% F1 score, and 99.84% AUC. These preliminary results show the effectiveness of the proposed method.
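Of the two attention blocks, only the standard Squeeze-and-Excitation recalibration (Hu et al., 2018) is publicly defined; the sketch below inserts one SE block after an ImageNet-pre-trained ResNet18 feature extractor for binary screening. The CRAB block and the exact insertion points are paper-specific and omitted here as assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

class SEBlock(nn.Module):
    """Standard SE channel attention: squeeze (global pool) then excite (gating)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))   # squeeze: per-channel global average
        return x * w[:, :, None, None]    # excite: reweight channels

class ROPScreeningNet(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
        self.features = nn.Sequential(*list(backbone.children())[:-2])  # drop avgpool/fc
        self.se = SEBlock(512)  # ResNet18's final stage yields 512 channels
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(512, 2))

    def forward(self, x):
        return self.head(self.se(self.features(x)))

logits = ROPScreeningNet()(torch.randn(2, 3, 224, 224))  # ROP vs. normal
```

Placing the SE block only after the last stage keeps the sketch short; in practice such blocks are often inserted inside each residual stage instead.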