Retinopathy of prematurity (ROP) is one of the leading causes of childhood blindness worldwide. The severity of ROP is characterized by stage, zone, and the presence of plus disease; notably, some studies have shown that zone recognition is more important than staging. However, owing to subjective factors, ophthalmologists are often inconsistent in recognizing zones from fundus images, so automated ROP zone recognition is particularly important. In this paper, we propose a new ROP zone recognition network that takes a pre-trained DenseNet121 as its backbone and introduces a proposed attention block, the Spatial and Channel Attention Block (SACAB), together with a deep supervision strategy. Our main contributions are: (1) demonstrating that a 2D convolutional neural network pre-trained on natural images can be fine-tuned for automated ROP zone recognition; (2) proposing, on top of the pre-trained DenseNet121, two improved schemes that effectively integrate the attention mechanism and deep supervision learning for ROP zoning. The proposed method was evaluated on 662 retinal fundus images (82 zone I, 299 zone II, 281 zone III) from 148 examinations with a 5-fold cross-validation strategy. The proposed ROP zone recognition network achieves an accuracy (ACC) of 0.8852, a weighted F1 score (W_F1) of 0.8850, and a kappa of 0.8699. These preliminary experimental results show the effectiveness of the proposed method.
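A minimal sketch of this kind of architecture is given below, assuming a PyTorch/torchvision setup: a pre-trained DenseNet121 feature extractor, a simple spatial-plus-channel attention module standing in for the paper's SACAB, and an auxiliary classifier on an intermediate feature map for deep supervision. The block structure, tap point, and layer sizes are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: DenseNet121 backbone + spatial/channel attention
# + deep supervision head for 3-class ROP zone recognition.
import torch
import torch.nn as nn
import torchvision.models as models


class SimpleSpatialChannelAttention(nn.Module):
    """Stand-in for the paper's SACAB: channel gating followed by spatial gating."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.channel = nn.Sequential(           # squeeze-and-excitation style channel gate
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        self.spatial = nn.Sequential(            # per-pixel spatial gate
            nn.Conv2d(channels, 1, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.channel(x) * self.spatial(x)


class ZoneNet(nn.Module):
    """DenseNet121 fine-tuned for 3 ROP zones, with an auxiliary deep-supervision head."""
    def __init__(self, num_classes=3):
        super().__init__()
        backbone = models.densenet121(pretrained=True)   # ImageNet weights
        self.features = backbone.features
        self.attn = SimpleSpatialChannelAttention(1024)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Linear(1024, num_classes)
        # auxiliary head on the 512-channel features after transition3 (assumed tap point)
        self.aux_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(512, num_classes)
        )

    def forward(self, x):
        mid = self.features[:10](x)               # up to transition3: (N, 512, H, W)
        out = self.features[10:](mid)             # denseblock4 + norm5: (N, 1024, H, W)
        out = self.attn(torch.relu(out))
        logits = self.classifier(torch.flatten(self.pool(out), 1))
        aux_logits = self.aux_head(mid)
        return logits, aux_logits
```

During fine-tuning, a weighted sum of the cross-entropy losses on `logits` and `aux_logits` (e.g. `loss = ce(logits, y) + 0.4 * ce(aux_logits, y)`) would realize the deep supervision; the weight is an assumed hyperparameter.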
Retinopathy of prematurity (ROP) is an ocular disease that occurs in premature infants and is considered one of the largest preventable causes of childhood blindness. However, too few ophthalmologists are qualified for ROP screening, especially in developing countries, so automated ROP screening is particularly important. In this paper, we propose a new ROP screening network that takes a pre-trained ResNet18 as its backbone and introduces a proposed attention block, the Complementary Residual Attention Block (CRAB), together with a Squeeze-and-Excitation (SE) block as a channel attention module. Our main contributions are: (1) demonstrating that a 2D convolutional neural network pre-trained on natural images can be fine-tuned for ROP screening; (2) proposing, on top of the pre-trained ResNet18, an improved scheme that effectively integrates the attention mechanism for ROP screening. The proposed classification network was evaluated on 9794 fundus images from 650 subjects, of which 8351 were randomly selected as the training set by subject and the remainder as the test set. The proposed ROP screening network achieved 99.17% accuracy, 98.65% precision, 98.31% recall, a 98.48% F1 score, and 99.84% AUC. These preliminary experimental results show the effectiveness of the proposed method.
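For comparison, a corresponding sketch of the screening setup might look like the following, again under stated assumptions: a pre-trained ResNet18 backbone with an SE-style channel attention module applied to the final feature map before a binary ROP/normal classifier. The placement of the attention block and the head dimensions are illustrative, not the authors' exact design (the CRAB itself is not reproduced here).

```python
# Illustrative sketch only: pre-trained ResNet18 + SE channel attention for ROP screening.
import torch
import torch.nn as nn
import torchvision.models as models


class SEBlock(nn.Module):
    """Squeeze-and-Excitation channel attention."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.gate(x)


class ScreeningNet(nn.Module):
    """ResNet18 fine-tuned for binary ROP screening (ROP vs. normal)."""
    def __init__(self, num_classes=2):
        super().__init__()
        backbone = models.resnet18(pretrained=True)                  # ImageNet weights
        self.stem = nn.Sequential(*list(backbone.children())[:-2])   # conv layers up to layer4
        self.se = SEBlock(512)                                       # channel attention on last feature map
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(512, num_classes)

    def forward(self, x):
        feat = self.se(self.stem(x))                                  # (N, 512, 7, 7) for 224x224 input
        return self.fc(torch.flatten(self.pool(feat), 1))
```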