Deep convolutional neural networks (CNNs) have achieved great success in the segmentation of retinal optical coherence tomography (OCT) images. However, images acquired by different devices or imaging protocols differ considerably in noise level, contrast, and resolution, so the performance of a CNN tends to drop dramatically when it is tested on data with domain shift. Unsupervised domain adaptation addresses this problem by transferring knowledge from a labeled domain (the source domain) to an unlabeled domain (the target domain). This paper therefore proposes a two-stage domain adaptation algorithm for the segmentation of retinal OCT images. First, after image-level domain shift reduction, the segmenter is trained with a supervised loss on the source domain together with an adversarial loss from a discriminator, which minimizes the domain gap. Then, target-domain data with satisfactory pseudo labels, as measured by prediction entropy, are used to fine-tune the segmenter, further improving the generalization ability of the model. Comprehensive experimental results on cross-domain choroid and retinoschisis segmentation demonstrate the effectiveness of this method: with domain adaptation, the Intersection over Union (IoU) improves by 8.34% and 3.54% on the two tasks, respectively.
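The entropy-based selection of pseudo-labeled target samples described above can be sketched as follows. This is a minimal NumPy illustration under our own assumptions: the function names and the threshold value are illustrative, not taken from the paper.

```python
import numpy as np

def pixelwise_entropy(prob_map):
    """Mean per-pixel entropy of a softmax probability map.

    prob_map: array of shape (C, H, W); channel values sum to 1 per pixel.
    Low mean entropy indicates a confident prediction.
    """
    eps = 1e-12
    ent = -np.sum(prob_map * np.log(prob_map + eps), axis=0)  # (H, W)
    return float(ent.mean())

def select_confident_targets(prob_maps, threshold):
    """Keep indices of target images whose mean prediction entropy is low."""
    return [i for i, p in enumerate(prob_maps) if pixelwise_entropy(p) < threshold]

# Toy example: one confident and one uncertain 2-class prediction on 2x2 pixels.
confident = np.stack([np.full((2, 2), 0.99), np.full((2, 2), 0.01)])
uncertain = np.stack([np.full((2, 2), 0.5), np.full((2, 2), 0.5)])
keep = select_confident_targets([confident, uncertain], threshold=0.3)
# Only the confident map passes the (illustrative) threshold.
```

Only the low-entropy predictions would then be used as pseudo labels for fine-tuning.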
High myopia has become a worldwide focus of eye-disease research because of its increasing prevalence. Linear lesions are an important clinical sign in the pathological changes of high myopia. Indocyanine green angiography (ICGA) is considered the "ground truth" for the diagnosis of linear lesions, but it is invasive and may cause adverse reactions such as allergy, dizziness, and even shock in some patients. A non-invasive imaging modality that can replace ICGA for the diagnosis of linear lesions is therefore urgently needed. Multi-color scanning laser (MCSL) imaging is a non-invasive technique that reveals linear lesions more richly than other non-invasive techniques, such as color fundus imaging and red-free fundus imaging, and than some invasive ones, such as fundus fluorescein angiography (FFA). To the best of our knowledge, no previous studies have addressed linear lesion segmentation in MCSL images. In this paper, we propose a new U-shaped segmentation network with a multi-scale and global context fusion (SGCF) block, named SGCNet, to segment linear lesions in MCSL images. The multi-scale and global-context features extracted by the SGCF block are fused with learnable parameters to obtain richer high-level features. Four-fold cross-validation was adopted to evaluate the proposed method on 86 MCSL images from 57 high-myopia patients. The IoU, Dice, Sensitivity, and Specificity coefficients are 0.494±0.109, 0.654±0.104, 0.676±0.131, and 0.998±0.002, respectively. The experimental results indicate the effectiveness of the proposed network.
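The learnable-parameter fusion in the SGCF block can be illustrated with a minimal sketch. We assume here that fusion is a weighted sum of same-shaped feature maps whose per-branch weights are softmax-normalized learnable scalars; the actual parameterization inside SGCNet may differ.

```python
import numpy as np

def fuse_features(branches, logits):
    """Weighted fusion of same-shaped feature maps.

    branches: list of arrays of shape (C, H, W), e.g. from different scales
              plus a global-context branch.
    logits:   one learnable scalar per branch; softmax-normalized so the
              fusion weights sum to 1 (a common design choice, assumed here).
    """
    w = np.exp(logits - np.max(logits))   # numerically stable softmax
    w = w / w.sum()
    return sum(wi * b for wi, b in zip(w, branches))

# Equal logits give equal weights, so fusing constants 1 and 3 yields 2.
a = np.ones((1, 2, 2))
b = 3 * np.ones((1, 2, 2))
fused = fuse_features([a, b], np.array([0.0, 0.0]))
```

During training the logits would be updated by backpropagation, letting the network learn how much each scale contributes.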
Retinal detachment (RD) refers to the separation of the retinal neuroepithelium (RNE) layer from the retinal pigment epithelium (RPE), while retinoschisis (RS) is characterized by the splitting of the RNE into multiple layers. Retinal detachment and retinoschisis are the main complications leading to vision loss in high myopia, and optical coherence tomography (OCT) is the main imaging modality for observing them. This paper proposes CFCNN, a U-shaped convolutional neural network with a cross-fusion global feature module, to achieve automatic segmentation of retinal detachment and retinoschisis. The main contributions are: (1) a new cross-fusion global feature module (CFGF) is proposed; (2) residual blocks are integrated into the encoder of the U-Net to enhance the extraction of semantic information. The method was tested on a dataset of 540 OCT B-scans. With the proposed CFCNN, the mean Dice similarity coefficients of retinal detachment and retinoschisis segmentation reached 94.33% and 90.29%, respectively, better than those of several existing advanced segmentation networks.
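The Dice similarity coefficient used to report the results above has a standard definition for binary masks, 2|A∩B| / (|A| + |B|). A generic sketch of the metric (not code from the paper):

```python
import numpy as np

def dice(pred, gt, eps=1e-7):
    """Dice similarity coefficient between two binary masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

# Toy masks: 1 overlapping pixel, 2 and 1 foreground pixels -> 2*1/(2+1).
p = np.array([[1, 1], [0, 0]])
g = np.array([[1, 0], [0, 0]])
d = dice(p, g)  # ≈ 0.667
```

The small `eps` keeps the metric defined when both masks are empty.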
Pathologic myopia (PM) is a major cause of legal blindness worldwide. Linear lesions are closely related to PM and comprise two types of lesions in the posterior fundus of pathologic eyes in optical coherence tomography (OCT) images: retinal pigment epithelium-Bruch's membrane-choriocapillaris complex (RBCC) disruption and myopic stretch line (MSL). In this paper, a fully automated method based on a U-shaped network is proposed to segment RBCC disruption and MSL in retinal OCT images. Compared with the original U-Net, the proposed network has two main improvements: (1) a new downsampling module, the feature aggregation pooling module (FAPM), which aggregates context information and local information; (2) a deep supervision module (DSM), adopted to help the network converge faster and to improve segmentation performance. The proposed method was evaluated via a 3-fold cross-validation strategy on a dataset of 667 2D OCT B-scan images. The mean Dice similarity coefficient, Sensitivity, and Jaccard index are 0.626, 0.665, and 0.491 for RBCC disruption and 0.739, 0.814, and 0.626 for MSL, respectively. These preliminary experimental results show the effectiveness of the proposed method.
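Deep supervision, as used above, attaches a loss to intermediate (side) outputs of the decoder and sums the weighted losses, so gradients reach early layers directly. A minimal sketch, assuming binary cross-entropy per output and illustrative weights (the paper's exact loss and weighting are not given in the abstract):

```python
import numpy as np

def bce(prob, target, eps=1e-12):
    """Binary cross-entropy between a probability map and a binary target."""
    return float(-np.mean(target * np.log(prob + eps)
                          + (1 - target) * np.log(1 - prob + eps)))

def deep_supervision_loss(side_outputs, target, weights):
    """Weighted sum of per-output losses.

    side_outputs: probability maps from different decoder depths, all
                  resized to the target's shape beforehand.
    weights:      one coefficient per side output (values assumed here).
    """
    return sum(w * bce(p, target) for w, p in zip(weights, side_outputs))

t = np.array([[1.0, 0.0]])
good = np.array([[0.9, 0.1]])   # confident, correct side output
bad = np.array([[0.6, 0.4]])    # less confident side output
loss = deep_supervision_loss([good, bad], t, weights=[1.0, 0.5])
```

At inference time only the final output is kept; the side branches exist purely to shape training.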
KEYWORDS: Optical coherence tomography, Image segmentation, Retina, Eye, Image fusion, Visualization, Convolution, Ophthalmology, Network architectures
The choroid is an important structure of the eye, and the choroid thickness distribution estimated from optical coherence tomography (OCT) images plays a vital role in the analysis of many retinal diseases. This paper proposes a novel group-wise attention fusion network (GAF-Net) to segment the choroid layer, which works effectively for both normal and pathological-myopia retinas. Most current networks process all feature maps in the same layer uniformly, which leads to unsatisfactory choroid segmentation results. To improve this, GAF-Net introduces a group-wise channel module (GCM) and a group-wise spatial module (GSM) to fuse group-wise information: the GCM uses channel information to guide the fusion of group-wise context information, while the GSM uses spatial information to do so. Furthermore, we adopt a joint loss to address data imbalance and the uneven choroid target area. Experimental evaluation on a dataset of 1650 clinically obtained B-scans shows that the proposed GAF-Net achieves a Dice similarity coefficient of 95.21±0.73%.
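The abstract does not specify how the joint loss is composed. A common choice for imbalanced segmentation targets combines a soft Dice loss (insensitive to class frequency) with cross-entropy (stable pixel-wise gradients); the sketch below assumes that combination and an illustrative mixing weight.

```python
import numpy as np

def dice_loss(prob, target, eps=1e-7):
    """Soft Dice loss: 1 - Dice overlap of probabilities and target."""
    inter = np.sum(prob * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(prob) + np.sum(target) + eps)

def bce_loss(prob, target, eps=1e-12):
    """Binary cross-entropy averaged over pixels."""
    return float(-np.mean(target * np.log(prob + eps)
                          + (1 - target) * np.log(1 - prob + eps)))

def joint_loss(prob, target, alpha=0.5):
    """Convex combination of Dice and cross-entropy (alpha assumed)."""
    return alpha * dice_loss(prob, target) + (1 - alpha) * bce_loss(prob, target)

# Small-foreground case typical of thin-layer targets such as the choroid.
t = np.array([[1.0, 0.0, 0.0, 0.0]])
p = np.array([[0.8, 0.1, 0.1, 0.1]])
loss = joint_loss(p, t)
```

Because the Dice term is normalized by the foreground size, it keeps the thin choroid region from being overwhelmed by the background pixels.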
Changes in the thickness and volume of the choroid, which can be observed and quantified from optical coherence tomography (OCT) images, are a feature of many retinal diseases, such as age-related macular degeneration and myopic maculopathy. In this paper, we make targeted improvements to the U-Net for segmenting the choroid of either normal or pathological-myopia retinas, obtaining the Bruch's membrane (BM) and the choroidal-scleral interface (CSI). There are two main improvements to the U-Net framework: (1) a refinement residual block (RRB) is added behind each encoder stage, which strengthens the recognition ability of each stage; (2) a channel attention block (CAB) is integrated into the U-Net, which enables high-level semantic information to guide the underlying details and handle the intra-class inconsistency problem. We validated the improved network on a dataset of 952 OCT B-scans from 95 eyes of both normal subjects and patients with pathological myopia. Compared with manual segmentation, the mean choroid thickness difference is 8 μm, and the mean Dice similarity coefficient is 85.0%.
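A channel attention block of the kind described above typically squeezes each channel to a single descriptor by global average pooling and then rescales the channels with gate weights. A simplified, untrained sketch (the actual CAB also injects high-level semantics to guide lower layers, which is omitted here):

```python
import numpy as np

def channel_attention(feat):
    """Squeeze-and-excitation style channel gating on a (C, H, W) map.

    Global average pooling gives one descriptor per channel; a sigmoid
    turns it into per-channel weights in (0, 1) that rescale the map.
    A trained block would pass the descriptor through learned layers first.
    """
    desc = feat.mean(axis=(1, 2))               # (C,) channel descriptors
    gate = 1.0 / (1.0 + np.exp(-desc))          # sigmoid gate weights
    return feat * gate[:, None, None]           # broadcast over H, W

# A zero channel stays zero; an active channel is slightly attenuated.
x = np.stack([np.zeros((2, 2)), np.full((2, 2), 4.0)])
y = channel_attention(x)
```

The gating lets the network emphasize informative channels and suppress noisy ones, which is how channel attention addresses intra-class inconsistency.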