Significance: Automatic and accurate classification of three-dimensional (3-D) retinal optical coherence tomography (OCT) images is essential for assisting ophthalmologists in the diagnosis and grading of macular diseases. Therefore, more effective OCT volume classification for automatic recognition of macular diseases is needed.
Aim: For OCT volumes for which only volume-level labels are available, OCT volume classifiers based on global volume features and deep learning are designed, validated, and compared with other methods.
Approach: We present a general framework for classifying OCT volumes for automatic recognition of macular diseases. The framework consists of three modules: a B-scan feature extractor, two-dimensional (2-D) feature map generation, and a volume-level classifier. This architecture allows OCT volume classification to be addressed with 2-D image machine learning classification algorithms. Specifically, a convolutional neural network (CNN) model is trained and used as a B-scan feature extractor to construct a 2-D feature map of an OCT volume, and volume-level classifiers for the 2-D feature maps, such as a support vector machine and a CNN with/without an attention mechanism, are described.
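The pipeline above (per-B-scan feature extraction, then stacking into a 2-D feature map for a volume-level classifier) can be sketched as follows. This is a minimal numpy illustration, not the paper's implementation: `extract_bscan_features` is a hypothetical stand-in for the trained CNN extractor, using intensity histograms only to give each B-scan a fixed-length descriptor.

```python
import numpy as np

def extract_bscan_features(bscan, dim=64):
    """Hypothetical stand-in for the trained CNN B-scan feature extractor.
    An intensity histogram serves only to produce a fixed-length vector."""
    hist, _ = np.histogram(bscan, bins=dim, range=(0.0, 1.0), density=True)
    return hist

def volume_to_feature_map(volume):
    """Stack per-B-scan feature vectors into a 2-D feature map of shape
    (n_bscans, feature_dim); a volume-level classifier (SVM or CNN) would
    then operate on this map."""
    return np.stack([extract_bscan_features(b) for b in volume])

# toy OCT volume: 32 B-scans of 128x128 "pixels"
rng = np.random.default_rng(0)
volume = rng.random((32, 128, 128))
fmap = volume_to_feature_map(volume)
print(fmap.shape)  # (32, 64)
```

The key design point is that the 3-D classification problem is reduced to a 2-D one, so any 2-D image classifier can serve as the volume-level module.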
Results: Our proposed methods are validated on the publicly available Duke dataset, which consists of 269 intermediate age-related macular degeneration (AMD) volumes and 115 normal volumes. Fivefold cross-validation was performed, achieving average accuracy, sensitivity, and specificity of 98.17%, 99.26%, and 95.65%, respectively. The experiments show that our methods outperform the state-of-the-art methods. Our methods are also validated on our private clinical OCT volume dataset, consisting of 448 AMD volumes and 462 diabetic macular edema volumes.
Conclusions: We present a general framework for OCT volume classification based on a 2-D feature map and a CNN with an attention mechanism, and describe its implementation schemes. Our proposed methods can classify OCT volumes automatically and effectively with high accuracy, and they are a potential practical tool for screening of ophthalmic diseases from OCT volumes.
In conventional retinal region detection methods for optical coherence tomography (OCT) images, many parameters need to be set manually, which is often detrimental to their generalizability. We present a scheme to detect retinal regions based on fully convolutional networks (FCN) for automatic diagnosis of abnormal maculae in OCT images. The FCN model is trained on 900 labeled age-related macular degeneration (AMD), diabetic macular edema (DME), and normal (NOR) OCT images. Its segmentation accuracy is validated, and its effectiveness in recognizing abnormal maculae in OCT images is tested and compared with traditional methods, using the spatial pyramid matching based on sparse coding (ScSPM) classifier and the Inception V3 classifier on two datasets: the Duke dataset and our clinic dataset. In our clinic dataset, we randomly selected half of the B-scans of each class (300 AMD, 300 DME, and 300 NOR) for training the classifiers and the rest (300 AMD, 300 DME, and 300 NOR) for testing, with 10 repetitions. Average accuracy, sensitivity, and specificity of 98.69%, 98.03%, and 99.01% are obtained using the ScSPM classifier, and those of 99.69%, 99.53%, and 99.77% are obtained using the Inception V3 classifier. These two classification algorithms achieve 100% classification accuracy when directly applied to the Duke dataset, where all 45 OCT volumes are used as the test set. Finally, the FCN model with or without flattening and cropping, and its influence on classification performance, are discussed.
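A common use of such a retinal-region segmentation is to crop each B-scan to the predicted retina before classification. The sketch below illustrates only that cropping step with numpy; the binary mask here is a hand-made stand-in for an FCN prediction, and `crop_retinal_region` is a hypothetical helper, not code from the paper.

```python
import numpy as np

def crop_retinal_region(bscan, mask):
    """Crop a B-scan to the bounding box of a binary retina mask
    (the mask would come from the segmentation network)."""
    rows = np.any(mask, axis=1)
    cols = np.any(mask, axis=0)
    r0, r1 = np.where(rows)[0][[0, -1]]
    c0, c1 = np.where(cols)[0][[0, -1]]
    return bscan[r0:r1 + 1, c0:c1 + 1]

bscan = np.zeros((256, 512))
mask = np.zeros((256, 512), dtype=bool)
mask[80:160, 10:500] = True          # pretend FCN output
roi = crop_retinal_region(bscan, mask)
print(roi.shape)  # (80, 490)
```

Restricting the classifier's input to the retinal region is what removes the manually tuned detection parameters that hurt generalizability in the conventional pipeline.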
In recent years, several studies have shown that the canine retina model offers important insight for our understanding of human retinal diseases. Several therapies developed to treat blindness in such models have already moved on to human clinical trials, with more currently under development [1]. Optical coherence tomography (OCT) offers a high-resolution imaging modality for performing in vivo analysis of the retinal layers. However, existing algorithms for automatically segmenting and analyzing such data have mostly focused on the human retina. As a result, canine retinal images are often still analyzed using manual segmentations, which is a slow and laborious task. In this work, we propose a method for automatically segmenting five boundaries in canine retinal OCT. The algorithm employs the position relationships between different boundaries to adaptively enhance the gradient map. A region growing algorithm is then used on the enhanced gradient maps to find the five boundaries separately. The automatic segmentation was compared against manual segmentations, showing an average absolute error of 5.82 ± 4.02 microns.
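To make the boundary-finding step concrete, here is a minimal numpy sketch of tracing one boundary across A-scans on a (pre-enhanced) vertical gradient map: seed at the strongest gradient in the first column, then restrict each subsequent column's search to a small window around the previous row. This is a simplified stand-in for the paper's region growing, with `trace_boundary` and the search-window width being assumptions for illustration.

```python
import numpy as np

def trace_boundary(grad, search_half_width=3):
    """Trace a single boundary column by column on a vertical gradient
    map: the window constraint encodes the expectation that a retinal
    boundary varies slowly between neighboring A-scans."""
    rows = np.empty(grad.shape[1], dtype=int)
    rows[0] = int(np.argmax(grad[:, 0]))
    for c in range(1, grad.shape[1]):
        lo = max(rows[c - 1] - search_half_width, 0)
        hi = min(rows[c - 1] + search_half_width + 1, grad.shape[0])
        rows[c] = lo + int(np.argmax(grad[lo:hi, c]))
    return rows

# toy gradient map: one bright boundary drifting slowly downward
grad = np.zeros((100, 60))
cols = np.arange(60)
grad[50 + cols // 10, cols] = 1.0
boundary = trace_boundary(grad)
```

In the actual method, the gradient map would first be adaptively enhanced using the position relationships between the five boundaries, so each boundary is found on a map where competing edges are suppressed.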
Optical coherence tomography (OCT) is a recent imaging method that allows high-resolution, cross-sectional imaging through tissues and materials. Over the past 18 years, OCT has been successfully used in disease diagnosis, biomedical research, material evaluation, and many other domains. As OCT is a relatively recent imaging method, surgeons have so far had limited experience using it. In addition, the number of images obtained from the imaging device is very large, so an automated method is needed to analyze them. We propose a novel method for automated classification of OCT images based on local features and earth mover's distance (EMD). We evaluated our algorithm using an OCT image set that contains two kinds of skin images: normal skin and nevus flammeus. Experimental results demonstrate the effectiveness of our method, which achieved classification accuracies of 0.97 for an EMD+KNN scheme and 0.99 for an EMD+SVM (support vector machine) scheme, much higher than those of the previous method. Our approach is especially suitable for nonhomogeneous images and could be applied to a wide range of OCT images.
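The EMD+KNN idea can be sketched in a few lines for the special case of normalized 1-D histograms with unit ground distance, where EMD reduces to the L1 distance between cumulative distributions. This is a hedged toy example, not the paper's method: the real scheme builds signatures from local image features, and the class names and helpers below are hypothetical.

```python
import numpy as np

def emd_1d(h1, h2):
    """Earth mover's distance between two normalized 1-D histograms with
    unit ground distance: the L1 distance between their CDFs."""
    return float(np.abs(np.cumsum(h1) - np.cumsum(h2)).sum())

def knn_classify(query, train_hists, train_labels):
    """1-nearest-neighbour under EMD (minimal stand-in for EMD+KNN)."""
    dists = [emd_1d(query, h) for h in train_hists]
    return train_labels[int(np.argmin(dists))]

train_hists = [np.array([1.0, 0.0, 0.0]),   # toy "normal skin" signature
               np.array([0.0, 0.0, 1.0])]   # toy "nevus flammeus" signature
train_labels = ["normal", "nevus_flammeus"]
pred = knn_classify(np.array([0.9, 0.1, 0.0]), train_hists, train_labels)
print(pred)  # normal
```

EMD's tolerance to partial matches between signatures is what makes the approach well suited to the nonhomogeneous images mentioned above.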