The foveal avascular zone (FAZ) is of clinical importance since the retinal vascular arrangement around the fovea changes with retinal vascular diseases and in highly myopic eyes. It is therefore important to segment and quantify the FAZ accurately. Using a novel location-aware deep-learning method, the FAZ boundary was segmented in en-face optical coherence tomography angiography (OCTA) images. The FAZ dimensions were compared with the parameters determined using three methods: (1) the device's in-built software (Cirrus 5000 Angioplex), (2) manual segmentation using ImageJ software by an experienced clinician, and (3) the new location-aware deep-learning method. The parameters were measured from OCTA data from healthy subjects (n=34) and myopic patients (n=66). For this purpose, the FAZ location was manually delineated in en-face OCTA images of 420 x 420 pixels, corresponding to 6 mm x 6 mm. A modified U-Net segmentation network with an additional input channel, a Gaussian distribution centred on the likely FAZ location, was designed and trained using 100 manually segmented OCTA images. The predicted FAZ and the related parameters were then obtained on a test dataset of 100 images. For analysis, two strategies were applied. First, the FAZ segmentations were compared using the Dice coefficient and the Structural Similarity Index (SSIM) to determine the effectiveness of the proposed deep-learning method relative to the other two methods. Second, to provide deeper insight, a set of FAZ dimensions, namely area, perimeter, circularity index, eccentricity, major axis, minor axis, incircle radius, circumcircle radius, maximum and minimum boundary dimensions, and orientation of the major axis, were compared between the three methods. Finally, vessel-related parameters including tortuosity, vessel diameter index (VDI) and vessel area density (VAD) were calculated and compared. The highly myopic eyes exhibited a narrowing of the FAZ area and perimeter.
The current algorithm does not correct for axial length variations. The analysis should be extended with a larger number of images in each myopia group and by correcting for axial length variations.
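The segmentation-comparison step above boils down to computing overlap and shape statistics on binary FAZ masks. A minimal NumPy sketch of two of the reported metrics, the Dice coefficient and the area/perimeter/circularity of a mask, is given below; the boundary-pixel perimeter estimate and the toy disc mask are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice overlap between two binary masks (1.0 = perfect agreement)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

def faz_shape_metrics(mask, mm_per_px):
    """Area, perimeter and circularity of a binary FAZ mask.

    Perimeter is a crude pixel-count estimate: boundary pixels are mask
    pixels with at least one 4-connected background neighbour. Circularity
    is 4*pi*area / perimeter**2 (1 for a perfect circle under an exact
    perimeter; the pixel-count estimate is biased for smooth shapes).
    """
    mask = mask.astype(bool)
    area = mask.sum() * mm_per_px ** 2
    padded = np.pad(mask, 1)
    boundary = mask & ~(padded[:-2, 1:-1] & padded[2:, 1:-1]
                        & padded[1:-1, :-2] & padded[1:-1, 2:])
    perimeter = boundary.sum() * mm_per_px
    circularity = 4.0 * np.pi * area / perimeter ** 2
    return area, perimeter, circularity

# Toy example: a filled disc of radius 50 px at the scale used in the
# study (6 mm / 420 px ~ 0.0143 mm per pixel).
yy, xx = np.mgrid[:420, :420]
disc = (yy - 210) ** 2 + (xx - 210) ** 2 <= 50 ** 2
area, per, circ = faz_shape_metrics(disc, 6.0 / 420)
```

The remaining shape descriptors (eccentricity, axes, in/circumcircle radii) follow similarly from the mask's second moments and boundary distances; libraries such as scikit-image's `regionprops` compute them directly.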
Retinal 3D optical coherence tomography (OCT) is a non-invasive imaging modality for ocular diseases. Given the large volumes of OCT data, it is preferable to extract information such as total retinal thickness and retinal nerve fiber layer thickness (RNFLT) from OCT images automatically. These two thickness values have become useful indices of the progression of diseases such as glaucoma when the asymmetry between the two eyes of an individual is considered. Furthermore, ganglion cell loss may not be detectable by other tests, and may even be missed when only the thickness of one eye is considered (because thickness differs dramatically among individuals). This motivates comparing the two eyes of each subject for symmetry. We therefore propose an asymmetry analysis of RNFLT and total retinal thickness around the macula in a normal Iranian population. In the first step, the retinal borders were segmented by a diffusion-map method and thickness profiles were constructed. The central point of the macula was then located by a pattern-matching scheme. RNFLT and total retinal thickness were analyzed in 9 sectors, and the mean and standard deviation of each sector in the right and left eyes were obtained. The maximum of the average RNFL thickness in both right and left eyes was found in the perifoveal nasal sector, and the minimum in the fovea. Tolerance limits on RNFL thickness were shown to be between 0.78 and 2.4 μm for the 19 volunteers in this study.
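The 9-sector interocular comparison described above can be sketched as follows. The abstract does not specify the sector geometry, so this sketch assumes an ETDRS-style layout (a central foveal disc plus 4 parafoveal and 4 perifoveal quadrants) and synthetic thickness maps; the sector radii and eye data are illustrative only.

```python
import numpy as np

def sector_means(thickness, center, r_fovea, r_para, r_peri):
    """Mean thickness in 9 ETDRS-style sectors (an assumed layout):
    central fovea plus parafoveal and perifoveal quadrants."""
    h, w = thickness.shape
    yy, xx = np.mgrid[:h, :w]
    r = np.hypot(yy - center[0], xx - center[1])
    theta = np.arctan2(yy - center[0], xx - center[1])
    # Quadrant index 0..3, with sector borders on the 45-degree diagonals.
    quad = ((theta + np.pi + np.pi / 4) // (np.pi / 2)).astype(int) % 4
    means = [thickness[r < r_fovea].mean()]
    for lo, hi in [(r_fovea, r_para), (r_para, r_peri)]:
        ring = (r >= lo) & (r < hi)
        for q in range(4):
            means.append(thickness[ring & (quad == q)].mean())
    return np.array(means)

# Interocular asymmetry: per-sector right-minus-left difference on
# synthetic maps (right eye ~100 um plus noise; left eye shifted by 2 um).
rng = np.random.default_rng(0)
od = np.full((200, 200), 100.0) + rng.normal(0, 1, (200, 200))
os_ = od + 2.0
asym = (sector_means(os_, (100, 100), 20, 60, 95)
        - sector_means(od, (100, 100), 20, 60, 95))
```

Tolerance limits of the kind reported (0.78 to 2.4 μm) would then be derived from the distribution of `asym` across the study population, one 9-vector per subject.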
Optical coherence tomography (OCT) suffers from speckle noise, which can cause erroneous interpretation. OCT denoising methods may be grouped into "raw image domain" and "sparse representation" approaches. A comparison of these denoising strategies in the magnitude domain shows that wavelet-thresholding methods performed best, and among these, wavelets with the shift-invariance property yielded better results. We chose dictionary learning to improve on available wavelet thresholding by learning dictionaries tailored to the data instead of using pre-defined bases. Furthermore, to take advantage of shift-invariant wavelets, we introduce a new dictionary-learning scheme that starts from a dual-tree complex wavelet. We investigate the performance of different speckle-reduction methods: 2D conventional dictionary learning (2D CDL), the real part of 2D dictionary learning with a dual-tree complex wavelet transform start dictionary (2D RCWDL), and the imaginary part of 2D dictionary learning with the same start dictionary (2D ICWDL). The proposed 2D R/I CWDL methods achieve considerably better CNR than the other methods.
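The methods above are ranked by contrast-to-noise ratio (CNR). Several CNR definitions appear in the OCT literature; the sketch below assumes one common form, (mu_roi - mu_bg) / sqrt(sigma_roi^2 + sigma_bg^2), and uses a plain 3x3 mean filter as a stand-in denoiser (the dictionary-learning methods themselves are beyond a short example). The synthetic image and additive noise model are illustrative assumptions, not the paper's speckle model.

```python
import numpy as np

def cnr(image, roi_mask, bg_mask):
    """Contrast-to-noise ratio between a feature ROI and a background
    region, using one common definition from the OCT literature."""
    roi, bg = image[roi_mask], image[bg_mask]
    return (roi.mean() - bg.mean()) / np.sqrt(roi.var() + bg.var())

# Synthetic check: denoising should raise CNR.
rng = np.random.default_rng(1)
img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0                      # bright "retinal layer"
noisy = img + rng.normal(0, 0.5, img.shape)  # additive stand-in for speckle
# 3x3 mean filter via shifted sums (keeps the example dependency-free).
pad = np.pad(noisy, 1, mode="edge")
den = sum(pad[i:i + 64, j:j + 64] for i in range(3) for j in range(3)) / 9.0
roi, bg = img > 0.5, img <= 0.5
```

Comparing `cnr(noisy, roi, bg)` with `cnr(den, roi, bg)` quantifies the denoising gain; in the study this comparison is made between the dictionary-learning variants rather than a mean filter.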