We developed a novel high-resolution 3D ultrasound B-scan (3D-UBS) imaging system that provides automated 3D acquisition and easily interpretable, interactive 3D visualization, including en face and oblique views of the whole eye. Early, accurate diagnosis of ocular trauma and associated injuries is essential for preventing complications. Conventional 2D ultrasound is limited by the lack of a trained ultrasonographer at the point of care, the difficulty of finding the optimal imaging plane, and the lack of anatomical context for easy interpretation. Hand-held 2D ultrasound is also limited in the case of perforated globes. Computed tomography (CT) is expensive and performs poorly when perforating intraocular foreign bodies (IOFBs) are small, non-metallic, or organic. This study aimed to address the unmet clinical need for advanced 3D visualization of IOFBs and ocular injuries with 3D-UBS. We imaged porcine eye models containing IOFBs. 3D-UBS produced easily obtained, informative images of ocular injuries without the expert operator required for conventional 2D ultrasound. En face and oblique views provided by multiplanar reformatting allow selection of optimal planes after acquisition. The size and shape of IOFBs can be measured more accurately with 3D-UBS, which also localizes IOFBs with respect to other important ocular structures. 3D-UBS showed a 2.4-times contrast improvement over CT in visualizing wooden IOFBs. Our study demonstrated that 3D-UBS can be used to assess ocular injuries (i.e., to identify the location, size, and shape of IOFBs), which can guide treatment.
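The 2.4-times contrast comparison above can be illustrated with a minimal numpy sketch. The masks, intensities, and the (f − b)/(f + b) contrast definition are assumptions for illustration; the abstract does not specify how contrast was computed.

```python
import numpy as np

def region_contrast(image, fg_mask, bg_mask):
    """Mean-intensity contrast (f - b) / (f + b) between a foreign-body
    region and the surrounding background (assumed definition)."""
    f = image[fg_mask].mean()
    b = image[bg_mask].mean()
    return (f - b) / (f + b)

# Toy example: a bright simulated IOFB on a darker vitreous background.
img = np.full((64, 64), 10.0)
img[24:40, 24:40] = 90.0                       # hypothetical wooden IOFB
fg = np.zeros_like(img, dtype=bool)
fg[24:40, 24:40] = True
bg = ~fg
print(round(region_contrast(img, fg, bg), 2))  # 0.8
```

In practice the same measure would be computed in both a 3D-UBS slice and the corresponding CT slice, and the ratio of the two contrasts reported.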
Purpose: Retinopathy of prematurity (ROP) is a retinal vascular disease affecting premature infants that can culminate in blindness within days if not monitored and treated. A disease stage warranting scrutiny and treatment within ROP is “plus disease,” characterized by increased tortuosity and dilation of posterior retinal blood vessels. ROP is monitored via routine imaging, typically using expensive instruments ($50K to $140K) that are unavailable in low-resource settings at the point of care. Approach: As part of the smartphone-ROP program to enable referrals to expert physicians, fundus images are acquired using smartphone cameras and inexpensive lenses. We developed methods for artificial-intelligence determination of plus disease, consisting of a preprocessing pipeline to enhance vessels and harmonize images, followed by deep learning classification. A deep learning binary classifier (plus disease versus no plus disease) was developed using GoogLeNet. Results: Vessel contrast was enhanced by 90% after preprocessing, as assessed by the contrast improvement index. In an image quality evaluation, preprocessed and original images were evaluated by pediatric ophthalmologists from the US and South America with years of experience diagnosing ROP and plus disease. All participating ophthalmologists agreed or strongly agreed that vessel visibility was improved with preprocessing. Using images from various smartphones, harmonized via preprocessing (e.g., vessel enhancement and size normalization) and augmented in physically reasonable ways (e.g., image rotation), we achieved an area under the ROC curve of 0.9754 for plus disease on a limited dataset. Conclusions: Promising results indicate the potential for developing algorithms and software to facilitate the use of cell phone images for staging of plus disease.
Retinopathy of prematurity (ROP) is a retinal vascular disease that affects premature infants and can result in blindness within days if not monitored and treated. A disease stage warranting increased scrutiny and treatment within ROP is “plus disease,” characterized by increased tortuosity and dilation of posterior retinal blood vessels. ROP is monitored with routine imaging, typically using expensive instruments ranging from $50K to $140K. In low-resource areas of the world, smartphone cameras and inexpensive Volk 28D lenses are being used to image the fundus, albeit with smaller fields of view and lower image quality than the expensive systems. We developed a preprocessing pipeline to enhance vessel visualization and harmonize images for automated analysis with deep learning algorithms. After preprocessing, vessel contrast was enhanced by 90%, as assessed by the contrast improvement index. In an image quality evaluation, 441 images were evaluated by pediatric ophthalmologists from the US and South America, all with years of experience diagnosing ROP and plus disease. All participating ophthalmologists agreed or strongly agreed that vessel visibility was improved in the processed images. A preliminary deep learning binary classifier (plus versus no plus disease) was developed using GoogLeNet. Using smartphone images harmonized via preprocessing (e.g., vessel enhancement and size normalization) and augmented in physically reasonable ways (e.g., image rotation), we achieved an accuracy of 0.96 for plus disease on a limited dataset. These promising results suggest the potential to create algorithms and software that improve the use of cell phone images for ROP staging.
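The 90% vessel-contrast gain quoted in both abstracts is reported via the contrast improvement index. A minimal sketch of one common form of that index (ratio of processed to original vessel contrast) follows; the (f − b)/(f + b) contrast definition and the toy masks are assumptions, since the abstracts do not give the exact formulation:

```python
import numpy as np

def local_contrast(image, vessel_mask, bg_mask):
    # Mean-intensity contrast (f - b) / (f + b); assumed definition.
    f, b = image[vessel_mask].mean(), image[bg_mask].mean()
    return (f - b) / (f + b)

def contrast_improvement_index(original, processed, vessel_mask, bg_mask):
    # CII > 1 means vessel contrast increased after preprocessing.
    return (local_contrast(processed, vessel_mask, bg_mask)
            / local_contrast(original, vessel_mask, bg_mask))

# Toy example: a "vessel" stripe that preprocessing makes stand out more.
orig = np.full((32, 32), 40.0); orig[14:18, :] = 60.0
proc = np.full((32, 32), 20.0); proc[14:18, :] = 80.0
vessel = np.zeros((32, 32), dtype=bool); vessel[14:18, :] = True
print(round(contrast_improvement_index(orig, proc, vessel, ~vessel), 2))  # 3.0
```

A reported 90% enhancement would correspond to an index of roughly 1.9 under this kind of definition.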
We developed a 3D ultrasound biomicroscopy (3D-UBM) imaging system and used it to assess ciliary tissues in the eye. Because ultrasound penetrates opaque ocular tissues, 3D-UBM has a unique ability to create informative 3D visualizations of anterior ocular structures not visible with optical imaging modalities. The ciliary body, located behind the iris, produces aqueous humor, making it an important ocular structure in glaucoma. Only 3D-UBM allows visualization and measurement of the entire ciliary body. Several steps were required for visualization and quantitative assessment. To reduce eye motion in 3D-UBM volumes and avoid geometric artifacts, we performed slice alignment using a transformation-diffusion approach. We applied noise reduction and aligned the volumes to the optic axis to create 3D renderings of the ciliary body in its entirety. We extracted two different sets of images from these volumes: en face and radial images. We created a dataset of eye volumes with slices containing the ciliary body, segmented by two analyst trainees and approved by two experts. Deep learning segmentation models (UNet and Inception-v3+) were trained on both sets of images using appropriate loss functions. Using en face images, Inception-v3+, and weighted cross-entropy loss, we obtained Dice = 0.81±0.04. Using radial images, Inception-v3+, and Dice loss, results improved to Dice = 0.89±0.03, probably because radial images make full use of the symmetry of the eye. Cyclophotocoagulation (CPC) is a glaucoma treatment that partially or completely destroys the ciliary body to reduce fluid production. 3D-UBM allows one to visualize and quantitatively analyze CPC treatments.
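The Dice scores reported above measure overlap between a predicted segmentation and the analyst-drawn reference mask. A minimal numpy sketch of the coefficient, 2|A∩B| / (|A| + |B|); the toy masks are illustrative, not ciliary-body data:

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

# Two 16-pixel squares offset by one pixel, overlapping in 9 pixels.
a = np.zeros((8, 8), dtype=int); a[2:6, 2:6] = 1
b = np.zeros((8, 8), dtype=int); b[3:7, 3:7] = 1
print(round(dice_score(a, b), 4))  # 0.5625
```

The same quantity, written as a differentiable soft Dice over predicted probabilities, is what a "Dice loss" minimizes (as 1 − Dice) during training.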
High-frequency ultrasound biomicroscopy (UBM) is used in clinical ophthalmology because of its ability to penetrate opaque tissues and create high-resolution images of deeper intraocular structures. Because these inexpensive, high-frequency (50 MHz) systems use single-element transducers, they lack dynamic focusing, limiting visualization of small structures and anatomical landmarks, especially outside the focal zone. The wide, axially variant point spread function degrades image quality and obscures smaller structures. We created a fast, generative adversarial network (GAN) method to apply axially varying deconvolution for our 3D ultrasound biomicroscopy (3D-UBM) imaging system. Original images were enhanced using a computationally expensive axially varying deconvolution, giving paired original and enhanced images for GAN training. Supervised generative adversarial networks (pix2pix) were trained to generate enhanced images from originals. We obtained good performance metrics (SSIM = 0.85 and PSNR = 31.32 dB) on test images without noticeable artifacts. GAN deconvolution runs at about 31 ms per frame on a standard graphics card, indicating that near-real-time enhancement is possible. With GAN enhancement, important ocular structures become more visible.
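Of the two image-similarity metrics quoted above, PSNR has a compact closed form, 10·log10(R²/MSE) for data range R; SSIM is more involved, so only PSNR is sketched here. This is a generic sketch against an assumed 8-bit data range, not the paper's evaluation code:

```python
import numpy as np

def psnr(reference, test, data_range=255.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(R^2 / MSE)."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(data_range ** 2 / mse)

# Toy example: a uniform error of one gray level out of 255.
ref = np.zeros((16, 16))
noisy = ref + 1.0
print(round(psnr(ref, noisy), 2))  # 48.13
```

Higher PSNR means the GAN output stays closer to the expensive deconvolution target; 31.32 dB on real images is a typical range for image-restoration tasks.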
We developed a methodology for 3D assessment of the ciliary body of the eye, an important but understudied tissue, using our new 3D ultrasound biomicroscopy (3D-UBM) imaging system. The ciliary body produces aqueous humor which, if not drained properly, can lead to increased intraocular pressure and glaucoma, a leading cause of blindness. Most medications and some surgical procedures for glaucoma target the ciliary body. The ciliary body is also responsible for accommodation (focusing) through muscle contraction and relaxation. UBM is the only imaging modality that can visualize structures behind the opaque iris, such as the ciliary body. Our 3D-UBM acquires several hundred high-resolution (50 MHz) 2D-UBM images and creates a 3D volume, enabling heretofore unavailable en face visualizations and quantifications. In this study, we calculated unique 3D biometrics from automated segmentation using deep learning (UNet). Our results show an accuracy of 0.93 ± 0.01, sensitivity of 0.79 ± 0.07, and Dice score of 0.72 ± 0.07 for deep learning segmentation of the ciliary muscle. For one eye, ciliary body volume was 67.87 mm³; single ciliary process volumes were 0.234 ± 0.093 mm³, with surface areas adjacent to the aqueous humor of 3.02 ± 1.07 mm². Automated and manual measurements of ciliary muscle volume and cross-sectional area were compared, showing overestimation of volume but better agreement for cross-sectional area.
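Volume biometrics like those above reduce, for a voxelized segmentation, to counting mask voxels and scaling by voxel size. A minimal sketch; the voxel spacings below are hypothetical, as the abstract does not state the 3D-UBM voxel dimensions:

```python
import numpy as np

def mask_volume_mm3(mask, spacing_mm):
    """Volume of a binary segmentation mask given per-axis voxel spacing (mm).
    spacing_mm is (slice, row, col) spacing; values here are hypothetical."""
    return mask.sum() * float(np.prod(spacing_mm))

# Toy 6x6x6-voxel "structure" inside a 10x10x10 volume (216 voxels).
mask = np.zeros((10, 10, 10), dtype=bool)
mask[2:8, 2:8, 2:8] = True
print(round(mask_volume_mm3(mask, (0.5, 0.1, 0.1)), 3))  # 1.08
```

Surface-area measures need more care (e.g., marching-cubes meshing of the mask boundary), which is why they are not sketched here.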
KEYWORDS: Eye, 3D image processing, Ultrasonography, Image segmentation, Computer programming, 3D metrology, Stereoscopy, Iris, In vivo imaging, Optical coherence tomography
We created a new high-resolution (50-MHz) three-dimensional ultrasound biomicroscopy (3D-UBM) imaging system and applied it to measurement of the iridocorneal angle, an important biomarker for glaucoma patients. Glaucoma, a leading cause of blindness, often results from poor drainage of fluid from the eye through structures located at the iridocorneal angle. Measurement of the angle has important implications for predicting the course of the disease and determining treatment strategies. An angle measured at a particular location with conventional 2D-UBM can be biased by tilt of the hand-held probe. We created a 3D-UBM system by automatically scanning a 2D-UBM probe with a precision translating stage. Using 3D-UBM, we typically acquire several hundred 2D images to create a high-resolution volume of the anterior chamber of the eye. Image preprocessing included intensity-based frame-to-frame alignment to reduce the effects of eye motion, 3D noise reduction, and multiplanar reformatting to create rotational views along the optic axis with the pupil at the center, giving views suitable for measuring the iridocorneal angle. Anterior chambers were segmented using a semantic-segmentation convolutional neural network, which gave leave-one-eye-out cross-validated accuracy of 98.04%±0.01%, sensitivity of 90.97%±0.02%, specificity of 98.91%±0.01%, and a Dice coefficient of 0.91±0.04. Using the segmentations, iridocorneal angles were automatically estimated using a modification of the semi-automated trabecular-iris angle (TIA) method for each of ∼360 rotational views. Automated measurements were compared to those made by four ophthalmologist readers on eight images from two eyes. In these images, no significant difference (p = 0.996) was found between reader and automated results.
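Once the anatomical landmark points are placed, a TIA-style measurement reduces to the angle at an apex between two rays. The sketch below shows only that generic geometric step with hypothetical coordinates; the actual TIA method defines its landmarks anatomically (relative to the scleral spur), which is not reproduced here:

```python
import numpy as np

def angle_deg(apex, p1, p2):
    """Angle (degrees) at `apex` between rays toward p1 and p2 --
    a simplified stand-in for one rotational-view TIA measurement."""
    v1 = np.asarray(p1, float) - np.asarray(apex, float)
    v2 = np.asarray(p2, float) - np.asarray(apex, float)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# Hypothetical apex in the angle recess with one corneal and one iris point.
print(round(angle_deg((0, 0), (1, 0), (1, 1)), 1))  # 45.0
```

Repeating this in each of the ~360 rotational views yields an angle profile around the eye instead of the single, tilt-sensitive value of 2D-UBM.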