Managing patients with hydrocephalus and cerebrospinal fluid disorders requires repeated head imaging. In adults, this is typically done with computed tomography (CT) or, less commonly, magnetic resonance imaging (MRI). However, CT poses cumulative radiation risks and MRI is costly. Transcranial ultrasound is a radiation-free, relatively inexpensive alternative that can be performed at the point of care. Initial use of this modality has involved measuring gross brain ventricle size by manual annotation. In this work, we explore the use of deep learning to automate the segmentation of the brain's right ventricle from transcranial ultrasound images. We found that the vanilla U-Net architecture had difficulty accurately identifying the right ventricle, which can be attributed to the limited resolution, artifacts, and noise inherent in ultrasound images. We further explore the use of coordinate convolution to augment the U-Net model, which allows us to take advantage of the established acquisition protocol. This enhancement yielded a statistically significant improvement in performance, as measured by the Dice similarity coefficient. This study presents, for the first time, the potential of deep learning to automate hydrocephalus assessment from ultrasound imaging.
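As an illustration of the coordinate-convolution augmentation described above, the following PyTorch sketch concatenates normalized x/y coordinate channels to the input before convolution so the network can exploit a fixed acquisition geometry. The channel counts and the wrapped layer are illustrative assumptions, not the model configuration used in the study.

```python
# Minimal sketch of coordinate convolution (CoordConv-style): normalized coordinate
# channels are appended to the input of a convolution layer.
import torch
import torch.nn as nn


class AddCoords(nn.Module):
    """Append normalized (x, y) coordinate channels to a B x C x H x W tensor."""

    def forward(self, x):
        b, _, h, w = x.shape
        ys = torch.linspace(-1.0, 1.0, h, device=x.device).view(1, 1, h, 1).expand(b, 1, h, w)
        xs = torch.linspace(-1.0, 1.0, w, device=x.device).view(1, 1, 1, w).expand(b, 1, h, w)
        return torch.cat([x, xs, ys], dim=1)


class CoordConv2d(nn.Module):
    """Convolution layer that sees two extra coordinate channels."""

    def __init__(self, in_ch, out_ch, **kwargs):
        super().__init__()
        self.add_coords = AddCoords()
        self.conv = nn.Conv2d(in_ch + 2, out_ch, **kwargs)

    def forward(self, x):
        return self.conv(self.add_coords(x))


# Example: a CoordConv layer standing in for the first convolution of a U-Net encoder.
layer = CoordConv2d(1, 32, kernel_size=3, padding=1)
out = layer(torch.randn(2, 1, 256, 256))   # -> (2, 32, 256, 256)
```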
Magnetic resonance imaging with tagging (tMRI) has long been utilized for quantifying tissue motion and strain during deformation. However, a phenomenon known as tag fading, a gradual decrease in tag visibility over time, often complicates post-processing. The first contribution of this study is to model tag fading by considering the interplay between T1 relaxation and the repeated application of radio frequency (RF) pulses during serial imaging sequences, a factor that has been overlooked in prior research on tMRI post-processing. Further, we have observed an emerging trend of utilizing raw tagged MRI within deep learning-based (DL) registration frameworks for motion estimation. In this work, we evaluate and analyze the impact of commonly used image similarity objectives when training DL registration models on raw tMRI. These are then compared with the harmonic phase-based approach, a traditional method claimed to be robust to tag fading. Our findings, derived from both simulated images and an actual phantom scan, reveal the limitations of various similarity losses on raw tMRI and emphasize caution in registration tasks where image intensity changes over time.
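For reference, the sketch below shows two similarity objectives of the kind evaluated in such DL registration frameworks: intensity mean squared error and a global normalized cross-correlation. These are generic textbook definitions written in PyTorch, not the exact losses or hyperparameters examined in the study.

```python
# Two common similarity losses for training DL registration networks. MSE compares
# raw intensities and is sensitive to tag fading; NCC is invariant to affine
# intensity changes between fixed and moved images.
import torch


def mse_loss(fixed, moved):
    """Mean squared intensity difference."""
    return torch.mean((fixed - moved) ** 2)


def ncc_loss(fixed, moved, eps=1e-8):
    """Negative global normalized cross-correlation."""
    f = fixed - fixed.mean()
    m = moved - moved.mean()
    ncc = (f * m).sum() / (torch.sqrt((f ** 2).sum() * (m ** 2).sum()) + eps)
    return -ncc


fixed = torch.rand(1, 1, 128, 128)
moved = torch.rand(1, 1, 128, 128)
print(mse_loss(fixed, moved).item(), ncc_loss(fixed, moved).item())
```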
Purpose: Diagnosis and surveillance of thoracic aortic aneurysm (TAA) involves measuring the aortic diameter at various locations along the length of the aorta, often using computed tomography angiography (CTA). Currently, measurements are performed by human raters using specialized software for three-dimensional analysis, a time-consuming process requiring 15 to 45 min of focused effort. Thus, we aimed to develop a convolutional neural network (CNN)-based algorithm for fully automated and accurate aortic measurements. Approach: Using 212 CTA scans, we trained a CNN to perform segmentation and localization of key landmarks jointly. The segmentation mask and landmarks are subsequently used to obtain the centerline and cross-sectional diameters of the aorta. A cubic spline is then fit to the aortic boundary at the sinuses of Valsalva to avoid errors related to inclusion of the coronary artery origins. Performance was evaluated on a test set of 60 scans, with automated measurements compared against expert manual raters. Results: Compared to training separate networks for each task, joint training yielded higher accuracy for segmentation, especially at the boundary (p < 0.001), but marginally worse (0.2 to 0.5 mm) accuracy for landmark localization (p < 0.001). The mean absolute error between human and automated measurements was ≤1 mm at six of nine standard clinical measurement locations. However, higher errors were noted in the aortic root and arch regions, ranging between 1.4 and 2.2 mm, although agreement among manual raters was also lower in these regions. Conclusion: Fully automated aortic diameter measurements in TAA are feasible using a CNN-based algorithm. Automated measurements demonstrated low errors comparable in magnitude to those of manual raters; however, measurement error was highest in the aortic root and arch.
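The following PyTorch sketch illustrates the joint-training idea in its simplest form: a shared feature volume feeds both a segmentation head and a landmark-heatmap head, and the two task losses are summed. The backbone, heatmap formulation, channel counts, and loss weighting are assumptions for illustration, not the configuration reported in the paper.

```python
# Joint segmentation + landmark localization from shared features, trained with a
# combined loss (cross-entropy for segmentation, MSE against Gaussian heatmaps for
# landmarks).
import torch
import torch.nn as nn
import torch.nn.functional as F


class JointSegLandmarkHead(nn.Module):
    def __init__(self, feat_ch, n_classes=2, n_landmarks=9):
        super().__init__()
        self.seg_head = nn.Conv3d(feat_ch, n_classes, kernel_size=1)
        self.lmk_head = nn.Conv3d(feat_ch, n_landmarks, kernel_size=1)

    def forward(self, feats):
        return self.seg_head(feats), self.lmk_head(feats)


def joint_loss(seg_logits, seg_target, lmk_pred, lmk_heatmaps, w_lmk=1.0):
    """Sum of the segmentation and landmark-heatmap losses."""
    return F.cross_entropy(seg_logits, seg_target) + w_lmk * F.mse_loss(lmk_pred, lmk_heatmaps)


# Shapes only, to show how the pieces connect:
feats = torch.randn(1, 32, 16, 64, 64)            # shared decoder features
seg_logits, lmk_pred = JointSegLandmarkHead(32)(feats)
seg_target = torch.randint(0, 2, (1, 16, 64, 64))
lmk_heatmaps = torch.rand(1, 9, 16, 64, 64)
loss = joint_loss(seg_logits, seg_target, lmk_pred, lmk_heatmaps)
```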
Diagnosis of thoracic aortic aneurysm typically involves measuring the diameters at various locations on the aorta from computed tomography angiograms (CTAs). Human measurement is time-consuming and suffers from inter- and intra-user variability, motivating the need for automated, repeatable measurement software. This work presents a convolutional neural network (CNN)-based algorithm for fully automated aortic measurements. We employ the CNN to perform aortic segmentation and localization of key landmarks jointly, which performs better than training individual models for each task. The segmentation mask and landmarks are subsequently used to obtain the centerline and cross-sectional diameters of the aorta using a combination of image processing techniques. We gathered a dataset of CTAs from patients under ongoing imaging surveillance for thoracic aortic aneurysm and demonstrate the performance of our algorithm through quantitative comparison against measurements from human raters. We observe that for most locations, the mean absolute error between human and computer-generated measurements is less than 1 mm, which is at or below the level of variability in human measurements. Furthermore, we showcase the behavior of our method through various visual examples, discuss its limitations, and propose possible improvements.
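To illustrate how a segmentation mask and centerline can yield a cross-sectional diameter, the sketch below samples the mask on a plane orthogonal to the local centerline tangent and reports the diameter of the circle with the same in-plane area. The grid size, spacing handling, and interpolation scheme are assumptions, not the image processing pipeline actually used.

```python
# Cross-sectional diameter at one centerline point: resample the mask in the plane
# normal to the centerline tangent, then convert the in-plane area to an
# equal-area-circle diameter.
import numpy as np
from scipy.ndimage import map_coordinates


def cross_section_diameter(mask, center, tangent, spacing_mm=1.0, half_size=40):
    """mask: 3D binary array (z, y, x); center: voxel coords; tangent: centerline direction."""
    center = np.asarray(center, dtype=float)
    t = np.asarray(tangent, dtype=float)
    t /= np.linalg.norm(t)
    # Two in-plane axes orthogonal to the tangent.
    u = np.cross(t, [0.0, 0.0, 1.0])
    if np.linalg.norm(u) < 1e-6:
        u = np.cross(t, [0.0, 1.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(t, u)
    grid = np.arange(-half_size, half_size + 1)
    gu, gv = np.meshgrid(grid, grid, indexing="ij")
    pts = center[:, None, None] + gu * u[:, None, None] + gv * v[:, None, None]
    plane = map_coordinates(mask.astype(float), pts.reshape(3, -1), order=1) > 0.5
    area_mm2 = plane.sum() * spacing_mm ** 2          # assumes isotropic voxel spacing
    return 2.0 * np.sqrt(area_mm2 / np.pi)            # equal-area-circle diameter in mm
```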
Connectivity information derived from diffusion-weighted magnetic resonance images (DW-MRIs) plays an important role in studying human subcortical gray matter structures. However, due to the O(N²) complexity of computing the connectivity of each voxel to every other voxel (or to multiple ROIs), the current practice of extracting connectivity information is highly inefficient. This makes the processing of high-resolution images and population-level analyses very computationally demanding. To address this issue, we propose a more efficient way to extract connectivity information; briefly, we consider two regions/voxels to be connected if a white matter fiber streamline passes through them, no matter where the streamline originates. We consider the thalamus parcellation task for demonstration purposes; our experiments show that our approach brings a 30 to 120 times speedup over traditional approaches with comparable qualitative parcellation results. We also demonstrate that high-resolution connectivity features can be super-resolved from low-resolution DW-MRI in our framework. Together, these two innovations enable higher-resolution connectivity analysis from DW-MRI. Our source code is available at jasonbian97.github.io/fastcod.
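The sketch below expresses the pass-through connectivity criterion in a few lines of NumPy: a voxel and a target ROI are counted as connected whenever any streamline visits both, regardless of where the streamline originates. The data structures (streamlines as point arrays in voxel space, an integer ROI label volume) and the lack of any seed-mask restriction are simplifying assumptions.

```python
# Pass-through connectivity: for each streamline, collect the voxels and ROIs it
# visits and increment a (voxel, ROI) count. In practice the voxels would be
# restricted to a seed region such as the thalamus.
import numpy as np
from collections import defaultdict


def pass_through_connectivity(streamlines, roi_labels):
    """streamlines: list of (N_i, 3) voxel-space point arrays; roi_labels: 3D int volume."""
    counts = defaultdict(int)                         # (seed_voxel, roi_id) -> streamline count
    bounds = np.array(roi_labels.shape) - 1
    for sl in streamlines:
        vox = np.clip(np.floor(sl).astype(int), 0, bounds)
        vox = np.unique(vox, axis=0)
        rois = set(roi_labels[tuple(vox.T)]) - {0}    # ROIs visited (label 0 = background)
        for v in map(tuple, vox):
            for r in rois:
                counts[(v, r)] += 1
    return counts
```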
The thalamus is a subcortical gray matter structure that plays a key role in relaying sensory and motor signals within the brain. Its nuclei can atrophy or otherwise be affected by neurological diseases and injuries, including mild traumatic brain injury. Segmenting both the thalamus and its nuclei is challenging because of the relatively low contrast within and around the thalamus in conventional magnetic resonance (MR) images. This paper explores imaging features to determine key tissue signatures that naturally cluster, from which we can parcellate thalamic nuclei. Tissue contrasts include T1-weighted and T2-weighted images, MR diffusion measurements including fractional anisotropy (FA), mean diffusivity, Knutsson coefficients that represent fiber orientation, and synthetic multi-TI images derived from FGATIR and T1-weighted images. After registration of these contrasts and isolation of the thalamus, we use the uniform manifold approximation and projection (UMAP) method for dimensionality reduction to produce a low-dimensional representation of the data within the thalamus. Manual labeling of the thalamus provides labels for our UMAP embedding, from which k-nearest neighbors can be used to label new, unseen voxels in that same UMAP embedding. N-fold cross-validation of the method reveals performance comparable to state-of-the-art methods for thalamic parcellation.
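The embedding-and-labeling step can be sketched with standard libraries: fit UMAP on per-voxel feature vectors inside the thalamus, then transfer manual labels to new voxels with a k-nearest-neighbors classifier in the embedding. Feature dimensions, neighbor counts, and other hyperparameters below are placeholders, not the values used in the paper.

```python
# UMAP dimensionality reduction followed by k-NN label transfer in the embedding.
import numpy as np
import umap                                   # pip install umap-learn
from sklearn.neighbors import KNeighborsClassifier

# features: (n_voxels, n_contrasts) per-voxel values from T1w/T2w/FA/MD/Knutsson/multi-TI
features = np.random.rand(5000, 12)
labels = np.random.randint(0, 6, 5000)        # manual nucleus labels for training voxels

reducer = umap.UMAP(n_components=2, n_neighbors=30, random_state=0)
embedding = reducer.fit_transform(features)

knn = KNeighborsClassifier(n_neighbors=15).fit(embedding, labels)

# New voxels are projected into the same embedding, then labeled by their neighbors.
new_features = np.random.rand(100, 12)
new_labels = knn.predict(reducer.transform(new_features))
```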
Analysis of tongue motion has proven useful in gaining a better understanding of speech and swallowing disorders. Tagged magnetic resonance imaging (MRI) has been used to image tongue motion, and the harmonic phase processing (HARP) method has been used to compute 3D motion from these images. However, HARP can fail with large motions due to so-called tag (or phase) jumping, yielding highly inaccurate results. The phase vector incompressible registration algorithm (PVIRA) was developed using the HARP framework to yield smooth, incompressible, and diffeomorphic motion fields, but it can also suffer from tag jumping. In this paper, we propose a new method to avoid the tag jumping that occurs in the later frames of tagged MR image sequences. The new approach applies PVIRA between successive time frames and then adds their stationary velocity fields to yield a starting point from which to initialize a final PVIRA stage between the troublesome frames. We demonstrate on multiple data sets that this method avoids tag jumping and produces superior motion estimates compared with existing methods.
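The initialization idea can be sketched as follows: the stationary velocity fields (SVFs) from successive frame-to-frame registrations are summed (a first-order approximation of their composition) and then exponentiated by scaling and squaring to obtain a large-motion starting deformation for the final registration between distant frames. The warping utilities below are simplified 2D stand-ins, not the PVIRA implementation.

```python
# Sum per-frame SVFs, then exponentiate via scaling and squaring to get an initial
# displacement field for the final registration stage.
import numpy as np
from scipy.ndimage import map_coordinates


def warp(field, disp):
    """Evaluate each component of `field` at x + disp(x) (2D displacement fields)."""
    coords = np.indices(disp.shape[1:]).astype(float) + disp
    return np.stack([map_coordinates(c, coords, order=1, mode="nearest") for c in field])


def exp_svf(v, n_steps=6):
    """Scaling and squaring: displacement of exp(v)."""
    disp = v / (2 ** n_steps)
    for _ in range(n_steps):
        disp = disp + warp(disp, disp)        # phi <- phi o phi
    return disp


# SVFs from registrations frame 0->1, 1->2, ... are summed, then exponentiated to
# initialize the final registration to the troublesome later frame.
v_total = sum(np.random.randn(2, 64, 64) * 0.5 for _ in range(4))
init_disp = exp_svf(v_total)
```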
Diffusion-weighted magnetic resonance imaging (DW-MRI) is a non-invasive way of imaging white matter tracts in the human brain. DW-MRIs are usually acquired using echo-planar imaging (EPI) with high gradient fields, which can introduce severe geometric distortions that interfere with further analyses. Most tools for correcting distortion require two minimally weighted DW-MRI images (B0) acquired with different phase-encoding directions, and they can take hours to process per subject. Since a great deal of diffusion data is acquired with only a single phase-encoding direction, the applicability of existing approaches is limited. We propose a deep learning-based registration approach to correct distortion using only the B0 acquired from a single phase-encoding direction. Specifically, we register the undistorted T1-weighted image and the distorted B0 through a deep learning model to remove the distortion. We apply a differentiable mutual information loss during training to improve inter-modality alignment. Experiments on the Human Connectome Project dataset show that the proposed method outperforms SyN and VoxelMorph on several metrics and takes only a few seconds to process one subject.
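A compact sketch of a differentiable mutual-information loss, in the spirit of the inter-modality objective used to align T1-weighted and distorted B0 images, is shown below. It uses Parzen-window (Gaussian) soft histograms; the bin count and bandwidth are assumptions, and the exact formulation in the paper may differ.

```python
# Differentiable mutual information via soft (Gaussian) histogram assignments.
import torch


def soft_hist_weights(x, bins=32, sigma=0.05):
    """x with intensities in [0, 1]; returns (N, bins) soft bin-assignment weights."""
    centers = torch.linspace(0.0, 1.0, bins, device=x.device)
    w = torch.exp(-0.5 * ((x.reshape(-1, 1) - centers) / sigma) ** 2)
    return w / (w.sum(dim=1, keepdim=True) + 1e-8)


def mutual_information_loss(a, b, bins=32):
    """Negative MI between two images normalized to [0, 1]; differentiable w.r.t. both."""
    wa, wb = soft_hist_weights(a, bins), soft_hist_weights(b, bins)
    p_joint = wa.t() @ wb / wa.shape[0]               # (bins, bins) joint distribution
    p_a = p_joint.sum(dim=1, keepdim=True)
    p_b = p_joint.sum(dim=0, keepdim=True)
    mi = (p_joint * torch.log((p_joint + 1e-8) / (p_a @ p_b + 1e-8))).sum()
    return -mi                                        # minimized during training


t1 = torch.rand(1, 1, 96, 96)
warped_b0 = torch.rand(1, 1, 96, 96, requires_grad=True)   # stands in for the network output
loss = mutual_information_loss(t1, warped_b0)
loss.backward()
```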
Landmark detection is a critical component of the image processing pipeline for automated aortic size measurements. Given that the thoracic aorta has a relatively conserved topology across the population and that a human annotator with minimal training can estimate the location of unseen landmarks from limited examples, we propose an auxiliary learning task to learn the implicit topology of aortic landmarks through a CNN-based network. Specifically, we create a network to predict the location of missing landmarks from the visible ones by minimizing the Implicit Topology loss in an end-to-end manner. The proposed learning task can be easily adapted to and combined with U-Net-style backbones. To validate our method, we utilize a dataset consisting of 207 CTAs, labeling four landmarks on each aorta. Our method outperforms state-of-the-art U-Net-style architectures (ResUnet, UnetR) in terms of localization accuracy, with only a light parameter overhead (0.4M additional parameters). We also demonstrate our approach in two clinically meaningful applications: aortic sub-region division and automatic centerline generation.
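One way to realize such an auxiliary task is sketched below: a small head is trained to predict the coordinates of a randomly masked ("missing") landmark from the remaining visible ones, which encourages the model to internalize the aorta's landmark topology. The module name, sizes, and masking scheme are hypothetical illustrations, not the architecture from the paper.

```python
# Auxiliary topology task: predict one hidden landmark's (x, y, z) from the others.
import torch
import torch.nn as nn


class TopologyHead(nn.Module):
    def __init__(self, n_landmarks=4, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(n_landmarks * 3 + n_landmarks, hidden),  # coords + visibility mask
            nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, coords, visible_mask):
        # coords: (B, n_landmarks, 3); visible_mask: (B, n_landmarks), 0 at the hidden landmark
        masked = coords * visible_mask.unsqueeze(-1)
        return self.mlp(torch.cat([masked.flatten(1), visible_mask], dim=1))


coords = torch.rand(8, 4, 3)                     # landmark coordinates from the detector
mask = torch.ones(8, 4)
mask[:, 2] = 0                                   # hide landmark 2
pred = TopologyHead()(coords, mask)
topo_loss = nn.functional.mse_loss(pred, coords[:, 2])   # auxiliary topology loss term
```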
Accurate segmentation of the aorta in computed tomography angiography (CTA) images is the first step in the analysis of diseases such as aortic aneurysm, but manual segmentation can be prohibitively time-consuming and error-prone. Convolutional neural network (CNN)-based models have been utilized for automated segmentation of anatomy in CTA scans, with the ubiquitous U-Net being one of the most popular architectures. Many downstream image analysis tasks (e.g., registration, diameter measurement) may require very high segmentation accuracy. In this work, we developed and tested a U-Net model with attention gating for segmentation of the thoracic aorta in clinical CTA data of patients with thoracic aortic aneurysm. Attention gating helps the model automatically focus on difficult-to-segment target structures and has previously been shown to increase segmentation accuracy in other applications. We trained U-Nets both with and without attention gating on 145 CTAs. Performance of the models was evaluated by calculating the DCS and average Hausdorff distance (AHD) on a test set of 20 CTAs. We found that the U-Net with attention gating yields more accurate segmentation than the U-Net without attention gating (DCS 0.966±0.028 vs. 0.944±0.022; AHD 0.189±0.134 mm vs. 0.247±0.155 mm). Furthermore, we explored the segmentation accuracy of this U-Net for multi-class labeling of various anatomic segments of the thoracic aorta and found an average DCS of 0.86 across 7 different labels. We conclude that the U-Net with attention gating improves segmentation performance and may aid segmentation tasks that require high levels of accuracy.
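For readers unfamiliar with attention gating, the sketch below shows an additive attention gate in the style of Attention U-Net (Oktay et al., 2018): skip-connection features are weighted by a gating signal from the coarser decoder level so that the model emphasizes the target structure. Channel counts are placeholders and this is not the exact configuration trained here.

```python
# Additive attention gate applied to a U-Net skip connection (3D version).
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionGate(nn.Module):
    def __init__(self, skip_ch, gate_ch, inter_ch):
        super().__init__()
        self.theta = nn.Conv3d(skip_ch, inter_ch, kernel_size=1)
        self.phi = nn.Conv3d(gate_ch, inter_ch, kernel_size=1)
        self.psi = nn.Conv3d(inter_ch, 1, kernel_size=1)

    def forward(self, skip, gate):
        g = F.interpolate(self.phi(gate), size=skip.shape[2:], mode="trilinear", align_corners=False)
        alpha = torch.sigmoid(self.psi(F.relu(self.theta(skip) + g)))   # (B, 1, D, H, W) attention map
        return skip * alpha                                             # gated skip connection


skip = torch.randn(1, 32, 32, 64, 64)           # encoder features at a skip connection
gate = torch.randn(1, 64, 16, 32, 32)           # coarser decoder features
gated = AttentionGate(32, 64, 16)(skip, gate)   # same shape as `skip`
```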
Thoracic aortic aneurysm (TAA) growth is currently assessed by changes in maximal aortic diameter (i.e., radial growth of the vessel). However, there is growing awareness that longitudinal aortic growth (i.e., elongation) is an important metric of disease status, albeit one that is difficult to measure using current clinical techniques. Previously, we proposed a method to assess 3D changes in aortic wall growth/deformation using deformable image registration with interpolation of the spatial Jacobian determinant to the aortic surface. Here we propose a method to re-orient the Jacobian into directional components relative to the aortic surface rather than the image space, allowing clinicians and researchers to isolate and study the pathologic effects of each directional component of growth separately. To this end, we first perform a deformable image registration between two aortic geometries. Second, we segment the aortic surface and centerline in the fixed image and use the resulting geometry to construct anatomically based local coordinate systems at each voxel of the aortic surface. Using the Jacobian matrix field resulting from the deformable registration, we obtain the anatomically oriented Jacobian components by rotating the Jacobian matrix at each voxel so that it is aligned with the anatomically based local coordinate system. Through experiments on toy cylinders and real clinical cases, we show clear differences between the Jacobian determinant and its directional components, with the directional Jacobian components able to remove one directional change (e.g., longitudinal) while maintaining the other (e.g., cross-sectional).
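A minimal sketch of the re-orientation at a single surface voxel is given below: an anatomical frame is built from the centerline tangent (longitudinal), the outward surface normal (radial), and their cross product (circumferential), and the Jacobian is expressed in that frame as R^T J R. The frame construction and the example directions are assumptions for illustration.

```python
# Re-orient a 3x3 spatial Jacobian into an anatomical (longitudinal/radial/
# circumferential) coordinate frame at one aortic surface voxel.
import numpy as np


def anatomical_jacobian(J, tangent, normal):
    """J: 3x3 Jacobian; tangent: centerline direction; normal: outward surface normal."""
    e_long = tangent / np.linalg.norm(tangent)
    # Remove any tangent component from the normal, then complete the orthonormal frame.
    e_rad = normal - np.dot(normal, e_long) * e_long
    e_rad /= np.linalg.norm(e_rad)
    e_circ = np.cross(e_long, e_rad)
    R = np.column_stack([e_long, e_rad, e_circ])     # image axes -> anatomical axes
    return R.T @ J @ R


J = np.eye(3) + 0.05 * np.random.randn(3, 3)
J_anat = anatomical_jacobian(J, tangent=np.array([0.0, 0.0, 1.0]), normal=np.array([1.0, 0.0, 0.0]))
longitudinal_component = J_anat[0, 0]                # growth along the centerline direction
```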