KEYWORDS: Scanners, Data modeling, Education and training, Machine learning, Principal component analysis, Diseases and disorders, Neuroimaging, Data acquisition, Image acquisition, Head
Purpose: Distributed learning is widely used to comply with data-sharing regulations and access diverse datasets for training machine learning (ML) models. The traveling model (TM) is a distributed learning approach that sequentially trains with data from one center at a time, which is especially advantageous when dealing with limited local datasets. However, a critical concern emerges when centers utilize different scanners for data acquisition, which could potentially lead models to exploit these differences as shortcuts. Although data harmonization can mitigate this issue, current methods typically rely on large or paired datasets, which can be impractical to obtain in distributed setups. Approach: We introduced HarmonyTM, a data harmonization method tailored for the TM. HarmonyTM effectively mitigates bias in the model’s feature representation while retaining crucial disease-related information, all without requiring extensive datasets. Specifically, we employed adversarial training to “unlearn” bias from the features used in the model for classifying Parkinson’s disease (PD). We evaluated HarmonyTM using multi-center three-dimensional (3D) neuroimaging datasets from 83 centers using 23 different scanners. Results: Our results show that HarmonyTM improved PD classification accuracy from 72% to 76% and reduced (unwanted) scanner classification accuracy from 53% to 30% in the TM setup. Conclusion: HarmonyTM is a method tailored for harmonizing 3D neuroimaging data within the TM approach, aiming to minimize shortcut learning in distributed setups. This prevents the disease classifier from leveraging scanner-specific details to classify patients with or without PD—a key aspect for deploying ML models for clinical applications.
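To illustrate the general idea of adversarially "unlearning" scanner information, the sketch below uses a gradient reversal layer between a feature extractor and an auxiliary scanner-classification head. This is a minimal, hypothetical PyTorch example, not the published HarmonyTM implementation; the architecture, loss weighting, and variable names are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass, sign-flipped (scaled) gradient in the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

class HarmonizedClassifier(nn.Module):
    def __init__(self, n_features=128, n_scanners=23):
        super().__init__()
        # Small 3D CNN backbone (illustrative stand-in for a real 3D neuroimaging model)
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(16, n_features), nn.ReLU(),
        )
        self.disease_head = nn.Linear(n_features, 2)           # PD vs. control
        self.scanner_head = nn.Linear(n_features, n_scanners)  # adversarial head

    def forward(self, x, lam=1.0):
        z = self.encoder(x)
        return self.disease_head(z), self.scanner_head(GradReverse.apply(z, lam))

# One illustrative training step: the disease loss is minimized while the reversed
# gradient pushes the encoder to remove scanner-predictive information.
model = HarmonizedClassifier()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
ce = nn.CrossEntropyLoss()
x = torch.randn(2, 1, 64, 64, 64)                 # toy 3D volumes, not real data
y_pd, y_scanner = torch.tensor([0, 1]), torch.tensor([3, 17])
pd_logits, sc_logits = model(x, lam=1.0)
loss = ce(pd_logits, y_pd) + ce(sc_logits, y_scanner)
opt.zero_grad(); loss.backward(); opt.step()
```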
KEYWORDS: 3D modeling, 3D image processing, Brain, Data modeling, Neuroimaging, Medical imaging, Deep learning, Image processing, Artificial intelligence
Deep learning techniques for medical image analysis have reached comparable performance to medical experts, but the lack of reliable explainability leads to limited adoption in clinical routine. Explainable AI has emerged to address this issue, with causal generative techniques standing out by incorporating a causal perspective into deep learning models. However, their use cases have been limited to 2D images and tabulated data. To overcome this, we propose a novel method to expand a causal generative framework to handle volumetric 3D images, which was validated through analyzing the effect of brain aging using 40196 MRI datasets from the UK Biobank study. Our proposed technique paves the way for future 3D causal generative models in medical image analysis.
The difference between chronological age and predicted biological brain age, the so-called “brain age gap”, is a promising biomarker for assessment of overall brain health. It has also been suggested as a biomarker for early detection of neurological and cardiovascular conditions. The aim of this work is to identify group-level variability in the brain age gap between healthy subjects and patients with neurological and cardiovascular diseases. Therefore, a deep convolutional neural network was trained on UK Biobank T1-weighted MRI datasets of healthy subjects (n=6860) to predict brain age. After training, the model was used to determine the brain age gap for healthy hold-out test subjects (n=344) and for subjects with neurological (n=2327) or cardiovascular (n=6467) diseases. Next, saliency maps were analyzed to identify brain regions used by the model to render decisions. Linear bias correction was implemented to correct for the bias of age predictions made by the model. After bias correction, the trained model achieved an average brain age gap of 0.05 years for the healthy test cohort, while the neurological disease test cohort had an average brain age gap of 0.7 years and the cardiovascular disease test cohort had an average brain age gap of 0.25 years. The average saliency maps appear similar for the three test groups, suggesting that the model mostly uses brain areas associated with general brain aging patterns. This work's results indicate potential in the brain age gap for differentiating neurological and cardiac patients from healthy aging patterns, supporting its use as a novel biomarker.
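A minimal sketch of one common form of linear bias correction for brain age predictions is shown below: a linear model of predicted versus chronological age is fitted on a healthy validation set and then used to remove the regression-to-the-mean bias before computing the gap. The exact correction used in the study may differ; all numbers and names here are illustrative.

```python
import numpy as np

def fit_bias_correction(chron_age, pred_age):
    """Fit predicted = a * chronological + b on a healthy validation set."""
    a, b = np.polyfit(chron_age, pred_age, deg=1)
    return a, b

def corrected_brain_age_gap(chron_age, pred_age, a, b):
    """Remove the fitted linear bias, then compute the brain age gap."""
    corrected_pred = pred_age + (chron_age - (a * chron_age + b))
    return corrected_pred - chron_age

# Toy usage with synthetic numbers (not study data)
rng = np.random.default_rng(0)
age_val = rng.uniform(45, 80, 500)
pred_val = 0.8 * age_val + 12 + rng.normal(0, 4, 500)   # typical over/under-estimation pattern
a, b = fit_bias_correction(age_val, pred_val)
gap = corrected_brain_age_gap(age_val, pred_val, a, b)
print(round(gap.mean(), 3))   # close to 0 on the healthy validation set after correction
```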
Medical imaging datasets, such as magnetic resonance, are increasingly being used to investigate the genetic architecture of the brain. These images are commonly used as imaging-specific or imaging-derived phenotypes when conducting genotype-phenotype association studies. When using this type of phenotype, multivariate genome-wide association study (GWAS) designs are considered better suited than univariate methods due to their ability to account for the inherent correlations between the phenotypes related to brain structures as determined from medical images. The main objective of this work is to establish and evaluate a comprehensive pipeline for investigating genotype-phenotype associations of the human brain using canonical component analysis. As a proof-of-principle, the proposed pipeline was tested by investigating associations between genetic variants and cortical brain region volumes in subjects with attention-deficit hyperactivity disorder. Canonical component analysis, a form of multivariate GWAS and machine learning, was utilized to determine these genotype-phenotype associations. Using the developed pipeline, several significant (p-value < 5E−04) single nucleotide polymorphisms were found that reside in or near genes such as DSCAM and DPYSL2, which are known to be associated with neurological and mental disorders or substance addiction, a common comorbidity for subjects with attention-deficit hyperactivity disorder. These clinically meaningful results show that the proposed pipeline using canonical component analysis can be used to investigate the genetic architecture of the brain.
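As a rough illustration of the multivariate association step, the sketch below uses scikit-learn's canonical correlation analysis to relate a matrix of allele counts to a matrix of regional brain volumes and reports the canonical correlations. This is a generic stand-in, not the study's pipeline; in practice, significance would be assessed, e.g., via permutation testing, and the data here are synthetic.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(42)
n_subjects = 200
X_snps = rng.integers(0, 3, size=(n_subjects, 50)).astype(float)   # 0/1/2 minor allele counts
Y_volumes = rng.normal(size=(n_subjects, 34))                       # cortical region volumes

cca = CCA(n_components=2)
X_c, Y_c = cca.fit_transform(X_snps, Y_volumes)

# Canonical correlations between paired canonical variates
for k in range(2):
    r = np.corrcoef(X_c[:, k], Y_c[:, k])[0, 1]
    print(f"component {k}: canonical correlation = {r:.3f}")
```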
Parkinson’s disease (PD) is the second most common neurodegenerative disease, affecting 2-3% of the population over 65 years of age. Considerable research has investigated the benefit of using neuroimaging to improve PD diagnosis. However, it is challenging for medical experts to manually identify the subtle differences associated with PD in such complex data. It has been shown that machine learning models can achieve human-like accuracies for many computer-aided diagnosis applications. However, model performance usually depends on the amount and diversity of training data available, whereas most Parkinson’s disease classification models were trained on rather small datasets. Training data size and diversity can be increased by curating multi-site datasets. However, this may also increase biological and non-biological variances due to differences in participant cohorts, scanners, and data acquisition protocols. Thus, data harmonization is important to reduce those variances and enable the models to focus primarily on the patterns associated with PD. This work compares intensity harmonization techniques on 1796 MRI scans from twelve studies. Our results show that a histogram matching approach does not improve classification accuracy (78%) compared to the model trained on unharmonized data (baseline). However, it reduces the disparity between sensitivity and specificity from 81% and 73% to 77% and 79%, respectively. Moreover, combining histogram matching and least squares mean tissue intensity harmonization outperforms the baseline model (accuracy of 74% compared to 67%) for an independent test set. Finally, our analysis considering sex (male, female) and groups (PD, healthy) shows that models trained on harmonized data exhibited reduced performance disparities between groups, which may be interpreted as a form of bias mitigation.
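A minimal sketch of intensity harmonization by histogram matching to a reference scan is shown below, using scikit-image and a brain mask. This is a generic illustration of the technique, not the study's exact preprocessing chain; the mask handling and reference choice are assumptions.

```python
import numpy as np
from skimage.exposure import match_histograms

def harmonize_to_reference(moving_img: np.ndarray, reference_img: np.ndarray,
                           brain_mask: np.ndarray) -> np.ndarray:
    """Match the intensity histogram of a brain-masked 3D volume to a reference volume."""
    out = moving_img.copy().astype(float)
    matched = match_histograms(moving_img[brain_mask], reference_img[brain_mask])
    out[brain_mask] = matched
    return out

# Toy example with synthetic volumes
rng = np.random.default_rng(1)
ref = rng.normal(100, 20, size=(32, 32, 32))
mov = rng.normal(60, 10, size=(32, 32, 32))
mask = np.ones_like(ref, dtype=bool)
harmonized = harmonize_to_reference(mov, ref, mask)
print(harmonized.mean(), ref.mean())   # intensity distributions become comparable after matching
```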
Purpose: Explainability and fairness are two key factors for the effective and ethical clinical implementation of deep learning-based machine learning models in healthcare settings. However, there has been limited work on investigating how unfair performance manifests in explainable artificial intelligence (XAI) methods, and how XAI can be used to investigate potential reasons for unfairness. Thus, the aim of this work was to analyze the effects of previously established sociodemographic-related confounders on classifier performance and explainability methods. Approach: A convolutional neural network (CNN) was trained to predict biological sex from T1-weighted brain MRI datasets of 4547 9- to 10-year-old adolescents from the Adolescent Brain Cognitive Development study. Performance disparities of the trained CNN between White and Black subjects were analyzed, and saliency maps were generated for each subgroup at the intersection of sex and race. Results: The classification model demonstrated a significant difference in the percentage of correctly classified White male (90.3% ± 1.7%) and Black male (81.1% ± 4.5%) children. Conversely, slightly higher performance was found for Black female (89.3% ± 4.8%) compared with White female (86.5% ± 2.0%) children. Saliency maps showed subgroup-specific differences, corresponding to brain regions previously associated with pubertal development. In line with this finding, average pubertal development scores of subjects used in this study were significantly different between Black and White females (p < 0.001) and males (p < 0.001). Conclusions: We demonstrate that a CNN with significantly different sex classification performance between Black and White adolescents can identify different important brain regions when comparing subgroup saliency maps. Importance scores vary substantially between subgroups within brain structures associated with pubertal development, a race-associated confounder for predicting sex. We illustrate that unfair models can produce different XAI results between subgroups and that these results may explain potential reasons for biased performance.
KEYWORDS: Data modeling, Brain, Neuroimaging, Performance modeling, Machine learning, Data centers, Magnetic resonance imaging, Solid modeling, Medical research, Feature extraction
Limited access to medical datasets, due to regulations that protect patient data, is a major hindrance to the development of machine learning models for computer-aided diagnosis tools using medical images. Distributed learning is an alternative to training machine learning models on centrally collected data that solves data sharing issues. The main idea of distributed learning is to train models remotely at each medical center rather than collecting the data in a central database, thereby avoiding sharing data between centers and model developers. In this work, we propose a travelling model that performs distributed learning for biological brain age prediction using morphological measurements of different brain structures. We specifically investigate the impact of nonidentically distributed data between collaborators on the performance of the travelling model. Our results, based on a large dataset of 2058 magnetic resonance imaging scans, demonstrate that transferring the model weights between the centers more frequently achieves results (mean age prediction error = 5.89 years) comparable to central learning implementations (mean age prediction error = 5.93 years), which were trained using the data from all sites hosted together at a central location. Moreover, we show that our model does not suffer from catastrophic forgetting, and that data distribution is less important than the number of times that the model travels between collaborators.
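The loop below sketches the travelling-model idea: a single model is trained sequentially on each center's local data for several rounds, so only the weights travel between sites while the raw data stays put. This is an illustrative, hypothetical PyTorch setup (feature dimensions, optimizer, and round counts are assumptions), not the study's implementation.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def travelling_model_training(model, center_loaders, n_rounds=10, local_epochs=1, lr=1e-3):
    """Sequentially train one shared model on each center's local data.

    Only the model weights travel between centers; the raw data never leaves a site.
    """
    loss_fn = nn.MSELoss()   # age regression
    for _ in range(n_rounds):                     # how often the model travels around
        for loader in center_loaders:             # one center at a time
            opt = torch.optim.Adam(model.parameters(), lr=lr)
            for _ in range(local_epochs):
                for features, age in loader:
                    opt.zero_grad()
                    loss = loss_fn(model(features).squeeze(-1), age)
                    loss.backward()
                    opt.step()
    return model

# Toy setup: three "centers" with morphological feature vectors and ages (synthetic)
rng = torch.Generator().manual_seed(0)
centers = []
for _ in range(3):
    x = torch.randn(100, 68, generator=rng)            # e.g., regional volume/thickness features
    y = 50 + 10 * torch.randn(100, generator=rng)
    centers.append(DataLoader(TensorDataset(x, y), batch_size=16, shuffle=True))

model = nn.Sequential(nn.Linear(68, 32), nn.ReLU(), nn.Linear(32, 1))
travelling_model_training(model, centers, n_rounds=5)
```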
Attention deficit/hyperactivity disorder (ADHD) is characterized by symptoms of inattention, hyperactivity, and impulsivity and affects an estimated 10.2% of children and adolescents in the United States. However, correct diagnosis of the condition can be challenging, with failure rates up to 20%. Machine learning models making use of magnetic resonance imaging (MRI) have the potential to serve as a clinical decision support system to aid in the diagnosis of ADHD in youth to improve diagnostic validity. The purpose of this study was to develop and evaluate an explainable deep learning model for automatic ADHD classification. 254 T1-weighted brain MRI datasets of youth aged 9-11 were obtained from the Adolescent Brain Cognitive Development (ABCD) Study, and the Child Behaviour Checklist DSM-Oriented ADHD Scale was used to partition subjects into ADHD and non-ADHD groups. A fully convolutional neural network (CNN) adapted from a state-of-the-art adult brain age regression model was trained to distinguish between neurologically normal children and children with ADHD. Saliency voxel attribution maps were generated to identify brain regions relevant for the classification task. The proposed model achieved an accuracy of 71.1%, sensitivity of 68.4%, and specificity of 73.7%. Saliency maps highlighted the orbitofrontal cortex, entorhinal cortex, and amygdala as important regions for the classification, which is consistent with previous literature linking these regions to significant structural differences in youth with ADHD. To the best of our knowledge, this is the first study applying artificial intelligence explainability methods such as saliency maps to the classification of ADHD using a deep learning model. The proposed deep learning classification model has the potential to aid clinical diagnosis of ADHD while providing interpretable results.
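For context, a minimal gradient-based saliency (voxel attribution) sketch is shown below: the absolute gradient of the target class logit with respect to the input volume highlights voxels that influence the prediction. The tiny model and input are stand-ins, not the trained ADHD classifier or real ABCD data, and the study's exact attribution method may differ.

```python
import torch
import torch.nn as nn

def saliency_map(model: nn.Module, volume: torch.Tensor, target_class: int) -> torch.Tensor:
    """Voxel-wise saliency: absolute gradient of the target logit w.r.t. the input volume."""
    model.eval()
    x = volume.clone().requires_grad_(True)          # shape (1, 1, D, H, W)
    logits = model(x)
    logits[0, target_class].backward()
    return x.grad.abs().squeeze()                    # shape (D, H, W)

# Toy 3D classifier and input volume (illustrative only)
model = nn.Sequential(
    nn.Conv3d(1, 4, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(4, 2),
)
vol = torch.randn(1, 1, 64, 64, 64)
sal = saliency_map(model, vol, target_class=1)
print(sal.shape)   # torch.Size([64, 64, 64])
```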
Depending on the application, multiple imaging modalities are available for diagnosis in the clinical routine. As a result of this, repositories of patient scans often contain mixed modalities. This poses a challenge for image analysis methods, which require special modifications to work with multiple modalities. This is especially critical for deep learning-based methods, which require large amounts of data. Within this context, a typical example is follow-up imaging in acute ischemic stroke patients, which is an important step in determining potential complications from the evolution of a lesion. In this study, we addressed the mixed modalities issue by translating unpaired images between two of the most relevant follow-up stroke modalities, namely non-contrast computed tomography (NCCT) and fluid-attenuated inversion recovery (FLAIR) MRI. For the translation, we use the widely used cycle-consistent generative adversarial network (CycleGAN). To preserve stroke lesions after translation, we implemented and tested two modifications to the training: (1) we use manual segmentations of the stroke lesions as an attention channel when training the discriminator networks, and (2) we use an additional gradient-consistency loss to preserve the structural morphology. For the evaluation of the proposed method, 238 NCCT and 244 FLAIR scans from acute ischemic stroke patients were available. Our method showed a considerable improvement over the original CycleGAN. More precisely, it is capable of translating images between NCCT and FLAIR while preserving the stroke lesion’s shape, location, and modality-specific intensity (average Kullback-Leibler divergence improved from 2,365 to 396). Our proposed method has the potential to increase the amount of available data used for existing and future applications while conserving original patient features and ground truth labels.
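The sketch below shows one simple variant of a gradient-consistency term: an L1 penalty between the finite-difference spatial gradients of the input and the translated image, which encourages structural morphology to be preserved. The published loss may be formulated differently (e.g., using gradient correlation); this PyTorch snippet is illustrative only.

```python
import torch
import torch.nn.functional as F

def image_gradients(img: torch.Tensor):
    """Finite-difference gradients along x and y for a batch of 2D images (B, C, H, W)."""
    dx = img[..., :, 1:] - img[..., :, :-1]
    dy = img[..., 1:, :] - img[..., :-1, :]
    return dx, dy

def gradient_consistency_loss(real: torch.Tensor, translated: torch.Tensor) -> torch.Tensor:
    """Encourage the translated image to keep the structural morphology of the input."""
    rdx, rdy = image_gradients(real)
    tdx, tdy = image_gradients(translated)
    return F.l1_loss(tdx, rdx) + F.l1_loss(tdy, rdy)

# Illustrative usage inside a CycleGAN-style training step
real_ncct = torch.rand(2, 1, 128, 128)
fake_flair = torch.rand(2, 1, 128, 128)     # output of the NCCT-to-FLAIR generator
loss_gc = gradient_consistency_loss(real_ncct, fake_flair)
print(float(loss_gc))
```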
The efficacy of stroke treatments is highly time-sensitive, and any computer-aided diagnosis support method that can accelerate diagnosis and treatment initiation may improve patient outcomes. Within this context, lesion identification in MRI datasets can be time consuming and challenging, even for trained clinicians. Automatic lesion localization can expedite diagnosis by flagging datasets and corresponding regions of interest for further assessment. In this work, we propose a deep reinforcement learning agent to localize acute ischemic stroke lesions in MRI images. Therefore, we adapt novel techniques from the computer vision domain to medical image analysis, allowing the agent to sequentially localize multiple lesions in a single dataset. The proposed method was developed and evaluated using a database consisting of fluid attenuated inversion recovery (FLAIR) MRI datasets from 466 ischemic stroke patients acquired at multiple centers. 372 patients were used for training while 94 patients (20% of available data) were employed for testing. Furthermore, the model was tested using 58 datasets from an out-of-distribution test set to investigate the generalization error in more detail. The model achieved a Dice score of 0.45 on the hold-out test set and 0.43 on images from the out-of-distribution test set. In conclusion, we apply deep reinforcement learning to the clinically well-motivated task of localizing multiple ischemic stroke lesions in MRI images, and achieve promising results validated on a large and heterogeneous collection of datasets.
Deep learning in medical imaging typically requires sensitive and confidential patient data for model training. Recent research in computer vision has shown that it is possible to recover training data from trained models using model inversion techniques. In this paper, we investigate the degree to which encoder-decoder-like architectures (e.g., U-Nets) commonly used in medical imaging are vulnerable to simple model inversion attacks. Utilizing a database consisting of 20 MRI datasets from acute ischemic stroke patients, we trained an autoencoder model for image reconstruction and a U-Net model for lesion segmentation. In the second step, model inversion decoders were developed and trained to reconstruct the original MRIs from the low dimensional representation of the trained autoencoder and the U-Net model. The inversion decoders were trained using 24 independent MRI datasets of acute stroke patients not used for training of the original models. Skull-stripped as well as the full original datasets including the skull and other non-brain tissues were used for model training and evaluation. The results show that the trained inversion decoder can be used to reconstruct training datasets after skull stripping given the latent space of the autoencoder trained for image reconstruction (mean correlation coefficient = 0.49), while it was not possible to fully reconstruct the original image used for training of a segmentation-task U-Net (mean correlation coefficient = 0.18). These results are further supported by the structural similarity index measure (SSIM) scores, which show a mean SSIM score of 0.51 ± 0.14 for the autoencoder trained for image reconstruction, while the average SSIM score for the U-Net trained for the lesion segmentation task was 0.28 ± 0.12. The same experiments were then conducted on the same images but without skull stripping. In this case, the U-Net trained for segmentation shows significantly worse results, while the autoencoder trained for image reconstruction is not affected. Our results suggest that an autoencoder model trained for image compression can be inverted with high accuracy while this is much harder to achieve for a U-Net trained for lesion segmentation.
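The general attack pattern can be sketched as follows: the attacked model's encoder is frozen, and an inversion decoder is trained on an independent dataset to map latent codes back to image space. The 2D architecture, dimensions, and training details below are hypothetical stand-ins, not the models used in the paper.

```python
import torch
import torch.nn as nn

class InversionDecoder(nn.Module):
    """Maps a frozen model's latent code back to image space (illustrative 2D version)."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 8 * 8)
        self.up = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.up(self.fc(z).view(-1, 64, 8, 8))

# Training step sketch: the target encoder is frozen, only the inversion decoder learns.
target_encoder = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 128)).eval()
for p in target_encoder.parameters():
    p.requires_grad_(False)

decoder = InversionDecoder(latent_dim=128)
opt = torch.optim.Adam(decoder.parameters(), lr=1e-3)
images = torch.rand(8, 1, 64, 64)                  # attacker's independent dataset (toy)
z = target_encoder(images)                         # latent codes from the attacked model
recon = decoder(z)
loss = nn.functional.mse_loss(recon, images)
opt.zero_grad(); loss.backward(); opt.step()
```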
Stroke is a leading cause of death and disability in the western hemisphere. Acute ischemic strokes can be broadly classified based on the underlying cause into atherosclerotic strokes, cardioembolic strokes, small vessel disease, and stroke with other causes. The ability to determine the exact origin of an acute ischemic stroke is highly relevant for optimal treatment decisions and preventing recurrent events. However, the differentiation of atherosclerotic and cardioembolic phenotypes can be especially challenging due to similar appearance and symptoms. The aim of this study was to develop and evaluate the feasibility of an image-based machine learning approach for discriminating between atherosclerotic and cardioembolic acute ischemic strokes using 56 apparent diffusion coefficient (ADC) datasets from acute stroke patients. For this purpose, acute infarct lesions were semi-automatically segmented and 30,981 geometric and texture image features were extracted for each stroke volume. To improve the performance and accuracy, a categorical Pearson’s χ2 test was used to select the most informative features while removing redundant attributes. As a result, only 289 features were finally included for training of a deep multilayer feed-forward neural network without bootstrapping. The proposed method was evaluated using a leave-one-out cross validation scheme. The proposed classification method achieved an average area under the receiver operating characteristic curve of 0.93 and a classification accuracy of 94.64%. These first results suggest that the proposed image-based classification framework can support neurologists in clinical routine differentiating between atherosclerotic and cardioembolic phenotypes.
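The scikit-learn sketch below illustrates the overall pattern of chi-square-based feature selection followed by a feed-forward classifier, evaluated with leave-one-out cross-validation. The feature counts, network size, and data are illustrative assumptions (the study selected 289 features from radiomics-style descriptors), not a reproduction of the published method.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.preprocessing import MinMaxScaler
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import LeaveOneOut, cross_val_score

# Toy stand-ins for the lesion feature matrix (56 patients x many features)
rng = np.random.default_rng(0)
X = rng.normal(size=(56, 1000))            # geometric / texture features per lesion
y = rng.integers(0, 2, size=56)            # 0 = atherosclerotic, 1 = cardioembolic

pipeline = make_pipeline(
    MinMaxScaler(),                         # chi2 scoring requires non-negative inputs
    SelectKBest(chi2, k=50),                # keep the most informative features
    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0),
)

scores = cross_val_score(pipeline, X, y, cv=LeaveOneOut())
print(f"leave-one-out accuracy: {scores.mean():.3f}")
```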
Parkinsonian syndromes encompass a spectrum of neurodegenerative diseases, which can be classified into various subtypes. The differentiation of these subtypes is typically conducted based on clinical criteria. Due to the overlap of intra-syndrome symptoms, the accurate differential diagnosis based on clinical guidelines remains a challenge with failure rates up to 25%. The aim of this study is to present an image-based classification method of patients with Parkinson’s disease (PD) and patients with progressive supranuclear palsy (PSP), an atypical variant of PD. Therefore, apparent diffusion coefficient (ADC) parameter maps were calculated based on diffusion-tensor magnetic resonance imaging (MRI) datasets. Mean ADC values were determined in 82 brain regions using an atlas-based approach. The extracted mean ADC values for each patient were then used as features for classification using a linear kernel support vector machine classifier. To increase the classification accuracy, a feature selection was performed, which resulted in the top 17 attributes to be used as the final input features. A leave-one-out cross validation based on 56 PD and 21 PSP subjects revealed that the proposed method is capable of differentiating PD and PSP patients with an accuracy of 94.8%. In conclusion, the classification of PD and PSP patients based on ADC features obtained from diffusion MRI datasets is a promising new approach for the differentiation of Parkinsonian syndromes in the broader context of decision support systems.
Voxel-based tissue outcome prediction in acute ischemic stroke patients is highly relevant for both clinical routine and research. Previous research has shown that features extracted from baseline multi-parametric MRI datasets have a high predictive value and can be used for the training of classifiers, which can generate tissue outcome predictions for both intravenous and conservative treatments. However, with the recent advent and popularization of intra-arterial thrombectomy treatment, novel research specifically addressing the utility of predictive classifiers for thrombectomy intervention is necessary for a holistic understanding of current stroke treatment options. The aim of this work was to develop three clinically viable tissue outcome prediction models using approximate nearest-neighbor, generalized linear model, and random decision forest approaches and to evaluate the accuracy of predicting tissue outcome after intra-arterial treatment. Therefore, the three machine learning models were trained, evaluated, and compared using datasets of 42 acute ischemic stroke patients treated with intra-arterial thrombectomy. Classifier training utilized eight voxel-based features extracted from baseline MRI datasets and five global features. Evaluation of classifier-based predictions was performed via comparison to the known tissue outcome, which was determined in follow-up imaging, using the Dice coefficient and leave-one-patient-out cross-validation. The random decision forest prediction model led to the best tissue outcome predictions with a mean Dice coefficient of 0.37. The approximate nearest-neighbor and generalized linear model performed equally suboptimally with average Dice coefficients of 0.28 and 0.27, respectively, suggesting that both non-linearity and machine learning are desirable properties of a classifier well-suited to the intra-arterial tissue outcome prediction problem.
4D arterial spin labeling magnetic resonance angiography (4D ASL MRA) is a non-invasive and safe modality for cerebrovascular imaging procedures. It uses the patient’s magnetically labeled blood as an intrinsic contrast agent, so that no external contrast medium is required. It provides important 3D structural and blood flow information, but an accurate cerebrovascular segmentation is needed, as it can help clinicians analyze and diagnose vascular diseases faster and with higher confidence compared to simple visual rating of raw ASL MRA images. This work presents a new method for automatic cerebrovascular segmentation in 4D ASL MRA images of the brain. In this process, images are denoised, corresponding label/control image pairs of the 4D ASL MRA sequences are subtracted, and temporal intensity averaging is used to generate a static representation of the vascular system. After that, sets of vessel and background seeds are extracted and provided as input for the image foresting transform algorithm to segment the vascular system. Four 4D ASL MRA datasets of the brain arteries of healthy subjects and corresponding time-of-flight (TOF) MRA images were available for this preliminary study. For evaluation of the segmentation results of the proposed method, the cerebrovascular system was automatically segmented in the high-resolution TOF MRA images using a validated algorithm and the segmentation results were registered to the 4D ASL datasets. Corresponding segmentation pairs were compared using the Dice similarity coefficient (DSC). On average, a DSC of 0.9025 was achieved, indicating that vessels can be extracted successfully from 4D ASL MRA datasets by the proposed segmentation method.
Tissue outcome prediction in acute ischemic stroke patients is highly relevant for clinical and research purposes. It has been shown that the combined analysis of diffusion and perfusion MRI datasets using high-level machine learning techniques leads to an improved prediction of final infarction compared to single perfusion parameter thresholding. However, most high-level classifiers require prior training, and it remains unclear how many subjects are required for this, which is the focus of this work. 23 MRI datasets of acute stroke patients with known tissue outcome were used in this work. Relative values of diffusion and perfusion parameters as well as the binary tissue outcome were extracted on a voxel-by-voxel level for all patients and used for training of a random forest classifier. The number of patients used for training set definition was iteratively and randomly reduced from all 22 other patients to only one other patient. Thus, 22 tissue outcome predictions were generated for each patient using the trained random forest classifiers and compared to the known tissue outcome using the Dice coefficient. Overall, a logarithmic relation between the number of patients used for training set definition and tissue outcome prediction accuracy was found. Quantitatively, a mean Dice coefficient of 0.45 was found for the prediction using the training set consisting of the voxel information from only one other patient, which increases to 0.53 if using all other patients (n=22). Based on extrapolation, 50-100 patients appear to be a reasonable tradeoff between tissue outcome prediction accuracy and effort required for data acquisition and preparation.
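The sketch below illustrates the evaluation pattern: a Dice coefficient between predicted and known outcome masks, and a random forest trained on voxel-wise features from a varying number of other patients. All data here are synthetic and the feature set, forest size, and sampling are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice coefficient between two binary masks."""
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum() + 1e-8)

# Toy per-patient voxel features (diffusion/perfusion values) and binary outcomes
rng = np.random.default_rng(3)
patients = [(rng.normal(size=(5000, 6)), rng.integers(0, 2, 5000)) for _ in range(23)]
X_test, y_test = patients[0]

for n_train in (1, 5, 22):                           # number of other patients used for training
    X_tr = np.vstack([p[0] for p in patients[1:1 + n_train]])
    y_tr = np.concatenate([p[1] for p in patients[1:1 + n_train]])
    rf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_tr, y_tr)
    pred = rf.predict(X_test).astype(bool)
    print(n_train, round(dice(pred, y_test.astype(bool)), 3))
```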
Acute ischemic stroke is a leading cause of death and disability in industrialized nations. In case of an acute ischemic stroke, prediction of the future tissue outcome is of high interest for clinicians, as it can support therapy decision-making. Within this context, it has already been shown that voxel-wise multi-parametric tissue outcome prediction leads to more promising results compared to single channel perfusion map thresholding. Most previously published multi-parametric predictions employ information from perfusion maps derived from perfusion-weighted MRI (PWI) together with other image sequences such as diffusion-weighted MRI. However, it remains unclear if the typically calculated perfusion maps used for this purpose really include all valuable information from the PWI dataset for an optimal tissue outcome prediction. To investigate this problem in more detail, two different methods to predict tissue outcome using a k-nearest-neighbor approach were developed in this work and evaluated based on 18 datasets of acute stroke patients with known tissue outcome. The first method integrates apparent diffusion coefficient and perfusion parameter (Tmax, MTT, CBV, CBF) information for the voxel-wise prediction, while the second method also employs apparent diffusion coefficient information but uses the complete perfusion information in terms of the voxel-wise residue functions instead of the perfusion parameter maps. Overall, the comparison of the results of the two prediction methods for the 18 patients using a leave-one-out cross validation revealed no considerable differences. Quantitatively, the parameter-based prediction of tissue outcome led to a mean Dice coefficient of 0.474, while the prediction using the residue functions led to a mean Dice coefficient of 0.461. Thus, it may be concluded from the results of this study that the perfusion parameter maps typically derived from PWI datasets include all valuable perfusion information required for a voxel-based tissue outcome prediction, while the complete analysis of the residue functions does not add further benefits and is also computationally more expensive.
Acute ischemic strokes are a major cause of death and severe neurologic deficits in the western hemisphere. The prediction of tissue outcome in case of an acute ischemic stroke is an important variable for treatment decisions. An estimation of the expected outcome is typically obtained by thresholding a single perfusion parameter map, which is calculated from a perfusion CT dataset. However, cerebral perfusion is complex and the severity of perfusion impairment is not consistent within the penumbra of an acute ischemic stroke. Therefore, the application of only one parameter for acute stroke tissue outcome prediction may oversimplify the given problem. The aim of this study was to develop and evaluate the feasibility of a multiparametric approach for estimating tissue outcome in acute ischemic stroke patients using 15 CT perfusion datasets. For this purpose, perfusion parameter maps of cerebral blood flow, cerebral blood volume, and mean transit time were calculated based on the concentration time curves derived from perfusion CT datasets. The parameter maps of ten patients were employed for a voxel-wise training of a support vector machine using ground-truth final infarct segmentations, whereas the remaining five patient datasets were used for evaluation of the voxel-wise prediction of tissue outcome using the trained support vector machine. Furthermore, tissue outcome was also predicted by optimal thresholding of corresponding time-to-peak (TTP) maps for comparison purposes. Both predictions were compared to ground-truth final infarct lesions for the five datasets used for evaluation. The proposed multiparametric tissue outcome prediction led to superior prediction results in all cases. More precisely, the multiparametric prediction led to a mean Dice coefficient of 0.556, while optimal thresholding of TTP maps led to an average Dice coefficient of 0.444 compared to the ground-truth infarct lesions. In conclusion, the evaluation results of the proposed method suggest that a multiparametric tissue outcome prediction may be feasible for CT perfusion datasets but needs to be evaluated in more detail.
Acute stroke is a major cause of death and disability among adults in the western hemisphere. Time-resolved perfusion-weighted (PWI) and diffusion-weighted (DWI) MR datasets are typically used for the estimation of tissue-at-risk, which is an important variable for acute stroke therapy decision-making. Although several parameters that can be estimated based on PWI concentration curves have been proposed for tissue-at-risk definition in the past, the time-to-peak (TTP) or time-to-max (Tmax) parameter is used most frequently in recent trials. Unfortunately, there is no clear consensus on which method should be used for estimation of Tmax or TTP maps. Consequently, tissue-at-risk estimations and the following treatment decisions might vary considerably with the method used. In this work, 5 PWI datasets of acute stroke patients were used to calculate TTP and Tmax maps using 10 different estimation techniques. The resulting maps were segmented using a typical threshold of +4 s and the corresponding PWI lesions were calculated. The first results suggest that the TTP or Tmax method used has a major impact on the resulting tissue-at-risk volume. Numerically, the calculated volumes differed by up to a factor of 3. In general, the deconvolution-based Tmax techniques estimate the ischemic penumbra to be smaller than direct TTP-based techniques do. In conclusion, the comparison of different methods for TTP or Tmax estimation revealed high variations in the resulting tissue-at-risk volume, which might lead to different therapy decisions. Therefore, a consensus on how TTP and Tmax maps should be calculated seems necessary.
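The final thresholding step can be illustrated as follows: a TTP/Tmax delay map is thresholded (e.g., at a delay greater than 4 s) within a brain mask, and the lesion volume follows from the voxel count. The threshold, voxel size, and synthetic map below are illustrative assumptions, not the study's processing.

```python
import numpy as np

def tissue_at_risk_volume(delay_map: np.ndarray, brain_mask: np.ndarray,
                          threshold_s: float = 4.0, voxel_volume_ml: float = 0.008) -> float:
    """Volume (ml) of voxels whose TTP/Tmax delay exceeds the threshold inside the brain mask."""
    lesion = (delay_map > threshold_s) & brain_mask
    return lesion.sum() * voxel_volume_ml

# Toy delay map: different estimation techniques would yield different maps and thus volumes
rng = np.random.default_rng(7)
delay = rng.exponential(scale=2.0, size=(64, 64, 20))
mask = np.ones_like(delay, dtype=bool)
print(f"{tissue_at_risk_volume(delay, mask):.1f} ml")
```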
Exact cerebrovascular segmentations based on high-resolution 3D anatomical datasets are required for many clinical applications. A general problem of most vessel segmentation methods is the insufficient delineation of small vessels, which are often represented by rather low intensities and high surface curvatures. This paper describes an improved direction-dependent level set approach for cerebrovascular segmentation. The proposed method utilizes the direction information of the eigenvectors computed by vesselness filters to adjust the weights of the internal energy depending on the location. The basic idea is to weight the internal energy lower where the gradient of the level set is aligned with the direction of the eigenvector extracted by the vesselness filter. A quantitative evaluation of the proposed method based on three clinical Time-of-Flight MRA datasets with available manual segmentations using the Tanimoto coefficient showed that a mean improvement of 0.081 compared to the initial segmentation is achieved, while the corresponding level set segmentation without integration of direction information does not lead to satisfying results. In summary, the proposed method enables an improved delineation of small vessels, especially of those represented by low intensities and high surface curvatures.
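One plausible way to express such a location-dependent weight is sketched below: the internal-energy weight is reduced where the normalized level-set gradient is (anti-)parallel to the vesselness filter's principal eigenvector. This functional form is an assumption for illustration; the published method may use a different weighting.

```python
import numpy as np

def internal_energy_weight(grad_phi: np.ndarray, vessel_dir: np.ndarray,
                           w0: float = 1.0, eps: float = 1e-8) -> np.ndarray:
    """Location-dependent internal-energy weight (illustrative form).

    grad_phi:   level-set gradient field, shape (..., 3)
    vessel_dir: vesselness-filter eigenvector field, shape (..., 3)
    The weight drops toward zero where the two directions align, so the smoothing
    term is relaxed along thin vessels.
    """
    g = grad_phi / (np.linalg.norm(grad_phi, axis=-1, keepdims=True) + eps)
    v = vessel_dir / (np.linalg.norm(vessel_dir, axis=-1, keepdims=True) + eps)
    alignment = np.abs(np.sum(g * v, axis=-1))       # |cos(angle)| in [0, 1]
    return w0 * (1.0 - alignment)
```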
Exact segmentations of the cerebrovascular system are the basis for several medical applications, such as preoperative planning, postoperative monitoring, and medical research. Several automatic methods for the extraction of the vascular system have been proposed. These automatic approaches suffer from several problems. One of the major problems is interruptions in the vascular segmentation, especially in case of small vessels represented by low intensities. These breaks are problematic for the outcome of several applications, e.g., FEM simulations and quantitative vessel analysis. In this paper, we propose an automatic post-processing method to connect broken vessel segmentations. The proposed approach consists of four steps. Based on an existing vessel segmentation, the 3D skeleton is computed first and used to detect the dead ends of the segmentation. In a following step, possible connections between these dead ends are computed using a graph-based approach operating on the vesselness parameter image. After a consistency check is performed, the detected paths are used to obtain the final segmentation using a level set approach. The proposed method was validated using a synthetic dataset as well as two clinical datasets. The evaluation of the results based on two Time-of-Flight MRA datasets showed that, on average, 45 connections between dead ends were found per dataset. A quantitative comparison with semi-automatic segmentations by medical experts using the Dice coefficient revealed that a mean improvement of 0.0229 per dataset was achieved. In summary, the presented approach can considerably improve the accuracy of vascular segmentations needed for subsequent analysis steps.
KEYWORDS: Visualization, 3D modeling, Hemodynamics, 3D image processing, Cerebral blood flow, Image visualization, Temporal resolution, 3D visualizations, Magnetic resonance imaging, Data modeling
In this paper, we present a method for the dynamic visualization of cerebral blood flow. Spatio-temporal 4D magnetic resonance angiography (MRA) image datasets and 3D MRA datasets with high spatial resolution were acquired for the analysis of arteriovenous malformations (AVMs). One of the main tasks is the combination of the information of the 3D and 4D MRA image sequences. Initially, the vessel system is segmented in the 3D MRA dataset and a 3D surface model is generated. Then, temporal intensity curves are analyzed voxel-wise in the 4D MRA image sequences. A curve fitting of the temporal intensity curves to a patient-individual reference curve is used to extract the bolus arrival times in the 4D MRA sequences. After non-linear registration of both MRA datasets, the extracted hemodynamic information is transferred to the surface model, where the time points of inflow can be visualized color-coded dynamically over time. The dynamic visualizations computed using the curve fitting method for the estimation of the bolus arrival times were rated superior compared to those computed using conventional approaches for bolus arrival time estimation. In summary, the suggested procedure allows a dynamic visualization of the individual hemodynamic situation and a better understanding during the visual evaluation of cerebral vascular diseases.
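A minimal sketch of the curve-fitting idea is shown below: a time-shifted, scaled version of a reference curve is fitted to each voxel's temporal intensity curve, and the fitted shift serves as the bolus arrival delay. The reference curve, fit parameters, and synthetic data are illustrative assumptions, not the published formulation.

```python
import numpy as np
from scipy.optimize import curve_fit

def estimate_bolus_arrival(time_s: np.ndarray, voxel_curve: np.ndarray,
                           reference_curve: np.ndarray) -> float:
    """Fit a shifted and scaled reference curve to the voxel curve; return the shift in seconds."""
    def shifted_reference(t, shift, scale):
        return scale * np.interp(t - shift, time_s, reference_curve, left=0.0, right=0.0)

    (shift, _scale), _ = curve_fit(shifted_reference, time_s, voxel_curve,
                                   p0=[0.0, 1.0], maxfev=5000)
    return shift

# Toy example: a voxel whose inflow lags the reference by about 1.2 s
t = np.linspace(0, 20, 100)
reference = np.exp(-0.5 * ((t - 6.0) / 1.5) ** 2)         # idealized reference bolus curve
voxel = 0.8 * np.exp(-0.5 * ((t - 7.2) / 1.5) ** 2)
print(f"estimated arrival delay: {estimate_bolus_arrival(t, voxel, reference):.2f} s")
```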