KEYWORDS: Bone, Education and training, Image segmentation, Data modeling, 3D modeling, Performance modeling, Head, Computed tomography, Anatomy, Deep learning
The identification of pathologies in CT Angiography (CTA) is a laborious process. Even with advanced post-processing techniques such as Maximum Intensity Projection (MIP) and Volume Rendering (VR), analysis of the head and neck vasculature remains challenging due to interference from the surrounding osseous anatomy. To address these issues, we introduce an Artificial Intelligence (AI) reconstruction system built on a 3D convolutional neural network trained to automate CTA reconstruction in healthcare services. In this study, we demonstrate a Deep Learning (DL) based solution for automatically segmenting skeletal structures, calcified plaque, and arterial vessels in CTA images. The DL segmentation models developed here perform accurately across different anatomies, scans, and reconstruction settings, and allow superior visualization of vascular anatomy and pathology compared to conventional techniques. The models achieved a mean Dice score of 0.985 for bone structures on an independent validation dataset kept separate during training, reflecting their robustness and potential for reliable application in real-world settings.
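The Dice score reported above is a standard overlap metric between a predicted and a reference segmentation mask. A minimal sketch in NumPy (the `dice_score` helper is illustrative, not from the original work):

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient for binary segmentation masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    # 2|A ∩ B| / (|A| + |B|); eps guards against two empty masks
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)

mask = np.array([[0, 1], [1, 1]])
print(round(dice_score(mask, mask), 3))  # 1.0 for perfect overlap
```

A mean Dice near 0.985 thus indicates near-complete voxel-level agreement with the reference bone masks.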
Bone skeleton segmentation is a fundamental step in medical image analysis applications such as computer-aided orthopedic surgery, fracture detection, and the detection and diagnosis of bone pathology and degenerative diseases. The extraction of bones from CT scans is a challenging task, and doing it manually is a time-consuming process for experts. In this work, a deep learning (DL) based solution for automatic segmentation of skeletal structures in conventional CT images is presented. To create a diverse, high-quality training dataset, an iterative data annotation process is used. A small training dataset is first created with human annotation effort and used to train a segmentation model. The model's predictions then initialize the ground truths for new cases; these are reviewed and edited as necessary by the human annotators and added to the training dataset. The process is repeated until model performance no longer improves on a held-out validation dataset. Within a few iterations, model generalization and prediction performance are observed to improve as a function of training dataset size and variety, while the human effort required for labeling drops significantly with every iteration. The final DL segmentation models perform well across anatomy, scan, and reconstruction settings and achieve a mean Dice score of 0.988 on a held-out, independent validation dataset.
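The iterative (model-in-the-loop) annotation process described above can be sketched as a simple loop. All helper callables (`train`, `predict`, `human_review`, `evaluate`) are hypothetical stand-ins, not the authors' implementation:

```python
def iterative_annotation(seed_cases, unlabeled_pool, val_set,
                         train, predict, human_review, evaluate,
                         batch_size=20, patience=1):
    """Grow a training set by correcting model predictions each round."""
    labeled = list(seed_cases)          # small hand-annotated seed set
    best_score, stale = -1.0, 0
    model = train(labeled)
    while unlabeled_pool and stale <= patience:
        batch = [unlabeled_pool.pop()
                 for _ in range(min(batch_size, len(unlabeled_pool)))]
        # Model inference initializes the ground truth for new cases;
        # annotators only correct it, which cuts effort per iteration.
        drafts = [(case, predict(model, case)) for case in batch]
        labeled += [human_review(case, draft) for case, draft in drafts]
        model = train(labeled)
        score = evaluate(model, val_set)  # held-out validation metric
        if score > best_score:
            best_score, stale = score, 0
        else:
            stale += 1                    # stop once performance plateaus
    return model, labeled
```

The stopping rule mirrors the abstract: iteration ends when the held-out validation score no longer improves.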
For any deep learning (DL) based task, model generalization and prediction performance improve as a function of training dataset size and variety. However, applying DL to medical imaging remains challenging because of the limited availability of high-quality, sufficiently diverse annotated data. Data augmentation techniques can improve model performance when the available dataset is small. Anatomy region localization from medical images can be automated with deep learning and is important for tasks such as organ segmentation and lesion detection. Here, different data augmentation methods were compared for DL-based anatomy region localization in computed tomography images, and the impact of different neural network architectures was also explored. With optimal selection of data augmentation and architecture, prediction accuracy on an independent test set improved from 88% to 97% while using the same training dataset. Augmentation steps such as zoom, translation, and flips had an incremental positive effect on classifier performance, whereas samplewise mean shift appeared to degrade it. Global average pooling improved classifier accuracy compared to a fully-connected layer when limited data augmentation was used, and all model architectures converged to optimal performance with the right combination of augmentation steps. Prediction inaccuracies were mostly observed in the boundary regions between anatomies. The networks also successfully localized anatomy in Positron Emission Tomography studies, reaching an accuracy of up to 97%, with a similar impact of data augmentation and pooling-layer choice observed.
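The augmentation and pooling steps compared above can be sketched in NumPy. Parameter ranges here are illustrative assumptions, not the study's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_flip(img):
    """Horizontal flip with probability 0.5."""
    return img[:, ::-1] if rng.random() < 0.5 else img

def random_translate(img, max_shift=4):
    """Shift the image a few pixels along each axis (wrap-around via roll)."""
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
    return np.roll(img, (dy, dx), axis=(0, 1))

def samplewise_mean_shift(img):
    """Subtract the per-sample mean -- the step reported above to degrade accuracy."""
    return img - img.mean()

def global_average_pool(feature_map):
    """Collapse spatial dimensions: (H, W, C) feature map -> (C,) vector."""
    return feature_map.mean(axis=(0, 1))
```

Global average pooling replaces a large fully-connected layer with a single per-channel mean, which reduces parameters and, per the abstract, helped when augmentation was limited.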
Calibrated detector response is crucial to good image quality in diagnostic CT and in imaging systems in general. Defects during manufacturing, component failures, and system aging can introduce shifts in detector response which, if left uncorrected, can lead to image artifacts. Such artifacts reduce image quality and can cause misdiagnosis in clinical practice. In this work, a deep learning (DL) based artifact detection method is developed to automatically screen for common detector-induced artifacts such as rings, streaks, and bands in images. To circumvent the difficulty of obtaining and annotating artifact images, a diagnostic CT physics simulator is used to generate CT images across a range of acquisition and reconstruction settings; artifacts are introduced into the projection view data by perturbing the detector gain relative to the gain-normalization scan during simulation. The artifact images and the corresponding ground-truth segmentations of artifact type and location serve as the training dataset. The squared hinge (L2-SVM) loss was used during training, as early experiments showed small but consistent improvements over the more commonly used cross-entropy loss for segmentation. The trained network achieved ~97%, ~86%, and ~93% independent test accuracy for ring, streak, and band artifacts, respectively. Since deep learning methods learn by example, the detection method is not limited to the imaging scenarios presented here and can be extended to other applications.
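The squared hinge (L2-SVM) loss mentioned above penalizes margin violations quadratically and, unlike cross-entropy, applies no penalty once the margin exceeds 1. A minimal sketch assuming labels in {-1, +1} and raw network scores (the per-pixel segmentation wiring is omitted):

```python
import numpy as np

def squared_hinge_loss(scores, labels):
    """L2-SVM (squared hinge) loss for labels in {-1, +1} and raw scores."""
    margins = 1.0 - labels * scores
    # Zero loss beyond the margin; quadratic penalty inside or on the wrong side
    return np.mean(np.maximum(0.0, margins) ** 2)

# A confidently correct prediction contributes zero loss:
print(squared_hinge_loss(np.array([2.0]), np.array([1.0])))   # 0.0
# A wrong prediction at score -1 contributes (1 - (-1))^2 = 4:
print(squared_hinge_loss(np.array([-1.0]), np.array([1.0])))  # 4.0
```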
A fast, GPU-accelerated Monte Carlo engine for simulating the relevant photon interaction processes over the diagnostic energy range in third-generation CT systems was developed to study the relative contributions of bowtie and object scatter to the total scatter reaching an imaging detector.

Primary and scattered projections for an elliptical water phantom (major axis set to 300 mm) with muscle and fat inserts were simulated for a typical diagnostic CT system as a function of anti-scatter grid (ASG) configuration. The ASG design space explored grid orientation, i.e. septa either a) parallel or b) parallel and perpendicular to the axis of rotation, as well as septa height; the septa material was tungsten. The resulting projections were reconstructed, and the scatter-induced image degradation was quantified using common CT image metrics (such as Hounsfield Unit (HU) inaccuracy and loss of contrast), along with a qualitative review of image artifacts.

Results indicate that object scatter dominates total scatter in the detector channels under the shadow of the imaged object, with the bowtie scatter fraction progressively increasing towards the edges of the object projection. Object scatter was shown to be the driving factor behind HU inaccuracy and contrast reduction in the simulated images, while shading artifacts and an elevated loss in HU accuracy at the object boundary were largely attributed to bowtie scatter. Because the impact of bowtie scatter could not be sufficiently reduced even with a large grid-ratio ASG, algorithmic correction may be necessary to mitigate these artifacts.
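The image metrics used above can be sketched as simple region-of-interest statistics (helper names are hypothetical, not the simulation's actual analysis code):

```python
import numpy as np

def hu_inaccuracy(image, roi_mask, hu_true):
    """Mean HU error inside a region of known material value."""
    return float(image[roi_mask].mean() - hu_true)

def contrast(image, insert_mask, background_mask):
    """HU contrast between an insert (e.g., muscle or fat) and the background."""
    return float(image[insert_mask].mean() - image[background_mask].mean())
```

Scatter biases the measured HU downward or upward depending on location, so both metrics degrade as the scatter fraction rises.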
Scatter is a significant source of image artifacts in cone-beam CT (CBCT), and considerable effort has been devoted to measuring its magnitude and influence. Scatter management includes both rejection and correction approaches, with anti-scatter grids (ASGs) commonly employed as a rejection strategy. This work employs a Geant4-driven Monte Carlo model to investigate the impact of different ASG designs on scatter rejection performance across a range of scanner coverage along the patient axis. Scatter rejection is quantified in terms of the scatter-to-primary ratio (SPR). One-dimensional (1D) ASGs (grid septa running parallel to the patient axis) are compared across a range of septa heights, septa widths, and septa materials. Results indicate that, for a given septa width and patient coverage, SPR decreases with septa height but shows diminishing returns at larger heights. For shorter septa, higher-Z materials (e.g., tungsten) exhibit superior scatter rejection to relatively lower-Z materials (e.g., molybdenum); for taller septa, the material difference is less significant. SPR has a relatively weak dependence on septa width, with thicker septa giving lower SPR values at a given scanner coverage. These results are intended to serve as a guide for designing post-patient collimation for whole-body CT scanners. Since taller grids made of high-Z materials carry significant manufacturing cost, optimal ASG designs must be evaluated to minimize material and machining costs while meeting scatter rejection specifications at a given patient coverage.
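The SPR figure of merit is the per-channel ratio of scattered to primary signal; a minimal sketch (the tally values below are illustrative, not simulation results):

```python
import numpy as np

def scatter_to_primary_ratio(scatter, primary, eps=1e-12):
    """Per-channel scatter-to-primary ratio from Monte Carlo tallies."""
    return scatter / (primary + eps)

# Illustrative tallies: an effective ASG lowers SPR by rejecting scatter
# while transmitting most of the primary beam.
primary = np.array([100.0, 80.0, 60.0])
no_grid = scatter_to_primary_ratio(np.array([50.0, 40.0, 30.0]), primary)
with_grid = scatter_to_primary_ratio(np.array([10.0, 8.0, 6.0]), primary)
print(no_grid.round(2))    # [0.5 0.5 0.5]
print(with_grid.round(2))  # [0.1 0.1 0.1]
```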
The design, initial imaging performance, and model-based optimization of a dedicated cone-beam CT (CBCT) scanner for musculoskeletal extremities is presented. The system offers a compact scanner that complements conventional CT and MR by providing sub-mm isotropic spatial resolution, the ability to image weight-bearing extremities, and the capability for integrated real-time fluoroscopy and digital radiography. The scanner employs a flat-panel detector and a fixed-anode x-ray source and has a field of view of ~(20 × 20 × 20) cm³. The gantry allows a "standing" configuration for imaging of weight-bearing lower extremities and a "sitting" configuration for imaging of upper extremities and unloaded lower extremities. Cascaded systems analysis guided the selection of x-ray technique (e.g., kVp, filtration, and dose) and system design (e.g., magnification factor), yielding input-quantum-limited performance at a detector signal of 100 times the electronic noise, while maintaining patient dose below 5 mGy (a factor of ~2-3 less than conventional CT). A magnification of 1.3 optimized the tradeoff between source and detector blur for a 0.5 mm focal spot. A custom anti-scatter grid demonstrated a significant reduction of artifacts without loss of contrast-to-noise ratio or increase in dose. Image quality in cadaveric specimens was assessed on a CBCT bench, demonstrating exquisite bone detail, visualization of intra-articular morphology, and soft-tissue visibility approaching that of diagnostic CT. The capability to image loaded extremities and to conduct multi-modality CBCT/fluoroscopy with improved workflow compared to whole-body CT could be of value in a broad spectrum of applications, including orthopaedics, rheumatology, surgical planning, and treatment assessment. A clinical prototype has been constructed for deployment in pilot clinical studies.
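The source/detector blur tradeoff behind the magnification choice can be illustrated with the textbook geometric-blur model (a sketch only, not the paper's cascaded systems analysis; the detector aperture value is an assumption):

```python
import math

def blur_at_object(focal_spot_mm, detector_aperture_mm, magnification):
    """Quadrature sum of focal-spot and detector blur, referred to the object plane.

    Source blur scales as f * (M - 1) / M and grows with magnification;
    detector blur scales as a / M and shrinks with it, so some
    intermediate M minimizes the total.
    """
    src = focal_spot_mm * (magnification - 1.0) / magnification
    det = detector_aperture_mm / magnification
    return math.hypot(src, det)

# e.g., total blur at the design point, for a 0.5 mm focal spot and an
# assumed 0.4 mm detector aperture:
blur_at_object(0.5, 0.4, 1.3)
```

The exact optimum depends on the focal-spot and detector MTFs, which is why the authors used cascaded systems analysis rather than this simple quadrature model.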