Purpose: Existing maximum-likelihood (ML) methods in computed tomography usually require significant computing resources to implement, and/or are limited to particular measurement noise models representative of the simplest theoretical archetypes. There is an absence of general procedures for producing rapid ML methods that account precisely for the noise model of a given experiment. We investigate a mathematical-computational procedure for producing constrained quadratic optimization reconstruction algorithms that fill this niche, requiring fewer computing resources than the exact (expectation-maximization) procedures and offering performance comparable to least-squares iterative methods. This makes high-fidelity reconstruction practically achievable for largely arbitrary noise models.
Approach: We identify a systematic mathematical procedure for producing constrained quadratic optimization methods that maximize tomogram likelihood under arbitrary noise models, tunable to the specific characteristics of the experiment. This procedure is applied to a general theory of mixed Poisson–Gaussian noise in transmission tomography, and to a theory of invertible linear transformations of measurement intensity subject to Poisson noise. We perform tomographic reconstructions of a very highly attenuating two-dimensional object phantom and compare the speed and fidelity of reconstruction with alternative quadratic metrics (ℓ2 minimization, among others).
Results: Quantitative metrics reveal that reconstructions under our systematically produced quadratic methods achieved significantly greater reconstruction fidelity, with less computation, than conventional untuned quadratic metrics optimized under a comparable procedure.
Conclusion: Constrained quadratic optimization methods appear to apply sufficiently good approximations to achieve a high reconstruction fidelity with a simple quadratic metric amenable to a broad class of minimization methods. These preliminary simulation-based results are very promising and suggest that such methods may be used to produce high-fidelity reconstructions with less computation than many other statistical methods. By design, these quadratic methods are also explicit and quantitative in their description, allowing fine-tuning according to the specific uncertainties and noise model of the experiment. Further research is required to ascertain the full practical potential of these methods.
We introduce a method to extract density information from an x-ray computed tomography (XCT) volume that is more accurate than simply assuming density is proportional to CT number. XCT is a versatile tool for analysis; however, for lab-based XCT machines that employ polychromatic x-rays, it is difficult to extract anything more than the crudest quantitative data from the sample. Reconstructed tomogram values are, in theory, the x-ray attenuation coefficients of the material. However, due to the polychromatic nature of the beam, and effects such as beam hardening, such an interpretation of real data is rarely feasible. The Alvarez-Macovski (AM) equation, which is used in quantitative XCT reconstruction algorithms, provides a model of x-ray attenuation. We use the AM equation to extract quantitative information from conventionally reconstructed tomograms, provided they are not too severely affected by beam-hardening artefacts. In essence, we assume that the tomogram values are proportional to the attenuation coefficients of the AM equation at a mean x-ray energy. Then, given a calibration scan that contains enough materials, we can solve the AM equation for the unknown coefficients and exponents. We then apply it to tomograms of objects with similar shape and material composition. The quantitative data extracted thus provide a more accurate estimate of both per-material density and bulk density.
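The calibration step can be sketched in code. The snippet below is an illustrative simplification, not the authors' implementation: it assumes a reduced single-mean-energy AM form t ≈ c1·ρ·Zⁿ + c2·ρ (photoelectric plus Compton-like terms), and the function name and grid-search strategy are our own choices for the example.

```python
import numpy as np

def fit_am_calibration(tomo_vals, rho, Z, n_grid=np.linspace(2.5, 4.5, 201)):
    """Fit tomogram values t ~ c1*rho*Z**n + c2*rho (a simplified
    single-energy Alvarez-Macovski form).

    For each candidate exponent n the coefficients c1, c2 follow from
    linear least squares; the n with the smallest residual norm wins.
    """
    t = np.asarray(tomo_vals, float)
    rho = np.asarray(rho, float)
    Z = np.asarray(Z, float)
    best = None
    for n in n_grid:
        A = np.column_stack([rho * Z**n, rho])      # design matrix
        coef = np.linalg.lstsq(A, t, rcond=None)[0]
        r = np.linalg.norm(A @ coef - t)
        if best is None or r < best[0]:
            best = (r, n, coef)
    _, n, (c1, c2) = best
    return n, c1, c2
```

Given calibration materials of known density and effective atomic number, the fitted (n, c1, c2) can then be reused on tomograms of similar objects, as the abstract describes.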
Statistical reconstruction methods in X-ray Computed Tomography (XCT) are well-regarded for their ability to produce more accurate and artefact-free reconstructed volumes in the presence of measurement noise. Maximum-likelihood methods are particularly salient and have been shown to yield superior reconstruction quality compared with methods that minimise the ℓ2 residual between measured and projected line attenuations. Least-squares more generally may refer to the minimisation of quadratic forms of the projected attenuation residuals. Early maximum-likelihood methods showed promising reconstruction capabilities but were not practical to implement due to very slow convergence, especially compared with least-squares methods. More recently, least-squares methods have been adapted to minimise quadratic approximations to the (negative) log-likelihood, thereby attaining the speed of least-squares minimisation in service of likelihood maximisation for superior reconstruction fidelity. Quadratic approximation to the log-likelihood under Poisson measurement statistics has been demonstrated several times in the literature. In this publication we describe an approach to quadratically expanding the log-likelihood under an arbitrary noise model, and demonstrate via simulation that this can be implemented practically to maximise likelihood under mixed Poisson-Gaussian models that describe a broad range of transmission XCT imaging systems.
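For the well-known pure-Poisson special case, the quadratic expansion can be made concrete. The sketch below is illustrative only (the paper's contribution is the extension to arbitrary and mixed noise models): it expands the per-ray negative log-likelihood h(l) = b·e^(−l) + y·l about its minimiser l0 = ln(b/y), where y is the measured count and b the blank-scan count; the curvature there equals y, recovering the familiar count-weighted least-squares surrogate.

```python
import numpy as np

def poisson_nll(l, y, b):
    """Negative log-likelihood (up to a constant) of Poisson counts y
    with mean b*exp(-l), as a function of the line attenuation l."""
    return b * np.exp(-l) + y * l

def quadratic_surrogate(l, y, b):
    """Second-order Taylor expansion of poisson_nll about its minimiser
    l0 = ln(b/y).  Since h''(l0) = b*exp(-l0) = y, the surrogate is the
    count-weighted quadratic 0.5*y*(l - l0)**2 plus a constant."""
    l0 = np.log(b / y)
    return poisson_nll(l0, y, b) + 0.5 * y * (l - l0) ** 2
```

Summing such per-ray quadratics over all rays yields a weighted least-squares objective amenable to fast quadratic solvers, which is the mechanism the abstract generalises beyond Poisson statistics.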
X-ray computed micro-tomography systems are able to collect data with sub-micron resolution. This high-
resolution imaging has many applications but is particularly important in the study of porous materials, where
the sub-micron structure can dictate large-scale physical properties (e.g. carbonates, shales, or human bone).
Sample preparation and mounting become difficult for these materials below 2 mm diameter: consequently,
a typical ultra-micro-CT reconstruction volume (with sub-micron resolution) will be around 3k x 3k x 10k
voxels, with some reconstructions becoming much larger. In this paper, we discuss the hardware (MPI-parallel
CPU/GPU) and software (python/C++/CUDA) tools used at the ANU CTlab to reconstruct ~186 GigaVoxel
datasets.
With GPU computing becoming mainstream, iterative tomographic reconstruction (IR) is becoming a computationally viable alternative to traditional single-shot analytical methods such as filtered back-projection. IR liberates one from the continuous X-ray source trajectories required for analytical reconstruction. We present a family of novel X-ray source trajectories for large-angle CBCT. These discrete (sparsely sampled) trajectories optimally fill the space of possible source locations by maximising the degree of mutually independent information. They satisfy a discrete equivalent of Tuy’s sufficiency condition and allow high cone-angle (high-flux) tomography. The highly isotropic nature of the trajectory has several advantages: (1) the average source distance is approximately constant throughout the reconstruction volume, thus avoiding the differential-magnification artefacts that plague high cone-angle helical computed tomography; (2) reduced streaking artefacts due to e.g. X-ray beam-hardening; (3) misalignment and component motion manifest as blur in the tomogram rather than double-edges, which is easier to correct automatically; (4) an approximately shift-invariant point-spread function, which enables filtering as a pre-conditioner to speed IR convergence. We describe these space-filling trajectories and demonstrate their above-mentioned properties compared with a traditional helical trajectory.
In the context of large-angle cone-beam tomography (CBCT), we present a practical iterative reconstruction (IR) scheme designed for rapid convergence as required for large datasets. The robustness of the reconstruction is provided by the “space-filling” source trajectory along which the experimental data is collected. The speed of convergence is achieved by leveraging the highly isotropic nature of this trajectory to design an approximate deconvolution filter that serves as a pre-conditioner in a multi-grid scheme. We demonstrate this IR scheme for CBCT and compare convergence to that of more traditional techniques.
Achieving sub-micron resolution in lab-based micro-tomography is challenging due to the geometric instability of the imaging hardware (spot drift, stage precision, sample motion). These instabilities manifest themselves as a distortion or motion of the radiographs relative to the expected system geometry. When the hardware instabilities are small (several microns of absolute motion), the radiograph distortions are well approximated by a shift and magnification of the image. In this paper we examine the use of re-projection alignment (RA) to estimate per-radiograph motions. Our simulation results evaluate how the convergence properties of RA vary with: motion-type (smooth versus random), trajectory (helical versus space-filling) and resolution. We demonstrate that RA convergence rate and accuracy, for the space-filling trajectory, are invariant with regard to the motion-type. In addition, for the space-filling trajectory, the per-projection motions can be estimated to less than 0.25 pixel mean absolute error by performing a single quarter-resolution RA iteration followed by a single half-resolution RA iteration. The direct impact is that, for the space-filling trajectory, we need only perform one RA iteration per resolution in our iterative multi-grid reconstruction (IMGR). We also give examples of the effectiveness of the RA motion-correction method applied to real double-helix and space-filling trajectory micro-CT data. For double-helix Katsevich filtered-back-projection reconstruction (≈2500x2500x5000 voxels), we use a multi-resolution RA method as a pre-processing step. For the space-filling iterative reconstruction (≈2000x2000x5400 voxels), RA is applied during the IMGR iterations.
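At its core, one re-projection alignment step registers each measured radiograph against its re-projection from the current tomogram estimate. The following is a minimal 1-D sketch of that registration using integer cross-correlation; real RA also estimates sub-pixel shift and magnification, and the function here is a hypothetical simplification.

```python
import numpy as np

def estimate_shift_1d(measured, reprojected):
    """Estimate the integer shift of a measured projection relative to
    its re-projection by cross-correlation (the core of one RA step).
    Returns the lag that best aligns `measured` with `reprojected`."""
    m = measured - measured.mean()
    r = reprojected - reprojected.mean()
    corr = np.correlate(m, r, mode="full")
    # np.correlate's 'full' output index (len(r)-1) corresponds to zero lag.
    return int(np.argmax(corr)) - (len(r) - 1)
```

In a multi-grid scheme this estimate would be computed per radiograph at coarse resolution first, which is why a quarter-resolution pass followed by a half-resolution pass can suffice.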
Trabecular bone and its micro-architecture are of prime importance for health. Changes in bone micro-architecture are linked to pathological conditions such as osteoporosis and are now beginning to be understood. In a previous paper, we started to investigate the relationships between bone and vessels, and we also proposed to build a Bone Atlas. This study describes how to proceed with the elaboration and use of such an atlas. Here, we restricted the Atlas to legs (tibia, femur) of rats in order to work with a well-known geometry of the bone micro-architecture. From only 6 acquired bones, 132 trabecular bone volumes were generated using simple mathematical morphology tools. The variety and veracity of the created micro-architecture volumes are presented in this paper. The final medical goal would be to determine bone micro-architecture from a few (3 or 4) angulated radiographs and to easily diagnose the bone status (healthy, pathological or healing bone...).
Direct study of pore-scale fluid displacements and other dynamic (i.e. time-dependent) processes is not feasible with conventional X-ray micro computed tomography (μCT). We have previously verified that a priori knowledge of the underlying physics can be used to conduct high-resolution, time-resolved imaging of continuous, complex processes at existing X-ray μCT facilities. In this paper we present a maximum a posteriori (MAP) model of the dynamic tomography problem, which allows us to easily adapt and generalise our previous dynamic μCT approach to systems with more complex underlying physics.
KEYWORDS: Radiography, Sensors, X-rays, Convolution, Signal to noise ratio, X-ray sources, 3D modeling, Data modeling, X-ray imaging, Signal attenuation
Micro-scale computed tomography (CT) can resolve many features in cellular structures, bone formations, mineral properties and composite materials not seen at lower spatial resolution. Those features enable us to build a more comprehensive model of the object of interest. CT resolution is limited by a fundamental trade-off between source size and signal-to-noise ratio (SNR) for a given acquisition time. There is a limit on the X-ray flux that can be emitted from a given source size, and fewer photons cause a lower SNR. A large source size creates penumbral blurring in the radiograph, limiting the effective spatial resolution in the reconstruction.
High cone-angle CT improves SNR by increasing the X-ray solid angle that passes through the sample. In the high cone-angle regime, current source deblurring methods break down due to incomplete modelling of the physical process. This paper presents high cone-angle source deblurring models. We implement these models using a novel multi-slice Richardson-Lucy (M-RL) and 3D conjugate gradient deconvolution on experimental high cone-angle data to improve the spatial resolution of the reconstructed volume. In M-RL, we slice the back-projection volume into subsets which can be considered to have a relatively uniform convolution kernel. We compare these results to those obtained from standard reconstruction techniques and current source deblurring methods (i.e. 2D Richardson-Lucy in the radiograph and the volume, respectively).
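For reference, the classical Richardson-Lucy update that M-RL applies slab-by-slab can be sketched in 1-D as follows. This is an illustrative toy, assuming a known, normalised, spatially uniform kernel, which is exactly the approximation that holds within each M-RL slab.

```python
import numpy as np

def richardson_lucy_1d(blurred, kernel, n_iter=50):
    """Plain 1-D Richardson-Lucy deconvolution.

    Multiplicative update: est <- est * (K^T (d / (K est))), where K is
    convolution with `kernel` and K^T is convolution with its flip.
    Positivity of the estimate is preserved by construction.
    """
    est = np.full_like(blurred, blurred.mean())
    kflip = kernel[::-1]
    for _ in range(n_iter):
        conv = np.convolve(est, kernel, mode="same")
        ratio = blurred / np.maximum(conv, 1e-12)   # guard against /0
        est = est * np.convolve(ratio, kflip, mode="same")
    return est
```

M-RL's contribution is to partition the volume so that each subset sees an approximately constant source-blur kernel, then apply an update of this form per subset.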
In this paper, we develop a dual-energy ordered-subsets convex method for transmission tomography based on material matching with a material dictionary. This reconstruction includes a constrained update forcing material characteristics of the reconstructed atomic number (Z) and density (ρ) volumes to follow a distribution according to the material database provided. We also propose a probabilistic classification technique in order to manage this material distribution. The overall process produces chemically segmented volume data and outperforms sequential labelling computed after tomographic reconstruction.
We address several acquisition questions that have arisen for the high cone-angle helical-scanning micro-CT facility developed at the Australian National University. These challenges are generally known in medical and industrial cone-beam scanners but can typically be neglected in those systems. For our large datasets, with more than 2048³ voxels, minimising the number of operations (or iterations) is crucial. Large cone-angles enable high signal-to-noise-ratio imaging and a large helical pitch to be used. This introduces two challenges: (i) non-uniform resolution throughout the reconstruction; (ii) over-scan beyond the region-of-interest significantly increases the required reconstructed volume size. Challenge (i) can be addressed by using a double helix or a lower-pitch helix, but both solutions slow down iterations. Challenge (ii) can also be improved by using a lower-pitch helix, but this results in more projections, slowing down iterations. This may be overcome using fewer projections per revolution, but that in turn requires more iterations. Here we assume a given total time for acquisition and a given reconstruction technique (SART) and seek to identify the optimal trajectory and number of projections per revolution in order to produce the best tomogram, minimise reconstruction time required, and minimise memory requirements.
We explore the use of referenceless multi-material beam hardening correction methods, with an emphasis on
maintaining data quality for real-world imaging of geologic materials with a view towards automation. In
particular, we consider cases where the sample of interest is surrounded by a container of uniform material and
propose a novel container-only pre-correction technique to allow automation of the segmentation process required
for such correction methods. The effectiveness of the new technique is demonstrated using both simulated and
experimental data.
X-ray micro computed tomography (µCT) is a method of choice for the non-destructive imaging of static 3D samples. A fundamental constraint of conventional X-ray µCT is that the sample must remain static during data acquisition. It therefore cannot be directly applied to the study of dynamic (i.e. 4D) processes such as pore-scale fluid displacements in porous materials. The process must be halted whilst data acquisition occurs, devaluing the experiment (e.g. the fluid displacement rate can no longer be studied with any confidence). Recent “proof-of-concept” studies have shown that “dynamic tomography” reconstruction algorithms, incorporating a priori knowledge of the underlying physics of the process, may be capable of true high-resolution, time-resolved 4D imaging of continuous, complex processes at existing X-ray µCT facilities. In this paper, we seek to establish: (i) that the a priori information used in dynamic tomography is appropriate, i.e. does not bias the algorithm towards incorrect results; and (ii) that the results of the dynamic tomography algorithm agree with those produced by conventional techniques in the limiting case of a slowly changing sample. This investigation is performed using experimental data collected at the ANU µCT facility.
We address the problem of tomographic image quality degradation due to the effects of beam hardening when using a polychromatic X-ray source. Beam hardening refers to the preferential attenuation of low-energy (or soft) X-rays, resulting in a beam with a higher average energy (i.e., harder). In projection images, thin or low-Z materials appear more dense relative to thick or higher-Z materials. This misrepresentation produces artifacts in the reconstructed image such as cupping and streaking.
Our method involves a post-acquisition software correction that applies a beam-hardening correction curve to remap the linearised projection intensities. The curve is modelled by an eighth-order polynomial and assumes an average material for the object. The process to determine the best correction curve requires precisely 8 reconstructions and re-projections of the experiment data. The best correction curve is defined as that which generates a projection set p that minimises the reprojection distance. Reprojection distance is defined as the L2 norm of the difference between p, a set of projections, and RR†p, the result after p is reconstructed and then reprojected, i.e., ║RR†p − p║2. Here R denotes the projection operator and R† is its Moore-Penrose pseudoinverse, i.e., the reconstruction operator.
This technique was designed for single-material objects and in this case the calculated curve matches that determined experimentally. However, this technique works very well for multiple-material objects where the resulting curve is a kind of average of all materials present. We show that this technique corrects for both cupping and streaking in tomographic images by including several experimental examples. Note that this correction method requires no knowledge of the X-ray spectrum or materials present and can therefore be applied to old data sets.
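The correction-curve search described above can be sketched generically. In this toy, `reconstruct` (R†) and `project` (R) are stand-in callables supplied by the CT code, and the candidate polynomials play the role of the eighth-order correction curve; the function and parameter names are our own hypothetical choices.

```python
import numpy as np

def reprojection_distance(p, reconstruct, project):
    """|| R R\u2020p - p ||_2 : reproject the reconstruction of p and
    measure self-consistency of the projection set."""
    return np.linalg.norm(project(reconstruct(p)) - p)

def best_correction(p_lin, candidate_polys, reconstruct, project):
    """Pick the correction polynomial whose remapped linearised
    projections are most self-consistent under reprojection distance."""
    scored = []
    for c in candidate_polys:
        p_corr = np.polyval(c, p_lin)   # remap linearised intensities
        scored.append((reprojection_distance(p_corr, reconstruct, project), c))
    return min(scored, key=lambda s: s[0])[1]
```

The design insight is that a correctly remapped projection set lies (nearly) in the range of the projection operator, so the reprojection residual can drive the search without any knowledge of the spectrum or materials.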
This paper is motivated by our group's recent move from a conventional micro-CT system with a circular source-trajectory to one with a helical source-trajectory. By using a helix we can now image well beyond the limiting cone-angle of 10° for a circle. We routinely perform micro-CT with cone-angles greater than 50° by using the Katsevich theoretically-exact reconstruction algorithm. Imaging with such large cone-angles enables high-signal-to-noise-ratio imaging but requires the specimen to be in very close proximity to the source. This brings about its own challenges. Here we present experimental considerations and data post-processing techniques that allow us to obtain high-fidelity, high-resolution micro-CT images at extreme cone-angles.
We have constructed a helical trajectory X-ray micro-CT system which enables high-resolution tomography
within practical acquisition times. In the quest for ever-increasing resolution, lab-based X-ray micro-CT systems
are limited by the spot size of the X-ray source. Unfortunately, decreasing the spot size reduces the X-ray flux,
and therefore the signal-to-noise ratio (SNR). The reduced source flux can be offset by moving the detector closer
to the source, thereby capturing a larger solid angle of the X-ray beam. We employ a helical scanning trajectory,
accompanied by an exact reconstruction method to avoid the artifacts resulting from the use of large cone-angles
with circular trajectories. In this paper, we present some challenges which arise when adopting this approach in
a high-resolution cone-beam micro-CT system.
We present a simple, robust, and versatile solution to the problem of blurred tomographic images as a result of
imperfect geometric hardware alignment. The necessary precision for the alignment between the various components
of a tomographic instrument is in many cases technologically difficult to implement, or requires impractical
stability. Misaligned projection sets are not self-consistent and give blurred tomographic reconstructions. We
have developed an off-line software method that utilises a geometric model to parameterise the alignment, and
an algorithm for determining the alignment parameter set that gives the sharpest tomogram. It is an adaptation
of passive auto-focus methods that have been used to obtain sharp images in optical instruments for decades.
To minimise computation time, the auto-focus strategy is a multi-scale iterative technique implemented on a
selection of 2D cross-sections of the tomogram. For each cross-section, the sharpness is evaluated while scanning
over various combinations of alignment parameters. The parameter set that maximises sharpness is used to reconstruct
the 3D tomogram. To apply the corrections, the projection data are re-mapped, or the reconstruction
algorithm is modified. The entire alignment process takes less time than that of a full-scale 3D reconstruction. It
can in principle be applied to any cone or parallel beam CT with circular, helical, or more general trajectories. It
can also be applied retrospectively to archived projection data without any additional information. This concept
is fully tested and implemented for routine use in the ANU micro-CT reconstruction software suite and has made
the entire reconstruction pipeline robust and autonomous.
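The sharpness-maximisation loop at the heart of this auto-focus approach can be sketched as below. The metric (total squared gradient of a trial cross-section) and the function names are illustrative choices for the example, not necessarily those used in the ANU software.

```python
import numpy as np

def sharpness(img):
    """Simple focus metric: total squared gradient of a 2-D
    cross-section.  Misalignment blurs the tomogram, lowering this."""
    gy, gx = np.gradient(img)
    return float(np.sum(gx**2 + gy**2))

def autofocus(reconstruct_slice, param_grid):
    """Scan candidate alignment parameter values, reconstruct a trial
    cross-section for each, and return the value maximising sharpness."""
    scores = [sharpness(reconstruct_slice(p)) for p in param_grid]
    return param_grid[int(np.argmax(scores))]
```

In practice this scan is run coarse-to-fine over several parameters and a few selected cross-sections, which is why the whole alignment costs less than one full 3D reconstruction.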
We present a description of our department's workflow that utilises X-ray micro-tomography in the observation and
prediction of physical properties of porous rock. These properties include fluid flow, dissolution/deposition, fracture
mapping, and mechanical processes, as well as measurement of three-dimensional (3D) morphological attributes such as
pore/grain size and shape distributions, and pore/grain connectivity. To support all these areas there is a need for well
integrated and parallel research programs in hardware development, structural description and physical property
modelling. Since we have the ability to validate simulation with physical measurement (and vice versa), an important
part of the integration of all these techniques is calibration at every stage of the workflow. For example, we can use
high-resolution scanning electron microscopy (SEM) images to verify or improve our sophisticated segmentation
algorithm based on image grey-levels and gradients. The SEM can also be used to obtain sub-resolution porosity
information estimated from tomographic grey-levels and texture. Comparing experimental and simulated mercury
intrusion porosimetry can quantify the effective resolution of tomograms and the accuracy of segmentation. The
foundation of our calibration techniques is a robust and highly optimised 3D to 3D image-based registration method.
This enables us to compare the tomograms of successively disturbed (e.g., dissolved, fractured, cleaned, ...) specimens
with an original undisturbed state. A two-dimensional (2D) to 3D version of this algorithm allows us to register
microscope images (both SEM and quantitative electron microscopy) of prepared 2D sections of each specimen. This
can assist in giving a multimodal assessment of the specimen.
The Radon transform and its inversion are the mathematical keys that enable tomography. Radon transforms are defined for continuous objects with continuous projections at all angles in [0,π). In practice, however, we pre-filter discrete projections taken at a discrete set of angles and reconstruct a discrete object. Since we
are approximating a continuous transform, it would seem that acquiring more projections at finer projection resolutions is the path to providing better reconstructions. Alternatively, a discrete Radon transform (DRT) and its inversion can be implemented. Then the angle set and the projection resolution are discrete, having been predefined by the required resolution of the tomogram. DRT projections are not necessarily evenly spaced in [0, π),
but are concentrated in directions which require more information due to the discrete square [or cubic] grid of the reconstruction space. A DRT, by design, removes the need for interpolation, speeding up the reconstruction process and gives the minimum number of projections required, reducing the acquisition time and minimizing
the required radiation dose. This paper reviews the concept of a DRT and demonstrates how it can be used to reconstruct objects from
X-ray projections more efficiently, in terms of the number of projections, and to enable speedier reconstruction. This idea was studied as early as 1977 by Myron Katz. The work begun by Katz has continued, and many methods using different DRT versions have been proposed for tomographic image reconstruction. Here, results using several of the prominent DRT formalisms are included to demonstrate the different techniques involved. The quality and artifact structure of the reconstructed images are compared and contrasted with those obtained using standard filtered back-projection.