X-ray computed micro-tomography systems are able to collect data with sub-micron resolution. This high-
resolution imaging has many applications but is particularly important in the study of porous materials, where
the sub-micron structure can dictate large-scale physical properties (e.g. carbonates, shales, or human bone).
Sample preparation and mounting become difficult for these materials below 2 mm diameter: consequently,
a typical ultra-micro-CT reconstruction volume (with sub-micron resolution) will be around 3k × 3k × 10k
voxels, with some reconstructions becoming much larger. In this paper, we discuss the hardware (MPI-parallel
CPU/GPU) and software (python/C++/CUDA) tools used at the ANU CTlab to reconstruct ~186 GigaVoxel
datasets.
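As a rough sanity check on the sizes quoted above, the voxel count and raw footprint of a 3k × 3k × 10k volume can be computed directly. This is a hypothetical sketch: reading "3k" as multiples of 1024 and assuming 16-bit voxels are our assumptions, and the ~186 GigaVoxel figure in the text corresponds to a somewhat larger scan.

```python
def gigavoxels(nx, ny, nz):
    """Total voxel count, in units of 10^9 voxels."""
    return nx * ny * nz / 1e9

def volume_gib(nx, ny, nz, bytes_per_voxel=2):
    """Raw size of the reconstructed volume in GiB."""
    return nx * ny * nz * bytes_per_voxel / 2**30

n = (3072, 3072, 10240)  # "3k x 3k x 10k" read as multiples of 1024
print(f"{gigavoxels(*n):.1f} GigaVoxels")            # ~96.6 GV
print(f"{volume_gib(*n):.0f} GiB at 16 bits/voxel")  # 180 GiB
```

Volumes of this size are why MPI-parallel CPU/GPU reconstruction is needed: even the raw output exceeds the memory of a single node.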
This paper is motivated by our group's recent move from a conventional micro-CT system with a circular source trajectory to one with a helical source trajectory. By using a helix we can now image well beyond the limiting cone-angle of 10° for a circle: we routinely perform micro-CT with cone-angles greater than 50° by using the Katsevich theoretically-exact reconstruction algorithm. Imaging at such large cone-angles enables high signal-to-noise-ratio imaging but requires the specimen to be in very close proximity to the source, which brings its own challenges. Here we present experimental considerations and data post-processing techniques that allow us to obtain high-fidelity, high-resolution micro-CT images at extreme cone-angles.
X-ray micro computed tomography (µCT) is a method of choice for the non-destructive imaging of 3D samples. A fundamental constraint of conventional X-ray µCT is that the sample must remain static during data acquisition. It therefore cannot be directly applied to the study of dynamic (i.e., 4D) processes such as pore-scale fluid displacements in porous materials: the process must be halted whilst data acquisition occurs, devaluing the experiment (e.g., the fluid displacement rate can no longer be studied with any confidence). Recent "proof-of-concept" studies have shown that "dynamic tomography" reconstruction algorithms, which incorporate a priori knowledge of the underlying physics of the process, may be capable of true high-resolution, time-resolved 4D imaging of continuous, complex processes at existing X-ray µCT facilities. In this paper, we seek to establish: (i) that the a priori information used in dynamic tomography is appropriate, i.e., does not bias the algorithm towards incorrect results; and (ii) that the results of the dynamic tomography algorithm agree with those produced by conventional techniques in the limiting case of a slowly changing sample. This investigation is performed using experimental data collected at the ANU µCT facility.
We address the problem of tomographic image quality degradation due to the effects of beam hardening when using a polychromatic X-ray source. Beam hardening refers to the preferential attenuation of low-energy (or soft) X-rays, resulting in a beam with a higher average energy (i.e., a harder beam). In projection images, thin or low-Z materials consequently appear more dense relative to thick or higher-Z materials. This misrepresentation produces artifacts in the reconstructed image such as cupping and streaking.
Our method involves a post-acquisition software correction that applies a beam-hardening correction curve to remap the linearised projection intensities. The curve is modelled by an eighth-order polynomial and assumes an average material for the object. The process of determining the best correction curve requires precisely eight reconstructions and re-projections of the experimental data. The best correction curve is defined as that which generates a projection set p minimising the reprojection distance, defined as the L2 norm of the difference between p, a set of projections, and RR†p, the result after p is reconstructed and then reprojected, i.e., ‖RR†p − p‖₂. Here R denotes the projection operator and R† is its Moore-Penrose pseudoinverse, i.e., the reconstruction operator.
This technique was designed for single-material objects, and in this case the calculated curve matches that determined experimentally. However, it also works very well for multiple-material objects, where the resulting curve is a kind of average over all materials present. We show, with several experimental examples, that this technique corrects for both cupping and streaking in tomographic images. Note that this correction method requires no knowledge of the X-ray spectrum or of the materials present, and can therefore be applied to archived data sets.
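The reprojection-distance criterion described above can be sketched in a few lines of NumPy. Here the projection operator R is stood in for by a small random matrix (with R† its Moore-Penrose pseudoinverse), and the eighth-order correction polynomial is reduced to a one-parameter quadratic purely for illustration; all names and values below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
R = rng.standard_normal((40, 25))   # stand-in projection operator
R_pinv = np.linalg.pinv(R)          # Moore-Penrose pseudoinverse (R†)

def reprojection_distance(p):
    """||R R† p - p||_2 : self-consistency of a projection set p."""
    return float(np.linalg.norm(R @ (R_pinv @ p) - p))

def correct(p, a):
    """Remap linearised intensities with a one-parameter curve."""
    return p + a * p**2             # quadratic stand-in for the 8th-order fit

p_ideal = R @ rng.random(25)                 # self-consistent projections
p_hard = p_ideal - 0.1 * p_ideal**2          # simulated beam hardening

# Scan correction strengths and keep the curve minimising the distance.
a_grid = np.linspace(0.0, 0.2, 41)
dists = [reprojection_distance(correct(p_hard, a)) for a in a_grid]
a_best = float(a_grid[int(np.argmin(dists))])
```

An ideal (consistent) projection set has reprojection distance near zero, so the search selects the correction curve that best restores self-consistency of the hardened data.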
The reconstruction of images in photoacoustic tomography is reliant on specifying the speed of sound within the propagation medium. However, for in vivo imaging, this value is not normally accurately known. Here, an autofocus approach for automatically selecting the sound speed is proposed. This is based on maximizing the sharpness of the reconstructed image as quantified by a focus function. Several focus functions are investigated, and their performance is discussed. The method is demonstrated using phantom measurements made in a medium with a known sound speed and in vivo measurements of the vasculature in the flank of an adult mouse.
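The autofocus idea above can be sketched as a scan over candidate sound speeds, scoring each reconstruction with a focus function and keeping the sharpest. The Gaussian-blurred ring below is a stand-in for a real photoacoustic reconstruction at each trial speed, and both focus functions are generic examples rather than the specific ones investigated in the paper.

```python
import numpy as np

def normalised_variance(img):
    """One candidate focus function: variance normalised by the mean."""
    return float(img.var() / img.mean())

def gradient_energy(img):
    """Another candidate: summed squared finite differences."""
    gy, gx = np.gradient(img)
    return float((gx**2 + gy**2).sum())

def toy_reconstruction(c, c_true=1500.0):
    """Stand-in reconstruction: a ring that blurs as |c - c_true| grows."""
    y, x = np.mgrid[-32:32, -32:32]
    sigma = 1.0 + 0.05 * abs(c - c_true)
    return np.exp(-((np.hypot(x, y) - 15.0) ** 2) / (2 * sigma**2))

# Scan candidate sound speeds (m/s) and keep the sharpest image.
speeds = np.arange(1400.0, 1601.0, 10.0)
scores = [gradient_energy(toy_reconstruction(c)) for c in speeds]
c_hat = float(speeds[int(np.argmax(scores))])
```

Because defocus from a wrong sound speed spreads edges, edge-sensitive focus functions such as gradient energy peak at the correct speed.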
We have constructed a helical trajectory X-ray micro-CT system which enables high-resolution tomography
within practical acquisition times. In the quest for ever-increasing resolution, lab-based X-ray micro-CT systems
are limited by the spot size of the X-ray source. Unfortunately, decreasing the spot size reduces the X-ray flux,
and therefore the signal-to-noise ratio (SNR). The reduced source flux can be offset by moving the detector closer
to the source, thereby capturing a larger solid angle of the X-ray beam. We employ a helical scanning trajectory,
accompanied by an exact reconstruction method to avoid the artifacts resulting from the use of large cone-angles
with circular trajectories. In this paper, we present some challenges which arise when adopting this approach in
a high-resolution cone-beam micro-CT system.
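The flux argument above can be made concrete with a little solid-angle geometry. The numbers below are illustrative only, not the instrument's actual dimensions.

```python
import math

def half_cone_angle_deg(h, d):
    """Half cone-angle (degrees) subtended by detector half-width h at distance d."""
    return math.degrees(math.atan(h / d))

def solid_angle_sr(h, d):
    """Solid angle (sr) of a square detector, half-width h, centred at distance d."""
    sin_a = h / math.hypot(h, d)             # sine of the half-angle per axis
    return 4.0 * math.asin(sin_a * sin_a)    # standard rectangular-pyramid formula

# Halving the source-detector distance roughly quadruples the captured flux.
gain = solid_angle_sr(0.1, 0.5) / solid_angle_sr(0.1, 1.0)
```

The gain is slightly below the small-angle factor of 4 because the solid angle grows sub-quadratically once the cone-angle becomes large, which is exactly the regime where circular-trajectory artifacts force the move to a helix.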
We present a description of our department's workflow, which utilises X-ray micro-tomography in the observation and
prediction of physical properties of porous rock. These properties include fluid flow, dissolution/deposition, fracture
mapping, and mechanical processes, as well as measurement of three-dimensional (3D) morphological attributes such as
pore/grain size and shape distributions, and pore/grain connectivity. To support all these areas there is a need for well
integrated and parallel research programs in hardware development, structural description and physical property
modelling. Since we have the ability to validate simulation with physical measurement (and vice versa), an important
part of the integration of all these techniques is calibration at every stage of the workflow. For example, we can use
high-resolution scanning electron microscopy (SEM) images to verify or improve our sophisticated segmentation
algorithm based on image grey-levels and gradients. The SEM can also be used to calibrate sub-resolution porosity
information estimated from tomographic grey-levels and texture. Comparing experimental and simulated mercury
intrusion porosimetry can quantify the effective resolution of tomograms and the accuracy of segmentation. The
foundation of our calibration techniques is a robust and highly optimised 3D to 3D image-based registration method.
This enables us to compare the tomograms of successively disturbed (e.g., dissolved, fractured, cleaned, ...) specimens
with an original undisturbed state. A two-dimensional (2D) to 3D version of this algorithm allows us to register
microscope images (both SEM and quantitative electron microscopy) of prepared 2D sections of each specimen. This
can assist in giving a multimodal assessment of the specimen.
We present a simple, robust, and versatile solution to the problem of blurred tomographic images as a result of
imperfect geometric hardware alignment. The necessary precision for the alignment between the various components
of a tomographic instrument is in many cases technologically difficult to implement, or requires impractical
stability. Misaligned projection sets are not self-consistent and give blurred tomographic reconstructions. We
have developed an off-line software method that utilises a geometric model to parameterise the alignment, and
an algorithm for determining the alignment parameter set that gives the sharpest tomogram. It is an adaptation
of passive auto-focus methods that have been used to obtain sharp images in optical instruments for decades.
To minimise computation time, the auto-focus strategy is a multi-scale iterative technique implemented on a
selection of 2D cross-sections of the tomogram. For each cross-section, the sharpness is evaluated while scanning
over various combinations of alignment parameters. The parameter set that maximises sharpness is used to reconstruct
the 3D tomogram. To apply the corrections, the projection data are re-mapped, or the reconstruction
algorithm is modified. The entire alignment process takes less time than that of a full-scale 3D reconstruction. It
can in principle be applied to any cone or parallel beam CT with circular, helical, or more general trajectories. It
can also be applied retrospectively to archived projection data without any additional information. This concept
is fully tested and implemented for routine use in the ANU micro-CT reconstruction software suite and has made
the entire reconstruction pipeline robust and autonomous.
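The multi-scale search described above can be sketched as a coarse-to-fine grid scan over an alignment parameter, keeping the value that maximises sharpness at each level. The quadratic "sharpness" function here is a stand-in for evaluating a reconstructed 2D cross-section, and all names and values are hypothetical.

```python
import numpy as np

def sharpness(offset, true_offset=0.137):
    """Stand-in score: peaks when the trial offset matches the true one."""
    return -(offset - true_offset) ** 2

def autofocus(score, lo, hi, levels=4, points=11):
    """Multi-scale grid search: refine around the best point at each level."""
    best = 0.5 * (lo + hi)
    for _ in range(levels):
        grid = np.linspace(lo, hi, points)
        best = float(grid[int(np.argmax([score(g) for g in grid]))])
        span = (hi - lo) / (points - 1)   # shrink the window to one grid cell
        lo, hi = best - span, best + span
    return best

est = autofocus(sharpness, -1.0, 1.0)
```

Each level reduces the search window by roughly the grid spacing, so a handful of levels locates the optimum to fine precision with far fewer sharpness evaluations than a single dense scan.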
We present a new image reconstruction method for distributed apertures operating in complex environments
with additive non-stationary noise. Our method is capable of exploiting information that we might have about:
multipath scattering in the environment; statistics of the objects to be imaged; statistics of the additive non-stationary
noise. The aperture elements are distributed spatially in an arbitrary fashion, and can be several
hundred wavelengths apart. Furthermore, our method facilitates multiple transmit apertures which operate
simultaneously, and is thus capable of handling a true multi-transmit-multi-receive scenario. We derive a set
of basis functions which is adapted to the given operating environment and sensor distribution. By selecting
an appropriate subset of these basis functions we obtain a subspace reconstruction which is optimal in the
sense of obtaining the minimum-mean-square-error for the reconstructed image. Furthermore, as this subspace
determines which details will be visible in the reconstructed image, it provides a tool for evaluating the sensor
locations against the objects that we would like to see in the image. The implementation of our reconstruction
takes the form of a filter bank which is applied to the pulse-echo measurements. This processing can be performed
independently on the measurements obtained from each receiving element. Our approach is therefore well suited
for parallel implementation, and can be performed in a distributed manner in order to reduce the required
communication bandwidth between each receiver and the location where the results are merged into the final
image. We present numerical simulations which illustrate the capabilities of our method.
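In matrix form, the subspace MMSE reconstruction above can be sketched as follows: with measurements m = F x + n, object covariance P, and noise covariance N, the linear MMSE estimator is W = P Fᵀ (F P Fᵀ + N)⁻¹, and truncating its SVD gives the subspace version. F, P, and N below are small random stand-ins, not the multipath operator or statistics of any real environment.

```python
import numpy as np

rng = np.random.default_rng(1)
F = rng.standard_normal((30, 20))   # stand-in forward (multipath) operator
P = np.eye(20)                      # object statistics (prior covariance)
N = 0.5 * np.eye(30)                # additive noise covariance

# Linear MMSE filter bank for measurements m = F x + n.
W = P @ F.T @ np.linalg.inv(F @ P @ F.T + N)

# Subspace version: keep only the r strongest singular components.
U, s, Vt = np.linalg.svd(W)
r = 10
W_r = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]

# Applying W (or W_r) row-wise is the per-receiver filter-bank form.
x = rng.standard_normal(20)
m = F @ x + np.sqrt(0.5) * rng.standard_normal(30)
x_sub = W_r @ m
```

Because each column of W acts on one measurement channel, the filter bank can be applied independently per receiving element and the partial results summed at the fusion point, which is what enables the distributed, bandwidth-reducing implementation.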
We present a new receiver design for spatially distributed
apertures to detect targets in an urban environment.
A distorted-wave Born approximation is used to model the scattering
environment. We formulate the received signals at different
receive antennas in terms of the received signal at the first
antenna. The detection problem is then formulated as a binary
hypothesis test. The receiver is chosen as the optimal linear filter
that maximizes the signal-to-noise ratio (SNR) of the
corresponding test statistic. The receiver operation amounts to
correlating a transformed version of the measurement at the first
antenna with the rest of the measurements. In the
free-space case the transformation applied to the measurement from the first
antenna reduces to a delay operator. We evaluate the performance of
the receiver on a real data set collected in a multipath- and
clutter-rich urban environment and on simulated data corresponding to a simple
multipath scene. Both the experimental and simulation results show that
the proposed receiver design offers significant improvement in
detection performance compared to conventional matched
filtering.
The idea of preconditioning transmit waveforms for optimal clutter rejection in radar imaging is presented.
Waveform preconditioning involves determining a map on the space of transmit waveforms, and then applying this
map to the waveforms before transmission. The work applies to systems with an arbitrary number of transmit-
and receive-antenna elements, and makes no assumptions about the elements being co-located. Waveform
preconditioning for clutter rejection achieves efficient use of power and computational resources by distributing
power properly over a frequency band and by eliminating clutter filtering in receive processing.
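One simple reading of such a preconditioning map is a frequency-domain reshaping that moves transmit power away from heavily cluttered bands, in a whitening-like fashion. The sketch below is this interpretation only, with a toy pulse and clutter spectrum; it is not the paper's actual map.

```python
import numpy as np

def precondition(waveform, clutter_psd, eps=1e-3):
    """Map a transmit waveform through a clutter-de-emphasising filter."""
    W = np.fft.rfft(waveform)
    Wp = W / np.sqrt(clutter_psd + eps)      # attenuate cluttered bands
    out = np.fft.irfft(Wp, n=len(waveform))
    # Renormalise so total transmit power is unchanged.
    return out * (np.linalg.norm(waveform) / np.linalg.norm(out))

n = 128
t = np.arange(n)
pulse = np.exp(-((t - 64.0) ** 2) / (2 * 4.0**2)) * np.cos(2 * np.pi * 10 * t / n)

clutter_psd = np.ones(n // 2 + 1)
clutter_psd[8:13] = 100.0                    # strong clutter near bin 10

pre = precondition(pulse, clutter_psd)

def band_fraction(x, lo=8, hi=13):
    """Fraction of spectral energy inside the clutter band."""
    P = np.abs(np.fft.rfft(x)) ** 2
    return float(P[lo:hi].sum() / P.sum())
```

Because the reshaping happens before transmission, the receive chain needs no separate clutter filter, which is the efficiency the abstract points to.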