KEYWORDS: Iodine, Breast, Digital breast tomosynthesis, Sensors, Mammography, Dual energy imaging, Tumors, Lead, Data acquisition, Reconstruction algorithms
Dual-Energy Contrast Enhanced Digital Breast Tomosynthesis (DE-CE-DBT) has the potential to deliver diagnostic
information for vascularized breast pathology beyond that available from screening DBT. DE-CE-DBT involves a
contrast (iodine) injection followed by low energy (LE) and high energy (HE) acquisitions. These undergo weighted
subtraction and then a reconstruction that ideally shows only the iodinated signal. Scatter in the projection data leads to
“cupping” artifacts that can reduce the visibility and quantitative accuracy of the iodinated signal. The use of filtered
backprojection (FBP) reconstruction ameliorates these types of artifacts, but the use of FBP precludes the advantages of
iterative reconstructions. This motivates an effective and clinically practical scatter correction (SC) method for the
projection data. We propose a simple SC method, applied at each acquisition angle. It uses scatter-only data at the edge
of the image to interpolate a scatter estimate within the breast region. The interpolation has an approximately correct
spatial profile but is quantitatively inaccurate. We further correct the interpolated scatter data with the aid of easily
obtainable knowledge of SPR (scatter-to-primary ratio) at a single reference point. We validated the SC method using a
CIRS breast phantom with iodine inserts. We evaluated its efficacy in terms of SDNR (signal-difference-to-noise ratio) and iodine quantitative accuracy.
We also applied our SC method to a patient DE-CE-DBT study and showed that the SC allowed detection of a
previously confirmed tumor at the edge of the breast. The SC method is quick to use and may be useful in a clinical
setting.
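As an illustration only (not the authors' implementation), the sketch below shows one way the per-view scatter correction described above could be coded, assuming a mask of edge pixels known to contain scatter only and a known SPR value at a single reference pixel are supplied; the function name, arguments, and rescaling convention are illustrative.

```python
import numpy as np
from scipy.interpolate import griddata

def scatter_correct_view(projection, scatter_only_mask, spr_ref, ref_pixel):
    """Per-view scatter correction in the spirit of the method described above.

    projection        : 2D array, measured signal (primary + scatter)
    scatter_only_mask : boolean array marking edge pixels assumed to contain scatter only
    spr_ref           : scatter-to-primary ratio known at ref_pixel
    ref_pixel         : (row, col) reference point inside the breast region
    (argument names and the rescaling convention are illustrative assumptions)
    """
    rows, cols = np.indices(projection.shape)
    pts = np.column_stack([rows[scatter_only_mask], cols[scatter_only_mask]])
    vals = projection[scatter_only_mask]

    # Interpolate a smooth scatter estimate across the breast region; the
    # spatial profile is approximately right but its magnitude is not trusted.
    scatter = griddata(pts, vals, (rows, cols), method="linear")
    scatter = np.nan_to_num(scatter, nan=float(vals.mean()))

    # Rescale so the estimate matches the known SPR at the reference pixel:
    # there, scatter = SPR / (1 + SPR) * total.
    r, c = ref_pixel
    target = spr_ref / (1.0 + spr_ref) * projection[r, c]
    scatter *= target / max(scatter[r, c], 1e-9)

    # Subtract the corrected scatter estimate from the measured projection.
    return projection - scatter
```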
Contrast enhanced digital breast tomosynthesis can yield superior visualization of tumors relative to conventional
tomosynthesis and can provide the contrast uptake kinetics available in breast MR while maintaining a higher image
spatial resolution. Conventional dual-energy (DE) acquisition protocols for contrast enhancement at a given time point
often involve two separate continuous motion sweeps of the X-ray tube (one per energy) followed by weighted
subtraction of the HE (high energy) and LE (low energy) projection data. This subtracted data is then reconstructed.
Relative to two-sweep acquisition, interleaved acquisition suffers less from patient motion artifacts and
entails less time spent under uncomfortable breast compression. These advantages for DE interleaved acquisition are
reduced by subtraction artifacts because each HE-LE acquisition pair is offset in angle for the usual case of
continuous tube motion. These subtraction artifacts propagate into the reconstruction and are present even in the absence
of patient motion. To reduce these artifacts, we advocate a strategy in which the HE and LE projection data are
separately reconstructed then undergo weighted subtraction in the reconstruction domain. We compare the SDNR of
masses in a phantom for the subtract-then-reconstruct vs. reconstruct-then-subtract strategies and evaluate each strategy
for two algorithms, FBP and SART. We also compare the interleaved SDNR results with those obtained with the
conventional dual-energy double-sweep method. For interleaved scans and for either algorithm, the reconstruct-then-subtract
strategy yields higher SDNR than the subtract-then-reconstruct strategy. For any of the three acquisition modes,
SART reconstruction yields better SDNR than FBP reconstruction. Finally, the interleaved reconstruct-then-subtract
method using SART yields higher SDNR than any of the double-sweep conventional acquisitions.
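A minimal sketch of the two processing orders compared above, assuming a generic DBT reconstructor and a log-domain dual-energy weighting of the form HE minus w times LE (one common convention; not necessarily the exact weighting used in the study):

```python
import numpy as np

def iodine_volume(recon, proj_he, ang_he, proj_le, ang_le, w,
                  order="reconstruct_then_subtract"):
    """Sketch of the two dual-energy processing orders.

    recon(proj, angles) -> 3D volume : any DBT reconstructor (FBP, SART, ...)
    proj_he, proj_le : log-converted HE / LE projections
    ang_he, ang_le   : their view angles (offset from each other in an interleaved sweep)
    w                : DE weighting factor chosen to cancel non-iodinated tissue contrast
    """
    if order == "subtract_then_reconstruct":
        # Projection-domain subtraction requires pairing each HE view with an
        # LE view; in an interleaved sweep those views sit at slightly
        # different angles, which is the source of the subtraction artifacts.
        return recon(proj_he - w * proj_le, ang_he)
    # Reconstruct each energy on its own geometry, then subtract the volumes.
    return recon(proj_he, ang_he) - w * recon(proj_le, ang_le)
```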
KEYWORDS: Signal detection, Radon, Tolerancing, Medical imaging, Interference (communication), Lawrencium, Signal to noise ratio, Imaging systems, Image enhancement, Retina
FROC and AFROC analyses are useful in medical imaging to characterize detection performance for the case of
multiple lesions. We had previously developed ideal FROC and AFROC observers [1]. Their performance is ideal in
that they maximize the area or any partial area under the FROC or AFROC curve. Such observers could be useful
in imaging system optimization or in assessing human observer efficiency. However, the performance evaluation
of these ideal observers is computationally impractical. We propose three reasonable assumptions under
which the ideal observers reduce approximately to a particular form of a scan-statistic observer. Performance
for the "scan-statistic-reduced ideal observer" can be evaluated far more rapidly, albeit with slight error, than
that of the originally proposed ideal observer. Through simulations, we confirm the accuracy of our approximate
ideal observers. We also compare the performance of our approximate ideal observer with that of a conventional
scan-statistic observer and show that the approximate ideal observer performs significantly better.
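For concreteness, here is a minimal sketch of a conventional scan-statistic observer of the kind referred to above, assuming a matched-filter local statistic; the exact local statistic used in the study is not specified here.

```python
import numpy as np
from scipy.signal import fftconvolve

def scan_statistic_observer(image, template):
    """Generic scan-statistic observer (illustration only):
    a local matched-filter response is computed at every candidate location,
    and the global decision variable is the maximum response, reported
    together with its location."""
    # Cross-correlation of the image with the expected signal profile.
    z = fftconvolve(image, template[::-1, ::-1], mode="same")
    loc = np.unravel_index(np.argmax(z), z.shape)
    return z[loc], loc
```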
Detection of multiple lesions (signals) in images is a medically important task, and Free-response Receiver Operating Characteristic (FROC) analysis and its variants, such as Alternative FROC (AFROC) analysis, are commonly used to quantify performance in such tasks. However, ideal observers that optimize FROC or AFROC performance metrics have not yet been formulated in the general case. If available, such ideal observers may turn out to be valuable for imaging system optimization and in the design of computer aided diagnosis (CAD) techniques for lesion detection in medical images. In this paper we derive ideal AFROC and FROC observers. They are ideal in that they maximize, amongst all decision strategies, the area under the associated AFROC or FROC curve. In addition, these ideal observers minimize Bayes risk for particular choices of cost constraints. Calculation of observer performance for these ideal observers is computationally quite complex. We can reduce this complexity by considering forms of these observers that use false positive reports derived from signal-absent images only. We present a performance comparison of our ideal AFROC observer versus that of a more conventional scan-statistic observer.
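As a hedged illustration of how an AFROC figure of merit can be estimated when false-positive reports are taken from signal-absent images only (the variant mentioned above), a standard Wilcoxon-style estimator is sketched below; it is not the authors' code, and the tie convention is an assumption.

```python
import numpy as np

def afroc_area(lesion_ratings, max_fp_rating_per_normal_image):
    """Nonparametric (Wilcoxon-style) estimate of the area under an AFROC
    curve when false positives come from signal-absent images only.

    lesion_ratings : one rating per lesion; lesions that are never marked
                     can be assigned -np.inf so they never win a comparison.
    max_fp_rating_per_normal_image : highest false-positive rating on each
                     signal-absent image (-np.inf if that image has no marks).
    """
    les = np.asarray(lesion_ratings, float)
    fps = np.asarray(max_fp_rating_per_normal_image, float)
    wins = (les[:, None] > fps[None, :]).mean()
    ties = (les[:, None] == fps[None, :]).mean()  # ties counted as 1/2
    return wins + 0.5 * ties
```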
KEYWORDS: Collimators, Imaging systems, Tolerancing, Signal detection, Single photon emission computed tomography, Tomography, Signal attenuation, Data modeling, Sensors, Image resolution
We consider the problem of optimizing collimator characteristics for a simple emission tomographic imaging
system. We use the performance of two different ideal observers to carry out the optimization. The first ideal
observer applies to signal detection when signal location is unknown and background is variable, and the second
ideal observer (one proposed previously by our group) applies to the more realistic task of signal detection and localization
with signal location unknown and background variable. The two observers operate on sinogram data to deliver
scalar figures of merit AROC and ALROC, respectively. We considered three different collimators that span a
range of efficiency-resolution tradeoffs. Our central question is this: For optimizing the collimator in an emission
tomographic system, does adding a localization requirement to a detection task yield an efficiency-resolution
tradeoff that differs from that for the detection-only task? Our simulations with a simple SPECT imaging
system show that as the localization requirement becomes more stringent, the optimal collimator shifts from
a low-resolution, high-efficiency version toward a higher-resolution, lower-efficiency version. We had previously
observed such behavior for a planar pinhole imaging system. In our simulations, we used a simplified model of
tomographic imaging and a simple model for object background variability. This allowed us to avoid the severe
computational complexity associated with ideal-observer performance calculations. Thus, for this case, the more realistic
task (i.e., with localization included) resulted in a different optimal collimator.
Tomosynthesis mammography is a potentially valuable technique for detection of breast cancer. In this simulation study, we investigate the efficacy of three different tomographic reconstruction methods, EM, SART and Backprojection, in the context of an especially difficult mammographic detection task. The task is the detection of a very low-contrast mass embedded in very dense fibro-glandular tissue - a clinically useful task for which tomosynthesis may be well suited. The project uses an anatomically realistic 3D digital breast phantom whose normal anatomic variability limits lesion conspicuity. In order to capture anatomical object variability, we generate an ensemble of phantoms, each of which comprises random instances of various breast structures. We construct medium-sized 3D breast phantoms which model random instances of ductal structures, fibrous connective tissue, Cooper's ligaments and power-law structural noise for small-scale object variability. Random instances of 7-8 mm irregular masses are generated by a 3D random walk algorithm and placed in very dense fibro-glandular tissue. Several other components of the breast phantom are held fixed, i.e. not randomly generated; these include the breast shape and size, nipple structure, lesion location, and a pectoralis muscle. We collect low-dose data using an isocentric tomosynthetic geometry at 11 angles over 50 degrees and add Poisson noise. The data is reconstructed using the three algorithms. Reconstructed slices through the center of the lesion are presented to human observers in a 2AFC (two-alternative forced-choice) test that measures detectability by computing AUC (area under the ROC curve). The data collected in each simulation includes two sources of variability: that due to the anatomical variability of the phantom and that due to the Poisson data noise. We found that for this difficult task the AUC value for EM (0.89) was greater than that for SART (0.83) and Backprojection (0.66).
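The 2AFC figure of merit rests on the standard result that the expected proportion of correct 2AFC trials equals the area under the ROC curve. A minimal sketch, assuming scalar ratings and one lesion-present/lesion-absent pair per trial (names illustrative):

```python
import numpy as np

def two_afc_proportion_correct(rating_lesion_slice, rating_normal_slice, rng=None):
    """Proportion correct in a 2AFC experiment: on each trial the observer
    sees one lesion-present and one lesion-absent slice and must pick the
    lesion-present one.  With a rating-type response, the choice is the slice
    with the higher rating; the expected proportion correct equals AUC."""
    rng = np.random.default_rng() if rng is None else rng
    s = np.asarray(rating_lesion_slice, float)
    n = np.asarray(rating_normal_slice, float)
    correct = s > n                                                  # per-trial decisions
    correct = np.where(s == n, rng.random(s.shape) < 0.5, correct)   # guess on ties
    return correct.mean()
```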
KEYWORDS: Tolerancing, Signal detection, Tumors, Imaging systems, Medical imaging, Mammography, Data modeling, Image processing, Data processing, Performance modeling
For the 2-class detection problem (signal absent/present), the likelihood ratio is an ideal observer in that it minimizes Bayes risk for arbitrary costs and it maximizes AUC, the area under the ROC curve. The AUC-optimizing property makes it a valuable tool in imaging system optimization. If one considered a different task, namely, joint detection and localization of the signal, then it would be similarly valuable to have a decision strategy that optimized a relevant scalar figure of merit. We are interested in quantifying performance on decision tasks involving location uncertainty using the LROC methodology. We derive decision strategies that maximize the area under the LROC curve, ALROC. We show that these decision strategies minimize Bayes risk under certain reasonable cost constraints. We model the detection-localization task as a decision problem in three increasingly realistic ways. In the first two models, we treat location as a discrete parameter having finitely many values resulting in an (L+1) class classification problem. In our first simple model, we do not include search tolerance effects and in the second, more general, model, we do. In the third and most general model, we treat location as a continuous parameter and also include search tolerance effects. In all cases, the essential proof that the observer maximizes ALROC is obtained with a modified version of the Neyman-Pearson lemma using Lagrange multiplier methods. A separate form of proof is used to show that in all three cases, the decision strategy minimizes the Bayes risk under certain reasonable cost constraints.
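For reference, the area under the LROC curve can be estimated nonparametrically as the probability that a signal-present image is both rated above an independent signal-absent image and correctly localized within the search tolerance. A minimal sketch of this standard estimator follows (it is not the ideal-observer decision strategy derived above):

```python
import numpy as np

def empirical_alroc(rating_present, correct_loc, rating_absent):
    """Nonparametric estimate of the area under the LROC curve.

    rating_present : confidence ratings on signal-present images
    correct_loc    : boolean, True when the reported location was within
                     the search tolerance of the true location
    rating_absent  : confidence ratings on signal-absent images
    """
    rp = np.asarray(rating_present, float)
    cl = np.asarray(correct_loc, bool)
    ra = np.asarray(rating_absent, float)
    # Pairwise comparison; a pair counts only when the signal-present image
    # is both rated higher and correctly localized.
    wins = (rp[:, None] > ra[None, :]) & cl[:, None]
    return wins.mean()
```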
KEYWORDS: Lab on a chip, Signal detection, Sensors, Reconstruction algorithms, Tomography, Monte Carlo methods, Detection theory, Prototyping, Performance modeling, Data modeling
Detection and localization performance with signal location uncertainty may be summarized by Figures of Merit (FOM's) obtained from the LROC curve. We consider model observers that may be used to compute the two LROC FOM's, ALROC and PCL, for emission tomographic MAP reconstruction. We address the case of background-known-exactly (BKE) and signal known except for location. Model observers may be used, for instance, to rapidly prototype studies that use human observers. Our FOM calculation is an ensemble method (no samples of reconstructions needed) that makes use of theoretical expressions for the mean and covariance of the reconstruction. An affine local observer computes a response at each location, and the maximum of these is used as the global observer - the response needed by the LROC curve. In previous work, we had assumed the local observers to be independent and normally distributed, which allowed the use of closed form expressions to compute the FOM's. Here, we relax the independence assumption and make the approximation that the local observer responses are jointly normal. We demonstrate a fast theoretical method to compute the mean and covariance of this joint distribution (for the signal-absent and signal-present cases) given the theoretical expressions for the reconstruction mean and covariance. We can then generate samples from this joint distribution and rapidly (since no reconstructions need be computed) compute the LROC FOM's. We validate the results of the procedure by comparison to FOM's obtained using a gold-standard Monte Carlo method employing a large set of reconstructed noise trials.
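A simplified sketch of the sampling step described above, assuming the means and covariances of the local observer responses have already been obtained from the theoretical reconstruction moments; the variable names and the PCL convention used here (the right-hand endpoint of the LROC curve) are illustrative assumptions.

```python
import numpy as np

def lroc_foms_from_moments(mu0, cov0, mu1, cov1, true_loc,
                           n_samples=20000, seed=0):
    """Monte Carlo evaluation of PCL and ALROC from an assumed jointly normal
    distribution of the local observer responses (no reconstructions needed).

    mu0, cov0 : mean / covariance of the local responses, signal absent
    mu1, cov1 : mean / covariance of the local responses, signal present
    true_loc  : index of the true signal location among the candidates
    """
    rng = np.random.default_rng(seed)
    z0 = rng.multivariate_normal(mu0, cov0, size=n_samples)
    z1 = rng.multivariate_normal(mu1, cov1, size=n_samples)

    # Global observer: maximum local response; its argmax is the reported location.
    t0, t1 = z0.max(axis=1), z1.max(axis=1)
    correct = z1.argmax(axis=1) == true_loc

    # PCL taken here as the fraction of signal-present samples whose maximum
    # falls at the true location.
    pcl = correct.mean()

    # ALROC: P(correct localization AND t1 > t0) over independent pairs.
    t0_sorted = np.sort(t0)
    frac_abs_below = np.searchsorted(t0_sorted, t1[correct]) / n_samples
    alroc = frac_abs_below.sum() / n_samples
    return pcl, alroc
```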
We investigate a new, provably convergent OSEM-like (ordered-subsets expectation-maximization) reconstruction algorithm for emission tomography. The new algorithm, which we term C-OSEM (complete-data OSEM), can be shown to monotonically increase the log-likelihood at each iteration. The familiar ML-EM reconstruction algorithm for emission tomography can be derived in a novel way. One may write a single objective function involving the complete data, the incomplete data, and the reconstruction variables, as in the EM approach. But in the objective function approach, there is no E-step. Instead, a suitable alternating descent on the complete data and then the reconstruction variables results in two update equations that can be shown to be equivalent to the familiar EM algorithm. Hence, minimizing this objective becomes equivalent to maximizing the likelihood. We derive our C-OSEM algorithm by modifying the above approach to update the complete data only along ordered subsets. The resulting update equation is quite different from OSEM, but still retains the speed-enhancing feature of the updates due to the limited backprojection facilitated by the ordered subsets. Despite this modification, we are able to show that the objective function decreases at each iteration, and (given a few more mild assumptions regarding the number of fixed points) conclude that the C-OSEM algorithm provides a monotonic convergence toward the maximum likelihood solution. We simulated noisy and noiseless emission projection data, and reconstructed them using ML-EM and the proposed C-OSEM with 4 subsets. We also reconstructed the data using the OSEM method. Anecdotal results show that the C-OSEM algorithm is much faster than ML-EM though slower than OSEM.
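For context, the familiar ML-EM and OSEM updates referred to above are sketched below (with a dense system matrix for simplicity); the C-OSEM update itself differs from these and is not reproduced here.

```python
import numpy as np

def ml_em(A, y, n_iter=50, eps=1e-12):
    """Classical ML-EM for emission tomography, y ~ Poisson(A @ lam).
    (Baseline referenced in the abstract; not the C-OSEM update.)"""
    lam = np.ones(A.shape[1])
    sens = A.sum(axis=0)                       # sensitivity image, sum_i a_ij
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ lam, eps)   # measured / estimated projections
        lam *= (A.T @ ratio) / np.maximum(sens, eps)
    return lam

def osem(A, y, n_subsets=4, n_iter=10, eps=1e-12):
    """Standard OSEM: the ML-EM update restricted to ordered subsets of rays.
    Fast, but unlike C-OSEM it is not guaranteed to increase the likelihood."""
    lam = np.ones(A.shape[1])
    subsets = [np.arange(s, A.shape[0], n_subsets) for s in range(n_subsets)]
    for _ in range(n_iter):
        for idx in subsets:
            As, ys = A[idx], y[idx]
            ratio = ys / np.maximum(As @ lam, eps)
            lam *= (As.T @ ratio) / np.maximum(As.sum(axis=0), eps)
    return lam
```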
We previously introduced a new Bayesian method for transmission tomographic reconstruction that is useful in attenuation correction in SPECT and PET. To make it practical, we apply a deterministic annealing algorithm to the method in order to avoid the dependence of the MAP estimate on the initial conditions. The Bayesian reconstruction method used a novel pointwise prior in the form of a mixture of gamma distributions. The prior models the object as comprising voxels whose values (attenuation coefficients) cluster into a few classes (e.g. soft tissue, lung, bone). This model is particularly applicable to transmission tomography since the attenuation map is usually well-clustered and the approximate values of attenuation coefficients in each region are known. The algorithm is implemented as two alternating procedures, a regularized likelihood reconstruction and a mixture parameter estimation. The Bayesian reconstruction algorithm can be effective, but has the problem of sensitivity to initial conditions since the overall objective is non-convex. To make it more practical, it is important to avoid such dependence on initial conditions. Here, we implement a deterministic annealing (DA) procedure on the alternating algorithm. We present the Bayesian reconstructions with and without DA and show that, with DA, the result is independent of the initial conditions.
We seek to optimize a SPECT brain-imaging system for the task of detecting a small tumor located at random in the brain. To do so, we have created a computer model. The model includes three-dimensional, digital brain phantoms which can be quickly modified to simulate multiple patients. The phantoms are then projected geometrically through multiple pinholes. Our figure of merit is the Hotelling trace, a measure of detectability by the ideal linear observer. The Hotelling trace allows us to quantitatively measure a system's ability to perform a specific task. Because the Hotelling trace requires a large number of samples, we reduce the dimensionality of our images using Laguerre-Gauss functions as channels. To illustrate our method, we compare a system built from small high-resolution cameras to one utilizing larger, low-resolution cameras.
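A rough sketch of the channelized Hotelling computation described above, using one common parameterization of rotationally symmetric Laguerre-Gauss channels (normalization approximate) and the two-class form of the Hotelling detectability, to which the Hotelling trace reduces (up to a constant) for a two-class task; parameter names are illustrative.

```python
import numpy as np

def laguerre_gauss_channels(shape, n_channels, a, center=None):
    """2D rotationally symmetric Laguerre-Gauss channel profiles
    (a common channel choice; 'a' is the Gaussian width parameter)."""
    ny, nx = shape
    cy, cx = center if center else (ny / 2.0, nx / 2.0)
    y, x = np.indices(shape)
    g = 2.0 * np.pi * ((x - cx) ** 2 + (y - cy) ** 2) / a ** 2
    chans = []
    for n in range(n_channels):
        # Laguerre polynomial L_n evaluated at g, via a one-hot coefficient vector.
        Ln = np.polynomial.laguerre.lagval(g, np.eye(n_channels)[n])
        chans.append(np.sqrt(2.0) / a * np.exp(-g / 2.0) * Ln)
    return np.stack([c.ravel() for c in chans], axis=1)   # (npix, nchan)

def channelized_hotelling_snr2(imgs_signal, imgs_noise, U):
    """Hotelling detectability in channel space:
    SNR^2 = dv^T S^{-1} dv, with dv the difference of class means of the
    channel outputs and S the average within-class covariance."""
    v1 = imgs_signal.reshape(len(imgs_signal), -1) @ U
    v0 = imgs_noise.reshape(len(imgs_noise), -1) @ U
    dv = v1.mean(0) - v0.mean(0)
    S = 0.5 * (np.cov(v1, rowvar=False) + np.cov(v0, rowvar=False))
    return float(dv @ np.linalg.solve(S, dv))
```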
Maximum a posteriori approaches in the context of a Bayesian framework have played an important role in SPECT reconstruction. The major advantages of these approaches include not only the capability of modeling the character of the data in a natural way but also the ability to incorporate a priori information. Here, we show that a simple modification of the conventional smoothing prior, such as the membrane prior, to one less sensitive to variations in first spatial derivatives - the thin plate (TP) prior - yields improved reconstructions in the sense of low bias at little change in variance. Although the nonquadratic priors, such as the weak membrane and the weak plate, can exhibit good performance, they suffer difficulties in optimization and hyperparameter estimation. On the other hand, the thin plate, which is a quadratic prior, leads to easier optimization and hyperparameter estimation. In this work, we evaluate and compare quantitative performance of the membrane (MM), TP, and FBP algorithms in an ensemble sense to validate advantages of the thin plate model. We also observe and characterize the behavior of the associated hyperparameters of the prior distributions in a systematic way. To incorporate our new prior in a MAP approach, we model the prior as a Gibbs distribution and embed the optimization within a generalized expectation-maximization algorithm. To optimize the corresponding M-step objective function, we use a version of iterated conditional modes. We show that the use of second derivatives yields 'robustness' in both bias and variance by demonstrating that TP leads to very low bias error over a large range of the smoothing parameter, while keeping a reasonable variance.
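In discretized form, the two quadratic smoothing energies contrasted above are typically written as follows (notation illustrative, not taken from the paper):

```latex
% Membrane prior: penalizes first spatial derivatives of the image f.
U_{\mathrm{MM}}(f) \;=\; \beta \sum_{j} \left( f_{x,j}^{2} + f_{y,j}^{2} \right)
% Thin-plate prior: penalizes second spatial derivatives, so linear ramps are cost-free.
U_{\mathrm{TP}}(f) \;=\; \beta \sum_{j} \left( f_{xx,j}^{2} + 2 f_{xy,j}^{2} + f_{yy,j}^{2} \right)
```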
KEYWORDS: Brain, Autoregressive models, Single photon emission computed tomography, Tissues, Capillaries, Data modeling, Neuroimaging, Monte Carlo methods, Imaging systems, Radioisotopes
In the development of reconstruction algorithms in emission computed tomography (ECT), digital phantoms designed to mimic the presumed spatial distribution of radionuclide activity in a human are extensively used. Given the low spatial resolution in ECT, it is usually presumed that a crude phantom, typically with a constant activity level within an anatomically derived region, is sufficiently realistic for testing. Here, we propose that phantoms may be improved by assigning biologically realistic patterns of activity in more precisely delineated regions. Animal autoradiography is proposed as a source of realistic activity and anatomy. We discuss the basics of radiopharmaceutical autoradiography and consider aspects of using such data for a brain phantom. A few crude simulations with brain phantoms derived from animal data are shown.
Deformable models using energy minimization have proven to be useful in computer vision for segmenting complex objects based on various measures of image contrast. In this paper, we incorporate prior shape knowledge to aid boundary finding of 2D objects in an image in order to overcome problems associated with noise, missing data, and the overlap of spurious regions. The prior shape knowledge is encoded as an atlas of contours of default shapes of known objects. The atlas contributes a term to an energy function driving the segmenting contour to seek a balance between image forces and conformation to the atlas shape. The atlas itself is allowed to undergo a cost-free affine transformation. An alternating algorithm is proposed to minimize the energy function and hence achieve the segmentation. First, the segmenting contour deforms slightly according to image forces, such as high gradients, as well as the atlas guidance. Then the atlas is itself updated according to the current estimate of the object boundary by deforming through an affine transform to optimally match the boundary. In this way, the atlas provides strong guidance in some regions that would otherwise be hard to segment. Some promising results on synthetic and real images are shown.
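One way to write the kind of energy function described above (the notation is illustrative, not taken from the paper):

```latex
% C: the segmenting contour; A: affine parameters applied to the atlas contour.
E(C, A) \;=\; E_{\mathrm{image}}(C) \;+\; \lambda \int_{0}^{1}
\left\| C(s) - T_{A}\!\big(C_{\mathrm{atlas}}(s)\big) \right\|^{2} ds
% T_A is the cost-free affine transform of the atlas contour, re-estimated
% (e.g. by least squares) in alternation with the contour update.
```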
Automated segmentation of magnetic resonance (MR) brain imagery into anatomical regions is a complex task that appears to need contextual guidance in order to overcome problems associated with noise, missing data, and the overlap of features associated with different anatomical regions. In this work, the contextual information is provided in the form of an anatomical brain atlas. The atlas provides defaults that supplement the low-level MR image data and guide its segmentation. The matching of atlas to image data is represented by a set of deformable contours that seek compromise fits between expected model information and image data. The dynamics that deform the contours solve both a correspondence problem (which element of the deformable contour corresponds to which elements of the atlas and image data?) and a fitting problem (what is the optimal contour that corresponds to a compromise of atlas and image data while maintaining smoothness?). Some initial results on simple 2D contours are shown.
Object recognition is a complex task involving simultaneous problems in grouping, segmentation, and matching. Previous work involved an objective function formulation of the problem, resulting in a uniform method of addressing problems in object recognition that have heretofore been approached by heterogeneous complex vision systems. Not unexpectedly, the complexity of our objective functions resulted in numerous optimization failures. Here we propose to prime the system with estimates of the objects' parameters at a coarse, more abstract scale. We discuss how this might be done. These initial values are expected to bring the state of the system closer to good minima.