We report on an optimization-based image reconstruction algorithm for contrast-enhanced digital breast tomosynthesis (DBT) using dual-energy scanning. The algorithm is designed to enable quantitative imaging of an iodine-based contrast agent by mitigating the depth-blur artifact. The depth blurring is controlled by exploiting gradient sparsity of the contrast agent distribution. We find that minimization of directional total variation (TV) is particularly effective at exploiting gradient sparsity for the DBT scan configuration. In this initial work, the contrast agent imaging is performed by reconstructing images from DBT data acquired at source potentials of 30 and 49 kV, followed by weighted subtraction to suppress background glandular structure and isolate the contrast agent distribution. The algorithm is applied to DBT data, acquired with a Siemens Mammomat scanner, of a structured breast phantom with iodine contrast agent inserts. Results for both in-plane and transverse-plane imaging with directional TV minimization are presented alongside images reconstructed by filtered back-projection for reference. Directional TV is seen to substantially reduce depth blur for the iodine-based contrast objects.
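As a concrete illustration of the weighted-subtraction step described above, the following sketch assumes two co-registered reconstructed volumes and a scalar weight chosen to null glandular/adipose contrast; the weight value and array shapes are placeholders, not values from the study.

```python
import numpy as np

def weighted_subtraction(recon_high_kv, recon_low_kv, w):
    """Suppress glandular background by subtracting a weighted low-kV volume.

    recon_high_kv, recon_low_kv : 3D arrays reconstructed from the high- and
        low-kV scans (assumed co-registered on the same voxel grid).
    w : scalar weight chosen so that glandular/adipose contrast cancels,
        leaving primarily the iodine signal.
    """
    return recon_high_kv - w * recon_low_kv

# Toy usage with random volumes standing in for real reconstructions.
rng = np.random.default_rng(0)
vol_low = rng.normal(size=(16, 64, 64))
vol_high = rng.normal(size=(16, 64, 64))
iodine_map = weighted_subtraction(vol_high, vol_low, w=0.6)  # w = 0.6 is illustrative only
```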
Power-law phantoms have been useful for assessing the imaging properties of breast imaging systems. Recent advances in 3D printing have enabled printing of 3D objects with density variations, allowing a physical 3D-printed power-law phantom to be created. The purpose of this study was to explore the characteristics of phantom images acquired with a commercial breast tomosynthesis system and to compare the results with prior findings using breast images. A 3D phantom was printed using PixelPrint. The texture variations in such phantoms are described by the power-law exponent beta; the design beta of the phantom model was 3.4. The printed phantom was imaged on a Hologic Selenia 3Dimensions breast tomosynthesis unit in 2D and 3D imaging modes. Power-spectrum analysis was performed on the 2D and 3D images to obtain estimates of beta. Visual inspection of the images revealed grid artifacts in the phantom from the printing process; for the power-law analysis, these regions were excluded by applying a mask in the Fourier domain. The observed difference in the power-law exponent between projection and tomosynthesis images (0.24) was similar to the differences reported in patient image studies (0.17 to 0.21). Power-spectral analysis of a novel 3D-printed power-law phantom thus yielded changes in beta similar to those observed in patient data, indicating that such phantoms predict the tomosynthesis image characteristics of breast images.
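For readers unfamiliar with the power-spectrum analysis, the sketch below estimates beta from P(f) ~ c/f^beta by radially averaging the 2D power spectrum of an ROI and fitting the log-log slope; ROI windowing, detrending, and the Fourier-domain artifact mask used in the study are omitted here.

```python
import numpy as np

def estimate_beta(roi, pixel_size_mm=0.1):
    """Estimate the power-law exponent beta from P(f) ~ c / f**beta."""
    roi = roi - roi.mean()
    ps = np.abs(np.fft.fftshift(np.fft.fft2(roi))) ** 2   # 2D power spectrum
    ny, nx = roi.shape
    fy = np.fft.fftshift(np.fft.fftfreq(ny, d=pixel_size_mm))
    fx = np.fft.fftshift(np.fft.fftfreq(nx, d=pixel_size_mm))
    fr = np.hypot(*np.meshgrid(fx, fy))                   # radial frequency (1/mm)

    # Radially average the spectrum in logarithmically spaced frequency bins.
    bins = np.logspace(np.log10(fr[fr > 0].min()), np.log10(fr.max()), 20)
    idx = np.digitize(fr.ravel(), bins)
    p_avg, f_avg = [], []
    for b in range(1, len(bins)):
        sel = idx == b
        if sel.any():
            p_avg.append(ps.ravel()[sel].mean())
            f_avg.append(fr.ravel()[sel].mean())

    # beta is the negative slope of log P versus log f.
    slope, _ = np.polyfit(np.log(f_avg), np.log(p_avg), 1)
    return -slope
```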
Dual-energy computed tomography (CT) can improve image quality relative to single-energy CT through decomposition into two or three basis materials and synthesis into virtual monoenergetic images (VMIs). Decomposition into even more materials is possible using a multi-triplet material decomposition (MMD) algorithm, in which a different set of materials (i.e., a "material triplet") is chosen for each voxel. MMD could be particularly useful for certain tasks, such as atherosclerotic plaque risk assessment. However, in its current form, MMD requires manual tuning to optimize its performance in different imaging scenarios. This work aimed to quantitatively explore the sensitivity of the MMD algorithm to CT image noise and initial VMI basis materials. We simulated 80-kVp and 140-kVp CT images of a water cylinder with four inserts (soft tissue, fat, calcium, and iodine) at twenty dose levels (0.2 to 4.0 mGy). The needed input VMIs were generated using initial two-material decomposition into either soft tissue/bone or water/aluminum material pairs. VMIs were then used for MMD into basis images of the four insert materials. We found that the choice of VMI basis materials affects MMD image quality at lower doses; soft tissue/bone performed best for all materials except fat at doses below 1 mGy. Additionally, the image quality benefit of increasing dose plateaus at a certain point, possibly due to the reduction in voxels "jumping" to different material triplets. This sensitivity analysis offers insight into the nuanced effects of MMD input variations and may be useful for jointly optimizing the quality of many basis material images with minimal patient dose.
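The underlying two-material decomposition and VMI synthesis can be sketched as a small per-voxel linear solve; the attenuation coefficients below are made-up placeholders rather than calibrated values, and beam hardening and noise handling are ignored.

```python
import numpy as np

# Hypothetical effective linear attenuation coefficients (1/cm) of the two basis
# materials at the effective energies of the 80-kVp and 140-kVp acquisitions,
# and at the target VMI energy. Real values would come from calibration.
MU = np.array([[0.22, 0.45],     # [mu_tissue(80 kVp),  mu_bone(80 kVp)]
               [0.18, 0.30]])    # [mu_tissue(140 kVp), mu_bone(140 kVp)]
MU_VMI = np.array([0.20, 0.35])  # [mu_tissue(E_vmi),   mu_bone(E_vmi)]

def two_material_vmi(img_80, img_140):
    """Decompose a dual-kVp image pair into basis-material fractions and
    synthesize a virtual monoenergetic image (VMI)."""
    meas = np.stack([img_80.ravel(), img_140.ravel()])   # 2 x Nvox measurements
    fractions = np.linalg.solve(MU, meas)                # per-voxel (a_tissue, a_bone)
    vmi = MU_VMI @ fractions                             # attenuation at the VMI energy
    return fractions.reshape((2,) + img_80.shape), vmi.reshape(img_80.shape)
```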
Recent studies have proposed methods to preserve and enhance signal-detection performance in learning-based CT reconstruction with CNNs. Prior work has focused on optimizing for ideal observer (IO) or Hotelling observer (HO) performance during training. However, the performance of the IO or HO may not correlate well with the performance of human observers on the same task. In this work, we explore modified training procedures to optimize for a variety of model observers, such as the signal-Laplacian observer and the non-prewhitening model observer with an eye filter, which we hypothesize are better proxies for a human observer than the IO or HO. We illustrate the proposed training approach on a CNN model used to reconstruct synthetic sparse-view breast CT data. Our results indicate that the proposed modified training allows one to preserve weak signals in the reconstructions while changing the overall noise characteristics in a way that may be beneficial to human observers.
KEYWORDS: Digital breast tomosynthesis, 3D image processing, Breast, Optical spheres, 3D acquisition, Spherical lenses, Signal detection, 3D vision, Target detection, Computed tomography
Purpose: In digital breast tomosynthesis (DBT), radiologists need to review a stack of 20 to 80 tomosynthesis images, depending upon breast size. This causes a significant increase in reading time. However, it is currently unknown whether there is a perceptual benefit to viewing a mass in the 3D tomosynthesis volume. To answer this question, this study investigated whether adjacent lesion-containing planes provide additional information that aids lesion detection for DBT-like and breast CT-like (bCT) images. Methods: Human reader detection performance was determined for low-contrast targets shown in a single tomosynthesis image at the center of the target (2D) or shown in the entire tomosynthesis image stack (3D). Using simulations, targets were embedded in simulated breast backgrounds, and images were generated using a DBT-like (50-deg angular range) and a bCT-like (180-deg angular range) imaging geometry. Experiments were conducted with spherical and capsule-shaped targets. Eleven readers reviewed 1600 images in two-alternative forced-choice experiments. The area under the receiver operating characteristic curve (AUC) and reading time were computed for the 2D and 3D reading modes for the DBT and bCT imaging geometries and for both target shapes. Results: Spherical lesion detection was higher in 2D mode than in 3D, for both DBT- and bCT-like images (DBT: AUC2D = 0.790, AUC3D = 0.735, P = 0.03; bCT: AUC2D = 0.869, AUC3D = 0.716, P < 0.05), but equivalent for capsule-shaped signals (DBT: AUC2D = 0.891, AUC3D = 0.915, P = 0.19; bCT: AUC2D = 0.854, AUC3D = 0.847, P = 0.88). Average reading time was up to 134% higher for 3D viewing (P < 0.05). Conclusions: For the detection of low-contrast lesions, there is no inherent visual perception benefit to reviewing the entire DBT or bCT stack. The findings of this study could have implications for the development of 2D synthetic mammograms: a single synthesized 2D image designed to include all lesions present in the volume might allow readers to maintain detection performance at a significantly reduced reading time.
KEYWORDS: Signal detection, Interference (communication), Signal to noise ratio, CT reconstruction, Smoothing, Statistical modeling, Performance modeling, Imaging systems, Data modeling, Breast
Recent studies have proposed to optimize deep learning-based CT reconstruction methods for signal detectability performance. However, obtaining objective measures of signal detectability performance of the trained reconstruction networks is challenging due to the non-linear nature of the reconstruction. We propose a simple evaluation metric based on the model observer framework. The metric is based on the performance of a specific linear observer on a signal-known-exactly/background-known-exactly task. The linear observer uses the signal Laplacian as a template, which we hypothesize is a better proxy for a human model observer than the ideal/Hotelling observer. We illustrate that the proposed metric can be used to select training hyper-parameters for a CNN model used to reconstruct synthetic sparse-view breast CT data.
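A minimal sketch of such a metric, assuming paired sets of signal-present and signal-absent reconstructions of the SKE/BKE task are available: the template is the discrete Laplacian of the known signal, and detectability is summarized by the observer SNR of the template responses.

```python
import numpy as np
from scipy.ndimage import laplace

def signal_laplacian_snr(signal, recon_present, recon_absent):
    """SNR of a linear observer whose template is the Laplacian of the known signal.

    signal        : 2D array, the known (noise-free) signal.
    recon_present : array (n_images, H, W) of signal-present reconstructions.
    recon_absent  : array (n_images, H, W) of signal-absent reconstructions.
    """
    template = laplace(signal)                       # discrete Laplacian template
    t_p = np.tensordot(recon_present, template, axes=([1, 2], [0, 1]))
    t_a = np.tensordot(recon_absent, template, axes=([1, 2], [0, 1]))
    # Observer SNR: difference of class means over the pooled standard deviation.
    pooled_var = 0.5 * (t_p.var(ddof=1) + t_a.var(ddof=1))
    return (t_p.mean() - t_a.mean()) / np.sqrt(pooled_var)
```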
Fluoroscopic imaging is utilized to dynamically image a patient's internal anatomy and physiology during an examination. Current methods for the evaluation of fluoroscopic image performance do not challenge systems in real time or with clinically meaningful tasks. This work presents a methodology for the task-specific quantification of clinical fluoroscopy systems' imaging performance through reader assessments of live fluoroscopic images. First, a set of clinically relevant tasks was developed based on the internationally recognized grading scale for vesicoureteric reflux (VUR) in pediatric patients. Tasks were generated to represent VUR grades from 2 to 5 and were printed using iodine-ink 2D printing. Tasks were described by the total number of pages, i.e., total iodine contrast, and the VUR grade of the task itself. In total, 24 combinations of contrast and grade were assessed. Images of each task were taken under three experimental conditions: first, under a high-dose protocol on a flat-panel-detector clinical system; second, under a low-dose protocol on the same flat-panel-detector system; and third, under a comparable high-dose protocol on an image-intensifier clinical system. Readers assessed imaging tasks in the clinical environment in two manners: 1) detection (VUR present or absent), and 2) identification of the VUR grade. The results of the reader study indicate that, after the application of a scoring scheme, a metric quantifying the task performance of fluoroscopy systems can be obtained. The evaluation process outlined in this work will enable a standard mechanism for quantitative comparison across fluoroscopic systems, technologies, and protocols.
KEYWORDS: Signal detection, Signal attenuation, Performance modeling, Breast, Data modeling, CT reconstruction, Signal to noise ratio, Neural networks, Computed tomography, Network architectures
Deep neural networks used for reconstructing sparse-view CT data are typically trained by minimizing a pixel-wise mean-squared error or similar loss function over a set of training images. However, networks trained with such losses are prone to wipe out small, low-contrast features that are critical for screening and diagnosis. To remedy this issue, we introduce a novel training loss inspired by the model observer framework to enhance the detectability of weak signals in the reconstructions. We evaluate our approach on the reconstruction of synthetic sparse-view breast CT data, and demonstrate an improvement in signal detectability with the proposed loss.
In this work, we focus on developing a channelized Hotelling observer (CHO) that estimates ideal linear observer performance for signal detection in images resulting from non-linear image reconstruction in computed tomography. In particular, many options for specifying the channel functions are explored. A hybrid channel model is proposed in which a set of traditional Laguerre-Gauss functions is concatenated with a set of central-pixel functions. This expanded channel set allows the CHO to perform robustly over a wide range of image reconstruction and system parameters. Applying this model observer to parameter determination for the total-variation-constrained least-squares algorithm yields images that favor detection of small, subtle signals.
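The sketch below shows how a hybrid channel set of this general flavor might be assembled and used to compute a channelized Hotelling SNR from sample images; the channel width, the number of channels, and the choice of central-pixel offsets are arbitrary here, and the exact hybrid construction in the paper may differ.

```python
import numpy as np
from scipy.special import eval_laguerre

def lg_channels(size, a, n_channels):
    """Rotationally symmetric Laguerre-Gauss channels on a size x size grid."""
    y, x = np.mgrid[:size, :size] - (size - 1) / 2.0
    r2 = x**2 + y**2
    chans = []
    for j in range(n_channels):
        c = np.exp(-np.pi * r2 / a**2) * eval_laguerre(j, 2 * np.pi * r2 / a**2)
        chans.append((c / np.linalg.norm(c)).ravel())
    return np.array(chans)                           # (n_channels, size*size)

def central_pixel_channels(size, n_pixels=5):
    """Delta-function channels on the few pixels nearest the ROI center."""
    chans = np.zeros((n_pixels, size * size))
    center = (size // 2) * size + size // 2
    offsets = [0, 1, -1, size, -size][:n_pixels]     # center plus 4-neighborhood
    for i, off in enumerate(offsets):
        chans[i, center + off] = 1.0
    return chans

def cho_snr(imgs_present, imgs_absent, channels):
    """Channelized Hotelling observer SNR from sample images of shape (n, size*size)."""
    v_p, v_a = imgs_present @ channels.T, imgs_absent @ channels.T
    dv = v_p.mean(axis=0) - v_a.mean(axis=0)
    k = 0.5 * (np.cov(v_p, rowvar=False) + np.cov(v_a, rowvar=False))
    return float(np.sqrt(dv @ np.linalg.solve(k, dv)))
```

A hybrid set of the kind described in the abstract would then be formed by stacking the two groups, for example np.vstack([lg_channels(64, a=15.0, n_channels=6), central_pixel_channels(64)]), before passing it to cho_snr.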
The purpose of this work is to propose a framework that could help to accelerate the development of task models and figures of merit for fluoroscopy applications. Our final goal is to use this framework to establish an imaging task based on pediatric vesicoureteral reflux (VUR) diagnosis, and to assess a reader study design that mimics contrast medium uptake. The proposed framework is based on task and observer study fine-tuning after consecutive virtual trials. Radiographs of neonates were selected by a radiologist for phantom and observer study development. Ureter depictions of five VUR grades were segmented from published references and used as imaging tasks. A tool to simulate patient+task images was developed based on well-known x-ray imaging models. To validate this tool, two quality assurance phantoms were simulated and compared to actual acquisitions, having as result a good agreement in terms of maximum resolvable line-pair frequency and contrast resolution. In addition, the noise texture and magnitude were very similar. To facilitate virtual trials, a web-based application was developed, which displays simulated images and asks the observer to grade them. Preliminary tests have shown that the application is practical, accessible and provides the needed flexibility for testing different study designs. In conclusion, a framework to facilitate phantom profiling and observer study design has been developed. With this framework it has been possible to simulate and score pediatric VUR diagnostic tasks embedded in realistic anatomical backgrounds, with the goal of developing a study design that can be performed in real time.
Given the wide variety of CT reconstruction algorithms currently available, from filtered back-projection to non-linear iterative algorithms and now even deep-learning approaches, there is a pressing need for reconstruction quality metrics that correlate well with task-specific goals. For detection tasks, metrics based on a model observer framework are an attractive option. In this framework, a reconstruction algorithm is assessed based on how well a statistically optimal "model observer" performs on a signal-present/signal-absent detection task. However, computing exact model observers requires a detailed description of the statistics of the reconstructed images, which are often unknown or computationally intractable to obtain, especially in the case of non-linear reconstruction algorithms. Instead, we study the feasibility of using supervised machine learning approaches to approximate model observers in a CT reconstruction setting. In particular, we show that we can well-approximate the Hotelling observer, i.e., the optimal linear classifier, for a signal-known-exactly/background-known-exactly task by training from labeled training images in the case of FBP reconstruction. We also investigate the feasibility of training multi-layer neural networks to approximate the ideal observer in the case of total-variation-constrained iterative reconstruction. Our results demonstrate that supervised machine learning methods achieve close to ideal performance in both cases.
KEYWORDS: Breast, 3D image processing, Signal detection, 3D modeling, Tissues, Image quality, 3D vision, 3D displays, Computer simulations, Spherical lenses
We investigate whether humans need to be shown the entire image stack (3D) or only the central slice (2D) of the lesion in breast tomosynthesis images in signal-known-exactly detection experiments. A directional small-scale breast tissue model based on random power-law noise was used. Assuming a breast tomosynthesis geometry, the tissue volumes were projected and reconstructed, forming volumes of interest (VOIs). Three different sizes of spheres with blurred edges were used to simulate lesions. The spheres were added to the VOIs to create signal-present VOIs. Signal-present and signal-absent VOIs were presented during two-alternative forced-choice experiments to five human observers in two modes: (i) 3D mode, in which all slices of the VOI were repeatedly displayed in ciné mode; and (ii) 2D mode, in which only the central slice of the reconstructed VOI (where signal-present VOIs contained the center of the spherical lesion) was displayed. The percent correct (PC) of the detection performance of all observers was evaluated. No systematic significant differences in PC were found between 3D and 2D image viewing for this type of background. We plan to investigate this further, along with the development of a model observer that correlates well with human performance in tomosynthesis.
Designing image reconstruction algorithms for Digital Breast Tomosynthesis (DBT) has attracted much attention in recent years as the modality is increasingly employed for mammographic screening. While much recent research on this has focused on iterative image reconstruction, there may still be fundamental aspects of DBT image quality that can be addressed and improved with analytic filtered back-projection (FBP). In particular, we have been investigating conspicuity of fiber-like signals that can model blood vessels, ligaments, or spiculations. The latter structures can indicate a malignant tumor. It is known that the visual appearance of fiber-like signals varies with fiber orientation in the DBT slice images, and recently we have sought to quantify this orientation dependence with simulations involving phantoms with fibers placed at various angles with respect to the direction of the X-ray source travel (DXST). Employing FBP with a standard Hanning filter results in a marked decrease in conspicuity of fibers aligned nearly parallel with the DXST. Employing DBT-specific FBP filters proposed in the literature recovers conspicuity of such fibers. In this work, we propose a different modification to the FBP filter where the standard Hanning filter is combined with the same filter – but rotated 90° – forming a 2D filter. We illustrate the potential advantages of this new FBP filter design for DBT image reconstruction.
Flat-panel detectors remain a new and emerging technology in under-table fluoroscopy systems. This technology is more susceptible than image intensifiers to electronic noise, which degrades image contrast resolution. Compensation for the increased electronic noise is provided through proprietary vendor image processing algorithms. Lacking optimization for pediatric patients, these algorithms can obscure patient anatomy, particularly in neonates with low native anatomic contrast from bony structures, which serve as landmarks during fluoroscopic procedures. Existing phantoms do not adequately mimic neonate anatomy, making assessment and optimization of image quality for these patients difficult if not impossible. This work presents a method to inexpensively print iodine-based anthropomorphic phantoms derived from patient radiographs with sufficient anatomic detail to assess system image quality. First, the attenuation of iodine ink densities (μt) was correlated to a standard pixel-value grayscale map. Next, for proof of principle, radiographs of an anthropomorphic chest phantom were developed into a series of iodine-ink-printed sheets. Sheets were stacked to build a compact 2D phantom matching the x-ray attenuation of the original radiographs. The iodine-ink-printed phantom was imaged, and attenuation values per anatomical region of interest were compared. This study provides the fundamentals and techniques of phantom construction, enabling generation of anatomically realistic phantoms for a variety of patient age and size groups by use of clinical radiographs. Future studies will apply these techniques to generate neonatal phantoms from radiographs. These phantoms provide realistic imaging challenges to enable optimization of image quality in fluoroscopy and other projection-based x-ray modalities.
Digital breast tomosynthesis (DBT) is an emerging semi-tomographic modality that is gaining widespread popularity for mammographic screening. As the modality has only recently, in the last 10 years, been employed in the clinic, there is much variation amongst vendors in DBT scan configuration and, accordingly, in image reconstruction algorithms. Recent research has shown interest in developing iterative image reconstruction (IIR) based on gradient-sparsity regularization and on including physical modeling of detector response and noise properties. Due to the various motivations in designing IIR algorithms, there can be a great variety of optimization problems of interest. In this work, we employ a general optimization problem form in which the objective function is a convex data discrepancy term and all other aspects of the imaging model are formulated as convex constraints. This general form of optimization can be efficiently solved using the primal-dual algorithm developed by Chambolle and Pock. We use the general optimization formulation together with this solver to prototype alternative imaging models for DBT; a least-squares data discrepancy with a modified total variation (TV) constraint is shown to be of particular interest in preliminary results.
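The primal-dual solver referenced here is easiest to see on a small example; the sketch below applies the Chambolle-Pock iteration to TV-regularized denoising, a much simpler problem than the constrained DBT model in this work, but with the same structure of alternating proximal updates on primal and dual variables.

```python
import numpy as np

def grad(u):
    """Forward-difference gradient with Neumann boundary conditions."""
    gx, gy = np.zeros_like(u), np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    """Divergence, the negative adjoint of grad."""
    dx, dy = np.zeros_like(px), np.zeros_like(py)
    dx[0, :], dx[1:-1, :], dx[-1, :] = px[0, :], px[1:-1, :] - px[:-2, :], -px[-2, :]
    dy[:, 0], dy[:, 1:-1], dy[:, -1] = py[:, 0], py[:, 1:-1] - py[:, :-2], -py[:, -2]
    return dx + dy

def chambolle_pock_tv_denoise(b, lam=0.1, n_iter=200):
    """Solve min_x 0.5*||x - b||^2 + lam*TV(x) with the Chambolle-Pock algorithm."""
    tau = sigma = 1.0 / np.sqrt(8.0)        # step sizes; ||grad||^2 <= 8 in 2D
    x, x_bar = b.copy(), b.copy()
    px, py = np.zeros_like(b), np.zeros_like(b)
    for _ in range(n_iter):
        # Dual update: gradient ascent, then project each pixel's dual vector
        # onto the l2 ball of radius lam.
        gx, gy = grad(x_bar)
        px, py = px + sigma * gx, py + sigma * gy
        scale = np.maximum(1.0, np.hypot(px, py) / lam)
        px, py = px / scale, py / scale
        # Primal update: proximal step of the quadratic data term.
        x_old = x
        x = (x + tau * div(px, py) + tau * b) / (1.0 + tau)
        # Over-relaxation step.
        x_bar = 2.0 * x - x_old
    return x
```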
Fiber-like features are an important aspect of breast imaging. Vessels and ducts are present in all breast images, and spiculations radiating from a mass can indicate malignancy. Accordingly, fiber objects are one of the three types of signals used in the American College of Radiology digital mammography (ACR-DM) accreditation phantom. Our work focuses on the image properties of fiber-like structures in digital breast tomosynthesis (DBT) and how image reconstruction can affect their appearance. The impact of DBT image reconstruction algorithm and regularization strength on the conspicuity of fiber-like signals of various orientations is investigated in simulation. A metric is developed to characterize this orientation dependence and allow for quantitative comparison of algorithms and associated parameters in the context of imaging fiber signals. The imaging properties of fibers, characterized in simulation, are then demonstrated in detail with physical DBT data of the ACR-DM phantom. The characterization of imaging of fiber signals is used to explain features of an actual clinical DBT case. For the algorithms investigated, at low regularization setting, the results show a striking variation in conspicuity as a function of orientation in the viewing plane. In particular, the conspicuity of fibers nearly aligned with the plane of the x-ray source trajectory is decreased relative to more obliquely oriented fibers. Increasing regularization strength mitigates this orientation dependence at the cost of increasing depth blur of these structures.
We characterize the detectability of fiber-like signals in digital breast tomosynthesis (DBT) for linear iterative image reconstruction (IIR) algorithms. The detectability is investigated as a function of signal orientation and IIR regularization strength. The detectability is computed with a region-of-interest (ROI) Hotelling observer (HO) and applied to two linear IIR algorithms. Trends in detectability are compared with conspicuity of signals reconstructed in both simulation and real data studies. A common trend is observed with both algorithms in which signals oriented parallel to the detector and the plane containing the source-trajectory have lower detectability than their orthogonal counterparts at low regularization strengths. The orientation dependence is gradually reduced with increasing regularization strength. These trends in detectability are seen to match well with trends in the conspicuity of reconstructed signals in both simulation and real data studies.
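As a reminder of how Hotelling detectability is computed once sample ROI reconstructions are available, a sketch of the (ROI-restricted) Hotelling SNR follows; the ridge term is an assumption added because sample covariance matrices from limited data are often ill-conditioned, and it is not part of the ROI-HO definition used in the study.

```python
import numpy as np

def hotelling_snr(roi_present, roi_absent, ridge=1e-6):
    """Hotelling observer SNR restricted to an ROI.

    roi_present, roi_absent : arrays (n_samples, n_roi_pixels) of reconstructed
        ROI pixel values with and without the signal.
    """
    dg = roi_present.mean(axis=0) - roi_absent.mean(axis=0)      # mean signal difference
    k = 0.5 * (np.cov(roi_present, rowvar=False) + np.cov(roi_absent, rowvar=False))
    k += ridge * np.trace(k) / k.shape[0] * np.eye(k.shape[0])   # mild regularization
    w = np.linalg.solve(k, dg)                                   # Hotelling template
    return float(np.sqrt(dg @ w))
```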
KEYWORDS: Breast, Image segmentation, Digital breast tomosynthesis, Digital x-ray imaging, X-ray imaging, X-rays, Sensors, Signal detection, Computer simulations, Yield improvement
In digital breast tomosynthesis (DBT), the reconstruction is calculated from x-ray projection images acquired over a small range of angles. One step in the reconstruction process is to identify the pixels that fall outside the shadow of the breast, to segment the breast from the background (air). In each projection, rays are back-projected from these pixels to the focal spot. All voxels along these rays are identified as air. By combining these results over all projections, a breast outline can be determined for the reconstruction. This paper quantifies the accuracy of this breast segmentation strategy in DBT. In this study, a physical phantom modeling a breast under compression was analyzed with a prototype next-generation tomosynthesis (NGT) system described in previous work. Multiple wires were wrapped around the phantom. Since the wires are thin and high contrast, their exact location can be determined from the reconstruction. Breast parenchyma was portrayed outside the outline defined by the wires. Specifically, the size of the phantom was overestimated along the posteroanterior (PA) direction; i.e., perpendicular to the plane of conventional source motion. To analyze how the acquisition geometry affects the accuracy of the breast outline segmentation, a computational phantom was also simulated. The simulation identified two ways to improve the segmentation accuracy; either by increasing the angular range of source motion laterally or by increasing the range in the PA direction. The latter approach is a unique feature of the NGT design; the advantage of this approach was validated with our prototype system.
For computed tomography (CT) imaging, it is important that the imaging protocols be optimized so that the scan is performed at the lowest dose that yields diagnostic images in order to minimize patients' exposure to ionizing radiation. To accomplish this, it is important to verify that the image quality of the acquired scan is sufficient for the diagnostic task at hand. Since the image quality strongly depends on the characteristics of both the patient and the imager, which are highly variable, using simplistic parameters like noise to determine the quality threshold is challenging. In this work, we apply deep learning using a convolutional neural network (CNN) to predict whether CT scans meet the minimal image-quality threshold for diagnosis. The dataset consists of 74 cases of high-resolution axial CT scans acquired for the diagnosis of interstitial lung disease. The quality of the images is rated by a radiologist. While the number of cases is relatively small for deep learning tasks, each case consists of more than 200 slices, comprising a total of 21,257 images. The deep learning involves fine-tuning of a pre-trained VGG19 network, which results in an accuracy of 0.76 (95% CI: 0.748 – 0.773) and an AUC of 0.78 (SE: 0.01). While the number of total images is relatively large, the result is still significantly limited by the small number of cases. Despite the limitation, this work demonstrates the potential for using deep learning to characterize the diagnostic quality of CT scans.
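A minimal PyTorch/torchvision sketch of the fine-tuning setup described above (ImageNet-pretrained VGG19 with a new two-class head); freezing the convolutional features, the weights API (recent torchvision), and the binary output are assumptions rather than the authors' exact configuration.

```python
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained VGG19 and adapt it to a binary
# "diagnostic / non-diagnostic" image-quality classification task.
model = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1)

# Optionally freeze the convolutional feature extractor and train only the head.
for param in model.features.parameters():
    param.requires_grad = False

# Replace the final 1000-class ImageNet layer with a 2-class output.
model.classifier[6] = nn.Linear(in_features=4096, out_features=2)
```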
In this work we investigate an efficient implementation of a region-of-interest (ROI) based Hotelling observer (HO) in the context of parameter optimization for detection of a rod signal at two orientations in linear iterative image reconstruction for DBT. Our preliminary results suggest that ROI-HO performance trends may be efficiently estimated by modeling only the 2D plane perpendicular to the detector and containing the X-ray source trajectory. In addition, the ROI-HO is seen to exhibit orientation dependent trends in detectability as a function of the regularization strength employed in reconstruction. To further investigate the ROI-HO performance in larger 3D system models, we present and validate an iterative methodology for calculating the ROI-HO. Lastly, we present a real data study investigating the correspondence between ROI-HO performance trends and signal conspicuity. Conspicuity of signals in real data reconstructions is seen to track well with trends in ROI-HO detectability. In particular, we observe orientation dependent conspicuity matching the orientation dependent detectability of the ROI-HO.
We proposed the neutrosophic approach for segmenting breast lesions in breast computed tomography (bCT) images. The neutrosophic set considers the nature and properties of neutrality (or indeterminacy). We considered the image noise as an indeterminate component while treating the breast lesion and other breast areas as true and false components. We iteratively smoothed and contrast-enhanced the image to reduce the noise level of the true set. We then applied an existing algorithm for bCT images, RGI segmentation, to the resulting noise-reduced image to segment the breast lesions. We compared the segmentation performance of the proposed method (named NS-RGI) to that of the regular RGI segmentation. We used 122 breast lesions (44 benign and 78 malignant) from 111 non-contrast-enhanced bCT cases. We measured the segmentation performances of the NS-RGI and the RGI using the Dice coefficient. The average Dice values of the NS-RGI and RGI were 0.82 and 0.80, respectively, and their difference was statistically significant (p-value = 0.004). We conducted a subsequent feature analysis on the resulting segmentations. The classifier performance for the NS-RGI (AUC = 0.80) improved over that of the RGI (AUC = 0.69, p-value = 0.006).
We tested the agreement of radiologists' rankings of different reconstructions of breast computed tomography images based on their diagnostic (classification) performance and on their subjective image-quality assessments. We used 102 pathology-proven cases (62 malignant, 40 benign) and an iterative image reconstruction (IIR) algorithm to obtain 24 reconstructions per case with different image appearances. Using image feature analysis, we selected three IIRs and one clinical reconstruction and 50 lesions. The reconstructions produced a range of image quality from smooth/low-noise to sharp/high-noise, which corresponded to a range in classifier performance with AUCs of 0.62 to 0.96. Six experienced Mammography Quality Standards Act (MQSA) radiologists rated the likelihood of malignancy for each lesion. We conducted an additional reader study with the same radiologists and a subset of 30 lesions, in which radiologists ranked each reconstruction according to their preference. There was disagreement among the six radiologists on which reconstruction produced images with the highest diagnostic content, but they preferred the mid-sharp/mid-noise image appearance over the others. However, the reconstruction they preferred most did not match the one on which they performed best. Due to these disagreements, it may be difficult to develop a single image-based model observer that is representative of a population of radiologists for this particular imaging task.
Photon-counting x-ray detectors (PCDs) offer great potential for energy-resolved imaging that would allow for promising applications such as low-dose imaging, quantitative contrast-enhanced imaging, and spectral tissue decomposition. However, physical processes in photon-counting detectors produce undesirable effects like charge sharing and pulse pile-up that can adversely affect the imaging application. Existing detector response models for photon-counting detectors have mainly used either X-ray fluorescence or radionuclides to calibrate the detector and estimate the model parameters. The purpose of our work was to apply one such model to our photon-counting detector and to determine the model parameters from transmission measurements. This model uses a polynomial fit to describe the charge-sharing response and energy resolution of the detector, as well as an aluminum filter to model the modification of the incident X-ray spectrum. Our experimental setup includes a Si-based photon-counting detector used to generate transmission spectra from multiple materials at varying thicknesses. Materials were selected so as to exhibit k-edges within the 15-35 keV region. We find that transmission measurements can be used to successfully model the detector response. Ultimately, this approach could be used for practical detector energy calibration. A fully validated detector response model will allow for exploration of imaging applications for a given detector.
We proposed the neutrosophic approach for segmenting breast lesions in breast computed tomography (bCT) images. The neutrosophic set (NS) considers the nature and properties of neutrality (or indeterminacy), which is neither true nor false. We considered the image noise as an indeterminate component, while treating the breast lesion and other breast areas as true and false components. We first transformed the image into the NS domain, where each voxel is described by its membership in the True, Indeterminate, and False sets. The operations α-mean, β-enhancement, and γ-plateau iteratively smooth and contrast-enhance the image to reduce the noise level of the true set. Once the true image no longer changes, we applied an existing algorithm for bCT images, RGI segmentation, to the resulting image to segment the breast lesions. We compared the segmentation performance of the proposed method (named NS-RGI) to that of the regular RGI segmentation. We used a total of 122 breast lesions (44 benign, 78 malignant) from 123 non-contrast bCT cases. We measured the segmentation performances of the NS-RGI and the RGI using the Dice coefficient. The average Dice value of the NS-RGI was 0.82 (STD: 0.09), while that of the RGI was 0.80 (STD: 0.12). The difference between the two Dice values was statistically significant (paired t-test, p-value = 0.0007). We conducted a subsequent feature analysis on the resulting segmentations. The classifier performance for the NS-RGI (AUC = 0.80) improved over that of the RGI (AUC = 0.69, p-value = 0.006).
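A simplified sketch of the neutrosophic transform and the α-mean step, assuming a common formulation in which the true set is the normalized local-mean intensity, indeterminacy is derived from local deviation, and highly indeterminate voxels are replaced by their local mean; the β-enhancement, γ-plateau, and convergence test are omitted, and the threshold and window size below are arbitrary.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def neutrosophic_alpha_mean(img, alpha=0.85, window=5):
    """One alpha-mean iteration of a (simplified) neutrosophic image transform."""
    local_mean = uniform_filter(img.astype(float), size=window)
    # True set: local-mean image normalized to [0, 1].
    t = (local_mean - local_mean.min()) / (np.ptp(local_mean) + 1e-12)
    # Indeterminacy set: normalized deviation of each voxel from its local mean,
    # used here as a proxy for noise.
    delta = np.abs(img - local_mean)
    i = (delta - delta.min()) / (np.ptp(delta) + 1e-12)
    # alpha-mean: voxels judged indeterminate are replaced by the local mean of T.
    t_alpha = np.where(i >= alpha, uniform_filter(t, size=window), t)
    return t_alpha, i
```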
The purpose of this study was to determine radiologists' diagnostic performances on different image reconstruction algorithms that could be used to optimize image-based model observers. We included a total of 102 pathology-proven breast computed tomography (CT) cases (62 malignant). An iterative image reconstruction (IIR) algorithm was used to obtain 24 reconstructions with different image appearances for each case. Using quantitative image feature analysis, three IIRs and one clinical reconstruction of 50 lesions (25 malignant) were selected for a reader study. The reconstructions spanned a range of image appearance from smooth/low-noise to sharp/high-noise. The trained classifiers' AUCs on these reconstructions ranged from 0.61 (for the smooth reconstruction) to 0.95 (for the sharp reconstruction). Six experienced MQSA radiologists read 200 cases (50 lesions times 4 reconstructions) and provided the likelihood of malignancy for each lesion. Radiologists' diagnostic performances (AUC) ranged from 0.70 to 0.89. However, there was no agreement among the six radiologists on which image appearance yielded the highest diagnostic performance: two radiologists indicated that the sharper image appearance was diagnostically superior, another two indicated that the smoother image appearance was diagnostically superior, and the remaining two indicated that all image appearances were diagnostically similar. Due to the poor agreement among radiologists on the diagnostic ranking of images, it may not be possible to develop a model observer for this particular imaging task.
Dedicated breast computed tomography (bCT) is currently being studied as a potential screening method for breast cancer. The X-ray exposure is set low to achieve an average glandular dose comparable to that of mammography, yielding projection data that contains high levels of noise. Iterative image reconstruction (IIR) algorithms may be well-suited for the system since they potentially reduce the effects of noise in the reconstructed images. However, IIR outcomes can be difficult to control since the algorithm parameters do not directly correspond to the image properties. Also, IIR algorithms are computationally demanding and have optimal parameter settings that depend on the size and shape of the breast and positioning of the patient. In this work, we design an efficient IIR algorithm with meaningful parameter specifications and that can be used on a large, diverse sample of bCT cases. The flexibility and efficiency of this method comes from having the final image produced by a linear combination of two separately reconstructed images - one containing gray level information and the other with enhanced high frequency components. Both of the images result from few iterations of separate IIR algorithms. The proposed algorithm depends on two parameters both of which have a well-defined impact on image quality. The algorithm is applied to numerous bCT cases from a dedicated bCT prototype system developed at University of California, Davis.
We report on the development of silicon strip detectors for energy-resolved clinical mammography. Typically, X-ray-integrating detectors based on scintillating cesium iodide, CsI(Tl), or amorphous selenium (a-Se) are used in most commercial systems. Recently, mammography instrumentation based on photon-counting Si strip detectors has been introduced. The required performance for mammography in terms of output count rate, spatial resolution, and dynamic range must be obtained with a sufficient field of view for the application, thus requiring the tiling of pixel arrays and particular scanning techniques. Room-temperature Si strip detectors, operating as direct-conversion x-ray sensors, can provide the required speed when connected to application-specific integrated circuits (ASICs) operating at fast peaking times with multiple fixed thresholds per pixel, provided that the sensors are designed for rapid signal formation across the X-ray energy range of the application. We present our methods and results from the optimization of Si-strip detectors for contrast-enhanced spectral mammography. We describe the method being developed for quantifying iodine contrast using the energy-resolved detector with fixed thresholds. We demonstrate the feasibility of the method by scanning an iodine phantom with clinically relevant contrast levels.
KEYWORDS: Digital breast tomosynthesis, Optical filters, Image restoration, Image resolution, Tissues, Digital filtering, Mammography, Matrices, Image filtering, Data modeling
Digital breast tomosynthesis (DBT) is currently enjoying tremendous growth in its application to screening for breast cancer. This is because it addresses a major weakness of mammographic projection imaging; namely, a cancer can be hidden by overlapping fibroglandular tissue structures or the same normal structures can mimic a malignant mass. DBT addresses these issues by acquiring few projections over a limited angle scanning arc that provides some depth resolution. As DBT is a relatively new device, there is potential to improve its performance significantly with improved image reconstruction algorithms. Previously, we reported a variation of adaptive steepest descent - projection onto convex sets (ASD-POCS) for DBT, which employed a finite differencing filter to enhance edges for improving visibility of tissue structures and to allow for volume-of-interest reconstruction. In the present work we present a singular value decomposition (SVD) analysis to demonstrate the gain in depth resolution for DBT afforded by use of the finite differencing filter.
Image reconstruction algorithms for breast CT must deal with truncated projections and high noise levels. Recently, we have been investigating a design of iterative image reconstruction algorithms that employ a differentiation filter on the projection data and estimated projections. The extra processing step can potentially reduce the impact of artifacts due to projection truncation in addition to enhancing edges in the reconstructed volumes. The edge enhancement can improve visibility of various tissue structures. Previously, this idea has been incorporated in an approximate solver of the associated optimization problem. In the present work, we present reconstructed volumes from clinical breast CT data that result from accurate solution of this optimization problem. Furthermore, we employ singular value decomposition (SVD) to help determine filter parameters and to interpret the properties of the reconstructed volumes.
Evaluation of segmentation algorithms usually involves comparisons of segmentations to gold-standard delineations without regard to the ultimate medical decision-making task. We compare two segmentation evaluation methods, a Dice similarity coefficient (DSC) evaluation and a diagnostic classification task-based evaluation, using lesions from breast computed tomography. In our investigation, we use results from two previously developed lesion-segmentation algorithms: a global active contour (GAC) model and a global-with-local-aspects active contour (GLAC) model. Although similar DSC values were obtained (0.80 versus 0.77), we show that the GLAC model, as compared with the GAC model, yields significantly improved classification performance in terms of area under the receiver operating characteristic (ROC) curve in the task of distinguishing malignant from benign lesions [area under the ROC curve (AUC) = 0.78 compared to 0.63, p ≪ 0.001]. This is mainly because the GLAC model yields the more detailed information required in the calculation of morphological features. Based on our findings, we conclude that the DSC metric alone is not sufficient for evaluating lesion segmentations in computer-aided diagnosis tasks.
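For reference, the DSC used in this comparison reduces to a few lines for binary masks on a common grid:

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient, 2|A ∩ B| / (|A| + |B|), for boolean masks."""
    mask_a, mask_b = np.asarray(mask_a, bool), np.asarray(mask_b, bool)
    intersection = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * intersection / (mask_a.sum() + mask_b.sum())
```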
One of the challenges for iterative image reconstruction (IIR) is that such algorithms solve an imaging model implicitly, requiring a complete representation of the scanned subject within the viewing domain of the scanner. This requirement can place a prohibitively high computational burden for IIR applied to x-ray computed tomography (CT), especially when high-resolution tomographic volumes are required. In this work, we aim to develop an IIR algorithm for direct region-of-interest (ROI) image reconstruction. The proposed class of IIR algorithms is based on an optimization problem that incorporates a data fidelity term, which compares a derivative of the estimated data with the available projection data. In order to characterize this optimization problem, we apply it to computer-simulated two-dimensional fan-beam CT data, using both ideal noiseless data and realistic data containing a level of noise comparable to that of the breast CT application. The proposed method is demonstrated for both complete field-of-view and ROI imaging. To demonstrate the potential utility of the proposed ROI imaging method, it is applied to actual CT scanner data.
We present and evaluate a method for the three-dimensional (3-D) segmentation of breast masses on dedicated breast computed tomography (bCT) and automated 3-D breast ultrasound images. The segmentation method, refined from our previous segmentation method for masses on contrast-enhanced bCT, includes two steps: (1) initial contour estimation and (2) active contour-based segmentation to further evolve and refine the initial contour by adding a local energy term to the level-set equation. Segmentation performance was assessed in terms of Dice coefficients (DICE) for 129 lesions on noncontrast bCT, 38 lesions on contrast-enhanced bCT, and 98 lesions on 3-D breast ultrasound (US) images. For bCT, DICE values of 0.82 and 0.80 were obtained on contrast-enhanced and noncontrast images, respectively. The improvement in segmentation performance with respect to that of our previous method was statistically significant (p=0.002). Moreover, segmentation appeared robust with respect to the presence of glandular tissue. For 3-D breast US, the DICE value was 0.71. Hence, our method obtained promising results for both 3-D imaging modalities, laying a solid foundation for further quantitative image analysis and potential future expansion to other 3-D imaging modalities.
KEYWORDS: Digital breast tomosynthesis, Reconstruction algorithms, Tissues, Image restoration, X-rays, Image filtering, Data modeling, Image enhancement, Tomography, Algorithm development
We design an iterative image reconstruction (IIR) algorithm for enhancing tissue-structure contrast. The algorithm takes advantage of a data fidelity term that compares the derivative of the DBT projections with the derivative of the estimated projections. This derivative data fidelity is sensitive to the edges of tissue-structure projections, and as a consequence minimizing the corresponding data-error term brings out structure information in the reconstructed volumes. The method has the practical advantages that few iterations are required and that direct region-of-interest (ROI) reconstruction is possible with the proposed derivative data fidelity term. Both of these advantages reduce the computational burden of the IIR algorithm and potentially make it feasible for clinical application. The algorithm is demonstrated on clinical DBT data.
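The derivative data-fidelity term is straightforward to write down; a sketch assuming projections stored as (n_views, n_rows, n_cols) arrays, with the derivative taken as a finite difference along one detector coordinate (which axis is differentiated, and whether any smoothing is applied, are assumptions here).

```python
import numpy as np

def derivative_data_fidelity(est_proj, meas_proj, axis=-1):
    """Squared-error data fidelity between derivatives of estimated and measured projections.

    est_proj, meas_proj : arrays of shape (n_views, n_rows, n_cols).
    axis : detector axis along which the finite-difference derivative is taken.
    """
    d_est = np.diff(est_proj, axis=axis)
    d_meas = np.diff(meas_proj, axis=axis)
    return 0.5 * np.sum((d_est - d_meas) ** 2)
```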
Automatically acquired and reconstructed 3D breast ultrasound images allow radiologists to detect and evaluate breast lesions in 3D. However, assessing potential cancers in 3D ultrasound can be difficult and time consuming. In this study, we evaluate a 3D lesion segmentation method, which we had previously developed for breast CT, and investigate its robustness on lesions in 3D breast ultrasound images. Our dataset includes 98 3D breast ultrasound images obtained on an ABUS system from 55 patients containing 64 cancers. Cancers depicted on 54 US images had been clinically interpreted as negative on screening mammography, and 44 had been clinically visible on mammography. All were from women with breast density BI-RADS 3 or 4. Tumor centers and margins were indicated and outlined by radiologists. Initial RGI-eroded contours were automatically calculated and served as input to the active contour segmentation algorithm, yielding the final lesion contour. Tumor segmentation was evaluated by determining the overlap ratio (OR) between computer-determined and manually drawn outlines. The resulting average overlap ratios on coronal, transverse, and sagittal views were 0.60 ± 0.17, 0.57 ± 0.18, and 0.58 ± 0.17, respectively. All OR values were significantly higher than 0.4, which is deemed "acceptable". Within the groups of mammogram-negative and mammogram-positive cancers, the overlap ratios were 0.63 ± 0.17 and 0.56 ± 0.16, respectively, on the coronal views, with similar results on the other views. The segmentation performance was not found to be correlated with tumor size. These results indicate robustness of the 3D lesion segmentation technique in multi-modality 3D breast imaging.
Tomosynthesis produces three-dimensional images of an object with non-isotropic resolution. Tomosynthesis images are typically read by human observers in a stack viewing mode, displaying planes through the tomosynthesis volume. The purpose of this study was to investigate whether human performance in a signal-known-exactly (SKE) detection task improves when the entire tomosynthesis volume is available to the observer, compared to displaying a single plane through the signal center. The goal of this study was to improve understanding of human performance in order to aid development of observer models for tomosynthesis.

Human performance was measured using sequential 2-alternative forced-choice experiments. In each trial, the observer was first asked to select the signal-present ROI based on a single 2D tomosynthesis plane. Then, scrolling was enabled and the observer was able to select the signal-present ROI based on knowledge of the entire volume. The number of correct decisions for 2D and 3D viewing was recorded, as was the number of trials for which a score increase or decrease occurred between the 2D and 3D readings. Test images consisted of tomosynthesis reconstructions of simulated breast tissue, where breast tissue was modeled as binarized power-law noise. Tomosynthesis reconstructions of designer nodules of r = 250 μm, r = 1 mm, and r = 4 mm were added to the structured backgrounds. For each signal size, observers scored 256 trials, with the signal amplitude set so that the proportion of correct answers in the single-slice mode was 90%.

For two observers, a slight increase in performance was found when adjacent tomosynthesis slices were displayed, for the two larger signals; statistical significance could not be established. The number of decision changes was analyzed for each observer. For these two observers, the number of decision changes that led to a score increase or decrease was outside the 95% confidence interval of the decision change being random, indicating that for these two observers displaying the tomosynthesis stack did boost performance. For the other two observers, decision changes that increased or decreased the score were within the 95% confidence interval of guessing, indicating that the decision changes were due to a satisfaction-of-search effect. However, the results also indicate that the performance increase is small and that the majority of the information appears to be contained in the tomosynthesis slice that corresponds to the center of the lesion.
Dedicated breast CT (bCT) is an emerging technology that produces 3D images of the breast, thus allowing radiologists to detect and evaluate breast lesions in 3D. However, assessing potential cancers in the bCT volume can prove time consuming and difficult. Thus, we are developing automated 3D lesion segmentation methods to aid in the interpretation of bCT images. Based on previous studies using a 3D radial-gradient index (RGI) method [1], we are investigating whether active contour segmentation can be applied in 3D to capture additional details of the lesion margin.
Our data set includes 40 contrast-enhanced bCT scans. Based on a radiologist-marked lesion center for each mass, an initial RGI contour is obtained that serves as the input to an active contour segmentation method. In this study, active contour level-set segmentation, an iterative segmentation technique, is extended to 3D. Three stopping criteria are compared, based on 1) the change of volume (ΔV/V), 2) the mean value of the increased volume at each iteration (dμ/dt), and 3) the changing rate of intensity inside and outside the lesion (Δvw).
Lesion segmentation was evaluated by determining the overlap ratio between computer-determined segmentations and manually-drawn lesion outlines. For a given lesion, the overlap ratio was averaged across coronal, sagittal, and axial planes. The average overlap ratios for the three stopping criteria were found to be 0.66 (ΔV/V), 0.68 (dμ/dt), 0.69 (Δvw).
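The three stopping criteria are simple to monitor once a binary mask is available at each iteration; a sketch assuming consecutive boolean segmentations and the bCT volume, where the exact form of the inside/outside intensity criterion (Δvw) is an assumption.

```python
import numpy as np

def stopping_metrics(img, prev_mask, mask):
    """Quantities monitored to stop an iterative 3D active-contour segmentation."""
    v_prev, v_curr = prev_mask.sum(), mask.sum()
    dv_over_v = (v_curr - v_prev) / max(v_prev, 1)             # relative volume change
    added = np.logical_and(mask, np.logical_not(prev_mask))    # voxels gained this iteration
    dmu_dt = img[added].mean() if added.any() else 0.0         # mean intensity of added voxels
    # One possible reading of the inside/outside criterion: the contrast between
    # mean intensity inside and outside the current mask, tracked across iterations.
    contrast = img[mask].mean() - img[np.logical_not(mask)].mean()
    return dv_over_v, dmu_dt, contrast
```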
KEYWORDS: Breast, Digital breast tomosynthesis, Radiography, Statistical analysis, Medical imaging, Radiology, Physics, Current controlled current source, Image segmentation, Optical inspection
Normal mammographic backgrounds have power spectra that can be described using a power law P(f) = c/f^β, where β ranges from 1.5 to 4.5. Anatomic noise can be the dominant noise source in a radiograph. Many researchers are characterizing anatomic noise by β, which can be measured from an image. We investigated the effect of sampling distance, offset, and region of interest (ROI) size on β. We calculated β for tomosynthesis projection-view and reconstructed images, and we found that ROI size affects the value of β. We evaluated four different square ROI sizes (1.28, 2.56, 3.2, and 5.12 cm), and we found that the larger ROI sizes yielded larger β values in the projection images.

The β values change rapidly across a single projection view; however, despite the variation across the breast, different sampling schemes (which include a variety of sampling distances and offsets) produced average β values with less than 5% variation. The particular location and number of samples used to calculate β does not matter as long as the whole image is covered, but the size of the ROI must be chosen carefully.
KEYWORDS: Image segmentation, Digital breast tomosynthesis, 3D image processing, Breast, Digital filtering, Image processing algorithms and systems, Computed tomography, Spherical lenses, Tissues, Image filtering
Recently, digital breast tomosynthesis (DBT) and breast CT (BCT) have been developed for breast imaging. Since each modality produces a fundamentally different representation of the breast volume, our goal was to investigate whether a 3D segmentation algorithm for breast masses could be applied to both DBT and BCT images. A secondary goal of this study was to investigate a simplified method for comparing manual outlines to a computer segmentation.

The seeded mass-lesion segmentation algorithm is based on maximizing the radial gradient index (RGI) along a constrained region contour. In DBT, the constraint function was a prolate spherical Gaussian, with a larger FWHM along the depth direction where the resolution is low, while it was a spherical Gaussian for BCT. For DBT, manual lesion outlines were obtained in the in-focus plane of the lesion, which was used to compute the overlap ratio with the computer segmentation. For BCT, lesions were manually outlined in three orthogonal planes, and the average overlap ratio from the three planes was computed.

In DBT, 81% of all lesions were segmented at an overlap ratio of 0.4 or higher, based on manual outlines in one slice through the lesion center. In BCT, 93% of all segmentations achieved an average overlap ratio of 0.4, based on the manual outlines in three orthogonal planes. Our results indicate that mass lesions in both BCT and DBT images can be segmented with the proposed 3D segmentation algorithm by selecting an appropriate set of parameters and after images have undergone specific pre-processing.
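The radial gradient index driving this segmentation can be sketched compactly for a 2D slice: it measures how well the image gradient on a candidate region's boundary aligns with the radial direction from the seed point. The constraint weighting by the prolate/spherical Gaussian is omitted here, and the sign convention (for a lesion brighter than its surroundings the gradient points inward) may need to be flipped depending on the definition used.

```python
import numpy as np
from scipy.ndimage import binary_erosion

def radial_gradient_index(img, region, seed):
    """Radial gradient index (RGI) of a candidate 2D region around a seed point.

    img    : 2D image slice.
    region : boolean mask of the candidate segmentation.
    seed   : (row, col) coordinates of the lesion seed.
    """
    gy, gx = np.gradient(img.astype(float))
    # Boundary pixels of the candidate region (mask minus its erosion).
    boundary = np.logical_and(region, np.logical_not(binary_erosion(region)))
    rows, cols = np.nonzero(boundary)
    # Unit radial vectors pointing outward from the seed.
    ry, rx = rows - seed[0], cols - seed[1]
    norm = np.hypot(ry, rx) + 1e-12
    # Gradient component along the outward radial direction, normalized by the
    # total gradient magnitude on the boundary.
    radial_dot = (gy[rows, cols] * ry + gx[rows, cols] * rx) / norm
    grad_mag = np.hypot(gy[rows, cols], gx[rows, cols])
    return float(radial_dot.sum() / (grad_mag.sum() + 1e-12))
```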
KEYWORDS: Breast, Performance modeling, 3D modeling, Signal detection, Breast imaging, Tissues, Sensors, Data modeling, 3D image processing, Statistical modeling
Breast tomosynthesis is a novel modality for breast imaging that aims to provide partial depth resolution of the tissue structure. In order to optimize tomosynthesis acquisition and reconstruction parameters, it is necessary to model the structured breast background. The purpose of this work was to investigate whether filtered noise could be used as a structure surrogate.

Human performance in a SKE detection task was determined through 2-AFC experiments using tomosynthesis backgrounds extracted from 55 normal breasts. Mathematically defined lesions were projected and reconstructed using the same acquisition and reconstruction parameters as the clinical data. Signal diameters were 0.05, 0.2, and 0.8 cm. Performance in the center projection as well as in a reconstructed slice through the signal center was determined. The gray-scale volume was binarized, and attenuation coefficients of adipose or fibroglandular tissue were assigned to the voxels. This volume was then projected and reconstructed, and the performance of a pre-whitening observer was computed for the slice and center projections.

Human performance in clinical backgrounds was predicted by a pre-whitening observer model, both in projection images and in reconstructed slices. This indicates that pre-whitening observer performance is a good predictor of human performance and can be used to predict human performance in the simulated backgrounds. When comparing model observer performance in the two background types, comparable performance was found. This indicates that structured backgrounds based on filtered noise may be useful in tomosynthesis system optimization. In conclusion, for a SKE detection task, similar performance was reached in clinical and filtered-noise backgrounds, indicating that detection performance based on the filtered-noise background model may be used to predict performance in actual breast backgrounds.
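The pre-whitening observer used here as the predictor of human performance has a compact Fourier-domain form under a stationary-noise assumption; a sketch in which the expected signal and a set of background-only ROIs are the inputs.

```python
import numpy as np

def prewhitening_dprime(expected_signal, background_rois):
    """Detectability index d' of the pre-whitening observer under stationary noise.

    expected_signal : 2D array, mean signal-present minus mean signal-absent image.
    background_rois : array (n_rois, H, W) of signal-absent ROIs used to estimate
        the noise power spectrum.
    """
    rois = background_rois - background_rois.mean(axis=(1, 2), keepdims=True)
    # Ensemble-averaged squared DFT magnitude of the noise (unnormalized NPS).
    noise_ps = np.mean(np.abs(np.fft.fft2(rois, axes=(1, 2))) ** 2, axis=0)
    s = np.fft.fft2(expected_signal)
    # d'^2 = sum over frequencies of |S(f)|^2 / NPS(f); the DFT scaling cancels
    # because numerator and denominator use the same (unnormalized) transform.
    dprime_sq = np.sum(np.abs(s) ** 2 / (noise_ps + 1e-20))
    return float(np.sqrt(dprime_sq))
```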
KEYWORDS: Sensors, Signal detection, Modulation transfer functions, X-rays, X-ray sources, Monte Carlo methods, X-ray detectors, Photons, X-ray optics, X-ray imaging
We have investigated the effect of non-isotropic blur in an indirect x-ray conversion screen in tomosynthesis
imaging. To study this effect, we have implemented a screen model for angle-dependent x-ray incidence, and
have validated the model using experimental as well as Monte-Carlo simulations reported in the literature.
We investigated detector characteristics such as MTF, NPS and DQE, and we estimated system performance
in a signal-known exactly detection task.
We found that for such a screen, the frequency dependence of the MTF varies with x-ray source angle, while
the frequency dependence of the NPS does not. Furthermore, as the x-ray source angle increases, the DQE
becomes narrower and DQE(f = 0) increases. For a tomosynthesis scan angle of 90 degrees and a
conversion-screen thickness of 130 microns, detectability for small signals (radius = 0.125 mm) was reduced by
13% relative to signals with radii above 0.5 mm.
The magnitude of this degradation is expected to vary with tomosynthesis configuration, for example with scan
angle and conversion-screen thickness.
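For reference, the standard relation connecting these detector metrics can be sketched as below; the variable names and units convention are illustrative and not taken from the paper.

```python
import numpy as np

def dqe(mtf: np.ndarray, nps: np.ndarray, mean_signal: float, fluence_q: float) -> np.ndarray:
    """Standard relation DQE(f) = dbar^2 * MTF(f)^2 / (q * NPS(f)).

    mtf         : presampled MTF sampled on a frequency grid
    nps         : noise power spectrum on the same grid (consistent units assumed)
    mean_signal : mean detector output (dbar) at the exposure used for the NPS
    fluence_q   : incident photon fluence q at that exposure
    """
    return (mean_signal ** 2) * mtf ** 2 / (fluence_q * nps)
```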
KEYWORDS: Sensors, Breast, Digital breast tomosynthesis, Mammography, 3D image reconstruction, Breast cancer, Tissues, 3D image processing, Computer simulations, Signal detection
Microcalcifications (MCs) are an important early sign of breast cancer. In conventional mammography, MC
detectability is limited primarily by quantum noise. In tomosynthesis, a dose comparable to that delivered in
one projection mammogram is divided across a number of projection views (typically between 10 and 30).
This may reduce the detectability of MCs if the detector noise is not very low. The purpose of this study
is to explore the relationship between MC detectability in the projection views and in the reconstructed image.
The effect of angular range and number of projection angles on detectability is also evaluated for an ideal detector.
Microcalcification detectability is shown to be greater in the sinogram than in the reconstructed images. Further,
the detectability is reduced when the MC is located far from the center of the breast. Also, the detectability in
the projection images is dependent on the projection angle.
KEYWORDS: Expectation maximization algorithms, Reconstruction algorithms, Digital breast tomosynthesis, Breast, Image restoration, Algorithm development, Sensors, Medical imaging, Data modeling, Tissues
Digital breast tomosynthesis (DBT) is a rapidly developing imaging modality that provides partial tomographic
information for breast cancer screening. The effectiveness of standard mammography can be limited by the
presence of overlapping structures in the breast. A DBT scan, which consists of a limited number of views covering
a limited arc and projects the breast onto a fixed flat-panel detector, involves only a small modification of
digital mammography, yet it yields breast image slices with reduced interference from overlapping breast
tissue. We have recently developed an iterative image reconstruction algorithm for DBT based on image
total variation (TV) minimization that improves on expectation maximization (EM) in that the resulting images
have fewer artifacts and require no additional regularization. In this abstract, we present the total p-norm
variation (TpV) image reconstruction algorithm. TpV retains the advantages of our previous TV algorithm while
improving substantially on its efficiency. Results for TpV on clinical data are shown and compared with EM.
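The abstract does not spell out the TpV objective; a minimal Python sketch of one common form, the p-norm of the gradient magnitude with a small smoothing constant, is shown below purely for illustration.

```python
import numpy as np

def total_p_norm_variation(image: np.ndarray, p: float = 0.8, eps: float = 1e-8) -> float:
    """Total p-norm variation of a 2D image (assumed form):
    sum over pixels of (|dx u|^2 + |dy u|^2 + eps)^(p/2).
    p = 1 recovers ordinary isotropic TV; p < 1 promotes sparser gradients."""
    gx = np.diff(image, axis=0, append=image[-1:, :])   # forward differences, rows
    gy = np.diff(image, axis=1, append=image[:, -1:])   # forward differences, columns
    grad_mag_sq = gx ** 2 + gy ** 2
    return float(np.sum((grad_mag_sq + eps) ** (p / 2)))
```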
KEYWORDS: Digital breast tomosynthesis, Mammography, Computer aided diagnosis and therapy, 3D image processing, Breast cancer, Image resolution, Breast, 3D image reconstruction, Sensors
Digital breast tomosynthesis (DBT) is being proposed as a replacement for conventional mammography for breast cancer screening. However, there are limitations to DBT that reduce its effectiveness for screening, principally difficulty in imaging microcalcifications and increased reading times for radiologists. We propose a method to overcome these limitations.
Our proposed method is to divide the total dose given to the patient unequally, such that one projection uses at least half of the dose and the remaining dose is divided over the remaining projections. We assume that in DBT screening, only a single view is obtained, at twice the dose of a conventional mammogram. All of the projection images are used in the reconstruction. The 2D projection image that received the highest dose is analyzed by a computer-aided detection (CADe) scheme for microcalcifications. The radiologist reviews the 3D image set, with mass CADe output, principally to search for masses, and reviews the 2D image, with microcalcification CADe output, to search for clustered microcalcifications. Since the 3D image set is intended for mass detection, it can be reconstructed with larger pixels, which reduces computation time and image noise. In principle, radiologists can review the tomosynthesis slices faster since they do not have to search for microcalcifications.
We believe that by producing both a high resolution, "standard" dose 2D image and a lower resolution 3D image set, both calcifications and masses can be optimally imaged and detected in a time efficient manner.
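As a purely illustrative example of this dose split (the numbers below are hypothetical and not taken from the paper):

```python
def allocate_dose(total_dose_mGy: float, n_projections: int, main_fraction: float = 0.5):
    """Split the total dose so one projection receives `main_fraction` of it and
    the remainder is shared equally among the other projections."""
    main = main_fraction * total_dose_mGy
    others = (total_dose_mGy - main) / (n_projections - 1)
    return main, others

# Hypothetical example: 3 mGy total over 15 projections
# -> 1.5 mGy for the high-dose view, ~0.107 mGy for each of the other 14 views.
```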
Tomosynthesis is emerging as a promising modality for breast imaging. Several manufacturers have developed prototype
units and have acquired clinical and phantom data. Scanning configurations of these prototypes vary. So far, studies
relating scanning configuration to image quality have been limited to those geometries that could be implemented on a
particular prototype. To overcome this limitation, we are developing a model of the breast tomosynthesis image acquisition
system, which models both the formation of the x-ray image and the x-ray detector response.
The x-ray image of an object is computed analytically for a polychromatic x-ray beam. Objects consist of volumetric
regions bounded by planar, ellipsoidal, cylindrical or conical surfaces, allowing a variety of object shapes to be modeled.
X-ray scatter is computed by convolving the image with a scatter point-spread function. Poisson noise, scaled according
to the entrance exposure, is added to the image.
The x-ray detector in this model is composed of a phosphor screen followed by a detector array. X-ray interactions in the
screen are modeled as depth-dependent. The optical output of the screen is converted into digital units using a gain factor
that is assumed to be Gaussian distributed.
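A minimal Python sketch of the scatter-and-noise portion of this image-formation chain, under assumed conventions (the exact scaling and units used by the authors are not given), might look like:

```python
import numpy as np
from scipy.signal import fftconvolve

def simulate_projection(primary: np.ndarray, scatter_psf: np.ndarray,
                        photons_per_pixel: float, rng=None) -> np.ndarray:
    """Sketch of the image-formation chain described above (assumed form):
    primary image -> add scatter by PSF convolution -> Poisson noise scaled
    to the entrance exposure."""
    if rng is None:
        rng = np.random.default_rng()
    scatter = fftconvolve(primary, scatter_psf, mode="same")   # scatter estimate
    total = primary + scatter
    expected_counts = total * photons_per_pixel                # scale to exposure
    noisy = rng.poisson(expected_counts).astype(float)
    return noisy / photons_per_pixel                           # back to image units
```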
To validate this data model, we acquired images of a contrast-detail phantom on a stereotactic biopsy unit. The x-ray
source is mounted on an arm that pivots in a plane about the detector center. The x-ray detector consists of a Min-R type
screen fiber-optically coupled to a CCD camera.
To compare actual and simulated data, we examined line profiles as well as several automatically extracted image
features such as contrast-to-noise ratio, contrast, area and radial gradient index. Good agreement was found between
the simulated and physical data, indicating that we can now use this model to explore image quality for various
tomosynthesis scanning configurations.
The total variation (TV) minimization algorithm for image reconstruction in few-view computed
tomography is applied to image reconstruction in digital breast tomosynthesis. The TV minimization
algorithm contains a parameter that regulates how closely the estimated data must match
the actual projection data. The effect of this parameter on the reconstructed images is investigated.
In addition, realistic noise is added to the simulated projection data.
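A minimal Python sketch of the quantities involved, assuming the usual constrained formulation in which TV is minimized subject to a data-fidelity bound, is given below; the system matrix, data vector, and parameter name are illustrative.

```python
import numpy as np

def data_residual(A: np.ndarray, x: np.ndarray, b: np.ndarray) -> float:
    """L2 data-fidelity term ||A x - b||_2 for system matrix A, image x, data b."""
    return float(np.linalg.norm(A @ x - b))

def tv(image_2d: np.ndarray) -> float:
    """Isotropic total variation of a 2D image slice."""
    gx = np.diff(image_2d, axis=0, append=image_2d[-1:, :])
    gy = np.diff(image_2d, axis=1, append=image_2d[:, -1:])
    return float(np.sum(np.sqrt(gx ** 2 + gy ** 2)))

# The reconstruction seeks: minimize tv(x) subject to data_residual(A, x, b) <= epsilon,
# where epsilon plays the role of the data-closeness parameter studied here.
```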
We conducted experiments to determine human performance in detecting
and discriminating microcalcification-like objects in mammographic
backgrounds. This study extends our previous work, in which we investigated detection and discrimination of known objects in white-noise backgrounds (SKE/BKE tasks). In the present experiments, we used hybrid images consisting of computer-generated images of three signal shapes added to mammographic backgrounds extracted from digitized normal mammograms.
Human performance was measured by determining the percentage correct (PC)
in 2-AFC experiments for the tasks of detecting a signal or discriminating between two signal shapes. PC was converted into a detection or discrimination index d', and psychometric functions were created by plotting d' as a function of the square root of signal energy.
Human performance was compared to the predictions of an NPWE model observer. We found that the slope of the linear portion of the psychometric function for detection was smaller than that for discrimination, in contrast to what we observed for white-noise backgrounds, where the psychometric function for detection was significantly steeper than that for discrimination. We found that human performance was qualitatively reproduced by the model observer predictions.
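For reference, the standard conversion from 2-AFC percentage correct to d', which the psychometric functions above rely on but which is not written out in the abstract, can be sketched as:

```python
import numpy as np
from scipy.stats import norm

def dprime_from_2afc(percent_correct: float) -> float:
    """Convert 2-AFC proportion correct into a detectability index via the
    standard relation d' = sqrt(2) * Phi_inverse(PC)."""
    return float(np.sqrt(2.0) * norm.ppf(percent_correct))

# e.g. PC = 0.85 gives d' of roughly 1.47; plotting d' against the square root of
# signal energy yields the psychometric functions described above.
```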
We developed a two-stage computerized mass detection algorithm
for digital tomosynthesis images of the breast. Rather than
analyze the reconstructed 3D breast volume, our algorithm operates
on each of the 2D projection images directly. We chose this approach
because reconstruction algorithms for breast tomosynthesis are still
being optimized, which can alter the appearance of the 3D reconstructed
breast volume. Furthermore, this approach allows us to take advantage
of mass detection methods already developed for conventional two-view
projection mammography, since mammograms are similar in appearance to digital
tomosynthesis projection images. We applied our algorithm to two tomosynthesis image sets:
one from a computer-simulated 3D breast phantom and one from a clinical case.
In both cases, the lesion was detected in the first stage of the algorithm,
while the second stage efficiently reduced false-positive detections.
KEYWORDS: Signal detection, Signal to noise ratio, Calibration, Image quality, Stars, Mammography, Interference (communication), Solids, Medical imaging, Image filtering
We investigated human efficiency in a discrimination task and compared it to human efficiency in an associated detection task.
The goal of this study was to examine the relationship between image quality and shape discrimination in radiographic images. We conducted 2-AFC observer experiments to determine human performance and compared it to ideal observer performance in SKE/BKE detection and discrimination tasks. We found that human efficiency was significantly lower for the discrimination task than for the detection task, and that discrimination performance also depended on the actual object shape. The results support our hypothesis that the shape of individual microcalcifications in a mammogram cannot be identified reliably unless the two microcalcification shapes in question are substantially different, such as punctate versus linear.
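The abstract does not define efficiency explicitly; assuming the usual definition as the squared ratio of human to ideal detectability, a minimal sketch is:

```python
def observer_efficiency(dprime_human: float, dprime_ideal: float) -> float:
    """Statistical efficiency of the human relative to the ideal observer,
    defined here as the squared ratio of detectability indices."""
    return (dprime_human / dprime_ideal) ** 2

# e.g. d'_human = 1.2 and d'_ideal = 2.0 give an efficiency of 0.36 (36%).
```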