KEYWORDS: Digital breast tomosynthesis, Computer aided diagnosis and therapy, Reconstruction algorithms, 3D image processing, Computer-aided diagnosis, Detection and tracking algorithms, 3D image reconstruction, 3D image enhancement, Breast, Digital mammography, Deep learning, Convolutional neural networks, Mammography, 3D displays, Image enhancement
In a typical 2D mammography workflow, a computer-aided detection (CAD) algorithm is used as a second reader, producing marks for a radiologist to review. In the case of 3D digital breast tomosynthesis (DBT), displaying CAD detections at multiple reconstruction heights would increase image browsing and interpretation time. We propose an alternative approach in which an algorithm automatically identifies suspicious regions of interest from 3D reconstructed DBT slices and then merges the findings with the corresponding 2D synthetic projection image, which is then reviewed. The resulting enhanced synthetic 2D image combines the benefits of a familiar 2D breast view with the superior appearance of suspicious locations from 3D slices. Moreover, clicking on a 2D suspicious location brings up the corresponding 3D region in the DBT volume, allowing navigation between the 2D and 3D images. We explored the use of these enhanced synthetic images in a concurrent-read paradigm by conducting a study with 5 readers and 30 breast exams. We observed that the introduction of the enhanced synthetic view reduced the radiologists' average interpretation time by 5.4%, increased sensitivity by 6.7%, and increased specificity by 15.6%.
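A minimal sketch of the merging step, assuming the CAD detections arrive as slice-indexed bounding boxes and that blending is a simple weighted paste of the in-focus slice patch (both are illustrative assumptions; the abstract does not specify the blending operator):

```python
import numpy as np

def enhance_synthetic_image(synthetic_2d, dbt_volume, detections, alpha=0.7):
    """Blend suspicious 3D ROI patches into the 2D synthetic view.

    detections: iterable of (z, y0, y1, x0, x1) regions found by the CAD
    algorithm in the reconstructed DBT volume (hypothetical format), with
    the volume ordered as (slices, rows, columns).
    """
    enhanced = synthetic_2d.astype(np.float32).copy()
    roi_links = []  # 2D location -> 3D slice index, for click-through navigation
    for z, y0, y1, x0, x1 in detections:
        patch = dbt_volume[z, y0:y1, x0:x1].astype(np.float32)
        # Weighted blend favoring the sharper in-focus appearance of the
        # reconstructed slice at the suspicious location.
        enhanced[y0:y1, x0:x1] = alpha * patch + (1 - alpha) * enhanced[y0:y1, x0:x1]
        roi_links.append(((y0, y1, x0, x1), z))
    return enhanced, roi_links
```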
KEYWORDS: Digital breast tomosynthesis, Reconstruction algorithms, Computer aided diagnosis and therapy, Tissues, Computer-aided diagnosis, Neural networks, Mammography, Digital mammography, Detection and tracking algorithms, Evolutionary algorithms, Deep learning, Convolutional neural networks, Breast, Image segmentation, Medical imaging
Computer-aided detection (CAD) has been used in screening mammography for many years and is likely to be utilized for digital breast tomosynthesis (DBT). Higher detection performance is desirable, as it may affect radiologists' decisions and clinical outcomes. Recently, algorithms based on deep convolutional architectures have been shown to achieve state-of-the-art performance in object classification and detection. Following this approach, we trained a deep convolutional neural network directly on patches sampled from two-dimensional mammography and reconstructed DBT volumes, and compared its performance to that of a conventional CAD algorithm based on the computation and classification of hand-engineered features. Detection performance was evaluated on an independent test set of 344 DBT reconstructions (GE SenoClaire 3D, iterative reconstruction algorithm) containing 328 suspicious and 115 malignant soft-tissue densities, including masses and architectural distortions. Detection sensitivity was measured on a region-of-interest (ROI) basis at a rate of five detection marks per volume. Moving from the conventional to the deep learning approach increased ROI sensitivity from 0.832 ± 0.040 to 0.893 ± 0.033 for suspicious ROIs, and from 0.852 ± 0.065 to 0.930 ± 0.046 for malignant ROIs. These results indicate the high utility of deep feature learning in the analysis of DBT data and the method's strong potential for broader medical image analysis tasks.
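To illustrate the patch-based approach, here is a minimal sketch of a small convolutional patch classifier in PyTorch; the architecture, patch size, and optimizer settings are placeholders, not the network the study actually used:

```python
import torch
import torch.nn as nn

class PatchCNN(nn.Module):
    """Small illustrative CNN classifying candidate patches as
    suspicious vs. normal (not the authors' actual architecture)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(128, 2)  # suspicious vs. normal logits

    def forward(self, x):                    # x: (N, 1, H, W) grayscale patches
        return self.classifier(self.features(x).flatten(1))

# Training sketch: cross-entropy on patches sampled from 2D mammograms
# and reconstructed DBT slices.
model = PatchCNN()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()
```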
One widely accepted division of the prostate is into a central gland (CG) and a peripheral zone (PZ). In some clinical applications, separating the CG and PZ from the whole prostate is useful. For instance, in prostate cancer detection, radiologists want to know in which zone the cancer occurs. Another application is multiparametric MR tissue characterization. In prostate T2 MR images, the high intensity variation between the CG and PZ makes their automated differentiation difficult. Previously, we developed an automated prostate boundary segmentation system, which was tested on large datasets and showed good performance. Using the pre-segmented prostate boundary, in this paper we propose an automated CG segmentation algorithm based on the Layered Optimal Graph Image Segmentation of Multiple Objects and Surfaces (LOGISMOS) framework. The designed LOGISMOS model incorporates both shape and topology information during deformation. We generated graph costs by training classifiers and used a coarse-to-fine search. The LOGISMOS framework guarantees a solution that is optimal with respect to the cost function and shape constraints. A five-fold cross-validation approach was applied to a training dataset of 261 images to optimize the system's performance and to compare it with a voxel-classification-based reference approach. After the best parameter settings were found, the system was tested on a dataset of another 261 images. The mean DSC of 0.81 on the test set indicates that our approach is promising for automated CG segmentation. The system's running time is about 15 seconds.
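To make the graph-search idea concrete, below is a minimal single-surface sketch (the closed-set/min-cut construction that graph-search frameworks such as LOGISMOS build on) for a toy 2D cost image, using networkx for the max-flow computation. The real framework segments multiple interacting surfaces and objects in 3D, with classifier-learned costs as described above:

```python
import numpy as np
import networkx as nx

def optimal_surface_2d(cost, smoothness=1):
    """Minimum-cost surface for a 2D cost image cost[x, k] (one height k
    per column x) under a hard smoothness constraint
    |k(x) - k(x+1)| <= smoothness, via a min s-t cut."""
    n_cols, n_heights = cost.shape
    G, s, t = nx.DiGraph(), "s", "t"
    for x in range(n_cols):
        for k in range(n_heights):
            # Node weight: intra-column cost difference; base nodes get a
            # negative weight so the minimum closed set is nonempty.
            w = cost[x, k] - cost[x, k - 1] if k > 0 else -1.0
            if w < 0:
                G.add_edge(s, (x, k), capacity=-w)
            elif w > 0:
                G.add_edge((x, k), t, capacity=w)
            if k > 0:
                G.add_edge((x, k), (x, k - 1))   # infinite-capacity column arc
            for x2 in (x - 1, x + 1):            # smoothness arcs to neighbors
                if 0 <= x2 < n_cols:
                    G.add_edge((x, k), (x2, max(0, k - smoothness)))
    _, (source_side, _) = nx.minimum_cut(G, s, t)
    # Surface height per column: topmost node inside the minimum closed set.
    return np.array([max(k for k in range(n_heights) if (x, k) in source_side)
                     for x in range(n_cols)])
```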
Manual delineation of the prostate is a challenging task for a clinician due to the organ's complex and irregular shape. Furthermore, the need to precisely target the prostate boundary continues to grow. Planning for radiation therapy, MR-ultrasound fusion for image-guided biopsy, multi-parametric MRI tissue characterization, and context-based organ retrieval are examples where accurate prostate delineation can play a critical role in a successful patient outcome. A robust, automated, full prostate segmentation system is therefore desired. In this paper, we present an automated prostate segmentation system for 3D MR images. In this system, the prostate is segmented in two steps: the prostate's displacement and size are first detected, and the boundary is then refined by a shape model. The detection approach is based on normalized gradient fields cross-correlation; it is fast, robust to intensity variation, and accurate enough to initialize a prostate mean shape model. The refinement model is based on a graph-search framework that incorporates both shape and topology information during deformation. We generated the graph costs using trained classifiers and used a coarse-to-fine search with region-specific classifier training. The proposed algorithm was developed using 261 training images and tested on another 290 cases. Mean DSC ranged from 0.89 to 0.91 depending on the evaluation subset, demonstrating state-of-the-art performance. The system's running time is about 20 to 40 seconds, depending on image size and resolution.
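As a rough illustration of the two-step structure, the sketch below places a mean shape model at the detected pose; the detection and refinement components correspond to the normalized-gradient-fields and graph-search sketches given elsewhere in this section, and the origin-centered, unit-scale shape convention is an assumption, not the paper's specification:

```python
import numpy as np

def initialize_mean_shape(mean_shape, centroid, scale):
    """Step 1 output -> step 2 input: place the prostate mean shape
    (assumed (N, 3) surface points, origin-centered and unit-scaled)
    at the detected centroid and size.  The placed shape then seeds
    the graph-search boundary refinement."""
    return np.asarray(mean_shape, dtype=np.float64) * float(scale) + np.asarray(centroid)
```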
Fully automated prostate segmentation helps to address several problems in prostate cancer diagnosis and treatment: it can assist in the objective evaluation of multiparametric MR imagery, provide a prostate contour for MR-ultrasound (or CT) image fusion for computer-assisted image-guided biopsy or therapy planning, facilitate reporting, and enable direct prostate volume calculation. Among the challenges in the automated analysis of MR images of the prostate are variations of overall image intensity across scanners, the presence of a nonuniform multiplicative bias field within scans, and differences in acquisition setup. Furthermore, images acquired with an endorectal coil suffer from localized high-intensity artifacts at the posterior part of the prostate. In this work, a three-dimensional method for fast automated prostate detection based on normalized gradient fields cross-correlation, insensitive to intensity variations and coil-induced artifacts, is presented and evaluated. The components of the method, offline template learning and the localization algorithm, are described in detail.
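A minimal sketch of the localization measure, assuming the common normalized-gradient-fields similarity (the squared inner product of gradient directions) and FFT-based correlation; the template learning step and any multi-scale search are omitted:

```python
import numpy as np
from scipy.signal import fftconvolve

def normalized_gradient_field(img, eps=1e-3):
    """n(x) = grad I(x) / sqrt(|grad I(x)|^2 + eps^2): keeps edge
    direction, discards intensity magnitude, so it tolerates bias
    fields, coil artifacts, and scanner-dependent intensity ranges."""
    g = np.stack(np.gradient(img.astype(np.float64)), axis=0)
    return g / np.sqrt((g ** 2).sum(axis=0) + eps ** 2)

def ngf_correlation_map(img, template, eps=1e-3):
    """Score every translation t by sum_x (n_I(x+t) . n_T(x))^2.
    The square expands into gradient-component products, so the whole
    score map reduces to a handful of FFT correlations."""
    ni = normalized_gradient_field(img, eps)
    nt = normalized_gradient_field(template, eps)
    flip = tuple(slice(None, None, -1) for _ in range(img.ndim))
    score = 0.0
    for c in range(img.ndim):
        for d in range(img.ndim):
            score = score + fftconvolve(ni[c] * ni[d], (nt[c] * nt[d])[flip], mode="valid")
    return score  # argmax gives the most likely prostate position

# Usage sketch:
# scores = ngf_correlation_map(volume, learned_template)
# center = np.unravel_index(np.argmax(scores), scores.shape)
```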
The method was validated on a dataset of 522 T2-weighted MR images acquired at the National Cancer Institute, USA, which was split into two halves for development and testing. In addition, a second dataset of 29 MR exams from the Centre d'Imagerie Médicale Tourville, France, was used to test the algorithm. The 95% confidence intervals for the mean Euclidean distance between automatically and manually identified prostate centroids were 4.06 ± 0.33 mm and 3.10 ± 0.43 mm for the first and second test datasets, respectively. Moreover, the algorithm placed the centroid within the true prostate volume in 100% of images from both datasets. The obtained results demonstrate the high utility of the detection method for fully automated prostate segmentation.
In this paper, we propose a new registration method for supine and prone computed tomographic colonography (CTC) scans based on graph matching. We first formulated 3D colon registration as a graph matching problem and utilized a graph matching algorithm based on mean field theory. During the iterative optimization process, one-to-one matching constraints were added to the system step by step. Prominent matching pairs found in previous iterations were used to guide subsequent mean field calculations. The advantage of the proposed method is that it does not require a colon centerline for registration. We tested the algorithm on a CTC dataset of 19 patients with 19 polyps. The average registration error of the proposed method was 4.0 cm (standard deviation 2.1 cm), with a 95% confidence interval of [3.0 cm, 5.0 cm]. There was no significant difference between the proposed method and our previous method based on the normalized distance along the colon centerline (p = 0.1).
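To make the optimization concrete, here is a toy mean-field relaxation for graph matching; the affinity-matrix layout, the Sinkhorn-style normalization, and the iteration schedule are illustrative assumptions, and the paper's incremental addition of one-to-one constraints and reuse of prominent pairs from earlier iterations are omitted:

```python
import numpy as np

def mean_field_matching(affinity, n1, n2, beta=5.0, outer=50, sinkhorn=10):
    """Toy mean-field relaxation for graph matching.

    affinity: (n1*n2, n1*n2) matrix scoring pairs of candidate
    correspondences (i->a together with j->b), index = i*n2 + a.
    Returns a hard assignment: one target node per source node.
    """
    m = np.full((n1, n2), 1.0 / n2)               # soft assignment matrix
    for _ in range(outer):
        # Mean field: each correspondence feels the average support of
        # all currently likely correspondences.
        field = (affinity @ m.reshape(-1)).reshape(n1, n2)
        m = np.exp(beta * (field - field.max()))  # numerically stabilized
        for _ in range(sinkhorn):                 # soft one-to-one constraints
            m /= m.sum(axis=1, keepdims=True)
            m /= m.sum(axis=0, keepdims=True)
    return m.argmax(axis=1)
```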
We have applied techniques from differential motion estimation to the automatic registration of medical images. The method uses optical-flow and Fourier techniques for local and global registration. A six-parameter affine model is used to estimate shear, rotation, scale, and translation. We show the efficacy of this method on images of similar and different contrasts.
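A minimal sketch of the differential estimation step, assuming small motion and a single global least-squares solve (the Fourier component and coarse-to-fine iteration are omitted):

```python
import numpy as np

def estimate_affine_flow(img1, img2):
    """Differential (optical-flow) estimate of six affine motion
    parameters between two images: brightness constancy
    Ix*u + Iy*v + It = 0 with u = a1 + a2*x + a3*y and
    v = a4 + a5*x + a6*y, solved in least squares over all pixels."""
    img1 = img1.astype(np.float64)
    img2 = img2.astype(np.float64)
    Iy, Ix = np.gradient((img1 + img2) / 2.0)   # spatial derivatives
    It = img2 - img1                            # temporal derivative
    y, x = np.mgrid[: img1.shape[0], : img1.shape[1]]
    # One constraint row per pixel: [Ix, Ix*x, Ix*y, Iy, Iy*x, Iy*y] theta = -It
    A = np.stack([Ix, Ix * x, Ix * y, Iy, Iy * x, Iy * y], axis=-1).reshape(-1, 6)
    b = -It.reshape(-1)
    theta, *_ = np.linalg.lstsq(A, b, rcond=None)
    return theta  # jointly encodes translation, rotation, scale, and shear
```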
We present an automatic, multi-resolution, correlation-based approach for elastic image registration. The technique assumes no a priori information (such as landmarks or segmentation), which makes it suitable for a wide class of image registration tasks. We also present preliminary results of the technique on a variety of images.
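A sketch of the correlation step at one pyramid level, with hypothetical block and search-radius parameters; a full implementation would repeat this per block and per resolution level, then interpolate the sparse matches into a dense deformation field:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized blocks."""
    a = a - a.mean()
    b = b - b.mean()
    return (a * b).sum() / (np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12)

def block_displacement(fixed, moving, y, x, block=16, search=8):
    """Best integer displacement of one block by exhaustive NCC search."""
    ref = fixed[y:y + block, x:x + block]
    best, best_d = -2.0, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if 0 <= yy and 0 <= xx and yy + block <= moving.shape[0] \
                    and xx + block <= moving.shape[1]:
                s = ncc(ref, moving[yy:yy + block, xx:xx + block])
                if s > best:
                    best, best_d = s, (dy, dx)
    return best_d
```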
We present LIFTPACK: a software package written in C for fast calculation of 2D biorthogonal wavelet transforms using the lifting scheme. The lifting scheme is a new approach for constructing biorthogonal wavelets entirely in the spatial domain, i.e., independently of the Fourier transform. Constructing wavelets by lifting consists of three simple phases: the first step, the lazy wavelet, splits the data into two subsets, even and odd; the second step calculates the wavelet coefficients as the failure to predict the odd set from the even set; and the third step updates the even set using the wavelet coefficients to compute the scaling function coefficients. The predict phase ensures polynomial cancellation in the high-pass band, and the update phase ensures preservation of moments in the low-pass band. By varying the order, an entire family of transforms can be built. The lifting scheme ensures fast calculation of the forward and inverse wavelet transforms involving only FIR filters. The transform works for images of arbitrary size with correct treatment of the boundaries. Also, all computations can be done in place.
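A minimal 1D sketch of the three lifting phases for the LeGall 5/3 biorthogonal wavelet (one predict and one update step, even-length input, simple boundary handling); LIFTPACK itself is written in C and generalizes this to higher-order filters, 2D images of arbitrary size, and fully in-place computation:

```python
import numpy as np

def lifting_53_forward(x):
    """One level of the LeGall 5/3 wavelet via lifting:
    split -> predict -> update.  Assumes len(x) is even."""
    x = np.asarray(x, dtype=np.float64)
    even, odd = x[0::2].copy(), x[1::2].copy()       # lazy wavelet (split)
    # Predict: detail = failure of linear interpolation from even samples.
    right = np.concatenate([even[1:], even[-1:]])    # boundary replication
    odd -= 0.5 * (even + right)
    # Update: adjust even samples so the low-pass band preserves moments.
    left = np.concatenate([odd[:1], odd[:-1]])       # boundary replication
    even += 0.25 * (left + odd)
    return even, odd   # scaling (low-pass) and wavelet (high-pass) coefficients

# The inverse simply runs the same steps backwards with flipped signs:
# even -= 0.25 * (left + odd); odd += 0.5 * (even + right); then interleave.
```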