In this study, a mammographic mass retrieval platform was established using a content-based image retrieval
(CBIR) method to extract and model the semantic content of mammographic masses. Specifically, the shape and
margin of each mass were classified into categories assigned by expert radiologists according to BI-RADS
descriptors. Mass lesions were analyzed in terms of the likelihood of each category, using features including
third-order moments, curvature scale space descriptors, compactness, solidity, and eccentricity. To evaluate
the performance of the retrieval system, a retrieved image was considered relevant if it belonged to the same
class (benign or malignant) as the query image. A total of 476 biopsy-proven mass cases (219 malignant and
257 benign) were used for 10 random test/train partitions. For each test query mass, the five most similar
masses were retrieved from the image library. The performance of the retrieval system was evaluated by ROC
analysis of the malignancy ratings of the query masses in the test set relative to the biopsy truth. Averaged
over the 10 random test/train partitions, the area under the ROC curve (Az) was 0.80±0.06. On an independent
test set of 415 cases (244 malignant and 171 benign), ROC analysis yielded an Az of 0.75±0.03.
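The abstract names compactness and eccentricity among the shape features but does not define them. A minimal sketch, assuming the standard definitions (compactness as 4πA/P² and eccentricity from the second-order moments of the region), might look like:

```python
import math

def shape_features(pixels):
    """Simple shape descriptors for a binary region given as a set of
    (row, col) integer pixel coordinates. Returns (compactness,
    eccentricity). Definitions are assumptions, not the paper's exact
    formulas."""
    area = len(pixels)
    # 4-connected perimeter: count pixel edges exposed to the background.
    perimeter = sum(
        (r + dr, c + dc) not in pixels
        for r, c in pixels
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
    )
    # Compactness 4*pi*A/P^2: near 1 for round masses, small for
    # irregular or spiculated boundaries.
    compactness = 4 * math.pi * area / perimeter ** 2

    # Eccentricity from the eigenvalues of the 2x2 covariance matrix of
    # pixel coordinates: 0 for a circle, approaching 1 when elongated.
    mr = sum(r for r, _ in pixels) / area
    mc = sum(c for _, c in pixels) / area
    srr = sum((r - mr) ** 2 for r, _ in pixels) / area
    scc = sum((c - mc) ** 2 for _, c in pixels) / area
    src = sum((r - mr) * (c - mc) for r, c in pixels) / area
    d = math.sqrt((srr - scc) ** 2 + 4 * src ** 2)
    lam1 = (srr + scc + d) / 2   # major-axis variance
    lam2 = (srr + scc - d) / 2   # minor-axis variance
    eccentricity = math.sqrt(1 - lam2 / lam1) if lam1 > 0 else 0.0
    return compactness, eccentricity

# Example: a 20x20 square region gives compactness pi/4 and eccentricity 0.
square = {(r, c) for r in range(20) for c in range(20)}
comp, ecc = shape_features(square)
```

Solidity, also listed above, would additionally require the area of the region's convex hull (solidity = region area / hull area).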
A learning-based approach integrating pixel-level statistical modeling and spiculation detection is presented
for the segmentation of mammographic masses with ill-defined margins and spiculations. The algorithm applies
multi-phase pixel-level classification, using a comprehensive group of regional features, to generate a
pixel-level mass-conditional probability map (PM). The mass candidate, along with background clutter, is then
extracted from the PM by incorporating prior knowledge of mass shape and location. A multi-scale steerable
ridge detection algorithm is employed to detect spiculations. Finally, all object-level findings (the mass
candidate, detected spiculations, and clutter), together with the PM, are integrated by graph cuts to generate
the final segmentation mask. The method was tested on 54 masses (51 malignant and 3 benign), all with ill-defined
margins and irregular shape or spiculations. Ground truth delineations were provided by five experienced
radiologists. Area overlap ratios of 0.766 (±0.144) and 0.642 (±0.173) were obtained for segmenting the whole
mass and the margin portion alone, respectively. The Williams index computed on area- and contour-based
measurements indicated that the algorithm's segmentations agreed well with the radiologists' delineations. Most
importantly, the proposed approach is capable of capturing the mass margin and its extensions, which are
considered key features for breast lesion analysis.
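The area overlap ratio reported above is not defined in this abstract; assuming the common intersection-over-union form, it can be computed from two binary masks as:

```python
def area_overlap_ratio(mask_a, mask_b):
    """Area overlap ratio between two segmentations, each given as a set
    of (row, col) pixels. Uses the intersection-over-union definition,
    which is an assumption; the paper may normalize differently."""
    union = len(mask_a | mask_b)
    return len(mask_a & mask_b) / union if union else 1.0

# Example: two 10x10 squares offset by 5 columns share 50 of 150 pixels.
a = {(r, c) for r in range(10) for c in range(10)}
b = {(r, c) for r in range(10) for c in range(5, 15)}
ratio = area_overlap_ratio(a, b)  # → 1/3
```

Evaluating the whole mass versus the margin portion alone, as done above, amounts to applying this measure to the full masks and to masks restricted to a band around the boundary, respectively.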
Characterization and quantification of the severity of diffuse parenchymal lung diseases (DPLDs) using Computed
Tomography (CT) is an important issue in clinical research. Recently, several classification-based computer-aided
diagnosis (CAD) systems [1-3] for DPLD have been proposed. For some of these systems, degraded performance
on unseen data has been reported [2], owing to considerable inter-patient variation in parenchymal tissue patterns.
We believe that a CAD system of real clinical value should be robust to inter-patient variances and be able to classify
unseen cases online more effectively. In this work, we have developed a novel adaptive knowledge-driven CT image
search engine that combines offline learning aspects of classification-based CAD systems with online learning aspects of
content-based image retrieval (CBIR) systems. Our system can seamlessly and adaptively fuse offline accumulated
knowledge with online feedback, leading to improved online performance in detecting DPLD in terms of both
accuracy and speed. Our contributions are: (1) newly developed 3D texture-based and morphology-based features; (2) a
multi-class offline feature selection method; and, (3) a novel image search engine framework for detecting DPLD. Very
promising results have been obtained on a small test set.
In this study, we present a clinically guided technical method for content-based categorization of mammographic masses.
Our work is motivated by the continuing effort in content-based image annotation and retrieval to extract and model the
semantic content of images. Specifically, we classified the shape and margin of mammographic masses into categories
designated by radiologists according to descriptors from the Breast Imaging Reporting and Data System (BI-RADS)
Atlas. Experiments were conducted on subsets selected from a dataset of 346 masses. In the experiments categorizing
lesion shape, we obtained a precision of 70% with three classes and 87.4% with two classes. In the experiments
categorizing margin, we obtained precisions of 69.4% and 74.7% for four and three classes, respectively. With this
study, we intend to demonstrate that this classification-based method can extract the semantic characteristics of
mass appearance, and thus has the potential to be used for automatic categorization and retrieval tasks in
clinical applications.
The purpose of this study is to develop a Content-Based Image Retrieval (CBIR) system for mammographic computer-aided
diagnosis. We have investigated the potential of shape, texture, and intensity features for categorizing masses,
with the goal of sorting similar image patterns to facilitate clinical viewing of mammographic masses.
Experiments were conducted on a database containing 243 masses (122 benign and 121 malignant). The retrieval
performance of each individual feature was evaluated, and the best precision was determined to be 79.9% when using
the curvature scale space descriptor (CSSD). By combining several selected shape features for retrieval, the precision
was found to improve to 81.4%. By combining the shape, texture, and intensity features together, the precision was
found to improve to 82.3%.
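Retrieval precision of the kind reported above can be estimated leave-one-out: each mass in turn serves as the query, its k nearest neighbours in feature space are retrieved, and the fraction sharing the query's class is averaged over all queries. A minimal sketch (Euclidean distance and k = 5 are assumptions; the distance metric and feature vectors are not specified in this abstract):

```python
import math

def precision_at_k(features, labels, k=5):
    """Leave-one-out retrieval precision: for each query, retrieve the k
    nearest neighbours by Euclidean distance in feature space and score
    the fraction sharing the query's class (e.g. benign/malignant)."""
    n = len(features)
    total = 0.0
    for q in range(n):
        # Rank all other cases by distance to the query's feature vector.
        ranked = sorted(
            (math.dist(features[q], features[i]), i)
            for i in range(n) if i != q
        )
        hits = sum(labels[i] == labels[q] for _, i in ranked[:k])
        total += hits / k
    return total / n

# Example: two well-separated clusters retrieve perfectly at k = 2.
feats = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0),
         (10.0, 10.0), (10.0, 11.0), (11.0, 10.0)]
labels = [0, 0, 0, 1, 1, 1]
p = precision_at_k(feats, labels, k=2)  # → 1.0
```

Combining shape, texture, and intensity features, as in the last experiment above, corresponds to concatenating (and typically normalizing) the individual feature vectors before computing distances.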