Federated learning (FL) has emerged as a promising strategy for collaboratively training complex machine learning models across medical centers without the need for data sharing. However, traditional FL relies on a central server to orchestrate global model training among clients, making it vulnerable to failure of the model server. Meanwhile, a model trained on the global data properties may not yield the best performance on the local data of a particular site due to variations in data characteristics among sites. To address these limitations, we proposed Gossip Mutual Learning (GML), a decentralized collaborative learning framework that employs the Gossip Protocol for direct peer-to-peer communication and encourages each site to optimize its local model by leveraging useful information from peers through mutual learning. On the task of tumor segmentation on PET/CT images using the HECKTOR21 dataset, with 223 cases from five clinical sites, we demonstrated that GML could improve tumor segmentation performance in terms of Dice Similarity Coefficient (DSC) by 3.2%, 4.6%, and 10.4% on site-specific testing cases compared to three baseline methods: pooled training, FedAvg, and individual training, respectively. We also showed that GML has generalization performance comparable to pooled training and FedAvg when applied to 78 cases from two out-of-sample sites where no case was used for model training. In our experimental setup, GML showed a sixfold decrease in communication overhead compared to FedAvg, requiring only 16.67% of the total communication overhead.
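The peer-to-peer exchange at the core of GML can be illustrated with a minimal gossip sketch. The abstract does not give the exact update rule, so the pairwise parameter blending below is only a stand-in for the mutual-learning step between paired sites; the function names and the `mix` rate are assumptions.

```python
import random

def gossip_round(weights, mix=0.5):
    """One gossip round: sites are randomly paired, and each pair
    blends its parameters toward the other (symmetric mixing).
    With mix=0.5 a pair converges to its average, so the global
    mean of the parameters is preserved across rounds."""
    sites = list(weights)
    random.shuffle(sites)
    for i in range(0, len(sites) - 1, 2):
        a, b = sites[i], sites[i + 1]
        wa, wb = weights[a], weights[b]  # snapshot before updating
        weights[a] = [x + mix * (y - x) for x, y in zip(wa, wb)]
        weights[b] = [y + mix * (x - y) for x, y in zip(wa, wb)]
    return weights

# toy example: four sites, each holding a one-parameter "model"
random.seed(0)
w = {s: [float(v)] for s, v in zip("ABCD", [1, 3, 5, 7])}
for _ in range(20):
    gossip_round(w)
```

In the actual GML method the exchanged information feeds a mutual-learning (distillation) objective rather than direct parameter averaging, but the communication pattern, direct peer-to-peer with no central server, is the same.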
Federated learning (FL) has attracted increasing attention in medical imaging as an alternative to centralized data sharing that can leverage a large amount of data from different hospitals to improve the generalization of machine learning models. However, while FL can provide certain protection for patient privacy by retaining data in local client hospitals, data privacy could still be compromised when exchanging model parameters between local clients and servers. Meanwhile, although efficient training strategies are being actively investigated, significant communication overhead remains a major challenge in FL, as it requires substantial model updates between clients and servers. This becomes more prominent when more complex models, such as transformers, are introduced in medical imaging and when geographically distinct collaborators are involved in FL studies for global health problems. To this end, we proposed FeSEC, a secure and efficient FL framework, to address these two challenges. In particular, we first apply a sparse compression algorithm for efficient communication among the distributed hospitals, and then integrate homomorphic encryption with differential privacy to secure data privacy during model exchanges. Experiments on the task of COVID-19 detection show that the proposed FeSEC substantially improves the accuracy and privacy preservation of FL models compared to FedAvg, with less than 10% of the communication cost.
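The specific sparse compression algorithm in FeSEC is not detailed in the abstract; a common choice, sketched here purely as an assumption, is top-k magnitude sparsification, where each hospital transmits only the largest entries of its model update as (index, value) pairs:

```python
def sparsify_topk(update, k):
    """Keep only the k largest-magnitude entries of a model update,
    so only (index, value) pairs are transmitted instead of the
    dense vector."""
    ranked = sorted(range(len(update)),
                    key=lambda i: abs(update[i]), reverse=True)
    return [(i, update[i]) for i in sorted(ranked[:k])]

def densify(pairs, n):
    """Reconstruct a dense update on the receiving side; dropped
    entries are treated as zero."""
    out = [0.0] * n
    for i, v in pairs:
        out[i] = v
    return out

upd = [0.01, -2.0, 0.3, 0.0, 1.5, -0.02]
msg = sparsify_topk(upd, 2)  # only 2 of 6 entries are sent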
The growing demand for radiation therapy to treat cancer has focused attention on improving the treatment planning workflow. Accurate dose prediction therefore plays a prominent role in this regard. In this study, we propose a framework based on our newly developed scale attention network (SA-Net) to attain voxel-wise dose prediction. Our network's dynamic scale attention model incorporates low-level details with high-level semantics from feature maps at different scales. To achieve more accurate results, we used signed distances between each local voxel and the organ surfaces, rather than binary masks of the organs at risk, together with the CT image as input to the network. The proposed method was tested on prostate cancer treated with Volumetric Modulated Arc Therapy (VMAT), where the model was trained on 120 cases and tested on 20 cases. The average difference between the predicted dose and the clinically planned dose was 0.94 Gy, equivalent to 2.1% of the prescription dose of 45 Gy. We also compared the performance of the SA-Net dose prediction framework with different input formats, the signed distance map versus the binary mask, and showed that the signed distance map was the better input format for model training. These findings show that our deep learning-based dose prediction strategy is feasible for automating treatment planning in prostate cancer radiotherapy.
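A signed distance map of the kind used as SA-Net input can be derived from a binary organ mask. The sketch below is a brute-force 2-D illustration; the sign convention (negative inside the organ) and the 4-neighbour surface definition are assumptions, and production code would use a fast distance transform rather than an exhaustive search.

```python
def signed_distance_map(mask):
    """Brute-force signed Euclidean distance to the organ surface:
    negative inside the mask, positive outside, zero on the surface."""
    h, w = len(mask), len(mask[0])
    # surface voxels: mask voxels with at least one background 4-neighbour
    surface = [(r, c) for r in range(h) for c in range(w)
               if mask[r][c] and any(
                   not (0 <= r + dr < h and 0 <= c + dc < w
                        and mask[r + dr][c + dc])
                   for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)))]
    sdm = [[0.0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            d = min(((r - sr) ** 2 + (c - sc) ** 2) ** 0.5
                    for sr, sc in surface)
            sdm[r][c] = -d if mask[r][c] else d
    return sdm

# toy 5x5 "organ": a 3x3 block of ones
mask = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]
sdm = signed_distance_map(mask)
```

Unlike a binary mask, every voxel of the distance map carries information about how far it is from the organ surface, which is plausibly why it served as the better input format here.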
Computerized texture analysis of mammographic images has emerged as a means to characterize breast parenchyma
and estimate breast percentage density, and thus, to ultimately assess the risk of developing breast cancer. However,
during the digitization process, mammographic images may be modified and optimized for viewing purposes, or
mammograms may be digitized with different scanners. It is important to demonstrate how computerized texture
analysis will be affected by differences in the digital image acquisition. In this study, mammograms from 172
subjects, 30 women with the BRCA1/2 gene-mutation and 142 low-risk women, were retrospectively collected and
digitized. Contrast enhancement based on a look-up table that simulates the histogram of a mixed-density breast
was applied to very dense and very fatty breasts. Computerized texture analysis was performed on these
transformed images, and the effect of variable gain on computerized texture analysis on mammograms was
investigated. Area under the receiver operating characteristic curve (AUC) was used as a figure of merit to assess
the individual texture feature performance in the task of distinguishing between the high-risk and the low-risk
women for developing breast cancer. For those features based on coarseness measures and fractal measures, the
histogram transformation (contrast enhancement) showed little effect on the classification performance of these
features. However, as expected, for those features based on gray-scale histogram analysis, such as balance and
skewness, and for contrast measures, large variations in AUC values were observed.
Understanding this effect will allow us to better assess breast cancer risk using computerized texture analysis.
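The histogram transformation studied above amounts to passing every pixel through an intensity look-up table. A toy sketch, in which the 8-level LUT values are purely hypothetical:

```python
def apply_lut(image, lut):
    """Apply an intensity look-up table to every pixel, e.g. a curve
    that reshapes a very dense or very fatty histogram toward that
    of a mixed-density breast."""
    return [[lut[p] for p in row] for row in image]

# hypothetical 8-level LUT that compresses the bright end of the scale
lut = [0, 1, 2, 3, 4, 4, 5, 5]
img = [[0, 7], [3, 6]]
out = apply_lut(img, lut)
```

Because such a mapping is monotonic, features driven by spatial structure (coarseness, fractal measures) change little, while features computed directly from the gray-level histogram (balance, skewness, contrast) can shift substantially, matching the observation above.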
KEYWORDS: Breast, Breast cancer, Magnetic resonance imaging, Image segmentation, Fuzzy logic, Lithium, Cancer, Medical imaging, 3D image processing, Mammography
Breast density has been shown to be associated with the risk of developing breast cancer, and MRI has been
recommended for screening high-risk women; however, it is still unknown how breast parenchymal
enhancement on DCE-MRI is associated with breast density and breast cancer risk. Ninety-two DCE-MRI
exams of asymptomatic women with normal MR findings were included in this study. The 3D breast volume
was automatically segmented using a volume-growing based algorithm. The extracted breast volume was
classified into fibroglandular and fatty regions based on the discriminant analysis method. The parenchymal
kinetic curves within the breast fibroglandular region were extracted and categorized by use of fuzzy c-means
clustering, and various parenchymal kinetic characteristics were extracted from the most enhancing voxels.
Correlation analysis between the computer-extracted percent dense measures and radiologist-noted BIRADS
density ratings yielded a correlation coefficient of 0.76 (p<0.0001). From kinetic analyses, 70% (64/92) of
most enhancing curves showed a persistent curve type and reached peak parenchymal intensity at the last post-contrast
time point, with 89% (82/92) of the most enhancing curves reaching peak intensity at either the 4th or 5th
post-contrast time points. Women with dense breasts (BIRADS 3 and 4) were found to have more
parenchymal enhancement at their peak time point (Ep), with an average Ep of 116.5%, while women
with fatty breasts (BIRADS 1 and 2) demonstrated an average Ep of 62.0%. In conclusion, breast
parenchymal enhancement may be associated with breast density and may be potentially useful as an additional
characteristic for assessing breast cancer risk.
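The fuzzy c-means step used to categorize the parenchymal kinetic curves can be sketched in one dimension, e.g. on peak-enhancement values; the initialization, fuzzifier m, and iteration count below are generic assumptions, not the study's settings.

```python
def fuzzy_cmeans_1d(xs, c=2, m=2.0, iters=50):
    """Minimal 1-D fuzzy c-means: returns cluster centers and the
    membership matrix u[i][j] of point i in cluster j (rows sum to 1)."""
    lo, hi = min(xs), max(xs)
    centers = [lo + (hi - lo) * j / (c - 1) for j in range(c)]
    for _ in range(iters):
        u = []
        for x in xs:
            # distances to each center (guarded against exact zeros)
            d = [max(abs(x - ck), 1e-12) for ck in centers]
            u.append([1.0 / sum((d[j] / d[l]) ** (2.0 / (m - 1.0))
                                for l in range(c))
                      for j in range(c)])
        # membership-weighted center update
        centers = [
            sum(u[i][j] ** m * xs[i] for i in range(len(xs)))
            / sum(u[i][j] ** m for i in range(len(xs)))
            for j in range(c)
        ]
    return centers, u

# toy peak-enhancement fractions for six kinetic curves
xs = [0.1, 0.15, 0.2, 0.95, 1.0, 1.1]
centers, u = fuzzy_cmeans_1d(xs)
```

Unlike hard k-means, each curve receives a graded membership in every cluster, which suits kinetic curves whose shapes vary continuously between curve types.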
KEYWORDS: Mammography, Feature selection, Breast cancer, Image segmentation, Magnetic resonance imaging, Databases, Medical imaging, Computer aided diagnosis and therapy, Current controlled current source, Diagnostics
Since different imaging modalities provide complementary information
regarding the same lesion, combining information from different modalities
may increase diagnostic accuracy. In this study, we investigated the
use of computerized features of lesions imaged via both full-field
digital mammography (FFDM) and dynamic contrast-enhanced magnetic
resonance imaging (DCE-MRI) in the classification of breast lesions.
Using a manually identified lesion location, i.e., a seed point on
FFDM images or an ROI on DCE-MRI images, the computer automatically
segmented mass lesions and extracted a set of features for each lesion.
Linear stepwise feature selection was first performed on each
single modality, yielding one feature subset per modality. These
selected features then served as the input to a second feature selection
procedure for extracting useful information from both modalities.
The selected features were merged by linear discriminant analysis
(LDA) into a discriminant score. Receiver operating characteristic
(ROC) analysis was used to evaluate the performance of the selected
feature subset in the task of distinguishing between malignant and
benign lesions. From a FFDM database with 321 lesions (167 malignant
and 154 benign), and a DCE-MRI database including 181 lesions
(97 malignant and 84 benign), we constructed a multi-modality
dataset with 51 lesions (29 malignant and 22 benign). With
leave-one-out-by-lesion evaluation on the multi-modality dataset,
the mammography-only features yielded an area under the ROC curve
(AUC) of 0.62 ± 0.08 and the DCE-MRI-only features yielded an AUC
of 0.94±0.03. The combination of these two modalities, which
included a spiculation feature from mammography and a kinetic feature
from DCE-MRI, yielded an AUC of 0.94. The improvement of
combining multi-modality information was statistically significant
as compared to the use of mammography alone (p = 0.0001). However,
we failed to show a statistically significant improvement as compared
to DCE-MRI alone with the limited multi-modality dataset (p = 0.22).
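The AUC figures above come from ROC analysis of the discriminant scores. A nonparametric AUC, equivalent to the Mann-Whitney U statistic, can be computed directly from the malignant and benign score lists:

```python
def auc(pos, neg):
    """Nonparametric AUC: the probability that a randomly chosen
    malignant score ranks above a randomly chosen benign score,
    with ties counting one half."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# toy LDA output scores (illustrative values only)
malignant = [0.9, 0.8, 0.7, 0.4]
benign = [0.6, 0.3, 0.2, 0.1]
score = auc(malignant, benign)
```

This rank-based estimate matches the area under the empirical ROC curve, so it needs no choice of operating threshold, which is why AUC is the standard figure of merit in these classification studies.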
Identifying the corresponding image pair of a lesion is an essential
step for combining information from different views of the lesion
to improve the diagnostic ability of both radiologists and CAD systems. Because of the non-rigidity of the breasts and the 2D projective property of mammograms, this task is not trivial. In this study, we present a computerized framework that differentiates the corresponding images
from different views of a lesion from non-corresponding ones. A dual-stage segmentation method, which employs an initial radial gradient index (RGI) based segmentation and an active contour model, was first
applied to extract mass lesions from the surrounding tissues. Then
various lesion features were automatically extracted from each of
the two views of each lesion to quantify the characteristics of margin,
shape, size, texture and context of the lesion, as well as its distance
to nipple. We employed a two-step method to select an effective subset
of features, and combined it with a Bayesian artificial neural network (BANN) to obtain a discriminant
score, which yielded an estimate of the probability that the two images
are of the same physical lesion. ROC analysis was used to evaluate
the performance of the individual features and the selected feature
subset in the task of distinguishing between corresponding and non-corresponding
pairs. By using a FFDM database with 124 corresponding image pairs
and 35 non-corresponding pairs, the distance feature yielded an AUC
(area under the ROC curve) of 0.8 with leave-one-out evaluation
by lesion, and the feature subset, which includes distance feature,
lesion size and lesion contrast, yielded an AUC of 0.86. The improvement
by using multiple features was statistically significant as compared
to single-feature performance (p < 0.001).
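The BANN discriminant itself cannot be reproduced from the abstract; as a hypothetical stand-in, a logistic score on absolute feature differences between the two views captures the same idea: small differences in distance-to-nipple, size, and contrast map to a high probability that the two images show the same physical lesion. The weights and bias below are illustrative only.

```python
import math

def correspondence_score(feats_a, feats_b, weights, bias=2.0):
    """Hypothetical stand-in for the BANN discriminant: a logistic
    score on absolute feature differences between the two views.
    The larger the differences, the lower the probability that the
    two images depict the same lesion."""
    z = bias - sum(w * abs(a - b)
                   for w, a, b in zip(weights, feats_a, feats_b))
    return 1.0 / (1.0 + math.exp(-z))

# features per view: [distance to nipple (mm), contrast, size (mm^2)]
same = correspondence_score([10.0, 0.5, 30.0], [10.0, 0.5, 30.0],
                            [0.1, 1.0, 0.05])
diff = correspondence_score([10.0, 0.5, 30.0], [20.0, 0.2, 60.0],
                            [0.1, 1.0, 0.05])
```

In the actual study the weighting is learned by the BANN from corresponding and non-corresponding training pairs rather than fixed by hand.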
In this paper, we present a two-stage method for the segmentation of breast mass lesions on digitized mammograms. A radial gradient index (RGI) based segmentation method is first used to estimate an initial contour close to the lesion boundary in a computationally efficient manner. Then a region-based active contour algorithm, which minimizes an energy function based on the homogeneities inside and outside of the evolving contour, is applied to refine the contour closer to the lesion boundary. The minimization algorithm solves, by the level set method, the Euler-Lagrange equation that describes the contour evolution. Using a digitized screening-film database with 96 biopsy-proven malignant lesions, we quantitatively compare this two-stage segmentation algorithm with an RGI-based method and a conventional region-growing algorithm by measuring area similarity. At an overlap threshold of 0.30, the new method correctly segments 95% of the lesions, while the prior methods delineate only 83% of the lesions. Our assessment demonstrates that the two-stage segmentation algorithm yields closer agreement with manually contoured lesion boundaries.
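The area similarity used to compare the segmentation algorithms is, under a common definition assumed here, the intersection of the computer and manual masks divided by their union, with a lesion counted as correctly segmented when the value reaches the 0.30 overlap threshold:

```python
def area_similarity(seg, ref):
    """Area overlap between a computer segmentation and a manually
    contoured reference over binary masks: |A ∩ B| / |A ∪ B|."""
    inter = union = 0
    for seg_row, ref_row in zip(seg, ref):
        for a, b in zip(seg_row, ref_row):
            inter += 1 if a and b else 0
            union += 1 if a or b else 0
    return inter / union if union else 1.0

# toy masks: the segmentation misses one reference pixel and
# adds one extra pixel
seg = [[1, 1, 0], [1, 1, 0], [0, 0, 0]]
ref = [[0, 1, 0], [1, 1, 1], [0, 0, 0]]
score = area_similarity(seg, ref)
```

This overlap-over-union form penalizes both under- and over-segmentation symmetrically, which makes a single threshold such as 0.30 a meaningful pass/fail criterion across lesions of different sizes.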