Whole Slide Image (WSI) analysis plays a pivotal role in computer-aided diagnosis and disease prognosis in digital pathology. While the emergence of deep learning and self-supervised learning (SSL) techniques helps capture relevant information in WSIs, relying directly on deep features overlooks essential domain-specific information captured by traditional handcrafted features. To address this issue, we propose fusing handcrafted and deep features within the multiple instance learning (MIL) framework for WSI classification. Inspired by advancements in transformers, we propose a novel cross-attention fusion mechanism, “CA-Fuse-MIL,” to learn complementary information from handcrafted and deep features. We demonstrate that cross-attention fusion outperforms WSI classification using either handcrafted or deep features alone. On the TCGA Lung Cancer dataset, our proposed fusion technique boosts accuracy by up to 5.21% and 1.56% over two different deep-feature baselines. We also explore a variant of CA-Fuse-MIL that utilizes multiple cross-attention layers.
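The abstract does not specify the exact CA-Fuse-MIL architecture, so the following is only a minimal sketch of the general idea: projecting handcrafted and deep patch features into a shared space, letting one stream attend to the other via cross-attention, and pooling the fused instance features into a slide-level prediction. All module names, dimensions, and the attention-based pooling choice are illustrative assumptions.

```python
# Hedged sketch of cross-attention fusion of handcrafted and deep features
# in a MIL setting. Dimensions and layer choices are assumptions, not the
# published CA-Fuse-MIL design.
import torch
import torch.nn as nn


class CrossAttentionFusion(nn.Module):
    """Deep patch features attend to handcrafted patch features."""

    def __init__(self, deep_dim=1024, hand_dim=128, embed_dim=256, num_heads=4):
        super().__init__()
        # Project both streams into a shared embedding space (assumed sizes).
        self.deep_proj = nn.Linear(deep_dim, embed_dim)
        self.hand_proj = nn.Linear(hand_dim, embed_dim)
        # Queries from deep features, keys/values from handcrafted features.
        self.cross_attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(embed_dim)
        # Attention-based MIL pooling over patches, then a slide-level head.
        self.pool_score = nn.Linear(embed_dim, 1)
        self.classifier = nn.Linear(embed_dim, 2)

    def forward(self, deep_feats, hand_feats):
        # deep_feats: (batch, n_patches, deep_dim)
        # hand_feats: (batch, n_patches, hand_dim)
        q = self.deep_proj(deep_feats)
        kv = self.hand_proj(hand_feats)
        fused, _ = self.cross_attn(q, kv, kv)            # cross-attention
        fused = self.norm(fused + q)                     # residual connection
        weights = torch.softmax(self.pool_score(fused), dim=1)
        slide_embedding = (weights * fused).sum(dim=1)   # weighted MIL pooling
        return self.classifier(slide_embedding)


# Usage with a dummy bag of patch features (one WSI = one bag).
model = CrossAttentionFusion()
deep = torch.randn(1, 500, 1024)   # e.g. SSL patch embeddings
hand = torch.randn(1, 500, 128)    # e.g. texture/morphology descriptors
logits = model(deep, hand)
print(logits.shape)                # torch.Size([1, 2])
```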
KEYWORDS: Tissues, Image classification, Pathology, Color normalization, Performance modeling, Data modeling, Biological samples, Visual process modeling, Singular value decomposition
Machine learning algorithms have made strides in the classification of metastasis in tissue histopathology images. However, one of the major roadblocks faced by these algorithms is the difference in staining of tissue samples taken and scanned at different laboratories. Stain normalization standardizes the color and intensity of stained tissue patches to a reference image, thereby bringing stains from different laboratories into a similar domain. We compare different stain normalization methods in conjunction with color augmentation to evaluate the performance of combinations of techniques for binary classification of metastatic tissue slides taken from lymph nodes. We examine the accuracy, precision, recall, F1 score, and AUROC of a convolutional neural network (CNN) model trained and tested on images that have been normalized using the Macenko and Vahadane methods. Six different configurations combining color augmentation and stain normalization were analyzed on the PatchCamelyon (PCam) dataset, consisting of over 300K images. Our analysis showed that the Macenko method of stain normalization improves model performance; data augmentation likewise shows general improvement, as the increased diversity in the data counters overfitting. Model accuracy with Macenko normalization and color augmentation improved over baseline by 1.59%, and the F1 score with Macenko improved over baseline by 2.50%. The best-performing combination was color augmentation with the Macenko method.
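As a rough illustration of one evaluated configuration (Macenko normalization followed by color augmentation before CNN training), the sketch below uses the third-party staintools package and torchvision transforms as an assumed toolchain; the file paths, jitter strengths, and ResNet-18 backbone are placeholders and are not taken from the paper.

```python
# Hedged sketch: Macenko stain normalization + color augmentation for a
# binary metastasis classifier. Paths, hyperparameters, and the backbone
# are illustrative assumptions.
import staintools
import torchvision.transforms as T
from torchvision.models import resnet18

# Fit the Macenko normalizer to a reference patch (path is a placeholder).
reference = staintools.read_image("reference_patch.png")
reference = staintools.LuminosityStandardizer.standardize(reference)
normalizer = staintools.StainNormalizer(method="macenko")
normalizer.fit(reference)

def normalize_patch(rgb_array):
    """Map a patch's stain appearance onto the reference image."""
    rgb_array = staintools.LuminosityStandardizer.standardize(rgb_array)
    return normalizer.transform(rgb_array)

# Color augmentation applied after normalization (strengths are assumptions).
augment = T.Compose([
    T.ToPILImage(),
    T.ColorJitter(brightness=0.1, contrast=0.1, saturation=0.1, hue=0.05),
    T.RandomHorizontalFlip(),
    T.ToTensor(),
])

# Generic CNN stand-in for the binary classifier described in the abstract.
model = resnet18(num_classes=2)

# Example forward pass on one normalized, augmented patch (placeholder path).
patch = staintools.read_image("pcam_patch.png")
x = augment(normalize_patch(patch)).unsqueeze(0)  # (1, 3, 96, 96) for PCam patches
logits = model(x)
```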