Precise and reproducible glomerular quantification has long been pursued in renal pathology, in both research and clinical practice. Currently, identifying and reconstructing large numbers of glomeruli in 3D from 2D serial sections on whole slide images (WSIs) is a labor-intensive, time-consuming manual task, and the accuracy of serial-section analysis is limited by the 2D context. Moreover, no existing approach presents 3D glomerular visualizations for human examination (volume calculation, 3D phenotype analysis, etc.). In this paper, we introduce an end-to-end, holistic deep learning method that performs automatic detection, segmentation, and multi-object tracking (MOT) of individual glomeruli, with large-scale registered glomerular assessment in a 3D context on WSIs. The inputs are high-resolution WSIs; the outputs are 3D glomerular reconstructions and volume estimates. The pipeline achieves 81.8 IDF1 and 69.1 MOTA in MOT performance, while the proposed volume estimation achieves a 0.84 Spearman correlation coefficient with manual annotation. The end-to-end MAP3D+ pipeline thus provides an approach for extensive 3D glomerular reconstruction and volume quantification from 2D serial-section WSIs.
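The abstract does not spell out the volume estimator, so the following is a minimal sketch under stated assumptions: per-section binary masks for one MOT-tracked glomerulus, a known WSI pixel size, and a known section spacing, combined by Cavalieri-style area summation (an assumption, not necessarily the paper's exact method):

```python
import numpy as np

def glomerulus_volume_um3(masks, pixel_size_um, section_thickness_um):
    """Cavalieri-style volume estimate for one tracked glomerulus.

    masks: list of 2D boolean arrays, one per serial section, each the
           segmentation of the same (MOT-tracked) glomerulus on its WSI.
    pixel_size_um: physical pixel size at the WSI level used (microns).
    section_thickness_um: spacing between consecutive sections (microns).
    """
    # Cross-sectional area of the glomerulus on each section, in um^2.
    areas_um2 = [m.sum() * pixel_size_um ** 2 for m in masks]
    # Volume = sum of section areas times the inter-section spacing.
    return float(np.sum(areas_um2) * section_thickness_um)
```

With, e.g., 0.25 um pixels at 40x magnification and 3 um section spacing, per-glomerulus estimates produced this way could then be compared against manual annotation via a Spearman correlation, as reported in the abstract.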
Glomeruli are clusters of capillaries that filter the blood to form urine, thereby excreting waste and maintaining fluid and acid-base balance. The detection and characterization of glomeruli are key elements in diagnostic and experimental nephropathology. Although machine vision has already advanced the detection, classification, and prognostication of disease in radiology and oncology, renal pathology is just entering the digital imaging era. However, developing quantitative machine learning approaches (e.g., self-supervised deep learning) that characterize glomerular lesions such as global glomerulosclerosis (GGS) from whole slide images (WSIs) typically requires large-scale heterogeneous image data, which is resource-intensive for individual labs. In this study, we assess the feasibility of fine-grained GGS characterization via large-scale web image mining (e.g., from journals, search engines, and websites) and self-supervised deep learning. Three types of GGS were assessed: solidified (S-GGS, associated with hypertension-related injury), disappearing (D-GGS, a further end result of S-GGS becoming contiguous with fibrotic interstitium), and obsolescent (O-GGS, nonspecific GGS increasing with aging). We employed the SimSiam network as the baseline self-supervised contrastive learning method. By deploying our previously developed compound figure separation approach, we obtained 30,000 unannotated glomerular images via web image mining to train the SimSiam network. The resulting fine-grained GGS classification model achieved superior performance compared with baseline methods.
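For readers unfamiliar with the baseline, here is a minimal sketch of SimSiam's two-branch design with stop-gradient; the backbone choice (ResNet-50) and the omission of SimSiam's full 3-layer projection MLP are simplifying assumptions, not details from the paper:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision

class SimSiam(nn.Module):
    def __init__(self, dim=2048, pred_dim=512):
        super().__init__()
        # Backbone encoder; ResNet-50 is an assumed choice here.
        self.encoder = torchvision.models.resnet50(num_classes=dim)
        # 2-layer prediction MLP, following the SimSiam design.
        self.predictor = nn.Sequential(
            nn.Linear(dim, pred_dim), nn.BatchNorm1d(pred_dim),
            nn.ReLU(inplace=True), nn.Linear(pred_dim, dim))

    def forward(self, x1, x2):
        # x1, x2: two random augmentations of the same glomerular image.
        z1, z2 = self.encoder(x1), self.encoder(x2)
        p1, p2 = self.predictor(z1), self.predictor(z2)
        # Stop-gradient on the target branch is SimSiam's key ingredient.
        return p1, p2, z1.detach(), z2.detach()

def simsiam_loss(p1, p2, z1, z2):
    # Symmetrized negative cosine similarity between predictions
    # and the (detached) representations of the other view.
    return -(F.cosine_similarity(p1, z2).mean()
             + F.cosine_similarity(p2, z1).mean()) / 2
```

Because the loss needs no negative pairs, such a network can be trained directly on the 30,000 unannotated web-mined glomerular images described above.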
Multi-modal learning (e.g., integrating pathological images with genomic features) tends to improve the accuracy of cancer diagnosis and prognosis compared with learning from a single modality. However, missing data is a common problem in clinical practice: not every patient has all modalities available. Most previous works simply discarded samples with missing modalities, which loses the information in those data and increases the likelihood of overfitting. In this work, we generalize multi-modal learning for cancer diagnosis with the capacity to deal with missing data, using histological images and genomic data. Our integrated model can utilize all available data from patients with both complete and partial modalities. Experiments on the public TCGA-GBM and TCGA-LGG datasets show that data with missing modalities can contribute to multi-modal learning, improving model performance in grade classification of glioma.
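The abstract does not describe the fusion architecture, so the following is only a sketch of one common way to let partial-modality samples contribute: a learned placeholder embedding substituted where the genomic modality is absent. The feature dimensions, the placeholder mechanism, and the three-class grade head are all assumptions for illustration:

```python
import torch
import torch.nn as nn

class MissingAwareFusion(nn.Module):
    """Sketch of multi-modal fusion that tolerates a missing modality,
    assuming precomputed image and genomic feature vectors."""

    def __init__(self, img_dim=512, gen_dim=80, hid=256, n_classes=3):
        super().__init__()
        self.img_enc = nn.Sequential(nn.Linear(img_dim, hid), nn.ReLU())
        self.gen_enc = nn.Sequential(nn.Linear(gen_dim, hid), nn.ReLU())
        # Learned placeholder used when the genomic modality is absent.
        self.gen_missing = nn.Parameter(torch.zeros(hid))
        self.head = nn.Linear(2 * hid, n_classes)

    def forward(self, img_feat, gen_feat, gen_present):
        # gen_present: boolean tensor of shape (B,), True if genomics exist.
        h_img = self.img_enc(img_feat)
        h_gen = self.gen_enc(torch.nan_to_num(gen_feat))
        # Swap in the placeholder embedding for samples missing genomics,
        # so partial-modality patients still yield a training signal.
        mask = gen_present.float().unsqueeze(-1)  # (B, 1)
        h_gen = mask * h_gen + (1 - mask) * self.gen_missing
        return self.head(torch.cat([h_img, h_gen], dim=-1))
```

This design keeps every sample in the batch, complete or not, rather than discarding partial-modality cases.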
Contrastive learning, a recent family of self-supervised learning methods, leverages large-scale unannotated data for pathological image analysis. However, state-of-the-art contrastive learning methods (e.g., SimCLR, BYOL) typically require more expensive computational hardware (GPUs with large memory) than traditional supervised learning in order to achieve large training batch sizes. Fortunately, recent advances in the machine learning community provide multiple approaches to reduce GPU memory usage, such as (1) activation-compressed training, (2) in-place activation, and (3) mixed precision training. Yet, such approaches have so far been deployed independently, without systematic assessment for contrastive learning. In this work, we apply these memory-efficient approaches within a self-supervised framework. The contribution of this paper is three-fold: (1) we combine previously independent GPU memory-efficient methods with a self-supervised learning framework; (2) our experiments maximize memory efficiency under limited computational resources (a single GPU); and (3) the self-supervised learning framework with GPU memory-efficient methods allows a single GPU to triple the batch size that would typically require three GPUs. The experimental results show that, enabled by the memory-efficient methods on a single GPU, a contrastive learning model trained with a larger batch size achieves higher accuracy.
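Of the three memory savers named above, mixed precision training is the most standard; a minimal PyTorch sketch follows, assuming a contrastive model that takes two augmented views and returns its own loss (the optimizer choice and learning rate are illustrative, not from the paper):

```python
import torch

def train_mixed_precision(model, loader, epochs=1, lr=0.05):
    """Mixed precision training loop: fp16 activations roughly halve
    activation memory, leaving room for a larger contrastive batch."""
    model = model.cuda()
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    # GradScaler rescales the loss so fp16 gradients do not underflow.
    scaler = torch.cuda.amp.GradScaler()
    for _ in range(epochs):
        for x1, x2 in loader:  # two augmented views per sample
            opt.zero_grad(set_to_none=True)
            with torch.cuda.amp.autocast():  # fp16/fp32 mixed forward pass
                loss = model(x1.cuda(), x2.cuda())
            scaler.scale(loss).backward()
            scaler.step(opt)
            scaler.update()
```

Activation-compressed training and in-place activation attack the same bottleneck (stored activations) by different means, which is why the three techniques can be stacked within one framework.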