Nowadays, the availability of different types of biomedical digital data offers many opportunities to investigate the relationships between different modalities and thus to develop a more comprehensive understanding of complex diseases such as cancer. In this paper, we propose a multi-modal model, called deep modality association learning (DMAL), that maps immune cell sequencing patterns to morphological tissue features of whole slide images (WSIs) in an embedding space. Useful information is extracted from T-cell receptor (TCR) sequences to guide the training process. DMAL maps the TCR features to the morphology features in histopathology images, which in turn enables the model to learn association features between the two modalities. The discriminative power of the WSI-TCR association features has been assessed by classifying samples with different cancer subtypes. The conducted experiments show that DMAL generates more discriminative features than features obtained from single-modal data. In addition, DMAL has been used to predict TCR information from histopathology image representations without needing the actual TCR sequencing data.
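The abstract does not specify how the two modalities are tied together in the embedding space. The sketch below shows one common way such cross-modal association is learned: two encoders project WSI and TCR features into a shared space, and a symmetric contrastive loss pulls paired embeddings together. The encoder architectures, feature dimensions, and InfoNCE-style loss are illustrative assumptions, not DMAL's actual design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModalityEncoder(nn.Module):
    """Projects one modality's features into a shared embedding space."""
    def __init__(self, in_dim: int, embed_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256),
            nn.ReLU(),
            nn.Linear(256, embed_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.normalize(self.net(x), dim=-1)  # unit-norm embeddings

def association_loss(wsi_emb: torch.Tensor, tcr_emb: torch.Tensor,
                     temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE: paired WSI/TCR embeddings attract, unpaired repel."""
    logits = wsi_emb @ tcr_emb.t() / temperature
    targets = torch.arange(logits.size(0))  # i-th WSI pairs with i-th TCR
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Toy batch of 8 samples; the feature sizes are hypothetical placeholders.
wsi_feats = torch.randn(8, 512)  # e.g., pooled patch features from a WSI
tcr_feats = torch.randn(8, 64)   # e.g., summary features of a TCR repertoire

wsi_encoder = ModalityEncoder(512)
tcr_encoder = ModalityEncoder(64)
loss = association_loss(wsi_encoder(wsi_feats), tcr_encoder(tcr_feats))
print(loss.item())
```

Once such a shared space is trained, predicting TCR information from a WSI alone, as the abstract describes, reduces to encoding the image and reading off its nearest TCR embeddings.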
Purpose: The latest generation of scanners can digitize histopathology glass slides for computerized image analysis. These images contain valuable information for diagnostic and prognostic purposes. Consequently, the availability of high digital magnifications such as 20× and 40× is commonly expected when scanning the slides. Image acquisition therefore typically generates gigapixel high-resolution images, at times as large as 100,000 × 100,000 pixels. Naturally, the storage and processing of such huge files may be subject to severe computational bottlenecks. As a result, there is an urgent need for techniques that can operate on lower magnification levels yet produce results on par with those obtained at high magnification levels.
Approach: Over the past decade, the digital enhancement of image resolution has been addressed by the concept of super resolution (SR). In addition, deep learning has offered state-of-the-art results for increasing image resolution after acquisition. In this study, multiple deep learning networks designed for image SR are trained and assessed for the histopathology domain.
Results: We report quantitative and qualitative comparisons of the results using publicly available cancer images to shed light on the benefits and challenges of deep learning for extrapolating image resolution in histopathology. Three pathologists evaluated the results to assess the quality and diagnostic value of the generated SR images.
Conclusions: Pixel-level information, including structures and textures in histopathology images, is learnable by deep networks; hence, improving the resolution of scanned slides is possible by training appropriate networks. Different SR networks may perform best for different cancer sites and subtypes.
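The abstract compares several SR networks without naming them. For orientation, here is a minimal sketch in the style of one of the simplest SR architectures (SRCNN-like, with a residual refinement): the low-magnification patch is upsampled bicubically and a small convolutional network predicts the missing detail. The layer sizes and the 2× scale are assumptions for illustration, not the networks evaluated in the study.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleSRNet(nn.Module):
    """SRCNN-style network: upsample a low-res patch, then refine it."""
    def __init__(self, scale: int = 2):
        super().__init__()
        self.scale = scale
        self.refine = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=9, padding=4),   # feature extraction
            nn.ReLU(),
            nn.Conv2d(64, 32, kernel_size=5, padding=2),  # non-linear mapping
            nn.ReLU(),
            nn.Conv2d(32, 3, kernel_size=5, padding=2),   # reconstruction
        )

    def forward(self, lr: torch.Tensor) -> torch.Tensor:
        up = F.interpolate(lr, scale_factor=self.scale,
                           mode="bicubic", align_corners=False)
        return up + self.refine(up)  # residual correction of the upsampled image

# Toy usage: a 64x64 low-magnification patch extrapolated to 128x128.
model = SimpleSRNet(scale=2)
low_res = torch.rand(1, 3, 64, 64)
high_res = model(low_res)
print(high_res.shape)  # torch.Size([1, 3, 128, 128])
```

Training such a network would typically minimize an L1 or MSE loss between its output and the true high-magnification patch scanned from the same tissue region.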
Quantifying the accuracy of segmentation and manual delineation of organs, tissue types, and tumors in medical images is a necessary measurement that suffers from multiple problems. One major shortcoming of all accuracy measures is that they neglect the anatomical significance or relevance of different zones within a given segment. Hence, existing accuracy metrics measure the overlap of a given segment with a ground truth without any anatomical discrimination inside the segment. For instance, if we regard the rectal wall or urethral sphincter as anatomical zones, then current accuracy measures ignore their significance when they are applied to assess the quality of prostate gland segments. In this paper, we propose an anatomy-aware measurement scheme for the segmentation accuracy of medical images. The idea is to create a “master gold” based on a consensus shape containing not just the outline of the segment but also the outlines of the internal zones, if they exist or are relevant. To apply this new approach to accuracy measurement, we introduce anatomy-aware extensions of both the Dice coefficient and the Jaccard index and investigate their effect using 500 synthetic prostate ultrasound images with 20 different segments for each image. We show that through anatomy-sensitive calculation of segmentation accuracy, namely by considering relevant anatomical zones, not only can the measurement of individual users change, but the ranking of users' segmentation skills may also require reordering.
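The abstract does not reproduce the exact anatomy-aware formulation. One plausible reading of the idea is a zone-weighted Dice coefficient, in which each pixel counts with the weight of the anatomical zone it falls in, so that disagreements in significant zones (e.g., near the rectal wall) depress the score more. The zone labels and weights below are hypothetical.

```python
import numpy as np

def zone_weighted_dice(seg: np.ndarray, gold: np.ndarray,
                       zone_map: np.ndarray, zone_weights: dict) -> float:
    """Dice coefficient where each pixel is weighted by the anatomical
    significance of its zone (zone_map holds integer zone labels)."""
    w = np.zeros_like(zone_map, dtype=float)
    for zone, weight in zone_weights.items():
        w[zone_map == zone] = weight
    seg, gold = seg.astype(bool), gold.astype(bool)
    intersection = np.sum(w * (seg & gold))
    denom = np.sum(w * seg) + np.sum(w * gold)
    return 2.0 * intersection / denom if denom > 0 else 1.0

# Toy 1D example: zone 2 weighs twice as much as zone 1, so the single
# missed pixel in zone 2 costs more than it would under standard Dice.
seg  = np.array([1, 1, 1, 0, 0])
gold = np.array([1, 1, 1, 1, 0])
zone_map     = np.array([1, 1, 2, 2, 2])
zone_weights = {1: 1.0, 2: 2.0}
print(zone_weighted_dice(seg, gold, zone_map, zone_weights))  # 0.8
```

For this toy case the standard Dice coefficient is 2·3/(3+4) ≈ 0.857, while the weighted version yields 0.8; applied across many users, such shifts are what can reorder a ranking of segmentation skills, as the abstract notes.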