Cervical cancer is the second most common malignancy in women, yet it can be prevented by diagnosing and treating cervical precancerous lesions. Clinically, histopathological image analysis is recognized as the gold standard for diagnosis. However, diagnosing cervical precancerous lesions remains challenging because whole slide images (WSIs) are extremely large and grading is subjective, lacking precise quantification criteria. Most existing computer-aided diagnosis approaches are patch-based: they first learn patch-wise features and then aggregate these local features to infer the final prediction. Cropping pathology images into patches restricts the contextual information available to the network, so it fails to learn clinically relevant structural representations. To address these problems, this paper proposes a novel weakly supervised learning method, the general attention network (GANet), for grading cervical precancerous lesions. A bag-of-instances pattern is introduced to overcome the limitation imposed by the high resolution of whole slide images. Moreover, using two transformer blocks, the proposed model encodes the dependencies among bags and among instances, which capture more informative context and thus produce more discriminative WSI descriptors. Finally, extensive experiments are conducted on a public cervical histology dataset, and the results show that GANet achieves state-of-the-art performance.
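The abstract does not specify GANet's exact architecture, so the following is only an illustrative PyTorch sketch of the two-level, transformer-based bag-of-instances aggregation it describes; the module layout, feature dimensions, and mean-pooling choices are all assumptions.

```python
# Illustrative sketch only: not the authors' implementation. Instance features
# are first contextualized within each bag, bag embeddings are then related to
# each other, and the pooled result serves as a slide-level (WSI) descriptor.
import torch
import torch.nn as nn

class TwoLevelTransformerMIL(nn.Module):
    """Bag-of-instances aggregation with instance- and bag-level transformer blocks."""

    def __init__(self, feat_dim=512, n_heads=8, n_classes=4):
        super().__init__()
        make_layer = lambda: nn.TransformerEncoderLayer(
            d_model=feat_dim, nhead=n_heads, batch_first=True)
        self.instance_block = nn.TransformerEncoder(make_layer(), num_layers=1)
        self.bag_block = nn.TransformerEncoder(make_layer(), num_layers=1)
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, x):
        # x: (n_bags, n_instances, feat_dim) patch features from one WSI
        inst = self.instance_block(x)                # dependencies among instances
        bag_emb = inst.mean(dim=1)                   # one embedding per bag
        bags = self.bag_block(bag_emb.unsqueeze(0))  # dependencies among bags
        wsi_descriptor = bags.mean(dim=1)            # slide-level descriptor
        return self.classifier(wsi_descriptor)       # lesion-grade logits

# Usage example: 16 bags of 64 patch features each, 512-dimensional
logits = TwoLevelTransformerMIL()(torch.randn(16, 64, 512))
```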
Recent advances in functional magnetic resonance imaging (fMRI) techniques and machine learning have shown that it is possible to decode distinct brain states from complex brain activity, a result that has attracted widespread attention. Deep learning is a popular branch of machine learning and has achieved remarkable results in fields such as speech recognition and image recognition. However, applying deep learning to medical image analysis poses many challenges. To address the difficulties of subject-transfer decoding, high-dimensional feature extraction, and slow computation, we propose a deep convolutional decoding (DCD) model. First, a deep convolutional network is trained as a subject-transfer feature extractor on task-fMRI (tfMRI) data. Then, the resulting high-dimensional abstract features are used to identify specific brain cognitive states. Experimental results show that the proposed method achieves higher brain-state decoding accuracy across different subjects than traditional methods.
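The abstract gives no architectural details for the DCD model; the sketch below is only an assumed minimal PyTorch layout of a convolutional feature extractor followed by a linear brain-state classifier, with layer sizes, the 3D-convolution design, and input resolution chosen for illustration.

```python
# Illustrative sketch only: not the authors' DCD architecture. A 3D CNN
# extracts features from a tfMRI volume and a linear head predicts the
# cognitive state; the same extractor is applied to data from all subjects.
import torch
import torch.nn as nn

class ConvDecoder(nn.Module):
    """Convolutional feature extractor + linear head for brain-state decoding."""

    def __init__(self, n_states=7):
        super().__init__()
        self.features = nn.Sequential(               # shared feature extractor
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(32, n_states)    # brain-state logits

    def forward(self, x):
        # x: (batch, 1, X, Y, Z) tfMRI volume, e.g. one activation map per trial
        h = self.features(x).flatten(1)              # abstract feature vector
        return self.classifier(h)

# Usage example: batch of 4 volumes at a hypothetical 64x64x40 resolution
logits = ConvDecoder()(torch.randn(4, 1, 64, 64, 40))
```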