Proceedings Volume MIPPR 2023: Parallel Processing of Images and Optimization Techniques; and Medical Imaging, 1308701 (2024) https://doi.org/10.1117/12.3029664
This PDF file contains the front matter associated with SPIE Proceedings Volume 13087, including the Title Page, Copyright information, Table of Contents, and Conference Committee information.
Proceedings Volume MIPPR 2023: Parallel Processing of Images and Optimization Techniques; and Medical Imaging, 1308704 (2024) https://doi.org/10.1117/12.2691200
Developing a data-driven predictive biomarker capable of classifying and characterizing brain-based features would benefit clinical diagnosis and treatment outcomes for smokers. Although deep learning (DL) has been widely used to predict biomarkers across medical imaging owing to its strong ability to learn from unstructured and unlabeled data, no study has yet applied such an approach to structural MRI data to identify or represent smoking-related brain characteristics. For the first time, this work presents a DL model (named contextual attention U-net, CAT) to identify brain characteristics of smokers from T1-weighted structural MRI data. CAT uses U-net as a backbone, in which a spatial attention module is embedded between the third and fourth downsampling sections to overcome the limitations of convolution by capturing long-range contextual interactions, while four lightweight channel attention modules are inserted into the skip connections to improve feature directivity and preserve local details. Experimental results demonstrate that CAT not only uncovers gray/white matter features of smokers more robustly, but also achieves better classification performance in terms of accuracy, precision, recall, and F-score than state-of-the-art models. Compared with three classical machine learning-based algorithms, the average lift ratios (%) of CAT are 25.02, 25.72, 25.60, and 25.62 on these four metrics; compared with seven DL-based approaches, they are 18.46, 18.44, 18.91, and 19.06 (%), respectively. Therefore, CAT can recognize subtle gray/white matter differences between smokers and nonsmokers, and it has the potential to ease the difficulties of biomarker estimation.
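Because the abstract specifies the attention placement quite concretely, a minimal PyTorch sketch of that design may help orient the reader. Everything here (layer widths, the module internals, the pooled classification head, and the use of 2D rather than 3D convolutions for brevity; T1 structural MRI would more naturally use Conv3d) is an illustrative assumption, not the authors' implementation.

import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(inplace=True),
        nn.Conv2d(cout, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(inplace=True))

class ChannelAttention(nn.Module):
    """Lightweight SE-style gate, one per skip connection."""
    def __init__(self, c, r=8):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(c, c // r), nn.ReLU(inplace=True),
                                nn.Linear(c // r, c), nn.Sigmoid())
    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))            # global average pool -> channel weights
        return x * w[:, :, None, None]

class SpatialAttention(nn.Module):
    """Non-local-style self-attention for long-range contextual interactions."""
    def __init__(self, c):
        super().__init__()
        self.q, self.k = nn.Conv2d(c, c // 8, 1), nn.Conv2d(c, c // 8, 1)
        self.v = nn.Conv2d(c, c, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual mixing weight
    def forward(self, x):
        b, c, h, w = x.shape
        q = self.q(x).flatten(2).transpose(1, 2)                 # B x HW x C'
        attn = torch.softmax(q @ self.k(x).flatten(2), dim=-1)   # B x HW x HW
        out = (self.v(x).flatten(2) @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x

class CATSketch(nn.Module):
    def __init__(self, cin=1, ncls=2, base=16):
        super().__init__()
        chs = [base * 2 ** i for i in range(4)]                  # e.g. [16, 32, 64, 128]
        self.encs = nn.ModuleList(conv_block(i, o) for i, o in zip([cin] + chs[:-1], chs))
        self.sa = SpatialAttention(chs[2])        # between the 3rd and 4th downsampling
        self.bott = conv_block(chs[3], chs[3] * 2)
        self.pool = nn.MaxPool2d(2)
        self.cas = nn.ModuleList(ChannelAttention(c) for c in chs)  # four skip gates
        self.ups = nn.ModuleList(nn.ConvTranspose2d(2 * c, c, 2, 2) for c in reversed(chs))
        self.decs = nn.ModuleList(conv_block(2 * c, c) for c in reversed(chs))
        self.head = nn.Linear(chs[0], ncls)       # assumed pooled classification head
    def forward(self, x):
        skips = []
        for i, enc in enumerate(self.encs):
            x = enc(x)
            skips.append(self.cas[i](x))          # channel-gated skip feature
            x = self.pool(x)
            if i == 2:
                x = self.sa(x)                    # long-range context injection
        x = self.bott(x)
        for up, dec, skip in zip(self.ups, self.decs, reversed(skips)):
            x = dec(torch.cat([up(x), skip], dim=1))
        return self.head(x.mean(dim=(2, 3)))      # smoker/nonsmoker logits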
Zhengyang Guo, Zengbiao Yang, Xuanyu Xiang, Yihua Tan
Proceedings Volume MIPPR 2023: Parallel Processing of Images and Optimization Techniques; and Medical Imaging, 1308706 (2024) https://doi.org/10.1117/12.3004379
Morphological examination of bone marrow cells is crucial for diagnosing blood diseases, but manual classification of bone marrow cells is time-consuming and subjective, so an automatic classification method is needed. Although deep learning models such as ResNet50 are commonly used for cell classification, they do not exploit features specific to bone marrow cells, even though the shape of the cell and its nucleus plays a significant role in distinguishing between cell types. In this paper, we propose a Shape and Texture Feature Blending Network (STFB-Net) for bone marrow cell classification based on auxiliary learning. We use ResNeXt50 as the backbone of STFB-Net because of its strong ability to extract texture features from cells. In addition, we propose a Shape Feature Extraction Module (SFEM) to enhance the backbone's ability to capture shape features. SFEM shares part of its parameters with the backbone; it extracts features at multiple scales, up-samples them, and fuses the multi-scale features to predict the shape of cells. Experiments on two bone marrow cell datasets show that the proposed STFB-Net effectively extracts texture and shape features and outperforms other cell classification methods. Visualizing the features extracted by STFB-Net with Grad-CAM further demonstrates the reliability and effectiveness of STFB-Net in extracting cell shape features.
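To show how such an auxiliary shape task can share a backbone, here is a hedged PyTorch sketch. The projection widths and the single-convolution shape head are hypothetical stand-ins for the paper's SFEM, whose exact design the abstract does not give; only the overall pattern (one ResNeXt50 backbone feeding both a classification head and a multi-scale shape-prediction head) follows the description above.

import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnext50_32x4d

class STFBNetSketch(nn.Module):
    def __init__(self, n_classes, width=64):
        super().__init__()
        m = resnext50_32x4d(weights=None)
        self.stem = nn.Sequential(m.conv1, m.bn1, m.relu, m.maxpool)
        self.layers = nn.ModuleList([m.layer1, m.layer2, m.layer3, m.layer4])
        self.cls = nn.Linear(2048, n_classes)     # texture/classification head
        # SFEM-like path: project each scale to a common width, upsample, fuse
        self.proj = nn.ModuleList(nn.Conv2d(c, width, 1)
                                  for c in (256, 512, 1024, 2048))
        self.shape = nn.Conv2d(width, 1, 1)       # predicts a cell/nucleus shape mask
    def forward(self, x):
        x = self.stem(x)
        feats = []
        for layer in self.layers:
            x = layer(x)
            feats.append(x)                       # multi-scale features, shared weights
        logits = self.cls(F.adaptive_avg_pool2d(x, 1).flatten(1))
        size = feats[0].shape[-2:]
        fused = sum(F.interpolate(p(f), size=size, mode='bilinear',
                                  align_corners=False)
                    for p, f in zip(self.proj, feats))
        return logits, self.shape(fused)          # class scores + shape map

Training would combine the two objectives as an auxiliary loss, e.g. loss = F.cross_entropy(logits, y) + lam * F.binary_cross_entropy_with_logits(mask_pred, mask_gt), where the weight lam is a free hyperparameter.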
Proceedings Volume MIPPR 2023: Parallel Processing of Images and Optimization Techniques; and Medical Imaging, 1308707 (2024) https://doi.org/10.1117/12.3005340
In minimally invasive surgery (MIS), precise measurement of distances to human organs is critical; even a slight wobble of an instrument can introduce substantial error. Digital 3D reconstruction can help the surgeon confirm true distances. Because depth data are difficult to acquire and annotate at scale, especially inside the human body, much research has focused on self-supervised networks. Most of these rely on the widely adopted assumption that image brightness remains constant across adjacent frames, which does not hold inside the body, where lighting is unstable in narrow, complicated scenes. To solve this problem, this paper proposes a high-precision depth reconstruction method for endoscopic surgical scenes based on AFNet. First, to overcome the limitations of the constant-illumination assumption, we extract illumination-invariant local features and introduce quantized illumination-invariant feature descriptors into the loss function, reducing the impact of illumination changes during supervision. Second, exploiting the fact that the light source and the lens move together in an endoscope, we establish a model relating brightness variation to depth prediction, which helps the network grasp image context and produce smoother depth. Experiments show that our method improves the accuracy of depth estimation in endoscopic environments compared with baseline methods, and the reconstruction results on the ENDOVIS laparoscopy datasets and the ENDOSLAM gastrointestinal endoscopy datasets are significantly improved.
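As an illustration of replacing brightness constancy with an illumination-invariant term, the following PyTorch sketch compares locally normalized images instead of raw intensities. Local mean/contrast normalization is one common illumination-invariant descriptor and is offered here only as an assumption; it is not claimed to be AFNet's actual loss or the paper's descriptor.

import torch
import torch.nn.functional as F

def local_normalize(img, k=7, eps=1e-5):
    """Remove local mean and contrast with a k x k box filter (per channel)."""
    pad = k // 2
    w = torch.ones(img.shape[1], 1, k, k, device=img.device) / (k * k)
    mu = F.conv2d(img, w, padding=pad, groups=img.shape[1])
    var = F.conv2d(img ** 2, w, padding=pad, groups=img.shape[1]) - mu ** 2
    return (img - mu) / (var.clamp_min(0).sqrt() + eps)

def illumination_invariant_loss(warped_src, target):
    """L1 distance between illumination-normalized images; insensitive to the
    smooth brightness changes a moving endoscope light source produces."""
    return (local_normalize(warped_src) - local_normalize(target)).abs().mean()

In a self-supervised depth pipeline, this term would replace the raw photometric loss: the source frame is warped into the target view using the predicted depth and pose, and the loss is computed as illumination_invariant_loss(warped_src, target).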