KEYWORDS: Lung, Computed tomography, 3D modeling, 3D image processing, Feature extraction, Education and training, Matrices, Data modeling, Convolution, Performance modeling
As the population ages, early diagnosis and treatment of lung diseases become increasingly important. Accurate assessment of aging-related changes in lung CT images is crucial for the prevention and treatment of related diseases. Traditional methods for lung aging assessment from CT images are time-consuming, subjective, and heavily reliant on the clinical experience of physicians. To address these issues, this paper proposes a lung aging assessment method based on 3D-CA Net. The feature-extraction part of the proposed network consists of four main 3D Convolutional and Composite Multidimensional Attention Modules. The Composite Multidimensional Attention Module combines the advantages of spatial attention and self-attention. Additionally, an improved E-cross-entropy loss function is employed to reduce overfitting and enhance generalization. Experimental results demonstrate that 3D-CA Net significantly outperforms existing methods in accuracy, macro-averaged precision, macro-averaged recall, and macro-averaged F1 score. This work provides a comprehensive solution for aging assessment from lung CT images and offers insights for future advances in medical image analysis.
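The abstract does not specify the internals of the Composite Multidimensional Attention Module; the following is a minimal numpy sketch of one plausible combination of a spatial-attention gate and a self-attention branch over a 3D feature map, with all function names and the residual combination being illustrative assumptions rather than the paper's actual design.

```python
import numpy as np

def spatial_attention(x):
    # x: (C, D, H, W) 3D feature map.
    # Spatial attention: pool across channels, then apply a sigmoid gate.
    avg = x.mean(axis=0, keepdims=True)          # (1, D, H, W)
    mx = x.max(axis=0, keepdims=True)            # (1, D, H, W)
    gate = 1.0 / (1.0 + np.exp(-(avg + mx)))     # sigmoid gate
    return x * gate

def self_attention(x):
    # Flatten spatial positions into tokens of dimension C and apply
    # scaled dot-product self-attention (no learned projections here).
    C = x.shape[0]
    tokens = x.reshape(C, -1).T                  # (N, C), N = D*H*W
    scores = tokens @ tokens.T / np.sqrt(C)      # (N, N)
    scores -= scores.max(axis=1, keepdims=True)  # numerically stable softmax
    w = np.exp(scores)
    w /= w.sum(axis=1, keepdims=True)
    return (w @ tokens).T.reshape(x.shape)

def composite_attention(x):
    # Combine both branches with a residual connection (an assumption;
    # the paper only states that both attention types are utilized).
    return x + spatial_attention(x) + self_attention(x)
```

The output keeps the input's `(C, D, H, W)` shape, so a module like this can be dropped between 3D convolutional stages without changing the surrounding architecture.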
Gastric cancer is a serious health threat, and pathological images are an important criterion in its diagnosis. These images help doctors accurately identify cancerous regions and provide important evidence for clinical decision-making. Thanks to the remarkable achievements of deep learning in image processing, an increasing number of strong image segmentation models have emerged. The Swin-Unet model has achieved great success in image segmentation; however, when applied to gastric cancer pathological section data, its segmentation boundaries appear jagged. We propose two improvements. First, we design an attention connection module to replace the skip connections within the model, enhancing its predictive precision. Second, we add a post-processing module that takes the model's predictions as input and applies a Conditional Random Field (CRF) for further refinement. The enhanced model increases the Dice similarity coefficient (DSC) by 2% and decreases the Hausdorff distance (HD) by 17%, and the jagged-boundary problem is noticeably reduced. Comparative and ablation experiments show that the improved method increases the accuracy of the model's predictions and reduces jaggedness at the boundaries.
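The abstract does not detail the CRF formulation; the sketch below shows the general idea of CRF-style boundary refinement with a simplified mean-field update over a 4-neighbour Potts pairwise term, in plain numpy. A full dense CRF (as in the pydensecrf library) would also use intensity-dependent kernels; everything here, including the weight `w_pair`, is an illustrative assumption.

```python
import numpy as np

def mean_field_smooth(prob, n_iters=5, w_pair=1.0):
    # prob: (H, W, K) per-pixel class probabilities from the network.
    # Each mean-field iteration encourages every pixel to agree with its
    # 4-neighbourhood, which smooths jagged segmentation boundaries.
    unary = -np.log(prob + 1e-8)                 # unary energies
    q = prob.copy()
    for _ in range(n_iters):
        # Sum the neighbours' current class distributions.
        msg = np.zeros_like(q)
        msg[1:, :] += q[:-1, :]
        msg[:-1, :] += q[1:, :]
        msg[:, 1:] += q[:, :-1]
        msg[:, :-1] += q[:, 1:]
        # Potts pairwise term: lower energy for agreeing with neighbours.
        energy = unary - w_pair * msg
        energy -= energy.min(axis=-1, keepdims=True)  # stabilize exp
        q = np.exp(-energy)
        q /= q.sum(axis=-1, keepdims=True)
    return q.argmax(axis=-1)                     # refined label map
```

On a probability map where a single pixel disagrees with a confident uniform background, a few iterations flip the outlier to the surrounding label, which is the same mechanism that smooths jagged predicted boundaries.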
Esophageal cancer is one of the leading causes of mortality worldwide. Early diagnosis is imperative for improving the chances of correct treatment and survival. Pathology examination requires time-consuming slide scanning and often leads to disagreement among pathologists. Computer-aided diagnosis systems are therefore critical for improving diagnostic efficiency and reducing human subjectivity and error. In this paper, a deep learning-based approach is proposed to classify H&E-stained histopathological images of esophageal cancer. The histopathological images are first normalized to correct color variations introduced during slide preparation. Each image patch is then cut into five pieces, and five corresponding features are extracted via convolutional neural networks (CNNs). The five features of each patch are fed into a Long Short-Term Memory (LSTM) model for further feature extraction and integrated into a single representation for the final classification. Experimental results demonstrate that the proposed approach classifies esophageal histopathological images effectively, achieving an accuracy of 84.5%, which outperforms GoogLeNet by 10 percentage points.
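The patch-to-pieces-to-LSTM pipeline can be sketched end to end. The split scheme (four quadrants plus a centre crop), the stand-in feature extractor, and the single-layer LSTM cell below are all illustrative assumptions; the paper only states that each patch is cut into five pieces, that CNNs produce one feature per piece, and that an LSTM integrates the five features.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def split_patch(patch):
    # Cut one patch into five pieces: four quadrants plus a centre crop
    # (the exact split scheme is an assumption).
    h, w = patch.shape[:2]
    h2, w2 = h // 2, w // 2
    return [patch[:h2, :w2], patch[:h2, w2:], patch[h2:, :w2],
            patch[h2:, w2:],
            patch[h2 - h2 // 2:h2 + h2 // 2, w2 - w2 // 2:w2 + w2 // 2]]

def extract_feature(piece, dim=8):
    # Stand-in for the CNN feature extractor: a fixed random projection
    # of simple intensity statistics (hypothetical, for illustration).
    rng = np.random.default_rng(0)               # fixed seed -> same projection
    stats = np.array([piece.mean(), piece.std(), piece.min(), piece.max()])
    return rng.standard_normal((dim, 4)) @ stats

def lstm_integrate(features, hidden=8):
    # Minimal LSTM cell run over the five piece features; the final
    # hidden state is the integrated representation for classification.
    rng = np.random.default_rng(1)
    d = features[0].shape[0]
    W = rng.standard_normal((4 * hidden, d + hidden)) * 0.1  # untrained weights
    b = np.zeros(4 * hidden)
    h, c = np.zeros(hidden), np.zeros(hidden)
    for x in features:
        z = W @ np.concatenate([x, h]) + b
        i, f, o, g = np.split(z, 4)
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
        h = sigmoid(o) * np.tanh(c)
    return h
```

In the actual system the random projection would be a trained CNN and the LSTM weights would be learned, with the final hidden state feeding a softmax classifier.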