The rich contextual information and multiscale ground-object information in remote sensing images are crucial for improving semantic segmentation accuracy. We therefore propose a remote sensing image semantic segmentation method that integrates multilevel spatial-channel attention with multi-scale dilated convolution, effectively addressing the poor segmentation performance on small target objects in remote sensing images. The method builds a multilevel feature fusion structure that combines deep semantic features with shallow-level details to generate multiscale feature maps. We then introduce serially combined dilated convolutions into each layer of the atrous spatial pyramid pooling (ASPP) structure to reduce the loss of small-target information. Finally, a convolutional conditional random field is used to model spatial and edge context, improving the model's ability to extract fine details. We demonstrate the effectiveness of the model on three public datasets. Quantitatively, we evaluate four metrics: F1 score, overall accuracy (OA), Intersection over Union (IoU), and mean Intersection over Union (mIoU). On the GID dataset, the F1 score, OA, and mIoU reach 87.27, 87.80, and 77.70, respectively, surpassing most mainstream semantic segmentation networks.
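The abstract does not include code, but the idea of replacing each parallel ASPP branch with dilated convolutions combined in series can be sketched concretely. The following is a minimal PyTorch illustration, not the paper's implementation; the class names, channel sizes, and dilation rates (`SerialASPP`, `rates=((1, 2), (2, 4), (4, 8))`, etc.) are all assumptions chosen for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SerialDilatedBranch(nn.Module):
    """One ASPP branch: two dilated 3x3 convolutions applied in series.

    Stacking small-dilation convolutions samples the receptive field more
    densely than a single large-dilation convolution, which is one plausible
    way to reduce the loss of small-target information the abstract describes.
    """
    def __init__(self, in_ch, out_ch, dilations=(1, 2)):
        super().__init__()
        layers, ch = [], in_ch
        for d in dilations:
            layers += [
                nn.Conv2d(ch, out_ch, 3, padding=d, dilation=d, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            ]
            ch = out_ch
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return self.body(x)

class SerialASPP(nn.Module):
    """ASPP variant whose parallel branches each use serial dilated convs."""
    def __init__(self, in_ch=2048, out_ch=256,
                 rates=((1, 2), (2, 4), (4, 8))):  # assumed rates
        super().__init__()
        self.branches = nn.ModuleList(
            SerialDilatedBranch(in_ch, out_ch, r) for r in rates
        )
        # 1x1 branch and image-level pooling branch, as in standard ASPP.
        self.conv1x1 = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        )
        self.pool = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(in_ch, out_ch, 1, bias=False),
            nn.ReLU(inplace=True),
        )
        self.project = nn.Sequential(
            nn.Conv2d(out_ch * (len(rates) + 2), out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        h, w = x.shape[-2:]
        feats = [b(x) for b in self.branches] + [self.conv1x1(x)]
        g = F.interpolate(self.pool(x), size=(h, w),
                          mode="bilinear", align_corners=False)
        return self.project(torch.cat(feats + [g], dim=1))
```

The serial branches keep the multi-rate, parallel layout of ASPP while giving each branch a denser effective sampling pattern, which is the trade-off the abstract argues helps small objects.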
An image caption generation model with an adaptive attention mechanism is proposed to address the weakness of image description models that rely only on local image features. Within an encoder-decoder framework, local and global image features are extracted at the encoder using the Inception V3 and VGG19 network models, respectively. Because the proposed adaptive attention mechanism automatically identifies and weighs the importance of local and global image information, the decoder can generate sentences that describe the image more intuitively and accurately. The proposed model is trained and tested on the Microsoft COCO dataset. Experimental results show that, compared with an image caption model based only on local features, the proposed method extracts richer and more complete information from the image and generates more accurate sentences.
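As a rough illustration of how a decoder might adaptively weight local (region) features against a global feature at each decoding step, here is a minimal PyTorch sketch. The gating formulation, the class name `AdaptiveFusionAttention`, and the assumption that both feature streams have already been projected to a common dimension are illustrative choices, not the paper's exact equations.

```python
import torch
import torch.nn as nn

class AdaptiveFusionAttention(nn.Module):
    """Per-step soft choice between attended local features and a global feature.

    `local` could come from Inception V3 feature maps (one vector per region)
    and `global_feat` from a VGG19 fully connected layer, each projected to a
    shared dimension `feat_dim` beforehand (an assumption of this sketch).
    """
    def __init__(self, feat_dim, hidden_dim):
        super().__init__()
        self.region_attn = nn.Linear(feat_dim + hidden_dim, 1)  # scores each region
        self.gate = nn.Linear(feat_dim * 2 + hidden_dim, 1)     # local-vs-global gate

    def forward(self, local, global_feat, h):
        # local: (B, R, D) region features; global_feat: (B, D); h: (B, H) decoder state
        B, R, _ = local.shape
        h_exp = h.unsqueeze(1).expand(B, R, h.size(-1))
        scores = self.region_attn(torch.cat([local, h_exp], dim=-1)).squeeze(-1)
        alpha = torch.softmax(scores, dim=-1)                    # (B, R) region weights
        attended_local = (alpha.unsqueeze(-1) * local).sum(dim=1)  # (B, D)
        # beta near 1 favors local detail; near 0 favors global context.
        beta = torch.sigmoid(self.gate(
            torch.cat([attended_local, global_feat, h], dim=-1)))
        return beta * attended_local + (1 - beta) * global_feat
```

The learned gate `beta` is what makes the attention "adaptive": the decoder can lean on region detail when generating object words and fall back to the global feature for scene-level words.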