Paper
A Roberta-Seq2Seq based model for Chinese text abstractive summarization
15 June 2022
Junjie Sun, Xia Hou
Proceedings Volume 12285, International Conference on Advanced Algorithms and Neural Networks (AANN 2022); 1228516 (2022) https://doi.org/10.1117/12.2637174
Event: International Conference on Advanced Algorithms and Neural Networks (AANN 2022), 2022, Zhuhai, China
Abstract
To overcome the limitations of existing abstractive summarization algorithms on Chinese text and to improve the feature extraction ability of traditional deep learning models, a generative Chinese text summarization model based on RoBERTa-Seq2Seq is proposed. The pre-trained RoBERTa model learns the dynamic meaning of each word in its specific context, improving the semantic representation of words. On top of the Seq2Seq model, Luong attention is used to further capture global information. Experimental results show that the model's ROUGE scores are higher than those of several traditional Seq2Seq models, indicating that the RoBERTa-Seq2Seq based model can effectively improve the semantic representation of generated summaries for Chinese text and the feature extraction ability of the traditional deep learning model.
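The paper does not publish an implementation, but the architecture the abstract describes (a pre-trained RoBERTa encoder feeding a Seq2Seq decoder augmented with Luong attention) can be sketched as follows. All concrete choices here are illustrative assumptions, not the authors' configuration: the hfl/chinese-roberta-wwm-ext checkpoint, a single-layer GRU decoder, the "general" Luong scoring function, and shared input embeddings.

```python
# A minimal sketch of a RoBERTa-Seq2Seq summarizer with Luong attention.
# Assumptions (not from the paper): Chinese RoBERTa-wwm-ext encoder, GRU
# decoder, Luong "general" attention, teacher-forced training.
import torch
import torch.nn as nn
from transformers import BertModel  # Chinese RoBERTa checkpoints use the BERT architecture


class LuongAttention(nn.Module):
    """Luong 'general' attention: score(h_t, h_s) = h_t^T W h_s."""

    def __init__(self, hidden_size):
        super().__init__()
        self.W = nn.Linear(hidden_size, hidden_size, bias=False)

    def forward(self, decoder_state, encoder_outputs, src_mask):
        # decoder_state: (batch, hidden); encoder_outputs: (batch, src_len, hidden)
        scores = torch.bmm(self.W(encoder_outputs),
                           decoder_state.unsqueeze(2)).squeeze(2)   # (batch, src_len)
        scores = scores.masked_fill(src_mask == 0, float("-inf"))   # ignore padding
        weights = torch.softmax(scores, dim=1)
        context = torch.bmm(weights.unsqueeze(1), encoder_outputs).squeeze(1)
        return context, weights


class RobertaSeq2Seq(nn.Module):
    def __init__(self, name="hfl/chinese-roberta-wwm-ext", vocab_size=21128):
        super().__init__()
        self.encoder = BertModel.from_pretrained(name)
        hidden = self.encoder.config.hidden_size               # 768 for the base model
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.attention = LuongAttention(hidden)
        self.embed = self.encoder.embeddings.word_embeddings   # share encoder embeddings
        self.out = nn.Linear(hidden * 2, vocab_size)

    def forward(self, src_ids, src_mask, tgt_ids):
        # RoBERTa produces context-dependent token vectors for the source text.
        enc = self.encoder(input_ids=src_ids, attention_mask=src_mask).last_hidden_state
        # Teacher forcing: decode the gold summary shifted right.
        dec_out, _ = self.decoder(self.embed(tgt_ids))
        logits = []
        for t in range(dec_out.size(1)):
            context, _ = self.attention(dec_out[:, t], enc, src_mask)
            logits.append(self.out(torch.cat([dec_out[:, t], context], dim=-1)))
        return torch.stack(logits, dim=1)  # (batch, tgt_len, vocab)
```

In training, these logits would be scored against the gold summary with cross-entropy; at inference, decoding would proceed token by token with greedy or beam search.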
© (2022) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Junjie Sun and Xia Hou "A Roberta-Seq2Seq based model for Chinese text abstractive summarization", Proc. SPIE 12285, International Conference on Advanced Algorithms and Neural Networks (AANN 2022), 1228516 (15 June 2022); https://doi.org/10.1117/12.2637174
KEYWORDS
Computer programming, Web 2.0 technologies, Data modeling, Pollution, Performance modeling, Computing systems, Feature extraction