Paper
15 October 2021
A hierarchical reinforcement learning method on multi UCAV air combat
Yabin Wang, Tianshu Jiang, Youjiang Li
Proceedings Volume 11933, 2021 International Conference on Neural Networks, Information and Communication Engineering; 119330K (2021) https://doi.org/10.1117/12.2615268
Event: 2021 International Conference on Neural Networks, Information and Communication Engineering, 2021, Qingdao, China
Abstract
In recent years, unmanned combat aerial vehicle (UCAV) techniques have become a hot research topic. Many researchers are studying how to use UCAVs to fulfill missions and defend against enemies on simulation platforms, and different AI agents have been constructed to control virtual UCAVs to perform tasks on these platforms. Rule-based AI depends heavily on human knowledge, lacks flexibility, and cannot adapt to a changing environment. Reinforcement-learning-based AI has an advantage over rule-based AI because it depends less on human knowledge. In this paper, a hierarchical reinforcement learning method for multi-UCAV air combat is proposed and evaluated on a simulation platform. The experimental results show that the hierarchical approach can outperform a state-of-the-art air combat method.
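The abstract does not describe the hierarchy itself, but a common way to structure such an agent is an options-style design: a high-level policy selects a macro-action (for example pursue, evade, or fire) and a low-level controller turns it into flight commands. The sketch below is only an illustration of that general idea; the macro-action names, the tabular Q-learning update, and the heading controller are assumptions for demonstration, not the architecture used in the paper.

    import random
    from dataclasses import dataclass, field

    MACRO_ACTIONS = ["pursue", "evade", "fire"]  # assumed macro-action set

    @dataclass
    class HighLevelPolicy:
        # Epsilon-greedy tabular Q-learning over macro-actions (illustrative only).
        epsilon: float = 0.1
        q: dict = field(default_factory=dict)  # (state, macro_action) -> value

        def select(self, state):
            if random.random() < self.epsilon:
                return random.choice(MACRO_ACTIONS)
            return max(MACRO_ACTIONS, key=lambda a: self.q.get((state, a), 0.0))

        def update(self, state, macro, reward, next_state, alpha=0.1, gamma=0.99):
            best_next = max(self.q.get((next_state, a), 0.0) for a in MACRO_ACTIONS)
            old = self.q.get((state, macro), 0.0)
            self.q[(state, macro)] = old + alpha * (reward + gamma * best_next - old)

    def low_level_control(macro, own_heading, target_bearing):
        # Map a macro-action to a heading command for one UCAV (purely illustrative).
        if macro == "pursue":
            return target_bearing                    # turn toward the target
        if macro == "evade":
            return (target_bearing + 180.0) % 360.0  # turn away from the target
        return own_heading                           # "fire": hold current heading

    # Minimal usage: one decision step for a single UCAV in a discretized state.
    policy = HighLevelPolicy()
    macro = policy.select(state="enemy_ahead")
    heading_cmd = low_level_control(macro, own_heading=90.0, target_bearing=45.0)
    policy.update("enemy_ahead", macro, reward=1.0, next_state="enemy_locked")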
© (2021) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Yabin Wang, Tianshu Jiang, and Youjiang Li "A hierarchical reinforcement learning method on multi UCAV air combat", Proc. SPIE 11933, 2021 International Conference on Neural Networks, Information and Communication Engineering, 119330K (15 October 2021); https://doi.org/10.1117/12.2615268
KEYWORDS
Unmanned combat air vehicles
Radar
Unmanned aerial vehicles
Artificial intelligence
Transformers
Network architectures
Defense and security