Presentation + Paper
Towards neuro-symbolic reinforcement learning for trustworthy human-autonomy teaming
7 June 2024
Priti Gurung, Jiang Li, Danda B. Rawat
Abstract
Artificial Intelligence (AI) has had a tremendous impact on civilian and military applications. However, for the foreseeable future, traditional AI will remain inadequate for operating independently in dynamic and complex environments because of issues such as explicit and implicit biases and a lack of explainability. Trustworthy AI is crucial for human-autonomy teaming (HAT), in which machines/autonomy and humans collaborate through shared learning and joint reasoning to carry out a given mission at combat speed with high accuracy, trust, and assurance. In this paper, we present a brief survey of recent advances, key challenges, and future research directions for neuro-symbolic-reinforcement-learning-enabled trustworthy HAT.
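The abstract does not describe a specific architecture, but one common pattern in the neuro-symbolic reinforcement learning literature is a symbolic "shield": a learned neural policy proposes actions, and hand-written symbolic rules veto any action that would violate a mission constraint, leaving an auditable record a human teammate can inspect. The sketch below is purely illustrative and is not drawn from this paper; the toy corridor environment, the no-go zone, and the stand-in policy are all invented for the example.

```python
import numpy as np

# Illustrative only: a toy 1-D corridor where an autonomous agent moves
# left/right/stays. A stand-in "neural" policy (random softmax scores in
# place of a trained network) proposes an action; symbolic rules veto
# actions that would enter a hypothetical no-go zone, and each veto is
# noted so a human teammate can audit why the agent deviated.

ACTIONS = {0: -1, 1: 0, 2: +1}     # left, stay, right
NO_GO = set(range(8, 11))          # hypothetical restricted positions

def neural_policy(state: int, rng: np.random.Generator) -> np.ndarray:
    """Stand-in for a trained network: returns action probabilities."""
    logits = rng.normal(size=len(ACTIONS))
    return np.exp(logits) / np.exp(logits).sum()

def symbolic_safe(state: int, action: int) -> bool:
    """Symbolic rule: never move into the no-go zone."""
    return (state + ACTIONS[action]) not in NO_GO

def shielded_step(state: int, rng: np.random.Generator) -> tuple[int, str]:
    """Take the most-preferred action that satisfies the symbolic rules."""
    probs = neural_policy(state, rng)
    ranked = np.argsort(probs)[::-1]          # most-preferred first
    for action in ranked:
        if symbolic_safe(state, int(action)):
            note = "ok" if action == ranked[0] else "vetoed preferred action"
            return state + ACTIONS[int(action)], note
    return state, "all actions vetoed; holding position"

rng = np.random.default_rng(0)
state = 5
for t in range(10):
    state, note = shielded_step(state, rng)
    print(f"t={t} position={state} ({note})")
```

In a HAT setting, the appeal of this kind of split is transparency: the symbolic layer is human-readable and can be reviewed or edited by the human teammate, while the neural layer handles perception and low-level control.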
Conference Presentation
© (2024) Published by SPIE.
Priti Gurung, Jiang Li, and Danda B. Rawat "Towards neuro-symbolic reinforcement learning for trustworthy human-autonomy teaming", Proc. SPIE 13054, Assurance and Security for AI-enabled Systems, 1305407 (7 June 2024); https://doi.org/10.1117/12.3014232
KEYWORDS: Machine learning, Artificial intelligence, Neural networks, Transparency, Safety, Decision making, Reliability
