Presentation + Paper
7 June 2024
Bias, explainability, transparency, and trust for AI-enabled military systems
Abstract
As artificial intelligence (AI) becomes a prevalent requirement in military systems, assuring the security and reliability of AI-enabled technologies becomes paramount. This paper explores the assurance and security mechanisms imperative for AI-enabled military systems, placing specific emphasis on the guidelines and documentation established to date, as well as on understanding and characterizing bias, explainability, transparency, and trust to help remove the mystery behind AI systems. AI introduces transformative capabilities to military operations; however, its deployment is often hindered by challenges regarding security, assurance, and ethical concerns. Secure and reliable AI systems are crucial in a military context, where decision-making needs to be rapid, accurate, and trustworthy. Addressing bias, ensuring explainability, fostering transparency, and understanding trust are four factors crucial to the successful integration and acceptance of AI within military applications. Bias in AI systems can inadvertently lead to unfair or unethical outcomes, potentially endangering lives and compromising missions. Understanding and characterizing bias in AI-enabled military systems is therefore crucial for developing fair and impartial algorithms. The paper delves into methodologies and frameworks that assist in identifying, measuring, and mitigating bias, thereby promoting the development and deployment of ethical and fair AI in military operations. Explainability in AI refers to the ability to describe the internal mechanisms of a system, or the relationships between input features and predictions, in a manner understandable to humans. In the context of military applications, explainability is not merely desirable but often legally and ethically required. The paper explores techniques and approaches that enhance the explainability of AI systems, thereby instilling confidence in their use and facilitating their acceptance among military personnel and policymakers alike. Transparency, the third pillar, involves making an AI system's decision-making processes and data-handling practices clear to end users and stakeholders. In military settings, where accountability and trust are foundational, transparency in AI systems is non-negotiable. The paper sheds light on mechanisms that foster transparency and trust, providing insights into how these can be integrated into the design and deployment phases of AI-enabled military systems, ensuring that they align with legal standards and ethical norms. Finally, the paper highlights the measures that governments have put in place to maintain the ethical use of AI in both the commercial and military realms. The importance of creating an ecosystem in which assurance and security are not afterthoughts but are integrated into the entire life cycle of AI-enabled military systems is emphasized: from the design and development phases through deployment and operational stages, every step needs to be imbued with practices and checks ensuring that the systems are unbiased, explainable, transparent, and trustworthy.
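As a minimal, hypothetical illustration of the kind of bias measurement the paper surveys (this sketch is not drawn from the paper itself), the demographic parity difference compares positive-prediction rates across two groups; the function name and data below are illustrative assumptions.

import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between two groups.

    Values near 0 suggest similar treatment on this metric; larger
    values flag a disparity worth investigating before deployment.
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_0 = y_pred[group == 0].mean()  # positive-prediction rate, group 0
    rate_1 = y_pred[group == 1].mean()  # positive-prediction rate, group 1
    return abs(rate_0 - rate_1)

# Hypothetical binary classifications split across two groups.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(preds, groups))  # |0.75 - 0.25| = 0.5

Demographic parity is only one of several competing fairness criteria; which metric is appropriate depends on the operational context.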
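Likewise, one common model-agnostic way to characterize the relationship between input features and predictions is permutation importance. The sketch below is an illustrative assumption, not a method taken from the paper, and it assumes a scikit-learn-style model exposing a predict method.

import numpy as np

def permutation_importance(model, X, y, score, n_repeats=10, seed=0):
    """Estimate each feature's influence by the score drop after shuffling it.

    Shuffling a column breaks its relationship to the target, so the
    resulting drop in score estimates how much the model's predictions
    depend on that feature.
    """
    rng = np.random.default_rng(seed)
    baseline = score(y, model.predict(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            X_perm[:, j] = rng.permutation(X_perm[:, j])  # permute one feature
            drops.append(baseline - score(y, model.predict(X_perm)))
        importances[j] = float(np.mean(drops))
    return importances

In practice, a vetted library implementation such as sklearn.inspection.permutation_importance would normally be preferred over a hand-rolled version.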
Conference Presentation
(2024) Published by SPIE. Downloading of the abstract is permitted for personal use only.
Teresa Pace and Bryan Ranes "Bias, explainability, transparency, and trust for AI-enabled military systems", Proc. SPIE 13054, Assurance and Security for AI-enabled Systems, 1305406 (7 June 2024); https://doi.org/10.1117/12.3012949
KEYWORDS
Artificial intelligence, Transparency, Data modeling, Evolutionary algorithms, Defense and security, Defense technologies, Detection and tracking algorithms