Distributed battle management of a group of autonomous agents, e.g., unmanned air systems (UAS), facing a highly capable adversary requires controlling a group of semi-isolated small teams operating under extensive uncertainty about the enemy situation and about the status and plans of some of its own agents. Barnstorm Research developed Trust, Refrain and Veto (TReVe), which allows agents to maintain higher situational awareness of teammates in a denied-communications environment. Barnstorm Research has shown in simulation that TReVe increases mission success when deployed onboard RAIDER/FACE-enabled UAS in a denied-communications environment.
Machine reasoning and intelligence is usually developed in a vacuum, without consulting the ultimate decision-maker. This late consideration of the human cognitive process causes major problems in using automated systems to provide reliable, actionable information that users can trust and depend on to choose the best Course of Action (COA). On the other hand, if automated systems are designed exclusively around human cognition, there is a danger of developing systems that do not push the boundaries of technology and exist mainly for the comfort of selected subject matter experts (SMEs). Our approach to combining human and machine processes (CHAMP) is based on developing optimal strategies for where, when, how, and which human intelligence should be injected within a machine reasoning and intelligence process. This combination is guided by the criteria of improving the quality of the automated process's output while maintaining the computational efficiency required for a COA to be actuated in a timely fashion. This research addresses the following problem areas:
• Providing consistency within a mission: injecting human reasoning and intelligence within the reliability and temporal needs of a mission to attain situational awareness, impact assessment, and COA development.
• Supporting the incorporation of data that is uncertain, incomplete, imprecise, and contradictory (UIIC): developing mathematical models that suggest where to insert a cognitive process within a machine reasoning and intelligence system so as to minimize UIIC concerns.
• Developing systems that include humans in the loop whose performance can be analyzed and understood to provide feedback to the sensors.
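To picture what an injection strategy of this kind might look like, here is a minimal hypothetical sketch (not CHAMP's actual method): a routing rule that sends a machine-proposed decision to a human analyst only when machine confidence is low and the mission's remaining time budget can absorb the slower human review. All names, thresholds, and timings below are illustrative assumptions.

```python
# Hypothetical sketch of a human-in-the-loop injection policy.
# A decision is routed to a human reviewer only when the machine's
# confidence is below a threshold AND the mission time budget can
# absorb the cost of human review; otherwise the machine's COA is
# actuated directly to preserve timeliness.

from dataclasses import dataclass


@dataclass
class Decision:
    label: str          # machine-proposed course of action (COA)
    confidence: float   # machine confidence in [0, 1]


def route(decision: Decision,
          time_budget_s: float,
          human_review_s: float = 30.0,
          conf_threshold: float = 0.7) -> str:
    """Return 'human' when confidence is low and time permits review,
    otherwise 'machine'. Thresholds are illustrative placeholders."""
    if decision.confidence < conf_threshold and time_budget_s >= human_review_s:
        return "human"
    return "machine"
```

Under this toy rule, a low-confidence decision with ample time goes to the analyst, while the same decision under severe time pressure is actuated automatically, reflecting the trade-off between output quality and timely COA actuation described above.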