KEYWORDS: Cognitive modeling, Systems modeling, Unmanned aerial vehicles, Control systems, Ultraviolet radiation, Data modeling, Performance modeling, Detection and tracking algorithms, Telecommunications, Intelligence systems
Trends in combat technology research point to an increasing role for uninhabited vehicles in modern warfare tactics. To support an increased span of control over these vehicles, human responsibilities need to be transformed from tedious, error-prone, and cognition-intensive operations into tasks that are more supervisory and manageable, even under intensely stressful conditions. The goal is to move away from merely supporting human command of low-level system functions and toward intention-level human-system dialogue about the operator's tasks and situation.
A critical element of this process is developing the means to identify when human operators need automated assistance and what assistance they need. Toward this goal, we are developing an unmanned vehicle operator task recognition system that combines work in human behavior modeling and Bayesian plan recognition. Traditionally, human behavior models have been considered generative, meaning they describe all possible valid behaviors. Basing behavior recognition on models designed for behavior generation can offer advantages in improved model fidelity and reuse. It is not clear, however, how to reconcile the structural differences between behavior recognition and behavior modeling approaches.
Our current work demonstrates that by pairing a human behavior modeling approach derived from cognitive psychology, GOMS, with a Bayesian plan recognition engine, ASPRN, we can translate a behavior generation model into a recognition model. We will discuss the implications of using human performance models in this manner, and suggest how this kind of modeling may be used to support the real-time control of multiple uninhabited battlefield vehicles and other semi-autonomous systems.
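The core idea of reading a generative task model "in reverse" for recognition can be illustrated with a minimal sketch. The ASPRN engine and the actual GOMS models are not described here, so the goal names, operator vocabulary, and probabilities below are invented for illustration; the sketch only shows the general pattern of Bayesian belief updating over observed operator actions.

```python
# Illustrative sketch only: goals, operators, and probabilities are
# hypothetical stand-ins, not the paper's GOMS/ASPRN models.
# A GOMS model enumerates the primitive operators each goal can generate;
# inverting it yields P(observed operator | goal) for Bayesian recognition.
LIKELIHOODS = {
    "monitor-sensor-feed": {"pan-camera": 0.5, "zoom": 0.3, "mark-target": 0.2},
    "reroute-vehicle":     {"open-map": 0.4, "drag-waypoint": 0.5, "zoom": 0.1},
}

def recognize(observations, prior=None):
    """Update P(goal | observations) via a naive Bayes filter over operator events."""
    goals = list(LIKELIHOODS)
    belief = dict(prior or {g: 1.0 / len(goals) for g in goals})
    for op in observations:
        for g in goals:
            # Small floor so an unmodeled operator does not zero out a goal.
            belief[g] *= LIKELIHOODS[g].get(op, 1e-3)
        total = sum(belief.values())
        belief = {g: p / total for g, p in belief.items()}
    return belief

belief = recognize(["open-map", "drag-waypoint"])
# After these observations, belief should strongly favor "reroute-vehicle".
```

A real recognition model would of course condition on task hierarchy and timing rather than treating operators as independent events, but the update loop conveys how a generative model's structure supplies the likelihoods.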
Trends in combat technology research point to an increasing role for uninhabited vehicles and other robotic elements in modern warfare tactics. However, real-time control of multiple uninhabited battlefield robots and other semi-autonomous systems, in diverse fields of operation, is a difficult problem for modern warfighters that, while identified, has not been adequately addressed.
Soar Technology is applying software agent technology to simplify demands on the human operator. Our goal is to build intelligent systems capable of finding the best balance of control between the human and autonomous system capabilities. We are developing an Intelligent Control Framework (ICF) from which to create agent-based systems that are able to dynamically delegate responsibilities across multiple robotic assets and the human operator. This paper describes proposed changes to our ICF architecture based on principles of human-machine teamwork derived from collaborative discourse theory. We outline the principles and the new architecture, and give examples of the benefits that can be realized from our approach.
KEYWORDS: Data modeling, Cognitive modeling, Data fusion, Visualization, Sensors, Analytical research, Intelligence systems, Missiles, Systems modeling, Geographic information systems
In order to wage successful campaigns, the next generation of intelligence analysts and battle commanders will need to assimilate an enormous amount of information that will come from a wide range of heterogeneous data sources. Complicating this problem further is the fact that warfighters need to be able to manage information in an environment of rapidly changing events and priorities. The consequence of not addressing this problem, or not addressing it as effectively as hostile forces do, is a potential loss of assets, personnel, or tactical advantage.
To design effective information displays, there needs to be an extensible framework that models the warfighter's context, including characteristics of the information sources being displayed, the current Intelligence, Surveillance, and Reconnaissance (ISR) picture or Common Operating Picture (COP), the warfighter's current state and task, and the state of the information display. BINAH (Battlespace Information and Notification through Adaptive Heuristics) uses an agent-based modeling approach coupled with research into temporal and spatial reasoning, novel display management techniques, and the development of a formal high-level language for describing model-based information configuration.
The result is an information configuration pipeline designed to provide perceptual and cognitive analysis support to Air Force analysts engaged in Time-Critical Targeting target nomination. It has been integrated with the Air Force Research Laboratory's (AFRL) XML-based Joint Battlespace Infosphere (JBI) combat information management system, and combines JBI-delivered sensor data with a local user model and display strategies to configure a geospatial information display. The BINAH framework will provide a firm grounding for developing new C4ISR displays that maximize the ability of warfighters to assimilate the information presented.
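The general shape of such a pipeline stage can be sketched as a function from sensor data plus a user model to display attributes. BINAH's actual heuristic language and JBI message formats are not given here, so every field name, rule, and threshold below is a hypothetical placeholder illustrating the pattern, not the system's implementation.

```python
# Hypothetical sketch of one model-based display-configuration step.
# All field names, rules, and thresholds are invented for illustration.

def configure_display(track, user_model):
    """Map one sensor track plus the user model to display attributes."""
    cfg = {"icon": track["type"], "emphasis": "normal", "label": False}
    # Heuristic: emphasize tracks relevant to the analyst's current task.
    if track["type"] in user_model["task_relevant_types"]:
        cfg["emphasis"] = "highlight"
        cfg["label"] = True
    # Heuristic: dim stale reports so the analyst can discount them.
    if track["age_s"] > user_model["staleness_threshold_s"]:
        cfg["emphasis"] = "dim"
    return cfg

user = {"task_relevant_types": {"missile-launcher"},
        "staleness_threshold_s": 300}
cfg = configure_display({"type": "missile-launcher", "age_s": 60}, user)
# A fresh, task-relevant track is highlighted and labeled.
```

The point of a model-based approach is that rules like these are data driven by the user and source models rather than hard-coded per display, so the same pipeline can reconfigure as the analyst's task changes.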