KEYWORDS: Sensors, Prototyping, Intelligent sensors, Data modeling, Systems modeling, Missiles, Modeling and simulation, Situational awareness sensors, Computer simulations, Analytical research
Understanding the intent of today's enemy necessitates changes in intelligence collection, processing, and dissemination.
Unlike cold war antagonists, today's enemies operate in small, agile, and distributed cells whose tactics do not map well
to established doctrine. This has necessitated a proliferation of advanced sensor and intelligence gathering techniques at
level 0 and level 1 of the Joint Directors of Laboratories fusion model. The challenge is in leveraging modeling and
simulation to transform the vast amounts of level 0 and level 1 data into actionable intelligence at levels 2 and 3 that
include adversarial intent. Currently, warfighters are flooded with information (facts/observables) about what the enemy is presently doing, but they receive inadequate explanations of adversarial intent and cannot simulate 'what-if' scenarios to increase their predictive situational awareness. The Fused Intent System (FIS) aims to address these
deficiencies by providing an environment that answers 'what' the adversary is doing, 'why' they are doing it, and 'how'
they will react to coalition actions. In this paper, we describe our approach to FIS, which incorporates adversarial 'soft factors' such as goals, rationale, and beliefs within a computational model that infers adversarial intent and allows assumptions to be inserted alongside the current battlefield state to perform what-if analysis. Our
approach combines ontological modeling for classification and Bayesian-based abductive reasoning for explanation and
has broad applicability to the operational, training, and commercial gaming domains.
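To make the combination of abductive reasoning and 'what-if' assumption insertion concrete, the following minimal Python sketch scores a few intent hypotheses against observables via Bayes' rule; the hypotheses, priors, and likelihood values are illustrative placeholders, not FIS content.

    # Minimal sketch of Bayesian abductive reasoning over adversarial intent.
    # Hypotheses, priors, and likelihoods are hypothetical placeholders.

    priors = {
        "ambush_convoy": 0.2,
        "probe_defenses": 0.5,
        "withdraw": 0.3,
    }

    # P(observable | intent) for each modeled observable (invented numbers).
    likelihoods = {
        "ambush_convoy": {"roadblock_reported": 0.8, "radio_silence": 0.6},
        "probe_defenses": {"roadblock_reported": 0.3, "radio_silence": 0.4},
        "withdraw":       {"roadblock_reported": 0.1, "radio_silence": 0.7},
    }

    def posterior(observed, priors, likelihoods):
        """Return P(intent | observed evidence), assuming conditional independence."""
        scores = {}
        for intent, prior in priors.items():
            p = prior
            for obs in observed:
                p *= likelihoods[intent].get(obs, 0.05)  # small default for unmodeled evidence
            scores[intent] = p
        total = sum(scores.values())
        return {intent: p / total for intent, p in scores.items()}

    # 'What-if' analysis: insert an assumed observable alongside the current battlefield state.
    current_state = ["roadblock_reported"]
    assumption = ["radio_silence"]
    print(posterior(current_state + assumption, priors, likelihoods))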
Nowadays, there is an increasing demand for the military to conduct operations that are beyond traditional warfare. In
these operations, analyzing and understanding those who are involved in the situation, how they are going to behave,
and why they behave in certain ways is critical for success. The challenge is that behavior does not simply follow universal, fixed doctrines; it is significantly influenced by soft factors (e.g., cultural factors and societal norms). In addition, there is rarely just one isolated enemy; the behaviors and responses of all groups in the region, and the dynamics of the interactions among them, compose an important part of the whole picture. The Dynamic Adversarial Gaming Algorithm (DAGA) project aims to provide a wargaming environment that automates the simulation of geopolitical crisis dynamics and can eventually be applied to the military simulation and training domain and/or the commercial gaming arena. The focus of DAGA is on modeling communities of interest (COIs), where various individuals, groups, and
organizations as well as their interactions are captured. The framework should provide a context for COIs to interact
with each other and influence others' behaviors. These behaviors must incorporate soft factors by modeling cultural
knowledge. We do so by representing cultural variables and their influence on behavior using probabilistic networks. In
this paper, we describe our COI modeling, the development of cultural networks, the interaction architecture, and a
prototype of DAGA.
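As an illustration of how cultural variables might influence behavior through a probabilistic network, the sketch below encodes one hypothetical conditional probability table; the variable names and numbers are assumptions for exposition, not DAGA's actual networks.

    # Illustrative sketch of a cultural-influence node in a probabilistic network.
    # Variables and probabilities are hypothetical, not taken from DAGA.

    # P(behavior = "retaliate" | honor_norm, external_pressure)
    cpt_retaliate = {
        ("strong", "high"): 0.85,
        ("strong", "low"):  0.60,
        ("weak",   "high"): 0.40,
        ("weak",   "low"):  0.15,
    }

    def behavior_distribution(honor_norm, external_pressure):
        """Return a distribution over a COI's response given two soft factors."""
        p_retaliate = cpt_retaliate[(honor_norm, external_pressure)]
        return {"retaliate": p_retaliate, "negotiate": 1.0 - p_retaliate}

    # One COI's action becomes evidence for another COI, shifting its distribution.
    print(behavior_distribution("strong", "high"))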
KEYWORDS: Analytical research, Computer simulations, Visualization, Warfare, Data modeling, Systems modeling, Visual process modeling, Field emission displays, Roads, Chemical elements
The military planning process utilizes simulation to determine the appropriate course of action (COA) that will achieve a
campaign end state. However, due to the difficulty in developing and generating simulation level COAs, only a few
COAs are simulated. This may have been appropriate for traditional conflicts, but the evolution of warfare from attrition-based to effects-based strategies, as well as the complexities of fourth-generation warfare and asymmetric adversaries, has placed additional demands on military planners and simulation. To keep pace with this dynamic, changing environment,
planners must be able to perform continuous, multiple, "what-if" COA analysis. Scenario management and generation
are critical elements to achieving this goal. An effects based scenario generation research project demonstrated the
feasibility of automated scenario generation techniques which support multiple stove-pipe and emerging broad scope
simulations. This paper will discuss a case study in which the scenario generation capability was employed to support
COA simulations to identify plan effectiveness. The study demonstrated the value of using multiple simulation runs to evaluate the effectiveness of alternate COAs in achieving the overall campaign (metrics-based) objectives. The
paper will discuss how scenario generation technology can be employed to allow military commanders and mission
planning staff to understand the impact of command decisions on the battlespace of tomorrow.
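A minimal sketch of the multi-run COA evaluation idea, assuming a generic stochastic simulation backend; run_simulation and the metric names are hypothetical stand-ins rather than the project's actual tools.

    # Sketch: evaluate alternate COAs with multiple runs against campaign metrics.
    import random
    import statistics

    def run_simulation(coa, seed):
        """Placeholder for one simulation run; returns metric values for this COA."""
        rng = random.Random(seed)
        return {"targets_neutralized": rng.uniform(0.5, 1.0),
                "collateral_risk": rng.uniform(0.0, 0.3)}

    def evaluate_coa(coa, runs=30):
        """Aggregate metric statistics across multiple stochastic runs of one COA."""
        samples = [run_simulation(coa, seed) for seed in range(runs)]
        return {metric: statistics.mean(s[metric] for s in samples)
                for metric in samples[0]}

    results = {coa: evaluate_coa(coa) for coa in ["COA-1", "COA-2", "COA-3"]}
    best = max(results, key=lambda c: results[c]["targets_neutralized"])
    print(results, "preferred:", best)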
As General John P. Jumper, Air Force Chief of Staff, noted, the bulk of an Air Operations Center Air Tasking Order cycle is spent gathering information from different stovepipe intelligence assets, then manually evaluating the results and planning implications. This time-consuming process is an obstacle that inhibits the real-time battlespace awareness needed by commanders to dynamically task assets against time-critical targets and help the Air Force meet its goal of “striking mobile and emerging targets in single digit minutes”. This paper describes how research performed for the Dynamic Intelligence Anticipation, Prioritization, and Exploitation System (DIAPES) supports this goal by leveraging advances in ontological modeling, intelligence data integration, artificial intelligence, and visualization. DIAPES applies automated analysis and visualization to an integrated ontology that specifies the relationships among intelligence products, planning products, and battlespace execution assets. This research seeks to enable commanders and analysts to perform 'what-if' scenarios to judge tradeoffs and determine the potential propagation effects that retasking assets against time-critical targets has throughout battlespace plans and participants.
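One way to picture the propagation analysis is as reachability over a dependency graph derived from the integrated ontology; the sketch below uses hypothetical node names and illustrates only the idea, not the DIAPES implementation.

    # Sketch of 'what-if' retasking propagation, with the ontology reduced to a
    # simple dependency graph; node names are invented for illustration.

    dependencies = {
        "ISR-asset-7":      ["strike-package-A", "BDA-collection-3"],
        "strike-package-A": ["ATO-objective-12"],
        "BDA-collection-3": ["ATO-objective-12"],
    }

    def propagate_impact(retasked_node, graph):
        """Return every downstream plan element affected by retasking one asset."""
        affected, frontier = set(), [retasked_node]
        while frontier:
            node = frontier.pop()
            for child in graph.get(node, []):
                if child not in affected:
                    affected.add(child)
                    frontier.append(child)
        return affected

    # Retasking ISR-asset-7 to a time-critical target touches these plan elements:
    print(propagate_impact("ISR-asset-7", dependencies))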
This paper will evaluate the feasibility of constructing a system to support intelligence analysts engaged in counter-terrorism. It will discuss the use of emerging techniques to evaluate a large-scale threat data repository (or Infosphere) and to compare analyst-developed models to identify and discover potential threat-related activity, with an uncertainty metric used to evaluate the threat. This system will also employ psychological (or intent) modeling to incorporate combatant (i.e. terrorist) beliefs and intent. The paper will explore the feasibility of constructing a hetero-hierarchical (a hierarchy of more than one kind or type characterized by loose connection/feedback among elements of the hierarchy) agent-based framework or "family of agents" to support "evidence retrieval", defined as combing, or searching, the threat data repository and returning information with an uncertainty metric. The counter-terrorism threat prediction architecture will be guided by a series of models constructed to represent threat operational objectives, potential targets, or terrorist objectives. The approach would compare model representations against information retrieved by the agent family to isolate or identify patterns that match within reasonable measures of proximity. The central areas of discussion will be the construction of an agent framework to search the available threat-related information repository, the evaluation of results against models that represent the cultural foundations, mindset, sociology, and emotional drive of typical threat combatants (i.e. the mind and objectives of a terrorist), and the development of evaluation techniques to compare result sets with the models representing threat behavior and threat targets. The applicability of concepts surrounding Modeling Field Theory (MFT) will be discussed as the basis of this research into the development of proximity measures between the models and result sets and to provide feedback in support of model adaptation (learning). The increasingly complex demands facing analysts evaluating activity threatening the security of the United States make agent-based data collection (fusion) a promising area. This paper will discuss a system to support the collection and evaluation of potential threat activity as well as an approach for presentation of the information.
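The following sketch illustrates, under simplified assumptions, how retrieved evidence carrying an uncertainty metric might be scored against a threat model with a proximity measure; the feature names, weights, and threshold are hypothetical, and MFT would supply a more principled similarity and adaptation mechanism than this weighted overlap.

    # Illustrative sketch: evidence retrieval with an uncertainty metric and a
    # simple proximity measure against a threat model. All values are invented.

    threat_model = {"chemical_purchase": 0.9, "target_surveillance": 0.8, "travel_pattern": 0.4}

    def agent_retrieve(repository):
        """Stand-in for one agent combing the Infosphere: (feature, uncertainty) pairs."""
        return [("chemical_purchase", 0.3), ("travel_pattern", 0.1)]

    def proximity(evidence, model):
        """Weighted match score, discounting each feature by its retrieval uncertainty."""
        score = sum(model.get(f, 0.0) * (1.0 - u) for f, u in evidence)
        return score / sum(model.values())

    evidence = agent_retrieve(repository=None)
    if proximity(evidence, threat_model) > 0.4:  # hypothetical alert threshold
        print("pattern within proximity of threat model:", proximity(evidence, threat_model))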
KEYWORDS: Data modeling, Computer simulations, Systems modeling, Computer architecture, Associative arrays, Data conversion, Chemical elements, Standards development, Human-machine interfaces, Data integration
This paper will discuss automated scenario generation (Sgen) techniques to support the development of simulation scenarios. Current techniques for scenario generation are extremely labor intensive, often requiring manual adjustments to data from numerous sources to support increasingly complex simulations. Due to time constraints, this process often prevents the simulation of a large number of data sets and the preferred level of “what if analysis”. The simulation demands of future mission planning approaches, like Effects Based Operations (EBO), require the rapid development of simulation inputs and multiple simulation runs for those approaches to be effective. This paper will discuss an innovative approach to the automated creation of complete scenarios for mission planning simulation. We will discuss the results of our successful Phase I SBIR effort that validated our approach to scenario generation and refined how scenario generation technology can be directly applied to the types of problems facing EBO and mission planning. The current stovepipe architecture marries a scenario creation capability with each of the simulation tools. The EBO-Scenario generation toolset breaks that connection through an approach centered on a robust data model and the ability to tie mission-planning tools and data resources directly to an open Course Of Action (COA) analysis framework supporting a number of simulation tools. In this approach, data sources are accessed through XML tools, proprietary DB structures, or legacy tools using SQL and stored as an instance of Sgen Meta Data. The Sgen Meta Data can be mapped to a wide range of simulation tools using a Meta Data-to-simulation-tool mapping editor that generates an XSLT template describing the required data translation. Once the mapping is created, Sgen will automatically convert the Meta Data instance, using XSLT, to the formats required by specific simulation tools. The research results presented in this paper will show how the complex demands of mission planning can be met with current simulation tools and technology.
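The Meta Data-to-simulation-format conversion step can be pictured with a small XSLT example; the sketch below assumes the Python lxml library, and the element and attribute names are invented stand-ins for the Sgen Meta Data schema and a target tool's input format.

    # Sketch: convert a (hypothetical) Sgen Meta Data instance to a target
    # simulation format using an XSLT template, as the mapping editor would generate.
    from lxml import etree

    metadata = etree.XML("""
    <sgenMetaData>
      <unit id="TF-1" type="armor" lat="33.1" lon="44.2"/>
    </sgenMetaData>
    """)

    # A mapping editor would emit a template like this for each simulation tool.
    xslt = etree.XML("""
    <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <xsl:template match="/sgenMetaData">
        <simScenario>
          <xsl:for-each select="unit">
            <entity name="{@id}" kind="{@type}" x="{@lon}" y="{@lat}"/>
          </xsl:for-each>
        </simScenario>
      </xsl:template>
    </xsl:stylesheet>
    """)

    transform = etree.XSLT(xslt)
    print(etree.tostring(transform(metadata), pretty_print=True).decode())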
This paper will discuss a simulation approach based upon a family of agent-based models. As the demands placed upon simulation technology by applications such as Effects Based Operations (EBO), evaluation of indicators and warnings surrounding homeland defense, and commercial needs such as financial risk management continue to grow, current single-thread simulations will continue to show serious deficiencies. The types of “what if” analysis required to support these applications demand rapidly re-configurable approaches capable of aggregating large models incorporating multiple viewpoints. The use of agent technology promises to provide a broad spectrum of models incorporating differing viewpoints through a synthesis of a collection of models. Each model would provide estimates to the overall scenario based upon its particular measure or aspect. An agent framework, denoted the “family”, would provide a common ontology in support of differing aspects of the scenario. This approach permits modeling to move from viewing the problem as a single-thread simulation to taking into account multiple viewpoints from different models. Even as models are updated or replaced, the agent approach permits rapid inclusion in new or modified simulations. In this approach, synthesizing a variety of low- and high-resolution information requires a family of models. Each agent “publishes” its support for a given measure, and each model provides its own estimates on the scenario based upon its particular measure or aspect. If more than one agent provides the same measure (e.g. cognitive), then the results from these agents are combined to form an aggregate measure response. The objective would be to inform and help calibrate a qualitative model, rather than merely to present highly aggregated statistical information. As each result is processed, the next action can then be determined. This is done by a top-level decision system that communicates with the family at the ontology level without any specific understanding of the processes (or model) behind each agent. The increasingly complex demands upon simulation to incorporate the breadth and depth of influencing factors make a family of agent-based models a promising solution. This paper will discuss that solution, along with the syntax and semantics necessary to support the approach.
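A minimal sketch of the measure-publishing and aggregation idea, assuming simple averaging when agents share a measure; the agent names, measures, and estimator functions are placeholders, and the decision layer sees only the published measures.

    # Sketch: agents publish the measure they support; results for a shared
    # measure are combined into an aggregate response. Names are hypothetical.
    from collections import defaultdict

    class ModelAgent:
        def __init__(self, name, measure, estimator):
            self.name = name
            self.measure = measure      # published measure, e.g. "cognitive"
            self.estimator = estimator  # wraps the underlying model; opaque to the decision layer

        def estimate(self, scenario):
            return self.estimator(scenario)

    def aggregate(agents, scenario):
        """Combine estimates per measure; the top-level decision system sees only measures."""
        by_measure = defaultdict(list)
        for agent in agents:
            by_measure[agent.measure].append(agent.estimate(scenario))
        return {measure: sum(vals) / len(vals) for measure, vals in by_measure.items()}

    family = [
        ModelAgent("crowd-model", "cognitive", lambda s: 0.6),
        ModelAgent("leader-model", "cognitive", lambda s: 0.4),
        ModelAgent("logistics-model", "economic", lambda s: 0.7),
    ]
    print(aggregate(family, scenario={"region": "example"}))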
This paper discusses the research currently in progress to develop the Conceptual Federation Object Model Design Tool. The objective of the Conceptual FOM (C-FOM) Design Tool effort is to provide domain and subject matter experts, such as scenario developers, with automated support for understanding and utilizing available HLA simulation and other simulation assets during HLA Federation development. The C-FOM Design Tool will import Simulation Object Models from HLA reuse repositories, such as the MSSR, to populate the domain space that will contain all the objects and their supported interactions. In addition, the C-FOM tool will support the conversion of non-HLA legacy models into HLA-compliant models by applying proven abstraction techniques against the legacy models. Domain experts will be able to build scenarios based on the domain objects and interactions in both a text and graphical form and export a minimal FOM. The ability for domain and subject matter experts to effectively access HLA and non-HLA assets is critical to the long-term acceptance of the HLA initiative.
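As a rough illustration of deriving a minimal FOM, the sketch below unions object classes and interactions from imported SOMs and keeps only those a scenario references; the data structures are simplified stand-ins for HLA OMT content, not the C-FOM tool's representation.

    # Sketch: build a minimal FOM from imported SOMs by keeping only the
    # object classes and interactions a scenario actually references.

    som_library = {
        "AirSim.som":    {"objects": {"Aircraft", "Runway"}, "interactions": {"Takeoff"}},
        "GroundSim.som": {"objects": {"Tank", "Convoy"},     "interactions": {"Engage"}},
    }

    scenario_refs = {"objects": {"Aircraft", "Tank"}, "interactions": {"Engage"}}

    def minimal_fom(library, refs):
        """Union the imported SOMs, then intersect with what the scenario uses."""
        all_objects = set().union(*(som["objects"] for som in library.values()))
        all_interactions = set().union(*(som["interactions"] for som in library.values()))
        return {"objects": all_objects & refs["objects"],
                "interactions": all_interactions & refs["interactions"]}

    print(minimal_fom(som_library, scenario_refs))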