Exploratory analysis examines the consequences of uncertainty, not merely by standard sensitivity methods but more comprehensively. It is particularly useful for gaining a broad understanding of a problem domain before delving into details. Although exploratory analysis can be accomplished with models of many types, it is facilitated by multiresolution, multiperspective modeling (MRMPM) structures. Moreover, knowledge of the related design principles makes it possible to characterize more conventional models in terms that permit exploratory analysis. This paper describes these connections and notes that, with current and emerging personal-computer tools, MRMPM methods are becoming practical.
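To make the contrast with standard one-at-a-time sensitivity analysis concrete, here is a minimal Python sketch of an exploratory run over the full cross-product of uncertain inputs to a hypothetical closed-form halt model; the model, its parameter names, and all numbers are invented for illustration and are not taken from the paper.

```python
# Minimal sketch of exploratory analysis: rather than varying one input at a
# time about a base case (standard sensitivity analysis), evaluate the model
# over the full cross-product of plausible input ranges. The model function
# and parameter names here are hypothetical placeholders.
from itertools import product

def halt_distance(shooters, kills_per_shooter_day, enemy_speed_km_day, delay_days):
    """Toy closed-form halt model: distance the invader travels before
    attrition (a fixed 50% threshold here) forces a halt."""
    vehicles = 3000.0
    kills_per_day = shooters * kills_per_shooter_day
    days_to_halt = delay_days + 0.5 * vehicles / kills_per_day
    return enemy_speed_km_day * days_to_halt

ranges = {
    "shooters": [100, 200, 400],
    "kills_per_shooter_day": [0.5, 1.0, 2.0],
    "enemy_speed_km_day": [20, 40, 60],
    "delay_days": [0, 2, 4],
}

# Full-factorial exploration: 3^4 = 81 cases in one pass.
results = {combo: halt_distance(*combo) for combo in product(*ranges.values())}
worst = max(results, key=results.get)
print(f"{len(results)} cases explored; worst case {worst} "
      f"-> {results[worst]:.0f} km penetration")
```

Even this toy version shows the point: the worst case arises from a combination of unfavorable inputs that no single-parameter sweep would reveal.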
We have developed and used families of multiresolution and multiple-perspective models (MRM and MRMPM), both in our substantive analytic work for the Department of Defense and as a way to learn more about how such models can be designed and implemented. This paper is a brief case history of our experience with a particular family of models addressing the use of precision fires to interdict and halt an invading army. Our models were implemented as closed-form analytic solutions, in spreadsheets, and in the more sophisticated Analytica™ environment. We also drew on an entity-level simulation for data. The paper reviews the importance of certain key attributes of development environments (visual modeling, interactive languages, friendly use of array mathematics, facilities for experimental design and configuration control, statistical analysis tools, graphical visualization tools, interactive post-processing, and relational database tools). These can go a long way toward facilitating MRMPM work, but many of them are not yet widely available (or available at all) in commercial model-development tools, especially for use with personal computers. We conclude with some lessons learned from our experience.
We have used detailed, entity-level models to simulate the effects of long-range precision fires employed against an invader. Results show that these fires are much less effective against dispersed formations marching through mixed terrain than against dense formations in open terrain. We expected some loss of effectiveness, but not as much as we observed. So we built a low-resolution model (PEM, the PGM Effectiveness Model) and calibrated it to the results of the detailed simulation. PEM explains analytically how various situational and tactical factors, usually treated only in complex models, can influence the effectiveness of these fires. The variables we consider are characteristics of the C4ISR system (e.g., time of last update), missile and weapon characteristics (e.g., footprint), the maneuver pattern of the advancing column (e.g., vehicle spacing), and aggregate terrain features (e.g., open versus mixed terrain).
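The abstract does not reproduce PEM's equations, but a toy sketch along the same lines shows how such situational factors can enter a low-resolution effectiveness model as multiplicative degradation terms; every functional form and constant below is an assumption for illustration only, not the published PEM.

```python
# Illustrative sketch (not the published PEM equations): expected kills per
# missile as a product of degradation factors, each tied to one of the
# situational variables the abstract names.
import math

def expected_kills_per_missile(
    footprint_m,            # weapon footprint radius (m)
    vehicle_spacing_m,      # spacing within the advancing column (m)
    c4isr_update_lag_s,     # time since last C4ISR track update (s)
    column_speed_mps,       # column speed (m/s)
    fraction_open_terrain,  # 1.0 = open terrain; lower = mixed terrain
):
    # Vehicles covered by the footprint shrink as spacing grows.
    vehicles_in_footprint = max(1.0, 2 * footprint_m / vehicle_spacing_m)
    # Aimpoint error grows with track staleness; coverage decays with the
    # ratio of positional uncertainty to footprint size.
    aim_error_m = c4isr_update_lag_s * column_speed_mps
    p_cover = math.exp(-aim_error_m / (2 * footprint_m))
    # In mixed terrain only the exposed fraction of the column is targetable.
    return vehicles_in_footprint * p_cover * fraction_open_terrain

# Open terrain, dense column, fresh track:
print(expected_kills_per_missile(150, 50, 30, 8, 1.0))
# Mixed terrain, dispersed column, stale track:
print(expected_kills_per_missile(150, 200, 300, 8, 0.4))
```

Even this crude form reproduces the qualitative finding: dispersion, terrain masking, and stale C4ISR data compound multiplicatively, so effectiveness collapses far faster than any one factor alone would suggest.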
Simulation modeling of complex systems has received increasing research attention in recent years. In this paper, we discuss the basic concepts involved in multi-resolution simulation modeling of complex stochastic systems. We argue that, in many cases, using the average over all available high-resolution simulation results as the input to subsequent low-resolution modules is inappropriate and may lead to erroneous final results. Instead, high-resolution output data should be classified into groups that match underlying patterns or features of the system behavior before the group averages are sent to the low-resolution modules. We propose high-dimensional data clustering as a key interfacing component between simulation modules of different resolutions, and we use unsupervised learning schemes to recover the patterns in the high-resolution simulation results. We give examples to demonstrate the proposed scheme.
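A minimal sketch of the proposed interface, assuming scikit-learn's k-means as the unsupervised learner and synthetic two-mode data: instead of handing the grand mean of the high-resolution outputs to the low-resolution module, cluster the outputs into behavioral modes and pass each group's mean with its weight.

```python
# Cluster high-resolution outputs into modes before aggregating; the data
# here are synthetic, with two deliberately distinct behavioral modes.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
mode_a = rng.normal(loc=[10.0, 2.0], scale=0.5, size=(60, 2))
mode_b = rng.normal(loc=[3.0, 8.0], scale=0.5, size=(40, 2))
hires_output = np.vstack([mode_a, mode_b])

grand_mean = hires_output.mean(axis=0)   # masks both modes: misleading input

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(hires_output)
for label in range(km.n_clusters):
    group = hires_output[km.labels_ == label]
    weight = len(group) / len(hires_output)
    # Pass (weight, group mean) to the low-resolution module instead.
    print(f"mode {label}: weight={weight:.2f}, mean={group.mean(axis=0)}")
print(f"grand mean (what naive averaging would send): {grand_mean}")
```

The grand mean lands between the two modes, in a region of state space the high-resolution model never actually visits; the per-cluster means do not.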
Advanced Visualization for Modeling and Simulation
In the past, visualization systems have been constructed to work with a single application or venue. By dismissing that notion, a visualizer can be created that is flexible enough to be configured to the user's viewing requirements just prior to execution. Going one step further, requirements that change during execution can also be addressed by this visualizer as the analysis of a simulation progresses. Allowing users to reconfigure their visualization style, mode, and information provides a more flexible method of visual analysis than previously possible.
The use of metaphor in programming can be a powerful aid to the programmer, inasmuch as it lends concrete properties to abstract ideas. In turn, these concrete properties can aid recognition of, and reasoning about, programming problems. Another potential benefit of metaphor in programming is improved retention of facts and solutions to programming problems. Traditionally, programs have been produced in a textual medium. However, a textual medium may be inferior to a 3D medium for developing and using metaphor, since the concrete properties that metaphors provide are real-world phenomena, which are naturally 3D. As an example of the use of 3D metaphors in programming, we created a mock operating-system task scheduler, along with some associated hardware devices, developed in a VRML environment using VRML PROTO nodes. These nodes were designed as objects based on real-world metaphors. The issues, problems, and novelties involved in programming in this manner are explored.
This paper addresses a fundamental, simple but powerful mechanism for virtual-reality simulation systems on the World Wide Web. The basic idea is to use the Virtual Reality Modeling Language (VRML) to build the virtual world and to design a dedicated simulator that performs the common simulation work and drives the VR animation. Depending on the available mathematics and animation functions, two variants of this VR simulation system are possible. The first uses VRMLScript or JavaScript to code the simulator; this can be realized, but the mathematical operations and the simulation model scale are limited. The second, which we mainly recommend, uses Java to code the simulator and HTML to combine the VR scene and the simulator in the same web page, so that the VR animation runs according to the simulation logic. Because Java has full mathematical functionality and Java code modules are entirely reusable, this VR simulation system can easily be realized on a desktop PC and meets the basic interactive requirements of VR technology without extra hardware. A VR M/M/1/k queuing simulation system is given to explain the mechanism. Finally, the overall integrated development environment for this VR simulation system is discussed.
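The paper codes its example in Java; the following minimal Python sketch of an M/M/1/k discrete-event loop shows the kind of simulator logic that would drive the VR animation, namely arrivals, departures, and blocking when the buffer of size k is full.

```python
# Minimal M/M/1/k discrete-event simulation: Poisson arrivals, exponential
# service, system capacity k; arrivals finding the system full are lost.
import random

def mm1k(arrival_rate, service_rate, k, horizon, seed=1):
    random.seed(seed)
    t, n = 0.0, 0                      # simulation clock, number in system
    next_arrival = random.expovariate(arrival_rate)
    next_departure = float("inf")
    served = blocked = 0
    while t < horizon:
        if next_arrival <= next_departure:        # arrival event
            t = next_arrival
            if n < k:
                n += 1
                if n == 1:                        # server was idle
                    next_departure = t + random.expovariate(service_rate)
            else:
                blocked += 1                      # buffer full: customer lost
            next_arrival = t + random.expovariate(arrival_rate)
        else:                                     # departure event
            t = next_departure
            n -= 1
            served += 1
            next_departure = (t + random.expovariate(service_rate)
                              if n > 0 else float("inf"))
    return served, blocked

print(mm1k(arrival_rate=0.9, service_rate=1.0, k=5, horizon=10_000))
```

In the web architecture the abstract describes, each event of this loop would also update the corresponding VRML node, so the animation stays synchronized with the simulation logic.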
This paper presents the results of an investigation leading to an implementation of FLIR and LADAR data simulation for use in a multi-sensor data-fusion automated target recognition system. At present the main areas of application are military, but such systems can easily be adapted to other areas such as security applications, robotics, and autonomous cars. Recent developments have moved away from traditional sensor modeling and toward modeling of features external to the system, such as atmosphere and part occlusion, to create a more realistic and rounded system. We have implemented such techniques and introduced a means of inserting these models into a highly detailed scene model to provide a rich data set for later processing. From our study and implementation we are able to embed sensor-model components into a commercial graphics and animation package, along with object and terrain models, which can easily be used to create a more realistic sequence of images.
This paper presents the Joint Communication Simulator (JCS) system design as a case study of both the conceptual and implementation applicability of the High Level Architecture (HLA) in a demanding application domain. Specific technical topics covered include an overview of JCS requirements, an overview of the modeling concept and system architecture in terms of the HLA, a definition of the subset of Run Time Infrastructure (RTI) functionality and HLA interface specification applied, and an overview of the RTI subset implementation. In addition, the paper addresses the political questions of HLA compliance, the openness of RTI designs and implementations, and the issue of RTI certification.
This paper discusses the research currently in progress to develop the Conceptual Federation Object Model Design Tool. The objective of the Conceptual FOM (C-FOM) Design Tool effort is to provide domain and subject matter experts, such as scenario developers, with automated support for understanding and utilizing available HLA simulation and other simulation assets during HLA Federation development. The C-FOM Design Tool will import Simulation Object Models from HLA reuse repositories, such as the MSSR, to populate the domain space that will contain all the objects and their supported interactions. In addition, the C-FOM tool will support the conversion of non-HLA legacy models into HLA-compliant models by applying proven abstraction techniques against the legacy models. Domain experts will be able to build scenarios based on the domain objects and interactions in both a text and graphical form and export a minimal FOM. The ability for domain and subject matter experts to effectively access HLA and non-HLA assets is critical to the long-term acceptance of the HLA initiative.
This paper reports on results from an ongoing project to develop methods for representing and managing multiple, concurrent levels of modeling detail and enabling high performance computing, namely parallel processing, within object-based simulation frameworks such as HLA. We present here the interface structure and runtime support service concept for using parallel arrays for high performance computing within distributed object-based simulation frameworks. The approach employs a distributed array descriptor, which can be a basis for extending the HLA standard to provide support for efficiently sharing very large data arrays or sub-arrays among federates. The goal is to reduce communications overhead and thereby improve simulation performance involving C4ISR models that require, for example, interpolation and extrapolation of large data sets, such as those that naturally occur for overlay, coupling, and fusion of phenomenology information in multi-sensor networks.
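The paper's descriptor format is not given in the abstract; the sketch below shows one plausible shape such a distributed array descriptor could take, with every field name assumed. The idea is that federates exchange this lightweight descriptor and then request only the sub-array blocks they need, rather than shipping the whole array through attribute updates.

```python
# Hypothetical distributed-array descriptor: field names are assumptions,
# not the HLA extension proposed in the paper.
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class DistributedArrayDescriptor:
    array_id: str                  # global handle shared among federates
    dtype: str                     # element type, e.g. "float64"
    global_shape: Tuple[int, ...]  # full logical array shape
    block_shape: Tuple[int, ...]   # decomposition unit per owning process
    owner_map: Tuple[int, ...]     # rank owning each block, row-major order

    def block_of(self, index: Tuple[int, ...]) -> Tuple[int, ...]:
        """Map a global element index to its block coordinates."""
        return tuple(i // b for i, b in zip(index, self.block_shape))

# A 2-D phenomenology grid decomposed into 500x500 blocks across 4 ranks:
desc = DistributedArrayDescriptor(
    array_id="ir_background_map", dtype="float64",
    global_shape=(1000, 1000), block_shape=(500, 500),
    owner_map=(0, 1, 2, 3),
)
print(desc.block_of((750, 120)))  # -> (1, 0): row-major block 2, owned by rank 2
```

A federate needing to interpolate near element (750, 120) would fetch only that one block from its owner, which is the communications saving the paper aims at.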
The object-oriented approach known as heterogeneous behavior multimodeling has been developed, used, and reported elsewhere, to facilitate creation, modification, sharing, and reuse of object-oriented models and the simulations created from those models. The digital object extends multimodeling so that digital objects can be shared and combined in ways that ordinary multimodels cannot. We describe an abstract base multimodel and several derived instantiated multimodel types. We also describe a transformation which takes a digital object to a simulation program. We give formal definitions of multimodeling, digital object, and the transformation, then from these definitions prove correctness of execution sequencing of simulations created by applying the transformation to digital objects. Closure under coupling of digital objects follows as a corollary, subject to an assumption regarding experimental frame. We then construct an abstract base architecture for manufacture, flow, and persistence of digital objects. From the base architecture we derive and instantiate a suite of architectures, each targeted at a distinct set of requirements: one to operate locally, another with internet protocols, a third with web protocols, and a fourth to allow digital objects to interoperate with other kinds of simulations.
The Virtual Targets Center is a strategic alliance between STRICOM's Targets Management Office and AMCOM's Systems Simulation and Development Directorate. The Virtual Targets Center reduces duplication of effort by making DoD-owned geometry models available for reuse. Its mission is to support the modeling and simulation community by collecting, creating, and distributing geometry models in multiple formats applicable to a wide range of simulation activities.
We present Baobab, a software architecture and methodology for distributed simulation and interaction. Through pervasive componentization throughout the system, Baobab provides a stable but extensible platform for the development of content-rich interactive simulation. Entities in the environment are simulated using dynamically loadable simulation modules (shared libraries, Java byte codes, scripts, etc.). We provide an elegant API to the simulation-module developer, allowing modules to interact with entities they have never encountered before. This approach allows domain experts to develop simulation modules based on their expertise, with limited knowledge of the inner workings of a VE system.
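The actual Baobab API is not shown in the abstract; the hedged Python sketch below illustrates the general pattern it describes, namely dynamically loaded modules that interact with entities only through a generic property interface, so a module can drive entity types it has never seen.

```python
# Sketch of a plug-in simulation-module pattern; all names are invented,
# not Baobab's real API.
import importlib
from typing import Protocol

class Entity(Protocol):
    def get(self, prop: str, default=None): ...
    def set(self, prop: str, value) -> None: ...

class SimulationModule:
    """Base class a dynamically loaded module is assumed to subclass."""
    def step(self, entity: Entity, dt: float) -> None: ...

class Gravity(SimulationModule):
    def step(self, entity, dt):
        # Works on ANY entity exposing get/set, regardless of its type.
        vz = entity.get("velocity_z", 0.0) - 9.81 * dt
        entity.set("velocity_z", vz)
        entity.set("z", entity.get("z", 0.0) + vz * dt)

def load_module(path: str) -> SimulationModule:
    """Load 'package.module:ClassName' at run time, as a plug-in system might."""
    mod_name, cls_name = path.split(":")
    return getattr(importlib.import_module(mod_name), cls_name)()

class DictEntity:
    def __init__(self):
        self.props = {}
    def get(self, prop, default=None):
        return self.props.get(prop, default)
    def set(self, prop, value):
        self.props[prop] = value

e = DictEntity()
Gravity().step(e, dt=0.1)
print(e.props)   # the module drove an entity type it never saw before
```

The design point is the narrow property interface: domain experts write `step` logic against it and never touch the VE system's internals.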
The widely used Air Force hierarchy of models and simulations is generally depicted as a four-level pyramid, ranging from the Engineering/Component Level up to the Theater/Campaign Level. While it presents a concise picture of the scope of military models and simulations, it gives the impression that there is a smooth and natural transition from one level to the next. That is not the case; in fact, the degree of complexity varies greatly from one level to the next. This paper looks at the state of practice in modeling and simulation in the context of this hierarchy and, in particular, at traditional and revolutionary techniques involving inter-level relationships.
In this paper we report on state-space system identification approaches to dynamic behavioral abstraction of military simulation models. Two stochastic simulation models were identified under a variety of scenarios. The 'Attrition Simulation' is a model of two opposing forces with multiple weapon-system types. The 'Mission Simulation' is a model of a squadron of aircraft performing battlefield air interdiction. Four system identification techniques were applied to these simulation models: Maximum Entropy, Compartmental Models, Canonical State-Space Models, and Hidden Markov Models (HMM). The techniques were evaluated on how well their resulting abstractions replicated the distributions of the simulation states as well as the decision outputs. Encouraging results were achieved by the HMM technique applied to the Attrition Simulation, and by the Maximum Entropy technique applied to the Mission Simulation.
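As a minimal illustration of the identify-then-validate loop the paper applies (using a plain Markov chain rather than any of the four techniques studied), the sketch below fits a transition matrix to a discretized state trajectory and then checks how well the abstraction reproduces the simulation's state distribution. All data are synthetic.

```python
# Fit a Markov-chain abstraction to a (synthetic) simulation trajectory and
# compare its stationary distribution with the simulation's empirical one,
# mirroring the paper's evaluation criterion of replicating state distributions.
import numpy as np

def fit_markov(trajectory, n_states):
    counts = np.ones((n_states, n_states))        # +1 Laplace smoothing
    for a, b in zip(trajectory[:-1], trajectory[1:]):
        counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

rng = np.random.default_rng(2)
true_P = np.array([[0.90, 0.08, 0.02],
                   [0.10, 0.80, 0.10],
                   [0.05, 0.15, 0.80]])
traj = [0]
for _ in range(5000):                              # stand-in for sim output
    traj.append(rng.choice(3, p=true_P[traj[-1]]))

P_hat = fit_markov(traj, 3)
empirical = np.bincount(traj, minlength=3) / len(traj)
evals, evecs = np.linalg.eig(P_hat.T)              # stationary distribution
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi /= pi.sum()
print("sim state distribution:       ", np.round(empirical, 3))
print("abstraction's stationary dist:", np.round(pi, 3))
```

The HMM and Maximum Entropy techniques in the paper generalize this idea to hidden states and to distributions constrained only by observed moments, respectively.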
This paper reports on results from an ongoing project to develop methodologies for representing and managing multiple, concurrent levels of detail and enabling high performance computing using parallel arrays within distributed object-based simulation frameworks. At this time we present the methodology for representing and managing multiple, concurrent levels of detail and modeling accuracy by using a representation based on the Kalman approach for estimation. The Kalman System Model equations are used to represent model accuracy, Kalman Measurement Model equations provide transformations between heterogeneous levels of detail, and interoperability among disparate abstractions is provided using a form of the Kalman Update equations.
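A small numerical sketch of the representational idea, with all matrices assumed for illustration: the measurement equation maps a detailed state to a coarser level of detail (three unit positions aggregate to one centroid), the covariances carry each model's accuracy, and the standard Kalman update reconciles a report from the coarse model with the detailed model's estimate.

```python
# Kalman update used as a cross-resolution reconciliation step; the states,
# matrices, and numbers are illustrative assumptions, not the paper's.
import numpy as np

x = np.array([10.0, 12.0, 17.0])         # detailed state: 3 unit positions
P = np.diag([4.0, 4.0, 4.0])             # detailed model's accuracy
H = np.full((1, 3), 1 / 3)               # aggregation to the coarse level
z = np.array([11.0])                     # coarse model's aggregate report
R = np.array([[1.0]])                    # coarse model's accuracy

# Standard Kalman update: K = P H^T (H P H^T + R)^-1
S = H @ P @ H.T + R
K = P @ H.T @ np.linalg.inv(S)
x_new = x + K @ (z - H @ x)
P_new = (np.eye(3) - K @ H) @ P

print("reconciled detailed state:", np.round(x_new, 2))
print("updated accuracy (diag):  ", np.round(np.diag(P_new), 2))
```

The appeal of this framing is that both directions of the resolution mapping, and the relative trust placed in each level, fall out of one well-understood formalism.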
Simulation of Command and Control (C2) networks has historically emphasized individual system performance, with little architectural context or credible linkage to 'bottom-line' measures of combat outcomes. Renewed interest in modeling C2 effects and relationships stems from emerging network-intensive operational concepts. This demands improved methods to span the analytical hierarchy between C2 system performance models and theater-level models. Neural-network technology offers a modeling approach that can abstract the essential behavior of higher-resolution C2 models within a campaign simulation. The proposed methodology uses off-line learning of the relationships between network state and the campaign-impacting performance of a complex C2 architecture, followed by approximation of that performance as a time-varying parameter in an aggregated simulation. Ultimately, this abstraction tool offers increased fidelity of C2 system simulation that captures dynamic network dependencies within a campaign context.
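A sketch of the off-line learning step, using scikit-learn's MLPRegressor as a stand-in for whatever network the authors trained; the features, response, and training data below are entirely hypothetical.

```python
# Train a small feed-forward network to map C2 network state to a
# campaign-impacting performance measure, then query it per time step as a
# fast surrogate inside the aggregated simulation. All data are synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
# Hypothetical training data from many runs of the high-resolution C2 model:
# features = [link availability, traffic load, node losses]
X = rng.uniform(0, 1, size=(2000, 3))
# stand-in "sensor-to-shooter delay" response with interaction effects
y = 2.0 + 5.0 * X[:, 1] / (0.1 + X[:, 0]) + 3.0 * X[:, 2] ** 2
y += rng.normal(0, 0.2, size=len(y))     # stochastic run-to-run variation

surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000,
                         random_state=0).fit(X, y)

# Inside the campaign simulation, evaluate the abstraction each time step:
network_state_t = np.array([[0.8, 0.5, 0.1]])
print("approximated C2 delay at t:", surrogate.predict(network_state_t))
```

The expensive high-resolution runs happen once, off-line; at campaign run time only the cheap surrogate evaluation remains, which is what makes the abstraction usable as a time-varying parameter.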
We describe our simulation of the Intelligence, Surveillance, and Reconnaissance--Tasking, Processing, Exploitation, and Dissemination (ISR-TPED) chain. Model formulation is based on analytical descriptions of ISR-TPED processes, which allows evaluation of the statistical variability in model output within a single computational pass. Significant gains in model execution speed are achieved with this approach, especially when compared to the more commonly used technique of discrete-event simulation. This allows the simulation user to rapidly identify major performance drivers in novel TPED configurations.
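Without reproducing the paper's actual formulation, the single-pass idea can be illustrated under a strong independence assumption: if end-to-end TPED latency is the sum of independent stage latencies, its mean and variance follow directly from the stage moments, and the variance contributions flag performance drivers without any event-by-event simulation. The stage names and numbers below are invented.

```python
# Analytic moment propagation through a (hypothetical) TPED chain: one pass
# yields both the mean and the variability that discrete-event simulation
# would need many replications to estimate.
import math

# Hypothetical stages: (mean seconds, std-dev seconds), assumed independent.
stages = {
    "tasking":       (120.0, 30.0),
    "collection":    (300.0, 90.0),
    "processing":    (240.0, 60.0),
    "exploitation":  (600.0, 240.0),
    "dissemination": ( 90.0, 20.0),
}

mean = sum(m for m, _ in stages.values())
var = sum(s ** 2 for _, s in stages.values())
print(f"end-to-end latency: mean={mean:.0f}s, std={math.sqrt(var):.0f}s")

# Ranking stages by variance contribution flags performance drivers at once:
for name, (_, s) in sorted(stages.items(), key=lambda kv: -kv[1][1] ** 2):
    print(f"  {name:13s} contributes {s**2 / var:5.1%} of variance")
```

In this toy chain, exploitation dominates the variance, which is exactly the kind of driver the single-pass formulation is meant to surface quickly.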
We explore the model abstraction problem from the perspective of a real-time network environment, which typically admits different system models (such as queue-theoretic models) at its different equilibrium states. To computationally depict system states at any level of abstraction, it is necessary to identify correct models consistent with the observables. However, any such system identification need not be permanent, particularly for a dynamic system. As the system migrates from one equilibrium state to another, one should be able to quickly identify an event of context-switching from one model abstraction to another. In this paper we show how, using a variation of traditional CUSUM statistical approaches, one can identify such model-change events in a timely fashion.
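The paper's CUSUM variation is not detailed in the abstract; a textbook two-sided CUSUM sketch conveys the mechanism: accumulate deviations from the current model's expected behavior and declare a context switch when either cumulative sum crosses a threshold h.

```python
# Two-sided CUSUM change detection; a textbook version, not the paper's
# specific variant. k is the slack per sample, h the decision threshold.
import random

def cusum(samples, target_mean, k=0.5, h=5.0):
    """Return the index at which a shift away from target_mean is detected."""
    s_hi = s_lo = 0.0
    for i, x in enumerate(samples):
        s_hi = max(0.0, s_hi + (x - target_mean) - k)   # upward shifts
        s_lo = max(0.0, s_lo - (x - target_mean) - k)   # downward shifts
        if s_hi > h or s_lo > h:
            return i
    return None

random.seed(4)
# Queue-delay readings consistent with model A, then a regime change at t=60:
obs = [random.gauss(10, 1) for _ in range(60)] + \
      [random.gauss(13, 1) for _ in range(60)]
print("change detected at sample", cusum(obs, target_mean=10.0))
```

Because the statistic accumulates small, persistent deviations, it detects a regime change within a few samples of the shift while ignoring the noise before it, which is what makes CUSUM attractive for triggering a switch between model abstractions on-line.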
Terrain database generation is one of the most expensive tasks in the development of human-in-the-loop visual simulations. Many factors affect the efficiency of generating the terrain database. Automating the extraction of the required database primitives from remote-sensing imagery and the construction of detailed 3D feature models poses many challenging problems. Another problem is to simplify the terrain model using fewer polygons, without significant loss in the visual characteristics of the rendered imagery, thereby reducing the complexity of the terrain database and improving real-time rendering performance. In this paper we present two surface simplification algorithms designed for constructing a terrain database optimized for driving simulation: one using a bottom-up, polygonal-refinement approach, the other a top-down, polygonal-removal approach. These two algorithms are applied to terrain surfaces that include integrated, 'stitched-in' road features, and are used to generate terrain surfaces at various levels of detail. We discuss the design of the two algorithms, give experimental results from applying them to real terrain surfaces, and compare the two approaches with respect to height error and distance from the road.
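As a hedged illustration of the top-down (removal) strategy only, the sketch below decimates a regular height grid using a crude neighbor-interpolation error metric, with a locked mask standing in for the protected road features; a real simplifier would also retriangulate and use the paper's actual error measures.

```python
# Greedy vertex decimation on a regular height grid: drop interior vertices
# whose height is predicted well enough by their four neighbors, never
# dropping vertices flagged as part of stitched-in road features.
import numpy as np

def decimate(height, locked, max_error):
    """Return a keep-mask over grid vertices."""
    keep = np.ones(height.shape, dtype=bool)
    rows, cols = height.shape
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            if locked[r, c]:
                continue                      # never drop road vertices
            predicted = 0.25 * (height[r - 1, c] + height[r + 1, c] +
                                height[r, c - 1] + height[r, c + 1])
            if abs(height[r, c] - predicted) < max_error:
                keep[r, c] = False            # vertex adds little detail
    return keep

rng = np.random.default_rng(5)
h = np.cumsum(rng.normal(0, 1, (40, 40)), axis=0)   # synthetic terrain
roads = np.zeros_like(h, dtype=bool)
roads[20, :] = True                                 # one 'stitched-in' road
mask = decimate(h, roads, max_error=0.5)
print(f"kept {mask.sum()} of {mask.size} vertices")
```

Tightening max_error near the road and relaxing it far away gives the distance-from-road behavior the paper evaluates: high fidelity where the driver looks, aggressive simplification elsewhere.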
Collaboration of experts from different domains within an enterprise has always posed logistical and knowledge-management challenges to managers and members of the collaboration. Scheduling meetings, arranging travel, and getting data and information into the right hands at the right time all require time, money, and energy that could be better spent on product development. Advances in information technology have made it easier to solve, or at least mitigate, some of these problems using e-mail, audio conferencing, and database management software, but a great deal of human intervention is still required to make these collaborations operate smoothly. Over the past ten years, enterprises have come to require more than just total asset visibility and human communication capabilities. To design and field products better, faster, and cheaper, more human creativity and energy must be focused on the products and less on the operation of the collaboration. The collaborative environment solutions of the future must not only provide the communication and knowledge management that exist today, but also provide seamless access to resources and information, product and process modeling, and the advanced decision support that results from the availability of the necessary resources and information.
Visage-Link takes the next step in the paradigm shift defined by the Visage information architecture, leveraging an information-centric user-interface approach to facilitate collaboration among geographically distributed users. Extending Visage's notion of polymorphism to the collaborative realm, Visage-Link allows a user to visualize a shared data set using views tailored to his or her individual role. Basic research extends this concept to operate on smaller devices used for very role-specific tasks within operational exercises with varying levels of connectivity.
For the introduction of new systems, we have only a few paradigms to guide us. One that is currently popular is the 'Silicon Valley startup' paradigm: you get an idea for a product, get a few young people (paid with stock options) to put a version of it together, and five months later put it on the Internet. If it flies, you IPO and everyone gets rich. However enticing, this paradigm only works if the new system is largely standalone; that is, its value lies in itself and not in how it enhances the value of a system of interdependent systems. The latter would be the case, for instance, if we were trying to analyze the benefits of a new type of weapon system. For this analysis we must look at the context presented to our system and at how its response affects the context the other systems see. The issue is that these contexts carry a very large amount of uncertainty. We describe how the Dynamic Focusing Architecture can guide us through the uncertainty to discover the underlying key issues.
An interactive simulation environment for training (and/or analysis) of military operations is presented as an example of applying a specific methodology. The phases of the methodology are: the general description of a conflict; the conflict model as a non-coalition three-person game; the model of the battle process as a multidimensional stochastic process (DC class); the decision model as a multistage stochastic optimization problem; the computer environment for simulation of the combat process; the experiments, monitoring, and visualization phase; and the post-simulation analysis. A military conflict can be described in general terms, and the sides of the conflict can be identified along with their structure, warfare, states, locations, missions, and so on. A theoretical game is considered as the basic model of a military conflict. The stochastic model expresses the uncertainty in a conflict situation, and the transition between the stochastic model and the simulation model is shown. The global decision problem is formulated for each side as a multistage stochastic programming problem in which a risk function serves as the criterion. Each component of the conflict is described as an object, and the objects' behavior during gaming is represented as a simulation process. The environment proposed is built as an open system and can be extended and improved with new combat models, unit structures, tactical rules, and additional monitored characteristics. Possible directions for the development and utilization of the computer environment are discussed.
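The abstract does not reproduce the decision model itself; in standard notation (all symbols are generic stand-ins, not the paper's), a multistage stochastic program with a risk criterion has the form

```latex
% Generic multistage stochastic program with a risk criterion; all symbols
% are stand-ins, not the paper's notation.
\min_{u_1,\dots,u_T}\ \mathcal{R}\!\left[\,\sum_{t=1}^{T} c_t(x_t,u_t,\xi_t)\right]
\quad\text{s.t.}\quad x_{t+1}=f_t(x_t,u_t,\xi_t),\qquad u_t\in U_t(x_t),
```

where x_t is the battle state at stage t, u_t the side's decision, xi_t the random disturbance, f_t the combat dynamics, c_t the stage cost, and R a risk functional used as the criterion in place of plain expectation.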
Currently, many behavior-implementation technologies are available for modeling human behaviors in Department of Defense (DOD) computerized systems. However, it is commonly recognized that no single currently adopted technology can fully represent complex and dynamic human decision-making and cognition. The author's view is that the situation can be greatly improved if multiple technologies are integrated within a well-designed overarching architecture that amplifies the merits of each participating technology while suppressing its inherent limitations. COREBA uses an overarching behavior-integration architecture that makes multiple implementation technologies cooperate in a homogeneous environment while collectively transcending the limitations of the individual technologies. Specifically, COREBA synergistically integrates artificial intelligence and complex adaptive systems under the Rational Behavior Model multi-level, multi-paradigm behavior architecture. This paper describes the applicability of COREBA in the DOD domain, its behavioral capabilities and characteristics, and how the COREBA architecture integrates various behavior-implementation technologies.
Intelligent agent architectures widely employ methods from the logic of belief. The goal of this paper is to find a correct and effective inference mechanism that can substantially improve on traditional resolution-based methods. The semantics of the mechanism is based on Minsky's frames. Each agent is modeled by frames whose slots represent what the agent believes. The inference process is realized by daemons filling the frame slots, where filling means setting unknown slot values. The order of reasoning is established by a directed acyclic graph and driven by topological sorting as the reasoning strategy. Analysis of the inference algorithm shows that the new method works in polynomial time; it is therefore more efficient than traditional resolution-based methods, which are NP-hard. The correctness of the object-oriented implementation of the algorithm is established by considering the inference process in terms of abstract relational systems and their isomorphisms. Finally, a methodology for implementing agents and their inference process in an object-oriented language is presented. All the concepts and methodology are illustrated by an object-oriented solution to the 'three wise men' problem, implemented in Smalltalk.
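A compact Python sketch of the described mechanism: slots are filled by daemons, and a topological sort of the slot-dependency DAG fixes the order of reasoning so each daemon fires exactly once (hence polynomial time). The frames, slots, and daemons below are invented, loosely echoing the wise-men flavor; they are not the paper's Smalltalk implementation.

```python
# Daemon-driven slot filling ordered by topological sort of the dependency
# DAG; every frame, slot, and rule here is a hypothetical illustration.
from graphlib import TopologicalSorter

frames = {"agent_A": {}, "agent_B": {}}

# daemon: (slots it reads, function of those slot values) keyed by the slot
# it fills.
daemons = {
    ("agent_A", "sees_white_hat"): ((), lambda: True),
    ("agent_B", "sees_white_hat"): ((), lambda: True),
    ("agent_A", "knows_own_hat"):
        ((("agent_B", "sees_white_hat"),), lambda b: not b),
    ("agent_B", "infers_from_A"):
        ((("agent_A", "knows_own_hat"),), lambda a: not a),
}

# Build the slot-dependency DAG and fix the reasoning order topologically.
graph = {slot: set(deps) for slot, (deps, _) in daemons.items()}
for slot in TopologicalSorter(graph).static_order():
    deps, fn = daemons[slot]
    frame, name = slot
    frames[frame][name] = fn(*(frames[f][n] for f, n in deps))

print(frames)
```

Because the DAG guarantees every slot a daemon reads has already been filled, no backtracking or resolution search is needed; the whole inference is one linear pass over the sorted slots.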
Collaborative engineering and development are paramount to supporting new warfighter-driven programs like Simulation Based Acquisition and Simulation Based Design. The Collaborative Enterprise Environment (CEE), under development by AFRL, enhances Directorate research and development, exploration, evaluation, planning and transition of technologies by enabling collaboration and technology integration. A number of algorithms and models have been implemented under a variety of DoD programs using Khoros, a powerful software development and visual programming environment that facilitates the integration of legacy code as well as the development of new solutions. This paper discusses how Khoros is being extended to operate within the CEE, seamlessly supporting collaborative development and component reuse.
Today's modeling and simulation community faces the problem of developing and managing large, complex system models composed of a diverse set of subsystem component models. These component models may be described with varying amounts of detail and fidelity, as well as differing modeling paradigms. Often, a complex simulation built from high-fidelity subcomponent models results in a more detailed system model than the simulation objective requires. Simulating such a system model wastes simulation time with respect to the simulation goals. One way to avoid wasting simulation cycles is to reduce the complexity of the subcomponent models without affecting the desired simulation objective. This process of reducing subcomponent model complexity is known as model abstraction. Abstraction reduces subcomponent model complexity by eliminating, grouping, or estimating model parameters or variables at a less detailed level without grossly affecting the simulation results. One key issue in model abstraction is identifying the variables or parameters that can be abstracted away for a given simulation objective. This paper presents an approach to identifying candidate variables for model abstraction when considering typical C4ISR (Command, Control, Communications, Computers, Intelligence, Surveillance, and Reconnaissance) hardware systems.
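One simple screening heuristic consistent with the stated goal (the paper's actual identification approach is not reproduced here): perturb each subcomponent parameter across its plausible range and flag parameters whose effect on the simulation objective is negligible as abstraction candidates. The metric, parameters, and ranges below are hypothetical.

```python
# One-at-a-time screening of abstraction candidates: parameters with a
# negligible effect on the objective can be frozen, grouped, or estimated.
import numpy as np

def metric(params):
    """Stand-in simulation objective, e.g., message delivery latency."""
    return (3.0 * params["link_rate"]
            + 0.8 * params["queue_depth"]
            + 0.001 * params["crypto_mode"])     # nearly irrelevant here

nominal = {"link_rate": 1.0, "queue_depth": 5.0, "crypto_mode": 2.0}
spans = {"link_rate": 0.5, "queue_depth": 3.0, "crypto_mode": 1.0}

effects = {}
for name in nominal:
    lo, hi = dict(nominal), dict(nominal)
    lo[name] -= spans[name]
    hi[name] += spans[name]
    effects[name] = abs(metric(hi) - metric(lo))

threshold = 0.05 * max(effects.values())
for name, e in sorted(effects.items(), key=lambda kv: kv[1]):
    tag = "abstract away" if e < threshold else "keep detailed"
    print(f"{name:12s} effect={e:7.3f}  -> {tag}")
```

The screening is only as good as the chosen objective: a parameter that is irrelevant to latency may be essential to a security-oriented study, which is why candidate identification must be tied to the stated simulation objective.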