KEYWORDS: Systems modeling, Modeling and simulation, Artificial intelligence, Machine learning, Intelligence systems, System identification, New and emerging technologies, Modeling, Computing systems, Clouds
Today’s battlefield increasingly incorporates emerging technologies using artificial intelligence. These systems not only provide unparalleled speed and accuracy, but also allow digital models to be developed and tested in simulation prior to deployment, reducing the time and cost of acquisition. This holds additional promise for wargaming modeling and simulation for understanding the impact of complex, multi-domain operations on future force efficacy and structure. However, current modeling and simulation environments are not designed for simulating decentralized, intelligent systems at scale. Cloud computing has revolutionized how we scale computational capability, but was not designed for complex, low-latency interactions between independently reasoning entities. This motivates new methods for characterizing and mitigating complexity to meet operational and mission requirements. We outline the challenges and opportunities for modeling and simulating large-scale multi-agent systems and identify future research areas that should address these challenges. We recommend that investment be placed in holistically understanding scalability from a cost-benefit perspective, measuring the impact on requirements, developing improved tools for understanding the dimensions of scalability, and formalizing specifications of the scalability requirements met (or not met) by available systems. We propose that a framework for reasoning over and adjusting the fidelity of various models within a system of systems is needed to meet development and testing requirements. Formal methods can be used to understand the limits on scalability as a function of objectives (e.g., speed, convergence, performance) and constraints (e.g., cost, compute, and time), optimizing resources to develop and test interacting artificial intelligence systems at scale.
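The cost-benefit framing of fidelity versus scale described above can be illustrated with a toy resource-allocation sketch. All names and numbers here are hypothetical illustrations, not from the abstract: we assume a fixed compute budget, a required agent count, and a per-agent cost for each model fidelity tier.

```python
def max_fidelity(budget, required_agents, cost_per_agent):
    """Toy fidelity-selection sketch (hypothetical, for illustration only).

    Pick the highest-fidelity model tier whose per-agent compute cost
    still lets the required number of agents fit within the budget.

    cost_per_agent: dict mapping fidelity tier name -> cost units per agent,
    where a higher cost is assumed to mean a higher-fidelity model.
    """
    feasible = {
        tier: cost
        for tier, cost in cost_per_agent.items()
        if cost * required_agents <= budget
    }
    if not feasible:
        return None  # no tier can simulate the required agents within budget
    # In this toy model, "highest fidelity" is the most expensive feasible tier.
    return max(feasible, key=feasible.get)


# Hypothetical tiers: simulating 100 agents within 1000 cost units
# rules out the "high" tier (20 * 100 = 2000) but allows "medium".
tiers = {"low": 1, "medium": 5, "high": 20}
print(max_fidelity(1000, 100, tiers))   # medium
print(max_fidelity(10000, 100, tiers))  # high
print(max_fidelity(50, 100, tiers))     # None
```

A real framework would trade off multiple objectives (speed, convergence, performance) rather than a single scalar cost, but the same feasibility-then-optimize structure applies.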
Given that many readily available datasets consist of large amounts of unlabeled data,1 unsupervised learning methods are an important component of many data-driven applications. In many instances, ground-truth labels may be unavailable or obtainable only at considerable expense. As a result, there is an acute need for the ability to understand and interpret unlabeled datasets as thoroughly as possible. In this article, we examine the effectiveness of learned deep embeddings via internal clustering metrics on a dataset composed of unlabeled StarCraft 2 game replays. The results of this work indicate that the use of deep embeddings provides a promising basis for clustering and interpreting player behavior in complex game domains.
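Internal clustering metrics of the kind the abstract refers to score a clustering using only the points and their cluster assignments, with no ground-truth labels. A minimal pure-Python sketch of one common such metric, the silhouette coefficient, is below; the specific metrics and embeddings used in the article are not specified here, so this is an illustration of the general technique, not the authors' method.

```python
from math import dist  # Euclidean distance (Python 3.8+)

def silhouette_score(points, labels):
    """Mean silhouette coefficient: an internal clustering metric that
    needs only the points (e.g., learned embeddings) and their cluster
    assignments, never any ground-truth labels."""
    clusters = {}
    for p, label in zip(points, labels):
        clusters.setdefault(label, []).append(p)
    scores = []
    for p, label in zip(points, labels):
        own = clusters[label]
        if len(own) == 1:
            scores.append(0.0)  # singleton clusters score 0 by convention
            continue
        # a: mean distance to the other points in the same cluster
        a = sum(dist(p, q) for q in own if q is not p) / (len(own) - 1)
        # b: mean distance to the points of the nearest other cluster
        b = min(
            sum(dist(p, q) for q in other) / len(other)
            for k, other in clusters.items() if k != label
        )
        scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)


# Toy 2D "embeddings": two well-separated groups score close to 1,
# while a scrambled assignment of the same points scores much lower.
points = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
good = silhouette_score(points, [0, 0, 0, 1, 1, 1])
bad = silhouette_score(points, [0, 1, 0, 1, 0, 1])
```

In practice one would compute such a metric over candidate clusterings of the learned embeddings (varying the cluster count or algorithm) and prefer the clustering that scores best.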