Proceedings Volume 5091

Enabling Technologies for Simulation Science VII


View the digital version of this volume at SPIE Digital Library.

Volume Details

Date Published: 4 September 2003
Contents: 15 Sessions, 48 Papers, 0 Presentations
Conference: AeroSense 2003
Volume Number: 5091

Table of Contents


All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Modeling and Simulation for Predictive Battlespace Awareness
  • Enabling Technologies for Effects-Based Operations
  • Collaborative and Distributed Environments
  • Synthetic Environments and Virtual Testbeds
  • Technology for Wargaming Support
  • Modeling Adversarial Behavior
  • Human Behavior Representation for Computer-Generated Forces
  • Theoretical Foundations of Decision Support
  • Modeling and Optimization of Military Operations
  • Verification and Validation of Models and Simulations
  • Model Abstraction Techniques and Applications
  • Paradigms and Frameworks
  • XML-Based Simulation
  • Linguistic Geometry
  • Simulation in Acquisition
Modeling and Simulation for Predictive Battlespace Awareness
Distributed collaborative environments for predictive battlespace awareness
The past decade has produced significant changes in the conduct of military operations: asymmetric warfare, the reliance on dynamic coalitions, stringent rules of engagement, increased concern about collateral damage, and the need for sustained air operations. Mission commanders need to assimilate a tremendous amount of information, make quick-response decisions, and quantify the effects of those decisions in the face of uncertainty. Situational assessment is crucial in understanding the battlespace. Decision support tools in a distributed collaborative environment offer the capability of decomposing complex multitask processes and distributing them over a dynamic set of execution assets that include modeling, simulations, and analysis tools. Decision support technologies can semi-automate activities, such as analysis and planning, that have a reasonably well-defined process, and can provide machine-level interfaces to refine the myriad of information that the commander must fuse. Collaborative environments provide the framework and integrate models, simulations, and domain-specific decision support tools for the sharing and exchanging of data, information, knowledge, and actions. This paper describes ongoing AFRL research efforts in applying distributed collaborative environments to predictive battlespace awareness.
Dynamic situation assessment and prediction (DSAP)
The face of war has changed. We no longer have the luxury of planning campaigns against a known enemy operating under a well-understood doctrine, using conventional weapons and rules of engagement, all in a well-charted region. Instead, today's Air Force faces new, unforeseen enemies, asymmetric combat situations, and unconventional warfare (Chem/Bio, co-location of military assets near civilian facilities, etc.). At the same time, the emergence of new Air Force doctrinal notions (e.g., Global Strike Task Force, Effects-Based Operations, the desire to minimize or eliminate any collateral damage, etc.) - while propounding the benefits that can be expected with the adoption of such concepts - also imposes many new technical and operational challenges. Furthermore, future mission/battle commanders will need to assimilate a tremendous glut of available information, and still be expected to make quick-response decisions - and to quantify the effects of those decisions - all in the face of uncertainty. All these factors translate to the need for dramatic improvements in the way we plan, rehearse, execute, and dynamically assess the status of military campaigns. This paper addresses these crucial and revolutionary requirements through the pursuit of a new simulation paradigm that allows a user to perform real-time dynamic situation assessment and prediction.
Theoretical underpinnings of predictive battlespace awareness
This paper attempts to address current issues as they pertain to planning tools used to support the allocation of resources over time in a battlespace environment. It is aimed at developing a mathematical foundation for planning tools that more accurately depict the future unfolding of a dynamic battlespace. It demonstrates that, when the planning process is formulated as an optimal control problem, the requirements for an embedded prediction system become clear and distinct.
Enabling Technologies for Effects-Based Operations
Effects-based planning with strategy templates and semantic support
Justin Donnelly, Gary Edwards, Pete Haglich, et al.
To implement effects-based operations, Joint Air Operations planners must think in terms of achieving desired effects in the strategic campaign through operational course of action levels of planning. The strategy development tools discussed in this paper were designed specifically to encourage effects-based thinking. The tools are used to build plans, plan fragments and, most importantly, “strategy templates”. Strategy templates are knowledge-level skeletal planning models that guide the design of strategies that specify the necessary mechanisms and actions to achieve desired effects in the battlespace. The strategic planning knowledge captured in the templates may be employed through wizards to help human planners rapidly apply these general strategic models to specific planning problems. To support the abstract concepts required in the templates, and to guide plan authors in applying these abstract templates to real battlespace planning problems and data, we employ a semantic engine to support the tool capabilities. This engine exploits ontologies represented in the DARPA Agent Markup Language (DAML) and employs the Java Expert System Shell (Jess) as the inference engine to implement the axioms and theorems that encapsulate the DAML semantics. This paper will discuss this technology in supporting Effects-Based Operations and its application to Command and Control for Joint Air Operations for kinetic and non-kinetic military operations.
Effects-based strategy development through center of gravity and target system analysis
Christopher M. White, Michael Prendergast, Nicholas Pioch, et al.
This paper describes an approach to effects-based planning in which a strategic-theater-level mission is refined into operational-level and ultimately tactical-level tasks and desired effects, informed by models of the expected enemy response at each level of abstraction. We describe a strategy development system that implements this approach and supports human-in-the-loop development of an effects-based plan. This system consists of plan authoring tools tightly integrated with a suite of center of gravity (COG) and target system analysis tools. A human planner employs the plan authoring tools to develop a hierarchy of tasks and desired effects. Upon invocation, the target system analysis tools use reduced-order models of enemy centers of gravity to select appropriate target set options for the achievement of desired effects, together with associated indicators for each option. The COG analysis tools also provide explicit models of the causal mechanisms linking tasks and desired effects to one another, and suggest appropriate observable indicators to guide ISR planning, execution monitoring, and campaign assessment. We are currently implementing the system described here as part of the AFRL-sponsored Effects Based Operations program.
Multiagent intelligent systems
Lee S. Krause, Christopher Dean, Lynn A. Lehman
This paper will discuss a simulation approach based upon a family of agent-based models. As the demands placed upon simulation technology grow - from applications such as Effects Based Operations (EBO), to evaluation of indicators and warnings surrounding homeland defense, to commercial needs such as financial risk management - current single-thread simulations will continue to show serious deficiencies. The types of “what if” analysis required to support these applications demand rapidly re-configurable approaches capable of aggregating large models incorporating multiple viewpoints. The use of agent technology promises to provide a broad spectrum of models incorporating differing viewpoints through a synthesis of a collection of models. Each model would provide estimates to the overall scenario based upon its particular measure or aspect. An agent framework, denoted as the “family”, would provide a common ontology in support of differing aspects of the scenario. This approach permits the future of modeling to change from viewing the problem as a single-thread simulation to taking into account multiple viewpoints from different models. Even as models are updated or replaced, the agent approach permits rapid inclusion in new or modified simulations. In this approach, the variety of low- and high-resolution information and its synthesis requires a family of models. Each agent “publishes” its support for a given measure, and each model provides its own estimates of the scenario based upon its particular measure or aspect. If more than one agent provides the same measure (e.g., cognitive), then the results from these agents are combined to form an aggregate measure response. The objective would be to inform and help calibrate a qualitative model, rather than merely to present highly aggregated statistical information. As each result is processed, the next action can then be determined. This is done by a top-level decision system that communicates with the family at the ontology level without any specific understanding of the processes (or model) behind each agent. The increasingly complex demands upon simulation to incorporate the breadth and depth of influencing factors make a family of agent-based models a promising solution. This paper will discuss that solution, with the syntax and semantics necessary to support the approach.
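Editor's note: the following is a minimal sketch, not drawn from the paper, of how a "family" of measure-publishing model agents might be aggregated; every class name, measure, and agent below is a hypothetical illustration.

```python
# Minimal sketch of a "family" of model agents that each publish support for
# a measure and contribute estimates toward a scenario; same-measure results
# are combined into an aggregate measure response.
from collections import defaultdict
from statistics import mean


class ModelAgent:
    """One member of the family: publishes a measure and estimates it."""

    def __init__(self, name, measure, estimator):
        self.name = name
        self.measure = measure      # the aspect this agent supports, e.g. "cognitive"
        self.estimator = estimator  # callable: scenario -> numeric estimate

    def estimate(self, scenario):
        return self.estimator(scenario)


def aggregate_estimates(agents, scenario):
    """Combine estimates from agents that publish the same measure."""
    by_measure = defaultdict(list)
    for agent in agents:
        by_measure[agent.measure].append(agent.estimate(scenario))
    # If several agents publish the same measure, average them into one value.
    return {measure: mean(values) for measure, values in by_measure.items()}


if __name__ == "__main__":
    scenario = {"threat_level": 0.6, "infrastructure": 0.4}
    family = [
        ModelAgent("cog-A", "cognitive", lambda s: 0.8 * s["threat_level"]),
        ModelAgent("cog-B", "cognitive", lambda s: 0.6 * s["threat_level"] + 0.1),
        ModelAgent("infra", "infrastructure", lambda s: 1.0 - s["infrastructure"]),
    ]
    print(aggregate_estimates(family, scenario))
```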
Collaborative and Distributed Environments
Distributed collaborative environments for virtual capability-based planning
Distributed collaboration is an emerging technology that will significantly change how decisions are made in the 21st century. Collaboration involves two or more geographically dispersed individuals working together to share and exchange data, information, knowledge, and actions. The marriage of information, collaboration, and simulation technologies provides the decision maker with a collaborative virtual environment for planning and decision support. This paper reviews research focused on applying an open-standards, agent-based framework with integrated modeling and simulation to a new Air Force initiative in capability-based planning, and on the ability to implement it in a distributed virtual environment. The Virtual Capability Planning effort will provide decision-quality knowledge for Air Force resource allocation and investment planning, including examination of proposed capabilities and the cost of alternative approaches, the impact of technologies, identification of primary risk drivers, and creation of executable acquisition strategies. The transformed Air Force business processes are enabled by iterative use of constructive and virtual modeling, simulation, and analysis together with information technology. These tools are applied collaboratively via a technical framework by all the affected stakeholders - warfighter, laboratory, product center, logistics center, test center, and primary contractor.
Analysis of a JBI pub/sub architecture's infrastructure requirements
A Distributed Information Enterprise Modeling and Simulation (DIEMS) framework, presently under development, is applied to the analysis of a Joint Battlespace Infosphere (JBI) Pub/Sub architecture's infrastructural requirements. This analysis is an example of one methodology that can be employed utilizing the DIEMS framework. This analysis capability permits the information systems engineer to ensure that the planned JBI architecture deployment will provide the required information exchange performance on the infrastructure provided. This paper describes the DIEMS framework including its application in constrained and unconstrained resource utilization modes. A JBI architecture is evaluated in the context of a representative operational scenario on one infrastructure. The simulator's unconstrained resource mode is employed to identify the architecture's ideal operational requirements and in turn identify potential resource limitations. The constrained simulation mode is employed to evaluate the potential choke points in relation to the architecture's performance. The results identify the infrastructure changes required so that the specific JBI architecture will achieve the required operational performance.
Communication interval selection in distributed heterogeneous simulation of large-scale dynamical systems
Charles E. Lucas, Eric A. Walters, Juri Jatskevich, et al.
In this paper, a new technique useful for the numerical simulation of large-scale systems is presented. This approach enables the overall system simulation to be formed by the dynamic interconnection of the various interdependent simulations, each representing a specific component or subsystem such as control, electrical, mechanical, hydraulic, or thermal. Each simulation may be developed separately using possibly different commercial-off-the-shelf simulation programs thereby allowing the most suitable language or tool to be used based on the design/analysis needs. These subsystems communicate the required interface variables at specific time intervals. A discussion concerning the selection of appropriate communication intervals is presented herein. For the purpose of demonstration, this technique is applied to a detailed simulation of a representative aircraft power system, such as that found on the Joint Strike Fighter (JSF). This system is comprised of ten component models each developed using MATLAB/Simulink, EASY5, or ACSL. When the ten component simulations were distributed across just four personal computers (PCs), a greater than 15-fold improvement in simulation speed (compared to the single-computer implementation) was achieved.
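Editor's note: a minimal sketch, under our own assumptions, of the core idea above - two subsystem simulators advance independently and exchange interface variables only at a fixed communication interval. The coupled system and all parameter values are hypothetical, not the aircraft power system from the paper.

```python
# Two subsystems integrate with their own solvers/time steps; between
# communication points each one uses a *held* copy of its partner's output.
def simulate(comm_interval, t_end=5.0, dt=0.001):
    x, y = 1.0, 0.0          # states of subsystem A and subsystem B
    y_held, x_held = y, x    # interface variables, frozen between exchanges
    t, next_comm = 0.0, comm_interval
    while t < t_end:
        x += dt * (-2.0 * x + y_held)   # subsystem A (e.g., electrical)
        y += dt * (-1.0 * y + x_held)   # subsystem B (e.g., thermal)
        t += dt
        if t >= next_comm:              # communicate interface variables
            x_held, y_held = x, y
            next_comm += comm_interval
    return x, y


if __name__ == "__main__":
    # Larger communication intervals cut communication overhead in a
    # distributed run but degrade coupling accuracy; compare the extremes.
    for interval in (0.001, 0.05, 0.5):
        print(interval, simulate(interval))
```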
Design of a software framework to support live/virtual training on distributed terrain
Guy A. Schiavone, Judd Tracy, Eric Woodruff, et al.
In this paper we describe research and development on the concept and application of distributed terrain and distributed terrain servers to support live/virtual training operations. This includes design of a distributed, cluster-capable “Combat Server” for the virtual representation and simulation of live training exercises, and current work to support virtual representation and visualization of live indoor operations involving firefighters, SWAT teams, and/or special operations forces. The Combat Server concept under development is an object-oriented, efficient, and flexible distributed platform designed for simulation and training. It can operate on any compatible, high-performance computer for which the software is compliant; however, it is explicitly designed for distribution and cooperation of relatively inexpensive clustered computers, together playing the role of a large independent system. The design of the Combat Server aims to be generic and encompass any situation that involves monitoring, tracking, assessment, visualization and, eventually, simulated interactivity to complement real-world training exercises. To accomplish such genericity, the design must incorporate techniques such as layering or abstraction to remove any dependencies on specific hardware, such as weapons, that are to eventually be employed by the system; this also includes entity tracking hardware interfaces, whether by GPS or Ultra-Wide Band technologies. The Combat Server is a framework. Its design is a foothold for building a specialized distributed system for modeling a particular style of exercise. The Combat Server can also be a software development framework, providing a platform for building specialized exercises while abstracting the developer from the minutiae of building a real-time distributed system. In this paper we review preliminary experiments regarding the basic line-of-sight (LOS) functions of the Combat Server, examining functionality and scalability in a cluster computing environment. Our initial results show that computing LOS between entities on a distributed terrain scales well over a large number of processors.
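Editor's note: an illustrative sketch (not the Combat Server implementation) of the kind of per-pair line-of-sight query whose scaling the paper examines - a simple ray march over a gridded terrain heightmap. The terrain and entity positions are invented.

```python
import numpy as np


def line_of_sight(heightmap, a, b, eye_height=2.0, samples=100):
    """Return True if the straight segment from a to b clears the terrain.

    a, b: (row, col) grid positions; heightmap: 2D array of terrain heights.
    """
    (r0, c0), (r1, c1) = a, b
    z0 = heightmap[r0, c0] + eye_height
    z1 = heightmap[r1, c1] + eye_height
    for s in np.linspace(0.0, 1.0, samples):
        r = int(round(r0 + s * (r1 - r0)))
        c = int(round(c0 + s * (c1 - c0)))
        ray_z = z0 + s * (z1 - z0)           # height of the sight line here
        if heightmap[r, c] > ray_z:          # terrain blocks the ray
            return False
    return True


if __name__ == "__main__":
    terrain = np.zeros((200, 200))
    terrain[100, 50:150] = 50.0              # a ridge between the two entities
    print(line_of_sight(terrain, (50, 100), (150, 100)))   # ridge blocks: False
    print(line_of_sight(terrain, (10, 10), (30, 30)))      # open ground: True
```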
Coalition readiness management system preliminary interoperability experiment (CReaMS PIE)
Peter Clark, Peter Ryan, Lucien Zalcman, et al.
The United States Navy (USN) has initiated the Coalition Readiness Management System (CReaMS) Initiative to enhance coalition warfighting readiness through advancing development of a team interoperability training and combined mission rehearsal capability. It integrates evolving cognitive team learning principles and processes with advanced technology innovations to produce an effective and efficient team learning environment. The JOint Air Navy Networking Environment (JOANNE) forms the Australian component of CReaMS. The ultimate goal is to link Australian Defence simulation systems with the USN Battle Force Tactical Training (BFTT) system to demonstrate and achieve coalition level warfare training in a synthetic battlespace. This paper discusses the initial Preliminary Interoperability Experiment (PIE) involving USN and Australian Defence establishments.
Synthetic Environments and Virtual Testbeds
Virtual Testbed Aerospace Operations Center (VT-AOC)
The Air Force is conducting research in new technologies for next-generation Aerospace Operations Centers (AOCs). The Virtual Testbed Aerospace Operations Center (VT-AOC) will support advanced research in information technologies that operate in or are closely tied to AOCs. The VT-AOC will provide a context for developing, demonstrating, and testing new processes and tools in a realistic environment. To generate the environment, the VT-AOC will incorporate multiple mixed-resolution simulations that are capable of driving existing and future AOC command and control (C2) systems. The VT-AOC will provide the capability to capture existing or proposed C2 processes and then evaluate them operating in conjunction with new technologies. The VT-AOC will also be capable of connecting with other facilities to support increasingly more complex experiments and demonstrations. Together, these capabilities support key initiatives such as Agile Research and Development/Science and Technology (R&D/S&T), Predictive Battlespace Awareness, and Effects-Based Operations.
Joint synthetic battlespace for decision support (JSB-DS)
Joint Synthetic Battlespace for Decision Support (JSB-DS) is a developing set of concepts and an affiliated prototype environment with a goal of investigating the nature of decision support within a Command and Control (C2) context. To date, this investigation has focused on processing raw operational data into decision quality information and then presenting that information in a format that is useful and intuitive to a decision maker. The JSB-DS prototype was developed to support experimentation involving visual representation of, and interaction with, operational information. JSB-DS's prototype environment utilizes mission level battlefield simulations as a means to investigate decision and visualization aids with respect to situation awareness and reduction in decision timelines. These distributed simulations support dynamic re-tasking of Intelligence, Surveillance and Reconnaissance (ISR) and airborne strike assets within a Time Critical Target (TCT) prosecution vignette. The JSB-DS environment can serve as a basis for testing C2/TCT processes, procedures and training.
Intelligent launch and range operations virtual testbed (ILRO-VTB)
Intelligent Launch and Range Operations Virtual Test Bed (ILRO-VTB) is a real-time, web-based command and control, communication, and intelligent simulation environment for ground-vehicle, launch, and range operation activities. ILRO-VTB consists of a variety of simulation models combined with commercial and indigenous software developments (NASA Ames). It creates a hybrid software/hardware environment suitable for testing various integrated control system components of launch and range. The dynamic interactions of the integrated simulated control systems are not well understood. Insight into such systems can only be achieved through simulation/emulation. For that reason, NASA has established a VTB where we can learn the actual control and dynamics of designs for future space programs, including testing and performance evaluation. The current implementation of the VTB simulates the mission, control, ground-vehicle engineering, and launch and range operations of a sub-orbital vehicle. The present development of the test bed simulates the operations of the Space Shuttle Vehicle (SSV) at NASA Kennedy Space Center. The test bed supports a wide variety of shuttle missions with ancillary modeling capabilities such as weather forecasting, lightning tracking, toxic gas dispersion, debris dispersion, telemetry, trajectory modeling, ground operations, and payload models. To achieve the simulations, all models are linked using the Common Object Request Broker Architecture (CORBA). The test bed provides opportunities for government, universities, researchers, and industry to conduct a real-time shuttle launch in cyberspace.
Technology for Wargaming Support
Designing a system-on-system wargame
David O. Ross
The need for a System-on-System Wargame is identified and a design and development approach outlined. The wargame is designed in a modular and fractal manner with three nets (C3ISR, system, and environment), with each entity represented on each net. The term “Third Generation Wargame” is clarified. Several contracts supporting this effort have been completed, other contracts are in progress, and still others are in planning or pending award. An internal project is also in progress.
Building intelligence in third-generation training and battle simulations
Dennis Jacobi, Don Anderson, Vance von Borries, et al.
Current war games and simulations are primarily attrition based, and are centered on the concept of force on force. They constitute what can be defined as “second generation” war games. So-called “first generation” war games were focused on strategy, with the primary concept of mind on mind. We envision “third generation” war games and battle simulations as concentrating on effects, with the primary concept being system on system. Thus the third-generation systems will incorporate each successive generation and take into account strategy, attrition, and effects. This paper will describe the principal advantages and features that need to be implemented to create a true “third generation” battle simulation and the architectural issues faced when designing and building such a system. Areas of primary concern are doctrine, command and control, allied and coalition warfare, and cascading effects. Effectively addressing the interactive effects of these issues is of critical importance. In order to provide an adaptable and modular system that will accept future modifications and additions with relative ease, we are researching the use of a distributed Multi-Agent System (MAS) that incorporates various artificial intelligence methods. The agent architecture can mirror the military command structure from both vertical and horizontal perspectives while providing the ability to make modifications to doctrine, command structures, and inter-command communications, as well as to model the results of various effects upon one another and upon the components of the simulation. This is commonly referred to as “cascading effects,” in which A affects B, B affects C, and so on. Agents can be used to simulate units or parts of units that interact to form the whole. Even individuals can eventually be simulated, to account for the effect of key individuals such as commanders, heroes, and aces. Each agent will have a learning component built in to provide “individual intelligence” based on experience.
Creating an AI modeling application for designers and developers
Ryan Houlette, Daniel Fu, Randy Jensen
Simulation developers often realize an entity's AI by writing a program that exhibits the intended behavior. These behaviors are often the product of design documents written by designers. These individuals, while possessing a vast knowledge of the subject matter, might not have any programming knowledge whatsoever. To address this disconnect between design and subsequent development, we have created an AI application whereby a designer or developer sketches an entity's AI using a graphical “drag and drop” interface to quickly articulate behavior using a UML-like representation of state charts. Aside from the design-level benefits, the application also features a runtime engine that takes the application's data as input along with a simulation or game interface, and makes the AI operational. We discuss our experience in creating such an application for both designer and developer.
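Editor's note: a minimal sketch, under our own assumptions, of the data-driven runtime idea described above - behavior authored as states and transitions (plain dictionaries here standing in for the graphical UML-like state-chart representation, without hierarchy) and executed by a generic engine. The sentry behavior and event names are hypothetical.

```python
class StateMachine:
    def __init__(self, spec, initial):
        self.spec = spec          # state -> list of (event, guard, next_state)
        self.state = initial

    def handle(self, event, context):
        for evt, guard, nxt in self.spec.get(self.state, []):
            if evt == event and guard(context):
                self.state = nxt
                return self.state
        return self.state          # no matching transition: stay put


if __name__ == "__main__":
    # Hypothetical sentry-entity behavior as a designer might author it.
    spec = {
        "patrol":      [("contact", lambda c: c["range"] < 500, "engage"),
                        ("contact", lambda c: True, "investigate")],
        "investigate": [("contact", lambda c: c["range"] < 500, "engage"),
                        ("all_clear", lambda c: True, "patrol")],
        "engage":      [("all_clear", lambda c: True, "patrol")],
    }
    ai = StateMachine(spec, "patrol")
    print(ai.handle("contact", {"range": 800}))     # -> investigate
    print(ai.handle("contact", {"range": 300}))     # -> engage
    print(ai.handle("all_clear", {"range": 0}))     # -> patrol
```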
Modeling Adversarial Behavior
Thoughts on higher-level adversary modeling
The advent of concepts such as effects-based operations and decision dominance has led to renewed interest in the modeling of adversaries. This think-piece discusses some of the issues involved in conceiving and implementing such models. In particular, it addresses what behaviors may be of interest, how models might be used in high-level decision support, alternative conceptual models, and possible simple implementations. It also touches on issues of multiresolution, multiperspective modeling (MRMPM), modularity, and reusability.
A cognitive architecture for adversary intent inferencing: structure of knowledge and computation
Existing target-based and objectives-based (“strategy-to-task”) approaches to mission planning do not explicitly address the adversary’s decision-making processes. Obviously, the adversary’s courses of action (COA) are influenced in a cause-and-effect manner by actions taken by friendly forces. Given the iterative/interleaved nature of actions taken by enemy and friendly forces, mission planning must clearly take adversarial decision making into account, especially during concurrent mission planning and execution. Currently, adversarial behavior with regard to cause and effect is difficult to account for within the framework of existing planning approaches. This paper describes a cognitive architecture for computationally modeling, predicting, and explaining adversarial behaviors and COAs and proposes an integrated framework for mission planning. Our framework fits naturally within the Effects-Based Operations (EBO) approach to mission planning.
Adversarial inferencing for generating dynamic adversary behavior
In the current world environment, the rapidly changing dynamics of organizational adversaries are increasing the difficulty for military analysts and planners to accurately predict potential actions. As an integral part of the planning process, we need to assess our planning strategies against the range of potential adversarial actions. This dynamic world environment has established a necessity to develop tools to assist in establishing hypotheses for future adversary actions. Our research investigated the feasibility of utilizing an adversarial tool as the core element within a predictive simulation to establish emergent adversarial behavior. It is our desire to use this intelligent adversary to generate alternative futures in performing Course of Action (COA) analysis. Such a system will allow planners to gauge and evaluate the effectiveness of alternative plans under varying actions and reactions. This research focuses on one of many possible techniques required to address the technical challenge of generating intelligent adversary behaviors. This development activity addresses two research components. First, we establish an environment in which to perform the feasibility experiment and analysis; the proof of concept performed to analyze and assess the feasibility of utilizing an adversarial inferencing system to provide emergent adversary behavior is discussed. Second, we determine whether the appropriate interfaces can be reasonably established to provide integration with an existing force structure simulation framework. The authors also describe the envisioned simulation system and the software development performed to extend the inferencing engine and system interface toward that goal. The experimental results of observing emergent adversary behavior by applying the simulated COAs to the adversary model are discussed. The research addresses numerous technological challenges in developing the necessary methodologies and tools for a software-based COA analysis framework utilizing intelligent adversarial intent.
Validated behavioral forecasting
Gregg Courand, Michael Fehling
We motivate new challenges for behavioral forecasting in the current era, and present illustrative results of our work. Our results are based on psycho-social theory, modeling and analysis methods developed by the authors. We are prototyping a technology to support users conducting this type of analysis.
Human Behavior Representation for Computer-Generated Forces
The role of the unified modeling language and the extensible markup language in computer-generated actor behavior evaluation
Sheila B. Banks, Martin R. Stytz
In spite of numerous efforts undertaken to develop processes and procedures for the test and evaluation of the performance and accuracy of computer-generated actors (CGAs), much remains to be done before we can confidently field CGAs that reliably provide a desired suite of behaviors. While it currently appears to be impossible to completely validate CGA behaviors, we believe that an experiment-based approach to CGA behavior evaluation can provide a high degree of confidence in the accuracy of the CGA behaviors and that, as a result, CGA behaviors can confidently be considered acceptable within the bounds of the testing and assessments related to their intended use. Clearly, exhaustive testing of all possible CGA behaviors is not feasible, since exhaustive testing of CGA behaviors would be an even more computationally complex task than software testing; exhaustive software testing has proven to be impossible for any but the most trivial software. Therefore, our approach to addressing the CGA behavior test and evaluation challenge is based upon the use of the Unified Modeling Language (UML) and the eXtensible Markup Language (XML) to capture and describe the desired CGA behaviors. UML provides the capability to document the desired behavior from a number of perspectives, and XML allows us to augment the UML documentation in a standard, open manner. The paper is organized as follows. Section One contains an introduction to motivate our research and a discussion of the challenges that must be addressed to properly model human behavior and then test and evaluate CGA human behavior models. Section Two contains a discussion of the relevant background technologies for our work. Section Three contains the discussion of our approach to CGA behavior testing, assessment, and evaluation and how we believe that UML and XML should be used for CGA behavior testing documentation. Section Four contains a summary and suggestions for further research.
An XML-based approach to knowledge base unification for computer-generated actors
Martin R. Stytz, Sheila B. Banks
A serious impediment to robust computer-generated actors (CGAs) is the cost of knowledge acquisition for the CGA. There are a number of reasons for this difficulty, the chief challenge being the fact that currently each knowledge base is handcrafted for a CGA, with little or no reuse of the knowledge acquisition activity or of the knowledge bases developed for other CGAs. Our research was undertaken to address the knowledge base development cost issue by devising an interoperable and open format for knowledge bases that can be used to represent a knowledge base in a reasoning-system-independent manner. While the format is reasoning-system independent, there is one type of format for each type of reasoning system; the independence is found within each class of reasoning systems, such as fuzzy logic, Bayesian networks, rules, frames, and case-based reasoning systems, among others. In the paper, we discuss our approach to developing the knowledge base format and our current specification for the format. The foundation for our approach to developing a knowledge base representation format that fosters reuse of the contents of knowledge bases is the eXtensible Markup Language (XML). Below, we review related work as well as provide a brief introduction to XML. We describe our requirements for the knowledge base representation and discuss our approach to developing the knowledge base format and our current specification for the format. We describe how we use the XML language to construct the unified knowledge base representation. The paper concludes with a short summary of the project and suggestions for future work.
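Editor's note: a small illustrative sketch, with element names invented here rather than taken from the authors' specification, of one "rule-class" knowledge base expressed in XML and loaded into a trivial forward-chaining evaluator - showing how an open format lets the same knowledge base be reused by different reasoners.

```python
import xml.etree.ElementTree as ET

KB_XML = """
<knowledgebase class="rules">
  <rule id="r1"><if>threat_detected</if><if>in_range</if><then>engage</then></rule>
  <rule id="r2"><if>low_fuel</if><then>return_to_base</then></rule>
</knowledgebase>
"""


def load_rules(xml_text):
    """Parse the XML knowledge base into (conditions, conclusion) pairs."""
    root = ET.fromstring(xml_text)
    return [([c.text for c in r.findall("if")], r.find("then").text)
            for r in root.findall("rule")]


def forward_chain(rules, facts):
    """Apply rules until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in facts and all(c in facts for c in conditions):
                facts.add(conclusion)
                changed = True
    return facts


if __name__ == "__main__":
    rules = load_rules(KB_XML)
    print(forward_chain(rules, {"threat_detected", "in_range"}))
```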
Human error modeling in computer-generated forces: a team coordination analysis tool
Benjamin Bell, Jacqueline Scolaro
Understanding interactions among team members is critical for developing effective team-level training systems and for designing responsive decision support technologies. Communications among team members under exposure to various stressors is a research area that has received substantial experimental attention. Sustained access to human subjects, though, poses problems given the great demands on skilled practitioners' time. An approach to reducing reliance on human subjects while creating flexible research opportunities is to create a suite of human behavioral models, each representing a member of a team, and expose those models to various sources of stress in order to observe the response of each model and the overall patterns of team behavior. We present results of a preliminary effort in adopting this approach using Team Interaction Analysis with Reusable Agents (TIARA), an implementation platform for modeling interactive behaviors and human decision processes in stressful conditions. The current instantiation of TIARA models mission crew positions within the E2-C airborne early warning aircraft and allows for variable proficiency levels in one crew member. We describe our framework for controlling various stressors confronting the simulated crew and summarize our preliminary analysis of team behaviors and communication patterns arising from (1) the presence of those stressors; and (2) proficiency of the variable crew member.
Theoretical Foundations of Decision Support
Linguistic geometry: new technology for decision support
Linguistic Geometry (LG) is a revolutionary gaming approach that is ideally suited for military decision aids for Air, Ground, Naval, and Space-based operations, as well as for guiding robotic vehicles and traditional entertainment games. When thinking about modern or future military operations, the game metaphor comes to mind right away. Indeed, the air space together with the ground and seas may be viewed as a gigantic three-dimensional game board. Refining this picture, the LG approach is capable of providing an LG hypergame, that is, a system of multiple concurrent, interconnected, multi-player abstract board games (ABG) of various resolutions and time frames reflecting the various kinds of hardware and effects involved in the battlespace and the solution space. By providing a hypergame representation of the battlespace, LG already provides a significant advance in situational awareness. However, the greatest advantage of the LG approach is an ability to provide commanders of campaigns and missions with decision options resulting in attainment of the commander's intent. At each game turn, an LG decision support tool assigns the best actions to each of the multitude of battlespace actors (UAVs, bombers, cruise missiles, etc.). This is done through utilization of algorithms finding winning strategies and tactics, which are the core of the LG approach.
Judgmental biases in decision support for strike operations
Human decisionmaking does not typically fit the classical analytic model, and the heuristics employed may yield a variety of biased judgments. These biases are often considered inherently adverse, but may be functional in some cases. Decision support systems can mitigate some biases, but often introduce others. “Debiasing” decision support systems entails designing DSS to address expected biases, and to preclude inducing new ones. High-level C2 decisionmaking processes are poorly understood, but these general principles and lessons learned in other fields are expected to obtain. A notional air campaign illustrates potential biases in a commander’s judgment during planning and execution, and the role of debiasing operational DSS.
A work-centered cognitively based architecture for decision support: the work-centered infomediary layer (WIL) model
Wayne Zachary, Robert Eggleston, Jason Donmoyer, et al.
Decision-making is strongly shaped and influenced by the work context in which decisions are embedded. This suggests that decision support needs to be anchored by a model (implicit or explicit) of the work process, in contrast to traditional approaches that anchor decision support either to context-free decision models (e.g., utility theory) or to detailed models of the external (e.g., battlespace) environment. An architecture for cognitively based, work-centered decision support called the Work-centered Infomediary Layer (WIL) is presented. WIL separates decision support into three overall processes: building and dynamically maintaining an explicit context model; using the context model to identify opportunities for decision support; and tailoring generic decision-support strategies to the current context and offering them to the system user/decision maker. The generic decision support strategies include such things as activity/attention aiding, decision process structuring, work performance support (selective, contextual automation), explanation/elaboration, infosphere data retrieval, and what-if/action-projection and visualization. A WIL-based application is a work-centered decision support layer that provides active support without intent inferencing, and that is cognitively based without requiring classical cognitive task analyses. Example WIL applications are detailed and discussed.
Modeling and Optimization of Military Operations
A receding horizon approach for dynamic UAV mission management
We consider a setting where multiple UAVs form a team cooperating to visit multiple targets to collect rewards associated with them. The team objective is to maximize the total reward accumulated over a given time interval. Complicating factors include uncertainties regarding the locations of targets and the effectiveness of collecting rewards, differences among vehicle capabilities, and the fact that rewards are time-varying. We describe a Receding Horizon (RH) control scheme which dynamically assigns vehicles to targets and simultaneously determines associated trajectories. This scheme is based on solving a sequence of optimization problems over a planning horizon and executing them over a shorter action horizon. We also describe a simulated battlespace environment designed to test UAV team missions and to illustrate how the RH scheme can achieve optimal performance with high probability.
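Editor's note: a minimal sketch of the receding-horizon loop described above, using entirely hypothetical problem data and a brute-force assignment step in place of the authors' optimization - plan over a long planning horizon, execute only a shorter action horizon, then re-plan with updated (time-varying) rewards.

```python
import itertools


def plan(vehicle_pos, targets, horizon):
    """Pick the assignment of vehicles to distinct targets that maximizes
    reward minus travel cost over the planning horizon (brute force here)."""
    best, best_value = None, float("-inf")
    ids = list(targets)
    for assign in itertools.permutations(ids, len(vehicle_pos)):
        value = 0.0
        for v, tid in enumerate(assign):
            dist = abs(vehicle_pos[v] - targets[tid]["pos"])
            if dist <= horizon:                      # reachable within the horizon
                value += targets[tid]["reward"] - 0.1 * dist
        if value > best_value:
            best, best_value = assign, value
    return best


def receding_horizon(vehicle_pos, targets, planning_h=10, action_h=3, steps=5):
    for _ in range(steps):
        assign = plan(vehicle_pos, targets, planning_h)
        # Execute only the action horizon, then re-plan.
        for v, tid in enumerate(assign):
            gap = targets[tid]["pos"] - vehicle_pos[v]
            step = max(-action_h, min(action_h, gap))
            vehicle_pos[v] += step
            if vehicle_pos[v] == targets[tid]["pos"]:
                targets[tid]["reward"] = 0.0          # reward collected
        for t in targets.values():                    # rewards are time-varying
            t["reward"] *= 0.9
        print(vehicle_pos, {k: round(t["reward"], 2) for k, t in targets.items()})


if __name__ == "__main__":
    receding_horizon([0.0, 5.0],
                     {"T1": {"pos": 8.0, "reward": 10.0},
                      "T2": {"pos": -4.0, "reward": 6.0},
                      "T3": {"pos": 12.0, "reward": 3.0}})
```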
Model identification and optimization for operational simulation
Douglas A. Popken, Louis A. Cox
This paper describes initial research to define and demonstrate an integrated set of algorithms for conducting high-level Operational Simulations. In practice, an Operational Simulation would be used during an ongoing military mission to monitor operations, update state information, compare actual versus planned states, and suggest revised alternative Courses of Action. Significant technical challenges to this realization result from the size and complexity of the problem domain, the inherent uncertainty of situation assessments, and the need for immediate answers. Taking a top-down approach, we initially define the problem with respect to high-level military planning. By narrowing the state space we are better able to focus on model, data, and algorithm integration issues without getting sidetracked by issues specific to any single application or implementation. We propose three main functions in the planning cycle: situation assessment, parameter update, and plan assessment and prediction. Situation assessment uses hierarchical Bayes Networks to estimate initial state probabilities. A parameter update function based on Hidden Markov Models then produces revised state probabilities and state transition probabilities - model identification. Finally, the plan assessment and prediction function uses these revised estimates for simulation-based prediction as well as for determining optimal policies via Markov Decision Processes and simulation-optimization heuristics.
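Editor's note: a minimal numerical sketch, with our own toy numbers rather than the paper's models, of the "parameter update" step in the planning cycle above - an HMM forward update that revises state probabilities as observations of the ongoing operation arrive. The two operational states and the observation model are hypothetical.

```python
import numpy as np

states = ["on_plan", "off_plan"]
prior = np.array([0.8, 0.2])                 # from situation assessment
transition = np.array([[0.9, 0.1],           # P(next state | current state)
                       [0.3, 0.7]])
emission = np.array([[0.7, 0.3],             # P(observation | state);
                     [0.2, 0.8]])            # columns: "nominal", "anomalous"
obs_index = {"nominal": 0, "anomalous": 1}


def forward_update(belief, observation):
    """One HMM forward step: predict with the transition model, then
    reweight by the likelihood of the new observation and normalize."""
    predicted = transition.T @ belief
    updated = predicted * emission[:, obs_index[observation]]
    return updated / updated.sum()


if __name__ == "__main__":
    belief = prior
    for obs in ["nominal", "anomalous", "anomalous"]:
        belief = forward_update(belief, obs)
        print(obs, dict(zip(states, np.round(belief, 3))))
```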
Verification and Validation of Models and Simulations
MRMAide model validation process
Maria Valinski, Robert M. McGraw
The modeling and simulation community has developed many processes for certifying the credibility of simulation models. Many of these processes are implemented throughout the model development cycle and require comparison of the developed model with the system being simulated. With the increased focus on model reuse, these processes need to be tailored to address the development cycle of reusing existing credible models. This paper outlines the validation process that certifies the credibility of simulation models produced by MRMAide (Mixed Resolution Modeling Aide). MRMAide is a technology that semi-automates the development of model wrappers. These wrappers are used to resolve fidelity differences between models using mixed resolution modeling (MRM) techniques that allow for the reuse of existing simulation models. The MRMAide validation process extends existing processes that are implemented throughout the development cycle of a simulation model and addresses MRMAide's development processes.
Model Abstraction Techniques and Applications
Mathematical foundations for modeling and simulation
Gwendolyn Walton, Brian F. Goldiez, Ronald Hofer, et al.
A tractable scientific basis is needed for M&S modeling, specification, abstraction, refinement, composition, and decomposition. While the component composition, abstraction, and refinement work of others will provide valuable insights, M&S fundamentals for physical systems must be applicable to the lowest level of knowledge about those systems. Starting with objects, components, or systems (such as the approaches from the software engineering and systems engineering literature) is too high a level. In addition, none of the published theory addresses the situational requirements issues of M&S in conjunction with the fundamental composition, abstraction, and refinement issues. As a result, M&S development, integration, and evolution are often ad hoc, based on ambiguous specifications. Additional theoretical and practical work is needed to support M&S. This paper provides an overview description of several fundamental M&S issues and outlines a recommended M&S foundations research agenda to address these issues.
Using MRMAide to create wrappers for mixed resolution modeling
Anthony Faulds, Robert M. McGraw
Many people today do not program with plug-and-play components or mixed resolution modeling in mind, yet much of the programming work today involves the redevelopment of models. Simulation models are not necessarily programmed in such a way that they easily plug into different programs. The development of the enabling technology named MRMAide is creating a user-friendly and faster way to integrate models. It has three distinct advantages: 1) reuse of models in other simulations, 2) the ability to plug in low-fidelity models for back-of-the-envelope calculations and verification, and 3) the ability to plug a high-fidelity model into a low-fidelity simulation. MRMAide is a GUI-based tool for C++ applications. This paper presents the concept and results of wrapping code so that mixed resolution modeling can be accomplished with less coding. The examples build from basic concepts to complex architectures. The first example is a unit conversion problem: the original program is written in terms of feet, and another program does some of the same calculations, but everything is done in inches. This example can be extrapolated to SI units vs. English units. Another example takes a military simulation and connects a new function to it. The current function takes no arguments, but the plugged-in function requires azimuth, elevation, and a boolean for launch status. This requires the creation of stubs, using probability distributions, to feed values into the system. The final example is a high-fidelity simulation into which a low-fidelity model is plugged.
Using MRMAide for abstraction
Developing models for simulation is an arduous task. After building a high-fidelity model, computation time can be prohibitive for general testing due to processing at higher levels of resolution. One way to address this problem is to develop abstract representations of the models that only consider “key” variables or parameters. For identifying these “key” variables or parameters, it may be desirable to determine the sensitivity of certain variables with respect to model outputs or response. One way of calculating the sensitivity of variables requires the analysis of output variables using clustering techniques. The MRMAide technology (MRMAide stands for Mixed Resolution Modeling Aide) employs a sensitivity analysis as an enabling technology that allows the program to test the sensitivity of certain variables and analyze the correlation of coupled variables. Using this tool helps the developer analyze how a model can be abstracted so that it can be rewritten to reduce the number of calculations while keeping an acceptable level of accuracy. Distributions can then be fed into these variables, rather than calculating their values at each step, resulting in a lower-fidelity yet fairly accurate representation for the given operating conditions.
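Editor's note: an illustrative sketch, not MRMAide itself, of the underlying sensitivity idea - perturb each input of a model, measure the output spread it induces, and flag low-sensitivity inputs as candidates to replace with distributions or constants in the abstracted model. The stand-in model and variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)


def model(x1, x2, x3):
    """Stand-in high-fidelity model: x3 barely influences the output."""
    return 4.0 * x1 + np.sin(x2) + 0.01 * x3


def output_variance_by_input(model, nominal, spread=0.5, n=2000):
    """Vary one input at a time; the output variance it induces is a simple
    sensitivity measure."""
    sensitivities = {}
    for name, value in nominal.items():
        samples = dict(nominal)
        samples[name] = value + rng.uniform(-spread, spread, size=n)
        outputs = model(**{k: np.asarray(v) for k, v in samples.items()})
        sensitivities[name] = float(np.var(outputs))
    return sensitivities


if __name__ == "__main__":
    nominal = {"x1": 1.0, "x2": 0.5, "x3": 10.0}
    print(output_variance_by_input(model, nominal))
    # x3's tiny induced variance marks it as a candidate to be replaced by a
    # distribution (or a constant) in the abstracted, lower-fidelity model.
```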
High-fidelity sensor modeling in mission-area simulations
Jeffrey T. Carlo, John E. Maher, Joseph E. Schneible
Research in advanced surveillance system concepts is actively being pursued throughout DoD. One flexible and cost-effective way of evaluating the mission effectiveness of these system concepts is through the use of mission-level simulations. This approach enables the warfighter to “test drive” systems in their intended environment, without consuming time and money building and testing prototypes. On the other hand, due to the size and complexity of mission-level simulations, the sensor modeling capability can be limited, to the point that significantly different system designs appear indistinguishable. To overcome this, we have developed a meta-model approach that permits integrating most of the fidelity of radar engineering tools into mission-level simulations without significantly impacting the timeliness of the simulation. In this paper we introduce the SensorCraft concept and the engineering- and mission-level simulation tools that were employed to develop and model the concept. Then we present our meta-model designs and show how they improve the fidelity of the mission-level simulation.
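Editor's note: a minimal sketch, under our own assumptions, of the general meta-model idea - sample an expensive "engineering-level" sensor model offline, fit a cheap surrogate, and call the surrogate at mission-simulation rates. The radar-range relationship below is a toy textbook-style formula, not the authors' model.

```python
import numpy as np


def high_fidelity_pd(rcs, rng_km):
    """Stand-in for an expensive radar engineering model: probability of
    detection as a function of target RCS and range (toy formula)."""
    snr = 1e6 * rcs / (rng_km ** 4)
    return 1.0 - np.exp(-snr)


# 1) Sample the expensive model offline over the expected operating envelope.
rcs_grid, rng_grid = np.meshgrid(np.linspace(0.5, 10, 20),
                                 np.linspace(20, 200, 20))
X = np.column_stack([rcs_grid.ravel(), rng_grid.ravel()])
y = high_fidelity_pd(X[:, 0], X[:, 1])

# 2) Fit a simple polynomial meta-model (quadratic terms plus a cross term).
features = np.column_stack([np.ones(len(X)), X, X ** 2, X[:, :1] * X[:, 1:]])
coeffs, *_ = np.linalg.lstsq(features, y, rcond=None)


def meta_model_pd(rcs, rng_km):
    """Cheap surrogate usable inside the mission-level simulation."""
    f = np.array([1.0, rcs, rng_km, rcs ** 2, rng_km ** 2, rcs * rng_km])
    return float(np.clip(f @ coeffs, 0.0, 1.0))


if __name__ == "__main__":
    # Compare the expensive model and its surrogate at one operating point;
    # a richer surrogate (or lookup table) would narrow the gap further.
    print(high_fidelity_pd(3.0, 80.0), meta_model_pd(3.0, 80.0))
```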
Multiresolution modeling with a JMASS-JWARS high-level architecture (HLA) federation
Gary A. Plotz, John Prince
Traditionally, acquisition analyses require a hierarchical suite of simulation models to address engineering, engagement, mission, and theater/campaign measures of performance, measures of effectiveness, and measures of merit. Configuring and running this suite of simulations and transferring the appropriate data between each model are both time consuming and error prone. The ideal solution would be a single simulation with the requisite resolution and fidelity to perform all four levels of acquisition analysis. However, current computer hardware technologies cannot deliver the runtime performance necessary to support the resulting “extremely large” simulation. One viable alternative is to “integrate” the current hierarchical suite of simulation models using the DoD's High Level Architecture (HLA) in order to support multi-resolution modeling. An HLA integration -- called a federation -- eliminates the problem of “extremely large” models, provides a well-defined and manageable mixed resolution simulation, and minimizes Verification, Validation, and Accreditation (VV&A) issues. This paper describes the process and results of integrating the Joint Modeling and Simulation System (JMASS) and the Joint Warfare System (JWARS) simulations -- two of the Department of Defense's (DoD) next-generation simulations -- using an HLA federation.
Paradigms and Frameworks
An electronic notebook for physical system simulation
A scientist who sets up and runs experiments typically keeps notes of this process in a lab notebook. A scientist who runs computer simulations should be no different. Experiments and simulations both require a set-up process which should be documented along with the results of the experiment or simulation. The documentation is important for knowing and understanding what was attempted, what took place, and how to reproduce it in the future. Modern simulations of physical systems have become more complex due in part to larger computational resources and increased understanding of physical systems. These simulations may be performed by combining the results from multiple computer codes. The machines that these simulations are executed on are often massively parallel/distributed systems. The output result of one of these simulations can be a terabyte of data and can require months of computing. All of these things contribute to the difficulty of keeping a useful record of the process of setting up and executing a simulation for a physical system. An electronic notebook for physical system simulations has been designed to help document the set up and execution process. Much of the documenting is done automatically by the simulation rather than the scientist running the simulation. The simulation knows what codes, data, software libraries, and versions thereof it is drawing together. All of these pieces of information become documented in the electronic notebook. The electronic notebook is designed with and uses the eXtensible Markup Language (XML). XML facilitates the representation, storage, interchange, and further use of the documented information.
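Editor's note: a minimal sketch, with a structure invented here rather than taken from the paper, of the automatic-documentation idea - the simulation itself records which codes, library versions, and input data it drew together as an XML entry for the electronic notebook.

```python
import platform
import sys
import xml.etree.ElementTree as ET
from datetime import datetime, timezone


def record_run(codes, inputs, results_note):
    """Build one notebook entry documenting a simulation run."""
    entry = ET.Element("simulation_run",
                       timestamp=datetime.now(timezone.utc).isoformat())
    ET.SubElement(entry, "environment",
                  python=sys.version.split()[0], host=platform.node())
    for name, version in codes.items():
        ET.SubElement(entry, "code", name=name, version=version)
    for path in inputs:
        ET.SubElement(entry, "input", path=path)
    ET.SubElement(entry, "results").text = results_note
    return ET.tostring(entry, encoding="unicode")


if __name__ == "__main__":
    # Hypothetical codes, versions, and paths purely for illustration.
    print(record_run(codes={"hydro_solver": "2.3.1", "mesh_tool": "1.0"},
                     inputs=["run_042/input.deck"],
                     results_note="completed; output at run_042/out.h5"))
```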
Comparison of number-theoretic and Monte Carlo methods in combat simulation
Number-theoretic methods (NTMs), or quasi-Monte Carlo methods, are a class of techniques for generating points of the uniform distribution in the s-dimensional unit cube. An NTM is a special method that represents a combination of number theory and numerical analysis. The uniformly scattered set of points in the unit cube obtained by an NTM is usually called a set of quasi-random numbers or a number-theoretic net (NT-net), since it may be used instead of random numbers in many statistical problems. An NT-net can be defined as a set of representative points of the uniform distribution. There are different criteria for measuring uniformity and different methods for generating NT-nets. Theoretically, the rate of convergence of an NTM is better than that of the Monte Carlo method. High-resolution force-on-force combat simulation is usually modeled as a stochastic Monte Carlo type model and a discrete event system. In high-resolution Monte Carlo combat simulations, a large amount of random numbers has to be generated. In Monte Carlo type combat simulation models, every unit has certain probabilities for detecting and affecting each enemy unit at each time interval. Usually the Monte Carlo method is used to calculate the expected value of some property of the model; this is a matter of numerical integration with the Monte Carlo method. In this paper the effectiveness of NTMs is compared with the Monte Carlo method in a simulated high-resolution combat case. Some methods for generating NT-nets are introduced. The estimates of the NTM and Monte Carlo simulations are studied by comparing the statistical properties of the estimates.
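Editor's note: a worked sketch, using a toy integrand of our own choosing rather than the combat model, comparing a plain Monte Carlo estimate against a quasi-Monte Carlo estimate built from a Halton-type NT-net on the unit square.

```python
import numpy as np


def van_der_corput(n, base):
    """First n points of the 1-D van der Corput sequence in the given base."""
    seq = np.zeros(n)
    for i in range(1, n + 1):
        f, x, k = 1.0, 0.0, i
        while k > 0:
            f /= base
            x += f * (k % base)
            k //= base
        seq[i - 1] = x
    return seq


def estimate(points):
    """Estimate the integral of f(x, y) = x * y over the unit square."""
    x, y = points[:, 0], points[:, 1]
    return np.mean(x * y)          # exact value is 0.25


if __name__ == "__main__":
    n = 2000
    rng = np.random.default_rng(1)
    mc_points = rng.random((n, 2))
    nt_points = np.column_stack([van_der_corput(n, 2),
                                 van_der_corput(n, 3)])   # 2-D Halton NT-net
    print("exact      :", 0.25)
    print("Monte Carlo:", estimate(mc_points))
    print("NT-net     :", estimate(nt_points))
```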
Automated scenario generation
This paper will discuss automated scenario generation (Sgen) techniques to support the development of simulation scenarios. Current techniques for scenario generation are extremely labor intensive, often requiring manual adjustments to data from numerous sources to support increasingly complex simulations. Due to time constraints, this process often prevents the simulation of a large number of data sets and the preferred level of “what if” analysis. The simulation demands of future mission planning approaches, like Effects Based Operations (EBO), require the rapid development of simulation inputs and multiple simulation runs for those approaches to be effective. This paper will discuss an innovative approach to the automated creation of complete scenarios for mission planning simulation. We will discuss the results of our successful Phase I SBIR effort, which validated our approach to scenario generation and refined how scenario generation technology can be directly applied to the types of problems facing EBO and mission planning. The current stovepipe architecture marries a scenario creation capability to each of the simulation tools. The EBO-Scenario generation toolset breaks that connection through an approach centered on a robust data model and the ability to tie mission-planning tools and data resources directly to an open Course Of Action (COA) analysis framework supporting a number of simulation tools. In this approach, data sources are accessed through XML tools, proprietary DB structures, or legacy tools using SQL, and are stored as an instance of Sgen Meta Data. The Sgen Meta Data can be mapped to a wide range of simulation tools using a Meta Data-to-simulation-tool mapping editor that generates an XSLT template describing the required data translation. Once the mapping is created, Sgen will automatically convert the Meta Data instance, using XSLT, to the formats required by specific simulation tools. The research results presented in this paper will show how the complex demands of mission planning can be met with current simulation tools and technology.
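Editor's note: a minimal sketch of the mapping idea described above, with entirely hypothetical meta-data fields and target formats (plain Python in place of the XSLT path): a single scenario meta-data instance is converted into whatever input each downstream simulation tool expects, so scenarios are authored once rather than per tool.

```python
import xml.etree.ElementTree as ET

scenario_meta = {                       # one Sgen-style meta-data instance
    "name": "strike_vignette_01",
    "entities": [
        {"id": "blue_f16", "side": "blue", "lat": 34.1, "lon": 44.3},
        {"id": "red_sam",  "side": "red",  "lat": 34.4, "lon": 44.0},
    ],
}


def to_tool_a(meta):
    """Tool A (hypothetical) wants a flat keyword text file."""
    lines = [f"SCENARIO {meta['name']}"]
    for e in meta["entities"]:
        lines.append(f"ENTITY {e['id']} {e['side'].upper()} {e['lat']} {e['lon']}")
    return "\n".join(lines)


def to_tool_b(meta):
    """Tool B (hypothetical) wants XML, analogous to an XSLT-produced format."""
    root = ET.Element("scenario", name=meta["name"])
    for e in meta["entities"]:
        ET.SubElement(root, "platform", id=e["id"], side=e["side"],
                      lat=str(e["lat"]), lon=str(e["lon"]))
    return ET.tostring(root, encoding="unicode")


if __name__ == "__main__":
    print(to_tool_a(scenario_meta))
    print(to_tool_b(scenario_meta))
```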
Bridging the gap: simulations meet knowledge bases
Gary W. King, Clayton T. Morrison, David L. Westbrook, et al.
Tapir and Krill are declarative languages for specifying actions and agents, respectively, that can be executed in simulation. As such, they bridge the gap between strictly declarative knowledge bases and strictly executable code. Tapir and Krill components can be combined to produce models of activity which can answer questions about mechanisms and processes using conventional inference methods and simulation. Tapir was used in DARPA's Rapid Knowledge Formation (RKF) project to construct models of military tactics from the Army Field Manual FM3-90. These were then used to build Courses of Action (COAs) which could be critiqued by declarative reasoning or via Monte Carlo simulation. Tapir and Krill can be read and written by non-knowledge engineers, making them an excellent vehicle for Subject Matter Experts to build and critique knowledge bases.
XML-Based Simulation
The dynamic multimodeling exchange language
The web has made it easy to create multimedia content, which is then viewable by the general community at large. By extending multimedia to include the area of modeling, we make it possible to share and process model structures in the same way as the typical web page. For models of the geometric variety, the new X3D (eXtensible 3D) standard will allow sharing and presentation of 3D scene graphs within the web browser. We have created a dynamic model counterpart to X3D, which we call DXL (Dynamics eXchange Language). DXL is a low-level, XML-based language comprising blocks, ports, and connectors. We will define how DXL is used for constructing individual-level models, as well as multimodels over multiple abstraction layers.
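Editor's note: an illustrative sketch, using element names of our own rather than the DXL schema, of reading a tiny block/connector dynamic model from XML and executing one update pass, in the spirit of the exchange format described above.

```python
import xml.etree.ElementTree as ET

MODEL_XML = """
<model>
  <block id="source" kind="constant" value="2.0"/>
  <block id="gain"   kind="gain"     value="3.0"/>
  <connector from="source" to="gain"/>
</model>
"""


def run_once(xml_text):
    """Evaluate source blocks, then propagate values along connectors."""
    root = ET.fromstring(xml_text)
    blocks = {b.get("id"): b for b in root.findall("block")}
    outputs = {}
    for bid, b in blocks.items():
        if b.get("kind") == "constant":
            outputs[bid] = float(b.get("value"))
    for c in root.findall("connector"):
        src, dst = c.get("from"), c.get("to")
        dst_block = blocks[dst]
        if dst_block.get("kind") == "gain":
            outputs[dst] = float(dst_block.get("value")) * outputs[src]
    return outputs


if __name__ == "__main__":
    print(run_once(MODEL_XML))     # {'source': 2.0, 'gain': 6.0}
```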
Enabling model customization and integration
Until fairly recently, the content and presentation of a dynamic model were treated synonymously. For example, if one were to take a data-flow network, which captures the dynamics of a target system in terms of the flow of data through nodal operators, one would often standardize on rectangles and arrows for the model display. The increasing web emphasis on XML, however, suggests that the network model can have its content specified in an XML language and then be presented in a number of ways depending on the chosen style. We have developed a formal method, based on styles, that permits a model to be specified in XML and presented in 1D (text), 2D, and 3D. This method allows customization and personalization to exert their benefits beyond e-commerce, into the area of model structures used in computer simulation. This customization leads naturally to solving the bigger problem of model integration: the act of taking models of a scene and integrating them with that scene so that there is only one unified modeling interface. This work focuses mostly on customization, but we address the integration issue in the future work section.
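A minimal sketch of the separation argued for above: the same model content drives several presentation styles chosen by name. The network, renderers, and style names are invented for illustration and are not the authors' formal style method.

    # Content: a small data-flow network, independent of how it is drawn.
    network = {"nodes": ["source", "filter", "sink"],
               "edges": [("source", "filter"), ("filter", "sink")]}

    def render_text(net):   # a "1D" text presentation
        return "\n".join(f"{a} -> {b}" for a, b in net["edges"])

    def render_dot(net):    # a stand-in for a 2D graph layout
        body = ";".join(f'"{a}"->"{b}"' for a, b in net["edges"])
        return "digraph G {" + body + "}"

    styles = {"text": render_text, "2d": render_dot}
    for name, render in styles.items():
        print(f"--- {name} ---")
        print(render(network))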
Linguistic Geometry
LG-ANALYST: linguistic geometry for master air attack planning
We investigate the technical feasibility of implementing LG-ANALYST, a new software tool based on the Linguistic Geometry (LG) approach. The tool will be capable of modeling and providing solutions to Air Force-related battlefield problems and of conducting multiple experiments to verify the quality of the solutions it generates. LG-ANALYST will support fast generation of the Master Air Attack Plan (MAAP) with subsequent conversion into an Air Tasking Order (ATO). An Air Force mission is modeled employing abstract board games (ABG). Such a mission may include, for example, an aircraft strike package moving to a target area, with the opposing side fielding ground-to-air missiles, anti-aircraft batteries, fighter wings, and radars. The corresponding abstract board captures the 3D airspace, terrain, aircraft trajectories, positions of the batteries, strategic terrain features such as bridges and their status, radars and the space they illuminate, and so on. LG-ANALYST provides various animated views, including a 3D view for realistic representation of the battlespace and a 2D view for ease of analysis and control. LG-ANALYST will allow a user to model a full-scale intelligent enemy, plan in advance, and re-plan and control Blue and Red forces in real time by generating optimal (or near-optimal) strategies for all sides of a conflict.
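One way to picture the abstract board described above is as a 3D grid of cells occupied by pieces for aircraft, missile batteries, and terrain features, with simple legal-move generation per piece. The coordinates, piece types, and move rules below are illustrative assumptions, not the LG-ANALYST data model.

    # Illustrative abstract board: pieces occupy (x, y, altitude) cells.
    pieces = {
        "blue_strike_1": {"pos": (2, 3, 5), "side": "blue", "reach": 1},
        "red_sam_1":     {"pos": (4, 3, 0), "side": "red",  "reach": 3},
    }

    def moves(name, board_size=(10, 10, 8)):
        # Generate single-step moves for one piece, staying inside the board.
        x, y, z = pieces[name]["pos"]
        r = pieces[name]["reach"]
        for dx, dy, dz in [(r, 0, 0), (-r, 0, 0), (0, r, 0), (0, -r, 0), (0, 0, r), (0, 0, -r)]:
            nx, ny, nz = x + dx, y + dy, z + dz
            if 0 <= nx < board_size[0] and 0 <= ny < board_size[1] and 0 <= nz < board_size[2]:
                yield (nx, ny, nz)

    print(list(moves("blue_strike_1")))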
LG hypergames for effects-based operations
Boris Stilman, Vladimir Yakhnis, Oleg Umanskiy, et al.
The Linguistic Geometry (LG) approach provides theoretical foundations for Effects-Based Operations (EBO) concepts such as cascade effects, centers of gravity (COG), and effect inference. This approach is based on the concept of LG hypergames. The first LG hypergame prototype, LG-EBO, developed in 2001, demonstrated LG inference of direct and indirect effects from distant causes, as well as reverse inference of a minimal list of causes from the desired effects. In addition, LG-EBO was the first LG prototype capable of demonstrating deceptive tactics for both sides of a conflict. In this paper we further develop our approach to EBO by giving a comprehensive description of the LG approach, including theory and applications.
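A toy illustration of the two directions of effect inference mentioned above: forward traversal of a cause-effect graph yields direct and indirect effects, while a reverse pass recovers candidate causes for a desired effect. The graph, node names, and traversal are placeholders for the LG hypergame machinery, shown only to make the direction of inference concrete.

    # Directed cause -> effect edges (hypothetical example).
    effects = {
        "strike_power_grid": ["radar_site_offline"],
        "radar_site_offline": ["air_defense_gap"],
        "air_defense_gap": ["corridor_open"],
    }

    def forward(cause, graph):
        """All direct and indirect effects reachable from a cause."""
        frontier, seen = [cause], set()
        while frontier:
            node = frontier.pop()
            for nxt in graph.get(node, []):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
        return seen

    def backward(effect, graph):
        """Causes whose forward closure contains the desired effect."""
        return {c for c in graph if effect in forward(c, graph)}

    print(forward("strike_power_grid", effects))   # direct and indirect effects
    print(backward("corridor_open", effects))      # candidate causes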
LG-GUARD for missile defense and offense
LG-GUARD employs a hierarchy of multi-resolution games (an LG hypergame) to represent various areas of operations at different levels of detail. LG-GUARD includes a full implementation of advanced fire control through dynamic, preemptive control of sensor-to-shooter and shooter-to-target pairing. The greatest advantage of LG-GUARD, however, is fast planning and re-planning based on the Linguistic Geometry (LG) approach. This ability allows LG-GUARD to generate COAs aimed at achieving the commander's intent for the entire operation, rather than simply shooting as many targets as possible at each snapshot of a battle. LG-GUARD operates in two modes. The Planning Mode (long-range planning) enables LG-GUARD to automatically select the best types, quantities, and locations of defensive assets, from the entire area permitted for Blue-side operations, to achieve a given probability of success with as little total opportunity cost as possible. After selection, in the Engagement Mode (short-range planning), LG-GUARD generates the best courses of action for all sides of the most probable operation (which involves the defensive assets selected in the Planning Mode). The capabilities of LG-GUARD are shown in this paper through two kinds of scenarios: those executable now and those to be executable in the near future.
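One simple way to cast the shooter-to-target pairing mentioned above is as an assignment problem. The sketch below uses SciPy's Hungarian-algorithm solver on an invented engagement-value matrix; it is not a description of LG-GUARD's actual fire-control logic, which plans against the commander's intent rather than a single snapshot.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    # Hypothetical engagement values: rows = shooters, columns = targets.
    value = np.array([[0.9, 0.4, 0.1],
                      [0.3, 0.8, 0.5],
                      [0.2, 0.6, 0.7]])

    # The solver minimizes cost, so negate values to maximize total value.
    shooters, targets = linear_sum_assignment(-value)
    for s, t in zip(shooters, targets):
        print(f"shooter {s} -> target {t} (value {value[s, t]:.1f})")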
Simulation in Acquisition
Cost efficiency of simulation-based acquisition
Reduction of risks and acquisition delays is a major issue for procurement services, as it contributes directly to the cost and availability of the system. A new approach, known as simulation-based acquisition (SBA), has been used increasingly in recent years. In this paper, we address the cost-effectiveness of SBA. Using the standard cost estimates familiar to program managers, we first show that the cost overhead of using SBA instead of a "conservative" approach is cancelled and turned into a financial gain as soon as the first unforeseen event arises. We then show that reuse within the SBA of a system-of-systems yields financial gains that effectively provide the design of the encompassing meta-system for free.
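The break-even argument can be illustrated with back-of-the-envelope numbers (all figures invented and in arbitrary cost units): the SBA overhead is recovered the first time an unforeseen problem is caught in simulation rather than late in hardware.

    # Illustrative, invented figures; not from the paper's cost estimates.
    baseline_cost       = 100.0
    sba_overhead        = 8.0    # extra cost of building the simulation environment
    rework_conventional = 25.0   # fixing one unforeseen problem found late, in hardware
    rework_with_sba     = 5.0    # fixing the same problem found early, in simulation

    conventional_total = baseline_cost + rework_conventional
    sba_total          = baseline_cost + sba_overhead + rework_with_sba

    print(f"conventional: {conventional_total}, SBA: {sba_total}")
    print(f"net gain from SBA after one unforeseen event: {conventional_total - sba_total}")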
ModSAF-based development of operational requirements for light armored vehicles
John Rapanotti, Marc Palmarini
Light Armoured Vehicles (LAVs) are being developed to meet modern requirements for rapid deployment and operations other than war. To achieve these requirements, passive armour is minimized and survivability depends more on sensors, computers, countermeasures, and communications to detect and avoid threats. The performance, reliability, and ultimately the cost of these systems will be determined by technology trends and the rates at which they mature. Defining vehicle requirements will depend upon an accurate assessment of these trends over a longer term than was previously needed. Modelling and simulation are being developed to study these long-term trends and how they contribute to establishing vehicle requirements. ModSAF (Modular Semi-Automated Forces) is being developed for research and development, in addition to its original role in Simulation and Modelling for Acquisition, Rehearsal, Requirements and Training (SMARRT), and is becoming useful as a means of transferring technology to other users, researchers, and contractors. This procedure eliminates the need to construct ad hoc models and databases. The integration of various technologies into a Defensive Aids Suite (DAS) can be designed and analyzed by combining field trials and laboratory data with modelling and simulation. ModSAF is used to construct the virtual battlefield and, through scripted input files, a "fixed battle" approach is used to define and implement contributions from three different sources: models of technology and natural phenomena from scientists and engineers, tactics and doctrine from the military, and detailed analyses from operations research. This approach ensures the modelling of processes known to be important regardless of the level of information available about the system. The survivability of DAS-equipped vehicles based on future and foreign technology can be investigated with ModSAF and assessed relative to a test vehicle; a vehicle can be modelled phenomenologically until more information is available. These concepts and the overall approach are discussed in the paper.
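A phenomenological survivability comparison of the kind referred to above can be as simple as a kill chain of conditional probabilities evaluated for a DAS-equipped vehicle and a baseline test vehicle. The probabilities below are invented, and the structure is only a sketch of the approach, not of the ModSAF implementation.

    def p_loss(p_detect, p_hit_given_detect, p_kill_given_hit):
        # Single-engagement loss probability as a simple kill chain.
        return p_detect * p_hit_given_detect * p_kill_given_hit

    # Invented parameters: the DAS is assumed to lower detection and hit probabilities.
    baseline = p_loss(0.8, 0.6, 0.7)
    with_das = p_loss(0.5, 0.3, 0.7)

    print(f"baseline loss probability: {baseline:.2f}")
    print(f"DAS-equipped loss probability: {with_das:.2f}")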
Acquisition-based simulation and modeling for multisensor systems
This paper will describe multi-sensor modeling and how it applies throughout the acquisition process. Since the acquisition process and the consequent modeling and simulation efforts follow from the Operational Requirements Document (ORD), the development of the ORD and the essential information it should contain will also be addressed. In addition, the paper will discuss currently available modeling tools, their individual tradeoffs, and the assumptions that can be made to facilitate modeling where design or operational parameters do not exist.
Enabling Technologies for Effects-Based Operations
Predictive battlespace awareness and effects-based operations from a Homeland Security perspective: a wargaming opportunity
James K. Williams, Zachary P. Hubbard
Effects-Based Operations (EBO) and Predictive Battlespace Awareness (PBA) are intimately linked. Intelligence Preparation of the Battlespace (IPB), the predictive component of PBA, provides a structured analytical process for defining the battlespace environment, describing the battlespace effects that influence all sides, modeling the adversary, and determining likely enemy courses of action (COA). IPB documents some of the necessary elements of EBO, such as centers of gravity, counter-COAs, and indicators. The IPB process has been adapted to Information Operations (IO) through Intelligence Preparation of the Information Battlespace (IPIB), a prototype system for cyber-defense. IPIB ranks enemy cyber-COAs and lists mission-critical network assets that must be defended. It is clear that IPIB can be inverted to develop COAs that implement EBO, and the prototype is being modified for offensive IO. Full-spectrum EBO would combine kinetic, cyber, and cognitive COAs to affect an adversary's behavior. This paper uses a Critical Infrastructure Protection (CIP) scenario to: 1) provide an example of EBO-based PBA for CIP; 2) illustrate the interaction between EBO and PBA; 3) demonstrate the need for a national critical infrastructure vulnerability assessment; and 4) identify why simulation and wargaming are the most viable means of performing such an assessment.
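As a schematic of the COA-ranking step mentioned above, the sketch below scores hypothetical enemy cyber-COAs by likelihood and impact on mission-critical assets. The COAs, scores, and scoring rule are invented for illustration; a real IPIB ranking would use richer criteria.

    # Hypothetical enemy cyber-COAs with assumed likelihood and impact scores (0-1).
    coas = [
        {"name": "ddos_on_c2_gateway",      "likelihood": 0.6, "impact": 0.7},
        {"name": "phishing_admin_accounts", "likelihood": 0.8, "impact": 0.5},
        {"name": "scada_firmware_implant",  "likelihood": 0.2, "impact": 0.9},
    ]

    # Simple risk score used only to order the list.
    for coa in coas:
        coa["risk"] = coa["likelihood"] * coa["impact"]

    for coa in sorted(coas, key=lambda c: c["risk"], reverse=True):
        print(f"{coa['name']}: risk {coa['risk']:.2f}")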