Proceedings Volume 6227

Enabling Technologies for Simulation Science X


Volume Details

Date Published: 4 May 2006
Contents: 6 Sessions, 22 Papers, 0 Presentations
Conference: Defense and Security Symposium 2006
Volume Number: 6227

Table of Contents

  • High-Level Decision Making: Issues and Answers
  • Technologies for Training
  • M&S Frameworks and Architectures
  • M&S S&T Requirements
  • Tools and Techniques for Societal and Human Behavior Modeling
  • Dealing with System Complexity
High-Level Decision Making: Issues and Answers
Theory and methods for supporting high-level decision making
Paul K. Davis, James P. Kahan
High-level decision makers face complex strategic issues, and decision support for such individuals needs to be top-down and to use representations natural to their level and particular styles. Decision support should focus on objectives; uncertainties, which are often both large and deep; risks; and how to do well despite the uncertainties and risks. This implies that decision support should help identify flexible, adaptive, and robust strategies (FAR strategies), not strategies tuned to particular assumptions. Decision support should also have a built-in zoom capability, since decision makers sometimes need to know the underlying basis for assessments in order to review and alter assumptions, and to communicate a concern about details that encourages careful work. These requirements apply to both strategic planning (e.g., force planning in DoD or the Services) and operations planning (e.g., a commander's war planning). This paper discusses how to meet the requirements, as well as implications for further research and enabling technology.
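The FAR-strategy idea lends itself to a small illustration. The Python sketch below (with invented payoff numbers; it is not the authors' tooling) scores candidate strategies across uncertain scenarios and prefers the one with the smallest worst-case regret:

```python
# Minimal exploratory-analysis sketch: pick a flexible/adaptive/robust (FAR)
# strategy by minimax regret across uncertain scenarios. Payoffs are
# illustrative placeholders, not values from the paper.

scenarios = ["benign", "moderate", "severe"]
payoff = {                       # payoff[strategy][scenario]
    "tuned":  {"benign": 10, "moderate": 4, "severe": 0},
    "hedged": {"benign": 7,  "moderate": 6, "severe": 5},
}

best_in = {s: max(p[s] for p in payoff.values()) for s in scenarios}

def max_regret(strategy):
    """Worst-case shortfall versus the best achievable in each scenario."""
    return max(best_in[s] - payoff[strategy][s] for s in scenarios)

robust = min(payoff, key=max_regret)
print(robust, {k: max_regret(k) for k in payoff})   # 'hedged' wins here
```

The strategy tuned to the benign case dominates there but collapses in the severe scenario; the hedged strategy sacrifices peak payoff for robustness, which is the trade the paper argues decision support should surface.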
Technologies for Training
Using GPU-generated virtual video stream for multi-sensor system
Security and intelligence services are increasingly turning toward multi-sensor video surveillance, which requires the human ability to successfully fuse and comprehend the information provided by the videos. A training system that uses the same front end as the real multi-sensor system can significantly increase this ability. Such training systems typically rely on scenarios replicating stressful situations, videotaped in advance and played back later. This not only limits the training scenarios but also carries a high cost. This paper introduces a new framework, a virtual video capture device, for such training systems. Using the latest graphics processing unit (GPU) technology, multiple video streams composed of computer graphics (CG) are generated on one high-end PC and published to a video stream server. Users can thus be trained with both real and virtual video streams on one system. The framework also enables the training system to use real video streams incorporating augmented reality to improve the trainee's situation awareness.
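A minimal sketch of the render-and-publish data path the abstract describes (the frame generator and publish stub below are stand-ins; the paper renders full CG scenes on the GPU):

```python
import numpy as np

def render_frame(t, h=240, w=320):
    """Stand-in for GPU scene rendering: a bright square moving across a
    noisy background (the actual system renders full CG scenes)."""
    frame = (np.random.rand(h, w) * 32).astype(np.uint8)
    x = int((t * 5) % (w - 20))
    frame[100:120, x:x + 20] = 255
    return frame

def publish(stream_id, frame):
    """Stub for pushing a frame to the video stream server."""
    print(f"stream {stream_id}: frame {frame.shape}, mean={frame.mean():.1f}")

for t in range(3):                 # a few frames ...
    for stream_id in range(3):     # ... across three simulated sensors
        publish(stream_id, render_frame(t))
```

Because each virtual stream is published like a real camera feed, the training front end need not distinguish the two, which is the point of the framework.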
Increasing combat realism: the effectiveness of stun belt use on soldiers for the enhancement of live training and testing exercises
Bradley C. Schricker, Christopher Antalek
The ability to make correct decisions while operating in a combat zone enables American and Coalition warfighters to respond better to any threats they encounter, provided that negative training during their live, virtual, and constructive (LVC) training exercises is minimized. Increasing the physical effects experienced by the senses during combat scenarios increases combat realism, a key component in reducing negative training. The use of LVC simulations for training and testing augmentation depends on a number of factors, not the least of which is the accurate representation of the training environment. This is particularly true in the realm of tactical engagement training through the use of Tactical Engagement Simulation Systems (TESS). The training environment is perceived through the human senses, most notably sight and hearing. As with other haptic devices, the sense of touch is gaining traction as a viable medium through which to convey the effects of combat battle damage from the synthetic training environment to participants in a simulated training exercise. New developments in this field promote the safe use of an electronic stun device to indicate to trainees that they have been hit by a projectile, from either direct or indirect fire, in the course of simulated combat. A growing number of examples suggest that this added output medium can greatly enhance the realism of a training exercise and thus improve its training value. This paper serves as a literature survey of the concept, beginning with an explanation of TESS. It then focuses on how the electronic stun effect may be employed within a TESS and details some of the noted pros and cons of the approach. The paper concludes with a description of potential future directions and work.
Tag-n-track system for situation awareness for MOUTs
Rakesh Kumar, Manoj Aggarwal, Thomas E. Germano, et al.
In order to train war fighters for urban warfare, live exercises are held at various Military Operations on Urban Terrain (MOUT) facilities. Commanders need situation awareness (SA) of the entire mock battlefield as well as of the individual actions of the various war fighters. The commanders must be able to provide instant feedback and play through different actions and 'what-if' scenarios with the war fighters. The war fighters, in turn, should be able to review their actions and rehearse different maneuvers. In this paper, we describe the technologies behind a prototype training system that tracks war fighters around an urban site using a combination of ultra-wideband (UWB) Radio Frequency Identification (RFID) and smart video-based tracking. The system is able to: (1) tag each individual with a unique ID using an RFID system, (2) track and locate individuals within the domain of interest, (3) associate IDs with visual appearance derived from live videos, (4) visualize movement and actions of individuals within the context of a 3D model, and (5) store and review activities with (x, y, ID) information associated with each individual. Dynamic acquisition and recording of the precise location of individual troops and units during training greatly aids the analysis of the training sessions, allowing improved review, critique, and instruction.
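Capability (3), associating RFID-derived IDs with anonymous video tracks, can be pictured with a simple nearest-neighbor gating sketch (the readings below are hypothetical; the deployed system's association logic is more sophisticated):

```python
import math

# Hypothetical readings: UWB RFID gives (x, y) per tagged ID; the video
# tracker gives anonymous track positions. Pair each tag with the nearest
# track inside a gating distance.
rfid = {"alpha": (2.0, 3.0), "bravo": (10.0, 1.0)}
tracks = {0: (2.3, 2.8), 1: (9.6, 1.4)}

def associate(rfid, tracks, gate=2.0):
    """Greedy nearest-neighbor association within a gating distance."""
    pairs = {}
    for tag, (tx, ty) in rfid.items():
        tid, d = min(((k, math.hypot(tx - px, ty - py))
                      for k, (px, py) in tracks.items()),
                     key=lambda kv: kv[1])
        if d <= gate:
            pairs[tag] = tid
    return pairs

print(associate(rfid, tracks))     # {'alpha': 0, 'bravo': 1}
```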
M&S Frameworks and Architectures
A framework for adaptive modeling and ontology-driven simulation (FAMOS)
Perakath Benjamin, Michael Graul
This paper describes the motivations, methods, and solution concepts of a novel Framework for Adaptive Modeling and Ontology-driven Simulation (FAMOS). FAMOS uses a hybrid approach that combines ontology and process analysis methods with ontology-driven translation generation techniques to facilitate (1) robust simulation composability analysis and (2) semantic modeling and simulation interoperability. FAMOS provides enabling technology that addresses the technical challenges in three areas: (1) Modeling and Simulation Composability, (2) Semantic Interoperability and Information Sharing, and (3) Model Composition at Multiple Levels of Abstraction. The paper will (1) outline the technical challenges targeted by our research, (2) describe the FAMOS Ontology-driven Simulation Application Integration (OSAI) Method, and (3) introduce the FAMOS solution architecture that provides automated support for the OSAI method.
A framework for modeling and simulation at multiple levels of abstraction
Michael Graul, Perakath Benjamin, Mukul Patki, et al.
This paper identifies and addresses the issues associated with modeling and simulation at multiple levels of abstraction, or multi-resolution modeling (MRM). An extensive literature review was conducted to bring all schools of thought in the area into this research. We begin by outlining the need for MRM and describe the problems encountered when two or more models developed at different resolutions are to be integrated into a single application. These problems can manifest themselves in different ways in the model, depending on the specific phenomenon being modeled. A distinction is made in identifying these manifestations based on whether the underlying model is a process model, such as an IDEF3 model, or an executable simulation model. Heuristic approaches have been developed to assist with different aspects of model composability efforts. Finally, a rule-based approach has been developed to identify any such problems, or abstraction mismatches, that may occur if two models are integrated into a single application. A conceptual description of these rules and their motivation is provided.
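The flavor of rule-based mismatch detection can be conveyed with a toy rule (the model descriptors and thresholds below are invented for illustration, not IDEF3 or the paper's actual rule syntax):

```python
# Toy rule-based check for two abstraction mismatches of the kind the paper
# targets: composed models that disagree badly on temporal resolution or on
# entity aggregation level.

models = {
    "logistics":  {"time_step_s": 3600, "entity_level": "battalion"},
    "engagement": {"time_step_s": 1,    "entity_level": "vehicle"},
}

def check_composition(a, b, max_step_ratio=60):
    issues = []
    ratio = (max(a["time_step_s"], b["time_step_s"])
             / min(a["time_step_s"], b["time_step_s"]))
    if ratio > max_step_ratio:
        issues.append(f"temporal resolution mismatch (ratio {ratio:.0f}x)")
    if a["entity_level"] != b["entity_level"]:
        issues.append("entity aggregation mismatch: "
                      f"{a['entity_level']} vs {b['entity_level']}")
    return issues

print(check_composition(models["logistics"], models["engagement"]))
```

Flagging such mismatches before integration is what lets the composability analysis run ahead of (rather than during) federation debugging.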
Security simulation for vulnerability assessment
Brian Hennessey, Bradley Norman, Robert B. Wesson
This paper discusses simulation technologies developed to "stimulate" an operational command and control security system. It describes simulation techniques used to create a virtual model of a facility in which to conduct vulnerability assessment exercises, performance benchmarking, CONOPS development, and operator training, including the specific techniques used for creating a 3D virtual environment and simulating streaming IP surveillance cameras and motion detection sensors. In addition, the paper discusses advanced scenario creation techniques and the modeling of scenario entities, including vehicles, aircraft, and personnel. The paper draws parallels with lessons learned in using Air Traffic Control simulators for operator training, incident recreation, procedure development, and pre-acquisition planning and testing.
Transitioning the DSAP infrastructure to a web service environment
RAM Laboratories and AFRL are developing a software infrastructure to provide a Dynamic Situation Assessment and Prediction (DSAP) capability through an embedded simulation infrastructure that can be linked to real-time Command, Control, Communications, Computers, Intelligence, Surveillance, and Reconnaissance (C4ISR) sensors and systems and to Command and Control (C2) activities. The resulting capabilities will allow commanders to evaluate and analyze Courses of Action and potential alternatives through real-time and faster-than-real-time simulation by executing multiple plans simultaneously across a computing grid. In order to support users in a distributed C2 operational capacity, the DSAP infrastructure is being web-enabled to support net-centric services and common data formats and specifications that will allow it to serve users on the Global Information Grid. This paper reviews DSAP and its underlying Multiple Replication Framework architecture and discusses the steps that must be taken for it to participate in a Service-Oriented Architecture.
Effectiveness measurements and state estimation simulation for DSAP
For Air Operations Centers, there is a need to provide Commanders and their staff with real-time, up-to-the-second information regarding Red-Force, Blue-Force, and neutral-force status and positioning. These updates of the real-time picture provide Command Staff with dynamic situational awareness of their operations while considering current and future Courses of Action (COAs). A key shortfall in current capability is that intelligence, surveillance, and reconnaissance (ISR) sensors, electronic intelligence, and human intelligence only provide a snapshot of the operational world from "observable" inputs. While useful, this information only provides a subset of the entire real-time picture. To provide this "missing" information, techniques are required to estimate the state of Red, Blue, and neutral force assets and resources. One such technique for providing this "state" information is to use operationally focused simulation to estimate the unobservable data. RAM Laboratories and the Air Force Research Laboratory's Information Systems Research Branch are developing a Dynamic Situation Assessment and Prediction (DSAP) Software Framework that, in part, uses embedded real-time simulation in this manner. This paper examines enhancements made to the DSAP infrastructure's Multiple Replication Framework (MRF) and reviews extensions made to provide estimated state information via calibrated real-time simulation. This paper also provides an overview of the Effectiveness Metrics that can be used to evaluate plan effectiveness with respect to the real-time inputs, the simulated plan, and user objectives.
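One way to picture the state-estimation idea: when an ISR track goes stale, a calibrated motion model fills the gap until the next observation. The dead-reckoning sketch below is a deliberately simple stand-in for the MRF's replicated simulations, with invented values:

```python
# Illustrative sketch of using simulation to fill in "unobservable" state:
# propagate the last observation of a track with a simple motion model and
# report uncertainty growing with time since last contact.

def estimate_state(last_obs, now, speed_sigma=5.0):
    """Dead-reckon position from the last observation; position uncertainty
    grows linearly with the time since last contact."""
    dt = now - last_obs["t"]
    x = last_obs["x"] + last_obs["vx"] * dt
    y = last_obs["y"] + last_obs["vy"] * dt
    return {"x": x, "y": y, "pos_sigma": speed_sigma * dt}

red_track = {"t": 100.0, "x": 1000.0, "y": 2000.0, "vx": 12.0, "vy": -3.0}
print(estimate_state(red_track, now=160.0))
```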
Polymorphic collaboration in the global grid
Next generation collaborative systems must be able to represent the same information in different forms on a broad spectrum of devices and resources, from low-end personal digital assistants (PDAs) to high performance computers (HPCs). Users might be on a desktop, then switch to a laptop, and then to a PDA while accessing the global grid. The user preference profile for a collaboration session should be capable of moving with them as well as being automatically adjusted for the device type. Collaborative systems must be capable of representing the same information in many forms, for different domains and on many devices, and thus be polymorphic. Polymorphic collaboration will provide the ability for multiple heterogeneous resources (human to human, human to machine, and machine to machine) to share information and activities; the ability to regulate collaborative sessions based on client characteristics and needs; the ability to reuse user profiles, tool category choices, and settings in future collaboration sessions by the same or different users; and the ability to use intelligent agents to assist collaborative systems in learning user/resource preferences and behaviors and to autonomously derive optimal information to provide to users and decision makers. This paper discusses ongoing research in next-generation collaborative environments with the goal of making electronic collaboration as easy to use as the telephone - collaboration at the touch of the screen.
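A small sketch of the device-adaptation idea: one stored user profile, with per-session settings derived from the client device's capabilities (device classes and settings below are invented for illustration):

```python
# One persistent profile; session settings are recomputed per device class,
# so the same preferences "follow" the user from HPC to desktop to PDA.

profile = {"user": "analyst7", "prefers": {"video": True, "whiteboard": True}}

device_caps = {
    "hpc":     {"max_streams": 16, "resolution": "1080p"},
    "desktop": {"max_streams": 4,  "resolution": "720p"},
    "pda":     {"max_streams": 1,  "resolution": "240p"},
}

def session_settings(profile, device):
    caps = device_caps[device]
    return {
        "user": profile["user"],
        "video": profile["prefers"]["video"] and caps["max_streams"] > 0,
        "whiteboard": profile["prefers"]["whiteboard"] and device != "pda",
        "resolution": caps["resolution"],
    }

for dev in device_caps:
    print(dev, session_settings(profile, dev))
```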
M&S S&T Requirements
High-priority areas for army battle command (BC)-related modeling and simulation (M&S) science and technology (S&T)
William S. Murphy Jr., Joe Foreman, Dean S. Hartley III, et al.
The Battle Command, Simulation, and Experimentation (BCSE) Directorate in the Army's Office of the Deputy Chief of Staff, G-3/5/7, has taken the initiative to identify high-priority options for Battle Command (BC)-related modeling and simulation (M&S) science and technology (S&T). In an earlier paper, we described the methodology developed to identify those options (Ref 1). This paper summarizes the insights developed through the application of that methodology. It identifies and describes the high-priority M&S S&T options in the following categories: modeling methodology, development methodology, computational capability, and data/information understanding. For each of the thirteen M&S S&T options in those categories, the activity is described, sources for identifying needs are cited, baseline activities are described, relationships among the M&S S&T areas are identified, and specific initiatives are proposed. The paper concludes by identifying alternative M&S S&T packages consistent with selected priority criteria. The results of this analysis will be used to convey investment priorities to the Deputy Assistant Secretary of the Army for Research & Technology.
Development of metrics to assess command, control, and communications (C3) performance
Martin R. Stytz, Sheila B. Banks
The US military is undertaking an uncertain and far-reaching transformation in its adoption of a network-centric operational philosophy. This transformation will maximize the military's reliance upon data superiority and decision superiority. However, we have yet to develop the doctrine and data insights needed to fully exploit the network-centric capabilities being developed. We require a means for assessing the performance impact of tradeoffs in computational power and network bandwidth. These network and computational management assessments would allow us to address issues associated with network-centric operational needs and ensure that the right data reaches the right user at the right time. The research that we report addresses the development of metrics to assess the impact of policy choices and uncover the requirements for effective network-centric operations.
Federated executable architecture technology as an enabling technology for simulation of large systems
Gregory A. Harrison, Russell Hutt, Howard S. Kern, et al.
Federated Executable Architecture Technology (FEAT) is an enabling technology supporting the construction, execution, and analysis of application-oriented simulations. Using HLA to link distributed simulations allows a hybrid system to be created that best simulates a particular scenario. Under the control of a top-level executable architecture representation of a particular scenario or application, the various models of entities in the system interoperate to test, verify, and validate the static architecture of the system. Various resolutions of models may be used throughout to provide appropriate simulation of doctrine and decisions as well as of entities in the scenario. Messaging in the scenario between the entities and the decision swimlanes is modeled in appropriate federated networking simulators. As a whole, the ability to bring a static architecture to life through simulation allows optimization of the doctrine reflected in the architecture. We present the results of applying FEAT to the simulation of a large-scale training exercise and show how it can be used to enhance the integration and composition of training events.
Tools and Techniques for Societal and Human Behavior Modeling
A qualitative multiresolution model for counterterrorism
This paper describes a prototype model for exploring counterterrorism issues related to the recruiting effectiveness of organizations such as al Qaeda. The prototype demonstrates how a model can be built using qualitative input variables appropriate to representation of social-science knowledge, and how a multiresolution design can allow a user to think and operate at several levels - such as first conducting low-resolution exploratory analysis and then zooming into several layers of detail. The prototype also motivates and introduces a variety of nonlinear mathematical methods for representing how certain influences combine. This has value for, e.g., representing collapse phenomena underlying some theories of victory, and for explanations of historical results. The methodology is believed to be suitable for more extensive system modeling of terrorism and counterterrorism.
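One family of nonlinear combining rules the abstract alludes to, thresholded "weakest-link" aggregation of qualitative factors, can be sketched as follows (factor names, weights, and the combining rule are illustrative, not the prototype's actual model):

```python
# Sketch of one nonlinear combining rule for qualitative factors scored 0-10:
# a weighted average capped near the weakest critical factor, so collapse of
# any critical component collapses the combined result (a "collapse
# phenomenon" rather than a linear blend).

def combine(factors, weights, critical, floor_scale=0.15):
    linear = (sum(factors[k] * weights[k] for k in factors)
              / sum(weights.values()))
    weakest = min(factors[k] for k in critical)
    return min(linear, weakest + floor_scale * linear)

factors = {"ideology": 8, "funding": 7, "leadership": 2, "sanctuary": 6}
weights = {"ideology": 2, "funding": 1, "leadership": 2, "sanctuary": 1}
print(combine(factors, weights, critical=["leadership", "funding"]))
```

With a linear average the weak leadership score would be washed out (5.5); the nonlinear rule drags the result down toward it (about 2.8), which is the qualitative behavior at issue.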
A simulated force generator with an adaptive command structure
The Force Laydown Automated Generator (FLAG) is a script-driven behavior model that automatically creates military formations, from the platoon level up to the division level, for use in simulations built on the FLAMES simulation framework. The script allows users to define formation command structure, command relationships, vehicle types and equipment, and behaviors. We have used it to automatically generate more than 3000 units in a single simulation. Currently, FLAG is used in the Air Force Research Laboratory Munitions Directorate (AFRL/MN) to assist its Comprehensive Analysis Process (CAP), producing a reasonable threat laydown of red forces for testing blue concept weapons. Our success in applying FLAG leads us to believe that it offers great potential for use in training environments and other applications that need a large number of reactive, adaptive forces - red or blue.
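The script-driven expansion can be pictured with a toy order-of-battle script (field names below are invented; FLAG's actual scripts are FLAMES-specific):

```python
# Expand a declarative order-of-battle script into concrete units, in the
# spirit of FLAG: counts at each echelon multiply out into the full laydown.

script = {
    "name": "1st Division", "echelon": "division",
    "children": [
        {"name": "Brigade", "echelon": "brigade", "count": 3,
         "children": [{"name": "Tank Platoon", "echelon": "platoon",
                       "count": 4, "vehicle": "T-72"}]},
    ],
}

def expand(node, parent=None):
    units = []
    for i in range(node.get("count", 1)):
        name = (f"{node['name']} #{i + 1}"
                if node.get("count", 1) > 1 else node["name"])
        units.append({"name": name, "echelon": node["echelon"],
                      "commander": parent, "vehicle": node.get("vehicle")})
        for child in node.get("children", []):
            units.extend(expand(child, parent=name))
    return units

print(len(expand(script)), "units generated")   # 1 + 3 + 12 = 16
```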
Using GOMS and Bayesian plan recognition to develop recognition models of operator behavior
Jack D. Zaientz, Elyon DeKoven, Nicholas Piegdon, et al.
Trends in combat technology research point to an increasing role for uninhabited vehicles in modern warfare tactics. To support an increased span of control over these vehicles, human responsibilities need to be transformed from tedious, error-prone, and cognition-intensive operations into tasks that are more supervisory and manageable, even under intensely stressful conditions. The goal is to move away from only supporting human command of low-level system functions toward intention-level human-system dialogue about the operator's tasks and situation. A critical element of this process is developing the means to identify when human operators need automated assistance and what assistance they need. Toward this goal, we are developing an unmanned vehicle operator task recognition system that combines work in human behavior modeling and Bayesian plan recognition. Traditionally, human behavior models have been considered generative, meaning they describe all possible valid behaviors. Basing behavior recognition on models designed for behavior generation can offer advantages in improved model fidelity and reuse. It is not clear, however, how to reconcile the structural differences between behavior recognition and behavior modeling approaches. Our current work demonstrates that by pairing a human behavior modeling approach derived from cognitive psychology, GOMS, with a Bayesian plan recognition engine, ASPRN, we can translate a behavior generation model into a recognition model. We discuss the implications of using human performance models in this manner and suggest how this kind of modeling may be used to support the real-time control of multiple uninhabited battlefield vehicles and other semi-autonomous systems.
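The generation-to-recognition translation can be sketched in miniature: reduce each GOMS-style method to the operators it would emit, then update a posterior over methods as operator actions are observed (the method contents and noise model below are invented; ASPRN itself is considerably richer):

```python
# Recognition over generation-oriented models: each GOMS-style method is
# summarized by the operators it would generate, and a naive Bayesian update
# scores which method best explains the observed action stream.

methods = {                       # method -> operators it plausibly emits
    "reroute_uav": ["open_map", "select_uav", "drag_waypoint"],
    "handoff_uav": ["select_uav", "open_comms", "assign_operator"],
}
priors = {"reroute_uav": 0.5, "handoff_uav": 0.5}

def posterior(observed, p_match=0.8, p_noise=0.05):
    post = dict(priors)
    for action in observed:
        for m in post:
            post[m] *= p_match if action in methods[m] else p_noise
    z = sum(post.values())
    return {m: v / z for m, v in post.items()}

print(posterior(["select_uav", "drag_waypoint"]))   # strongly favors reroute
```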
Foresight for commanders: a methodology to assist planning for effects-based operations
Paul K. Davis, James P. Kahan
Looking at the battlespace as a system of systems is a cornerstone of Effects-Based Operations, a key element in the planning of such operations, and central to developing the Commander's Predictive Environment. Instead of a physical battleground to be approached with weapons of force, the battlespace is an interrelated super-system of political, military, economic, social, information, and infrastructure systems to be approached with diplomatic, informational, military, and economic actions. A concept that has proved useful in policy arenas other than defense, such as research and development for information technology, addressing cybercrime, and providing appropriate and cost-effective health care, is foresight. In this paper, we provide an overview of how the foresight approach addresses the inherent uncertainties in planning courses of action, present a set of steps in the conduct of foresight, and then illustrate the application of foresight to a commander's decision problem. We conclude that the foresight approach we describe is consistent with current doctrinal intelligence preparation of the battlespace and operational planning, but represents an advance in that it explicitly addresses the uncertainties in the environment and in planning, in a way that identifies strategies that are robust over different possible ground truths. It should supplement other planning methods.
Dealing with System Complexity
Visual unified modeling language for the composition of scenarios in modeling and simulation systems
Michael L. Talbert, Daniel E. Swayne
The Department of Defense uses modeling and simulation systems in a variety of roles, from research and training to modeling the likely outcomes of command decisions. Simulation systems have been increasing in complexity as low-cost computer systems have become more capable of supporting these DOD requirements. The demand for scenarios is also increasing, but the complexity of the simulation systems has created a bottleneck in scenario development due to the limited number of individuals with knowledge of the arcane simulator languages in which these scenarios are written. This research combines the results of previous efforts at the Air Force Institute of Technology in visual modeling languages to create a language that unifies the description of the entities within a scenario with their behavior, using a visual tool developed in the course of this research. The resulting language has a grammar and syntax that can be parsed from the visual representation of the scenario. The language is designed so that scenarios can be described in a generic manner, not tied to a specific simulation system, allowing the future development of modules that translate the generic scenario into simulation-system-specific scenarios.
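The "generic scenario, per-simulator translation" idea in miniature (both target formats below are invented; the paper's language is parsed from a visual representation rather than a Python dict):

```python
# One neutral scenario description, translated by small backend functions
# into the text formats two hypothetical simulators expect.

scenario = {"entities": [
    {"id": "tank1", "type": "tank", "pos": (10.0, 20.0), "behavior": "patrol"},
    {"id": "uav1",  "type": "uav",  "pos": (0.0, 5.0),   "behavior": "orbit"},
]}

def to_sim_a(sc):
    """Keyword-style script for hypothetical simulator A."""
    return "\n".join(f"SPAWN {e['type'].upper()} {e['id']} "
                     f"AT {e['pos'][0]} {e['pos'][1]} DO {e['behavior']}"
                     for e in sc["entities"])

def to_sim_b(sc):
    """CSV-style scenario file for hypothetical simulator B."""
    return "\n".join(f"{e['id']},{e['type']},{e['pos'][0]},"
                     f"{e['pos'][1]},{e['behavior']}"
                     for e in sc["entities"])

print(to_sim_a(scenario))
print()
print(to_sim_b(scenario))
```

Keeping the scenario description neutral means adding a new simulator costs one translation module, not a rewrite of every scenario.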
Data modeling predictive control theory for deriving real-time models from simulations
Holger Jaenisch, James Handley, Mike Hicklen, et al.
This paper presents the mathematical framework and procedure for extracting differential-equation-based models from high-fidelity real-time and non-real-time models for use in hyper-real-time simulation. Our approach captures a series of input/output scenario frames and derives analytical transfer function models from these examples. The result is a coupled set of differential equations that are integrated in real time or analytically solved into polynomial form for a Volterra-type solution in real time. The resulting model numerically yields the same answer on training inputs as the model it was derived from, and yields nonlinearly interpolated transfer functions in frequency space for off-nominal cases. Since the upper and lower error bounds and their variance are predictable, the derived model can maintain accreditation without implicit caveats. This allows the derived model to be executed in freeform when departures from intended uses are necessary but accreditation boundaries must not be violated.
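The core fitting step can be pictured with a one-dimensional stand-in: capture input/output samples from a source model and fit an analytic surrogate by least squares. The paper derives coupled differential equations; the sketch below uses a simple polynomial and an invented source model:

```python
import numpy as np

# Capture input/output frames from a stand-in "high-fidelity" model, fit a
# polynomial surrogate, and check its error on off-nominal inputs.

rng = np.random.default_rng(0)

def source_model(u):                          # stand-in high-fidelity model
    return np.sin(1.5 * u) + 0.3 * u**2

u_train = np.linspace(0.0, 2.0, 40)           # captured input scenarios
y_train = source_model(u_train)               # captured outputs

coeffs = np.polyfit(u_train, y_train, deg=5)  # least-squares polynomial fit
surrogate = np.poly1d(coeffs)

u_test = rng.uniform(0.0, 2.0, 5)             # off-nominal queries
err = np.abs(surrogate(u_test) - source_model(u_test))
print("max interpolation error:", err.max())  # small, bounded, predictable
```

The predictable error bound is what lets a derived model of this kind make claims about staying within accreditation limits off-nominal.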
Verifying end-to-end system performance with the transformational information extraction model
In the intelligence community, the volume of imagery data threatens to overwhelm the traditional process of information extraction. Satellite systems are capable of producing large quantities of imagery data every day. Traditionally, intelligence analysts have the arduous task of manually reviewing satellite imagery data and generating information products. In a time of increasing imagery data, this manual approach is not consistent with the goal of a timely and highly responsive system. These realities are key factors in Booz Allen Hamilton's transformational approach to information extraction. This approach employs information services and value-added processes (VAPs) to reduce the amount of data being manually reviewed. Booz Allen has used a specialization/generalization hierarchy to aggregate hundreds of thousands of imagery intelligence needs into sixteen information services. Information services are automated by employing value-added processes, which extract the information from the imagery data and generate information products. While the intelligence needs and information services remain relatively static in time, the VAPs can evolve rapidly with advancing technologies. The Booz Allen Transformational Information Extraction Model validates this automated approach by simulating realistic system parameters. The functional flow model includes image formation, three information services, six VAPs, and reduced manual intervention. Adjustable model variables for VAP time, VAP confidence, number of intelligence analysts, and time for analyst review provide a flexible framework for modeling different system cases. End-to-end system metrics such as intelligence-need satisfaction, end-to-end timeliness, and sensitivity to the number of analysts and to the VAP variables quantify the system performance.
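A back-of-envelope version of the functional-flow arithmetic (all rates and times below are adjustable placeholders, not the model's calibrated values):

```python
# Images pass through a value-added process (VAP); only low-confidence
# products fall back to manual analyst review. Analysts work in parallel.

def end_to_end(n_images, vap_time_s, vap_confidence,
               n_analysts, review_time_s):
    auto = n_images * vap_confidence          # products accepted automatically
    manual = n_images - auto                  # products needing review
    vap_total = n_images * vap_time_s
    review_total = manual * review_time_s / n_analysts
    return {"auto": auto, "manual": manual,
            "timeliness_s": vap_total + review_total}

print(end_to_end(n_images=10_000, vap_time_s=2.0, vap_confidence=0.85,
                 n_analysts=12, review_time_s=300.0))
```

Varying vap_confidence and n_analysts in a model of this shape is exactly the sensitivity analysis the abstract describes, just without the full queueing detail.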
Migrating modeling and simulation applications on to high performance computers
Mark D. Barnell, Brian J. Rahn
The modeling and simulation efforts within the Air Force Research Lab (AFRL) Sensors Directorate (SN) and Information Directorate (IF) have become increasingly complex. In order to develop, test, and analyze surveillance assets, complex simulations and Matlab tools are developed to provide a better understanding of the environment. The increasing amount of primary and secondary data used to emulate the real world further increases the demands on the systems used to simulate the environment. One option for obtaining the computer resources needed to solve the problem is to leverage High Performance Computers (HPCs). Within AFRL several approaches have been taken; we focus on two: MatlabMPI and STAR-P. Both applications leverage Matlab and give the user a familiar interface while expanding the computational limits of the user's desktop and creating an environment that will run on an HPC. In the world of parallel computing, the Message Passing Interface (MPI) is the de facto standard for implementing programs on multiple processors. MPI defines C, C++, and Fortran language functions for doing point-to-point communication in a parallel program. MatlabMPI is a set of Matlab scripts that implement a subset of MPI and allow any Matlab program to be run on a parallel computer. STAR-P currently has a Matlab interface that combines four parallel approaches in one environment: embarrassingly parallel, message passing, backend support, and compilation. Where possible, STAR-P leverages established parallel numerical libraries to perform numerical computation, integrating this wide range of linear algebra and other routines seamlessly with Matlab. The end result gives the user a familiar interface, Matlab, and the ability to use HPC resources (CPUs and memory).
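MatlabMPI mirrors MPI's point-to-point calls (send/receive by rank and tag) inside Matlab scripts. The same pattern is shown below in Python with mpi4py, for a self-contained sketch rather than the actual MatlabMPI syntax; run it under, e.g., `mpiexec -n 2 python sketch.py`:

```python
# Point-to-point MPI pattern of the kind MatlabMPI exposes to Matlab code:
# rank 0 sends work to rank 1 and receives a partial result back.

from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    chunk = list(range(1000))            # data a script would scatter to workers
    comm.send(chunk, dest=1, tag=11)     # point-to-point send to the worker
    partial = comm.recv(source=1, tag=22)
    print("sum from worker:", partial)
elif rank == 1:
    chunk = comm.recv(source=0, tag=11)  # receive work from rank 0
    comm.send(sum(chunk), dest=0, tag=22)
```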
Accelerated modeling and simulation with a desktop supercomputer
Eric J. Kelmelis, John R. Humphrey, James P. Durbano, et al.
The performance of modeling and simulation tools is inherently tied to the platform on which they are implemented. In most cases, this platform is a microprocessor, whether in a desktop PC, PC cluster, or supercomputer. Microprocessors are used because of their familiarity to developers, not necessarily their applicability to the problems of interest. We have developed the underlying techniques and technologies to produce supercomputer performance from a standard desktop workstation for modeling and simulation applications. This is accomplished through the combined use of graphics processing units (GPUs), field-programmable gate arrays (FPGAs), and standard microprocessors. Each of these platforms has unique strengths and weaknesses but, when used in concert, they can rival the computational power of a high-performance computer (HPC). By adding a powerful GPU and our custom-designed FPGA card to a commodity desktop PC, we have created simulation tools capable of replacing massive computer clusters with a single workstation. We present this work in its initial embodiment: simulators for electromagnetic wave propagation and interaction. We discuss the trade-offs of each independent technology, GPUs, FPGAs, and microprocessors, and how we efficiently partition algorithms to take advantage of the strengths of each while masking their weaknesses. We conclude by discussing how to enhance the computational performance of the underlying desktop supercomputer and extend it to other application areas.