Proceedings Volume 4026

Enabling Technology for Simulation Science IV


View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 23 June 2000
Contents: 9 Sessions, 29 Papers, 0 Presentations
Conference: AeroSense 2000
Volume Number: 4026

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Multiresolution Models, Families of Models, and Their Relationship to Exploratory Analysis
  • Advanced Visualization for Modeling and Simulation
  • High-Level Architecture (HLA)
  • Reuse Architectures and Repositories
  • Model Abstraction Techniques
  • Model Abstraction Applications
  • Collaboration Techniques and Applications
  • Simulation Paradigms
  • Cognitive Process Modeling
  • Collaboration Techniques and Applications
  • Model Abstraction Techniques
Multiresolution Models, Families of Models, and Their Relationship to Exploratory Analysis
Multiresolution multiperspective modeling (MRMPM) as an enabler of exploratory analysis
Exploratory analysis examines the consequences of uncertainty--not merely by standard sensitivity methods, but more comprehensively. It is particularly useful for gaining a broad understanding of a problem domain before dipping into details. Although exploratory analysis can be accomplished with models of many types, it is facilitated by multiresolution, multiperspective modeling (MRMPM) structures. Moreover, a knowledge of related design principles facilitates the characterization of more normal models in terms that permit exploratory analysis. This paper describes the connections and notes that, with current and emerging personal computer tools, MRMPM methods are becoming practical.
Implementing multiresolution models and families of models: from entity-level simulation to desktop stochastic models and "repro" models
Jimmie McEver, Paul K. Davis, James H. Bigelow
We have developed and used families of multiresolution and multiple-perspective models (MRM and MRMPM), both in our substantive analytic work for the Department of Defense and to learn more about how such models can be designed and implemented. This paper is a brief case history of our experience with a particular family of models addressing the use of precision fires in interdicting and halting an invading army. Our models were implemented as closed-form analytic solutions, in spreadsheets, and in the more sophisticated Analytica™ environment. We also drew on an entity-level simulation for data. The paper reviews the importance of certain key attributes of development environments (visual modeling, interactive languages, friendly use of array mathematics, facilities for experimental design and configuration control, statistical analysis tools, graphical visualization tools, interactive post-processing, and relational database tools). These can go a long way towards facilitating MRMPM work, but many of these attributes are not yet widely available (or available at all) in commercial model-development tools--especially for use with personal computers. We conclude with some lessons learned from our experience.
Case history of using entity-level simulation as imperfect experimental data for informing and calibrating simpler analytical models for interdiction
James H. Bigelow, Paul K. Davis, Jimmie McEver
We have used detailed, entity-level models to simulate the effects of long-range precision fires employed against an invader. Results show that these fires are much less effective against dispersed formations marching through mixed terrain than against dense formations in open terrain. We expected some loss of effectiveness, but not as much as observed. So we built a low resolution model (PEM, or PGM Effectiveness Model) and calibrated it to the results of the detailed simulation. PEM explains analytically how various situational and tactical factors, which are usually treated only in complex models, can influence the effectiveness of these fires. The variables we consider are characteristics of the C4ISR system (e.g., time of last update), missile and weapon characteristics (e.g., footprint), maneuver pattern of the advancing column (e.g., vehicle spacing), and aggregate terrain features (e.g., open versus mixed terrain).
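As an illustration of the calibration step this abstract describes, the sketch below fits a simple low-resolution effectiveness model to (synthetic stand-ins for) entity-level simulation results by least squares. The functional form, variable names, and data are illustrative assumptions, not the authors' actual PEM.

```python
# Hypothetical sketch: calibrating a simple low-resolution effectiveness model to
# entity-level simulation results. The functional form is an assumption, not the
# authors' PEM.
import numpy as np
from scipy.optimize import curve_fit

def kills_per_salvo(X, k0, tau, s0):
    """Toy effectiveness model.

    X rows: (t_update, spacing, open_frac)
      t_update  - age of the last C4ISR track update (s)
      spacing   - vehicle spacing in the column (m)
      open_frac - fraction of the route in open terrain
    """
    t_update, spacing, open_frac = X
    c4isr_degrade = np.exp(-t_update / tau)        # stale tracks reduce effectiveness
    density_factor = s0 / (s0 + spacing)           # dispersed columns offer fewer targets
    terrain_factor = 0.3 + 0.7 * open_frac         # mixed terrain masks part of the column
    return k0 * c4isr_degrade * density_factor * terrain_factor

# Placeholder "simulation results" (synthetic, for demonstration only).
rng = np.random.default_rng(0)
X = np.array([rng.uniform(10, 300, 40),            # t_update
              rng.uniform(20, 200, 40),            # spacing
              rng.uniform(0.0, 1.0, 40)])          # open_frac
y = kills_per_salvo(X, 12.0, 120.0, 60.0) + rng.normal(0, 0.3, 40)

params, _ = curve_fit(kills_per_salvo, X, y, p0=[10.0, 100.0, 50.0])
print("calibrated (k0, tau, s0):", params)
```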
Clustering methods for multiresolution simulation modeling
Christos G. Cassandras, Christakis G. Panayiotou, Gregory Diehl, et al.
Simulation modeling of complex systems has received increasing research attention in recent years. In this paper, we discuss the basic concepts involved in multiresolution simulation modeling of complex stochastic systems. We argue that, in many cases, using the average over all available high-resolution simulation results as the input to subsequent low-resolution modules is inappropriate and may lead to erroneous final results. Instead, high-resolution output data should be classified into groups that match underlying patterns or features of the system behavior before sending group averages to the low-resolution modules. We propose high-dimensional data clustering as a key interfacing component between simulation modules with different resolutions and use unsupervised learning schemes to recover the patterns in the high-resolution simulation results. We give some examples to demonstrate our proposed scheme.
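A minimal sketch of the interfacing idea, assuming k-means as a stand-in for the unsupervised learning scheme (the abstract does not prescribe a specific algorithm): cluster the high-resolution outputs and pass weighted cluster means to the low-resolution module rather than one global average.

```python
# Sketch: cluster high-resolution outputs before passing averages downstream.
# KMeans and the synthetic data are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Synthetic high-resolution results with two distinct behavioral regimes.
high_res = np.vstack([rng.normal([2.0, 5.0], 0.2, size=(200, 2)),
                      rng.normal([8.0, 1.0], 0.2, size=(200, 2))])

global_mean = high_res.mean(axis=0)            # conventional, potentially misleading input

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(high_res)
cluster_means = km.cluster_centers_            # one representative input per regime
cluster_weights = np.bincount(km.labels_) / len(high_res)

def low_res_module(x):
    """Placeholder low-resolution model (nonlinear, so averaging first is not harmless)."""
    return x[0] ** 2 + np.sin(x[1])

naive = low_res_module(global_mean)
clustered = sum(w * low_res_module(m) for w, m in zip(cluster_weights, cluster_means))
print(f"global-average input: {naive:.2f}   cluster-weighted: {clustered:.2f}")
```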
Advanced Visualization for Modeling and Simulation
Reconfigurable simulation visualizer
Jason A. Moore, Chad Salisbury
In the past, visualization systems have been constructed to work with a single application or venue. By dismissing that notion, a visualizer can be created that is flexible enough to be configured to the user's viewing requirements just prior to execution. Going one step further, requirements that change during execution can also be addressed by this visualizer as the analysis of a simulation progresses. Allowing users to reconfigure their visualization style, mode, and information provides a more flexible method of optical analysis than previously possible.
Use of 3D metaphor in programming
John F. Hopkins, Paul A. Fishwick
The use of metaphor in programming can be a powerful aid to the programmer, inasmuch as it provides concrete properties to abstract ideas. In turn, these concrete properties can aid recognition of, and reasoning about, programming problems. Another potential benefit of the use of metaphor in programming is the improvement of mental retention of facts and solutions to programming problems. Traditionally, programs have been produced in a textual medium. However, a textual medium may be inferior to a 3D medium in the development and use of metaphor, as the concrete properties that metaphors provide are real-world phenomena, which are naturally 3D. An example of the use of 3D metaphors in programming was created. This consisted of a mock operating system task scheduler, along with some associated hardware devices, developed in a VRML environment using VRML PROTO nodes. These nodes were designed as objects based on real-world metaphors. The issues, problems, and novelties involved in programming in this manner were explored.
Virtual reality simulation mechanism on WWW
Fei Wang, Yuncheng Feng, Youshuang Wei
This paper addresses a simple but powerful mechanism for a Virtual Reality simulation system on the World Wide Web. The basic idea is to use the Virtual Reality Modeling Language (VRML) to build the virtual world and to design a specific simulator to carry out the common simulation work and drive the VR animation. According to the achievable mathematics and animation functions, two types of this VR simulation system are established. The first uses VRMLScript or JavaScript to code the specific simulator. This mechanism can indeed be realized; however, the mathematical operations and the simulation model scale are limited. The other applies Java to code the simulator and then uses HTML to combine the VR scene and the simulator in the same web page, which keeps the VR animation running in accordance with the simulation logic. Because Java provides full mathematical functionality and Java code modules are entirely reusable, this VR simulation system, which is the one mainly recommended, can be easily realized on a desktop PC and meet the basic interactive requirements of VR technology without any extra hardware. A VR M/M/1/k queuing simulation system is given to explain the mechanism. Finally, the overall integrated development environment for this VR simulation system is also discussed.
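For reference, the sketch below shows the M/M/1/k queueing logic such a simulator would implement (the paper codes it in Java behind a VRML front end; Python is used here only for brevity, and the parameters are illustrative).

```python
# Event-driven M/M/1/k simulation: Poisson arrivals, exponential service,
# at most k customers in the system (arrivals beyond that are blocked).
import random

def mm1k(lam, mu, k, horizon, seed=42):
    rng = random.Random(seed)
    t, n = 0.0, 0                                   # clock, customers in system
    next_arrival = rng.expovariate(lam)
    next_departure = float("inf")
    served = blocked = 0
    while t < horizon:
        if next_arrival <= next_departure:          # arrival event
            t = next_arrival
            if n < k:
                n += 1
                if n == 1:
                    next_departure = t + rng.expovariate(mu)
            else:
                blocked += 1
            next_arrival = t + rng.expovariate(lam)
        else:                                       # departure event
            t = next_departure
            n -= 1
            served += 1
            next_departure = t + rng.expovariate(mu) if n > 0 else float("inf")
    return served, blocked

served, blocked = mm1k(lam=0.9, mu=1.0, k=5, horizon=10_000)
print(f"served={served}, blocked={blocked}, approx. blocking fraction={blocked/(served+blocked):.3f}")
```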
Simulation of laser detection and ranging (LADAR) and forward-looking infrared (FLIR) data for autonomous tracking of airborne objects
Gavin Powell, Keith C. Markham, David Marshall
This paper presents the results of an investigation leading to an implementation of FLIR and LADAR data simulation for use in a multi-sensor data fusion automated target recognition system. At present the main areas of application are in military environments, but such systems can easily be adapted to other areas such as security applications, robotics, and autonomous cars. Recent developments have moved away from traditional sensor modeling and toward modeling of features that are external to the system, such as atmosphere and part occlusion, to create a more realistic and rounded system. We have implemented such techniques and introduced a means of inserting these models into a highly detailed scene model to provide a rich data set for later processing. From our study and implementation we are able to embed sensor model components into a commercial graphics and animation package, along with object and terrain models, which can be easily used to create a more realistic sequence of images.
High-Level Architecture (HLA)
Embedded real-time high-level architecture (HLA) application case study
This paper presents the Joint Communication Simulator (JCS) system design as a case study of both the conceptual and implementation applicability of High Level Architecture (HLA) in this difficult context. Specific technical topics to be covered include an overview of JCS requirements, an overview of the modeling concept and system architecture in terms of the HLA, a definition of the subset of Run Time Infrastructure (RTI) functionality and HLA interface specification applied, and an overview of the RTI subset implementation. In addition, it addresses the political questions of HLA compliance, the openness of RTI designs and implementation, and the issue of RTI certification.
Conceptual FOM design tool
Lee S. Krause, Carla L. Burns
This paper discusses the research currently in progress to develop the Conceptual Federation Object Model Design Tool. The objective of the Conceptual FOM (C-FOM) Design Tool effort is to provide domain and subject matter experts, such as scenario developers, with automated support for understanding and utilizing available HLA simulation and other simulation assets during HLA Federation development. The C-FOM Design Tool will import Simulation Object Models from HLA reuse repositories, such as the MSSR, to populate the domain space that will contain all the objects and their supported interactions. In addition, the C-FOM tool will support the conversion of non-HLA legacy models into HLA-compliant models by applying proven abstraction techniques against the legacy models. Domain experts will be able to build scenarios based on the domain objects and interactions in both text and graphical form and export a minimal FOM. The ability for domain and subject matter experts to effectively access HLA and non-HLA assets is critical to the long-term acceptance of the HLA initiative.
Runtime services for sharing high-performance distributed data structures among HLA simulations
Donald M. Leskiw, Junmei Zhau
This paper reports on results from an ongoing project to develop methods for representing and managing multiple, concurrent levels of modeling detail and enabling high performance computing, namely parallel processing, within object-based simulation frameworks such as HLA. We present here the interface structure and runtime support service concept for using parallel arrays for high performance computing within distributed object-based simulation frameworks. The approach employs a distributed array descriptor, which can be a basis for extending the HLA standard to provide support for efficiently sharing very large data arrays or sub-arrays among federates. The goal is to reduce communications overhead and thereby improve simulation performance involving C4ISR models that require, for example, interpolation and extrapolation of large data sets, such as those that naturally occur for overlay, coupling, and fusion of phenomenology information in multi-sensor networks.
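To give a concrete sense of what a distributed array descriptor carries, the sketch below shows a hypothetical descriptor with per-block ownership metadata and a lookup of which federates hold a requested sub-array. The field names and partitioning scheme are assumptions for illustration; the actual RTI extension is not reproduced here.

```python
# Hypothetical distributed array descriptor (illustrative, not the paper's design).
from dataclasses import dataclass

@dataclass(frozen=True)
class BlockDescriptor:
    owner: str                 # federate that holds this block
    offset: tuple              # global index of the block's first element
    shape: tuple               # block extent along each dimension

@dataclass
class DistributedArrayDescriptor:
    global_shape: tuple
    dtype: str
    blocks: list               # list of BlockDescriptor

    def owners_for(self, lo, hi):
        """Federates whose blocks intersect the requested global sub-array [lo, hi)."""
        hits = []
        for b in self.blocks:
            if all(l < o + s and h > o
                   for l, h, o, s in zip(lo, hi, b.offset, b.shape)):
                hits.append(b.owner)
        return hits

# A 1000x1000 field split row-wise between two federates.
desc = DistributedArrayDescriptor(
    global_shape=(1000, 1000), dtype="float64",
    blocks=[BlockDescriptor("sensorFed", (0, 0), (500, 1000)),
            BlockDescriptor("fusionFed", (500, 0), (500, 1000))])
print(desc.owners_for(lo=(400, 0), hi=(600, 1000)))   # -> ['sensorFed', 'fusionFed']
```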
Reuse Architectures and Repositories
Digital object multimodel simulation formalism and architecture
Robert M. Cubert, Paul A. Fishwick
The object-oriented approach known as heterogeneous behavior multimodeling has been developed, used, and reported elsewhere, to facilitate creation, modification, sharing, and reuse of object-oriented models and the simulations created from those models. The digital object extends multimodeling so that digital objects can be shared and combined in ways that ordinary multimodels cannot. We describe an abstract base multimodel and several derived instantiated multimodel types. We also describe a transformation which takes a digital object to a simulation program. We give formal definitions of multimodeling, digital object, and the transformation, then from these definitions prove correctness of execution sequencing of simulations created by applying the transformation to digital objects. Closure under coupling of digital objects follows as a corollary, subject to an assumption regarding experimental frame. We then construct an abstract base architecture for manufacture, flow, and persistence of digital objects. From the base architecture we derive and instantiate a suite of architectures, each targeted at a distinct set of requirements: one to operate locally, another with internet protocols, a third with web protocols, and a fourth to allow digital objects to interoperate with other kinds of simulations.
Virtual Targets Center: a working example of efficient distribution of modeling and simulation resources
Stephanie E. Brown
The Virtual Targets Center is a strategic alliance between STRICOM's Targets Management Office and AMCOM's Systems Simulation and Development Directorate. The Virtual Targets Center reduces duplication of effort by making DoD-owned geometry models available for reutilization. The mission of the Virtual Targets Center is to support the modeling and simulation community by collecting, creating, and distributing geometry models in multiple formats applicable to a wide range of simulation activities.
Baobab: a software architecture and methodology for distributed simulation and interaction
Jared Rosoff
We present Baobab, a software architecture and methodology for distributed simulation and interaction. Using pervasive componentization throughout the system, Baobab provides a stable but extensible platform for the development of content-rich interactive simulation. Entities in the environment are simulated using dynamically loadable simulation modules (shared libraries, Java byte codes, scripts, etc.). We provide an elegant API to the simulation module developer, allowing modules to interact with entities that they have never encountered before. This approach allows domain experts to develop simulation modules based on their expertise with limited knowledge of the inner workings of a VE system.
Model Abstraction Techniques
Air Force hierarchy of models: a look inside the great pyramid
The widely used Air Force hierarchy of models and simulations is generally depicted as a four-level pyramid, ranging from the Engineering/Component Level up to the Theater/Campaign Level. While it does present a concise picture of the scope of military models and simulations, it gives the impression that there is a smooth and natural transition from one level to the next. That is not the case. In fact, there is great variance in the degree of complexity from one level to the next. This paper looks at the state of practice in modeling and simulation in the context of this hierarchy and, in particular, at traditional and revolutionary techniques involving inter-level relationships.
Model abstraction results using state-space system identifications
In this paper we report on state-space system identification approaches to dynamic behavioral abstraction of military simulation models. Two stochastic simulation models were identified under a variety of scenarios. The `Attrition Simulation' is a model of two opposing forces with multiple weapon system types. The `Mission Simulation' is a model of a squadron of aircraft performing battlefield air interdiction. Four system identification techniques were applied to these simulation models: Maximum Entropy, Compartmental Models, Canonical State-Space Models, and Hidden Markov Models (HMMs). The techniques were evaluated on how well their resulting abstractions replicated the distributions of the simulation states as well as the decision outputs. Encouraging results were achieved by the HMM technique applied to the Attrition Simulation--and by the Maximum Entropy technique applied to the Mission Simulation.
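As a simplified illustration of the identification idea (not any of the four techniques verbatim), the sketch below fits a low-order Markov-chain surrogate to a discretized, synthetic simulation trace and checks how well it reproduces the state distribution.

```python
# Fit a Markov-chain surrogate to a discretized simulation trajectory (illustrative).
import numpy as np

rng = np.random.default_rng(3)
true_P = np.array([[0.90, 0.08, 0.02],     # assumed "ground truth" dynamics of the
                   [0.10, 0.80, 0.10],     # detailed stochastic simulation, used here
                   [0.05, 0.15, 0.80]])    # only to generate synthetic traces
states = [0]
for _ in range(20_000):
    states.append(rng.choice(3, p=true_P[states[-1]]))
states = np.array(states)

# System identification step: estimate the transition matrix from the trace.
counts = np.zeros((3, 3))
np.add.at(counts, (states[:-1], states[1:]), 1)
P_hat = counts / counts.sum(axis=1, keepdims=True)

# Compare empirical state occupancy with the surrogate's stationary distribution.
evals, evecs = np.linalg.eig(P_hat.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi /= pi.sum()
print("empirical:", np.bincount(states) / len(states))
print("surrogate:", pi)
```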
Kalman approach to accuracy management for interoperable heterogeneous model abstraction within an HLA-compliant simulation
Donald M. Leskiw, Junmei Zhau
This paper reports on results from an ongoing project to develop methodologies for representing and managing multiple, concurrent levels of detail and enabling high performance computing using parallel arrays within distributed object-based simulation frameworks. At this time we present the methodology for representing and managing multiple, concurrent levels of detail and modeling accuracy by using a representation based on the Kalman approach for estimation. The Kalman System Model equations are used to represent model accuracy, Kalman Measurement Model equations provide transformations between heterogeneous levels of detail, and interoperability among disparate abstractions is provided using a form of the Kalman Update equations.
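For context, the sketch below shows the standard Kalman predict/update machinery the abstract refers to. The matrices are generic placeholders; how the system and measurement models are tied to particular abstraction levels is the paper's contribution and is not reproduced here.

```python
# Generic Kalman filter predict/update step (matrices are illustrative).
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    # System model (predict): propagate the state and its accuracy (covariance).
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Measurement model: H maps the detailed state into the coarser observed space.
    y = z - H @ x_pred                       # innovation
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    # Update: fuse the coarse observation back into the detailed representation.
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# One step for a constant-velocity state observed only in position.
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)
R = np.array([[0.5]])
x, P = np.array([0.0, 1.0]), np.eye(2)
x, P = kalman_step(x, P, z=np.array([1.2]), F=F, H=H, Q=Q, R=R)
print("updated state:", x)
```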
Model Abstraction Applications
Neural network submodel as an abstraction tool: relating network performance to combat outcome
Greg Jablunovsky, Clark Dorman, Paul S. Yaworsky
Simulation of Command and Control (C2) networks has historically emphasized individual system performance with little architectural context or credible linkage to `bottom-line' measures of combat outcomes. Renewed interest in modeling C2 effects and relationships stems from emerging network-intensive operational concepts. This demands improved methods to span the analytical hierarchy between C2 system performance models and theater-level models. Neural network technology offers a modeling approach that can abstract the essential behavior of higher resolution C2 models within a campaign simulation. The proposed methodology uses off-line learning of the relationships between network state and campaign-impacting performance of a complex C2 architecture and then approximation of that performance as a time-varying parameter in an aggregated simulation. Ultimately, this abstraction tool offers an increased fidelity of C2 system simulation that captures dynamic network dependencies within a campaign context.
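A minimal sketch of the off-line learning step, assuming synthetic features and a small scikit-learn regressor as stand-ins for the paper's C2 model and network: train a surrogate that maps a network state to an aggregate performance parameter a campaign model can consume.

```python
# Train a small neural-network surrogate for a C2 performance parameter
# (features, targets, and network size are illustrative assumptions).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)
n = 2000
# Hypothetical network-state features: node availability, link capacity, latency.
X = np.column_stack([rng.uniform(0.5, 1.0, n),
                     rng.uniform(0.2, 1.0, n),
                     rng.uniform(0.05, 1.0, n)])
# Hypothetical "high-resolution model" output: timely-targeting fraction.
y = X[:, 0] * X[:, 1] / (1.0 + 2.0 * X[:, 2]) + rng.normal(0, 0.02, n)

surrogate = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
surrogate.fit(X[:1500], y[:1500])
print("held-out R^2:", surrogate.score(X[1500:], y[1500:]))

# Inside the campaign simulation, the trained surrogate then supplies the
# time-varying C2 performance parameter instead of the detailed network model.
current_state = np.array([[0.9, 0.6, 0.3]])
print("abstracted C2 performance:", surrogate.predict(current_state)[0])
```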
Modeling and simulation of the ISR tasking, processing, exploitation, and dissemination (TPED) process
James B. Kraiman, Brad Kingston, James Muccio
We describe our simulation of the Intelligence, Surveillance and Reconnaissance--Tasking, Processing, Exploitation and Dissemination chain. Model formulation is based on analytical descriptions of ISR-TPED processes, which allows evaluation of the statistical variability in model output within a single computational pass. Significant gains in model execution speed are achieved with this approach, especially when compared to the more commonly used technique of discrete event simulation. This allows the simulation user to rapidly identify major performance drivers in novel TPED configurations.
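To illustrate the single-pass analytic idea, the sketch below propagates mean and variance through a serial TPED chain with independent stage times and cross-checks the result against the Monte Carlo alternative the abstract contrasts. Stage values and distributions are placeholders, not the authors' model.

```python
# Analytic propagation of timeline statistics through a serial TPED chain
# (stage means/variances are illustrative placeholders).
import numpy as np

stages = {            # mean and variance of each stage's processing time (minutes)
    "tasking":       (5.0,  1.0),
    "collection":    (20.0, 16.0),
    "processing":    (8.0,  4.0),
    "exploitation":  (15.0, 9.0),
    "dissemination": (3.0,  0.5),
}
mean = sum(m for m, _ in stages.values())
var = sum(v for _, v in stages.values())       # independence assumption
print(f"analytic: mean={mean:.1f} min, std={np.sqrt(var):.1f} min")

# Cross-check against event-by-event sampling.
rng = np.random.default_rng(5)
samples = sum(rng.normal(m, np.sqrt(v), 100_000) for m, v in stages.values())
print(f"monte carlo: mean={samples.mean():.1f} min, std={samples.std():.1f} min")
```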
Model abstractions for real-time network environments
Bruno R. Andriamanalimanana, Saumen S. Sengupta, Joe Riolo, et al.
The model abstraction problem is explored from the perspective of a real-time network environment, which usually admits different system models (such as queue-theoretic models) at its different equilibrium states. To computationally depict system states at any level of abstraction, it is necessary to identify correct models consistent with the observables. However, any such system identification need not be permanent, particularly for a dynamic system. In such situations, as the system appears to migrate from one equilibrium state to another, one should be able to quickly identify an event of context-switching from one model abstraction to another. In this paper we show how, using a variation of traditional CUSUM statistical approaches, one can identify model change events in a timely manner.
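A standard one-sided CUSUM sketch of the change-detection idea follows; the paper uses its own variation, so the monitored statistic, slack, and threshold here are illustrative.

```python
# One-sided CUSUM detection of a shift in a monitored network statistic.
import numpy as np

def cusum_upper(x, target_mean, slack, threshold):
    """Return the first index at which the upper CUSUM statistic exceeds threshold,
    signalling that the observed statistic no longer matches the current model."""
    s = 0.0
    for i, xi in enumerate(x):
        s = max(0.0, s + (xi - target_mean - slack))
        if s > threshold:
            return i
    return None

rng = np.random.default_rng(11)
# Monitored observable, e.g. mean queueing delay: equilibrium, then a regime shift.
obs = np.concatenate([rng.normal(1.0, 0.2, 300), rng.normal(1.6, 0.2, 300)])
change_at = cusum_upper(obs, target_mean=1.0, slack=0.1, threshold=5.0)
print("model switch detected at sample:", change_at)   # shortly after index 300
```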
Two-surface simplification algorithms for polygonal terrain with integrated road features
Guy A. Schiavone, Ying Dai, Grace Yu, et al.
Terrain database generation is one of the most expensive tasks in the development of human-in-the-loop visual simulations. There are many factors associated with the efficiency of generating the terrain database. Automating the process of extracting the required database primitives from remote sensing imagery and constructing detailed 3D feature models offers many challenging problems. Another problem is to simplify the terrain model by using fewer polygons without significant loss in the visual characteristics of the rendered imagery, thereby reducing the complexity of the terrain database and improving real-time rendering performance. In this paper we present two surface simplification algorithms designed for the purpose of constructing a terrain database that is optimized for driving simulation: one using a bottom-up, polygonal refinement approach, the other using a top-down, polygonal removal approach. These two algorithms are applied to terrain surfaces that include integrated, `stitched-in' road features, and are used to generate terrain surfaces of various levels of detail. We provide a discussion of the design of these two algorithms, some experimental results of applying the algorithms to real terrain surfaces, and a comparison of the two approaches in terms of height error and distance from the road.
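The sketch below illustrates only the error-driven removal loop behind the top-down approach, reduced to a one-dimensional height profile for brevity; real terrain is 2.5D with road constraints, and this is not the authors' algorithm.

```python
# Greedy removal-style simplification of a height profile (illustrative, 1D only).
import numpy as np

def simplify_profile(xs, zs, max_error):
    xs, zs = list(xs), list(zs)
    while len(xs) > 2:
        # Height error introduced by removing each interior vertex i: distance of
        # zs[i] from the line through its two neighbors.
        errors = []
        for i in range(1, len(xs) - 1):
            z_interp = np.interp(xs[i], [xs[i - 1], xs[i + 1]], [zs[i - 1], zs[i + 1]])
            errors.append(abs(zs[i] - z_interp))
        i_min = int(np.argmin(errors)) + 1
        if errors[i_min - 1] > max_error:
            break
        del xs[i_min], zs[i_min]
    return xs, zs

x = np.linspace(0, 10, 101)
z = np.sin(x) + 0.05 * np.sin(8 * x)
sx, sz = simplify_profile(x, z, max_error=0.02)
print(f"kept {len(sx)} of {len(x)} vertices")
```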
Collaboration Techniques and Applications
Framework solutions for complete collaborative environments
Vance M. Saunders, Derek Maddox
Collaboration among experts from different domains within an enterprise has always posed logistical and knowledge management challenges to managers and members of the collaboration. Scheduling meetings, arranging travel, and getting data and information into the right hands at the right time all require time, money, and energy that could be better spent on product development. Advances in information technology have made it easier to communicate and to solve, or at least mitigate, some of these problems using e-mail, audio conferencing, and database management software, but a great deal of human intervention is still required to make these collaborations operate smoothly. Over the past ten years enterprises have come to require more than just total asset visibility and human communication capabilities. To design and field products better, faster, and cheaper, more human creativity and energy must be focused on the products and less on the operation of the collaboration. The collaborative environment solutions of the future must not only provide the communication and knowledge management that exist today, but also provide seamless access to resources and information, product and process modeling, and the advanced decision support that results from the availability of the necessary resources and information.
Visage-Link: a medium for distributed collaboration in information-intensive domains
Mark D. Foresti
Visage-Link takes the next step in the paradigm shift defined by the Visage information architecture, leveraging an information-centric user interface approach to facilitate collaboration among geographically distributed users. Extending Visage's notion of polymorphism to the collaborative realm, Visage-Link allows a user to visualize a shared data set using views tailored to his or her individual role. Basic research extends this concept to operate on smaller devices used for highly user-specific roles within operational exercises with varying levels of connectivity.
Simulation Paradigms
Using the dynamic focusing architecture for analysis of systems of systems
Thomas C. Fall
For the introduction of new systems, we only have a few paradigms to guide us. One that is currently popular is the `Silicon Valley Startup' paradigm, in which you get an idea for a product, get a few young people (paid with stock options) to put a version of it together, and five months later put it on the Internet. If it flies, you IPO and everyone gets rich. However enticing, this paradigm only works if the new system is fairly standalone; that is, its value lies only in itself and not in how it enhances the value of a system of interdependent systems. The latter would be the case, for instance, if we were trying to analyze the benefits of a new type of weapons system. For this analysis we must look at the context presented to our system and how its response affects the context the other systems see. The issue is that these contexts carry a very large amount of uncertainty. We will describe how the Dynamic Focusing Architecture can guide us through the uncertainty to discover the underlying key issues.
Methodology of modeling and interactive simulation of combat processes for CAX and DSS
Andrzej Najgebauer, Tadeusz Nowicki
The interactive simulation environment for training (and/or analysis) of military operations is presented as an example of the utilization of a specific methodology. The phases of the methodology are presented: the general description of a conflict, the conflict model as a non-coalition 3-person game, the model of the battle process (a multidimensional stochastic process of DC class), the decision model (a multiple-stage stochastic optimization problem), the computer environment for simulation of the combat process, the experiments, the monitoring and visualization phase, and the post-simulation analysis. A military conflict can be generally described and the sides of the conflict identified with their structure, warfare, states, location, missions, and so on. A theoretical game is considered as a basic model of a military conflict. The stochastic model expresses the uncertainty in a conflict situation. The transition between the stochastic model and the simulation model is shown. The global decision problem is formulated for each side as a multistage stochastic programming problem in which a risk function is considered as the criterion. Each component of the conflict is described as an object. The objects' behavior during the game is represented as a simulation process. The environment proposed is built as an open system and can be extended and improved by including new combat models, unit structures, tactical rules, and more monitored characteristics. Possible directions for the development and utilization of the computer environment are discussed.
Cognitive Process Modeling
COREBA (cognition-oriented emergent behavior architecture)
S. David Kwak
Currently, many behavior implementation technologies are available for modeling human behaviors in Department of Defense (DoD) computerized systems. However, it is commonly recognized that no single currently adopted behavior implementation technology is capable of fully representing complex and dynamic human decision-making and cognition behaviors. The author's view is that the current situation can be greatly improved if multiple technologies are integrated within a well designed overarching architecture that amplifies the merits of each of the participating technologies while suppressing the limitations inherent in each of them. COREBA uses an overarching behavior integration architecture that makes multiple implementation technologies cooperate in a homogeneous environment while collectively transcending the limitations associated with the individual implementation technologies. Specifically, COREBA synergistically integrates Artificial Intelligence and Complex Adaptive Systems under the Rational Behavior Model multi-level, multi-paradigm behavior architecture. This paper describes the applicability of COREBA in the DoD domain, the behavioral capabilities and characteristics of COREBA, and how the COREBA architecture integrates various behavior implementation technologies.
Object-oriented reasoning in cognitive systems
Janusz Korniak
Intelligent agent architectures widely employ methods from the logic of belief. The goal of this paper is to find a correct and effective inference mechanism that can substantially improve on traditional resolution-based methods. The semantics of the mechanism is based on Minsky's frames. Each agent is modeled by Minsky's frames, with slots representing what the agent believes. The inference process is realized by daemons filling the frame slots, where filling in this context means setting unknown slot values. The order of reasoning is established by a directed acyclic graph and driven by topological sorting as a reasoning strategy. Analysis of the inference algorithm shows that the new method works in polynomial time; it is therefore more efficient than the traditional resolution-based methods, which are NP. The correctness of the object-oriented implementation of the algorithm is established by considering the inference process in terms of abstract relational systems and their isomorphisms. Finally, an implementation methodology for agents and their inference process in an object-oriented language is presented. All the concepts and methodology considered are illustrated by an object-oriented solution to the `three wise men problem' implemented in Smalltalk.
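A minimal Python sketch of the scheme follows: frames with slots, daemons that fill unknown slot values, and a topological sort of the dependency DAG fixing the order of reasoning. It compresses the wise-men reasoning into a single daemon for brevity and is illustrative only; the paper's implementation is in Smalltalk.

```python
# Frame slots filled by daemons in an order given by topological sorting (illustrative).
from graphlib import TopologicalSorter

class Frame:
    def __init__(self, name):
        self.name = name
        self.slots = {}                      # slot -> value (what the agent believes)

agent = Frame("wise_man_1")

# Each daemon fills one slot, using only slots already filled.
def see_hats(frame):
    frame.slots["others_hats"] = ["white", "white"]          # observed facts

def hear_silence(frame):
    frame.slots["others_silent"] = True                      # no one has answered yet

def infer_own_hat(frame):
    # Compressed wise-men step: both others wear white and stay silent -> mine is white.
    if frame.slots["others_hats"] == ["white", "white"] and frame.slots["others_silent"]:
        frame.slots["own_hat"] = "white"

daemons = {"others_hats": see_hats, "others_silent": hear_silence, "own_hat": infer_own_hat}
# Dependency DAG: slot -> slots it depends on.
deps = {"others_hats": set(), "others_silent": set(),
        "own_hat": {"others_hats", "others_silent"}}

for slot in TopologicalSorter(deps).static_order():          # reasoning order
    daemons[slot](agent)
print(agent.slots)
```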
Collaboration Techniques and Applications
CEE interfacing for Khoros: visual interactive programming for enterprise research (VIPER)
Kevin J. Farrar, Jong S. Hwang
Collaborative engineering and development are paramount to supporting new warfighter-driven programs like Simulation Based Acquisition and Simulation Based Design. The Collaborative Enterprise Environment (CEE), under development by AFRL, enhances Directorate research and development, exploration, evaluation, planning and transition of technologies by enabling collaboration and technology integration. A number of algorithms and models have been implemented under a variety of DoD programs using Khoros, a powerful software development and visual programming environment that facilitates the integration of legacy code as well as the development of new solutions. This paper discusses how Khoros is being extended to operate within the CEE, seamlessly supporting collaborative development and component reuse.
Model Abstraction Techniques
Assessing candidates for model abstraction
Richard A. MacDonald, Robert M. McGraw
Today's modeling and simulation community is faced with the problem of developing and managing large, complex system models comprised of a diverse set of subsystem component models. These component models may be described using varying amounts of detail and fidelity as well as differing modeling paradigms. Often, a complex simulation comprised of high-fidelity subcomponent models may result in a more detailed system model than the simulation objective requires. Simulating such a system model wastes simulation time with respect to addressing the simulation goals. One way to avoid wasting simulation cycles is to reduce the complexity of subcomponent models while not affecting the desired simulation objective. The process of reducing the complexity of these subcomponent models is known as abstract modeling. Abstract modeling reduces subcomponent model complexity by eliminating, grouping, or estimating model parameters or variables at a less detailed level without grossly affecting the simulation results. One key issue concerning model abstraction is identifying the variables or parameters that can be abstracted away for a given simulation objective. This paper presents an approach to identifying candidate variables for model abstraction when considering typical C4ISR (Command, Control, Communications, Computers, Intelligence, Surveillance, and Reconnaissance) hardware systems.
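As a simple illustration of screening abstraction candidates (the paper's own assessment procedure is not reproduced here), the sketch below varies each subcomponent parameter one at a time and flags those whose influence on the simulation objective is negligible. The model, parameters, and threshold are hypothetical.

```python
# One-at-a-time sensitivity screening of abstraction candidates (illustrative).
import numpy as np

def system_model(p):
    """Stand-in for a C4ISR system simulation; returns the objective metric."""
    return p["detect_prob"] * 100.0 / (1.0 + p["comm_latency"]) + 0.01 * p["antenna_gain"]

baseline = {"detect_prob": 0.8, "comm_latency": 0.5, "antenna_gain": 30.0}
spans = {"detect_prob": 0.2, "comm_latency": 0.4, "antenna_gain": 10.0}

ref = system_model(baseline)
for name, delta in spans.items():
    hi = dict(baseline, **{name: baseline[name] + delta})
    lo = dict(baseline, **{name: baseline[name] - delta})
    effect = abs(system_model(hi) - system_model(lo)) / abs(ref)
    verdict = "candidate for abstraction" if effect < 0.05 else "keep detailed"
    print(f"{name:13s} relative effect {effect:.3f}  -> {verdict}")
```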