Mission-system planning: an application of multiresolution, multiperspective modeling, and exploratory analysis
Author(s):
Paul K. Davis
This paper describes mission-system planning (MSP) and mission-system analysis (MSA). It relates their needs to two frontier subjects: multiresolution, multiperspective modeling (MRMPM) and exploratory analysis. After a brief explanation of mission-system planning, I describe an application: the mission of halting a mechanized invasion force with long-range fires such as fighter and bomber aircraft. The application involves defining the relevant system, decomposing it analytically, and assessing overall system effectiveness over a relevant scenario space. The appropriate decomposition depends on one's point of view and responsibilities, and may have both hierarchical and network aspects. The result is a need for multiple levels of resolution in each of several perspectives. Evaluation of system capabilities then requires models. Strategically useful mission-system evaluation requires low-resolution (highly abstracted) models, but the validity and credibility of those evaluations depends on deeper work and is enhanced by the ability to zoom in on components of the system problem to explore underlying mechanisms and related capabilities in more detail. Given success in such matters, the remaining challenge is to find reductionist ways in which to display and explain analysis conclusions and motivate decisions. This also requires abstraction, the soundness of which can be enhanced with appropriate tools for data analysis of results from the exploratory work across scenario space.
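To make the exploratory-analysis idea concrete, the following is a minimal, hypothetical Python sketch (not Davis's actual model): a toy halt calculation in which long-range fires attrit an advancing armored force, evaluated over a small scenario space of advance rates and kill rates. All numbers are illustrative placeholders.

```python
import itertools

def halt_distance(advance_km_per_day, kill_rate_per_day, force_size,
                  halt_fraction=0.5, dt=0.1, max_days=30.0):
    """Advance an armored force until cumulative attrition reaches the
    halt fraction; return the distance penetrated (km).  All parameters
    are illustrative placeholders, not calibrated values."""
    remaining, distance, t = force_size, 0.0, 0.0
    while remaining > (1.0 - halt_fraction) * force_size and t < max_days:
        remaining -= kill_rate_per_day * dt
        distance += advance_km_per_day * dt
        t += dt
    return distance

# Exploratory analysis: sweep a small scenario space and tabulate outcomes.
advance_rates = [40, 60, 80]          # km/day
kill_rates = [50, 100, 200]           # armored vehicles killed per day
for v, k in itertools.product(advance_rates, kill_rates):
    d = halt_distance(v, k, force_size=2000)
    print(f"advance={v:3d} km/day  kills={k:3d}/day  halt at {d:6.1f} km")
```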
Defining the next generation of enabling technology for exploratory analysis and multiresolution modeling
Author(s):
Jimmie McEver;
Paul K. Davis
Multi-resolution/multiple-perspective modeling (MRMPM) is a powerful methodology for developing flexible, adaptive models that can be applied in diverse areas of study. It is useful for addressing the needs of decision makers at different organizational levels, as well as of users who think about phenomenological processes at varied levels of resolution or from different points of view. The desire for MRMPM has many implications for modeling environments. We discuss attractive attributes of modeling environments and how they enable MRMPM.
Mission-systems analysis for rapidly insertable military capabilities
Author(s):
Richard J. Hillestad
This paper describes an approach to analyzing military capabilities that we call mission-systems analysis (MSA), in the context of mobility issues associated with rapid projection of military capabilities. The paper illustrates key aspects of MSA including system modeling, multi-resolution perspectives, mission/objective determination, and exploratory analysis of uncertainties and risk. We show how system definition can be used to help identify key options for analysis and describe two tools, Analytica and DynaRank, that are particularly useful for exploratory analysis when there are broad uncertainties as well as multiple and competing objectives. We believe this approach is particularly useful for military transformation analysis because of the need to develop innovative alternatives and the need to evaluate those alternatives amid the broad uncertainties that characterize today's military demands, across a very large scenario space, against competing objectives, and subject to many constraints on the application of force.
Overview of clustering algorithms
Author(s):
Allyn Treshansky;
Robert M. McGraw
Clustering algorithms are useful whenever one needs to classify a large amount of information into a set of manageable and meaningful subsets. Using an analogy from vector analysis, a clustering algorithm can be said to divide up state space into discrete chunks such that each vector lies within one chunk. These vectors can best be thought of as sets of features. A canonical vector for each region of state space is chosen to represent all vectors located within that region. This paper presents a survey of clustering algorithms. It pays particular attention to those algorithms that require the least amount of a priori knowledge about the domain being clustered. In the current work, an algorithm is compelling to the extent that it minimizes assumptions about the distribution of the vectors being classified.
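As a concrete instance of the "canonical vector per region" idea, here is a bare-bones k-means sketch in Python. It is illustrative only and embodies exactly the kind of distributional assumptions (roughly spherical, similarly sized clusters) that the survey's preferred algorithms try to avoid.

```python
import random

def kmeans(vectors, k, iterations=20, seed=0):
    """Partition feature vectors into k clusters; each centroid is the
    'canonical vector' representing its region of state space."""
    rng = random.Random(seed)
    centroids = rng.sample(vectors, k)
    for _ in range(iterations):
        # Assign every vector to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for v in vectors:
            nearest = min(range(k), key=lambda i: sum((a - b) ** 2
                          for a, b in zip(v, centroids[i])))
            clusters[nearest].append(v)
        # Recompute each centroid as the mean of its cluster.
        for i, members in enumerate(clusters):
            if members:
                centroids[i] = tuple(sum(dim) / len(members)
                                     for dim in zip(*members))
    return centroids, clusters

data = [(0.1, 0.2), (0.0, 0.1), (5.0, 5.1), (5.2, 4.9), (9.8, 0.2), (10.1, 0.0)]
centroids, clusters = kmeans(data, k=3)
print(centroids)
```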
Quantum information theory for model abstraction techniques
Author(s):
Marjorie V. Quant;
Ryan Colburn
The intent of this paper is to bring forward and explore a possible application of the emerging field of quantum information science to the modeling and simulation community. It is hoped that this research will open new pathways and partnerships in which quantum information theory may be applied to modeling techniques. The issue for a mixed-resolution model/simulation is how to provide for correct data exchange between the differing levels of detail that exist among individual models. The concept behind model abstraction is to extract the essence of a high-resolution model at a level of detail appropriate for the simulation. There is always a challenge in finding a balance between the speed of the simulation (lower resolution in the models) and the level of detail required to extract the intended information from the simulation. While computing speed is a quantity that may be measured fairly easily, the fitness of the detail extracted from a model is often questionable and subjectively measured. This paper discusses the use of quantum information theory as a means to quantify differing levels of detail in mixed-resolution simulations. An overview of information theory is provided in section 2. This is followed by a brief look at methods used to determine the fidelity of information passed as an abstracted model.
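The abstract gives no formulas, but one plausible reading is that differing levels of detail can be scored by entropy. The sketch below, an assumption-laden illustration rather than the authors' method, computes the von Neumann entropy of a density matrix (which reduces to Shannon entropy for diagonal, classical states) for a notional high-resolution model and a lumped abstraction, using numpy.

```python
import numpy as np

def von_neumann_entropy(rho):
    """Von Neumann entropy S(rho) = -Tr(rho log2 rho), computed from the
    eigenvalues of a density matrix; reduces to Shannon entropy for a
    diagonal (classical) rho."""
    eigvals = np.linalg.eigvalsh(rho)
    eigvals = eigvals[eigvals > 1e-12]           # drop numerical zeros
    return float(-np.sum(eigvals * np.log2(eigvals)))

# Illustrative comparison: a "high-resolution" state distributed over four
# outcomes versus an abstracted model that lumps them into two outcomes.
rho_high = np.diag([0.4, 0.3, 0.2, 0.1])
rho_abstract = np.diag([0.7, 0.3])
print("high-resolution entropy :", von_neumann_entropy(rho_high))
print("abstracted entropy      :", von_neumann_entropy(rho_abstract))
print("information lost (bits) :",
      von_neumann_entropy(rho_high) - von_neumann_entropy(rho_abstract))
```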
Considerations for selective-fidelity simulation
Author(s):
Bradley C. Schricker;
Stephen A. Schricker;
Robert W. Franceschini
The concept of fidelity in a simulation model has become one of great contention among simulation researchers. While there is some agreement regarding the definition of fidelity in a model, there appears to be little accord about how the fidelity of a model might be measured - or even whether it is measurable at all. Because of the abstract nature of both reality and the representations of reality used for simulation systems, some argue that fidelity is also abstract in nature and cannot be measured. Research conducted at IST, however, has yielded a method not only for measuring the fidelity of a model, but also for comparing the fidelity of different models to be used in a simulation. This ability to compare the fidelity of different models can be used in a practical manner to decide which models would be most appropriate for a new application. Further research at IST has explored the idea of using this ability to compare model fidelity to give a simulation system the ability to select the models it should use based on outside factors such as the computational load on the system. This concept is called Selective-Fidelity Simulation, and it is documented in this paper.
Evaluating the performance versus accuracy tradeoff for abstract models
Author(s):
Robert M. McGraw;
Joseph E. Clark
While the military and commercial communities are increasingly reliant on simulation to reduce cost, the simulations of their complex systems may themselves be costly to develop. In order to reduce simulation costs, simulation developers have turned toward collaborative simulation, reuse of existing simulation models, and model abstraction techniques that reduce simulation development time as well as simulation execution time. This paper focuses on model abstraction techniques that can be applied to reduce simulation execution and development time and on the effects those techniques have on simulation accuracy.
Multisensor fusion effects on the characterization and optimization of TPED architecture performance
Author(s):
James B. Kraiman
We describe our approach to modeling the Tasking, Processing, Exploitation, and Dissemination (TPED) process in a way that accounts for multisensor fusion while characterizing and optimizing TPED architecture performance across multiple mission objectives. The method would address the inability of current models to assess the value added by multisensor fusion techniques to ISR mission success, while providing a means to translate detailed output of sensor fusion techniques into higher-level information that is relevant to ISR planning and analysis. The technical approach incorporates treatment of ISR sensor performance, dynamic sensor tasking, and multisensor fusion within a probability modeling framework to allow rapid evaluation of TPED information throughput and latency. This would permit characterization/optimization of TPED architecture performance against time-critical/time-sensitive targets (TCTs/TSTs), while simultaneously supporting other air-to-ground targeting missions within the Air Tasking Order cycle. TPED architecture performance metrics would include the probability of achieving operational timeliness requirements while providing requisite target identification and localization.
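A hedged sketch of the timeliness metric mentioned above: treat each TPED stage latency as a random variable (the distributions and parameters below are invented, not from the paper) and estimate the probability of meeting an operational timeliness requirement by Monte Carlo.

```python
import random

def tped_timeliness(n_trials=100_000, requirement_min=10.0, seed=1):
    """Estimate the probability that tasking-through-dissemination latency
    meets a timeliness requirement.  Stage latency distributions below are
    purely notional."""
    rng = random.Random(seed)
    met = 0
    for _ in range(n_trials):
        tasking = rng.expovariate(1 / 1.5)        # mean 1.5 min
        processing = rng.gauss(3.0, 0.8)          # mean 3.0 min
        exploitation = rng.expovariate(1 / 2.5)   # mean 2.5 min
        dissemination = rng.gauss(1.0, 0.3)       # mean 1.0 min
        total = (tasking + max(processing, 0.0) + exploitation
                 + max(dissemination, 0.0))
        if total <= requirement_min:
            met += 1
    return met / n_trials

print("P(latency <= 10 min) ~", tped_timeliness())
```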
DeLoRes variable resolution modeling implementation
Author(s):
William C. Jorch;
Chester R. Haag;
Ivans Chou;
Bruce Preiss
Variable Resolution Modeling is a collection of techniques designed to make a higher-fidelity model interoperate dynamically with a lower-fidelity model. We illustrate an example of this technique by using a high-fidelity Space Based Radar (SBR) sensor model interoperating in real time with the Extended Air Defense Simulation (EADSIM), a US Government standard mission-level simulation. The SBR performance is captured in a Commercial-Off-the-Shelf (COTS) database during parametric runs and analyzed using standard Matlab tools. The functional performance is abstracted using a variety of multidimensional table look-up methods and neural network representations. Finally, a real-time interactive module has been created to communicate with EADSIM using Distributed Interactive Simulation (DIS) protocols to provide a high-fidelity SBR performance model. Although specialized software, called DeLoRes, was created, the technique heavily exploits the availability and applicability of several COTS tools and the standardization of inter-application communications. Additionally, this technique leverages existing capabilities in EADSIM.
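The table look-up style of functional abstraction can be sketched as follows; the "high-fidelity" detection-probability function, the grid, and the use of SciPy's RegularGridInterpolator are stand-ins for the actual DeLoRes data and tooling.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Stand-in for expensive high-fidelity runs: probability of detection as a
# function of target range (km) and radar cross section (dBsm).
ranges = np.linspace(100, 1000, 10)
rcs = np.linspace(-10, 20, 7)

def high_fidelity_pd(r, s):
    return 1.0 / (1.0 + np.exp((r - 600 - 10 * s) / 80.0))

table = high_fidelity_pd(ranges[:, None], rcs[None, :])

# Abstracted model: fast multidimensional table look-up with interpolation.
lookup = RegularGridInterpolator((ranges, rcs), table)
print("Pd at 450 km, 5 dBsm :", float(lookup([[450.0, 5.0]])[0]))
```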
Multilevel resolution demonstration of the utility of C4ISR
Author(s):
Thomas C. Fall;
Gary A. Plotz
With the development of new C4ISR paradigms like the Joint Battlespace Infosphere (JBI) and the High Level Architecture (HLA), and the integration of large simulations like JWARS and JMASS into a distributed synthetic battlespace, the requirement to develop new and innovative mixed-resolution modeling techniques is more critical than ever before. Analyzing C4ISR utility has historically been an issue because there has not been a clear paradigm for it, as force-on-force attrition is for a combat event. Rather, the effects of C4ISR express themselves by modifying the force-on-force parameters; by how much remains an area of little consensus. We will discuss an approach, using the Dynamic Focusing Architecture (DFA) tool recently developed for AFRL, to help navigate these contentious waters.
Accuracy and stability tradeoffs in multirate simulation
Author(s):
Robert M. Howe
Many dynamic systems can be separated into identifiable fast and slow subsystems. Computational efficiency in simulating such systems can be improved significantly by using a faster integration frame rate (smaller integration time step) in the numerical simulation of the fast subsystems, compared with the frame rate used for the slow subsystems. In real-time simulations the use of multirate simulation may be the only way in which acceptable real-time accuracy can be achieved using a given processor. To convert slow data-sequence outputs from the slow-subsystem simulation to the required fast data-sequence inputs for the fast subsystems, extrapolation formulas must be used. Overall simulation accuracy can be improved by using high-order extrapolation formulas. On the other hand, the use of high-order extrapolation can result in numerical instability, especially when the fast and slow subsystems are tightly coupled. Thus there is an accuracy-stability tradeoff in the choice of extrapolation formulas for the slow-to-fast data-sequence conversion. In this paper the problem is studied by using fast and slow second-order systems connected in a feedback loop. For various levels of coupling between the two subsystems, dynamic accuracy and numerical stability of the overall simulation are studied. To reduce the number of parameters needed to describe the dual-speed system, the bandwidth of the fast subsystem is assumed to be infinite, with an exact simulation achieved by using an infinite integration frame rate. Numerical stability of the multirate simulation is studied by examining the stability boundaries in the λh plane, where λ is the eigenvalue of the overall system and h is the time step used in numerical simulation of the slow subsystem. For various levels of coupling between fast and slow subsystems, it is shown that the λh-plane stability boundaries shrink when higher-order extrapolation is used for slow-to-fast data-sequence conversion. On the other hand, overall simulation accuracy improves with the use of the higher-order extrapolation formulas.
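A minimal numerical sketch of the tradeoff, simplified to two coupled first-order subsystems rather than the paper's second-order test case: the slow output is fed to the fast integrator either by zero-order hold or by linear extrapolation from the two most recent slow frames. Varying the coupling gain and frame ratio exhibits the accuracy/stability behavior discussed above; all parameters are illustrative.

```python
def multirate(coupling=2.0, H=0.05, ratio=10, T=5.0, order=0):
    """Fast/slow first-order subsystems in a feedback loop.  The slow output
    is fed to the fast integrator via zero-order hold (order=0) or linear
    extrapolation from the two most recent slow frames (order=1)."""
    h = H / ratio
    xs, xf = 1.0, 0.0
    xs_prev = xs
    for _ in range(int(T / H)):
        # Integrate the fast subsystem over one slow frame with small steps.
        for j in range(ratio):
            if order == 0:
                xs_est = xs                       # zero-order hold
            else:
                slope = (xs - xs_prev) / H        # linear extrapolation
                xs_est = xs + slope * (j * h)
            xf += h * (-10.0 * xf + coupling * xs_est)
        # Integrate the slow subsystem with the large step (explicit Euler).
        xs_prev = xs
        xs += H * (-xs + coupling * xf)
    return xs, xf

print("zero-order hold :", multirate(order=0))
print("linear extrap.  :", multirate(order=1))
```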
Evaluating simulation acceleration techniques
Author(s):
Gregory D. Peterson
The development of large, complex systems, the training of personnel, and the refinement of concepts of operations all depend on high-performance simulation technologies. For each of the above application areas, among others, there is a chronic need for ever-higher performance. This paper addresses the application of three simulation acceleration approaches, used independently or in conjunction with one another: parallel simulation, mixed-abstraction/multiresolution simulation, and hardware acceleration via reconfigurable computing elements. After discussing the merits of each approach, the paper presents analytic techniques for determining the most effective approach to use for a given simulation problem.
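As a hedged illustration of such analytic comparison (not the paper's formulas), an Amdahl's-law style estimate can combine the candidate acceleration approaches; the fractions and gain factors below are invented.

```python
def speedup(parallel_fraction=0.0, processors=1,
            abstraction_factor=1.0, hardware_factor=1.0):
    """Rough composite speedup estimate: Amdahl's law for the parallelizable
    fraction, multiplied by notional gains from model abstraction and from
    reconfigurable-hardware acceleration.  Purely illustrative."""
    amdahl = 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / processors)
    return amdahl * abstraction_factor * hardware_factor

options = {
    "parallel only (16 CPUs, 80% parallel)": speedup(0.8, 16),
    "abstraction only (4x coarser models)":  speedup(abstraction_factor=4.0),
    "combined":                              speedup(0.8, 16, 4.0, 1.5),
}
for name, s in options.items():
    print(f"{name:40s} ~ {s:.1f}x")
```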
Electromagnetic simulation environment
Author(s):
Eric Jones;
Balaji Krishnapuram;
John Pormann;
John A. Board Jr.;
Lawrence Carin
We present a general-purpose simulator that includes electromagnetic scattering tools for buried targets and standard signal processing functionality. Additional modules for genetic or gradient optimization, parallel processing, and multi-aspect target detection via Hidden Markov Models are also available. The entire library is completely scriptable for customization and web-enabled for publishing results on the internet. It is also extensible so that users can add modules that address their specific needs. The tool runs on both Unix and Windows platforms and includes graphic modules for plotting results and images as well as three-dimensional visualization capabilities for displaying target meshes, currents, and scattered fields. Electromagnetic scattering is calculated via the Method of Moments (MoM) for arbitrarily shaped three-dimensional perfect electric conductor (PEC) or dielectric targets above or embedded within a lossy half space. The code uses the combined-field integral equations with the rigorous half-space dyadic Green's function computed via the method of complex images. The simulator offers coarsely parallel capabilities for distributing individual frequencies across a cluster of workstations with near-linear speed-up.
Web-based models and repositories
Author(s):
Paul A. Fishwick
We describe our research experiences in the study of web-based model repositories, and in model design in general. The web has increasingly become the repository for all knowledge and data, and so it may not be too surprising that model representations will similarly come to be situated predominantly on the web. Our research work began with an Object-Oriented Physical Modeling (OOPM) package, and then progressed to a more immersive, shared, 3D structure using the Virtual Reality Modeling Language (VRML). We arrive at a new way of dynamic modeling in 3D, and have identified issues and challenges for modeling in the discipline of computer simulation.
Virtual reality modeling language templates for dynamic model construction
Author(s):
Taewoo Kim;
Paul A. Fishwick
The use of our rube Methodology permits the design of computing models that are framed using user-specified aesthetics. For example, we can design a functional block model using 3D blocks and pipes, or alternatively, using rooms and portals. Our recent approach has been to construct dynamic models using the Virtual Reality Modeling Language (VRML) by taking familiar 2D icons, and subsequently mapping these to 3D primitives. We discuss two specific transformations from 2D to 3D within the context of finite state automata and functional block models. These examples are simple and serve as tutorial examples of how more complex transforms can be accomplished.
Methodology for the 3D modeling and visualization of concurrency networks
Author(s):
Linda K. Dance;
Paul A. Fishwick
One of the primary formalisms for modeling concurrency and resource contention in systems is Petri nets. The literature on Petri nets is rich with activity on applications as well as extensions. We use the basic Petri net formalism as well as several extensions to demonstrate how metaphor can be applied to yield 3D model worlds. A number of metaphors, including 3D-primitive and landscape are employed within the framework of VRML-enabled simulation. We designed a template for use in creating any Petri net model and then using the template, implemented an example model utilizing metaphors for both the structure and the behaviors of the model. We determined that the result is an effectively and efficiently communicated model with high memory retention. The modeling methodology that we employ was successfully implemented for Petri nets with the ability for model reuse and/or personalization with any aesthetics applied to the desired Petri net.
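For readers unfamiliar with the underlying formalism, here is a minimal Petri-net executor (places, transitions, token firing) in Python; the producer/consumer net is illustrative and unrelated to the authors' example models.

```python
# Minimal Petri net: a place->token marking plus transitions defined by the
# tokens they consume and produce.  Illustrative producer/consumer example.
marking = {"buffer_empty": 2, "buffer_full": 0, "item_ready": 1}

transitions = {
    "produce": ({"item_ready": 1, "buffer_empty": 1},   # consumes
                {"buffer_full": 1}),                     # produces
    "consume": ({"buffer_full": 1},
                {"buffer_empty": 1, "item_ready": 1}),
}

def enabled(name):
    consume, _ = transitions[name]
    return all(marking[p] >= n for p, n in consume.items())

def fire(name):
    consume, produce = transitions[name]
    for p, n in consume.items():
        marking[p] -= n
    for p, n in produce.items():
        marking[p] = marking.get(p, 0) + n

for step, t in enumerate(["produce", "consume", "produce"]):
    if enabled(t):
        fire(t)
    print(f"step {step}: fired {t:8s} -> {marking}")
```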
Distributed collaborative environments for 21st century modeling and simulation
Author(s):
William K. McQuay
Distributed collaboration is an emerging technology that will significantly change how modeling and simulation is employed in 21st century organizations. Modeling and simulation (M&S) is already an integral part of how many organizations conduct business and, in the future, will continue to spread throughout government and industry enterprises and across many domains from research and development to logistics to training to operations. This paper reviews research that is focusing on the open standards agent-based framework, product and process modeling, structural architecture, and the integration technologies - the glue to integrate the software components. A distributed collaborative environment is the underlying infrastructure that makes communication between diverse simulations and other assets possible and manages the overall flow of a simulation based experiment. The AFRL Collaborative Environment concept will foster a major cultural change in how the acquisition, training, and operational communities employ M&S.
Collaboration forum: an innovation in information exchange for enabling technologies
Author(s):
Mark M. Stephenson
Collaboration technologies represent a new approach for performing distributed experiments using multiple and disparate simulations. While collaboration technologies provide potential solutions, there is also a problem in understanding the capabilities of the many technologies in existence, technologies under development, and new concepts. The problem now is keeping up with the rapidly evolving products and concepts within the collaboration community. A new non-commercial web site (www.CollaborationForum.org) has just come on-line to support the collaboration community. This unique web site is designed exclusively by and for the collaboration community. The web site has three primary goals: 1) to be the portal for all information about electronic collaboration - for example, the site includes links to most other related web sites, lists of conferences, and a list of collaboration tools; 2) to promote collaboration as a way of doing business - the web site will educate industry and the Department of Defense on the tremendous gains that can be realized through proper implementation of electronic collaboration; and 3) to support organizations in the development and incorporation of collaboration technology into their enterprises, which may take a variety of forms such as learning from others who have used and commented on a collaboration tool, linking tool vendors with collaboration experts, on-line teaching, and more. The web site itself is a collaboration. Most of the sections within the site offer the user the opportunity to participate by providing their knowledge, experience, and opinions. The site also offers on-line discussion forums where users can discuss a variety of different topics in a free-form discussion format. Topics range from getting started in collaboration to advanced collaboration science research. Get on-line and join us in this collaboration of collaborators.
Creation and usage of collaborative workflow templates in distributed simulation
Author(s):
Jerome Reaper
Collaborative technologies are an innovative area of endeavor that allows engineering teams to define, integrate, and conduct distributed simulation experiments as part of a structured, repeatable process. Workflow techniques can be employed to capture, and frequently automate, the internal processes and data flow necessary to answer questions within a wide variety of application domains. Workflow implementations can be constructed in a generalized fashion to provide a working process template that addresses a focused topic area. These templates define the basic scope and tenets of an experimental domain, as well as any required model sets, and allow extensive exploration within the envelope of that scope. One such template was constructed and used to answer questions postulated by the Global Awareness Virtual Test Bed (GAVTB). The domain for this template involved a study of the effects of information superiority on the prosecution of time-critical targets (TCTs). This experiment and Workflow template are used as an example case to highlight the approach and application of collaborative techniques in developing Workflow templates addressing multiple levels of Distributed Simulation.
SPEEDES: a brief overview
Author(s):
Christopher A. Bailey;
Robert M. McGraw;
Jeffrey S. Steinman;
Jennifer Wong
This paper provides an overview of each of the layers contained in the SPEEDES architecture. SPEEDES is a simulation framework that promotes interoperability, portability, efficiency, flexibility, and maintainability for High Performance Computing applications. Specifically, SPEEDES targets parallel and distributed platforms via its advanced time management schemes and shared memory communications structures. SPEEDES currently supports a large user base centered in the DOD simulation community. This paper describes several of the layers and features of the SPEEDES Simulation Framework. In addition, this paper discusses some of the most recent advances to the SPEEDES framework including its Federation Object (FO) System and its support for HLA via the SPEEDES-HLA Gateway.
Automated parametric execution and documentation for large-scale simulations
Author(s):
Robert L. Kelsey;
Keith R. Bisset;
Robert B. Webster
A language has been created to facilitate the automatic execution of simulations for purposes of enabling parametric study and test and evaluation. Its function is similar in nature to a job-control language, but more capability is provided in that the language extends the notion of literate programming to job control. Interwoven markup tags self-document and define the job-control process. The language works in tandem with another language used to describe physical systems. Both languages are implemented in the Extensible Markup Language (XML). A user describes a physical system for simulation and then creates a set of instructions for automatic execution of the simulation. Support routines merge the instructions with the physical-system description, execute the simulation the specified number of times, gather the output data, and document the process and output for the user. The language enables the guided exploration of a parameter space and can be used for simulations that must determine optimal solutions to particular problems. It is general enough that it can be used with any simulation input files that are described using XML. XML is shown to be useful as a description language, an interchange language, and a self-documenting language.
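A sketch of the idea, with hypothetical tag names and a stubbed simulation call rather than the authors' actual language: an XML instruction document declares a parameter sweep, and a small driver merges each combination into a run, executes it, and records the documented results.

```python
import itertools
import xml.etree.ElementTree as ET

# Hypothetical instruction document: sweep two parameters of a simulation.
INSTRUCTIONS = """
<execution simulation="blast_model">
  <sweep parameter="yield_kt" values="1,5,10"/>
  <sweep parameter="standoff_m" values="100,500"/>
</execution>
"""

def run_simulation(name, params):
    """Stub for the real simulation call; returns a fake scalar result."""
    return sum(float(v) for v in params.values())

root = ET.fromstring(INSTRUCTIONS)
sweeps = [(s.get("parameter"), s.get("values").split(","))
          for s in root.findall("sweep")]
names = [n for n, _ in sweeps]

log = ET.Element("results", simulation=root.get("simulation"))
for combo in itertools.product(*(vals for _, vals in sweeps)):
    params = dict(zip(names, combo))
    value = run_simulation(root.get("simulation"), params)
    ET.SubElement(log, "run", attrib={**params, "output": str(value)})

print(ET.tostring(log, encoding="unicode"))
```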
MATLAB/Simulink analytic radar modeling environment
Author(s):
Bruce L. Esken;
Brian L. Clayton
Analytic radar models are simulations based on abstract representations of the radar, the RF environment through which radar signals propagate, and the reflections produced by targets, clutter, and multipath. These models have traditionally been developed in FORTRAN and have evolved over the last 20 years into efficient and well-accepted codes. However, current models are limited in two primary areas. First, by the nature of algorithm-based analytical models, they can be difficult to understand by non-programmers and equally difficult to modify or extend. Second, there is strong interest in re-using these models to support higher-level weapon-system and mission-level simulations. To address these issues, a model development approach has been demonstrated which utilizes the MATLAB/Simulink graphical development environment. Because the MATLAB/Simulink environment graphically represents model algorithms - thus providing visibility into the model - algorithms can be easily analyzed and modified by engineers and analysts with limited software skills. In addition, software tools have been created that provide for the automatic code generation of C++ objects. These objects are created with well-defined interfaces enabling them to be used by modeling architectures external to the MATLAB/Simulink environment. The approach utilized is generic and can be extended to other engineering fields.
eSim: a software architecture for Web-enabled simulation
Author(s):
Nazareth S. Bedrossian;
Jiann-Woei Jang;
Joe McManis;
Jeremy Tempelton
eSim was developed by Draper-Houston to provide a distributed analysis and simulation capability. It is a concept and architecture for Web-enabled simulation. It utilizes a Client/Server construct to Web-enable any simulation. First generation eSim is a general purpose simulation server. With eSim, any simulation can be executed, input parameters entered, and results viewed either in text format, graphically or via animation from a standard Web browser. Additional software and modifications to existing simulations are not required. eSimi provides an interactive capability which allows changing input parameters and viewing corresponding results during the simulation. It also provides the capability to run the simulation in real-time mode. Examples are used to illustrate eSim and eSimi capabilities.
Simulation of on-the-move communications: issues and answers
Author(s):
Thomas C. Fall;
Roger Chase
Most of the protocols supporting mobile communications systems that have been deployed in the commercial world depend on there being a network of static base stations, linked to a trunk line, such that the user's radio node is always one wireless hop away from a base station. However, in military communications systems, and in some commercial applications, everything can be dynamic - there will not be a base station within one wireless hop of every user. Thus, an important arena of wireless communications applications is evolving to a new kind of network in which trunking will be done through subnets consisting of strings of users' nodes linked together on an ad hoc basis. As the nodes move and some trunk links break, this ad hoc network will reconfigure itself and another subnet will be formed to move the trunk traffic. The complexity of these new on-the-move (OTM) networks presents new challenges for modeling and simulation. We explore issues associated with OTM networks in the context of evolving military network requirements. We describe a modeling and analysis approach that modularizes the simulation problem into two layers: the traffic layer and the network management layer. A given traffic scenario generates traffic that is presented to each of the candidate network managers, and their performances are compared. This is done for several scenarios that span various dimensions of possible traffic and for each proposed network design solution. The result of this process is a comparative evaluation over the full range of relevant scenarios, which provides the data foundation needed to select the best design approach.
Problem signatures from enhanced vector autoregressive modeling
Author(s):
Bruno R. Andriamanalimanana;
Saumen S. Sengupta
The work reported in this paper concerns the enhancement of multivariate autoregressive (AR) models with geometric shape-analysis data and stochastic causal relations. The study aims at producing numerical signatures characterizing operating problems from multivariate time series of data collected in an application and operating-environment domain. Since the information content of an AR model does not appear sufficient to characterize observed vector values fully, both geometric and stochastic modeling techniques are applied to refine causal inferences further. The specific application domain used for this study is real-time network traffic monitoring; however, other domains utilizing vector models might benefit as well. A partial Java implementation is being used for experimentation.
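The basic multivariate AR building block can be sketched as a least-squares fit of an AR(1) coefficient matrix; the residuals are the raw material that the geometric and stochastic refinements would operate on. The example below uses synthetic data and numpy and is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2-D "traffic" series generated by a known AR(1) process.
A_true = np.array([[0.8, 0.1],
                   [-0.2, 0.7]])
x = np.zeros((500, 2))
for t in range(1, 500):
    x[t] = A_true @ x[t - 1] + rng.normal(scale=0.1, size=2)

# Least-squares estimate of the AR(1) coefficient matrix: x_t ~ A x_{t-1}.
X_prev, X_next = x[:-1], x[1:]
B, *_ = np.linalg.lstsq(X_prev, X_next, rcond=None)
A_hat = B.T

residuals = X_next - X_prev @ A_hat.T
print("estimated A:\n", np.round(A_hat, 2))
print("residual std per channel:", residuals.std(axis=0))
```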
Domain-size constraint on real-time model abstractions
Author(s):
Saumen S. Sengupta;
Bruno R. Andriamanalimanana
A network domain is delimited by the visibility of active nodes from a controller or observer. The events shaping network factors may affect observation considerably. Accordingly, owing to network congestion and sudden changes in resource availability, causal events may lose causal polarity and event bundles may appear slack at the observation post. Such nodes are then beyond observation and control. Even though they may appear to participate like any other regular nodes, their presence may affect real-time model abstraction processes. Highly dense domains may generate model change points at a faster rate than the observer can process, affecting the model abstraction process considerably. In this paper, a framework is explored to articulate the manifold event possibilities that constrain node visibility, and hence domain size. A preliminary optimization model is presented to capture the limits of the model abstraction process as a function of hop count.
Time-stepped simulation of queueing systems
Author(s):
Yujing Wu;
Weibo Gong
This paper discusses some fundamental issues in time-stepped simulation, a resolution-adjustable simulation methodology. We first analyze the simulation errors for a simple queueing system. We then study the impact of the source-traffic statistics on simulation performance in a more general setting.
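A minimal illustration of the error source, under assumed parameters: a time-stepped M/M/1 queue in which arrivals and service completions are aggregated within each step, so the estimated mean number in system drifts from the analytic value as the step size grows.

```python
import math
import random

def timestepped_mm1(lam=0.8, mu=1.0, delta=0.5, T=20_000, seed=2):
    """Time-stepped M/M/1 approximation: within each step of length delta,
    Poisson arrivals are added and at most the Poisson number of service
    completions is removed.  Coarser delta -> larger aggregation error."""
    rng = random.Random(seed)

    def poisson(rate):
        # Knuth's method; fine for the small per-step rates used here.
        L, k, p = math.exp(-rate), 0, 1.0
        while True:
            p *= rng.random()
            if p <= L:
                return k
            k += 1

    q, area = 0, 0.0
    for _ in range(int(T / delta)):
        q += poisson(lam * delta)
        q -= min(q, poisson(mu * delta))
        area += q * delta
    return area / T

for delta in (1.0, 0.2, 0.05):
    print(f"delta={delta:5.2f}  mean number in system ~ "
          f"{timestepped_mm1(delta=delta):.2f}")
print("M/M/1 theory:", 0.8 / (1 - 0.8))
```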
Developing models and simulations from a life-cycle point of view
Author(s):
Wayne Zandbergen;
Millard Barger
When designing models and simulations, adopting a life-cycle perspective at the outset is the most effective and efficient course of action for both developer and end-user. Experience in software programs for the defense and commercial sectors reveals that change is a constant. This is so especially for the analytic simulations that, if proven useful, will see multiple applications over their lifetimes. The inevitable shifts in analytical focus exert pressure for the tool to adapt if it is to remain useful and relevant. Accommodating this evolution in the past has been difficult and expensive. In most cases, costs after initial operational capability are predominant in the simulation life cycle. This is particularly true for large-scale simulations, as complexity grows exponentially with size. Accommodating this change demands a software design concept that provides flexibility and modularity as well as a software implementation approach that loosely couples components and elements to make adaptation possible and practical. One approach, called the Common Analytical Simulation Architecture (CASA), is examined as a contemporary software design and development paradigm for military M&S that employs best commercial practices to effect in this regard. Simulations constructed in this manner are not only cost-effective in and of themselves, but can extend significant savings to other simulation efforts through the practical re-use of infrastructure. Consequently, it behooves all parties to such programs to recognize the inevitability of change and to employ a design and development paradigm that anticipates and accommodates the need to evolve.
Economics of simulation task force
Author(s):
Steven C. Gordon
It is both logical and appropriate for decision-makers to ask for ways to judge the value of simulation. Often, the request is even more pointed than just wanting a report on the value of simulation, and specifics on the economics of simulation are requested. Clearly, undertaking to answer questions about the economics of simulation will be critical to building an understanding of how to spend future marginal National Defense dollars. As an example, one can evaluate the economics of simulation where it supports our ability to develop, build, and test new weapon systems. Here, historically derived returns on investment, cost avoidance, cycle time reductions, and lifecycle cost savings have been documented and warrant further investigation. However, there is a larger area of use for simulation where judging its value must go beyond economics. Simulation, in most uses, has a value (or benefit or impact) beyond cost savings, and most efforts to understand the economics of simulation really intend to include the more general topic of the value of simulation. The broader question of the value of simulation will be tackled because simulation must prove its worth. If it is adequately funded and intelligently used, simulation will save valuable national resources and improve readiness. A task force of volunteers is now looking at the economics (benefits, value, impact) of simulation, and this paper seeks to provide an overview of the state of understanding of this topic and solicit volunteers to join this task force effort.
Solving system integration and interoperability problems using a model reference systems engineering framework
Author(s):
Mahmoud A. Makhlouf
This paper presents a model-reference systems engineering framework, which has been applied to a number of ESC projects. The framework provides an architecture-driven systems engineering process supported by a tool kit. This kit is built incrementally using an integrated set of commercial and government-developed tools. These tools include project management, systems engineering, military worth-analysis, and enterprise collaboration tools. Products developed using these tools enable the specification and visualization of an executable model of the integrated system architecture as it evolves from a low-fidelity concept into a high-fidelity system model. This enables end users of system products, system designers, and decision-makers to perform what-if analyses on system design alternatives before making costly final system acquisition decisions.
Extending simulation-based acquisition (SBA) to the warfighter with the Air Force Joint Synthetic Battlespace (JSB-AF)
Author(s):
Phil Faye;
Emily B. Andrew;
Jayson Lee
The Air Force has vectored in a new direction to expand its investment in advanced simulation technologies to improve our readiness, lower costs, and dominate the battles of tomorrow. One of the critical initiatives in this direction is the Joint Synthetic Battlespace (JSB). The JSB will provide an integrated Modeling and Simulation (M&S) environment that brings together analysis, training, and simulations into a coherent whole. The ultimate goal is to develop a JSB simulation capability that will provide a new level of realism in synthetic mission and battlespace environments. This capability will be used to evaluate not only specific system characteristics but also the associated tactics and procedures. The feature that distinguishes the JSB will be its ability to realistically represent the real-world mission environment and provide the warfighter with real-time feedback on a system's expected performance. This unprecedented level of realism and response will enable the warfighter to evaluate mission effectiveness and conduct course-of-action analyses. At the same time, the JSB will increase the acquisition community's ability to build or modify systems to meet users' needs and expectations by making the warfighter the focus and a direct participant in the acquisition process. As a detailed engineering and trades tool, the JSB will provide a completely scaleable environment for controlled execution of experiments to support analyses that require repeatability and controlled variations in the simulated environment. The government will also be able to ensure the configuration, control, and consistency of the JSB environment, while the users develop their own plans and scenarios for their analyses. This will provide a standardized synthetic environment for acquisition programs that can be used as either a distributed or stand-alone application.
JIMM: the next step for mission-level models
Author(s):
Jamieson Gump;
Robert G. Kurker;
Joseph P. Nalepka
The Simulation Based Acquisition (SBA) process is one in which the planning, design, and test of a weapon system or other product is done through the more effective use of modeling and simulation, information technology, and process improvement. This process results in a product that is produced faster, cheaper, and more reliably than its predecessors. Because the SBA process requires realistic and detailed simulation conditions, it was necessary to develop a simulation tool that would provide a simulation environment acceptable for doing SBA analysis. The Joint Integrated Mission Model (JIMM) was created to help define and meet the analysis, test and evaluation, and training requirements of a Department of Defense program utilizing SBA. Through its generic way of representing simulation entities, its data analysis capability, and its robust configuration management process, JIMM can be used to support a wide range of simulation applications as both a constructive and a virtual simulation tool. JIMM is a Mission Level Model (MLM). An MLM is capable of evaluating the effectiveness and survivability of a composite force of air and space systems executing operational objectives in a specific scenario against an integrated air and space defense system. Because MLMs are useful for assessing a system's performance in a realistic, integrated threat environment, they are key to implementing the SBA process. JIMM is a merger of the capabilities of one legacy model, the Suppressor MLM, into another, the Simulated Warfare Environment Generator (SWEG) MLM. By creating a more capable MLM, JIMM will not only be a tool to support the SBA initiative, but could also provide the framework for the next generation of MLMs.
Mixed-resolution modeling of perceptions in the joint warfare system
Author(s):
David McNamara
Faithfully modeling high-level situation awareness is an important aspect of mixed-resolution modeling in military simulations. Our work has focused on developing techniques for identifying and tracking aggregate unit perceptions, composed of several sub-components, of land forces simulated within the Joint Warfare System (JWARS). One of the key features of JWARS is that it separates perception from ground truth: the information that is provided to one simulated side is derived only from reports generated by the sensor systems. In JWARS, the sensors provide high-resolution information on individual battle-space entities, but little information on how groups of these sub-units are organized into larger, aggregate units. Our approach utilizes a wide range of distinct computational approaches including Bayesian statistical analysis, group target tracking, and spatial analysis/clustering. Each of these approaches, within the overarching model, incorporates known doctrinal information on the force structure and tactics of the simulated force. This aggregation model provides a mechanism for integrating the numerous high-resolution sensor reports into a consistent lower-resolution description of the simulated land forces in the battlespace.
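One ingredient, the spatial clustering of entity reports into candidate aggregate units, can be sketched with a simple distance-threshold (single-linkage) grouping; the report format and threshold below are assumptions, not the JWARS implementation.

```python
from math import hypot

# Hypothetical sensor reports: (entity_id, x_km, y_km)
reports = [(1, 0.2, 0.1), (2, 0.5, 0.4), (3, 0.9, 0.2),
           (4, 10.1, 9.8), (5, 10.4, 10.2), (6, 25.0, 3.0)]

def aggregate(reports, link_km=2.0):
    """Single-linkage grouping: reports closer than link_km end up in the
    same candidate aggregate unit."""
    unassigned = list(reports)
    groups = []
    while unassigned:
        group = [unassigned.pop()]
        changed = True
        while changed:
            changed = False
            for r in list(unassigned):
                if any(hypot(r[1] - g[1], r[2] - g[2]) < link_km for g in group):
                    group.append(r)
                    unassigned.remove(r)
                    changed = True
        groups.append(group)
    return groups

for g in aggregate(reports):
    cx = sum(r[1] for r in g) / len(g)
    cy = sum(r[2] for r in g) / len(g)
    print(f"aggregate of {len(g)} entities near ({cx:.1f}, {cy:.1f}) km")
```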
Use of modern control theory in military command and control
Author(s):
Timothy E. Busch
This paper discusses the use of modern control theoretic approaches in military command and control. The military enterprise is a highly dynamic and nonlinear environment. The desire on the part of military commanders to operate at faster operational tempos while still maintaining a stable and robust system, naturally leads to the consideration of a control theoretic approach to providing decision aids. I will present a brief history of the science of command and control of military forces and discuss how modern control theory might be applied to air operations.
Modeling and agile control for joint air operation environment
Author(s):
Christos G. Cassandras;
Kagan Gokbayrak;
David A. Castanon;
Jerry M. Wohletz;
Michael L. Curry;
Michael Gates
A key component of a Joint Air Operation (JAO) environment is the planning and dynamic control of missions in the presence of uncertainties. This involves the assignment of resources (e.g., different aircraft types) to targets while taking into account and anticipating the effect of random future events and, subsequently, dynamic control in response to various controllable and uncontrollable events as missions are executed in a hostile and rapidly changing setting. The objective is to maximize the reward associated with targets while minimizing the loss of resources. In this paper, we first formulate the problem of optimal mission assignment and identify the complexities involved due to its combinatorial and stochastic characteristics. We then describe a discrete-event simulation tool developed to model the JAO environment and all of its dynamic and stochastic elements and to provide a testbed for several methods we are developing to solve the problem of agile mission control. We describe some of these methods, including approximate dynamic programming using rollout algorithms and optimal resource allocation schemes, and present some numerical results.
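The paper's methods are rollout and approximate dynamic programming; as a hedged baseline sketch of the assignment core, the following greedy heuristic repeatedly commits the aircraft-target pairing with the best expected reward net of expected resource loss. All values are notional.

```python
import itertools

# Notional data: target values, kill probabilities and aircraft loss risk.
aircraft = ["F-16 #1", "F-16 #2", "B-1 #1"]
targets = {"SAM site": 80.0, "C2 node": 60.0, "bridge": 40.0}
p_kill = {("F-16 #1", "SAM site"): 0.6, ("F-16 #1", "C2 node"): 0.7,
          ("F-16 #1", "bridge"): 0.8, ("F-16 #2", "SAM site"): 0.6,
          ("F-16 #2", "C2 node"): 0.7, ("F-16 #2", "bridge"): 0.8,
          ("B-1 #1", "SAM site"): 0.8, ("B-1 #1", "C2 node"): 0.9,
          ("B-1 #1", "bridge"): 0.95}
p_loss = {a: 0.05 for a in aircraft}
aircraft_value = {"F-16 #1": 30.0, "F-16 #2": 30.0, "B-1 #1": 120.0}

def expected_net(a, t):
    return targets[t] * p_kill[(a, t)] - aircraft_value[a] * p_loss[a]

def greedy_assignment():
    """Repeatedly commit the remaining pairing with the best expected
    reward minus expected resource loss (one sortie per aircraft/target)."""
    free_a, free_t, plan = set(aircraft), set(targets), []
    while free_a and free_t:
        a, t = max(itertools.product(free_a, free_t),
                   key=lambda p: expected_net(*p))
        plan.append((a, t, expected_net(a, t)))
        free_a.discard(a)
        free_t.discard(t)
    return plan

for a, t, v in greedy_assignment():
    print(f"{a:8s} -> {t:8s}  expected net value {v:5.1f}")
```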
Simulation and modeling for military air operations
Author(s):
Ruth D. Kreichauf;
Saad Bedros;
Yusuf Ateskan;
Joao Hespanha;
Hakan Kizilocak
The Joint Forces Air Component Commander (JFACC) in military air operations controls the allocation of resources (wings, squadrons, air defense systems, AWACS) to different geographical locations in the theater of operations. The JFACC mission is to define a sequence of tasks for the aerospace systems at each location and to provide feedback control for the execution of these tasks in the presence of uncertainties and a hostile enemy. Honeywell Labs has been developing an innovative method for control of military air operations. The novel model predictive control (MPC) method extends the models and optimization algorithms utilized in traditional model predictive control systems. The enhancements include a tasking controller and, in a joint effort with USC, a probabilistic threat/survival map indicating high-threat regions for aircraft and suggesting optimal routes to avoid these regions.
A simulation/modeling environment using object-oriented methodologies has been developed to serve as an aide to demonstrate the value of MPC and facilitate its development. The simulation/modeling environment is based on an open architecture that enables the integration, evaluation, and implementation of different control approaches. The simulation offers a graphical user interface displaying the battlefield, the control performance, and a probability map displaying high threat regions. This paper describes the features of the different control approaches and their integration into the simulation environment.
Predictive models of battle dynamics
Author(s):
Jan Jelinek
The application of control and game theories to improve battle planning and execution requires models that allow military strategists and commanders to reliably predict the expected outcomes of various alternatives over a long horizon into the future. We have developed probabilistic battle dynamics models, whose building blocks in the form of Markov chains are derived from first principles, and applied them successfully in the design of the Model Predictive Task Commander package. This paper introduces basic concepts of our modeling approach and explains the probability distributions needed to compute the transition probabilities of the Markov chains.
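A minimal sketch of the Markov-chain building block, with an invented three-state (intact/damaged/destroyed) per-strike transition matrix rather than the paper's first-principles derivation: the battle-state distribution is propagated by repeated multiplication.

```python
import numpy as np

# Notional per-strike transition matrix over [intact, damaged, destroyed].
P = np.array([[0.55, 0.30, 0.15],
              [0.00, 0.60, 0.40],
              [0.00, 0.00, 1.00]])

state = np.array([1.0, 0.0, 0.0])      # target starts intact
for strike in range(1, 6):
    state = state @ P                   # Chapman-Kolmogorov propagation
    print(f"after strike {strike}: intact={state[0]:.3f} "
          f"damaged={state[1]:.3f} destroyed={state[2]:.3f}")
```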
Modeling effects-based operations in support of war games
Author(s):
Lee William Wagenhals;
Alexander H. Levis
The problem of planning, executing, and assessing Effects-Based Operations (EBO) requires the synthesis of a number of modeling approaches. A prototype system to assist in developing Courses of Action (COAs) for Effects-Based Operations and evaluating them in terms of the probability of achieving the desired effects has been developed and is called CAESAR II/EB. Two of the key components of the system are: (a) an Influence net modeler such as the Campaign Assessment Tool (CAT) developed at AFRL/IF, and (b) an executable model generator and simulator based on the software implementation of Colored Petri nets called Design/CPN. The executable model, named COA/EB, is used to simulate the COAs and collect data on Measures of Performance (MOPs). One particular output is the probability of achieving the desired effect as a function of time. Probability profiles can be compared to determine the more effective COAs. This version of CAESAR II/EB was used successfully in August 2000 at the Naval War College in the war game Global 2000. Experiences with building and using the models, both prior to the war game and during the war game to answer topical questions as they arose, are described.
Applying nonlinear multiloop simulation to effects-based operations
Author(s):
Corey L. Lofdahl
Effects-Based Operations (EBO) consider an adversary nation as a system, which can be decomposed into (1) National Elements of Value (NEVs), (2) target systems, and (3) target sets. This approach to studying EBO is founded on the insight that the effects of an aerospace campaign, both physical and behavioral, can be understood as the dynamic behaviors of a complex system. This study proposes addressing the EBO problem domain in terms of a nonlinear, multi-feedback-loop model, which can be analyzed using commercial system dynamics modeling and simulation tools. The study aims to advance targeting policy in three sections covering (1) an overview of concepts, (2) next steps, and (3) risks and alternatives.
Automated IPB in support of wargaming and COA analysis and comparison
Author(s):
Charles H. Mitchell;
James K. Williams
In the area of Modeling and Simulation for Wargaming and Exercise Support, Intelligence Preparation of the Battlespace (IPB) is a flexible process that assists commanders and their staffs in planning and executing campaigns and missions. The Automated Assistance with IPB (A2IPB) tool, being developed for the USAF, enables the leveraging of information about the adversary's capabilities, potential centers of gravity, and possible courses of action (COAs) across all dimensions of the battlespace. Operational staffs depend heavily on IPB products prepared during the analysis of the adversary situation and the evaluation of the battlespace's effects in order to formulate initial friendly force dispositions and schemes of maneuver. Wargaming is a conscious attempt to visualize the flow of a military operation, given friendly strengths and dispositions, adversary assets and possible COAs, and a specific battlespace environment. It attempts to foresee the action, reaction, and counteraction dynamics between a pair of friendly and adversary COAs. A2IPB has the potential to allow the commander and his staff to conduct a wargame from within the software. Friendly COAs will be graphically portrayed against the adversary's most likely COA and then against the most dangerous COA. By having all relevant information loaded into the same database, conduct of the wargame is facilitated, data modification is easier, and the ability to play back a particular set of COAs multiple times is provided. Using A2IPB facilitates the commander's decision on the COA believed to be the most advantageous to the operation. Using the results of wargaming associated with that COA, the staff prepares OPORDs that implement the commander's decision.
Decision investigation and support environment (DISE)
Author(s):
Michael J. VonPlinsky;
Pete Johnson;
Ed Crowder
The Decision Investigation and Support Environment (DISE) is a Bayesian network (BN) based modeling and simulation of the target nomination and aircraft tasking decision process. FTI has developed two BNs to model these processes, incorporating aircraft, target, and overall mission priorities from the Air Operations Center (AOC) and the mission planners/command staff.
DISE operates in event-driven interaction with FTI's AOC model and is triggered from within the Time Critical Target (TCT) Operations cell. As new target detections are received by the AOC from off-board ISR sources and processed by the Automatic Target Recognition (ATR) module in the AOC, DISE is called to determine whether the target should be prosecuted and, if so, which of the available aircraft should be tasked to attack it. A range of decision criteria, with priorities established off-line and input into the tool, are associated with this process, including factors such as the following (a simple weighted-scoring sketch follows this list):
* Fuel Level - amount of fuel in aircraft
* Type of Weapon - available weapons on board aircraft
* Probability of Survival - depends on the type of TST, time criticality and other factors
* Potential Collateral Damage - amount of damage incurred on TST surroundings
* Time Criticality of TST - how "critical" it is to attack the target depending on its launch status
* Time to Target - aircraft's distance (in minutes) from the TST
* Current Mission Priority - priority of the mission to which the aircraft is currently assigned
* TST Mission Priority - determined when the target is originally nominated
* Possible Reassignment - represents whether it is even possible to reassign the aircraft
* Aircraft Re-tasking Availability - represents any factor not taken into account by the model, including commander override.
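A full Bayesian-network treatment is beyond the scope of an abstract, but a hypothetical weighted-scoring reduction of the criteria above conveys the flavor of the retasking decision; the weights, scores, and candidate names below are invented for illustration and do not reflect the actual DISE networks.

```python
# Hypothetical 0-1 scores for two candidate aircraft against a pop-up TST;
# higher is better for retasking.  Weights are illustrative only.
weights = {"fuel": 0.10, "weapon_match": 0.20, "p_survival": 0.15,
           "collateral_risk": 0.10, "time_criticality": 0.15,
           "time_to_target": 0.15, "current_mission_priority": 0.10,
           "reassignment_possible": 0.05}

candidates = {
    "Viper 11": {"fuel": 0.8, "weapon_match": 0.9, "p_survival": 0.7,
                 "collateral_risk": 0.8, "time_criticality": 0.9,
                 "time_to_target": 0.9, "current_mission_priority": 0.4,
                 "reassignment_possible": 1.0},
    "Bone 21":  {"fuel": 0.6, "weapon_match": 0.7, "p_survival": 0.8,
                 "collateral_risk": 0.5, "time_criticality": 0.9,
                 "time_to_target": 0.5, "current_mission_priority": 0.2,
                 "reassignment_possible": 1.0},
}

def retask_score(factors):
    return sum(weights[k] * factors[k] for k in weights)

best = max(candidates, key=lambda name: retask_score(candidates[name]))
for name, factors in candidates.items():
    print(f"{name}: score {retask_score(factors):.2f}")
print("recommend retasking:", best)
```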
Understanding EBO: model abstraction and achieving a favorable endstate
Author(s):
Alan B. Evans
The planning, execution, and assessment of Effects-Based Operations present many fundamental modeling problems. Furthermore, an understanding of adversary reactions and of the causal linkages to our actions is a critical part of the problem of understanding whether objectives have been met. This paper reports on some recent work at ALPHATECH aimed at developing technology that will help enable understanding of Effects-Based Operations, and postulates some directions for future research.
Under the DARPA Endstate program, ALPHATECH has been examining the problem of understanding the effects of operations on multiple, complex, and highly interconnected networks within a nation-state's infrastructure. The unifying technical concept of much of the Endstate work is model abstraction. Endstate has considered two basic forms of model abstraction in connecting models of different systems. The first is reduced-order modeling, an inductive approach to modeling sub-problems in variable spaces of reduced dimensionality. The second form of model abstraction is deductive, with geospatial or timescale decomposition leading to hierarchies of related models. Model-based abstraction of network facilities and links, together with the constraining physics and the priorities and objectives of embedded controllers or coordinators, is studied for networks of interest.
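The inductive, reduced-order-modeling form of abstraction can be sketched with a proper-orthogonal-decomposition style projection: collect snapshots of a high-dimensional state, take an SVD, and keep the few dominant modes. The data below are synthetic and the thresholds are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "snapshots": 200 observations of a 50-dimensional system state
# that actually lives near a 3-dimensional subspace plus noise.
latent = rng.normal(size=(200, 3))
mixing = rng.normal(size=(3, 50))
snapshots = latent @ mixing + 0.05 * rng.normal(size=(200, 50))

# Proper-orthogonal-decomposition style reduction via SVD.
mean = snapshots.mean(axis=0)
U, s, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.99)) + 1      # modes capturing 99% energy
basis = Vt[:r]                                  # reduced-order basis

reduced = (snapshots - mean) @ basis.T          # low-dimensional coordinates
reconstructed = reduced @ basis + mean
err = np.linalg.norm(snapshots - reconstructed) / np.linalg.norm(snapshots)
print(f"retained {r} of 50 modes, relative reconstruction error {err:.3f}")
```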
Knowledge focus via software agents
Author(s):
Donald E. Henager
The essence of military Command and Control (C2) is making knowledge intensive decisions in a limited amount of time using uncertain, incorrect, or outdated information. It is essential to provide tools to decision-makers that provide:
* Management of friendly forces by treating the "friendly resources as a system".
* Rapid assessment of the effects of military actions against the "enemy as a system".
* Assessment of how an enemy should, can, and could react to friendly military activities.
Software agents in the form of mission agents, target agents, maintenance agents, and logistics agents can meet this information challenge. The role of each agent is to know all the details about its assigned mission, target, maintenance, or logistics entity. The Mission Agent would fight for mission resources based on the mission priority and analyze the effect that a proposed mission's results would have on the enemy. The Target Agent (TA) communicates with other targets to determine its role in the system of targets. A system of TAs would be able to inform a planner or analyst of the status of a system of targets, the effect of that status, and the effect of attacks on that system. The system of TAs would also be able to analyze possible enemy reactions to attack by determining ways to minimize the effect of attack, such as rerouting traffic or using deception. The Maintenance Agent would schedule maintenance events and notify the maintenance unit. The Logistics Agent would manage shipment and delivery of supplies to maintain appropriate levels of weapons, fuel, and spare parts.
The central idea underlying this use of software agents is knowledge focus. Software agents are created automatically to focus their attention on individual real-world entities (e.g., missions, targets) and view the world from that entity's perspective. The agent autonomously monitors the entity, identifies problems/opportunities, formulates solutions, and informs the decision-maker. The agent must be able to communicate to receive and disseminate information and provide the decision-maker with assistance via focused knowledge. The agent must also be able to monitor the state of its own environment and make the decisions necessary to carry out its delegated tasks.
Agents bring three elements to the C2 domain that offer to improve decision-making. First, they provide higher-quality feedback and provide it more often. In doing so, the feedback loop becomes nearly continuous, reducing or eliminating delays in situation updates to decision-makers. Working with the most current information possible improves the control process, thus enabling effects based operations. Second, the agents accept delegation of actions and perform those actions following an established process. Agents' consistent actions reduce the variability of human input and stabilize the control process. Third, through the delegation of actions, agents ensure 100 percent consideration of plan details.
Intelligently interactive combat simulation
Author(s):
Lawrence J. Fogel;
Vincent William Porto;
Steven M. Alexander
To be fully effective, combat simulation must include an intelligently interactive enemy... one that can be calibrated. But human-operated combat simulations are uncalibratable, for we learn during the engagement, there is no average enemy, and we cannot replicate their culture/personality. Rule-based combat simulations (expert systems) are not interactive. They do not take advantage of unexpected mistakes, learn, innovate, or reflect the changing mission/situation. And it is presumed that the enemy does not have a copy of the rules, that the available experts are good enough, that they know why they did what they did, that their combat experience provides a sufficient sample, and that we know how to combine the rules offered by differing experts. Indeed, expert systems become increasingly complex, costly to develop, and brittle. They have face validity but may be misleading. In contrast, intelligently interactive combat simulation is purpose-driven. Each player is given a well-defined mission, reference to the available weapons/platforms, their dynamics, and the sensed environment. Optimal tactics are discovered online and in real time by simulating phenotypic evolution in fast time. The initial behaviors are generated randomly or include hints. The process then learns without instruction. The Valuated State Space Approach provides a convenient way to represent any purpose/mission. Evolutionary programming searches the domain of possible tactics in a highly efficient manner. Coupled together, these provide a basis for cruise missile mission planning and for driving tank warfare simulation. This approach is now being explored to benefit Air Force simulations via a shell that can enhance the original simulation.
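A hedged sketch of the fast-time evolutionary search (not the authors' system): candidate tactics are parameter vectors scored by a stand-in mission-payoff function playing the role of the valuated state space, and evolved by Gaussian mutation plus truncation selection.

```python
import random

random.seed(4)

def mission_payoff(tactic):
    """Stand-in for a fast-time simulation scoring a tactic; a real system
    would evaluate mission-purpose terms (survival, targets killed, timing).
    Here: heading (deg), speed (0-1), evasiveness (0-1), made-up optimum."""
    heading, speed, evade = tactic
    return (-abs(heading - 240.0) / 180.0
            + 1.5 * speed * (1.0 - 0.5 * evade) + 0.8 * evade)

def evolve(pop_size=20, generations=40):
    pop = [[random.uniform(0, 360), random.random(), random.random()]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Each parent produces one mutated offspring (evolutionary programming).
        offspring = [[p[0] + random.gauss(0, 10),
                      min(max(p[1] + random.gauss(0, 0.1), 0), 1),
                      min(max(p[2] + random.gauss(0, 0.1), 0), 1)] for p in pop]
        # Keep the best pop_size individuals from parents plus offspring.
        pop = sorted(pop + offspring, key=mission_payoff, reverse=True)[:pop_size]
    return pop[0]

best = evolve()
print("best tactic (heading, speed, evasiveness):",
      [round(v, 2) for v in best], " payoff:", round(mission_payoff(best), 2))
```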
LG Strategist: your personal chief of staff
Author(s):
Boris Stilman;
Vladimir Yakhnis
Show Abstract
LG STRATEGIST is an advanced software package based on a new type of game theory called Linguistic Geometry. This theory allows us to generate the best war-gaming strategies in real time. Armed with LG STRATEGIST, a commander would obtain hands-on capability to plan missions, respond immediately to crises, run what-if analyses, and monitor the execution of operations at all levels. With little experience, a commander could turn LG STRATEGIST into a friendly tactical/operational advisor or a devil's advocate, into his or her personal chief of staff.
Active objects programming for military autonomous mobile robots software prototyping
Author(s):
Roger F. Cozien
Show Abstract
While designing mobile robots, we believe the prototyping phase is critical; good and clever choices have to be made early. We cannot easily upgrade such robots, and above all, once the robot is on its own, any change to either the software or the physical body becomes very difficult, if not impossible. Thus, a great effort has to be made when prototyping the robot. Furthermore, the programming model matters: if it is not expressive enough, adding all the features needed to give the robot reactiveness and decision-making autonomy becomes very difficult. Moreover, designing and prototyping the on-board software of a reactive robot brings other difficulties. Reactivity is not merely a matter of speed: a reactive system is one able to respond to a wide range of situations whose timing it cannot anticipate. In other words, the robot does not know when a particular situation may occur, what it will be doing at that time, or what its internal state will be. Such a robot must be able to make decisions and act even when it does not have all the contextual information. To do so, we use a computer language named oRis featuring object-oriented and active-object-oriented programming, as well as parallel and dynamic code (the code can be changed during its own execution). This last point is possible because oRis is fully interpreted; however, oRis may also call fully compiled code, as well as Prolog and Java code. An oRis program may be distributed over several computers using TCP/IP network connections. The main issue in this paper is to show how active-object-oriented programming, as a modern extension of object-oriented programming, can help us design autonomous mobile robots. Based on fully parallel software, active-object code allows us to give many features to a robot and to easily resolve conflicts between tasks. Active-object-oriented programming is also very useful in terms of software engineering: inside the code, the separation between the logical parts is explicit and plain. It therefore allows the designer to take only the robot's logical software part, independent of the software testing environment, and put it on the physical robot. Even among the logical parts of the robot software, the separation is substantial, which benefits code engineering, upgrading, and reuse. This kind of approach is, or should be, imposed by the particular constraints that apply to military robots and to any kind of autonomous system acting in hostile, if not truly unknown, environments. These systems must carry out missions on which other systems, and even human lives, depend. That is why we want to take a close look at the on-board software that ensures the robot's autonomy of decision.
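oRis itself is not widely available, so the sketch below shows the general active-object pattern the author describes, in Python for illustration: each object owns its own execution thread and a mailbox, so subsystems run concurrently and exchange messages. The subsystem names and messages are illustrative assumptions, not drawn from the paper.

```python
# Sketch of the active-object pattern (each object owns a thread and a
# mailbox); this illustrates the programming model, not oRis itself.
import queue
import threading
import time


class ActiveObject:
    def __init__(self, name: str):
        self.name = name
        self.mailbox: "queue.Queue[str]" = queue.Queue()
        self._running = True
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def send(self, message: str) -> None:
        self.mailbox.put(message)

    def stop(self) -> None:
        self._running = False

    def _run(self) -> None:
        while self._running:
            try:
                message = self.mailbox.get(timeout=0.1)
            except queue.Empty:
                continue  # no message: autonomous behaviour could run here
            self.handle(message)

    def handle(self, message: str) -> None:
        print(f"{self.name} handling: {message}")


# Usage: two robot subsystems running concurrently and exchanging messages.
navigation = ActiveObject("navigation")
obstacle_sensor = ActiveObject("obstacle-sensor")
obstacle_sensor.send("obstacle detected at bearing 030")
navigation.send("replan route")
time.sleep(0.5)
navigation.stop()
obstacle_sensor.stop()
```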
Assessing the effectiveness of Defensive Aids Suite technology
Author(s):
John L. Rapanotti;
Annie DeMontigny-Leboeuf;
Marc Palmarini;
Andre Cantin
Show Abstract
Modern anti-tank weapons and the requirement for rapid deployment have significantly reduced the quantity and effectiveness of passive armor in protecting land vehicles. This development has led to replacing the main battle tank with a light armored vehicle, with at least the same level of survivability achievable through advances in sensor, computer, and countermeasure technology to detect, identify, and defeat potential threats. The integration of these technologies into a Defensive Aids Suite (DAS) can be designed and analyzed by combining field trial and laboratory data with modeling and simulation. This complementary approach also makes optimal use of available resources and encourages collaboration with other researchers working toward a common goal. The modeling capability can be easily transferred to other researchers in the field by using a quick-prototyping environment such as MATLAB. The code generated from MATLAB will be used for further analysis in an operational research simulator such as ModSAF. Once calibrated against a previous trial, ModSAF will be used to plan future trials. An important feature of ModSAF is the use of scripted input files to plan and implement a fixed battle based on accepted doctrine and tactics. Survivability of a DAS-equipped vehicle can be assessed relative to a basic vehicle without a DAS. In later stages, more complete DAS systems will be analyzed to determine the optimum configuration of the DAS components and the effectiveness of a DAS-equipped vehicle for a particular mission. These concepts and this approach are discussed in the paper.
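The relative survivability comparison mentioned above can be illustrated with a toy Monte Carlo sketch: the same engagement sequence is run for a basic vehicle and a DAS-equipped vehicle, differing only in the probability that threats are detected and defeated. All probabilities below are invented placeholders, not data from the trials or ModSAF runs described in the abstract.

```python
# Toy Monte Carlo comparison of a DAS-equipped vehicle versus a basic vehicle.
# Probabilities and threat counts are invented placeholders for illustration.
import random

def survives(p_detect: float, p_defeat: float,
             p_kill_given_hit: float, n_threats: int = 3) -> bool:
    for _ in range(n_threats):
        intercepted = random.random() < p_detect * p_defeat
        if not intercepted and random.random() < p_kill_given_hit:
            return False
    return True

def survivability(p_detect: float, p_defeat: float, runs: int = 20000) -> float:
    return sum(survives(p_detect, p_defeat, p_kill_given_hit=0.6)
               for _ in range(runs)) / runs

basic = survivability(p_detect=0.0, p_defeat=0.0)  # no DAS: threats never intercepted
das = survivability(p_detect=0.9, p_defeat=0.7)    # notional DAS performance
print(f"Basic vehicle survivability: {basic:.2f}, DAS-equipped: {das:.2f}")
```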
Integration of a V&V smart munition model into OneSAF testbed baseline for simulation and training
Author(s):
Jerrell R. Ballard Jr.
Show Abstract
This paper describes the integration of a verified and validated (V&V) smart munition model for the Army's Hornet sublet into the OneSAF Testbed Baseline for equipment performance simulation, testing, and training. This effort improves the realism of current Hornet behavior in the Testbed by implementing sublet fly-out to model the effects of target type, speed, and environmental conditions on target acquisition. Also addressed are the issues of maintaining the model's V&V status while reducing its fidelity to obtain real-time simulation of the sublet fly-out and target acquisition.
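One common way to reduce fidelity for real-time use, sketched below, is to replace a detailed fly-out with a parametric acquisition probability driven by a few factors such as target speed and visibility. This is a generic illustration of the technique; the functional form and coefficients are assumptions, not the Hornet model's actual behavior.

```python
# Toy reduced-fidelity target-acquisition model of the kind used when a
# detailed fly-out must run in real time: physics replaced by a parametric
# probability. Functional form and coefficients are invented placeholders.
import math
import random

def p_acquire(target_speed_mps: float, visibility_km: float,
              base_p: float = 0.9) -> float:
    # Faster targets and poorer visibility reduce the acquisition probability.
    speed_penalty = math.exp(-target_speed_mps / 40.0)
    visibility_factor = min(1.0, visibility_km / 10.0)
    return base_p * speed_penalty * visibility_factor

def acquired(target_speed_mps: float, visibility_km: float) -> bool:
    # Single Bernoulli draw used in place of a detailed sensor simulation.
    return random.random() < p_acquire(target_speed_mps, visibility_km)

print(f"P(acquire) slow/clear: {p_acquire(5.0, 10.0):.2f}, "
      f"fast/hazy: {p_acquire(20.0, 4.0):.2f}")
```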
Modeling and simulation tools for high-energy laser safety applications
Author(s):
Peter Alan Smith;
David A. Van Veldhuizen;
Kenneth S. Keppler
Show Abstract
As laser systems with increasingly higher energy are developed for military applications, the difficulty of the laser safety problem increases proportionally. The hazard distance for the direct beam can be on the order of thousands of miles, and the potential exists for radiation reflected from the target to be hazardous over long distances. It then becomes impractical to contain the laser beam within the confines of test ranges. This, together with a rapidly changing environment involving a fast-moving laser source and target, has led to the requirement for an integrated modeling and simulation tool to perform the complex series of calculations needed to ensure the safe testing and use of these lasers outdoors. The Laser Range Safety Tool (LRST) is being developed to meet this requirement. LRST predicts laser intensity distributions in the physical space surrounding an illuminated target and provides a highly interactive, graphical visualization of the scenario geometry, scattered intensity, and potential hazard zones. A comprehensive validation and verification program involving measurements during high-energy laser range tests is also being undertaken. The inclusion of models for stochastic processes such as aiming and tracking errors, atmospheric turbulence, and risk criteria is also being considered, to extend the tool to provide quantitative risk assessment data in support of risk management decisions. Finally, as high-energy laser programs move out of the testing phase into training and operational deployment, tools such as these can be enhanced and integrated into real-time mission planning and commanders' decision aids.
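To give a sense of the scale of such hazard distances, the sketch below applies the textbook nominal ocular hazard distance (NOHD) small-source approximation: the range beyond which the diverging beam's irradiance falls below the maximum permissible exposure (MPE). This is a generic safety calculation of the kind a range-safety tool automates, not LRST's actual implementation, and the input values are purely illustrative.

```python
# Textbook NOHD calculation (small-source approximation). Not presented as
# LRST's implementation; input values are illustrative placeholders.
import math

def nohd_cm(power_w: float, divergence_rad: float,
            beam_diameter_cm: float, mpe_w_per_cm2: float) -> float:
    """Distance beyond which the direct beam falls below the MPE."""
    return (math.sqrt(4.0 * power_w / (math.pi * mpe_w_per_cm2))
            - beam_diameter_cm) / divergence_rad

# Illustrative high-energy case: 10 kW beam, 0.1 mrad divergence,
# 10 cm exit aperture, MPE of 1e-3 W/cm^2 (placeholder value).
distance_km = nohd_cm(1.0e4, 1.0e-4, 10.0, 1.0e-3) / 1.0e5
print(f"NOHD ~ {distance_km:.0f} km")
```

Even with these modest placeholder inputs the hazard distance comes out in the hundreds of kilometers, which is why containment within a test range quickly becomes impractical.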
Role of premission testing in the National Missile Defense system
Author(s):
Janice V. Tillman;
Beverly Atkinson
Show Abstract
The purpose of the National Missile Defense (NMD) system is to provide detection, discrimination, engagement, interception, and negation of ballistic missile attacks targeted at the United States (U.S.), including Alaska and Hawaii. This capability is achieved through the integration of weapons, sensors, and a battle management, command, control, and communications (BMC3) system. The NMD mission includes surveillance, warning, cueing, and engagement of threat objects prior to potential impact on U.S. targets. The NMD Acquisition Strategy encompasses an integrated test program using Integrated Ground Tests (IGTs), Integrated Flight Tests (IFTs), Risk Reduction Flights (RRFs), Pre-Mission Tests (PMTs), Command and Control (C2) Simulations, and other specialty tests. The IGTs utilize software-in-the-loop/hardware-in-the-loop (SWIL/HWIL) and digital simulations. The IFTs are conducted with targets launched from Vandenberg Air Force Base (VAFB) and interceptors launched from Kwajalein Missile Range (KMR). The RRFs evaluate NMD BMC3 and NMD sensor functional performance and integration by leveraging planned Peacekeeper and Minuteman III operational test flights and other opportunities without employing the NMD interceptor. The PMTs are nondestructive System-level tests representing the use of NMD Element Test Assets in their IFT configuration and are conducted to reduce risks in achieving the IFT objectives. Specifically, PMTs are used to reduce the integration, interface, and performance risks associated with flight tests, ensuring that, as much as possible, the System is tested without expending a target or an interceptor. This paper examines several critical test planning and analysis functions as they relate to the NMD Integrated Flight Test program and, in particular, to pre-mission testing. Topics discussed include flight-test program planning; pre-test integration activities; and test execution, analysis, and post-flight reconstruction.
Aerospace Toolbox---a flight vehicle design, analysis, simulation, and software development environment: I. An introduction and tutorial
Author(s):
Paul M. Christian;
Randy Wells
Show Abstract
This paper presents a demonstrated approach to significantly reducing the cost and schedule of non-real-time modeling and simulation, real-time HWIL simulation, and embedded code development. The tool and the methodology presented capitalize on a paradigm that has become standard operating procedure in the automotive industry. The tool, known as the Aerospace Toolbox, is based on the MathWorks MATLAB/Simulink framework, a COTS application. Extrapolation of automotive industry data and initial applications in the aerospace industry show that use of the Aerospace Toolbox can make significant contributions to the quest by NASA and other government agencies to meet aggressive cost reduction goals in development programs. Part I of this paper provides a detailed description of the GUI-based Aerospace Toolbox and how it is used in every step of a development program, from quick prototyping of concept developments that leverage built-in point-of-departure simulations through detailed design, analysis, and testing. Attributes addressed include its versatility in modeling 3 to 6 degrees of freedom, its library of flight-test-validated models (including physics, environments, hardware, and error sources), and its built-in Monte Carlo capability. Other topics covered in this part include flight vehicle models and algorithms, and the covariance analysis package, Navigation System Covariance Analysis Tools (NavSCAT). Part II of this paper, to be published at a later date, will conclude with a description of how the Aerospace Toolbox is an integral part of developing embedded code directly from the simulation models by using the MathWorks Real-Time Workshop and optimization tools. It will also address how the Toolbox can be used as a design hub for Internet-based collaborative engineering tools such as NASA's Intelligent Synthesis Environment (ISE) and Lockheed Martin's Interactive Missile Design Environment (IMD).
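The built-in Monte Carlo capability mentioned above follows a familiar pattern: sample the error sources, propagate each sample through a flight model, and collect dispersion statistics. The sketch below shows that pattern generically, in plain Python rather than MATLAB/Simulink; the point-mass ballistic model and error magnitudes are illustrative assumptions, not Aerospace Toolbox content.

```python
# Generic Monte Carlo dispersion study of the kind such toolboxes automate.
# The vacuum point-mass model and error magnitudes are illustrative only.
import math
import random
import statistics

def impact_range_m(speed_mps: float, angle_deg: float) -> float:
    """Vacuum point-mass range for a given launch speed and flight-path angle."""
    angle = math.radians(angle_deg)
    return speed_mps ** 2 * math.sin(2.0 * angle) / 9.81

def monte_carlo(runs: int = 5000) -> tuple:
    ranges = []
    for _ in range(runs):
        # Sample error sources: thrust (speed) and guidance (angle) dispersions.
        speed = random.gauss(300.0, 5.0)  # m/s
        angle = random.gauss(45.0, 0.5)   # deg
        ranges.append(impact_range_m(speed, angle))
    return statistics.mean(ranges), statistics.stdev(ranges)

mean_range, sigma = monte_carlo()
print(f"Mean impact range {mean_range:.0f} m, 1-sigma dispersion {sigma:.0f} m")
```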
ARTEMIS: a high-fidelity NTW system simulation
Author(s):
Ann F. Pollack;
Andreas K. Chrysostomou
Show Abstract
The Navy Theater Wide (NTW) Program is in the concept design stage. As the NTW Mission Technical Direction Agent, the Johns Hopkins University Applied Physics Laboratory (JHU/APL) is responsible for independent evaluation of system design concepts and technical approaches. To support this capability, JHU/APL has developed an integrated end-to-end simulation. The APL Area/Theater Engagement Missile-Ship Simulation - Theater Version (ARTEMIS-T) is built upon existing high-fidelity simulations of the NTW system components, interfaced using the distributed High Level Architecture (HLA) developed by the Defense Modeling and Simulation Office (DMSO). Integration of these high-fidelity component simulations allows dynamic modeling of the closed-loop interactions crucial to an overall system understanding. ARTEMIS provides a tool for use throughout the program life cycle, from requirements definition and design verification to flight test performance prediction to evaluation of new algorithms and technologies within a complete system setting.
Family of systems simulation (FOSSIM): a collaborative approach to FoS modeling and simulation
Author(s):
Debra Wymer;
Ray Washburn;
Phil Colvert;
Dave Cunefare
Show Abstract
Significant advances are being made in the application of Modeling and Simulation (M&S) technologies to support Government initiatives for Simulation Based Acquisition (SBA) and Simulation, Test and Evaluation Process (STEP) in the Army transformation. This paper describes a collaborative approach to Family of Systems (FoS) M&S in use on the FOSSIM program to evaluate critical Theater Air and Missile Defense (TAMD) integration and interoperability issues and to explore opportunities for advanced technology exploitation. This paper provides an overview of the FOSSIM concept and describes the collaborative development and utilization methodology. Key topics discussed include Systems Engineering and Design, Model Engineering and Development, Systems Analysis, and Configuration Management (CM). The paper concludes by summarizing the FOSSIM simulation environment and modeling capabilities.