Proceedings Volume 3696

Enabling Technology for Simulation Science III


View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 22 June 1999
Contents: 7 Sessions, 33 Papers, 0 Presentations
Conference: AeroSense '99, 1999
Volume Number: 3696

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Topics in Distributed and Web-enabled Modeling and Simulation
  • Model Abstraction and Mixed-Resolution Modeling
  • DEVS Concepts and Applications
  • Modeling and Simulation for Complex Systems
  • Modeling and Simulation Visualization
  • Modeling and Simulation Methodologies and Frameworks
  • Abstraction Techniques in Phenomenological Modeling
Topics in Distributed and Web-enabled Modeling and Simulation
Modeling the simulation execution process with digital objects
Robert M. Cubert, Paul A. Fishwick
Object Oriented Physical Modeling (OOPM), formerly known as MOOSE, and its implementation of behavior multimodels provide the ability to manage arbitrarily complex patterns of behavioral abstraction in web-friendly simulation modeling. In an OOPM model, one object stands as surrogate for another object, and these surrogates cognitively map to the real world. This 'physical object' principle mitigates the impact of incomplete knowledge and ambiguity because its real-world metaphors enable model authors to draw on intuition, facilitating reuse and integration, as well as consistency in collaborative efforts. A 3D interface for modeling and simulation visualization, under construction to augment the existing 2D GUI, obeys the physical object principle, providing a means to create, change, reuse, and integrate digital worlds made of digital objects. The implementation includes a Distributed Simulation Executive, a Digital Object MultiModel Language, a Digital Object Warehouse, and a multimodel Translator. This approach is powerful and its capabilities have steadily grown; however, it has lacked a formal basis, which we now provide: we define multimodels, represent digital objects as multimodels, transform multimodels to simulations, demonstrate the correctness of the simulations' execution sequence, and show closure under coupling of digital objects. These theoretical results complement and enhance the practical aspects of physical multimodeling.
Collaborative enterprise and virtual prototyping (CEVP): a product-centric approach to distributed simulation
Vance M. Saunders
The downsizing of the Department of Defense (DoD) and the associated reduction in budgets has re-emphasized the need for commonality, reuse, and standards with respect to the way DoD does business. DoD has implemented significant changes in how it buys weapon systems. The new emphasis is on concurrent engineering with Integrated Product and Process Development and collaboration with Integrated Product Teams. The new DoD vision includes Simulation Based Acquisition (SBA), a process supported by robust, collaborative use of simulation technology that is integrated across acquisition phases and programs. This paper discusses the Air Force Research Laboratory's efforts to use Modeling and Simulation (M&S) resources within a Collaborative Enterprise Environment to support SBA and other Collaborative Enterprise and Virtual Prototyping (CEVP) applications. The paper will discuss four technology areas: (1) a Processing Ontology that defines a hierarchically nested set of collaboration contexts needed to organize and support multi-disciplinary collaboration using M&S, (2) a partial taxonomy of intelligent agents needed to manage different M&S resource contributions to advancing the state of product development, (3) an agent-based process for interfacing disparate M&S resources into a CEVP framework, and (4) a Model-View-Control based approach to defining 'a new way of doing business' for users of CEVP frameworks/systems.
Object-oriented framework for distributed simulation
Julia Hunter, John A. Carson, Martin Colley, et al.
The benefits of object-oriented technology are widely recognized in software engineering. This paper describes the use of the object-oriented paradigm to create distributed simulations. The University of Essex Robotics and Intelligent Machines group has been carrying out research into distributed vehicle simulation since 1992. Part of this research has focused on the development of simulation systems to assist in the design of robotic vehicles. This paper describes the evolution of these systems, from an early toolkit used for teaching robotics to recent work on using simulation as a design tool in the creation of a new generation of unmanned underwater vehicles. It outlines experiences gained in using PVM, and ongoing research into the use of the emerging High Level Architecture as the basis for these frameworks. The paper concludes with the perceived benefits of adopting object-oriented methodologies as the basis for simulation frameworks.
Distributed web-based simulation for protecting intellectual property
Darryl Dieckman, Dale E. Martin, Lantz Moore, et al.
This paper describes our work to develop the technology necessary for vendors to safely distribute and protect their Intellectual Property (IP) on the web. The basic strategy is to provide non-proprietary interface definitions on the web and to retain detailed product performance data behind protected firewalls. The public interface definitions enable interconnection with other regions of a virtual prototype and distributed analysis capabilities allow an evaluation of the fully operational virtual prototype. Thus distributed simulation across the web becomes feasible, and concerns about the security of proprietary data are addressed. IP is protected behind a firewall and distributed simulation is performed with the IP data wholly contained behind the firewall--only simulation interface data denoting changes to interface data values is exchanged over a potentially public network.
Closed-loop real-time testing of avionics systems using distributed interactive simulation technology
John M. Woodyard, Johnny Jones, Douglas C. Reif
The Avionics Wind Tunnel SBIR is a project sponsored by the Air Force Research Laboratory at Wright-Patterson with Amherst Systems Inc. The goal was to perform a closed-loop test, stimulating an operational avionics component, capturing its reaction, and injecting it back into the threat simulation. The project integrated several existing and new technologies to evaluate cost-effective methods for performing avionics system evaluation. Distributed Interactive Simulation technology was used to interface the system under test (SUT), the ownship platform and the threat environment simulation. The threat environment was provided by two sets of models. One set was designed to serve as a campaign level simulation and provided background signals. The second set of models was considered 'medium' fidelity and provided the primary input to the SUT. The data from the two sets of models and data from the ownship was blended and fed into the stimulation equipment that provided RF signals for the SUT. The reaction of the SUT to the stimulation was digitally monitored on a MIL-STD-1553B bus and fed back to the threat models that modified their action.
Model Abstraction and Mixed-Resolution Modeling
Multimodeling methodology for real-time simulation
Kangsun Lee, Paul A. Fishwick
Real-time systems differ from traditional data processing systems in that they are constrained by certain nonfunctional requirements (e.g., dependability and timing). Although real-time systems can be modeled using the standard structured design methods, these methods lack explicit support for expressing the real-time constraints. Our objective is to present a modeling methodology in which real-time systems can be modeled efficiently to meet the given simulation objective and the model's time requirements. We developed a modeling methodology in which the functional requirements of real-time systems are captured at multiple levels of abstraction. Our approach to guaranteeing timing constraints is to vary the level of abstraction so that the simulation can deliver the desired results within the given amount of time. Two selection approaches have been developed to determine the optimal abstraction level that achieves the best tradeoff of model quality against time: (1) an IP (Integer Programming)-based approach and (2) a search-based approach. A more detailed model (low abstraction level) is selected when we have ample time, while a less detailed model (high abstraction level) is used when there is an imminent time constraint. One of the contributions of our research is that, with the ability to select an optimal model for a given deadline, we provide a way to handle real-time constraints for the simulation group. Also, the determined level of abstraction provides the perspective which allows modelers to configure less important components of the system for a given time-constraint situation.
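The deadline-driven selection idea above can be illustrated with a small sketch. The component names, runtimes, and quality scores below are hypothetical, and brute-force search over level combinations stands in for the paper's actual IP and search formulations:

```python
from itertools import product

def select_levels(components, deadline):
    """Exhaustively search abstraction-level combinations and return the
    one that maximizes total model quality within the time budget.
    `components` maps a component name to a list of (runtime, quality)
    pairs ordered from most to least detailed."""
    names = list(components)
    best = None
    for combo in product(*(components[n] for n in names)):
        runtime = sum(t for t, _ in combo)
        quality = sum(q for _, q in combo)
        if runtime <= deadline and (best is None or quality > best[0]):
            best = (quality, runtime, dict(zip(names, combo)))
    return best  # None if no combination meets the deadline

# Two hypothetical components, each with three abstraction levels.
comps = {
    "sensor":  [(5.0, 10), (2.0, 6), (0.5, 2)],
    "vehicle": [(4.0,  8), (1.5, 5), (0.3, 1)],
}
best = select_levels(comps, deadline=4.0)
```

For this sample data a 4.0-unit deadline rules out full detail for either component, so the search drops both one level down, mirroring the paper's trade of detail for timeliness.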
Application of system identification techniques to simulation model abstraction
This paper describes preliminary research into the applicability of system identification techniques to simulation model abstraction. Model abstraction enables the construction of a valid, low-resolution surrogate to a more detailed, high-resolution simulation model. When rapid, approximate results will suffice, we can also apply system identification directly to actual system data, bypassing the simulation stage. Four non-traditional system identification techniques are discussed in relation to their ability to produce linear, time-invariant, state-space formulations of multivariable random systems. A simple example is provided in which one of the techniques, Hidden Markov Models, is used to identify the transition probabilities within a simulated Markov Chain. The example is used to illustrate the challenges in general simulation model abstraction caused by model transformation procedures, problem size, uncertainty, and computational complexity. At this stage, we can say that the application of system identification to simulation model abstraction is promising, yet challenging.
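A toy version of the identification example above: when the chain's states are directly observable, the transition probabilities can be recovered by simple counting; the hidden-state case the paper treats requires the full HMM machinery (e.g., Baum-Welch), which is not sketched here. The two-state matrix below is invented for illustration:

```python
import random

def simulate_chain(P, start, steps, rng):
    """Simulate a discrete-time Markov chain with transition matrix P."""
    state, path = start, [start]
    for _ in range(steps):
        u, acc = rng.random(), 0.0
        for nxt, p in enumerate(P[state]):
            acc += p
            if u < acc:
                state = nxt
                break
        path.append(state)
    return path

def estimate_transitions(path, n_states):
    """Maximum-likelihood estimate of the transition matrix from one
    observed trajectory: normalized transition counts."""
    counts = [[0] * n_states for _ in range(n_states)]
    for a, b in zip(path, path[1:]):
        counts[a][b] += 1
    est = []
    for row in counts:
        total = sum(row)
        est.append([c / total if total else 0.0 for c in row])
    return est

rng = random.Random(0)
P_true = [[0.9, 0.1], [0.2, 0.8]]
path = simulate_chain(P_true, 0, 20000, rng)
P_hat = estimate_transitions(path, 2)
```

With 20,000 steps the estimated matrix lands close to the true one, illustrating how identification accuracy is governed by sample size, one of the challenges the paper enumerates.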
Benchmark problem for model abstraction techniques
Gary A. Plotz, Thomas A. Karle
Modeling of real systems relies on the arduous task of describing the physical phenomena in terms of mathematical models, which often require excessive amounts of computation time in their execution. In the last few years there has been a growing acceptance of model abstraction whose emphasis rests on the development of more manageable models. Abstraction refers to the intelligent capture of the essence of the behavior of a model, without all the details. In the past, metamodels have been generated from complex models, such as the Tactical Electronic Reconnaissance Simulation Model (TERSM). The scope of this paper is to explore the ability of previously developed TERSM metamodels to accurately simulate the benchmark model using both limited subsets of the original data, and data subsets whose values are interpolated or extrapolated from the original data set used to generate and fit the model. This paper establishes a baseline from which additional metamodels can be compared and analyzed.
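The interpolation-versus-extrapolation question the paper probes can be seen with a deliberately tiny metamodel: a quadratic fitted through three samples of a stand-in "expensive" function. The function and sample points are hypothetical, not TERSM data:

```python
def quadratic_metamodel(f, xs):
    """Fit an exact quadratic through three samples of an expensive
    simulation f(x) -- a tiny stand-in for a response-surface metamodel."""
    x0, x1, x2 = xs
    y0, y1, y2 = f(x0), f(x1), f(x2)
    # Lagrange interpolation through the three sample points.
    def p(x):
        return (y0 * (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2)) +
                y1 * (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2)) +
                y2 * (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1)))
    return p

expensive = lambda x: 1.0 / (1.0 + x)        # hypothetical benchmark model
meta = quadratic_metamodel(expensive, (0.0, 0.5, 1.0))

err_interp = abs(meta(0.25) - expensive(0.25))   # inside the fitted range
err_extrap = abs(meta(3.0) - expensive(3.0))     # far outside it
```

The metamodel tracks the benchmark well between its fitting points but degrades sharply when extrapolated, which is exactly the behavior the paper sets out to baseline.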
Mixed-resolution modeling of backplane-based systems
Robert M. McGraw, Richard A. MacDonald
Mixed-resolution modeling methods are used for developing large complex system models from subsystem models, where each subsystem model may be described at varying levels of detail and complexity. The usefulness of creating mixed-resolution system models is that existing validated or legacy component models can be integrated into the overall system models, regardless of the level of detail. This process eliminates the need for creating or validating additional models of a system at a single specific level of detail. Mixed-resolution modeling methods are being utilized in the development of mission, campaign and theater level models. However, mixed-resolution modeling techniques are also being successfully employed at lower levels of the modeling spectrum, particularly in the area of hardware/software design. This paper presents mixed-resolution modeling techniques as they apply to backplane-based computing systems. The mixed-resolution modeling techniques presented are used to create interface wrappers that handle information and timing differences between dataflow and functional modeling paradigms. The solutions required to resolve these information and timing differences in the engineering domain are similar to the problems found at the theater, campaign, and mission levels.
Pattern-vector-based reduction of 3D meshes created from multimodal data sets
Christopher S. Gourley, Christophe Dumont, Mongi A. Abidi
Recent interest in photo-realistic 3D modeling has appeared in a variety of areas. These models are extremely useful in a realistic simulation environment. When viewing these large models, however, the frame-rate of even the fastest computers can be extremely low. Therefore, methods to lower the resolution of the model, and thus increase the frame-rate, while retaining the realism are desired. By using a multi-resolution model, low-resolution representations can be used while interacting with or moving the model, and the details from the higher resolution representation added when the model is stationary. To aid in the creation of such a model, we have developed a new triangle mesh reduction technique based on pattern vectors which creates a representation of a model. The reduction method presented here is based on the edge collapse/vertex split concept since this method lends itself easily to the multi-resolution mesh concept. The uniqueness of this reduction method, however, comes in the form of the data representation. We not only use the position of the vertices, but also other geometric properties such as normal and curvature. In addition, we have the advantage of having even more information about the mesh than just structural information.
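A minimal sketch of the edge-collapse step that this reduction method builds on, using vertex positions only (the pattern vectors, normals, and curvature the paper adds are omitted, and shortest-edge selection is an assumed cost criterion):

```python
import math

def collapse_shortest_edge(verts, faces):
    """One step of edge-collapse simplification: find the shortest edge,
    merge its endpoints at their midpoint, and drop degenerate faces.
    `verts` is a dict {id: (x, y, z)}; `faces` is a set of vertex-id triples."""
    edges = {tuple(sorted((f[i], f[(i + 1) % 3])))
             for f in faces for i in range(3)}
    def length(e):
        return math.dist(verts[e[0]], verts[e[1]])
    u, v = min(edges, key=length)                 # cheapest edge to remove
    verts[u] = tuple((a + b) / 2 for a, b in zip(verts[u], verts[v]))
    del verts[v]
    new_faces = set()
    for f in faces:
        g = tuple(u if w == v else w for w in f)  # redirect v to u
        if len(set(g)) == 3:                      # discard collapsed faces
            new_faces.add(g)
    return verts, new_faces
```

Repeating this step yields progressively coarser meshes; recording each collapse (the vertex-split inverse) gives the multi-resolution hierarchy described above.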
DEVS Concepts and Applications
Collaborative modeling: the missing piece of distributed simulation
Hessam S. Sarjoughian, Bernard P. Zeigler
The Department of Defense overarching goal of performing distributed simulation by overcoming geographic and time constraints has brought the problem of distributed modeling to the forefront. The High Level Architecture standard is primarily intended for simulation interoperability. However, as indicated, the existence of a distributed modeling infrastructure plays a fundamental and central role in supporting the development of distributed simulations. In this paper, we describe some fundamental distributed modeling concepts and their implications for constructing successful distributed simulations. In addition, we discuss the Collaborative DEVS Modeling environment that has been devised to enable geographically dispersed modelers to collaborate and synthesize modular and hierarchical models. We provide an actual example of the use of Collaborative DEVS Modeler in application to a project involving corporate partners developing an HLA-compliant distributed simulation exercise.
SEAE-SES enterprise alternative evaluator: design and implementation of a manufacturing enterprise alternative evaluation tool
Jerry M. Couretas, Bernard P. Zeigler, George V. Mignon
Enterprise modeling is presently devoid of a framework for simultaneously combining the strategic, tactical, and financial enterprise design considerations along the lines of a common organization function hierarchical decomposition. Present approaches provide a design structure, usually a combination of prescriptive and descriptive methods, to achieve the required representation. The System Entity Structure Enterprise Alternative Evaluator, while adhering to the broader focus of developing a design environment for manufacturing system configurations, will provide both the categorical qualitative and quantitative descriptive attributes of evaluation objectives. This will be exemplified in the design and evaluation of a single product process.
Distributed object computing: DEVS-based modeling and simulation
Daryl Hild, Hessam S. Sarjoughian, Bernard P. Zeigler
This research examines an approach to modeling and simulating distributed object computer systems in terms of distributed software components mapped onto a set of interconnected network nodes. The overall model of a distributed object computing system has clearly separated hardware and software components enabling co-design engineering. The software component modules form a distributed cooperative object (DCO) model to represent interacting software objects. The hardware component models form a loosely coupled network (LCN) model of processing nodes, network gates, and communication links interconnecting them. The software objects of the DCO are then 'distributed' across the processors of the LCN to form a distributed object computing system model. This approach facilitates design analysis of each of these components separately as well as the combined system's behavior. The Discrete Event System Specification formalism is used to implement dynamic models of the DCO components, LCN components, and experimental frames to analyze system behavior.
Modeling and Simulation for Complex Systems
Modeling and simulation-based solutions for complex resource allocation problems
Christos G. Cassandras, Kagan Gokbayrak
Resource allocation problems arise in application domains such as logistics, networking, manufacturing, and C4I systems. The discrete nature of resources to be allocated makes such problems combinatorially complex. In addition, uncertainties in the times when resources are requested and relinquished introduce additional complexities often necessitating the use of simulation for modeling and analysis purposes. In this paper, we present two approaches for solving such problems, the first based on ordinal optimization and the second on the idea of replacing the original discrete allocation problem by a `surrogate model' involving a continuous allocation problem. The latter is simpler to solve through gradient-based techniques and can be shown to recover the solutions of the original problem. Concurrent simulation is used to estimate the gradients required in this approach, leading to extremely fast solutions for many problems in practice.
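The "surrogate model" idea above can be sketched as follows: relax the discrete allocation to a continuous one, solve it by projected gradient descent, and round back to integers. The cost function sum(c_i / x_i) and the data are illustrative, and plain analytic gradients stand in for the paper's concurrent-simulation gradient estimates:

```python
def surrogate_allocate(costs, total, steps=20000, lr=0.01):
    """Minimize sum(c_i / x_i) s.t. sum(x_i) = total, x_i > 0, via the
    continuous 'surrogate' problem, then round back to a discrete allocation."""
    n = len(costs)
    x = [total / n] * n
    for _ in range(steps):
        g = [-c / xi ** 2 for c, xi in zip(costs, x)]
        mean_g = sum(g) / n                     # project gradient onto sum(x)=total
        x = [max(1e-6, xi - lr * (gi - mean_g)) for xi, gi in zip(x, g)]
        s = sum(x)
        x = [xi * total / s for xi in x]        # re-normalize to the budget
    alloc = [round(xi) for xi in x]             # snap to integers
    while sum(alloc) > total:                   # repair rounding drift
        alloc[alloc.index(max(alloc))] -= 1
    while sum(alloc) < total:
        alloc[alloc.index(min(alloc))] += 1
    return alloc

alloc = surrogate_allocate([1.0, 4.0, 9.0], 12)
```

For this cost function the continuous optimum puts x_i proportional to sqrt(c_i), so the rounded allocation recovers the discrete optimum directly, which is the recovery property the paper exploits.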
Convergence and choice of comparison schemes for discrete optimization using statistical tests
Patrick A. Kelly, Weibo Gong, Wengang Zhai
Consider a discrete optimization problem where the objective function is the mean of a random variable and only samples of the random variable are available. A fundamental issue in such a problem is how to compare objective functions through the samples. Ideally, the chosen comparison scheme should lead to an algorithm whose output converges rapidly to the optimum value. In this paper we give some general conditions for convergence and then consider several algorithms having different comparison schemes.
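A minimal example of one such comparison scheme, selection by sample means; the candidate means and the additive Gaussian noise model are invented for illustration:

```python
import random

def best_by_sample_mean(means, n_samples, rng):
    """Compare noisy objective values through sample means: draw
    n_samples observations of each candidate (true mean plus N(0,1)
    noise) and return the index with the smallest estimated mean."""
    est = []
    for m in means:
        est.append(sum(m + rng.gauss(0, 1) for _ in range(n_samples)) / n_samples)
    return min(range(len(means)), key=est.__getitem__)

rng = random.Random(1)
winner = best_by_sample_mean([1.0, 1.2, 2.0], 2000, rng)
```

With 2000 samples per candidate the standard error of each mean is far below the 0.2 gap between the top two candidates, so the scheme reliably returns the true optimum; the paper's convergence conditions formalize how fast the sample sizes must grow.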
Modeling and analysis of Internet differentiated-services traffic
Muckai K. Girish, Jian Qiang Hu
It is widely accepted that the next major requirement and milestone in the Internet is the ability to provide qualities of service to various applications. The Internet community and the standards bodies have been working on myriad schemes to achieve this objective. One of the prominent and promising approaches is being defined and developed at the Differentiated Services Working Group in IETF. We model the traffic associated with Assured Forwarding Per-Hop Behavior in differentiated-services-enabled IP networks in this paper. Furthermore, we analyze the effect of such traffic on network resources with the objective of developing efficient traffic engineering methodologies. We also formulate the optimization problem relating to traffic engineering in DS networks with an MPLS core.
Optimizing large complex systems by simulation: a practical approach
Xiaocang Lin
Simulation is the only general tool for analyzing the performance of large complex systems. However, blindly using simulation to locate the optimal configuration of a complex system is an impossible task for most real-life problems because of two major difficulties: (1) adequate accuracy of results requires long simulation time; (2) the number of possible configurations is too large for exhaustive search. In this paper, we will show that by appropriately using shorter simulations and applying ordinal optimization techniques we can more effectively resolve those issues. In the case study, we will apply our technique to a resource allocation problem in a manufacturing plant.
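The ordinal-optimization screening step can be sketched as: give every design one short, noisy evaluation and keep the apparent best few. The design performances and noise level below are invented:

```python
import random

def ordinal_select(true_perf, noise_std, subset_size, rng):
    """Screen designs with one short, noisy simulation each and keep the
    subset_size designs that *appear* best. Ordinal comparison (is A
    better than B?) tolerates large estimation noise far better than
    estimating each design's value accurately."""
    noisy = {d: p + rng.gauss(0, noise_std) for d, p in true_perf.items()}
    return sorted(noisy, key=noisy.get)[:subset_size]

rng = random.Random(2)
true_perf = {d: float(d) for d in range(100)}   # lower is better; hypothetical
survivors = ordinal_select(true_perf, noise_std=10.0, subset_size=10, rng=rng)
```

Even with noise comparable to a tenth of the performance range, the surviving subset is strongly enriched in genuinely good designs, which can then receive the long, accurate simulations.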
Modeling and Simulation Visualization
Using perspective to model complex processes
Robert L. Kelsey, Keith R. Bisset
The notion of perspective, when supported in an object-based knowledge representation, can facilitate better abstractions of reality for modeling and simulation. The object modeling of complex physical and chemical processes is made more difficult in part due to the poor abstractions of state and plasma changes available in these models. The notion of perspective can be used to create different views to represent the different states of matter in a process. These techniques can lead to a more understandable model.
Advanced displays for modeling and simulation
Jason A. Moore
Is the Java 3D API able to support the visualization requirements of a complex Modeling and Simulation environment for the next generation of Collaborative Command and Control applications? As a simple test of its functionality, a demonstration application will be programmed to create a terrain, populate it with a number of primitive objects (radar, airplanes, etc.), and allow the user to navigate through the terrain.
Advanced simulation tools to model and analyze operational environments
Douglas J. Claffey
Advances in high-level architectures in both hardware and software now allow 3D software modeling and interactive simulation to be done from the desktop computer. This paper will address the increasing demand for 3D software modeling and simulation applications throughout the aerospace industry, what kind of tools are currently available, how operational data is being used in real-world applications, and how to couple real-time data with terrain models and simulation tools to model and analyze operational environments.
Modeling and Simulation Methodologies and Frameworks
Migration protocols for computer-controlled simulation entities in interactive training environments
William F. Foss, Mostafa A. Bassiouni, Ron Hofer, et al.
The run-time infrastructure of HLA supports class-based subscription and routing spaces as a mechanism for reducing bandwidth requirements and eliminating the transmission of irrelevant state update messages. HLA routing spaces enable the RTI to deliver to each federate only the relevant data that it needs to receive. The effectiveness of this approach can be improved by means of clustering, i.e., by the careful assignment of the simulated entities in the virtual world to the different federates. Performance results are presented showing that the filtering gain of a good initial clustering is generally significant. However, the degree of bandwidth reduction due to clustering usually deteriorates over time as vehicles drift away from their initial positions. To reduce this deterioration, we propose a migration protocol for computer-controlled entities. This is done by extending the HLA Federate Interface Definition in order to add the capability of CGF vehicle migration through object ownership transfer. The paper presents the basic features of a CGF migration protocol which dynamically assigns CGF entities to the federates that would minimize network traffic. The different detection mechanisms that can be used to trigger the migration of CGF entities are presented. The insight gained from our work and the challenges still facing the design of large scale HLA training exercises are discussed.
Qualitative modeling of complex systems for cognitive engineering
Karl A. Perusich, Michael D. McNeese, Joan R. Rentsch
In this paper, the techniques of using fuzzy cognitive maps will be outlined, and demonstrated with an example. Fuzzy cognitive maps will be used as a way to model the causal process in a cognitive system. With such a model interventions to change the dynamics of the system can be developed. In the particular example, the information on a display needed to be improved to support group situation awareness within an AWACS team. A fuzzy cognitive map was developed of the chain of causality that led from the current information structure of the AWACS display to the loss of situation awareness. The map could then be examined to identify ways in which the linkages could be altered to improve situation awareness, and points at which technology could be applied. From this a set of design changes could be recommended.
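A fuzzy cognitive map of this kind can be sketched in a few lines: concepts hold activations in [0, 1], signed weights encode causal influence, and a sigmoid squashes each update. The three-concept map below (display clutter → operator workload → situation awareness) is a made-up miniature of the AWACS analysis; it shows how weakening one causal link raises the situation-awareness concept, the kind of intervention the paper derives from the map:

```python
import math

def squash(x):
    return 1.0 / (1.0 + math.exp(-x))

def run_fcm(W, state, clamped=(), steps=50):
    """Iterate a fuzzy cognitive map to a fixed point. W[j][i] is the
    signed causal weight from concept j to concept i; clamped concepts
    act as fixed external inputs."""
    n = len(state)
    for _ in range(steps):
        state = [state[i] if i in clamped else
                 squash(sum(W[j][i] * state[j] for j in range(n)))
                 for i in range(n)]
    return state

# Concepts: 0 = display clutter (clamped input), 1 = operator workload,
# 2 = situation awareness. All weights are invented for illustration.
W = [[0.0, 1.0, 0.0],
     [0.0, 0.0, -1.5],
     [0.0, 0.0, 0.0]]
sa_before = run_fcm(W, [1.0, 0.5, 0.5], clamped={0})[2]

# 'Design change': redesign the display so clutter drives workload less.
W_fix = [[0.0, 0.2, 0.0],
         [0.0, 0.0, -1.5],
         [0.0, 0.0, 0.0]]
sa_after = run_fcm(W_fix, [1.0, 0.5, 0.5], clamped={0})[2]
```

Comparing the two fixed points quantifies, in relative terms, how much the proposed display change helps, without requiring precise numerical models of cognition.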
Goal-orientated computational steering
William R. Smith, Wendell L. Anderson, Michael I. Haftel, et al.
Computational steering is a newly evolving paradigm for working with simulation models. It entails integration of model execution, observation and input data manipulation carried out concurrently in pursuit of rapid insight and goal achievement. Keys to effective computational steering include advanced visualization, high performance processing and intuitive user control. The Naval Research Laboratory (NRL) has been integrating facilities in its Virtual Reality Lab and High Performance Computing Center for application of computational steering to study effects of electromagnetic wave interactions using the HASP (High Accuracy Scattering and Propagation) modeling technique developed at NRL. We are also investigating automated inverse steering which involves incorporation of global optimization techniques to assist the user with tuning of parameter values to produce desired behaviors in complex models.
Evolving tactics using levels of intelligence in computer-generated forces
Vincent William Porto, Michael Hardt, David B. Fogel, et al.
Simulated evolution on a computer can provide a means for generating appropriate tactics in real-time combat scenarios. Individual units or higher-level organizations, such as tanks and platoons, can use evolutionary computation to adapt to the current and projected situations. We briefly review current knowledge in evolutionary algorithms and offer an example of applying these techniques to generate adaptive behavior in a platoon-level engagement of tanks where the mission of one platoon is changed on-the-fly. We also study the effects of increasing the intelligence of one side in a one-on-one tank engagement. The results indicate that measured performance increases with increased intelligence; however, this does not always come at the expense of the opposing side.
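A bare-bones version of the evolutionary loop: real-valued "tactic" parameters, Gaussian mutation, and truncation selection. The two-parameter fitness with a known optimum is invented for illustration; a real application would score each candidate by simulating the engagement:

```python
import random

def evolve(fitness, dim, rng, pop_size=20, gens=100, sigma=0.3):
    """A minimal (mu+lambda)-style evolutionary algorithm: Gaussian
    mutation of real-valued tactic parameters, elitist truncation
    selection of the best pop_size individuals each generation."""
    pop = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        children = [[g + rng.gauss(0, sigma) for g in rng.choice(pop)]
                    for _ in range(pop_size)]
        pop = sorted(pop + children, key=fitness, reverse=True)[:pop_size]
    return pop[0]

# Hypothetical 'tactic' fitness: best parameter vector is (0.5, -0.2).
target = (0.5, -0.2)
fit = lambda v: -sum((a - b) ** 2 for a, b in zip(v, target))
best = evolve(fit, 2, random.Random(0))
```

Because selection is elitist, the best fitness never decreases, so the loop steadily homes in on effective parameter settings even when the fitness is only known through noisy simulation.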
Expert assistant for simulation assembly and experimentation (EASAE)
Brett N. Gossage
The assembly and use of complex simulation models within a complete simulation environment remains a challenge to simulation developers and users. Many simulation models have complex interfaces and operational requirements that overwhelm all but the most expert of users. In many instances, the use of these models is often limited to their original developers. Meeting the challenge posed by increasing model complexity requires the incorporation of `meta knowledge' about the models and how they can be assembled into working systems. This requires an expert system that captures operational knowledge from the model domain experts and makes it available to simulation users. A simulation framework based on a consistent paradigm can provide information about models (parameters and attributes) that can be input to the expert system. This information can in turn activate rules within the knowledge base. The effect of rule `firings' would be to notify the user of potential problems or errors in the simulation and provide corrective guidance. This paper describes the development of a prototype Expert Assistant for Simulation Assembly and Experimentation, EASAE (pronounced `easy') that addresses this problem.
Methodology for using intelligent agents to apply simulation technologies to the mission operational environment
John R. Surdu, Udo W. Pooch
This paper proposes a methodology for applying simulation technology to the monitoring and control of operations, focusing on the roles of intelligent agents. These agents monitor the course of the real operation and compare it with a near-real-time simulation of that operation. When these agents detect significant differences between the planned operation and the real operation they explore the discrepancies to determine if they impact on the desired outcome of the operation. When necessary, these agents launch additional simulations or other tools to make this determination. In cases in which the success of the plan is threatened, the agents advise the decision-maker. This paper focuses on the interactions between the agents and how they collaborate with each other in a hierarchy.
Abstraction Techniques in Phenomenological Modeling
Real-time modeling of primitive environments through wavelet sensors and Hebbian learning
James M. Vaccaro, Paul S. Yaworsky
Modeling the world through sensory input necessarily provides a unique perspective for the observer. Given a limited perspective, objects and events cannot always be encoded precisely but must involve crude, quick approximations to deal with sensory information in a real-time manner. As an example, when avoiding an oncoming car, a pedestrian needs to identify the fact that a car is approaching before ascertaining the model or color of the vehicle. In our methodology, we use wavelet-based sensors with self-organized learning to encode basic sensory information in real-time. The wavelet-based sensors provide necessary transformations while a rank-based Hebbian learning scheme encodes a self-organized environment through translation, scale and orientation invariant sensors. Such a self-organized environment is made possible by combining wavelet sets which are orthonormal, log-scale with linear orientation and have automatically generated membership functions. In earlier work we used Gabor wavelet filters, rank-based Hebbian learning and an exponential modulation function to encode textural information from images. Many different types of modulation are possible, but based on biological findings the exponential modulation function provided a good approximation of first spike coding of 'integrate and fire' neurons. These types of Hebbian encoding schemes (e.g., exponential modulation, etc.) are useful for quick response and learning, provide several advantages over contemporary neural network learning approaches, and have been found to quantize data nonlinearly. By combining wavelets with Hebbian learning we can provide a real-time front-end for modeling an intelligent process, such as the autonomous control of agents in a simulated environment.
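A sketch of the rank-based Hebbian idea, stripped of the wavelet front end: units are ranked by their response to each input, and each unit's weights move toward the input at a rate modulated exponentially by rank, so the strongest responder learns fastest. The weights, inputs, and learning rate are invented:

```python
import math

def rank_hebbian_update(weights, x, eta=0.1):
    """One rank-based Hebbian step. Units are ranked by response to the
    input x; each unit's weight vector moves toward x with a learning
    rate scaled by exp(-rank) (the exponential modulation function)."""
    responses = [sum(w * xi for w, xi in zip(wrow, x)) for wrow in weights]
    order = sorted(range(len(weights)), key=lambda i: -responses[i])
    for rank, i in enumerate(order):
        mod = eta * math.exp(-rank)
        weights[i] = [w + mod * (xi - w) for w, xi in zip(weights[i], x)]
    return weights
```

Presenting one stimulus repeatedly pulls the best-matching unit's weights onto it far faster than the others, which is the self-organizing, winner-biased encoding the abstract describes.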
Phenomenological modeling and simulation in support of model abstraction: computational electromagnetics--an overview
Donald R. Pflug
Successful modeling and simulation of Command, Control, Communications, Computers, Intelligence, Surveillance and Reconnaissance (C4ISR) systems requires not only the inclusion of the relevant C4ISR processes involved in the problem under study but also inclusion of the physics behind some of the simulation objects. The electromagnetic performance of the various sensors present in the simulation and the electromagnetic effects caused by the platforms on which the sensors reside are examples. The physical science necessary to simulate such physical effects is called Computational Electromagnetics (CEM) and can provide high quality performance data capable of enhancing the accuracy and value of C4ISR simulations. Unfortunately, the simulation of these effects from first principles cannot be done in even near real time. Model Abstraction needs to be applied to such phenomenological simulations to extract data of the required accuracy in reasonable simulation time. This paper reviews state-of-the-art CEM with a view towards stimulating the development of those Model Abstraction techniques required to incorporate CEM phenomenology into the C4ISR modeling and simulation world.
Model-based parameter estimation and related methods as model abstraction techniques in computational electromagnetics
Donald R. Pflug, Thomas A. Karle
It is demonstrated that the technique of Model Based Parameter Estimation, specifically Cauchy's Method, can be used in the frequency domain to extrapolate/interpolate a narrowband set of system data to a broadband set. The data can be either computed or measured over a frequency band. For computed data, the sampled values of the function and a few derivative values can be used to reconstruct the function; for measured data, only sampled values are used, since derivative values are too noisy. Cauchy's method applies the principle of analytic continuation to a complex, hard-to-specify function, analytic except at isolated poles, that represents the frequency-domain property of interest. Such a function can be represented by a ratio of two polynomials, a reduced-order model, which can be considered a variant of model abstraction. A procedure is outlined for determining the order of the polynomials and their coefficients using Singular Value Decomposition and Total Least Squares. The method is applied to a selected set of frequency-domain problems to illustrate its accuracy and versatility.
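At its core, Cauchy's method fits a ratio of polynomials to frequency samples. The sketch below shows the idea in its simplest exactly-determined form, using plain Gaussian elimination in place of the SVD/Total Least Squares machinery the paper describes; the "true" response and the sampled band are invented for the example.

```python
def true_response(s):
    # Hypothetical broadband system response: a ratio of polynomials,
    # standing in for computed or measured frequency-domain data.
    return (1.0 + 2.0 * s) / (1.0 + 0.5 * s + s * s)

def gauss_solve(A, b):
    # Plain Gaussian elimination with partial pivoting on an n x n system.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Sample the response at four "narrowband" points; with numerator order 1
# and denominator order 2 (b0 fixed to 1), there are exactly four unknowns:
#   a0 + a1*s - f(s)*(b1*s + b2*s^2) = f(s)
samples = [0.1, 0.2, 0.3, 0.4]
A, rhs = [], []
for s in samples:
    f = true_response(s)
    A.append([1.0, s, -f * s, -f * s * s])
    rhs.append(f)
a0, a1, b1, b2 = gauss_solve(A, rhs)

def model(s):
    # The fitted reduced-order (rational) model.
    return (a0 + a1 * s) / (1.0 + b1 * s + b2 * s * s)

# Extrapolate well outside the sampled band.
print(model(2.0), true_response(2.0))
```

In practice the polynomial orders are unknown and the data are noisy, which is why the paper turns to SVD (to estimate the model order) and Total Least Squares (to solve the overdetermined, error-in-variables system); the exactly-determined solve above is only the cleanest special case.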
Generation of field patterns through model-based parameter estimation
Alexander M. Schmidt, Thomas A. Karle, Donald R. Pflug
Since the performance of any antenna is altered when it is placed on an aircraft, the need to accurately characterize the antenna system, including platform effects, is critical. Experimentally identifying the platform effects of an antenna is difficult and time consuming, since it involves a very large number of measurements of the radiated/scattered fields. This paper presents a new model-based estimation technique that reduces the number of measurements needed to characterize the antenna system. The model, which is determined from a set of field measurements, captures the essence of the antenna system's behavior; additional field-pattern data can then be obtained from the model. The new technique is demonstrated on two systems, and the results are compared with Prony's estimation technique.
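Prony's technique, used above as the baseline for comparison, fits a sum of exponentials to uniformly sampled data in two linear steps plus a root-finding step. A minimal two-term sketch follows; the sample values are invented stand-ins for measured field data, not results from the paper.

```python
import math

# Hypothetical uniformly sampled data: a sum of two damped exponentials,
# standing in for sampled radiated-field values.
z_true = (0.9, 0.5)
c_true = (2.0, 1.0)
x = [c_true[0] * z_true[0] ** n + c_true[1] * z_true[1] ** n for n in range(4)]

def solve2(a11, a12, a21, a22, b1, b2):
    # Cramer's rule for a 2x2 linear system.
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# Step 1: linear-prediction coefficients p1, p2 with
#   x[n] = -p1*x[n-1] - p2*x[n-2].
p1, p2 = solve2(-x[1], -x[0], -x[2], -x[1], x[2], x[3])

# Step 2: the exponential bases are the roots of z^2 + p1*z + p2 = 0.
disc = math.sqrt(p1 * p1 - 4.0 * p2)
z1, z2 = (-p1 + disc) / 2.0, (-p1 - disc) / 2.0

# Step 3: amplitudes from the first two samples.
c1, c2 = solve2(1.0, 1.0, z1, z2, x[0], x[1])

def model(n):
    return c1 * z1 ** n + c2 * z2 ** n

# Predict a sample beyond the measured set.
print(model(5))
```

With noise-free data of the assumed model order the fit is exact, so extrapolated samples match the underlying process; real measured fields require more terms, more samples, and a least-squares formulation of steps 1 and 3.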
New IRFPA device model
Glenn T. Hess, Davy Dai, Thomas J. Sanders, et al.
Existing IRFPA (infrared focal plane array) models lack simplicity in setting up the detector's architecture/structure and lack continuity between IR detector material, IR detector processes, detector architecture, and detector operation. Existing models also lack the ability to reveal spatially and quantitatively the full impact of the detector's material, process, and architecture on IRFPA performance. This paper discusses the development of a new IRFPA computer model used to simulate existing and future IRFPAs. It is the first model to evaluate the IR sensor system at the device-physics level, providing enhanced quantitative and visual information that allows the device engineer to determine the impact of material quality, processing procedures, and IR detector architecture on IRFPA performance in the SWIR-VLWIR region. The new model is combined with powerful statistical techniques that predict IRFPA performance as well as cost. Operation under virtually unlimited user-specified conditions allows the engineer to project the performance of a newly designed IRFPA prior to fabrication. The complete model provides outputs at both the device-physics and detector levels. When interfaced with NVESD's FLIR92 and ACQUIRE, the model provides the ability to analyze device-level detector effects that impact system-level outputs such as NETD and range.
Model Abstraction and Mixed-Resolution Modeling
Dynamic refinement of a campaign simulation
Thomas C. Fall
In a project sponsored by AFRL/IFSB, the Dynamic Focusing Architecture (DFA) tool is being used to guide when and where more fidelity in a campaign model would be visible. DFA predicts ranges of outcomes; for those of most consequence, it traces back to the components that were most responsible. Those components become candidates for finer-level simulation. There is some commonality of intent between this and SimPath, a related effort by Gong, Ho, and Gilmer that performs trajectory management by clustering trajectories in discrete-event simulations. The DFA approach clusters outcomes from time-stepped simulations, but the goals are similar. This paper discusses the two approaches as well as the current status of this effort.
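The outcome-clustering step can be illustrated with a toy one-dimensional k-means over scalar campaign outcomes. The outcome values, the cluster count, and the use of k-means itself are illustrative assumptions for the sketch; DFA's actual clustering method is not specified here.

```python
def kmeans_1d(values, k, iters=20):
    # Simple 1-D k-means (assumes k >= 2): initialize centers spread
    # across the sorted values, then alternate assignment and update.
    svals = sorted(values)
    centers = [svals[i * (len(svals) - 1) // (k - 1)] for i in range(k)]
    groups = [[] for _ in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in values:
            i = min(range(k), key=lambda j: abs(v - centers[j]))
            groups[i].append(v)
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers, groups

# Hypothetical scalar outcomes (e.g., normalized attrition scores) from
# repeated runs of a time-stepped campaign simulation.
outcomes = [0.11, 0.14, 0.09, 0.52, 0.55, 0.49, 0.91, 0.88]
centers, groups = kmeans_1d(outcomes, 3)
print(centers)
```

Each resulting band of outcomes can then be traced back to the model components most responsible for it, which is the step that nominates components for higher-fidelity treatment.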
Modeling and Simulation Methodologies and Frameworks
Ensuring thorough comparison of modeling and simulation (M&S) results with experimental and test observations
Donald Caughlin
The Verification, Validation and Accreditation (VV&A) process is integral to M&S development. Robust Verification and Validation (V&V) provides the confidence in the estimate of system performance derived from the simulation. There are five fundamental steps in the VV&A process: Requirements Validation; Conceptual Model Validation; Design Verification; Implementation Verification; and Results Validation. This paper addresses issues associated with Results Validation which is a comparison of M&S predictions with experimental (test) observations (measurements) for the purpose of ensuring the fidelity of the M&S representations of the system. Comparison of model predictions with test observations (measurements) may or may not be a straightforward process. We discuss the comparison of the experimental results (e.g. a flight test) with the output of a model (as represented by a simulation), we consider the elements of the `system' and `model,' and directly address the impact of the model abstraction techniques employed.