Proceedings Volume 2740

High-Fidelity Simulation for Training, Test Support, Mission Rehearsal, and Civilian Applications

View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 12 June 1996
Contents: 5 Sessions, 21 Papers, 0 Presentations
Conference: Aerospace/Defense Sensing and Controls 1996
Volume Number: 2740

Table of Contents


All links to SPIE Proceedings will open in the SPIE Digital Library.
Sessions:
  • Systems for Simulation
  • Human Factors in Simulation
  • Simulation and Database Development Algorithms
  • Civilian Applications of Simulation
  • Simulation and Database Development Algorithms
  • Military Applications in Simulation
Systems for Simulation
RapidScene photogrammetry and visualization system
Tom Mackowiak
A system has been developed which combines the accuracy of digital photogrammetry with simulation-quality rendering to create a 3D imagery visualization system. Users input various types of imagery: aerial survey photos, Landsat, SPOT, or other overhead imagery, and transform the 2D images into a true 3D database. Once the database is built, the user can view the 3D world from any perspective and produce various output products, including a video 'fly-through' suitable for briefings. The RapidScene system offers a new way to construct a high-quality 3D database: imagery is used to build the entire database, including terrain, features, and texture. The benefits of this method include faster database production, increased accuracy, and greater realism. Key features of the system include the ability to combine imagery from multiple digital sources, extract terrain data from stereo imagery, automatically texture extracted features, create levels of detail to speed rendering, and place generic features and moving models in the database.
Advancements in related technologies bring virtual reality to GIS
Craig Erikson, Wade Hundley
Because of technological limitations, commercial software vendors have been unable to develop a real-time 3-D visualization and data analysis package that could be widely used by the remote sensing and GIS community. Recent advancements in the areas of 3-D hardware, virtual environment research, and software standards have allowed for the development of a low-cost virtual environment package for the first time. Products such as VirtualGIS will allow users not only to visualize their data in 3-D, but also to do their analysis in 3-D. This and other virtual environment products may pave the way to high growth and acceptance of imaging GIS software solutions by the general public.
Voxel volumes visualization system
Sergei I. Vyatkin, Valerie V. Ovechkin
The proposed visual system supports a number of advanced requirements, including very rapid and highly automated development of the visual environment, real-world terrain decorated with high-resolution color texture, 3D texture for volumes, advanced atmospheric effects, shading and shadows, large numbers of moving objects, 3D morphing, animation, and deformations of surfaces and volumes. The system also supports features such as area-of-interest and wide field-of-view. We present a method for generating photo-realistic images of 3D terrain datasets by mapping digital aerial photographs onto a perspective projection of a digital elevation map. Our method departs from traditional techniques of defining and processing geometrical primitives, since there are problems that cannot be solved by a simple brute-force approach. Another feature of the proposed method is the ability to define dynamic objects, such as waves or surface movements, through a time-dependent function that alters the scalar field values; as a result, one can visualize propagating waves and surface deformations. A principal feature of the proposed approach is the ability to render both surfaces and volumetric data defined on a 3D grid, with parameters such as opacity and color stored at the grid points.
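As an illustrative sketch of the dynamic-object mechanism described above, the Python snippet below defines a time-dependent scalar field whose zero isosurface is a rippling height field; advancing the time parameter makes the wave propagate. The field form, parameters, and sampling are assumptions for demonstration, not the authors' implementation.

```python
# A minimal sketch, assuming a radially propagating ripple; not the
# authors' field definition. The zero set of scalar_field(x, y, z, t)
# is the surface; altering t deforms it over time.
import math

def scalar_field(x, y, z, t, amplitude=0.2, wavelength=1.0, speed=0.5):
    """Signed value of (x, y, z) relative to a rippling surface at time t.

    Zero on the surface, positive above it, negative below it.
    """
    r = math.hypot(x, y)
    k = 2.0 * math.pi / wavelength
    height = amplitude * math.sin(k * (r - speed * t))
    return z - height

# Sample surface heights along +x at two instants to see the wave move.
for t in (0.0, 1.0):
    heights = [round(-scalar_field(0.5 * i, 0.0, 0.0, t), 3) for i in range(5)]
    print(f"t={t}: {heights}")
```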
4D symbology for sensing and simulation
Gregory Turner, Jacques Haus, Gregory Newton, et al.
The Army's Common Picture of the Battlefield will produce immense amounts of data associated with tactical goals and options, dynamic operations, unit and troop movement, and general battlefield information. These data will come from sensors (in real time) and from simulations, and must be positioned accurately on high-fidelity 3-D terrain. The symbology described in this paper is based on the Army's 2-D symbols for operations and tactics, so that the information content of that symbolic structure is retained. A hierarchy based on military organization is developed to display this symbology. Using this hierarchy, even complex battlefield scenarios can be displayed and explored in real time with minimal clutter. The user may also move units around by direct manipulation, define paths, create or delete hierarchical elements, and make other interactions. To strengthen the capacity for distributed simulations and for using sensor information from multiple sources, DIS capability has been integrated with the symbology for dynamic updates of position, direction and speed, and hierarchical structure. This paper will also discuss how the techniques used here can be applied to general (non-military) organizational structures.
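Below is a minimal Python sketch of the kind of organizational hierarchy and dead-reckoned position updates described above. All class and field names are invented for illustration; the actual DIS integration uses entity-state protocol data units, not this toy structure.

```python
# A minimal sketch with invented names; not the paper's data model.
from dataclasses import dataclass, field

@dataclass
class Unit:
    name: str
    position: tuple = (0.0, 0.0)     # map coordinates
    velocity: tuple = (0.0, 0.0)     # map units per second
    subordinates: list = field(default_factory=list)

    def update(self, dt):
        """Dead-reckon this unit and all subordinates forward by dt seconds."""
        x, y = self.position
        vx, vy = self.velocity
        self.position = (x + vx * dt, y + vy * dt)
        for sub in self.subordinates:
            sub.update(dt)

    def render(self, depth=0, max_depth=1):
        """Show the hierarchy only down to max_depth to limit clutter."""
        print("  " * depth + f"{self.name} @ {self.position}")
        if depth < max_depth:
            for sub in self.subordinates:
                sub.render(depth + 1, max_depth)

battalion = Unit("1st Battalion", (10.0, 20.0), (0.1, 0.0),
                 [Unit("A Company", (9.0, 19.0), (0.1, 0.0)),
                  Unit("B Company", (11.0, 21.0), (0.1, 0.0))])
battalion.update(dt=5.0)
battalion.render()
```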
OpenGL VGIS
Nickolas L. Faust, Dharmajyoti Bhaumik, Larry F. Hodges, et al.
Georgia Tech has developed the Virtual GIS (VGIS) system, a real-time visualization system for terrain, image, and geographic information system (GIS) data sets. The initial systems developed at Georgia Tech were non-realtime, but had fast generation of perspective scenes from multisource data sets and the ability to query for GIS attributes associated with terrain or 3D structures inserted within the terrain. The basic concept of a virtual GIS was implemented in real time using the Silicon Graphics Inc. graphics language. This system has been extended in capability to allow real-time traversal within a very large geographic database and to show the finest detail information available when it is near to the viewpoint. Extensive work has been done in the management of large arrays of information and the efficient paging of that information into the rendering system. An effective level-of-detail management system is implemented to dynamically allocate the appropriate amount of detail relative to the viewer location. A major use of this system has been in the area of battlefield visualization. The advent of OpenGL as a de facto standard has now made it possible to provide the VGIS capability on a number of other platforms, thereby extending its usefulness to other applications and users. OpenGL has been developed as a general-purpose graphics rendering toolkit that will be supported on various computers and special-purpose rendering systems. There are hardware and software implementations of OpenGL. This should allow VGIS to operate on many systems, taking advantage of specialized graphics hardware when it is present. This paper addresses the implementation of the VGIS system in OpenGL and the use of the system in driving the Evans and Sutherland Freedom series graphics rendering hardware.
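A minimal sketch of distance-driven level-of-detail selection of the kind the abstract describes, in Python; the range thresholds and tile geometry are assumptions, not VGIS's actual policy.

```python
# A minimal sketch, assuming fixed range bands; not the VGIS algorithm.
import math

# (maximum viewer distance in meters, LOD index); 0 is the finest level.
LOD_RANGES = [(500.0, 0), (2000.0, 1), (8000.0, 2)]

def select_lod(viewer, tile_center):
    """Pick the finest LOD whose range band covers the tile's distance."""
    d = math.dist(viewer, tile_center)
    for max_dist, lod in LOD_RANGES:
        if d <= max_dist:
            return lod
    return LOD_RANGES[-1][1]     # coarsest level beyond the last band

viewer = (0.0, 0.0, 100.0)
for center in [(100.0, 0.0, 0.0), (1500.0, 0.0, 0.0), (9000.0, 0.0, 0.0)]:
    print(center, "-> LOD", select_lod(viewer, center))
```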
Human Factors in Simulation
Unique high-fidelity wraparound driving simulator for human factors research applications
Olukayode Olofinboba
A high-fidelity simulation facility used primarily for human factors research in driving worlds is described. Driving simulation has always been plagued by the struggle to achieve a reasonable amount of realism. It has image quality requirements that are as demanding as those of low-altitude flying simulation. In addition, driving simulators have computationally intensive requirements for update rates and image delays to avoid loss of operator control. The wrap-around simulator project (WASP) was initiated at the University of Minnesota with the goal of creating a unique high-fidelity driving simulator that addressed the problems associated with earlier simulators. It was designed mainly for use in human factors research inside driving environments, though the possibility of expanding it to limited flying worlds exists. The resulting facility is capable of providing a 360-degree horizontal field of view to subjects, hence its name. It uses powerful graphics and data collection computers to maintain desired image quality, update frequency, image delay, and data collection frequency characteristics. The WASP is currently being used by human factors researchers to examine phenomena that were formerly unobservable in most driving simulators.
Advanced simulation technology used to reduce accident rates through a better understanding of human behaviors and human perception
Michael P. Manser, Peter A. Hancock
Human beings and technology have attained a mutually dependent and symbiotic relationship. It is easy to recognize how each depends on the other for survival. It is also easy to see how technology advances due to human activities. However, the role technology plays in advancing humankind is seldom examined. This presentation examines two research areas in which advanced visual simulation systems play an integral and essential role in understanding human perception and behavior. The ultimate goal of this research is the betterment of humankind through reduced accident and death rates in transportation environments. The first research area examined involves the estimation of time-to-contact. A high-fidelity wrap-around simulator (RAS) was used to examine people's ability to estimate time-to-contact. The ability of people to estimate the amount of time before an oncoming vehicle will collide with them is a necessary skill for avoiding collisions. A vehicle approached participants at one of three velocities and, while en route to the participant, the vehicle disappeared. The participants' task was to respond at the moment they judged the vehicle would have reached them. Results are discussed in terms of the accuracy of time-to-contact estimates and the practical applications of the results. The second area of research investigates the effects of various visual stimuli on underground transportation tunnel walls on the perception of vehicle speed. A RAS is paramount in creating visual patterns in peripheral vision; flat-screen or front-screen simulators do not have this ability. Results are discussed in terms of speed perception and the application of these results to real-world environments.
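For context, the quantity being estimated in the first study is first-order time-to-contact: distance divided by closing speed. The Python sketch below computes it for an occluded approaching vehicle; the distances and speeds are invented for illustration and are not the study's parameters.

```python
# A minimal sketch of first-order time-to-contact; the numbers are
# invented and are not the experiment's actual conditions.
def time_to_contact(distance_m, closing_speed_mps):
    """Seconds until contact, assuming constant closing speed."""
    if closing_speed_mps <= 0.0:
        raise ValueError("vehicle must be approaching")
    return distance_m / closing_speed_mps

# A vehicle disappears 60 m away at each of three approach speeds.
for v in (10.0, 15.0, 20.0):    # m/s
    print(f"{v:4.1f} m/s -> {time_to_contact(60.0, v):.1f} s to contact")
```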
Development of a simulator to investigate pilot decision making in free flight
Stephen F. Scallen, Kip Smith, Peter A. Hancock
In response to the deterioration of air traffic control (ATC) technology, the Federal Aviation Administration (FAA) has initiated a program of study to determine the implications of a distributed control structure, 'free flight', in which pilots would be given authority for navigation and routing decisions. This paper discusses a simulator developed to define constraints on safe and effective pilot decision making in the proposed 'free flight' structure. The simulator's design goals were the detailed reproduction of cockpit navigation displays, real-time updating of airspace information, and the flexibility to support dynamic manipulations of the environment. The simulator is housed in the fuselage of a single-engine aircraft and supports modern glass-cockpit instrumentation, including a primary flight display, a navigation display with proximity warning system, a flight management system display with keyboard input device, and numerous control switches. Unique software capabilities include data collection, data analysis, and data playback. A console control workstation also allows the dynamic manipulation of drone aircraft in simulated air traffic scenarios. At runtime the simulator captures pilot control actions and the location of all traffic.
Navigation in virtual environments
Erik Arthur, Peter A. Hancock, Susan Telke
Virtual environments show great promise in the area of training. Although such synthetic environments project homeomorphic physical representations of real-world layouts, it is not known how individuals develop models to match such environments. To evaluate this process, the present experiment examined the accuracy of triadic representations of objects that had been learned previously under different conditions. The layout consisted of four differently colored spheres arranged on a flat plane. These objects could be viewed in either a free-navigation virtual environment condition (NAV) or a single body position (SBP) virtual environment condition. The first condition allowed active exploration of the environment, while the latter allowed the participant only a passive opportunity to observe from a single viewpoint. Viewing condition was a between-subjects variable, with ten participants randomly assigned to each condition. Performance was assessed by the response latency to judge the accuracy of a layout of three objects over different rotations. Results showed linear increases in response latency as the rotation angle increased from the initial perspective in the SBP condition; the NAV condition did not show a similar effect of rotation angle. These results suggest that spatial knowledge acquired from virtual environments through navigation is similar to that acquired through actual navigation.
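The reported SBP effect is a linear increase of latency with rotation angle, the classic mental-rotation signature. The Python sketch below fits such a line to invented data, simply to make the shape of the claimed relationship concrete; the numbers are not the study's results. (statistics.linear_regression requires Python 3.10 or later.)

```python
# A minimal sketch with invented data points; not the study's results.
import statistics

angles = [0, 45, 90, 135, 180]        # rotation from the learned view (deg)
latency = [1.1, 1.4, 1.8, 2.1, 2.5]   # hypothetical response latencies (s)

slope, intercept = statistics.linear_regression(angles, latency)
print(f"latency ~ {intercept:.2f} s + {slope * 1000:.1f} ms/deg * angle")
```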
Simulation and Database Development Algorithms
Parallel-distributed mobile robot simulator
Hiroyuki Okada, Minoru Sekiguchi, Nobuo Watanabe
The aim of this project is to achieve an autonomous learning and growth function based on active interaction with the real world. The system should also be able to autonomously acquire knowledge about the context in which jobs take place and how the jobs are executed. This article describes a parallel distributed mobile robot system simulator with an autonomous learning and growth function. The autonomous learning and growth function we are proposing is characterized by its ability to learn and grow through interaction with the real world. When the mobile robot interacts with the real world, the system compares the virtual environment simulation with the interaction result in the real world. The system then improves the virtual environment to match the real-world result more closely. In this way the system learns and grows. It is very important that such a simulation be time-realistic. The parallel distributed mobile robot simulator was developed to simulate the space of a mobile robot system with an autonomous learning and growth function. The simulator constructs a virtual space faithful to the real world and also integrates the interfaces between the user, the actual mobile robot, and the virtual mobile robot. Using an ultrafast CG (computer graphics) system (FUJITSU AG series), time-realistic 3D CG is displayed.
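A minimal sketch of the learn-and-grow loop described above: the simulator nudges an internal parameter until its prediction matches what the real robot did. The one-parameter friction model, the update rate, and all numbers are assumptions for illustration, not the authors' method.

```python
# A minimal sketch, assuming a one-parameter world model; not the
# authors' method. The stand-in 'real robot' hides a true friction of 0.3.
def predicted_travel(command_m, friction):
    return command_m * (1.0 - friction)

def measured_travel(command_m, true_friction=0.3):
    return command_m * (1.0 - true_friction)   # stands in for the real robot

friction_estimate = 0.0
for step in range(20):
    error = predicted_travel(10.0, friction_estimate) - measured_travel(10.0)
    friction_estimate += 0.05 * error   # pull the virtual world toward reality
print(f"converged friction estimate: {friction_estimate:.3f}")   # -> ~0.300
```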
Civilian Applications of Simulation
Constructive simulation for emergency management training
Mikel D. Petty, Mary P. Slepow
The Plowshares project applied military constructive simulation technology to training for emergency management. The project team enhanced the US Army's Janus simulation model to support emergency management scenarios that include hurricanes, fires, and chemical spills. The enhanced Janus software, known as TERRA, can be used in a county emergency operations center to provide the stimulus for training events structured as command post exercises. The first phase of the project culminated in a 'proof of principle demonstration' that occurred in August 1995. In that demonstration the emergency operations center of Orange County Florida conducted a hurricane response exercise using the TERRA system.
Development of a spatial information database to facilitate mitigation of flood damages resulting from tropical storm Alberto in southwest Georgia, July 1994
Nickolas L. Faust, J. W. Musser, S. J. Alhadeff, et al.
Flooding in excess of 100-year recurrence interval streamflows occurred in Georgia along the Flint and Ocmulgee Rivers and their tributaries in July 1994 as a result of rainfall from Tropical Storm Alberto. In order to facilitate mitigation of flood damages, a variety of spatial information was required by Federal, State, regional, and local agencies as well as by utility companies. An interagency spatial information team was assembled immediately following the flood at the request of the Federal Emergency Management Agency. The U.S. Geological Survey led the formation of this team, which consisted of representatives from 22 Federal, State, and local agencies. This team rapidly constructed spatial-data sets for use by team participants and other entities assisting communities with flood-recovery efforts. In the aftermath of the devastating floods in the Mississippi River basin in 1993, a similar interagency flood-recovery team was formed to identify and supply spatial-data sets using geographic information system (GIS) technology. Using the interagency team formed after the Mississippi flood as an example, the interagency team in Georgia identified and assembled spatial-data sets needed to delineate flood extent, estimate flood-recurrence intervals at selected stream locations, support short- and long-term recovery plans, and document the historic flood event caused by Tropical Storm Alberto. Currently, spatial data collected by the USGS and regional flood-estimating equations are being used to develop experimental flood-extent models. When coupled with computerized three-dimensional visualization tools under development by the Georgia Tech Research Institute staff, a flood-extent model could depict a flooded area in a readily understandable manner. If successful, this experiment could lead to calibrated flood-extent models suitable for coupling with real-time stream-stage information. Such modeling tools could be useful for monitoring flood extent in real time and for making flood-extent warnings and predictions. One important use of visual presentations of flood-extent model outputs would be to convey the effects of flooding on homes, businesses, agriculture, and civil infrastructure to the public. Keywords: flooding, Georgia, spatial data, geographic information system.
Simulation and visualization of Martian rover
William Lincoln
Mars Pathfinder, launching in December 1996 and landing July 4, 1997, will demonstrate a low-cost delivery system to the surface of Mars. A rover will be deployed to perform mobility tests, image its surroundings, and place a spectrometer against rocks to make elemental composition measurements. After impact on the surface of Mars, the lander will deploy its three solar panels for power, the camera will view the surroundings, and the rover will be positioned for deployment to the surface. First, the lander will transmit the engineering and science data collected during descent through Mars' thin atmosphere. Then its camera will take a panoramic image of its surroundings and begin transmitting it directly to Earth at a few hundred bits per second. The rover, which will have been carried in a stowed configuration with the body lowered, will extend to its full height before it leaves the lander. It will roll down a deployment ramp to the surface and will then be independent except for using the lander's data and communications functions for contact with Earth. After the lander transmits its engineering data and panorama image to Earth, much of its mission will be focused on supporting the rover with imaging, telecommunications, and data storage. The rover, named Sojourner, has a rocker-bogie suspension system. A computer-generated image of the rover is shown in figure 1. The rocker-bogie suspension system utilizes a six-wheel drive platform without axles or springs. The system kinematically adapts to terrain geometry and can negotiate obstacles twice the wheel diameter. Precise steering is performed with steering actuators mounted above the four outer wheels. The rover can turn in place. The Martian environment is, to a great degree, uncertain. Detailed information about the terrain's grades and soil characteristics is not available. As such, the behavior of the rover on the surface of Mars is unknown.
Simulation and Database Development Algorithms
Integrated-area and feature-matching approach to the automated extraction of digital elevation matrices
James J. Pearson, Neil F. Carter, Fidel Paderes Jr.
Widespread use of detailed 3D databases for simulation and other applications is currently impeded by the cost of the labor-intensive database generation process. Improvements to the efficiency of the process will involve many changes, especially increases in the degree of automation. Perhaps equally important will be process changes, such as the elimination of duplicative steps and the tighter integration of the remaining steps. This paper describes an approach to this integration in the area of terrain elevation extraction and illustrates it with suggestive examples.
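As a minimal sketch of the area-matching half of such an approach, the Python snippet below computes normalized cross-correlation between a template patch and positions in a search row, the classic kernel of stereo DEM extraction. The one-dimensional patches and intensities are invented; this is not the authors' integrated algorithm.

```python
# A minimal sketch of the area-matching kernel, in one dimension with
# invented intensities; not the authors' integrated algorithm.
import math

def ncc(a, b):
    """Normalized cross-correlation of two equal-length intensity patches."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) *
                    sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0

template = [10, 12, 30, 28, 11]          # patch from the left image
search_row = [9, 10, 13, 31, 27, 10, 8]  # row from the right image
# Slide the template across the row; the best offset is the disparity.
scores = [ncc(template, search_row[i:i + len(template)])
          for i in range(len(search_row) - len(template) + 1)]
best = max(range(len(scores)), key=scores.__getitem__)
print(f"best offset {best}, correlation {scores[best]:.3f}")
```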
Development and application of an object-oriented graphical environment for the simulation of space-based sensing systems
Brian Barnhardt, Sean Rucker, David A. Bearden, et al.
The simulation of complex systems under development requires flexibility to allow for changing system requirements and constraints. The object-oriented paradigm provides an environment suitable for establishing flexibility, rapid reconfiguration of new architectures, and integration of new models. This paper outlines the development and application of the Brilliant Eyes simulator (BESim), sponsored by the US Air Force Space and Missile Systems Center. BESim simulates the Space and Missile Tracking System, formerly known as Brilliant Eyes, which represents the low-earth-orbiting component of the space-based infrared system. BESim has powerful tools for simulation setup and analysis of results. The pre-processor enables the user to specify system characteristics, output data collection, external data interfaces, and modeling fidelity. The post-processor consists of a graphical user interface which allows easy access to all simulation output in graphical or tabular form. This includes 2D and 3D graphical playback of performance results.
Continuous adaptive terrain modeling for DIS and other simulation applications
Carl Suttle
Interoperability across heterogeneous simulators presents challenges for visual simulations and the visual databases within those simulators. Advances in visual database modeling capability are required, and are now possible because of the increased processing and graphics power of image generators. The old and new modeling capabilities and design constraints discussed in this paper provide insight into how interoperability may become less challenging for OpenFlight-format database applications.
Military Applications in Simulation
High-fidelity infrared scene simulation at Georgia Tech
Albert D. Sheffer Jr., J. Michael Cathcart, Nickolas L. Faust
The Georgia Tech Research Institute has for more than fifteen years developed and used digital scene models for IR simulation applications. Initial work focused on synthetic scenes of small extent but very high resolution (less than one meter); more recently, emphasis has shifted to larger scenes derived from measured data sources with resolution at one meter or slightly coarser. One reason for the shift in emphasis has been the emergence of the GTSIMS simulation environment, in which digital IR seeker and missile models and models of other EO/IR sensor systems used in tactical missile engagement scenarios require larger scene extents (typically three to ten kilometers on a side) because of their potential viewing geometries and fields of view. In GTSIMS these sensor and missile models are integrated in a unified software system with the IR scene models and the image rendering software that has been developed along with them. The GTSIMS missile engagement capabilities, including many aspects of scene configuration and signature prediction, are tied together through a graphical user interface called XGTSIMS. This paper will discuss recent IR scene models developed for GTSIMS, from the methodologies used to create the data sets behind the models to the use of these models in GTSIMS via XGTSIMS, and will then proceed to discuss current and planned efforts toward real-time image generation of large, complex scenes for IR simulation purposes.
BEAMS cloud model for high-fidelity simulations
Sean G. O'Brien, John C. Giever, Steven J. McGee
The BEAMS (battlefield emission and multiple scattering) model is a 26-stream radiative transfer algorithm for the prediction of diffuse radiance in finite 3D non-uniform aerosol clouds. It has been developed by the US Army Research Laboratory as a visualization tool that satisfies the need for visual simulations to show realistic variation of a cloud with varying viewing angle, cloud geometry, optical depth, and external incident radiation. Uses of this high-fidelity model for simulating propagation effects in a realistic 3D terrain environment are shown. Examples are shown at visible wavelengths for both daytime and nighttime scenarios (where the cloud is artificially illuminated by flares). Other examples are shown for infrared wavelengths, which use the GTSIMS model for terrain radiance prediction. Performance issues of cloud rendering are also discussed, including modeling the cloud with Performer billboards on Silicon Graphics computers.
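For orientation, the optical depth the abstract refers to controls direct-beam attenuation via the Beer-Lambert law; BEAMS's contribution is the multiply scattered (diffuse) component on top of that. The Python sketch below integrates an invented extinction profile along a ray to obtain optical depth and direct transmittance; it is not the BEAMS algorithm itself.

```python
# A minimal sketch of Beer-Lambert attenuation through a non-uniform
# cloud; the Gaussian extinction profile is invented, not a BEAMS input.
import math

def extinction(x):
    """Hypothetical extinction coefficient (1/m) along the ray path."""
    return 0.02 * math.exp(-((x - 50.0) / 20.0) ** 2)   # densest at x = 50 m

def direct_transmittance(length_m, steps=1000):
    """Midpoint-rule integral of extinction -> optical depth -> transmittance."""
    dx = length_m / steps
    tau = sum(extinction((i + 0.5) * dx) * dx for i in range(steps))
    return math.exp(-tau), tau

t, tau = direct_transmittance(100.0)
print(f"optical depth {tau:.2f}, direct-beam transmittance {t:.3f}")
```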
Dynamic environmental effects model: a high-fidelity simulation system with dynamic environmental effects
John H. Christiansen, A. Peter Campbell, John R. Hummel
The dynamic environmental effects model (DEEM) is a sophisticated software architecture used to determine environmental impacts on military and civilian operations. DEEM is an instantiation of a more general architecture, the dynamic information architecture system (DIAS). DEEM is being used in a number of applications relevant to training, test support, and operational support and decision making in both the DOD and civilian communities. In one application, DEEM is being used to determine the impact of the environment on military operations, such as the trafficability and mobility of forces. DEEM is also being used as the software framework controlling a theater-level weather forecast model for the US Air Force. In another application, DEEM has been used as a tool for resource analysis in a disaster relief study. In this paper, we summarize the key features of DEEM and give a number of examples of how it is being used to support military and civilian applications.
Engineering tool kit for incorporating natural environment influences
Sandra K. Weaver, William A. Lanich
Creators of simulations have a myriad of engineering tools to facilitate database creation, signature creation, rendering, visualization, and other essential elements of the simulation process. But the dominant influence on the appearance of the background, target, and path, the natural environment, has always been intractable. Now efforts are being considered to develop natural environment engineering tools that will simplify and facilitate the systematic incorporation of phenomenologically correct environmental effects in simulations. The authors, a meteorologist and a systems engineer, describe the few extant tools and efforts to create others, and suggest a forum to define the needs.
Infrared scene simulation to support installed-systems avionics test and evaluation
Peter M. Crane
The US Air Force and Navy are cooperating in a program to increase their capabilities for installed-systems avionics test. Complete aircraft will be suspended in very large, shielded, anechoic test chambers. Hardware-in-the-loop and man-in-the-loop tests will be conducted by simulating combat mission scenarios in which aircraft systems will be stimulated as they would be on an actual mission. Current installed-systems test facilities are limited to RF threats. The improvements will include radar target generation; communication, navigation, and identification stimulation; and an infrared scene stimulator (IRSS). The IRSS must generate a realistic, real-time simulation of the IR environment, including complex backgrounds, multiple dynamic targets, IR countermeasures, and atmospheric effects. Current computer image generators can create highly detailed, geo-specific, real-time visual imagery. However, high-fidelity, detailed, flexible IR predictions including source phenomenology and atmospheric effects typically require several seconds to several hours per frame. Further, IR predictions require significantly more geo-specific data than is widely available for large gaming areas. The goal of IRSS development, therefore, is to integrate two well-developed technologies: real-time image generation and IR prediction. In this paper, potential solutions and necessary compromises will be discussed and evaluated.