Hardware-in-the-loop environment facility to address pilot-vehicle-interface issues of a fighter aircraft
Meenige Pandurangareddy
The evolution of the Pilot-Vehicle-Interface (PVI) of a fighter aircraft is a complex task. The PVI design involves both static and dynamic issues. Static issues involve the study of the reach of controls and switches, ejection path clearance, readability of indicators and display symbols, etc. Dynamic issues involve the study of the effect of aircraft motion on display symbols, pilot emergency handling, situation awareness, weapon aiming, etc. This paper describes a method of addressing the above issues by building a facility with a cockpit that is ergonomically similar to the fighter cockpit. The cockpit is also fitted with actual displays, controls and switches, and is interfaced with various simulation models of the aircraft and with outside-window image generators. The architecture of the facility is designed to represent the latencies of the aircraft and facilitates replacement of simulation models with actual units. A parameter-injection facility can be used to induce faults in a comprehensive manner. Pilots can use the facility for everything from familiarising themselves with procedures through engine start, take-off, navigation and weapon aiming to emergency handling and landing. This approach is being followed, and further enhanced, on the Cockpit-Environment-Facility (CEF) at the Aeronautical Development Agency (ADA), Bangalore, India.
Improved obstacle detection using passive ranging
Boeing-SVS (BSVS) has been developing a Passive Obstacle Detection System (PODS) under a Small Business Innovative Research (SBIR) contract with NAVAIR. This SBIR will provide image-processing algorithms for the detection of sub-pixel curvilinear features (e.g., power lines, poles and suspension cables). These algorithms will be implemented in the SBIR to run on custom processor boards in real time. As part of the PODS development, BSVS has conducted a study to examine the feasibility of incorporating a passive ranging solution with the obstacle-detection algorithms. This passive ranging capability will not only provide discrimination between power lines and other naturally occurring linear features, but will also provide ranging information for other features in the image. Controlled Flight Into Terrain (CFIT) is a leading cause of both military and civil/commercial rotorcraft accidents. Ranging to other features could be invaluable in detecting other obstacles within the flight path and therefore in preventing CFIT accidents. The purpose of this paper is to review the PODS system (presented earlier) and to discuss several methods for passive ranging and the performance expected from each.
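One classic passive-ranging scheme, offered here only as an illustrative sketch and not as the method PODS adopted, triangulates the range to a static obstacle from two bearing measurements taken at known points along the flight path:

```python
import numpy as np

def range_from_bearings(baseline, theta1, theta2):
    """Triangulate the range to a static obstacle from two bearing
    measurements (radians, relative to the flight path) taken at points
    separated by `baseline` metres of own-ship motion."""
    # In the triangle (pos1, pos2, obstacle) the interior angles are
    # theta1 at pos1 and (pi - theta2) at pos2, so the angle subtended
    # at the obstacle is (theta2 - theta1).
    alpha = theta2 - theta1
    if alpha <= 0:
        raise ValueError("bearing must open up as the sensor closes in")
    # Law of sines: range from the second observation point.
    return baseline * np.sin(theta1) / np.sin(alpha)

# Example: bearing grows from 30 deg to 35 deg over 100 m of travel
r = range_from_bearings(100.0, np.radians(30.0), np.radians(35.0))
```

The bearing rate is what carries the range information, which is why purely passive ranging requires sensor motion relative to the scene.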
Real-time image fusion: a vision aid for helicopter pilotage
Moira I. Smith,
Adrian N. Ball,
David Hooper
This paper reports on the development of a real-time image fusion demonstration system using COTS technology. The system is designed to operate in a highly dynamic helicopter environment and across a challenging variety of operational conditions. The targeted application for the demonstrator system is low-level helicopter nap-of-the-earth flight, with particular emphasis placed on providing improved pilot vision for an increased situation awareness capability. This work provides key technology assessment for the UK Ministry of Defence's Day/Night All Weather helicopter programme. The current operational requirement is to fuse imagery from two sensors - one a thermal imager, the other an image intensifier or visible-band camera. However, provision has been made within the software and hardware architectures to support scaling to different cameras and further processors. Over the past two years the research has matured from algorithmic development and analysis using pre-registered recorded imagery to the current real-time image fusion demonstrator system, which performs registration and warping prior to an adaptive image-processing control scheme. This paper concentrates on the design and hardware implementation issues associated with the process of moving from experimental non-real-time algorithms to a real-time image fusion demonstrator. Background information about the programme is provided to help put the current system in context, and a brief overview of the algorithm set is also given. The design and hardware implementation issues associated with this scheme are discussed and results from initial field trials are presented.
Database verification using an imaging sensor
In aviation, synthetic vision systems produce artificial views of the world to support navigation and situational awareness in poor visibility conditions. Synthetic images of the local terrain are rendered from a database and registered through the aircraft navigation system. Because the database reflects, at best, a nominal state of the environment, it needs to be verified to ensure its consistency with reality. This paper presents a technique for real-time verification of databases using a single imaging device of any type. The technique is differential and, as such, requires motion of the sensor. The geometric information in the database is used to predict how the sensor image should change. If the measured change differs from the predicted change, the database geometry is assumed to be incorrect. Geometric anomalies are localized and their severity is estimated in absolute terms using a minimization process. The technique is tested against real flight data acquired by a helicopter to verify a database consisting of a digital elevation map. Results show that geometric anomalies can be detected and that their location and importance can be evaluated.
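The prediction step can be illustrated with a toy pinhole-camera model (all function names and numbers here are hypothetical, not from the paper): project the database geometry from two camera poses, and flag points whose measured image motion disagrees with the motion predicted from the stored geometry:

```python
import numpy as np

def project(points, cam_z, f=500.0):
    """Pinhole projection of 3-D points (N, 3) for a camera at
    (0, 0, cam_z) looking along +z; returns (N, 2) image coordinates."""
    p = points - np.array([0.0, 0.0, cam_z])
    return f * p[:, :2] / p[:, 2:3]

def geometry_residual(db_points, uv_frame1, uv_frame2, dz):
    """Compare the image motion predicted from the database geometry
    with the motion actually measured between two frames taken `dz`
    metres apart along the optical axis. Large residuals flag points
    where the database geometry disagrees with reality."""
    predicted_flow = project(db_points, dz) - project(db_points, 0.0)
    measured_flow = uv_frame2 - uv_frame1
    return np.linalg.norm(predicted_flow - measured_flow, axis=1)

# A correct database point and one whose stored depth is wrong by 200 m
true_pts = np.array([[50.0, 20.0, 800.0], [-30.0, 10.0, 600.0]])
db_pts   = np.array([[50.0, 20.0, 800.0], [-30.0, 10.0, 800.0]])
uv1, uv2 = project(true_pts, 0.0), project(true_pts, 50.0)
res = geometry_residual(db_pts, uv1, uv2, 50.0)
```

Because the residual scales with the depth error and the sensor motion, a minimization over the database geometry can localize and size the anomaly, which is the spirit of the paper's approach.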
Display requirements for enhanced situational awareness during take-off
David Zammit-Mangion,
Martin Eshelby
As an aircraft progresses down the runway, it is often operating close to its limits of performance and consequently safety margins are low. During the manoeuvre, the pilot holds the ultimate decision for any action and hence the quality and level of his situational awareness are critical to the safe continuation of flight. This is reflected in the crew's tasks of monitoring the aircraft and its systems during take-off. Despite the introduction of refined procedures and improved cockpit systems and displays, certain information that can be crucial to safety is still limited and in certain cases even unavailable. This paper analyses the current situation in the cockpit in the take-off environment, focusing on the significance of providing performance-related information to the crew. Options available with current technologies are discussed and requirements that support realizable concepts for new displays enhancing situational awareness in aircraft performance during take-off are suggested.
Generic experimental cockpit for evaluating pilot assistance systems
The workload of aircraft crews, especially during taxiing, take-off, approach and landing under adverse weather conditions, has increased considerably due to the continuous growth of air traffic. New pilot assistance systems can improve the situational awareness of the aircrew and consequently increase safety and reduce workload. For demonstration and human-factors evaluation of such new systems, the DLR has built a Generic Experimental Cockpit Simulator equipped with a modern glass cockpit and a collimated display. The Primary Flight Display (PFD), the human-machine interface for an Advanced Flight Management System (AFMS), a taxi guidance system called Taxi and Ramp Management and Control (TARMAC) and an Enhanced Vision System (EVS) based on real-time simulation of MMWR and FLIR sensors are integrated into the cockpit on high-resolution TFT touch screens. Situational awareness is further enhanced by the integration of a raster/stroke-capable Head-Up Display (HUD), which spares the pilot's eyes constant re-accommodation between the Head-Down Displays and the outside view. This contribution describes the technical implementation of the PFD, the taxi guidance system and the EVS on the HUD. The HUD is driven by an ordinary PC, which provides the ARINC data for the stroke generator and the video signal for the raster image. The PFD uses the built-in stroke generator and is shown during all operations. During taxi operations the cleared taxi route and the positions of other aircraft are displayed via raster. The images from the real-time simulation of the MMWR and FLIR sensors are presented via raster on demand. During approach and landing a runway symbol or a 3D wire-frame database is shown which exactly matches the outside view, and obstacles on the runway are highlighted. The runway position is automatically calculated from the MMWR sensor, as reported in previous contributions.
DEM integrity monitor experiment (DIME) flight test results
This paper discusses flight test results of a Digital Elevation Model (DEM) integrity monitor. The DEM Integrity Monitor Experiment (DIME) was part of the NASA Synthetic Vision System (SVS) flight trials at Eagle-Vail, Colorado (EGE) in August/September 2001. SVS provides pilots with either a Head-Down Display (HDD) or a Head-Up Display (HUD) containing aircraft state, guidance and navigation information, and a virtual depiction of the terrain as viewed 'from the cockpit'. SVS has the potential to improve flight safety by increasing situational awareness (SA) in low- to near-zero-visibility conditions to a level of awareness similar to daytime clear-weather flying. This SA improvement not only enables low-visibility operations, but may also reduce the likelihood of Controlled Flight Into Terrain (CFIT). Because of the compelling nature of SVS displays, high integrity requirements may be imposed on the various databases used to generate the imagery on the displays, even when the target SVS application does not require an essential or flight-critical integrity level. DIME utilized external sensors (WAAS and radar altimeter) to independently generate a 'synthesized' terrain profile. A statistical assessment of the consistency between the synthesized profile and the profile stored in the DEM provided a fault-detection capability. The paper discusses the basic DIME principles and shows DIME performance for a variety of approaches to Runways 7 and 25 at EGE. The monitored DEMs are DTED Level 0, USGS with a 3-arcsec spatial resolution, and a DEM provided by NASA Langley. The test aircraft was a Boeing 757-200.
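The core consistency test can be sketched as follows (a simplified illustration with made-up tolerances, not the statistical assessment actually flown in DIME): subtract the radar-altimeter height above ground from the GNSS altitude to synthesize a terrain profile, then test the disparity statistics against the stored DEM:

```python
import numpy as np

def dem_disparity(gnss_alt, radar_agl, dem_elev):
    """Synthesized terrain elevation (GNSS altitude minus radar-altimeter
    height above ground) minus the stored DEM profile, in metres."""
    return (gnss_alt - radar_agl) - dem_elev

def dem_consistent(gnss_alt, radar_agl, dem_elev,
                   bias_tol=10.0, sigma_tol=15.0):
    """Simple fault-detection test: declare the DEM inconsistent when
    the disparity mean or spread exceeds (hypothetical) tolerances."""
    d = dem_disparity(gnss_alt, radar_agl, dem_elev)
    return abs(d.mean()) < bias_tol and d.std() < sigma_tol

# Synthetic approach profile: aircraft 500 m above gently rolling terrain
rng = np.random.default_rng(1)
terrain = 2000.0 + 50.0 * np.sin(np.linspace(0.0, 4.0, 200))
gnss = terrain + 500.0                      # GNSS altitude above MSL
radar = 500.0 + rng.normal(0.0, 3.0, 200)   # radar AGL with 3 m noise
```

With a healthy DEM the disparity is dominated by sensor noise; a biased or shifted DEM drives the mean disparity outside the tolerance and trips the monitor.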
Aerial monitoring and measurement of forest fires
This paper presents a system for forest-fire monitoring using aerial images. The system uses images taken from a helicopter, the GPS position of the helicopter, and information from a Geographic Information System (GIS) to locate the fire and to estimate its properties in real time. Currently, the images are taken by a non-stabilized camera. Image processing for stabilization and motion estimation is then applied to cancel vibration and to estimate the change in camera orientation. Another image-processing stage is the computation of fire-front and flame-height features in the images. This process is based on color processing and thresholding, followed by contour computation. Finally, the fire front is automatically geo-located by projecting the features onto the terrain model obtained from the GIS. Furthermore, an estimate of the flame height is obtained. The aerial image processing, automatic georeferencing and measurement have been integrated in a forest-fire monitoring system in which several moving or fixed visual and infrared cameras can be used. The system provides the evolution of the fire front and the flame height in real time, and obtains a 3D perception model of the fire. The paper shows results obtained with images taken in real forest-fire experiments, in the framework of the INFLAME project funded by the European Commission.
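The color-thresholding and contour stage might look like the following numpy sketch; the thresholds and helper names are illustrative, not the tuned values of the INFLAME system:

```python
import numpy as np

def fire_mask(rgb, r_min=180, margin=40):
    """Threshold-based fire segmentation on an (H, W, 3) uint8 image:
    fire pixels are assumed bright and strongly red-dominant."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return (r > r_min) & (r - g > margin) & (r - b > margin)

def front_contour(mask):
    """Fire-front pixels: fire pixels with at least one non-fire
    4-neighbour (a minimal stand-in for contour computation)."""
    pad = np.pad(mask, 1)                      # pads with False
    interior = (pad[:-2, 1:-1] & pad[2:, 1:-1] &
                pad[1:-1, :-2] & pad[1:-1, 2:])
    return mask & ~interior

img = np.zeros((7, 7, 3), dtype=np.uint8)
img[2:5, 2:5] = [255, 80, 40]                  # synthetic 3x3 fire blob
contour = front_contour(fire_mask(img))
```

Projecting each contour pixel through the camera model onto the GIS terrain then yields the geo-located fire front.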
Obstacle detection algorithms for rotorcraft navigation
Wires can be barely visible and thus present a serious hazard to rotorcraft flying at low altitudes. Vision systems capable of detecting wires in time to avoid collisions must be able to find, in the input images, curves less than one or two pixels wide. This paper describes a study of the performance of wire detection using the sub-pixel edge detector algorithm proposed by Steger. The algorithm was tested on a set of images synthetically generated by combining real outdoor images with computer-generated wire graphics. Its performance was evaluated at both the pixel and the wire level. It was observed that the algorithm performs well, provided that the wires are not too thin (or distant) and that some post-processing is performed to remove false alarms due to clutter.
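The core idea behind Steger's detector, responding where the image has a strong negative second derivative across a bright curvilinear structure, can be sketched with a Hessian eigenvalue analysis (a numpy-only illustration of the principle, not Steger's full sub-pixel algorithm):

```python
import numpy as np

def line_strength(img, sigma=1.5):
    """Line saliency map: magnitude of the most negative Hessian
    eigenvalue after Gaussian smoothing. Bright thin structures
    (wires) produce a strong negative curvature across the line."""
    # Separable Gaussian smoothing via 1-D convolutions
    x = np.arange(-int(3 * sigma), int(3 * sigma) + 1)
    g = np.exp(-x**2 / (2 * sigma**2))
    g /= g.sum()
    sm = np.apply_along_axis(lambda r: np.convolve(r, g, 'same'), 1, img)
    sm = np.apply_along_axis(lambda c: np.convolve(c, g, 'same'), 0, sm)
    # Hessian entries from finite differences
    gy, gx = np.gradient(sm)
    gyy, gyx = np.gradient(gy)
    gxy, gxx = np.gradient(gx)
    # Smaller eigenvalue of [[gxx, gxy], [gxy, gyy]] per pixel
    tr = gxx + gyy
    det = gxx * gyy - gxy * gyx
    lam = 0.5 * (tr - np.sqrt(np.maximum(tr**2 - 4.0 * det, 0.0)))
    return np.maximum(-lam, 0.0)
```

Steger's method additionally interpolates the ridge position to sub-pixel accuracy along the eigenvector direction and links responses into curves; this sketch only produces the per-pixel saliency that such linking would start from.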
Efficient management of terrain elevation and feature data for 3D guidance displays
Klaus Nothnagel,
Gottfried Sachs
This paper presents an efficient approach for rendering terrain elevation and feature data, applicable to cockpit displays presenting guidance information in a 3-dimensional format. A primary goal is to achieve real-time capability with a high update rate (25-30 images per second) using a low-cost PC-based computer. Basically, the data management approach consists of a hierarchical adaptive triangulation technique for handling the terrain elevation data in memory. The hierarchical surplus of the interpolation is used as a criterion to detect redundant data and serves to control the error of the terrain approximation. A high frame rate can be reached with this level-of-detail mechanism, including an easy visibility check due to the hierarchical adaptive strategy. Furthermore, a special organization of the data is developed in order to reduce the required memory and to manage a large database. For this purpose, parts of the data can be exchanged during runtime using a particular data format on hard disk. On multiprocessor computer architectures, the data-reloading process can be performed in the background without interrupting the rendering, by starting parallel tasks (threads), while on single-processor architectures a scheduler shares CPU time between the graphics and reloading processes.
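The hierarchical-surplus criterion can be illustrated in one dimension (a simplified sketch, not the paper's triangulation code): the surplus of a grid point is its deviation from the linear interpolant of its hierarchical parents, and points with small surplus are redundant for a given error tolerance:

```python
import numpy as np

def hierarchical_surpluses(z):
    """Hierarchical surpluses of a 1-D elevation profile with 2**n + 1
    samples: at each level, the surplus of a midpoint is its deviation
    from the linear interpolant of its two hierarchical parents."""
    n = len(z) - 1                   # must be a power of two
    surplus = np.zeros(len(z))
    step = n
    while step >= 2:
        half = step // 2
        for i in range(half, n, step):
            surplus[i] = abs(z[i] - 0.5 * (z[i - half] + z[i + half]))
        step = half
    return surplus

def keep_points(z, tol):
    """Indices retained for a given error tolerance; small-surplus
    points are dropped as redundant (endpoints always kept)."""
    keep = hierarchical_surpluses(z) > tol
    keep[0] = keep[-1] = True
    return np.flatnonzero(keep)
```

On a perfectly linear profile every surplus is zero, so only the endpoints survive; a terrain feature raises the surpluses around it and forces local refinement, which is exactly how the surplus bounds the approximation error.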
Fuzzy-logic-based sensor fusion of images
The fusion of visual and infrared sensor images of potential driving hazards in static infrared and visual scenes is computed using the Fuzzy Logic Approach (FLA). The FLA is presented as a new method for combining images from different sensors to achieve an image that displays more information than either image separately. Fuzzy logic is a modeling approach that encodes expert knowledge directly and easily using rules. With the help of membership functions designed for the data set under study, the FLA can model and interpolate to enhance the contrast of the imagery. The Mamdani model is used to combine the images. The fused sensor images are evaluated with metrics that measure the increased perception of a driving hazard in the sensor-fused image; the metrics are correlated to experimental rankings of image quality. A data set containing IR and visual images of driving hazards under different types of atmospheric contrast conditions is fused using the FLA. A holographic matched-filter method (HMFM) is used to scan some of the more difficult images for automated detection. The image rankings are obtained by presenting imagery to subjects in the TARDEC Visual Perception Lab (VPL). The probability of detection of a driving hazard is computed using data obtained in observer tests. The matched filter is implemented for driving-hazard recognition with a spatial filter designed to emulate holographic methods. One possible automatic target recognition device implements a digital/optical cross-correlator that would process sensor-fused images of targets. Such a device may be useful for enhanced automotive vision or military signature recognition of camouflaged vehicles. A textured-clutter metric is compared to the experimental rankings.
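A minimal fuzzy-fusion sketch in the same spirit (the membership functions and weights below are invented for illustration and are not the TARDEC rule base) weights each sensor's pixel by a 'salient' membership driven by local contrast:

```python
import numpy as np

def ramp_membership(x, b):
    """Saturating 'high local contrast' membership: 0 at 0, 1 for x >= b."""
    return np.clip(x / b, 0.0, 1.0)

def fuzzy_fuse(ir, vis, b=0.2):
    """Mamdani-flavoured pixel-level fusion sketch: each sensor's local
    contrast drives a salience membership, and the defuzzified output is
    the membership-weighted mix of the two inputs, falling back to a
    plain average in flat regions (the +0.5 base weight)."""
    def local_contrast(img):
        # Deviation from the 4-neighbour mean (wraps at the borders)
        m = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
             np.roll(img, 1, 1) + np.roll(img, -1, 1)) / 4.0
        return np.abs(img - m)
    w_ir = 0.5 + ramp_membership(local_contrast(ir), b)
    w_vis = 0.5 + ramp_membership(local_contrast(vis), b)
    return (w_ir * ir + w_vis * vis) / (w_ir + w_vis)
```

A full Mamdani system would express these weightings as linguistic rules ("IF IR contrast is high THEN favour IR") with min/max inference and centroid defuzzification; the weighted average above is the degenerate single-rule case.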
Development and test of a low-cost 3D display for small aircraft
A low-cost 3D display and navigation system providing guidance information in a 3-dimensional format is described. The system, comprising an LC display, a PC-based computer for generating the 3-dimensional guidance information, and a navigation system providing D/GPS- and inertial-sensor-based position and attitude data, was realized using commercial-off-the-shelf components. Efficient computer software has been developed to generate the 3-dimensional guidance information with a high update rate. The guidance concept comprises an image of the outside world as well as a presentation of the command flight path, a predictor and other guidance elements in a 3-dimensional format.
Image processing in an enhanced and synthetic vision system
'Synthetic Vision' and 'Sensor Vision' complement each other in an ideal system for the pilot's situation awareness. To fuse these two data sets, the sensor images are first segmented by a k-means algorithm and features are then extracted by blob analysis. These image features are compared with the features of the projected airport data using fuzzy logic in order to identify the runway in the sensor image and to improve the aircraft navigation data. This process is necessary due to inaccurate input data, i.e., the position and attitude of the aircraft. After the runway has been identified, obstacles can be detected using the sensor image. The extracted information is presented on the pilot's display system and combined with the appropriate information from the MMW radar sensor in a subsequent fusion processor. A real-time image processing procedure is discussed and demonstrated with IR measurements from a FLIR system during landing approaches.
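The segmentation step can be sketched with a one-dimensional k-means over pixel intensities (a simplified, quantile-initialised illustration, not the system's actual implementation):

```python
import numpy as np

def kmeans_segment(img, k=3, iters=20):
    """1-D k-means on pixel intensities: returns a per-pixel cluster
    label image and the cluster means, e.g. to separate dark runway
    surface, mid-tone terrain and bright sky in an IR frame."""
    x = img.ravel().astype(float)
    # Deterministic initialisation at evenly spaced quantiles
    centers = np.quantile(x, np.linspace(0.0, 1.0, k))
    for _ in range(iters):
        # Assign every pixel to its nearest cluster centre
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        # Update each centre to the mean of its assigned pixels
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean()
    return labels.reshape(img.shape), centers
```

Blob analysis would then extract the connected regions of each label image, and their shape features would be matched against the projected runway outline.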
Flight test evaluation of tactical synthetic vision display concepts in a terrain-challenged operating environment
NASA's Aviation Safety Program, Synthetic Vision Systems Project is developing display concepts to improve pilot terrain/situational awareness by providing a perspective synthetic view of the outside world from an on-board database, driven by precise aircraft position information updated via Global Positioning System-based data. This work is aimed at eliminating visibility-induced errors and low-visibility conditions as a causal factor in civil aircraft accidents, as well as replicating the operational benefits of clear-day flight operations regardless of the actual outside visibility condition. A flight test evaluation of tactical Synthetic Vision display concepts was recently conducted in the terrain-challenged operating environment of the Eagle County Regional Airport. Several display concepts for head-up displays and head-down displays ranging from ARINC Standard Size A through Size X were tested. Several pilots evaluated these displays for acceptability, usability, and situational/terrain awareness while flying existing commercial airline operating procedures for Eagle County Regional Airport. All tactical Synthetic Vision display concepts provided measurable increases in the pilots' subjective terrain awareness over the baseline aircraft displays. The head-down display presentations yielded better terrain awareness than the head-up display synthetic vision concepts that were tested. Limitations in the head-up display concepts were uncovered that suggest further research.
Hybrid enhanced and synthetic vision system architecture for rotorcraft operations
Norah K. Link,
Ronald V. Kruk,
David McKay,
et al.
The ability to conduct rotorcraft search and rescue (SAR) operations can be limited by environmental conditions that affect visibility. Poor visibility compromises transit to the search area, the search for the target, descent to the site and departure from the search area. In a collaborative program funded by the Canadian Department of National Defence, CAE and CMC Electronics designed, and together with the Flight Research Laboratory of the National Research Council of Canada integrated and flight-tested an enhanced and synthetic vision system (ESVS) to examine the potential of the concept for SAR operations. The key element of the ESVS was a wide field-of-view helmet-mounted display which provided a continuous field-of-regard over a large range of pilot head movements. The central portion of the display consisted of a head-slaved sensor image, which was fused with a larger computer generated image of the terrain. The combination of sensor and synthetic imagery into a hybrid system allows the accurate detection of obstacles with the sensor while the synthetic image provides a continuous high-quality image, regardless of environmental conditions. This paper presents the architecture and component technologies of the ESVS 2000 TD, as well as lessons learned and future applications for the hybrid approach.
Mass data graphics requirements for symbol generators: example 2D airport navigation and 3D terrain function
The next generation of cockpit display systems will display mass data, which includes terrain, obstacle, and airport databases. Display formats will be two-dimensional and eventually three-dimensional. A prerequisite for the introduction of these new functions is the availability of certified graphics hardware. The paper describes the functionality and required features of an aviation-certified 2D/3D graphics board. This graphics board should expose low-level and high-level API calls very similar to OpenGL. All software and the API must be aviation-certified. As an example application, a 2D airport navigation function and a 3D terrain visualization are presented. The airport navigation format is based on a highly precise airport database following EUROCAE ED-99/RTCA DO-272 specifications. Terrain resolution is based on EUROCAE ED-98/RTCA DO-276 requirements.
Multiresolution terrain depiction and airport navigation function on an embedded SVS
Many of today's and tomorrow's aviation applications demand accurate and reliable digital terrain elevation databases. In particular, to enhance a pilot's situational awareness with future 3D synthetic vision systems, accurate, reliable, high-resolution terrain databases are required to offer a realistic and dependable terrain depiction. On the other hand, optimized or reduced terrain models are necessary to ensure real-time rendering and computing performance. In this paper a method for adaptive terrain meshing and depiction for SVS is presented. The initial data set is decomposed using the wavelet transform. By examining the wavelet coefficients, an adaptive surface approximation for various levels of detail is determined at runtime. Additionally, the dyadic scaling of the wavelet transform is used to build a hierarchical quad-tree representation of the terrain data. This representation enables fast interactive computations and real-time rendering methods. For the integrated airport navigation function, an airport mapping database compliant with the new DO-272 standard is processed and integrated into the realized system. The airport database used contains precise airport vector geometries with additional object attributes as background information. In conjunction, these data sets can be used for various airport navigation functions such as automatic taxi guidance. Both the multi-resolution terrain concept and the airport navigation function are integrated into a high-level certifiable 2D/3D scene-graph rendering system. It runs on an aviation-certifiable embedded rendering graphics board. The optimized combination of multi-resolution terrain, scene graph, and graphics board makes it possible to handle dynamic terrain models up to 1 arc-second resolution. The system and data processing acknowledge certification rules based on DO-178B, DO-254, DO-200A, DO-272, and DO-276.
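The wavelet-based level-of-detail idea can be illustrated in one dimension with the Haar transform (a toy sketch, not the certified implementation): decompose an elevation profile and keep, per level, only the detail coefficients whose magnitude exceeds the error tolerance:

```python
import numpy as np

def haar_1d(z):
    """One level of the 1-D Haar transform: pairwise averages
    (coarse approximation) and differences (detail coefficients)."""
    a = (z[0::2] + z[1::2]) / 2.0
    d = (z[0::2] - z[1::2]) / 2.0
    return a, d

def wavelet_lod(z, tol):
    """Multiresolution terrain sketch: decompose an elevation profile
    (length 2**n) with the Haar wavelet and count, per level, the
    detail coefficients exceeding `tol` -- only those need to be kept
    to reconstruct the terrain within the error tolerance."""
    kept = []
    a = np.asarray(z, float)
    while len(a) > 1:
        a, d = haar_1d(a)
        kept.append(int(np.sum(np.abs(d) > tol)))
    return kept            # finest level first
```

Smooth terrain yields almost no significant coefficients at fine levels, so distant or flat tiles can be rendered from the coarse approximations alone; the dyadic level structure maps naturally onto the quad-tree used for 2-D terrain tiles.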
Flitedeck 3D on the MX20
Today's Jeppesen approach charts depict a plan-view approach chart that includes approach procedure, terrain, obstacle, and airport information. The vertical profile of the procedure is replicated in a chart underneath. To integrate the plan-view and vertical chart elements, a 3D display format was realized. It combines all geometric chart elements into a Synthetic Vision System, depicting a 3D tunnel in the sky, terrain, obstacles, and airport information. Color coding is identical to the Jeppesen charts, and the symbology is extruded from the 2D chart symbology. This approach may be well suited to pilots accustomed to the conventional charts.
Synthetic vision as an integrated element of an enhanced vision system
Enhanced Vision Systems (EVS) and Synthetic Vision Systems (SVS) have the potential to allow vehicle operators to benefit from the best that various image sources have to offer. The ability to see in all directions, even in reduced visibility conditions, offers considerable benefits for operational effectiveness and safety. Nav3D and The Boeing Company are conducting development work on an Enhanced Vision System with an integrated Synthetic Vision System. The EVS consists of several imaging sensors that are digitally fused together to give a pilot a better view of the outside world even in challenging visual conditions. The EVS is limited, however, to providing imagery within the viewing frustum of the imaging sensors. The SVS can provide a rendered image of an a priori database in any direction the pilot chooses to look, and thus can provide information on terrain and flight path that are outside the purview of the EVS. Design concepts of the system will be discussed. In addition, the ground and flight testing of the system will be described.