Proceedings Volume 7689

Enhanced and Synthetic Vision 2010

View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 23 April 2010
Contents: 4 Sessions, 17 Papers, 0 Presentations
Conference: SPIE Defense, Security, and Sensing 2010
Volume Number: 7689

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Front Matter: Volume 7689
  • Operations Research
  • Vision Processing
  • Integration Tools
Front Matter: Volume 7689
Front Matter: Volume 7689
This PDF file contains the front matter associated with SPIE Proceedings Volume 7689, including the Title Page, Copyright information, Table of Contents, Introduction, and the Conference Committee listing.
Operations Research
Beyond traffic depiction: conformally integrating the conflict space to support Level 3 situation awareness
Jochum Tadema, Erik Theunissen, Kevin M. Kirk
The research described in this paper explores the addition of conformally integrated traffic probes into an egocentric Synthetic Vision (SV) Primary Flight Display (PFD). The underlying thought is that, although the traffic that is predicted to cause a future loss of separation may not lie within the field of view of the display, the location where the loss of separation is predicted to occur always will. Hence, rather than focusing on the depiction of traffic, which contributes to level 2 Situation Awareness (SA), the concept pursues a spatially integrated depiction of the airspace where a loss of separation is predicted. This provides readily actionable conflict information, relieving pilots of the traffic position and conflict estimation task and contributing to level 3 SA. The paper describes the integration of the data from the traffic probe into an SV PFD. The advantages of the concept are illustrated using several traffic conflict scenarios, including an overtaking scenario involving unmanned aircraft. Given that unmanned aircraft may be markedly slower than manned aircraft that operate within the same airspace, a spatially integrated depiction of airspace where a future loss of separation is predicted can help to preserve safety in classes of airspace that accommodate both manned and unmanned aircraft. Additionally, examples are provided illustrating how traffic probes can support pilots in monitoring the conformance of traffic to the priority rules of 14 CFR 91.113.
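As an illustration of the traffic-probe idea in this abstract, the sketch below predicts whether and where two aircraft will lose horizontal separation under constant-velocity extrapolation. It is a minimal, assumed implementation (2-D geometry, fixed separation minimum), not the authors' algorithm; all names and numbers are illustrative.

```python
# Minimal constant-velocity conflict probe (illustrative sketch): predicts
# whether and where two aircraft will lose horizontal separation, so the
# predicted conflict location can be drawn conformally even when the
# intruder itself lies outside the display's field of view.
import numpy as np

def predict_conflict(p_own, v_own, p_traffic, v_traffic,
                     separation_nm=5.0, horizon_s=600.0):
    """Return (time_to_conflict_s, conflict_position) or None.

    Positions are 2-D east/north coordinates in nautical miles,
    velocities in nautical miles per second.
    """
    dp = np.asarray(p_traffic, float) - np.asarray(p_own, float)
    dv = np.asarray(v_traffic, float) - np.asarray(v_own, float)

    a = np.dot(dv, dv)
    b = 2.0 * np.dot(dp, dv)
    c = np.dot(dp, dp) - separation_nm ** 2
    if c <= 0.0:                      # already inside the protected zone
        return 0.0, np.asarray(p_own, float)
    if a == 0.0:                      # identical velocities, range constant
        return None
    disc = b * b - 4.0 * a * c
    if disc < 0.0:                    # miss distance exceeds the separation minimum
        return None
    t_entry = (-b - np.sqrt(disc)) / (2.0 * a)
    if t_entry < 0.0 or t_entry > horizon_s:
        return None
    # Ownship position at the predicted loss of separation: the point that
    # would be integrated conformally into the SV PFD.
    return t_entry, np.asarray(p_own, float) + t_entry * np.asarray(v_own, float)

# Example: fast ownship overtaking a slower unmanned aircraft ahead.
print(predict_conflict(p_own=(0.0, 0.0), v_own=(0.0, 0.05),
                       p_traffic=(0.0, 20.0), v_traffic=(0.0, 0.02)))
```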
Enhanced vision for all-weather operations under NextGen
Randall E. Bailey, Lynda J. Kramer, Steven P. Williams
Recent research in Synthetic/Enhanced Vision technology is analyzed with respect to existing Category II/III performance and certification guidance. The goal is to start the development of performance-based vision systems technology requirements to support future all-weather operations and the NextGen goal of Equivalent Visual Operations. This work shows that existing criteria to operate in Category III weather and visibility are not directly applicable since, unlike today, the primary reference for maneuvering the airplane is based on what the pilot sees visually through the "vision system." New criteria are consequently needed. Several possible criteria are discussed, but more importantly, the factors associated with landing system performance using automatic and manual landings are delineated.
Part-task simulation of synthetic and enhanced vision concepts for lunar landing
Jarvis J. Arthur III, Randall E. Bailey, E. Bruce Jackson, et al.
During Apollo, the constraints that the design of the Lunar Module (LM) window placed on crew visibility and landing trajectory were "a major problem." Lunar landing trajectories were tailored to provide crew visibility using a nearly 70-degree look-down angle from the canted LM windows. Apollo landings were scheduled only at specific times and locations to provide optimal sunlight on the landing site. The complications of trajectory design and crew visibility are still a problem today. Practical vehicle designs for lunar lander missions using optimal or near-optimal fuel trajectories render the natural vision of the crew from windows inadequate for the approach and landing task. Further, the sun angles for the desirable landing areas in the lunar polar regions create visually powerful, season-long shadow effects. Fortunately, Synthetic and Enhanced Vision (S/EV) technologies, conceived and developed in the aviation domain, may provide solutions to this visibility problem and enable additional benefits for safer, more efficient lunar operations. Piloted simulation evaluations have been conducted to assess the handling qualities of the various lunar landing concepts, including the influence of cockpit displays and the informational data and formats. Evaluation pilots flew various landing scenarios with S/EV displays. For some of the evaluation trials, an eyeglasses-mounted, monochrome monocular display, coupled with head tracking, was worn. The head-worn display scene consisted of S/EV fusion concepts. The results of this experiment showed that a head-worn system did not increase the pilot's workload when compared to using just the head-down displays. As expected, the head-worn system did not provide an increase in performance measures. Some pilots commented that the head-worn system provided greater situational awareness compared to just head-down displays.
Standardization of databases for AMDB taxi routing functions
Input, management, and display of taxi routes on airport moving map displays (AMM) have been covered in various studies in the past. The demonstrated applications are typically based on Aerodrome Mapping Databases (AMDB). Taxi routing functions require specific enhancements, typically in the form of a graph network with nodes and edges modeling all connectivities within an airport, which are not supported by the current AMDB standards. Therefore, the data schemas and data content have been defined specifically for the purpose and test scenarios of these studies. A standardization of the data format for taxi routing information is a prerequisite for turning taxi routing functions into production. The joint RTCA/EUROCAE special committee SC-217, responsible for updating and enhancing the AMDB standards DO-272 [1] and DO-291 [2], is currently in the process of studying different alternatives and defining reasonable formats. Requirements for taxi routing data are primarily driven by depiction concepts for assigned and cleared taxi routes, but also by database size and the economic feasibility. Studied concepts are similar to the ones described in the GDF (geographic data files) specification [3], which is used in most car navigation systems today. They include:
  • A highly aggregated graph network of complex features
  • A modestly aggregated graph network of simple features
  • A non-explicit topology of plain AMDB taxi guidance line elements
This paper introduces the different concepts and their advantages and disadvantages.
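To make the graph-network concept concrete, the following sketch builds a small taxi routing graph with intersections as nodes and guidance-line segments as weighted edges, then computes a route. It is illustrative only and not tied to the DO-272/DO-291 work; the node names and lengths are invented for the example.

```python
# Illustrative taxi routing graph: AMDB-style guidance-line segments as
# weighted edges, taxiway intersections as nodes. Uses networkx.
import networkx as nx

g = nx.Graph()
g.add_edge("RWY27_exit_B", "TWY_B1", length_m=220.0)
g.add_edge("TWY_B1", "TWY_A2", length_m=310.0)
g.add_edge("TWY_B1", "TWY_C1", length_m=450.0)
g.add_edge("TWY_A2", "APRON_N", length_m=180.0)
g.add_edge("TWY_C1", "APRON_N", length_m=260.0)

# A cleared taxi route could then be handed to the AMM for depiction as the
# highlighted sequence of guidance-line elements.
route = nx.shortest_path(g, "RWY27_exit_B", "APRON_N", weight="length_m")
print(route)   # ['RWY27_exit_B', 'TWY_B1', 'TWY_A2', 'APRON_N']
```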
INVIS: integrated night vision surveillance and observation system
We present the design and first field trial results of the all-day, all-weather INVIS Integrated Night Vision surveillance and observation System. The INVIS augments a dynamic three-band false-color night-vision image with synthetic 3D imagery in a real-time display. The night vision sensor suite consists of three cameras, respectively sensitive in the visual (400-700 nm), the near-infrared (700-1000 nm) and the longwave infrared (8-14 μm) bands of the electromagnetic spectrum. The optical axes of the three cameras are aligned. Image quality of the fused sensor signals can be enhanced in real time through Dynamic Noise Reduction, Superresolution, and Local Adaptive Contrast Enhancement. The quality of the longwave infrared image can be enhanced through Scene-Based Non-Uniformity Correction (SBNUC), intelligent clustering and thresholding. The visual and near-infrared signals are used to represent the resulting multiband night-vision image in realistic daytime colors, using the Color-the-Night color remapping principle. Color remapping can also be deployed to enhance the visibility of thermal targets that are camouflaged in the visual and near-infrared range of the spectrum. The dynamic false-color nighttime images can be augmented with corresponding synthetic 3D scene views, generated in real time using a geometric 3D scene model in combination with position and orientation information supplied by the GPS and inertial sensors of the INVIS system.
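As a minimal illustration of how a three-band false-color night-vision image can be assembled before any color remapping, the sketch below assigns the visual, near-infrared and thermal bands to the R, G and B display channels after per-band normalisation. The band-to-channel assignment and the percentile normalisation are assumptions for the example, not the INVIS implementation.

```python
# Assemble a false-colour frame from three co-registered sensor bands.
import numpy as np

def normalise(band):
    """Stretch a single band to [0, 1] using robust percentiles."""
    band = band.astype(np.float32)
    lo, hi = np.percentile(band, (1, 99))
    return np.clip((band - lo) / max(hi - lo, 1e-6), 0.0, 1.0)

def three_band_false_color(visual, near_ir, thermal_lwir):
    """Stack the three aligned bands into one RGB false-colour frame."""
    return np.dstack([normalise(visual),
                      normalise(near_ir),
                      normalise(thermal_lwir)])
```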
An evaluation test bed for enhanced vision
DLR's Institute of Flight Guidance is involved in many projects dealing with the development of new concepts for flight procedures and pilot assistance functions. This especially includes the topic of enhanced vision (EVS), where processed data from radar and infrared sensors are utilized to augment the pilot's vision. For evaluating these concepts, extensive flight testing has been conducted and results have been published during the last years. Now, DLR has combined its expertise in the field of high-performance sensor simulation with the visual simulation for its generic cockpit simulator. Sensor simulation of imaging radar, lidar, infrared, etc., is based mainly on the application of high-performance functions of modern computer graphics hardware (vertex and fragment shaders). The direct combination of these functions with the "outside-vision" software, which is now based on exactly the same terrain and object geometry, delivers sensor data that correlate perfectly with the visual channel. This combined simulation environment will be the basis for various evaluation trials in the near future, including simulation trials for fixed-wing and rotary-wing applications. The paper presents the implemented software and hardware architecture of the cockpit's visual simulator and its coupling to the sensor simulation test suite. First results of recently conducted simulation experiments, including the evaluation of newly proposed flight procedures that apply EVS technology, are given.
Vision Processing
New adaptive algorithms for real-time registration and fusion of multimodal imagery
J. P. Heather, M. I. Smith
Accurate image registration is a prerequisite for most systems utilising two or more imaging sensors. This can often be accomplished off-line in the laboratory using appropriate test targets and calibration sources, but achieving and maintaining registration accuracy automatically in the field is a significant challenge. This paper presents an efficient image registration algorithm capable of automatically registering dual-waveband image streams upon system start-up and then producing updated transform coefficients during live operation. The algorithm is fully automatic and constrained to ensure reliable operation with minimal or no operator supervision. Robustness to large initial alignment errors is demonstrated using a selection of challenging multimodal image sets. In addition, a novel high-performance adaptive image fusion algorithm for maximising fused image quality in the presence of sensor noise is presented.
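For readers who want a concrete starting point, the sketch below registers a dual-band image pair by reducing both bands to gradient magnitude (which correlates across wavebands better than raw intensity) and maximising OpenCV's ECC criterion. This is a generic stand-in, not the paper's algorithm, and it assumes OpenCV >= 4.1 for the findTransformECC signature used.

```python
# Gradient-based ECC registration of a dual-band image pair (sketch).
import cv2
import numpy as np

def register_dual_band(reference, moving, iterations=200, eps=1e-6):
    """Estimate a 2x3 affine warp mapping `moving` onto `reference`."""
    def gradient_magnitude(img):
        img = cv2.GaussianBlur(img.astype(np.float32), (5, 5), 0)
        gx = cv2.Sobel(img, cv2.CV_32F, 1, 0)
        gy = cv2.Sobel(img, cv2.CV_32F, 0, 1)
        mag = cv2.magnitude(gx, gy)
        return mag / (mag.max() + 1e-9)

    ref_g = gradient_magnitude(reference)
    mov_g = gradient_magnitude(moving)

    warp = np.eye(2, 3, dtype=np.float32)          # start from identity
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT,
                iterations, eps)
    _, warp = cv2.findTransformECC(ref_g, mov_g, warp,
                                   cv2.MOTION_AFFINE, criteria, None, 5)
    # The coefficients can be re-estimated periodically during live
    # operation to track residual misalignment.
    return warp

# aligned = cv2.warpAffine(moving, register_dual_band(vis, ir),
#                          (vis.shape[1], vis.shape[0]))
```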
Image enhancement on the INVIS integrated night vision surveillance and observation system
We present the design and first field trial results of the INVIS integrated night vision surveillance and observation system, focusing in particular on the image enhancement techniques implemented. The INVIS is an all-day-and-night, all-weather navigation and surveillance tool combining cameras in three spectral bands. We present a processing pipeline for this system. The image quality of all individual sensor signals is enhanced through Dynamic Noise Reduction and Dynamic Super Resolution. The quality of the thermal image can be enhanced through Scene-Based Non-Uniformity Correction (SBNUC). The images are fused using natural tone mapping techniques. The contrast in the image can be improved using Local Adaptive Contrast Enhancement, applied before or after the tone mapping. These results show that the image enhancement techniques add value for image fusion systems.
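The following per-frame pipeline is a simplified, assumed stand-in for the noise-reduction and local-contrast stages named above: a recursive temporal filter in place of Dynamic Noise Reduction and CLAHE in place of Local Adaptive Contrast Enhancement. It is meant only to show the shape of such a pipeline, not the INVIS processing chain.

```python
# Simplified single-band enhancement pipeline: temporal denoising + CLAHE.
import cv2
import numpy as np

class FramePipeline:
    def __init__(self, alpha=0.2, clip_limit=2.0, tiles=(8, 8)):
        self.alpha = alpha                 # temporal smoothing factor
        self.accum = None
        self.clahe = cv2.createCLAHE(clipLimit=clip_limit,
                                     tileGridSize=tiles)

    def process(self, frame_u8):
        frame = frame_u8.astype(np.float32)
        # Recursive temporal filter as a simple noise-reduction stand-in.
        if self.accum is None:
            self.accum = frame
        else:
            self.accum = (1.0 - self.alpha) * self.accum + self.alpha * frame
        denoised = np.clip(self.accum, 0, 255).astype(np.uint8)
        # Local adaptive contrast enhancement via CLAHE.
        return self.clahe.apply(denoised)

# pipeline = FramePipeline()
# enhanced = pipeline.process(raw_frame)   # call once per incoming frame
```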
An efficient color transfer algorithm for recoloring multiband night vision imagery
Guangxin Li, Shuyan Xu, Xin Zhao
A color transfer method is presented to give fused multiband nighttime imagery a natural daytime color appearance in a simple and efficient way. Instead of using the traditional nonlinear lαβ space, the proposed method transfers the color distribution of the target image (daylight color image) to the source image (fused multiband nighttime imagery) in the linear YCbCr color space. The YCbCr transformation is simpler and more suitable for image fusion than the lαβ conversion. The YCbCr transformation can be extended into a general formalism, and the paper mathematically proves that, for color transfer, using color spaces conforming to this general YCbCr framework produces the same recoloring results as using the YCbCr space itself. Experimental results demonstrate that the YCbCr-based color transfer method works surprisingly well for transferring the natural color characteristics of daylight color images to false-color fused multiband nighttime imagery and, moreover, can also be successfully applied to recoloring a variety of color images.
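A sketch of the core statistics-matching step described in this abstract: the per-channel mean and standard deviation of the fused false-color image are matched to those of a daylight reference in a linear luma/chroma space. OpenCV's YCrCb conversion is used here as a stand-in for the paper's YCbCr transform (same channels, different ordering); details such as the clipping are assumptions.

```python
# Mean/std colour transfer in a linear luma/chroma space (sketch).
import cv2
import numpy as np

def color_transfer_ycbcr(source_bgr, target_bgr):
    """Give `source_bgr` (fused night image) the colour statistics of
    `target_bgr` (daylight reference)."""
    src = cv2.cvtColor(source_bgr, cv2.COLOR_BGR2YCrCb).astype(np.float32)
    tgt = cv2.cvtColor(target_bgr, cv2.COLOR_BGR2YCrCb).astype(np.float32)

    for c in range(3):
        s_mean, s_std = src[..., c].mean(), src[..., c].std() + 1e-6
        t_mean, t_std = tgt[..., c].mean(), tgt[..., c].std()
        # Shift and scale so the source channel matches the target statistics.
        src[..., c] = (src[..., c] - s_mean) * (t_std / s_std) + t_mean

    src = np.clip(src, 0, 255).astype(np.uint8)
    return cv2.cvtColor(src, cv2.COLOR_YCrCb2BGR)

# recolored = color_transfer_ycbcr(fused_night_bgr, daylight_reference_bgr)
```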
Correcting hyperstereopsis in a helmet-mounted vision system
When used in conjunction with helmet-mounted displays, stereo camera views can provide invaluable advantages in, for example, aviation applications. One of the most common setups is to mount cameras to both sides of the pilot's helmet. However, since these cameras possess a larger disparity than the eyes, distances to perceived objects are misinterpreted by the pilot. This may cause irritation, or even sickness, when combined with enhanced displays. Even in the best case, the magnified disparity may lead to exaggerated distance estimations. In this paper, simple computations are presented that can correct hyperstereopsis "on the fly". With the availability of fast computer hardware, carrying out these computations in real time comes within reach. Furthermore, we sketch a series of experiments to evaluate the effectiveness of our approach.
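The arithmetic below illustrates the effect the abstract describes, not the paper's method: because the helmet camera baseline exceeds the interpupillary distance, on-screen disparity, and hence the depth the pilot perceives, is distorted by the baseline ratio, and rescaling disparity by the inverse ratio before re-synthesising the views removes that bias. The baseline, focal length and disparity values are invented for the example.

```python
# Hyperstereopsis as a disparity scaling problem (illustrative numbers).
import numpy as np

def perceived_depth(disparity_px, baseline_m, focal_px):
    """Depth reconstructed from disparity for a given stereo baseline."""
    return baseline_m * focal_px / np.maximum(disparity_px, 1e-6)

def correct_hyperstereo_disparity(disparity_px, baseline_cam_m, ipd_m=0.065):
    """Scale camera disparities down to what the eyes would have produced."""
    return disparity_px * (ipd_m / baseline_cam_m)

focal_px = 1200.0
d_cam = np.array([24.0])          # disparity measured by the helmet cameras
true_depth = perceived_depth(d_cam, baseline_m=0.24, focal_px=focal_px)
naive_depth = perceived_depth(d_cam, baseline_m=0.065, focal_px=focal_px)
d_fix = correct_hyperstereo_disparity(d_cam, baseline_cam_m=0.24)
fixed_depth = perceived_depth(d_fix, baseline_m=0.065, focal_px=focal_px)
print(true_depth, naive_depth, fixed_depth)   # 12 m, ~3.25 m, 12 m
```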
A method for generating enhanced vision displays using OpenGL video texture
Degraded visual conditions can bewilder the curious and destroy the unprepared. While navigation instruments are trustworthy companions, true visual reference remains king of the hill. Poor visibility may be overcome via imaging sensors such as low-light-level charge-coupled devices, infrared, and millimeter-wave radar. Enhanced Vision systems combine this imagery into a comprehensive situation awareness display, presented to the pilot as reference imagery on a cockpit display or as world-conformal imagery on head-up or head-mounted displays. This paper demonstrates that Enhanced Vision imaging can be achieved at video rates using a typical CPU/GPU architecture, standard video capture hardware, dynamic non-linear ray tracing algorithms, efficient image transfer methods, and simple OpenGL rendering techniques.
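The sketch below shows the basic "video texture" pattern implied by the title: a sensor frame captured on the CPU is streamed into an OpenGL texture each frame and then mapped onto display geometry by the existing rendering pipeline. It assumes an OpenGL context has already been created (e.g. by GLFW or GLUT), uses PyOpenGL, and is not the authors' implementation; all names are illustrative.

```python
# Streaming captured sensor frames into an OpenGL texture (sketch).
import numpy as np
from OpenGL.GL import (
    GL_TEXTURE_2D, GL_RGB, GL_UNSIGNED_BYTE, GL_LINEAR,
    GL_TEXTURE_MIN_FILTER, GL_TEXTURE_MAG_FILTER,
    glGenTextures, glBindTexture, glTexParameteri,
    glTexImage2D, glTexSubImage2D,
)

def create_video_texture(width, height):
    """Allocate an RGB texture once; frames are uploaded into it later."""
    tex = glGenTextures(1)
    glBindTexture(GL_TEXTURE_2D, tex)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR)
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, None)
    return tex

def upload_frame(tex, frame_rgb_u8):
    """Replace the texture contents with the latest captured sensor frame."""
    h, w, _ = frame_rgb_u8.shape
    glBindTexture(GL_TEXTURE_2D, tex)
    # glTexSubImage2D avoids reallocating the texture every frame, which is
    # what keeps the upload cheap enough for video rates.
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                    GL_RGB, GL_UNSIGNED_BYTE,
                    np.ascontiguousarray(frame_rgb_u8))

# Per frame: upload_frame(tex, captured_frame); then draw a quad (head-down)
# or world-conformal geometry (head-up) textured with `tex`.
```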
Integration Tools
Surveillance for collision avoidance with integrity using raw measurements in the automatic dependent surveillance-broadcast
Sudha Vana, Maarten Uijt de Haag
This paper discusses an alternative ADS-B implementation that uses available provisions (Mode-S, UAT and GPS receivers) and existing GPS algorithms and techniques. This alternative has many advantages over the current ADS-B implementation, especially with respect to integrity of the solution. The paper will describe the methodology, its advantages, simulation results and implementation issues.
Synthetic observer approach to multispectral sensor resolution assessment
Alan R. Pinkus, David W. Dommett, H. Lee Task
Resolution is often provided as one of the key parameters characterizing the quality of a sensor. One traditional approach to determining the resolution of a sensor/display system is to use a resolution target pattern to detect the smallest element that can be "resolved" using the system. This requires a human in the loop to make the assessment. This study investigated the use of a custom-designed software approach to generate an effective resolution value for a sensor. Landolt Cs were selected as the resolution target and were imaged at multiple distances with different sensors. The images were analyzed using custom software to determine the orientation of the C at each distance, which resulted in a probability of correct orientation detection curve as a function of distance. This curve was used to generate a "resolution" for the sensor without involving human vision. Resolution results were obtained for four different spectral-band sensors, as well as the effective resolution of fused images from selected pairs of sensors. These results and the possible use of this synthetic observer resolution approach are presented and discussed, along with possible future research relating this resolution to human visual performance with fused image sources.
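As a simplified stand-in for the "synthetic observer" idea, the sketch below chooses the orientation of an imaged Landolt C by correlating the captured patch against rotated reference templates; the fraction of correct calls at each imaging distance then yields the probability-of-correct-detection curve described above. The template source and normalisation are illustrative assumptions, not the authors' software.

```python
# Synthetic-observer orientation classification for Landolt C targets (sketch).
import numpy as np

def classify_orientation(patch, template):
    """Return 0/90/180/270 for the best-matching gap orientation."""
    p = (patch - patch.mean()) / (patch.std() + 1e-9)
    best_angle, best_score = None, -np.inf
    for k, angle in enumerate((0, 90, 180, 270)):
        t = np.rot90(template, k)
        t = (t - t.mean()) / (t.std() + 1e-9)
        score = float((p * t).mean())          # normalised cross-correlation
        if score > best_score:
            best_angle, best_score = angle, score
    return best_angle

def probability_correct(patches, truth_angles, template):
    """Fraction of correctly identified orientations at one distance."""
    hits = sum(classify_orientation(p, template) == a
               for p, a in zip(patches, truth_angles))
    return hits / len(patches)
```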
Evaluation of hazard and integrity monitor functions for integrated alerting and notification using a sensor simulation framework
Rajesh Bezawada, Maarten Uijt de Haag
This paper discusses the results of an initial evaluation study of hazard and integrity monitor functions for use with integrated alerting and notification. The Hazard and Integrity Monitor (HIM) (i) allocates information sources within the Integrated Intelligent Flight Deck (IIFD) to required functionality (such as conflict detection and avoidance) and determines the required performance of these information sources as part of that function; (ii) monitors or evaluates the required performance of the individual information sources and performs consistency checks among various information sources; (iii) integrates the information to establish tracks of potential hazards that can be used for the conflict probes or conflict prediction for various time horizons, including the 10, 5, 3, and <3 minutes used in our scenario; (iv) detects and assesses the class of the hazard and provides possible resolutions. The HIM monitors the operation-dependent performance parameters related to the potential hazards in a manner similar to Required Navigation Performance (RNP). Various HIM concepts have been implemented and evaluated using a previously developed sensor simulator/synthesizer. Within the simulation framework, various inputs to the IIFD and its subsystems are simulated, synthesized from actual collected data, or played back from actual flight test sensor data. The framework and HIM functions are implemented in Simulink®, a modeling language developed by The MathWorks™. This modeling language allows for test and evaluation of various sensor and communication link configurations as well as the inclusion of feedback from the pilot on the performance of the aircraft.
ALLFlight: multisensor data fusion for helicopter operations
The objective of the project ALLFlight (Assisted Low Level Flight and Landing on Unprepared Landing Sites) is to demonstrate and evaluate the characteristics of different sensors for helicopter operations within degraded visual environments, such as brownout or whiteout. The sensor suite, which is mounted onto DLR's research helicopter EC135, consists of standard color or black-and-white TV cameras, an uncooled thermal infrared camera (EVS-1000, Max-Viz, USA), an optical radar scanner (HELLAS-W, EADS, Germany) and a millimeter wave radar system (AI-130, ICx Radar Systems, Canada). Data processing is designed and realized by a sophisticated, high-performance sensor co-computer (SCC) cluster architecture, which is installed in the helicopter's experimental electronic cargo bay. This paper describes the applied methods and the software architecture in terms of real-time data acquisition, recording, time stamping and sensor data fusion. First concepts for a pilot HMI are presented as well.
Data-driven visibility enhancement using multi-camera system
Di Wu, Qionghai Dai
In bad weather conditions, with the presence of haze, fog or smoke, atmospheric particles attenuate the direct irradiance from the scene and scatter light to form airlight. Visibility is thereby decreased, which may endanger important applications such as outdoor surveillance or visual navigation for aircraft landing and take-off. This paper proposes a novel method for visibility enhancement in bad weather conditions based on a multi-view camera system. The main advantage of this method lies in its ability to resolve ambiguities caused by texture-less regions and lack of color and contrast, where most existing methods fail. The proposed system consists of two main components. The first is a data-driven approach to extract template priors that are matched with the currently captured dynamic scene images. A fixed multi-camera system is utilized to record dynamic scene appearances under different illuminations, at different times, seasons and weather conditions to construct a database, which is explored to extract template models containing only static background objects and to obtain corresponding scene structures in a data-driven manner. The second is dehazing based on the current dynamic scene depth, updated by fusing the template depth with real-time multi-view stereo matching depth in foreground object regions. The proposed system achieves real-time and robust performance through the combination of data-driven prior extraction and dynamic scene depth optimization. Moreover, the estimated weather condition parameters and the real-time reconstructed dynamic scene model are both useful byproducts. We believe that the proposed system is the first to dehaze based on a multi-view camera system. An application to airport surveillance demonstrates its effectiveness.
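To make the final dehazing step concrete, the sketch below applies the standard atmospheric scattering model I = J·t + A·(1 − t), with transmission t = exp(−β·d), to recover scene radiance J given the per-pixel depth the multi-camera system provides. The airlight A and scattering coefficient β are assumed to have been estimated elsewhere; this is a generic formulation, not the paper's full pipeline.

```python
# Depth-based dehazing under the atmospheric scattering model (sketch).
import numpy as np

def dehaze_with_depth(image, depth_m, airlight, beta=0.02, t_min=0.1):
    """Recover scene radiance from a hazy image given per-pixel depth.

    image:    float32 array in [0, 1], shape (H, W, 3)
    depth_m:  float32 array, shape (H, W), metres
    airlight: length-3 array, estimated atmospheric light per channel
    """
    t = np.exp(-beta * depth_m)                      # transmission map
    t = np.clip(t, t_min, 1.0)[..., None]            # avoid amplifying noise
    radiance = (image - airlight) / t + airlight
    return np.clip(radiance, 0.0, 1.0)

# dehazed = dehaze_with_depth(frame, fused_depth,
#                             airlight=np.array([0.8, 0.82, 0.85]))
```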