Proceedings Volume 9087

Degraded Visual Environments: Enhanced, Synthetic, and External Vision Solutions 2014


View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 8 July 2014
Contents: 7 Sessions, 18 Papers, 0 Presentations
Conference: SPIE Defense + Security 2014
Volume Number: 9087

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Front Matter: Volume 9087
  • DVE Sensors I
  • DVE Sensors II
  • DAS/Panoramic Vision Systems
  • Systems Evaluation and Metrics
  • Synthetic Vision, Symbology, and Cueing
  • Synthetic Vision, Symbology, and Cueing II
Front Matter: Volume 9087
Front Matter: Volume 9087
This PDF file contains the front matter associated with SPIE Proceedings Volume 9087, including the Title Page, Copyright information, Table of Contents, Introduction, and Conference Committee listing.
DVE Sensors I
3D surface imaging through visual obscurants using a sub-THz radar
Jason Fritz, Albin J. Gasiewski, Kun Zhang
Due to the lack of high-power sources along with strong electromagnetic absorption by water vapor at frequencies between ~100 GHz and ~10 THz, there are very few radar systems, or any other systems for that matter, operating in this region of the spectrum. For this reason, it is sometimes referred to as the terahertz gap. Source technology, however, is improving, thus facilitating radar systems operating in this new frontier of the electromagnetic spectrum. At the lower end of this spectral region near the millimeter/submillimeter transition, components are more readily available and atmospheric attenuation is moderate in comparison to higher frequencies. Utilizing components that can generate on the order of 50 mW of power, a real aperture radar for imaging surfaces up to several hundred meters has been developed. The goal of this research is to determine if this frequency band can provide adequate 3D surface imaging through Degraded Visual Environments (DVEs) yet consume less volume than existing systems at 94 GHz. By transmitting a vertically oriented fan beam to scan the Field of View (FOV) in azimuth and receiving at two vertically displaced locations with identical fan beams forming an interferometer, three-dimensional images of the surface topography (in range, azimuth, and height) can be generated. This paper describes the design of the prototype system and presents initial results.
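As a rough illustration of how two vertically displaced fan-beam receivers yield height, the sketch below applies the standard single-pass interferometry relation in Python; the wavelength, baseline, and sign convention are illustrative assumptions, not parameters of the prototype described in the paper.

```python
import numpy as np

# Illustrative values only; not the prototype's actual parameters.
WAVELENGTH = 2.2e-3   # m, roughly a 136 GHz carrier
BASELINE = 0.15       # m, vertical separation of the two receive apertures

def target_height(slant_range, unwrapped_phase_diff, platform_height):
    """Estimate scatterer height from the (unwrapped) interferometric phase
    difference between the two vertically displaced receivers.

    With a vertical baseline B, the path-length difference is ~B*cos(theta),
    where theta is the angle between the line of sight and the vertical, so
    phase_diff ~= 2*pi*B*cos(theta)/wavelength (sign conventions vary)."""
    cos_theta = WAVELENGTH * unwrapped_phase_diff / (2.0 * np.pi * BASELINE)
    cos_theta = np.clip(cos_theta, -1.0, 1.0)
    # The scatterer lies slant_range * cos(theta) below the platform.
    return platform_height - slant_range * cos_theta
```

In practice the measured phase is wrapped modulo 2*pi and must be unwrapped (or combined with the coarse range measurement) before this relation can be applied.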
Overview of the commercial OPAL LiDAR optimized for rotorcraft platforms operating in degraded visual environments
Philip Church, Kiatchai Borribanbunpotkat, Evan Trickey, et al.
Neptec has developed a family of obscurant-penetrating 3D laser scanners called OPAL 2.0 that are being adapted for rotorcraft platforms. Neptec and Boeing have been working on an integrated system utilizing the OPAL LiDAR to support operations in degraded visual environments. OPAL scanners incorporate Neptec’s patented obscurant-penetrating LiDAR technology, which was extensively tested in controlled dust environments and on helicopters for brownout mitigation. The OPAL uses a scanning mechanism based on a Risley prism pair. Data acquisition rates can go as high as 200 kHz for ranges within 200 m and 25 kHz for ranges exceeding 200 m. The scan patterns are created by the rotation of two prisms under independent motor control. The geometry and material properties of the prisms define the conical field-of-view of the sensor, which can be set up to 120 degrees. Through detailed simulations and analysis of mission profiles, the system can be tailored for rotorcraft applications. Examples of scan patterns and control schemes based on these simulations will be provided, along with data density predictions versus acquisition time for applicable DVE scenarios. Preliminary 3D data acquired in clear and obscurant conditions will be presented.
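To make the scan-pattern generation concrete, the following first-order (thin-prism) sketch traces the pattern produced by two rotating prisms; the wedge angle, refractive index, and rotation rates are hypothetical values, not OPAL specifications.

```python
import numpy as np

# First-order (thin-prism, paraxial) Risley scan model; illustrative values only.
n = 1.5                      # prism refractive index (assumed)
alpha = np.radians(10.0)     # prism wedge angle (assumed)
delta = (n - 1.0) * alpha    # small-angle deviation produced by one prism

def risley_pattern(t, w1, w2, phi1=0.0, phi2=0.0):
    """Angular scan coordinates (radians) from two identical prisms rotating
    at rates w1 and w2 (rad/s): the beam deviation is the sum of two
    rotating deviation vectors, one per prism."""
    x = delta * (np.cos(w1 * t + phi1) + np.cos(w2 * t + phi2))
    y = delta * (np.sin(w1 * t + phi1) + np.sin(w2 * t + phi2))
    return x, y

# One second of counter-rotating prisms sampled at 200 kHz.
t = np.linspace(0.0, 1.0, 200_000)
x, y = risley_pattern(t, w1=2 * np.pi * 20.0, w2=-2 * np.pi * 17.0)
```

Equal co-rotation rates trace a circle at maximum deviation, while counter-rotating or slightly detuned rates produce the rosette and spiral patterns typically used to fill a conical field of view.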
3D flash LIDAR vision systems for imaging in degraded visual environments
Thomas E. Laux, Chao-I Chen
This paper presents an imaging approach and sample data for brownout landings, “zero-zero” fog, smoke, and into-water DVEs using 3D Flash LIDAR Vision Systems.
DVE Sensors II
Imaging through obscurants with a heterodyne detection-based ladar system
Randy R. Reibel, Peter A. Roos, Brant M. Kaylor, et al.
Bridger Photonics has been researching and developing a ladar system based on heterodyne detection for imaging through brownout and other DVEs. An FMCW ladar system provides several advantages compared to direct-detect pulsed time-of-flight systems, including: 1) higher average powers; 2) single-photon sensitivity while remaining tolerant to strong return signals; 3) Doppler sensitivity for clutter removal; and 4) greater flexibility for sensing during various stages of flight. In this paper, we provide a review of our sensor, discuss lessons learned during various DVE tests, and show our latest 3D imagery.
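For readers unfamiliar with FMCW ranging, the sketch below shows the basic beat-frequency-to-range relation for a linear chirp, plus the Doppler shift used for clutter separation; the sweep bandwidth, duration, and wavelength are assumed values, not Bridger's system parameters.

```python
# Basic FMCW (linear chirp) ranging relations; illustrative parameters only.
C = 3.0e8           # speed of light, m/s
SWEEP_BW = 2.0e9    # Hz, chirp frequency excursion (assumed)
SWEEP_T = 1.0e-3    # s, chirp duration (assumed)

def range_from_beat(beat_hz: float) -> float:
    """Target range from the beat frequency between the local oscillator and
    the delayed return: f_beat = 2 * R * SWEEP_BW / (C * SWEEP_T)."""
    return beat_hz * C * SWEEP_T / (2.0 * SWEEP_BW)

def doppler_shift(radial_velocity_mps: float, wavelength_m: float = 1.55e-6) -> float:
    """Doppler shift of a mover, usable to separate blowing dust from static
    terrain: f_d = 2 * v_radial / wavelength."""
    return 2.0 * radial_velocity_mps / wavelength_m

print(range_from_beat(1.0e6))   # ~75 m for a 1 MHz beat under these assumptions
```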
Three-dimensional landing zone joint capability technology demonstration
James Savage, Shawn Goodrich, Carl Ott, et al.
The Three-Dimensional Landing Zone (3D-LZ) Joint Capability Technology Demonstration (JCTD) is a 27-month program to develop an integrated LADAR and FLIR capability upgrade for USAF Combat Search and Rescue HH-60G Pave Hawk helicopters through a retrofit of current Raytheon AN/AAQ-29 turret systems. The 3D-LZ JCTD builds upon a history of technology programs using high-resolution imaging LADAR to address rotorcraft cruise, approach to landing, landing, and take-off in degraded visual environments, with emphasis on brownout, cable warning and obstacle avoidance, and avoidance of controlled flight into terrain. This paper summarizes LADAR development, flight test milestones, and plans for a final flight test demonstration and Military Utility Assessment in 2014.
REVS: a radar-based enhanced vision system for degraded visual environments
Alexander Brailovsky, Justin Bode, Pete Cariani, et al.
Sierra Nevada Corporation (SNC) has developed an enhanced vision system utilizing fast-scanning 94 GHz radar technology to provide three-dimensional measurements of an aircraft’s forward external scene topography. This three-dimensional data is rendered as terrain imagery, from the pilot’s perspective, on a Head-Up Display (HUD). The image provides the requisite “enhanced vision” to continue a safe approach along the flight path below the Decision Height (DH) in Instrument Meteorological Conditions (IMC) that would otherwise be cause for a missed approach. Terrain imagery is optionally fused with digital elevation model (DEM) data of terrain outside the radar field of view, giving the pilot additional situational awareness. Flight tests conducted in 2013 show that REVS™ has sufficient resolution and sensitivity performance to allow identification of requisite visual references well above decision height in dense fog. This paper provides an overview of the Enhanced Flight Vision System (EFVS) concept, of the technology underlying REVS, and a detailed discussion of the flight test results.
System modelling of a real-time passive millimeter-wave imager to be used for base security and helicopter navigation in degraded visual environments
Colin D. Cameron, Rupert N. Anderton, James G. Burnett, et al.
This paper discusses the design of an improved passive millimeter wave imaging system intended to be used for military forward operating base security in degraded visual environments. A simple end-to-end model of such an imager is described, including a simple scene model based on transformations applied to visible and infrared imagery, optical aberrations, focal plane sampling, scan conversion, receiver performance and image processing algorithms. The use of such a model as a design tool is discussed, especially with regard to optimizing scan conversion and image processing algorithms. The expected performance of the latest imager design is predicted.
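A highly simplified version of such an end-to-end chain can be sketched in a few lines: blur a synthetic brightness-temperature scene with a point-spread function, sample it onto a coarser focal plane, and add receiver noise. The kernel width, sampling factor, and noise level below are illustrative assumptions, not the imager's design values.

```python
import numpy as np
from scipy.signal import fftconvolve

def gaussian_psf(size: int, sigma: float) -> np.ndarray:
    """Gaussian point-spread function standing in for diffraction and aberrations."""
    ax = np.arange(size) - (size - 1) / 2.0
    k = np.exp(-0.5 * (ax / sigma) ** 2)
    psf = np.outer(k, k)
    return psf / psf.sum()

def simulate_pmmw(scene_K, psf_sigma=2.0, downsample=4, noise_K=1.0, seed=0):
    """Toy passive-MMW imaging chain: optical blur, focal-plane sampling,
    and additive receiver noise applied to a brightness-temperature scene (K)."""
    rng = np.random.default_rng(seed)
    blurred = fftconvolve(scene_K, gaussian_psf(15, psf_sigma), mode="same")
    sampled = blurred[::downsample, ::downsample]
    return sampled + rng.normal(0.0, noise_K, sampled.shape)

# Example: a cold (sky-reflecting) metallic patch against a 270 K background.
scene = np.full((256, 256), 270.0)
scene[100:140, 100:140] = 220.0
image = simulate_pmmw(scene)
```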
DAS/Panoramic Vision Systems
HALO: a reconfigurable image enhancement and multisensor fusion system
F. Wu, D. L. Hickman, Steve J. Parker
Contemporary high definition (HD) cameras and affordable infrared (IR) imagers are set to dramatically improve the effectiveness of security, surveillance and military vision systems. However, the quality of imagery is often compromised by camera shake, or poor scene visibility due to inadequate illumination or bad atmospheric conditions. A versatile vision processing system called HALO™ is presented that can address these issues, by providing flexible image processing functionality on a low size, weight and power (SWaP) platform. Example processing functions include video distortion correction, stabilisation, multi-sensor fusion and image contrast enhancement (ICE). The system is based around an all-programmable system-on-a-chip (SoC), which combines the computational power of a field-programmable gate array (FPGA) with the flexibility of a CPU. The FPGA accelerates computationally intensive real-time processes, whereas the CPU provides management and decision making functions that can automatically reconfigure the platform based on user input and scene content. These capabilities enable a HALO™ equipped reconnaissance or surveillance system to operate in poor visibility, providing potentially critical operational advantages in visually complex and challenging usage scenarios. The choice of an FPGA based SoC is discussed, and the HALO™ architecture and its implementation are described. The capabilities of image distortion correction, stabilisation, fusion and ICE are illustrated using laboratory and trials data.
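As a toy illustration of the kind of multi-sensor fusion HALO™ performs (not the actual HALO algorithm), the sketch below blends a visible and an IR frame per pixel according to local contrast, so each output pixel is drawn mostly from whichever sensor carries more detail there.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_contrast(img: np.ndarray, win: int = 9) -> np.ndarray:
    """Local standard deviation as a simple per-pixel activity measure."""
    mean = uniform_filter(img, win)
    mean_sq = uniform_filter(img * img, win)
    return np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))

def fuse(visible: np.ndarray, infrared: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Contrast-weighted fusion of two co-registered, same-size float images
    (a common baseline scheme, not the HALO implementation)."""
    wv = local_contrast(visible)
    wi = local_contrast(infrared)
    return (wv * visible + wi * infrared) / (wv + wi + eps)
```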
DVE: ground and airborne visualization functionalities
Nick Barratt, Olegs Mise, Dustin Franklin, et al.
This paper describes functional blocks (hardware and software functionalities) applicable to several forms of indirect vision enhancement in DVE (Degraded Vision Environment for pilotage, Driver’s Vision Enhancement for ground vehicle Situational Awareness). These functionalities are the result of the increased processing power of General Purpose Graphics Processing Units (GPGPUs) and improvements in mosaic stitch processing, image fusion and analytics of both live and synthetic imagery. We deploy GPUs into low-latency embedded systems with decreased SWaP (Size, Weight and Power) and high-bandwidth interconnectivity via RDMA (Remote Direct Memory Access).
Systems Evaluation and Metrics
Degraded visual environment image/video quality metrics
Dustin D. Baumgartner, Jeremy B. Brown, Eddie L. Jacobs, et al.
A number of image quality metrics (IQMs) and video quality metrics (VQMs) have been proposed in the literature for evaluating techniques and systems for mitigating degraded visual environments. Some require both pristine and corrupted imagery. Others require patterned target boards in the scene. None of these metrics relates well to the task of landing a helicopter in conditions such as a brownout dust cloud.

We have developed and used a variety of IQMs and VQMs related to the pilot’s ability to detect hazards in the scene and to maintain situational awareness. Some of these metrics can be made agnostic to sensor type. Not only are the metrics suitable for evaluating algorithm and sensor variation, but they are also suitable for choosing the most cost-effective solution to improve operating conditions in degraded visual environments.
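One simple example of a sensor-agnostic, no-reference metric in the general spirit discussed here (not one of the authors' metrics) is a per-frame edge-energy score, which tends to drop as dust or fog washes out scene structure:

```python
import numpy as np

def edge_energy(frame: np.ndarray) -> float:
    """No-reference structure score: mean gradient magnitude of a frame.
    Lower values indicate a more washed-out (degraded) image."""
    gy, gx = np.gradient(frame.astype(float))
    return float(np.mean(np.hypot(gx, gy)))

def clip_score(frames) -> float:
    """Average the per-frame score over a video clip, e.g. an approach
    into a brownout cloud."""
    return float(np.mean([edge_energy(f) for f in frames]))
```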
External Vision Systems (XVS) proof-of-concept flight test evaluation
Kevin J. Shelton, Steven P. Williams, Lynda J. Kramer, et al.
NASA’s Fundamental Aeronautics Program, High Speed Project is performing research, development, test and evaluation of flight deck and related technologies to support future low-boom, supersonic configurations (without forward-facing windows) by use of an eXternal Vision System (XVS). The challenge of XVS is to determine a combination of sensor and display technologies which can provide an equivalent level of safety and performance to that provided by forward-facing windows in today’s aircraft. This flight test was conducted with the goal of obtaining performance data on see-and-avoid and see-to-follow traffic using a proof-of-concept XVS design in actual flight conditions. Six data collection flights were flown in four traffic scenarios against two different-sized participating traffic aircraft. This test utilized a 3x1 array of High Definition (HD) cameras, with a fixed forward field-of-view, mounted on NASA Langley’s UC-12 test aircraft. Test scenarios, with participating NASA aircraft serving as traffic, were presented to two evaluation pilots per flight – one using the proof-of-concept (POC) XVS and the other looking out the forward windows. The camera images were presented on the XVS display in the aft cabin with Head-Up Display (HUD)-like flight symbology overlaying the real-time imagery. The test generated XVS performance data, including comparisons to natural vision; post-run subjective acceptability data were also collected. This paper discusses the flight test activities and their operational challenges, and summarizes the findings to date.
Visual advantage of enhanced flight vision system during NextGen flight test evaluation
Lynda J. Kramer, Stephanie J. Harrison, Randall E. Bailey, et al.
Synthetic Vision Systems and Enhanced Flight Vision System (SVS/EFVS) technologies have the potential to provide additional margins of safety for aircrew performance and enable operational improvements for low visibility operations in the terminal area environment. Simulation and flight tests were jointly sponsored by NASA’s Aviation Safety Program, Vehicle Systems Safety Technology project and the Federal Aviation Administration (FAA) to evaluate potential safety and operational benefits of SVS/EFVS technologies in low visibility Next Generation Air Transportation System (NextGen) operations. The flight tests were conducted by a team of Honeywell, Gulfstream Aerospace Corporation and NASA personnel with the goal of obtaining pilot-in-the-loop test data for flight validation, verification, and demonstration of selected SVS/EFVS operational and system-level performance capabilities.

Nine test flights were flown in Gulfstream’s G450 flight test aircraft outfitted with the SVS/EFVS technologies under low visibility instrument meteorological conditions. Evaluation pilots flew 108 approaches in low visibility weather conditions (600 feet to 3600 feet reported visibility) under different obscurants (mist, fog, drizzle fog, frozen fog) and sky cover (broken, overcast).

Flight test videos were evaluated at three different altitudes (decision altitude, 100 feet radar altitude, and touchdown) to determine the visual advantage afforded to the pilot using the EFVS/Forward-Looking InfraRed (FLIR) imagery compared to natural vision. Results indicate the EFVS provided a visual advantage of two to three times over that of the out-the-window (OTW) view. The EFVS allowed pilots to view the runway environment, specifically runway lights, before they could see it OTW with natural vision.
Synthetic Vision, Symbology, and Cueing
Sensor-enhanced 3D conformal cueing for safe and reliable HC operation in DVE in all flight phases
Thomas Münsterer, Tobias Schafhitzel, Michael Strobel, et al.
Low-level helicopter operations in Degraded Visual Environments (DVE) are still a major challenge and bear the risk of potentially fatal accidents. DVE generally encompasses all degradations of the pilot’s visual perception, ranging from night conditions via rain and snowfall to fog, and even blinding sunlight or unstructured outside scenery. Each of these conditions reduces the pilot’s ability to perceive visual cues in the outside world, degrading performance and ultimately increasing the risk of mission failure and of accidents such as Controlled Flight Into Terrain (CFIT). The basis for the presented solution is a fusion of processed and classified high-resolution ladar data with database information, with the potential to also include other sensor data such as forward-looking or 360° radar data. This paper reports on a pilot assistance system aiming at giving the essential visual cues back to the pilot by displaying 3D conformal cues and symbols in a head-tracked Helmet Mounted Display (HMD), combined with a synthetic view on a head-down Multi-Function Display (MFD). Each flight phase and each flight envelope requires different symbology sets and different possibilities for the pilots to select specific support functions. Several functionalities have been implemented and tested in a simulator as well as in flight. The symbology ranges from obstacle warning symbology via terrain enhancements through grids or ridge lines to different waypoint symbols supporting navigation. While some adaptations can be automated, it emerged as essential that symbology characteristics and completeness can be selected by the pilot to match the relevant flight envelope and outside visual conditions.
Visual-conformal display format for helicopter guidance
Helicopter guidance in situations where natural vision is reduced is still a challenging task. Besides newly available sensors, which are able to “see” through darkness, fog and dust, display technology remains one of the key issues of pilot assistance systems. As long as we have pilots within aircraft cockpits, we have to keep them informed about the outside situation. Human “situational awareness” is mainly driven by the visual channel. Therefore, display systems which are able to cross-fade seamlessly from natural vision to artificial computer vision, and vice versa, are of greatest interest within this context. Helmet-mounted displays (HMD) have this property when they apply a head-tracker for measuring the pilot’s head orientation relative to the aircraft reference frame. Together with the aircraft’s position and orientation relative to the world’s reference frame, the on-board graphics computer can generate images which are perfectly aligned with the outside world. We call image elements which match the outside world “visual-conformal”. Published display formats for helicopter guidance in degraded visual environments mostly apply 2D symbology, which falls far short of what is possible. We propose a perspective 3D symbology for a head-tracked HMD which shows as many visual-conformal elements as possible. We implemented and tested our proposal within our fixed-base cockpit simulator as well as in our flying helicopter simulator (FHS). Recently conducted simulation trials with experienced helicopter pilots give first evaluation results for our proposal.
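The transform chain implied by the abstract (world frame to aircraft frame to head frame to display) can be written compactly. The following sketch uses simple rotation matrices and a pinhole projection; the frame conventions and the omission of lever arms and boresight offsets are simplifying assumptions, not the authors' implementation.

```python
import numpy as np

def rot_zyx(yaw, pitch, roll):
    """Rotation matrix from yaw/pitch/roll (radians), z-y-x convention."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def world_to_hmd_pixel(p_world, p_aircraft, R_ac_to_world, R_head_to_ac, K):
    """Project a world point into HMD pixel coordinates:
    world -> aircraft frame (navigation solution) -> head frame (head tracker)
    -> pinhole projection with intrinsics K. Returns None if behind the viewer."""
    p_ac = R_ac_to_world.T @ (np.asarray(p_world, float) - np.asarray(p_aircraft, float))
    p_head = R_head_to_ac.T @ p_ac        # boresight/lever-arm offsets omitted
    if p_head[0] <= 0.0:                  # x forward in this convention
        return None
    # Re-order forward/right/down axes to the camera-style right/down/forward.
    p_cam = np.array([p_head[1], p_head[2], p_head[0]])
    uvw = K @ p_cam
    return uvw[:2] / uvw[2]
```

A symbol drawn at the returned pixel location overlays the corresponding real-world feature, provided the navigation and head-tracker solutions are accurate and display latency is low.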
Synthetic vision meets ARINC 661: feasibility study of the integration of terrain visualization in ARINC 661 avionic displays
Erik Lipinski, L. Ebrecht
ARINC 661, Cockpit Display System Interfaces to User Systems, is an evolving standard for the next generation of avionic systems. The standard defines a client-server architecture including user applications which control layers in certain windows of a cockpit display system (CDS). When creating avionic displays for a CDS, one can use a predefined set of simple to complex widgets. These are very useful when designing and implementing avionic displays. However, a proper widget and concept enabling synthetic vision, e.g. for terrain visualization, is not provided by the standard. Because synthetic vision systems are becoming more and more popular, there is a need to enable synthetic vision with ARINC 661. This contribution deals with the question of how synthetic vision (SV) might be realized, e.g. in an ARINC 661-compliant primary flight display (PFD) or navigation display (ND). Hence, different approaches for the implementation of SV are discussed, and one approach has been implemented to perform the feasibility study. A first study was done using the open-source software project j661 managed by Dassault Aviation; a second implementation of an SV PFD was done using the SCADE ARINC 661 tools provided by ESTEREL Technologies. XPlane was used as the terrain rendering application. The paper gives some figures of the programmed SV PFD and presents the results of the feasibility study.
Synthetic Vision, Symbology, and Cueing II
Identifying opportune landing sites in degraded visual environments with terrain and cultural databases
Marc Moody, Robert Fisher, J. Kristin Little
Boeing has developed a degraded visual environment navigational aid that is flying on the Boeing AH-6 light attack helicopter. The navigational aid is a two-dimensional software digital map underlay generated by the Boeing™ Geospatial Embedded Mapping Software (GEMS) and fully integrated with the operational flight program. The page format on the aircraft’s multi-function displays (MFD) is termed the Approach page. The existing work utilizes Digital Terrain Elevation Data (DTED) and OpenGL ES 2.0 graphics capabilities to compute the pertinent graphics underlay entirely on the graphics processor unit (GPU) within the AH-6 mission computer. The next release will incorporate cultural databases containing Digital Vertical Obstructions (DVO) to warn the crew of towers, buildings, and power lines when choosing an opportune landing site. Future IRAD will include Light Detection and Ranging (LIDAR) point-cloud-generating sensors to provide 2D and 3D synthetic vision on the final approach to the landing zone. Collision detection with respect to terrain, cultural, and point cloud datasets may be used to further augment the crew warning system. The techniques for creating the digital map underlay leverage the GPU almost entirely, making this solution viable on most embedded mission computing systems with an OpenGL ES 2.0 capable GPU. This paper focuses on the AH-6 crew interface process for determining a landing zone and flying the aircraft to it.
Detection of helicopter landing sites in unprepared terrain
The primary usefulness of helicopters shows in missions where regular aircraft cannot be used, especially HEMS (Helicopter Emergency Medical Services). This might be due to requirements for landing in unprepared areas without dedicated runway structures, and an extended flexibility to fly to more than one previously unprepared target. One example of such missions is search and rescue operations. An important task of such a mission is to locate a proper landing spot near the mission target. Usually, the pilot would have to evaluate possible landing sites by himself, which can be time-intensive, fuel-costly, and generally impossible when operating in degraded visual environments. We present a method for pre-selecting a list of possible landing sites. After specifying the intended size, orientation and geometry of the site, a choice of possibilities is presented to the pilot that can be ordered by means of wind direction, terrain constraints like maximal slope and roughness, and proximity to a mission target. The possible choices are calculated automatically either from a pre-existing terrain database, or from sensor data collected during earlier missions, e.g., by collecting data with radar or laser sensors. Additional data like water-body maps and topological information can be taken into account to avoid landing in dangerous areas under adverse view conditions. In case of an emergency turnaround the list can be re-ordered to present alternative sites to the pilot. We outline the principal algorithm for selecting possible landing sites, and we present examples of calculated lists.
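A minimal sketch of the kind of terrain screening described, assuming a regular elevation grid; the slope and roughness limits, window stride, and ordering below are illustrative choices, not the authors' values.

```python
import numpy as np

def screen_landing_sites(dem, cell_size, site_cells, max_slope_deg=7.0,
                         max_roughness=0.3, target_rc=(0, 0)):
    """Slide a site-sized window over a DEM (elevations in metres) and keep
    windows whose best-fit plane slope and residual roughness are within
    limits; candidates are returned sorted by distance to the mission target
    (given as a row/column index)."""
    n = site_cells
    ys, xs = np.mgrid[0:n, 0:n] * cell_size
    A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(n * n)])
    candidates = []
    for r in range(0, dem.shape[0] - n, max(n // 2, 1)):
        for c in range(0, dem.shape[1] - n, max(n // 2, 1)):
            patch = dem[r:r + n, c:c + n].ravel()
            coeff, *_ = np.linalg.lstsq(A, patch, rcond=None)   # fit a plane
            slope = np.degrees(np.arctan(np.hypot(coeff[0], coeff[1])))
            roughness = float(np.std(patch - A @ coeff))        # residual RMS
            if slope <= max_slope_deg and roughness <= max_roughness:
                dist = np.hypot(r - target_rc[0], c - target_rc[1]) * cell_size
                candidates.append((dist, slope, roughness, (r, c)))
    return sorted(candidates)   # nearest acceptable sites first
```

Wind alignment, water-body masks, and proximity-to-obstacle checks can be layered on as additional filters or as extra terms in the ordering.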