
Proceedings Paper

Processing system for an enhanced vision system
Author(s): Dennis J. Yelton; Ken L. Bernier; John N. Sanders-Reed

Paper Abstract

An Enhanced Vision System (EVS) combines imagery from multiple sensors, possibly running at different frame rates and pixel counts, onto a display. In the case of a Helmet Mounted Display (HMD), the user's line of sight is continuously changing, so the sensor pixels rendered on the display change in real time. In an EVS, the various sensors provide overlapping fields of view, which requires stitching the imagery together to present a seamless mosaic to the user. Further, sensors of different modalities may be present, requiring fusion of their imagery. All of this takes place in a dynamic flight environment in which the aircraft (with fixed-mounted sensors) is changing position and orientation while the users independently change their lines of sight. Providing well-registered, seamless imagery demands very low throughput latency while handling huge volumes of data. This poses both algorithmic and processing challenges that must be overcome to build a suitable system. This paper discusses system architecture, efficient stitching and fusing algorithms, and hardware implementation issues.
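The stitching described in the abstract can be illustrated with a minimal sketch. The example below assumes two pre-registered grayscale frames with a known horizontal overlap and blends them with a linear feather across the overlap region; the function `feather_blend` and its weighting scheme are illustrative assumptions, not the algorithm from the paper.

```python
import numpy as np

def feather_blend(left, right, overlap):
    """Blend two horizontally adjacent frames whose trailing/leading
    `overlap` columns image the same scene region.

    left, right: 2-D float arrays (grayscale frames), assumed already
    registered to a common line of sight.
    Returns a mosaic of width left_width + right_width - overlap.
    """
    h, wl = left.shape
    _, wr = right.shape
    out = np.zeros((h, wl + wr - overlap), dtype=left.dtype)
    # Copy the non-overlapping portions of each frame directly.
    out[:, :wl - overlap] = left[:, :wl - overlap]
    out[:, wl:] = right[:, overlap:]
    # Linear feather: weight ramps from 1 (pure left) to 0 (pure right),
    # hiding the seam between the two sensors.
    w = np.linspace(1.0, 0.0, overlap)
    out[:, wl - overlap:wl] = (w * left[:, wl - overlap:]
                               + (1.0 - w) * right[:, :overlap])
    return out
```

A real EVS would additionally resample each sensor's pixels into the display frame as the line of sight changes, but the feathered-overlap idea is the core of producing a seamless mosaic.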

Paper Details

Date Published: 11 August 2004
PDF: 14 pages
Proc. SPIE 5424, Enhanced and Synthetic Vision 2004, (11 August 2004); doi: 10.1117/12.537777
Author Affiliations:
Dennis J. Yelton, Boeing Co. (United States)
Ken L. Bernier, Boeing Co. (United States)
John N. Sanders-Reed, Boeing Co. (United States)

Published in SPIE Proceedings Vol. 5424:
Enhanced and Synthetic Vision 2004
Jacques G. Verly, Editor(s)

© SPIE.