
Proceedings Paper

Image-based 3D scene analysis for navigation of autonomous airborne systems

Paper Abstract

In this paper we describe a method for the automatic determination of sensor pose (position and orientation) relative to a 3D landmark or scene model. The method is based on geometric matching of 2D image structures with projected elements of the associated 3D model. For structural image analysis and scene interpretation, a blackboard-based production system is used, resulting in a symbolic description of the image data. Knowledge of the approximate sensor pose, measured for example by IMU or GPS, enables estimation of an expected model projection, which is used to solve the correspondence problem between image structures and model elements. These correspondences are prerequisites for the pose computation, which is carried out by nonlinear numerical optimization algorithms. We demonstrate the efficiency of the proposed method with navigation updates while approaching a bridge scenario and flying over an urban area, where the data were acquired with airborne infrared sensors in a high oblique view. In doing so we simulated image-based navigation for target engagement and midcourse guidance suited to the concepts of future autonomous systems such as missiles and drones.
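
The pose computation outlined in the abstract can be illustrated with a minimal sketch: given matched 2D image structures and 3D model elements, the sensor pose is refined by nonlinear least squares starting from an approximate pose such as one supplied by IMU or GPS. The simple pinhole camera model, the focal length parameter, and all function names below are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumed pinhole camera, not the authors' implementation):
# refine a sensor pose from 2D-3D point correspondences by minimizing
# reprojection error, starting from an approximate pose (e.g. from IMU/GPS).
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(points_3d, rvec, tvec, focal):
    """Project 3D model points into the image with a pinhole camera."""
    R = Rotation.from_rotvec(rvec).as_matrix()
    cam = points_3d @ R.T + tvec              # world -> camera coordinates
    return focal * cam[:, :2] / cam[:, 2:3]   # perspective division

def residuals(pose, points_3d, points_2d, focal):
    """Reprojection error for the current pose estimate (rvec | tvec)."""
    rvec, tvec = pose[:3], pose[3:]
    return (project(points_3d, rvec, tvec, focal) - points_2d).ravel()

def refine_pose(points_3d, points_2d, pose0, focal=1000.0):
    """Nonlinear least-squares pose refinement from matched correspondences."""
    result = least_squares(residuals, pose0,
                           args=(points_3d, points_2d, focal),
                           method="lm")
    return result.x[:3], result.x[3:]   # refined rotation vector, translation
```

In this sketch, `pose0` plays the role of the approximate sensor pose that makes the correspondence search and the numerical optimization tractable; in practice a more complete camera model and robust matching of projected model elements would be required.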

Paper Details

Date Published: 5 October 2001
PDF: 11 pages
Proc. SPIE 4572, Intelligent Robots and Computer Vision XX: Algorithms, Techniques, and Active Vision, (5 October 2001); doi: 10.1117/12.444197
Klaus Jaeger, FGAN-Forschungsinstitut fuer Optronik und Mustererkennung (Germany)
Karl-Heinz Bers, FGAN-Forschungsinstitut fuer Optronik und Mustererkennung (Germany)


Published in SPIE Proceedings Vol. 4572:
Intelligent Robots and Computer Vision XX: Algorithms, Techniques, and Active Vision
David P. Casasent; Ernest L. Hall, Editor(s)
