
Proceedings Paper

Efficient 3D data fusion for object reconstruction using neural networks
Author(s): Mostafa G. H. Mostafa; Sameh M. Yamany; Aly A. Farag

Paper Abstract

This paper presents a framework for integrating multiple sensory data, namely sparse range data and dense depth maps from shape from shading, to improve the 3D reconstruction of the visible surfaces of 3D objects. The integration process propagates the error difference between the two data sets by fitting a surface to that difference and using it to correct the visible surface obtained from shape from shading. A feedforward neural network is used to fit a surface to the sparse data. We also study the use of the extended Kalman filter for supervised learning and compare it with the backpropagation algorithm. A performance analysis is carried out to determine the best neural network architecture and learning algorithm. It is found that the integration of sparse depth measurements greatly enhances the 3D visible surface obtained from shape from shading in terms of metric measurements.
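The fusion idea described above can be illustrated with a short sketch: fit a small feedforward network (trained here with plain backpropagation) to the residual between the sparse range samples and the shape-from-shading depth at the same pixels, then evaluate the fitted residual surface over the full grid and add it to the dense map. This is a minimal illustration, not the authors' implementation; the array names, network size, and learning rate are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical inputs: a dense shape-from-shading depth map and sparse range samples.
H, W = 64, 64
sfs_depth = rng.random((H, W))                       # dense depth from shape from shading
n_sparse = 200
rows = rng.integers(0, H, n_sparse)
cols = rng.integers(0, W, n_sparse)
range_depth = sfs_depth[rows, cols] + 0.1 * rng.standard_normal(n_sparse)  # sparse range data

# Training set: normalized (row, col) coordinates -> error difference between the sensors.
X = np.stack([rows / H, cols / W], axis=1)           # inputs, shape (n_sparse, 2)
t = (range_depth - sfs_depth[rows, cols])[:, None]   # targets: range minus SfS residual

# One-hidden-layer feedforward network trained with backpropagation on mean squared error.
n_hidden = 20
W1 = 0.5 * rng.standard_normal((2, n_hidden)); b1 = np.zeros(n_hidden)
W2 = 0.5 * rng.standard_normal((n_hidden, 1)); b2 = np.zeros(1)
lr = 0.05

for epoch in range(2000):
    h = np.tanh(X @ W1 + b1)                 # hidden activations
    y = h @ W2 + b2                          # predicted residual at sparse points
    err = y - t
    # Gradients of the mean squared error with respect to each weight matrix.
    gW2 = h.T @ err / n_sparse; gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h**2)
    gW1 = X.T @ dh / n_sparse; gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

# Evaluate the fitted residual surface over the whole image and correct the SfS depth map.
gy, gx = np.meshgrid(np.arange(W) / W, np.arange(H) / H)
grid = np.stack([gx.ravel(), gy.ravel()], axis=1)
residual_surface = (np.tanh(grid @ W1 + b1) @ W2 + b2).reshape(H, W)
fused_depth = sfs_depth + residual_surface
```

The paper additionally compares extended Kalman filter training against backpropagation for this surface-fitting step; the sketch above uses only backpropagation for brevity.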

Paper Details

Date Published: 9 March 1999
PDF: 10 pages
Proc. SPIE 3647, Applications of Artificial Neural Networks in Image Processing IV, (9 March 1999); doi: 10.1117/12.341108
Author Affiliations:
Mostafa G. H. Mostafa, Univ. of Louisville (United States)
Sameh M. Yamany, Univ. of Louisville (United States)
Aly A. Farag, Univ. of Louisville (United States)


Published in SPIE Proceedings Vol. 3647:
Applications of Artificial Neural Networks in Image Processing IV
Nasser M. Nasrabadi; Aggelos K. Katsaggelos, Editor(s)
