
Proceedings Paper

Data fusion for 3D object reconstruction
Author(s): Mostafa G. H. Mostafa; Sameh M. Yamany; Aly A. Farag

Paper Abstract

Multisensor data fusion has recently proven essential for computer vision and robotics applications. 3D scene reconstruction and model building have been greatly improved in systems that fuse or integrate data from multiple sensors and/or multiple cues. In this paper, we present a framework for integrating registered multisensory data, namely sparse range data from laser range finders and dense depth maps obtained by shape from shading from intensity images, to improve the 3D reconstruction of the visible surfaces of 3D objects. Two methods are used for data integration and surface reconstruction. In the first, the data are integrated using a local error propagation algorithm, which we develop in this paper. In the second, the integration is carried out using a feedforward neural network with the backpropagation learning rule. We find that integrating the sparse depth measurements greatly enhances, in terms of metric accuracy, the 3D visible surface obtained from shape from shading. We also review current research in multisensor/multicue data fusion for 3D object reconstruction.
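The abstract describes the two integration schemes only at a high level. The sketch below illustrates one plausible reading of the first scheme: at pixels where a sparse laser range sample exists, the residual between the measured depth and the shape-from-shading (SFS) estimate is computed and diffused into a local neighborhood with distance-decaying weights. This is a minimal illustration under assumptions, not the authors' algorithm; all names here (fuse_sparse_range, sigma, the Gaussian weighting) are hypothetical, and the actual local error propagation algorithm is defined in the full paper.

import numpy as np

def fuse_sparse_range(sfs_depth, range_points, sigma=5.0):
    """Correct a dense SFS depth map with sparse range measurements.

    Illustrative sketch only: at each sparse sample, the residual
    (measured depth minus SFS depth) is spread to nearby pixels with
    a Gaussian distance weight, and the weighted corrections are
    blended into the dense map.

    sfs_depth    : (H, W) dense depth map from shape from shading
    range_points : iterable of (row, col, depth) laser range samples
    sigma        : spatial scale (pixels) over which a correction decays
    """
    h, w = sfs_depth.shape
    rows, cols = np.mgrid[0:h, 0:w]
    correction = np.zeros_like(sfs_depth, dtype=float)
    weight_sum = np.zeros_like(sfs_depth, dtype=float)

    for r, c, z in range_points:
        residual = z - sfs_depth[r, c]  # local error at the sample
        wgt = np.exp(-((rows - r) ** 2 + (cols - c) ** 2) / (2 * sigma ** 2))
        correction += wgt * residual    # propagate the error outward
        weight_sum += wgt

    # Pixels far from every sample keep the SFS estimate unchanged.
    mean_corr = np.divide(correction, weight_sum,
                          out=np.zeros_like(correction),
                          where=weight_sum > 0)
    blend = weight_sum / (weight_sum + 1.0)
    return sfs_depth + blend * mean_corr

The second scheme could be read analogously as regressing the corrected depth from the SFS estimate and pixel coordinates with a small feedforward network trained by backpropagation on the sparse samples; consult the paper for the authors' actual formulations.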

Paper Details

Date Published: 9 October 1998
PDF: 12 pages
Proc. SPIE 3523, Sensor Fusion and Decentralized Control in Robotic Systems, (9 October 1998); doi: 10.1117/12.326990
Author Affiliations:
Mostafa G. H. Mostafa, Univ. of Louisville (United States)
Sameh M. Yamany, Univ. of Louisville (United States)
Aly A. Farag, Univ. of Louisville (United States)


Published in SPIE Proceedings Vol. 3523:
Sensor Fusion and Decentralized Control in Robotic Systems
Paul S. Schenker; Gerard T. McKee, Editors

© SPIE.