
Proceedings Paper

Mesh-based integration of range and color images
Author(s): Yiyong Sun; Christophe Dumont; Mongi A. Abidi

Paper Abstract

This paper discusses the construction of photorealistic 3D models from multisensor data. The data typically comprise multiple views of range and color images to be integrated into a unified 3D model. The integration process uses a mesh-based representation of the range data, and the advantages of the mesh-based approach over a volumetric approach are discussed. First, two meshes, corresponding to range images taken from two different viewpoints, are registered to the same world coordinate system and then integrated. This process is repeated until all views have been integrated. The integration is straightforward unless the two triangle meshes overlap. Overlapping measurements are detected, and the less confident triangles are removed based on their distance from, and orientation relative to, the camera viewpoint. After the overlapping patches are removed, the meshes are seamed together to build a single 3D model. The model is incrementally updated after each new viewpoint is integrated. The color images are applied as texture to the finished scene model. The results show that the approach is efficient for integrating large, multimodal data sets.
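The overlap-resolution step in the abstract can be sketched as follows. This is a hypothetical confidence metric combining the two cues the abstract names (distance from the camera and orientation relative to the line of sight); the paper's exact weighting is not given here, so the names and formula below are illustrative assumptions:

```python
import numpy as np

def triangle_confidence(vertices, camera_pos):
    """Confidence of one range-image triangle; higher is better.

    Hypothetical metric: triangles seen head-on (normal aligned with the
    line of sight) and close to the sensor get high confidence.
    `vertices` is a (3, 3) array of the triangle's corner points.
    """
    centroid = vertices.mean(axis=0)
    # Triangle normal from its winding order.
    n = np.cross(vertices[1] - vertices[0], vertices[2] - vertices[0])
    n /= np.linalg.norm(n)
    view = camera_pos - centroid              # direction toward the sensor
    dist = np.linalg.norm(view)
    cos_angle = abs(np.dot(n, view / dist))   # 1.0 when viewed head-on
    return cos_angle / dist                   # near, head-on triangles win

def resolve_overlap(tri_a, cam_a, tri_b, cam_b):
    """Of two overlapping triangles from different viewpoints, keep the
    more confident one and discard the other."""
    if triangle_confidence(tri_a, cam_a) >= triangle_confidence(tri_b, cam_b):
        return tri_a
    return tri_b
```

For example, a triangle measured from a nearby, head-on viewpoint would be kept over the same surface patch measured from a distant, oblique viewpoint.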

Paper Details

Date Published: 3 April 2000
PDF: 8 pages
Proc. SPIE 4051, Sensor Fusion: Architectures, Algorithms, and Applications IV, (3 April 2000); doi: 10.1117/12.381624
Author Affiliations
Yiyong Sun, Univ. of Tennessee/Knoxville (United States)
Christophe Dumont, Univ. of Tennessee/Knoxville (United States)
Mongi A. Abidi, Univ. of Tennessee/Knoxville (United States)

Published in SPIE Proceedings Vol. 4051:
Sensor Fusion: Architectures, Algorithms, and Applications IV
Belur V. Dasarathy, Editor(s)

© SPIE.