
Proceedings Paper

Texture mapping 3D models of indoor environments with noisy camera poses
Author(s): Peter Cheng; Michael Anderson; Stewart He; Avideh Zakhor

Paper Abstract

Automated 3D modeling of building interiors is used in applications such as virtual reality and environment mapping. Texturing these models allows for photo-realistic visualizations of the data collected by such modeling systems. While data acquisition times for mobile mapping systems are considerably shorter than for static ones, their recovered camera poses often suffer from inaccuracies, resulting in visible discontinuities when successive images are projected onto a surface for texturing. We present a method for texture mapping models of indoor environments that starts by selecting images whose camera poses are well-aligned in two dimensions. We then align images to geometry as well as to each other, producing visually consistent textures even in the presence of inaccurate surface geometry and noisy camera poses. Images are then composited into a final texture mosaic and projected onto surface geometry for visualization. The effectiveness of the proposed method is demonstrated on a number of different indoor environments.
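The abstract describes aligning overlapping images to each other before compositing them into a texture mosaic. The sketch below illustrates the general idea in a deliberately simplified, one-dimensional form: given two overlapping intensity strips whose relative offset is only approximately known (analogous to a noisy camera pose), it searches a small window of integer shifts for the one minimizing the sum of squared differences over the overlap, then blends the strips. All function names and the SSD-based translation search are illustrative assumptions, not the authors' actual method.

```python
def align(ref, img, max_shift=3):
    """Find the integer shift of `img` (within +/- max_shift) that best
    matches `ref`, scored by mean squared difference over the overlap.
    A stand-in for the paper's image-to-image alignment step."""
    best = (float("inf"), 0)
    for s in range(-max_shift, max_shift + 1):
        # With shift s, ref[i] overlaps img[i - s] for i in [lo, hi).
        lo, hi = max(0, s), min(len(ref), len(img) + s)
        if hi <= lo:
            continue
        err = sum((ref[i] - img[i - s]) ** 2 for i in range(lo, hi)) / (hi - lo)
        best = min(best, (err, s))
    return best[1]


def composite(ref, img, s):
    """Blend `img`, shifted by s, into `ref` by averaging the overlap,
    a stand-in for compositing images into a texture mosaic."""
    out = list(ref)
    for i in range(len(img)):
        j = i + s
        if 0 <= j < len(out):
            out[j] = (out[j] + img[i]) / 2
    return out
```

For example, if `img` is a copy of the underlying signal starting two samples later than `ref`, `align` recovers the offset 2, after which `composite` blends the strips without visible discontinuity. The full method operates in 2D with surface geometry constraints, but the search-and-blend structure is analogous.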

Paper Details

Date Published: 7 March 2014
PDF: 15 pages
Proc. SPIE 9020, Computational Imaging XII, 90200V (7 March 2014); doi: 10.1117/12.2038496
Author Affiliations:
Peter Cheng, Univ. of California, Berkeley (United States)
Michael Anderson, Univ. of California, Berkeley (United States)
Stewart He, Univ. of California, Davis (United States)
Avideh Zakhor, Univ. of California, Berkeley (United States)


Published in SPIE Proceedings Vol. 9020:
Computational Imaging XII
Charles A. Bouman; Ken D. Sauer, Editor(s)

© SPIE.