
Proceedings Paper

Three-dimensional model reconstruction for treasures of jadeite material from uncalibrated image sequences
Author(s): Chia-Ming Cheng; Shu-Fan Wang; Chin-Hung Teng; Po-Hao Huang; Yu-Chieh Chien; Shang-Hong Lai

Paper Abstract

Three-dimensional digital preservation of historical treasures has recently become a major focus of research in computer vision and graphics. It offers the advantages of permanent preservation, remote display, ease of browsing and study, 3D model reproduction, etc., and it is particularly important for the digital library systems that have been successfully established in many countries. There has also been pioneering research on preserving cultural and historical relics, e.g., famous paintings, stone carvings, and well-known architecture and landscapes. Many priceless Chinese treasures are made of jadeite, but existing 3D scanning techniques cannot be applied to such curios because of the material's semi-transparent and reflective properties, as well as safety considerations. In this paper, we present a novel semi-automatic system for reconstructing three-dimensional models of jadeite objects from image sequences. This task poses two major challenges: the semi-transparency and high specularity of jadeite, and the unknown camera information for the given image sequences, including intrinsic (calibration) and extrinsic (position and orientation) parameters. The proposed modeling process first recovers the camera information and a rough structure through a structure-from-motion algorithm, and then extracts the fine details of the model from dense correspondences between image patches. We have developed three techniques for this challenging task: a structure-from-motion algorithm, image registration, and dense depth computation. First, because the highly specular material makes it very difficult to establish correspondences reliably from the image sequences, we manually select corresponding feature points between adjacent images.
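Once projection matrices are available for two views, each pair of corresponding points yields a 3D point by linear (DLT) triangulation, the basic building block of any structure-from-motion stage like the one described above. The following NumPy sketch is an illustration of that standard technique under assumed synthetic cameras, not the authors' implementation:

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Recover a 3D point from its projections in two views (linear DLT).

    P1, P2 : 3x4 camera projection matrices.
    x1, x2 : (x, y) image coordinates of the same point in each view.
    """
    # Each view contributes two linear constraints on the homogeneous point X.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector of A with the smallest
    # singular value (the approximate null space of A).
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize

# Synthetic check: two identity-intrinsics cameras separated along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.3, -0.2, 4.0, 1.0])
x1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
x2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
X_est = triangulate_dlt(P1, P2, x1, x2)
```

With noise-free projections the SVD recovers the exact point; in the paper's setting, where specularities make correspondences sparse and noisy, such linear estimates only initialize the subsequent bundle adjustment.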
These correspondences supply the information needed to recover the camera parameters and provide the initial guess for the dense matching of image patches. The structure-from-motion algorithm consists of two steps: projective reconstruction, followed by self-calibration and metric update. Considering the high feature-missing rate caused by the highly specular material, we propose a robust projective reconstruction method that recovers the missing points and greatly reduces the traditional error-accumulation problem. The self-calibration and metric-update process exploits the image acquisition assumptions to obtain the camera parameters. It iterates two steps: a closed-form solution from the linear constraints on the camera calibration matrix based on the absolute conic, and an optimization that fits the nonlinear constraints. The resulting solution provides the initial guess for a strategic bundle adjustment algorithm. For image registration, existing techniques fail because of the complex lighting effects on jadeite. By including brightness variation factors in the model and accounting for specular highlights, we developed a novel optical flow computation technique that reliably computes dense matches between image patches. Based on the extracted camera information and the registered image patches, the dense depth of the jadeite object can be computed, and the original rough model can then be refined with this depth information through subdivision and adaptation of the rough 3D mesh. Finally, experimental results of 3D model reconstruction from an image sequence of the Chinese treasure, Jadeite Cabbage with Insects, demonstrate the performance of the developed system.
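The idea of adding brightness variation factors to the registration model can be sketched as a patch-wise least-squares problem. Assuming a gain/offset model I2 = (1 + da)·I1(x − u, y − v) + db and linearizing, each pixel gives one equation Ix·u + Iy·v − I1·da − db = −It. This is an illustrative re-derivation of that standard brightness-variation flow formulation on synthetic data, not the authors' code:

```python
import numpy as np

def flow_with_brightness(I1, I2):
    """Estimate one patch displacement (u, v) plus a multiplicative/additive
    brightness change (da, db) by linear least squares.

    Linearized model: It = -Ix*u - Iy*v + da*I1 + db, with It = I2 - I1.
    """
    Iy, Ix = np.gradient(I1)  # image gradients (rows = y, cols = x)
    It = I2 - I1
    # One linear equation per pixel: [Ix, Iy, -I1, -1] @ [u, v, da, db] = -It
    A = np.stack([Ix.ravel(), Iy.ravel(),
                  -I1.ravel(), -np.ones(I1.size)], axis=1)
    p, *_ = np.linalg.lstsq(A, -It.ravel(), rcond=None)
    u, v, da, db = p
    return u, v, da, db

# Synthetic patch: a smooth intensity pattern, shifted and re-lit.
def pattern(x, y):
    return np.sin(0.3 * x) + np.cos(0.2 * y) + 0.05 * np.sin(0.01 * x * y)

ys, xs = np.mgrid[0:40, 0:40].astype(float)
I1 = pattern(xs, ys)
# Ground truth: shift (0.2, 0.1) px, gain 1.05, offset 0.1 (simulated lighting drift).
I2 = 1.05 * pattern(xs - 0.2, ys - 0.1) + 0.1
u, v, da, db = flow_with_brightness(I1, I2)
```

Plain brightness-constancy flow would absorb the gain and offset into spurious motion; solving jointly for (u, v, da, db) keeps the displacement estimate usable under the kind of lighting changes a specular surface produces.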

Paper Details

Date Published: 17 January 2005
PDF: 12 pages
Proc. SPIE 5665, Videometrics VIII, 56650X (17 January 2005); doi: 10.1117/12.587450
Author Affiliations:
Chia-Ming Cheng, National Tsing Hua Univ. (Taiwan)
Shu-Fan Wang, National Tsing Hua Univ. (Taiwan)
Chin-Hung Teng, National Tsing Hua Univ. (Taiwan)
Po-Hao Huang, National Tsing Hua Univ. (Taiwan)
Yu-Chieh Chien, National Tsing Hua Univ. (Taiwan)
Shang-Hong Lai, National Tsing Hua Univ. (Taiwan)


Published in SPIE Proceedings Vol. 5665:
Videometrics VIII
J.-Angelo Beraldin; Sabry F. El-Hakim; Armin Gruen; James S. Walton, Editor(s)

© SPIE.