
Proceedings Paper

Three-dimensional scene reconstruction from a two-dimensional image

Paper Abstract

We propose and simulate a method of reconstructing a three-dimensional scene from a two-dimensional image for developing and augmenting world models for autonomous navigation. This is an extension of the Perspective-n-Point (PnP) method, which uses a sampling of 3D scene point to 2D image point pairings and Random Sample Consensus (RANSAC) to infer the pose of the object and produce a 3D mesh of the original scene. Using object recognition and segmentation, we simulate the implementation on a scene of 3D objects with an eye to implementation on embeddable hardware. The final solution will be deployed on the NVIDIA Tegra platform.
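The PnP-with-RANSAC step the abstract describes can be sketched in a few lines of NumPy. This is a minimal illustrative implementation, not the authors' code: it estimates a 3x4 camera projection matrix from 3D-2D point pairings via the Direct Linear Transform (DLT), and wraps that fit in a RANSAC loop that samples minimal six-point subsets and keeps the model with the most reprojection inliers. The function names and the 2-pixel inlier threshold are assumptions for the sketch; a production pipeline would more likely call an existing solver such as OpenCV's `cv2.solvePnPRansac`.

```python
import numpy as np

def dlt_projection(X, x):
    """Fit a 3x4 projection matrix P (up to scale) from >= 6 exact
    3D point / 2D point pairings using the Direct Linear Transform."""
    rows = []
    for Xw, (u, v) in zip(X, x):
        Xh = np.append(Xw, 1.0)                       # homogeneous 3D point
        rows.append(np.concatenate([Xh, np.zeros(4), -u * Xh]))
        rows.append(np.concatenate([np.zeros(4), Xh, -v * Xh]))
    # Null vector of the stacked constraints = flattened P.
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    return Vt[-1].reshape(3, 4)

def project(P, X):
    """Project 3D points through P and dehomogenize to pixel coordinates."""
    Xh = np.hstack([X, np.ones((len(X), 1))])
    xh = Xh @ P.T
    return xh[:, :2] / xh[:, 2:3]

def ransac_pnp(X, x, iters=200, thresh=2.0, seed=None):
    """RANSAC over minimal 6-point DLT fits; returns (P, inlier_mask)."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(X), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(X), 6, replace=False)    # minimal sample
        P = dlt_projection(X[idx], x[idx])
        err = np.linalg.norm(project(P, X) - x, axis=1)
        inliers = err < thresh                        # pixel reprojection test
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refit on the full consensus set for the final pose estimate.
    return dlt_projection(X[best_inliers], x[best_inliers]), best_inliers
```

On synthetic data (known camera, a few grossly corrupted image points standing in for bad segmentation matches), the RANSAC loop rejects the corrupted pairings and the refit projection matrix reproduces the clean observations to numerical precision.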

Paper Details

Date Published: 1 May 2017
PDF: 7 pages
Proc. SPIE 10199, Geospatial Informatics, Fusion, and Motion Video Analytics VII, 1019909 (1 May 2017); doi: 10.1117/12.2266411
Author Affiliations:
Franz Parkins, The Univ. of Memphis (United States)
Eddie Jacobs, The Univ. of Memphis (United States)

Published in SPIE Proceedings Vol. 10199:
Geospatial Informatics, Fusion, and Motion Video Analytics VII
Kannappan Palaniappan; Peter J. Doucette; Gunasekaran Seetharaman; Anthony Stefanidis, Editor(s)

© SPIE.