
Proceedings Paper

A study on depth map generation using a light field camera and a monocular RGB camera based on deep learning
Author(s): Makoto Takamatsu; Makoto Hasegawa

Paper Abstract

A depth map and an RGB image captured by a light field camera are arranged as a training-data pair; these pairs are learned with pix2pix, a deep learning method that is a type of conditional generative adversarial network. With the proposed method, depth maps can be generated from a monocular mobile camera alone, without the light field camera. Low depth accuracy is a technical issue for light field cameras; however, the proposed method improves depth accuracy owing to the generalization ability of neural networks.
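To make the training setup concrete, below is a minimal sketch (not the authors' code) of pix2pix-style training for RGB-to-depth translation. The network sizes, the L1 loss weight, and the synthetic stand-in tensors are illustrative assumptions; a real pipeline would load RGB/depth pairs captured with the light field camera.

import torch
import torch.nn as nn

def down(cin, cout):  # stride-2 conv block (encoder)
    return nn.Sequential(nn.Conv2d(cin, cout, 4, 2, 1),
                         nn.BatchNorm2d(cout), nn.LeakyReLU(0.2))

def up(cin, cout):    # stride-2 transposed-conv block (decoder)
    return nn.Sequential(nn.ConvTranspose2d(cin, cout, 4, 2, 1),
                         nn.BatchNorm2d(cout), nn.ReLU())

# Simplified encoder-decoder generator: 3-channel RGB in, 1-channel depth out.
generator = nn.Sequential(
    down(3, 64), down(64, 128), down(128, 256),
    up(256, 128), up(128, 64),
    nn.ConvTranspose2d(64, 1, 4, 2, 1), nn.Tanh())

# PatchGAN-style discriminator: judges concatenated (RGB, depth) pairs.
discriminator = nn.Sequential(
    down(4, 64), down(64, 128),
    nn.Conv2d(128, 1, 4, 1, 1))  # per-patch real/fake logits

bce = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4, betas=(0.5, 0.999))

# Synthetic stand-ins for one mini-batch of light-field training pairs.
rgb = torch.rand(4, 3, 128, 128)             # RGB views
depth = torch.rand(4, 1, 128, 128) * 2 - 1   # depth maps scaled to [-1, 1]

for step in range(2):  # a couple of illustrative steps
    # Discriminator: real (RGB, depth) pairs vs. generated pairs.
    fake = generator(rgb).detach()
    d_real = discriminator(torch.cat([rgb, depth], dim=1))
    d_fake = discriminator(torch.cat([rgb, fake], dim=1))
    loss_d = bce(d_real, torch.ones_like(d_real)) + \
             bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: fool the discriminator, plus L1 toward the true depth map.
    fake = generator(rgb)
    d_fake = discriminator(torch.cat([rgb, fake], dim=1))
    loss_g = bce(d_fake, torch.ones_like(d_fake)) + 100.0 * l1(fake, depth)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

At inference time, only the trained generator is needed: a monocular RGB frame is passed through it to produce the depth map, which is how the method dispenses with the light field camera after training.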

Paper Details

Date Published: 16 October 2019
PDF: 6 pages
Proc. SPIE 11205, Seventh International Conference on Optical and Photonic Engineering (icOPEN 2019), 112050T (16 October 2019); doi: 10.1117/12.2542653
Author Affiliations:
Makoto Takamatsu, Tokyo Denki Univ. (Japan)
Makoto Hasegawa, Tokyo Denki Univ. (Japan)


Published in SPIE Proceedings Vol. 11205:
Seventh International Conference on Optical and Photonic Engineering (icOPEN 2019)
Anand Asundi; Motoharu Fujigaki; Huimin Xie; Qican Zhang; Song Zhang; Jianguo Zhu; Qian Kemao, Editor(s)

© SPIE.