
Proceedings Paper

A depth estimation framework based on unsupervised learning and cross-modal translation

Paper Abstract

In recent years, with the rapid development of artificial intelligence and autonomous driving technology, scene perception has become increasingly important. Unsupervised deep-learning-based methods have demonstrated a certain level of robustness and accuracy in some challenging scenes. By inferring depth from a single input image without any ground-truth labels, considerable time and resources can be saved. However, unsupervised depth estimation still lacks robustness and accuracy in complex environments, which can be improved by modifying the network structure and incorporating information from other modalities. In this paper, we propose an unsupervised monocular depth estimation network that achieves high speed and accuracy, together with a learning framework built on this network that improves depth performance by incorporating images translated across different modalities. The depth estimator is an encoder-decoder network that generates multi-scale dense depth maps. A sub-pixel convolutional layer replaces the up-sampling branches to obtain depth super-resolution. Cross-modal depth estimation using near-infrared and RGB images achieves better performance than using RGB images alone. During training, both images are translated into the same modality, and super-resolved depth estimation is then carried out for each stereo camera pair. Compared with baseline depth estimation using only RGB images, experiments verify that our depth estimation network with the proposed cross-modal fusion system achieves better performance on public datasets and on a multi-modal dataset collected with our stereo vision sensor.
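The sub-pixel convolutional layer mentioned above upsamples by having the preceding convolution emit r*r feature maps per output channel and then rearranging those channels into an r-times higher-resolution map (depth-to-space). As a rough illustration of that rearrangement only (not the authors' implementation), here is a minimal sketch using plain Python nested lists in place of tensors, with the upscale factor `r` as an assumed parameter:

```python
def pixel_shuffle(x, r):
    """Rearrange a (C*r*r, H, W) nested-list tensor into (C, H*r, W*r).

    This is the depth-to-space step of a sub-pixel convolutional layer:
    each block of r*r input channels is interleaved into an r-times
    larger spatial grid of a single output channel.
    """
    c_in = len(x)
    h, w = len(x[0]), len(x[0][0])
    assert c_in % (r * r) == 0, "channel count must be divisible by r*r"
    c_out = c_in // (r * r)
    out = [[[0.0] * (w * r) for _ in range(h * r)] for _ in range(c_out)]
    for c in range(c_out):
        for oh in range(h * r):
            for ow in range(w * r):
                # Channel index encodes the sub-pixel position (oh % r, ow % r).
                ch = c * r * r + (oh % r) * r + (ow % r)
                out[c][oh][ow] = x[ch][oh // r][ow // r]
    return out


# Four 1x1 channels fold into one 2x2 map (r = 2).
x = [[[1.0]], [[2.0]], [[3.0]], [[4.0]]]
print(pixel_shuffle(x, 2))  # [[[1.0, 2.0], [3.0, 4.0]]]
```

In practice this operation is learned end-to-end together with the preceding convolution, so the network can produce high-resolution depth without an explicit interpolation branch.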

Paper Details

Date Published: 23 October 2019
PDF: 11 pages
Proc. SPIE 11158, Target and Background Signatures V, 1115807 (23 October 2019); doi: 10.1117/12.2532666
Author Affiliations:
Jiafeng Shen, Zhejiang Univ. (China)
Kaiwei Wang, Zhejiang Univ. (China)
Kailun Yang, Zhejiang Univ. (China)
Kaite Xiang, Zhejiang Univ. (China)
Lei Fei, Zhejiang Univ. (China)
Xinxin Hu, Zhejiang Univ. (China)
Huabing Li, Zhejiang Univ. (China)
Hao Chen, Zhejiang Univ. (China)

Published in SPIE Proceedings Vol. 11158:
Target and Background Signatures V
Karin U. Stein; Ric Schleijpen, Editor(s)

© SPIE.