
Proceedings Paper

Scene disparity estimation with convolutional neural networks

Paper Abstract

Estimation of stereovision disparity maps is important for many applications that require information about objects' position and geometry. For example, as a depth surrogate, disparity maps are essential for 3D shape reconstruction and other applications that require a three-dimensional representation of a scene. Recently, deep learning (DL) methodology has enabled novel approaches to disparity estimation, with some focus on the real-time processing requirement that is critical for applications in robotics and autonomous navigation; previously, this constraint was not always addressed. Furthermore, for robust disparity estimation, occlusion effects should be explicitly modelled. In the described method, effective detection of occlusion regions is achieved through disparity estimation in both the forward and backward correspondence models, using two matching deep subnetworks. These two subnetworks are trained jointly in a single training process. Initially, the subnetworks are trained on simulated data with known ground truth; then, to improve generalisation, the whole model is fine-tuned in an unsupervised fashion on real data. During the unsupervised training, the model is equipped with a bilinear interpolation warping function to directly measure the quality of the correspondence given the disparity maps estimated for both the left and right images. During this phase, a forward-backward consistency constraint loss function is also applied to regularise the disparity estimators for non-occluded pixels. The described network model simultaneously computes the forward and backward disparity maps as well as the corresponding occlusion masks. It showed improved results on simulated and real images containing occluded objects, compared with results obtained without the forward-backward consistency constraint loss function.
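To make the two unsupervised training terms concrete, below is a minimal PyTorch sketch of bilinear-interpolation warping for a rectified stereo pair, a photometric reconstruction loss, and a forward-backward consistency loss masked to non-occluded pixels. This is an illustration of the general technique the abstract describes, not the authors' implementation; the function names, the L1 photometric term, and the assumption that occlusion masks arrive as a 0/1 tensor are all assumptions made here for the example.

```python
import torch
import torch.nn.functional as F

def warp_by_disparity(src, disp):
    """Warp `src` (N,C,H,W) horizontally by per-pixel disparity
    `disp` (N,1,H,W) using bilinear sampling (grid_sample).
    Assumes rectified stereo: x_right = x_left - disparity."""
    n, _, h, w = src.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, dtype=src.dtype, device=src.device),
        torch.arange(w, dtype=src.dtype, device=src.device),
        indexing="ij",
    )
    xs = xs.unsqueeze(0) - disp.squeeze(1)   # shift x by disparity
    ys = ys.unsqueeze(0).expand_as(xs)
    # Normalise pixel coordinates to [-1, 1] as grid_sample expects.
    grid = torch.stack(
        (2.0 * xs / (w - 1) - 1.0, 2.0 * ys / (h - 1) - 1.0), dim=-1
    )
    return F.grid_sample(src, grid, mode="bilinear",
                         padding_mode="border", align_corners=True)

def photometric_loss(left, right, disp_left):
    """Reconstruct the left image by warping the right image with the
    left disparity map, then compare (L1) against the real left image."""
    left_rec = warp_by_disparity(right, disp_left)
    return (left_rec - left).abs().mean()

def fb_consistency_loss(disp_left, disp_right, occ_mask):
    """Forward-backward consistency: the left disparity should agree
    with the right disparity warped into the left view. `occ_mask`
    (N,1,H,W) is 1 for non-occluded pixels, 0 otherwise, so the
    constraint is only enforced where a valid correspondence exists."""
    disp_right_warped = warp_by_disparity(disp_right, disp_left)
    diff = (disp_left - disp_right_warped).abs()
    return (diff * occ_mask).sum() / occ_mask.sum().clamp(min=1.0)
```

In this sketch the same two losses are applied symmetrically for the right view, and the occlusion mask plays the role of the masks the network itself estimates: pixels visible in only one view are excluded from the consistency term, which is exactly why the abstract restricts the regularisation to non-occluded pixels.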

Paper Details

Date Published: 21 June 2019
PDF: 9 pages
Proc. SPIE 11059, Multimodal Sensing: Technologies and Applications, 110590T (21 June 2019); doi: 10.1117/12.2527628
Author Affiliations:
Essa R. Anas, Univ. of Central Lancashire (United Kingdom)
Li Guo, Univ. of Central Lancashire (United Kingdom)
Ahmed Onsy, Univ. of Central Lancashire (United Kingdom)
Bogdan J. Matuszewski, Univ. of Central Lancashire (United Kingdom)


Published in SPIE Proceedings Vol. 11059:
Multimodal Sensing: Technologies and Applications
Ettore Stella, Editor

© SPIE.