
Proceedings Paper

Neural-network-based depth computation for blind navigation
Author(s): Farrah Wong; Ramachandran R. Nagarajan; Sazali Yaacob

Paper Abstract

Research undertaken to help blind people navigate autonomously, or with minimum assistance, is termed "blind navigation". In this research, an aid to assist blind people in their navigation is proposed. Distance is an important cue during navigation. A stereovision navigation aid is presented, implemented with two digital video cameras that are spaced apart and fixed on a headgear to obtain distance information. In this paper, a neural network methodology is used to obtain the required camera parameters, a process known as camera calibration. These parameters are not known in advance but are obtained by adjusting the weights in the network. The inputs to the network are the matching features in the stereo image pair. A back-propagation network with 16 input neurons, 3 hidden neurons, and 1 output neuron, which gives depth, is created. The distance information is incorporated into the final processed image as four gray levels: white, light gray, dark gray, and black. Preliminary results show that the percentage errors fall below 10%. It is envisaged that the distance provided by the neural network will enable blind individuals to approach and pick up an object of interest.
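To illustrate the architecture the abstract describes, here is a minimal sketch of a 16-3-1 feed-forward network and the four-level gray quantization, in NumPy. This is not the authors' implementation: the weights are randomly initialized for illustration (the paper learns them during calibration), and the mapping of white to nearest and black to farthest, along with the `near`/`far` range parameters, are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# 16 inputs (stereo matching features) -> 3 hidden neurons -> 1 output (depth).
# Random weights stand in for the calibrated weights learned in the paper.
W1 = rng.normal(size=(16, 3)) * 0.1   # input -> hidden weights
b1 = np.zeros(3)
W2 = rng.normal(size=(3, 1)) * 0.1    # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def predict_depth(features):
    """Forward pass: 16 stereo-matching features -> scalar depth estimate."""
    h = sigmoid(features @ W1 + b1)   # hidden-layer activations
    return float((h @ W2 + b2)[0])    # linear output neuron

def depth_to_gray(depth, near, far):
    """Quantize a depth value into the four gray levels named in the abstract.
    The ordering (white = nearest, black = farthest) is an assumption."""
    levels = ["white", "light gray", "dark gray", "black"]
    t = np.clip((depth - near) / (far - near), 0.0, 1.0)  # normalize to [0, 1]
    return levels[min(int(t * 4), 3)]                     # bucket into 4 bins
```

In use, each pixel's depth estimate from `predict_depth` would be passed through `depth_to_gray` to build the final four-level image described in the abstract.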

Paper Details

Date Published: 16 December 2004
PDF: 8 pages
Proc. SPIE 5606, Two- and Three-Dimensional Vision Systems for Inspection, Control, and Metrology II, (16 December 2004); doi: 10.1117/12.571629
Author Affiliations:
Farrah Wong, Univ. Malaysia Sabah (Malaysia)
Ramachandran R. Nagarajan, Univ. Malaysia Sabah (Malaysia)
Sazali Yaacob, Univ. Malaysia Sabah (Malaysia)

Published in SPIE Proceedings Vol. 5606:
Two- and Three-Dimensional Vision Systems for Inspection, Control, and Metrology II
Kevin G. Harding, Editor(s)

© SPIE.