Neural-network-based depth computation for blind navigation
16 December 2004
Abstract
"Blind navigation" refers to research undertaken to help blind people navigate autonomously or with minimal assistance. In this research, an aid to assist blind people in their navigation is proposed. Distance serves as an important cue during navigation. A stereovision navigation aid, implemented with two digital video cameras spaced apart and fixed on a headgear, is presented to obtain distance information. In this paper, a neural-network methodology is used to obtain the required camera parameters, a process known as camera calibration. These parameters are not known a priori but are obtained by adjusting the weights in the network. The inputs to the network are the matching features in the stereo image pair. A backpropagation network with 16 input neurons, 3 hidden neurons, and 1 output neuron, which gives depth, is created. The distance information is incorporated into the final processed image as four gray levels: white, light gray, dark gray, and black. Preliminary results show that the percentage errors fall below 10%. It is envisaged that the distance provided by the neural network will enable blind individuals to approach and pick up an object of interest.
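The abstract's core pipeline can be sketched in code: a 16-3-1 backpropagation network that maps stereo matching features to a depth value, followed by quantization of depth into the four gray levels mentioned above. This is a minimal illustrative sketch, not the authors' implementation; the sigmoid activation, learning rate, depth thresholds, and the near-is-white convention are all assumptions made here for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class DepthMLP:
    """16 input neurons -> 3 hidden neurons -> 1 output neuron (depth).

    Architecture sizes follow the abstract; weights are learned by
    plain backpropagation on squared error (details assumed here).
    """
    def __init__(self):
        self.W1 = rng.standard_normal((16, 3)) * 0.1
        self.b1 = np.zeros(3)
        self.W2 = rng.standard_normal((3, 1)) * 0.1
        self.b2 = np.zeros(1)

    def forward(self, x):
        # x: 16 stereo matching features; returns a scalar depth estimate
        h = sigmoid(x @ self.W1 + self.b1)
        return (h @ self.W2 + self.b2)[0]

    def train_step(self, x, target, lr=0.01):
        # one backpropagation step on 0.5 * (y - target)^2
        h = sigmoid(x @ self.W1 + self.b1)
        y = (h @ self.W2 + self.b2)[0]
        err = y - target
        # gradients for the linear output layer
        grad_W2 = np.outer(h, [err])
        grad_b2 = np.array([err])
        # backpropagate through the sigmoid hidden layer
        dh = err * self.W2[:, 0] * h * (1.0 - h)
        grad_W1 = np.outer(x, dh)
        grad_b1 = dh
        self.W2 -= lr * grad_W2
        self.b2 -= lr * grad_b2
        self.W1 -= lr * grad_W1
        self.b1 -= lr * grad_b1
        return err ** 2

def depth_to_gray(depth, thresholds=(1.0, 2.0, 3.0)):
    """Quantize a depth value into four gray levels.

    The mapping (nearest = white, farthest = black) and the metre
    thresholds are illustrative assumptions, not taken from the paper.
    """
    levels = (255, 170, 85, 0)  # white, light gray, dark gray, black
    for t, level in zip(thresholds, levels):
        if depth < t:
            return level
    return levels[-1]
```

A usage sketch: train on one hypothetical feature vector until the network output approaches its target depth, then render that depth as a gray level for the processed image.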
© (2004) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Farrah Wong, Ramachandran R. Nagarajan, and Sazali Yaacob "Neural-network-based depth computation for blind navigation", Proc. SPIE 5606, Two- and Three-Dimensional Vision Systems for Inspection, Control, and Metrology II, (16 December 2004); https://doi.org/10.1117/12.571629
KEYWORDS: Cameras, Imaging systems, Neural networks, Calibration, Image processing, Neurons, Image segmentation