Paper
22 March 2001
Equivalency of Bayesian multisensor information fusion and neural networks
Abstract
This paper proposes a Bayesian multisensor object localization approach that tracks the observability of the sensors in order to maximize the accuracy of the final decision. This is accomplished by adaptively monitoring the mean-square error of the localization system's results. Knowledge of this error and of the distribution of the system's object localization estimates allows the result of each sensor to be scaled and combined in an optimal Bayesian sense. It is shown that, under conditions of normality, the Bayesian sensor fusion approach is directly equivalent to a single-layer neural network with a sigmoidal non-linearity. Furthermore, spatial and temporal feedback in the neural networks can be used to compensate for practical difficulties such as the spatial dependencies of adjacent positions. Experimental results using 10 binary microphone arrays yield an order-of-magnitude improvement in localization error for the proposed approach compared with previous techniques.
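The abstract's central equivalence can be illustrated with a small sketch. Assuming a binary hypothesis (object present at a position or not) and independent Gaussian sensor readings — an assumption consistent with the "conditions of normality" mentioned above, though the paper's exact model is not given in the abstract — the fused Bayesian posterior reduces to a sigmoid applied to a weighted sum of the sensor outputs, i.e. a single neuron. All function and variable names below are illustrative, not taken from the paper:

```python
import numpy as np

def sigmoid(z):
    """Logistic sigmoid non-linearity."""
    return 1.0 / (1.0 + np.exp(-z))

def bayes_fused_posterior(x, mu0, mu1, var, prior1=0.5):
    """Direct Bayesian fusion: P(object present | sensor readings x),
    with each sensor i distributed N(mu1[i], var[i]) when the object is
    present and N(mu0[i], var[i]) when it is absent (independent sensors)."""
    log_l1 = -0.5 * np.sum((x - mu1) ** 2 / var) + np.log(prior1)
    log_l0 = -0.5 * np.sum((x - mu0) ** 2 / var) + np.log(1.0 - prior1)
    m = max(log_l0, log_l1)          # subtract the max for numerical stability
    p1, p0 = np.exp(log_l1 - m), np.exp(log_l0 - m)
    return p1 / (p0 + p1)

def neuron_posterior(x, mu0, mu1, var, prior1=0.5):
    """The same posterior written as a single-layer neural network:
    a weighted sum of sensor readings plus a bias, through a sigmoid."""
    w = (mu1 - mu0) / var            # per-sensor weights from the Gaussians
    b = np.sum((mu0 ** 2 - mu1 ** 2) / (2.0 * var)) \
        + np.log(prior1 / (1.0 - prior1))
    return sigmoid(np.dot(w, x) + b)

# Example: three hypothetical sensors with different noise variances.
mu0 = np.array([0.0, 0.0, 0.0])
mu1 = np.array([1.0, 2.0, 0.5])
var = np.array([0.5, 1.0, 2.0])
x = np.array([0.8, 1.5, 0.2])

p_bayes = bayes_fused_posterior(x, mu0, mu1, var)
p_neuron = neuron_posterior(x, mu0, mu1, var)
# The two posteriors agree to machine precision, since the log-odds of the
# Gaussian likelihood ratio is exactly linear in x.
```

Note how noisier sensors (larger `var`) automatically receive smaller weights `w`, which mirrors the abstract's point that tracking each sensor's error allows its contribution to be scaled optimally before fusion.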
© (2001) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Parham Aarabi "Equivalency of Bayesian multisensor information fusion and neural networks", Proc. SPIE 4385, Sensor Fusion: Architectures, Algorithms, and Applications V, (22 March 2001); https://doi.org/10.1117/12.421126
CITATIONS
Cited by 1 scholarly publication.
KEYWORDS
Sensors, Neural networks, Environmental sensing, Neurons, Signal to noise ratio, Sensor fusion, Binary data