This paper has briefly identified some application areas that require very reliable and precise real-time depth estimates from visual sensors. Among these, stereoscopy is widely employed to extract 3-D structure information about imaged objects. In such applications there is a strong need to evaluate the uncertainty that remains in the stereoscopic information. This is significant both for fusing the information with other sensor modalities in multi-sensor systems and for reducing the uncertainty to any pre-assigned value, if required. These two approaches have been realized in this paper. In the first approach, partial stereo information obtained from a single camera has been considered for fusion using a generalized camera model, and the uncertainty ellipsoid of this information has been derived. In the second approach, a multiple-camera (or multiple-baseline) stereo system has been considered, in which the correctness and precision of multi-baseline stereo matching have been improved by applying fusion concepts. The improvement in the trade-off between precision and ambiguity through fusion of depth estimates has been illustrated using a particular intensity function for the images. With fusion, fewer images are required to obtain the same level of precision.
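The abstract does not detail how the depth estimates are combined, but the stated effect (fusing estimates from several baselines so that fewer images achieve a given precision) is characteristic of minimum-variance fusion, where the fused variance is never larger than the best individual variance. The sketch below is an illustrative assumption, not the authors' exact method; the function name and the sample numbers are hypothetical.

```python
import numpy as np

def fuse_depth_estimates(depths, variances):
    """Minimum-variance (inverse-variance weighted) fusion of
    independent depth estimates of the same scene point.
    Each estimate d_i with variance s_i^2 gets weight 1/s_i^2;
    the fused variance 1/sum(1/s_i^2) is smaller than the
    smallest individual variance."""
    depths = np.asarray(depths, dtype=float)
    weights = 1.0 / np.asarray(variances, dtype=float)
    fused_depth = np.sum(weights * depths) / np.sum(weights)
    fused_var = 1.0 / np.sum(weights)
    return fused_depth, fused_var

# Three baselines report the same point at slightly different depths.
d, v = fuse_depth_estimates([2.02, 1.98, 2.05], [0.04, 0.02, 0.09])
print(d, v)  # fused variance is below 0.02, the best single estimate
```

Because the fused variance shrinks with every added estimate, a target precision can be met with fewer independent views than any single-baseline estimate would require.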
Sensor fusion is an important technology whose use is growing rapidly due to its tremendous application potential. Appropriate fusion techniques need to be developed especially when a system requires redundant sensors. The greater the redundancy in sensors, the greater the computational complexity of controlling the system, and the higher its intelligence level. This research presents a strategy for multiple-sensor fusion based on geometric optimization. An uncertainty model has been developed for each sensor. Using Lagrangian optimization techniques, the individual sensors' uncertainties have been fused to reduce the overall uncertainty and to generate a consensus among the sensors regarding their acceptable values. Using a fission-fusion architecture, the precision level has been improved further. Subsequently, using feedback from the fused sensory information, the net error has been minimized further, to any pre-assigned value, by developing a fusion technique in the differential domain (FDD). The techniques have been illustrated using synthesized data from two types of sensors (an optical encoder and a single-camera vision sensor). The experience of applying the same fusion strategy to improve the precision and correctness of stereo matching using multiple baselines has also been discussed.
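The abstract mentions fusing individual sensor uncertainties via Lagrangian optimization but does not give the formulation. A standard version of this idea, shown here as an assumed sketch rather than the authors' actual derivation, minimizes the fused variance sum(w_i^2 s_i^2) subject to the weights summing to one; the Lagrange condition yields inverse-variance weights. The sensor variances below are invented for illustration.

```python
import numpy as np

def lagrangian_fusion_weights(variances):
    """Weights minimising the fused variance sum(w_i^2 * s_i^2)
    subject to sum(w_i) = 1.  Setting the gradient of the Lagrangian
    L = sum(w_i^2 s_i^2) - lam * (sum(w_i) - 1) to zero gives
    w_i = lam / (2 s_i^2); the constraint fixes lam, so
    w_i = (1/s_i^2) / sum_j (1/s_j^2)."""
    inv = 1.0 / np.asarray(variances, dtype=float)
    return inv / inv.sum()

# Two sensors: an optical encoder (variance 0.01) and a camera (0.04).
variances = np.array([0.01, 0.04])
w = lagrangian_fusion_weights(variances)
fused_var = float(np.sum(w**2 * variances))
print(w, fused_var)  # weights [0.8 0.2]; fused variance 0.008 < 0.01
```

The fused variance (0.008) is below that of the better sensor alone (0.01), which is the consensus-improving effect the abstract describes.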