Paper
26 September 1997
Best viewpoints for active vision classification and pose estimation
Michael A. Sipe, David P. Casasent
Abstract
We advance new active computer vision algorithms that classify objects and estimate their pose from intensity images. Our algorithms automatically reposition the sensor if the class or pose of an object is ambiguous in a given image and incorporate data from multiple object views in determining the final object classification. A feature space trajectory (FST) in a global eigenfeature space is used to represent 3-D distorted views of an object. Assuming that an observed feature vector consists of Gaussian noise added to a point on the FST, we derive a probability density function (PDF) for the observation conditioned on the class and pose of the object. Bayesian estimation and hypothesis testing theory are then used to derive approximations to the maximum a posteriori probability pose estimate and the minimum probability of error classifier. New confidence measures for the class and pose estimates, derived using Bayes theory, determine when additional observations are required as well as where the sensor should be positioned to provide the most useful information.
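To make the approach concrete, the following Python fragment is a minimal sketch, not the authors' code. It assumes each class is represented by a piecewise-linear feature space trajectory (FST) whose vertices are eigenfeature vectors at known aspect angles, and that an observed feature vector is a trajectory point corrupted by isotropic Gaussian noise of variance sigma**2. Under those simplifying assumptions, the approximate MAP class and pose follow from the distance between the observation and each class's trajectory, and the class posterior doubles as a confidence measure for deciding whether another view is needed. The names FST, classify_and_estimate_pose, sigma, and priors are illustrative, not taken from the paper.

# Minimal sketch of FST-based classification and pose estimation under an
# isotropic Gaussian noise assumption; names and parameters are illustrative.
import numpy as np


class FST:
    """Piecewise-linear feature space trajectory for one object class."""

    def __init__(self, vertices, poses):
        self.vertices = np.asarray(vertices, dtype=float)  # (n, d) eigenfeature vectors
        self.poses = np.asarray(poses, dtype=float)        # (n,) aspect angles

    def project(self, x):
        """Return (squared distance, interpolated pose) of the closest FST point to x."""
        best_d2, best_pose = np.inf, None
        for i in range(len(self.vertices) - 1):
            a, b = self.vertices[i], self.vertices[i + 1]
            ab = b - a
            # Orthogonal projection parameter, clipped to the segment [a, b].
            t = np.clip(np.dot(x - a, ab) / np.dot(ab, ab), 0.0, 1.0)
            d2 = np.sum((x - (a + t * ab)) ** 2)
            if d2 < best_d2:
                best_d2 = d2
                # Linear pose interpolation along the segment (a simplifying assumption).
                best_pose = (1.0 - t) * self.poses[i] + t * self.poses[i + 1]
        return best_d2, best_pose


def classify_and_estimate_pose(x, fsts, sigma=1.0, priors=None):
    """Approximate MAP class and pose for observation x under Gaussian noise.

    With isotropic Gaussian noise of variance sigma**2, the class posterior is
    proportional to prior * exp(-d**2 / (2 * sigma**2)), where d is the distance
    from x to that class's trajectory.
    """
    x = np.asarray(x, dtype=float)
    if priors is None:
        priors = np.full(len(fsts), 1.0 / len(fsts))
    d2, poses = zip(*(fst.project(x) for fst in fsts))
    log_post = np.log(priors) - np.asarray(d2) / (2.0 * sigma ** 2)
    post = np.exp(log_post - log_post.max())
    post /= post.sum()
    k = int(np.argmax(post))
    # post[k] acts as a confidence measure: a low value would prompt the active
    # system to reposition the sensor and acquire another view before deciding.
    return k, poses[k], post[k]

In this sketch a low posterior for the winning class would trigger acquisition of an additional view; the paper's confidence measures and best-viewpoint selection are derived from Bayes theory rather than from this simplified isotropic model.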
© (1997) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Michael A. Sipe and David P. Casasent "Best viewpoints for active vision classification and pose estimation", Proc. SPIE 3208, Intelligent Robots and Computer Vision XVI: Algorithms, Techniques, Active Vision, and Materials Handling, (26 September 1997); https://doi.org/10.1117/12.290309
CITATIONS
Cited by 1 scholarly publication.
KEYWORDS
Active vision
Probability theory
Error analysis
Sensors
3D vision
Analytical research
Computer vision technology