This paper aims to provide a procedure for improving automated 3D reconstruction methods via vision metrology. The 3D reconstruction problem is generally addressed using two different approaches. On the one hand, vision metrology (VM) systems aim to accurately derive the 3D coordinates of a few sparse object points for industrial measurement and inspection applications; on the other, recent dense image matching (DIM) algorithms are designed to produce dense point clouds for surface representation and analysis. This paper strives to demonstrate a step towards narrowing the gap between traditional VM and DIM approaches. Efforts are therefore directed at (i) testing the metric performance of the automated photogrammetric 3D reconstruction procedure, (ii) enhancing the accuracy of the final results and (iii) obtaining statistical indicators of the quality achieved in the orientation step. VM tools are exploited to integrate their main functionalities (centroid measurement, photogrammetric network adjustment, precision assessment, etc.) into the pipeline of dense 3D reconstruction. Finally, geometric analyses and accuracy evaluations are performed on the raw output of the matching (i.e. the point clouds) by adopting a metrological approach based on known geometric shapes and quality parameters derived from the VDI/VDE guidelines. Tests are carried out by imaging the calibrated Portable Metric Test Object, designed and built at University College London (UCL), UK, which allows assessment of the performance of the image orientation and matching procedures within a typical industrial scenario characterised by poor texture and known 3D/2D shapes.
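As a rough illustration of the kind of metrological check described above (the abstract does not give the authors' implementation), the following sketch fits a sphere to a point cloud by linear least squares and reports the range of signed radial residuals, a quality figure in the spirit of the probing error defined in the VDI/VDE 2634 guidelines. All names, the sphere radius, and the noise level are illustrative assumptions.

```python
# Minimal sketch: sphere fit + a VDI/VDE-style quality figure on a point cloud.
# Illustrative only; not the paper's actual evaluation code.
import numpy as np

def fit_sphere(points: np.ndarray):
    """Least-squares sphere fit. points: (N, 3) array of XYZ coordinates."""
    # |p - c|^2 = r^2 rewritten as a linear system in (cx, cy, cz, k):
    #   2 * p.c + k = p.p,  with  k = r^2 - c.c
    A = np.hstack([2.0 * points, np.ones((points.shape[0], 1))])
    b = np.sum(points**2, axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    centre, k = sol[:3], sol[3]
    radius = np.sqrt(k + centre @ centre)
    return centre, radius

def probing_error(points: np.ndarray):
    """Range (max - min) of signed radial deviations from the best-fit sphere."""
    centre, radius = fit_sphere(points)
    residuals = np.linalg.norm(points - centre, axis=1) - radius
    return residuals.max() - residuals.min(), radius

# Example: a simulated noisy scan of a 25 mm calibration sphere.
rng = np.random.default_rng(0)
directions = rng.normal(size=(2000, 3))
directions /= np.linalg.norm(directions, axis=1, keepdims=True)
cloud = 25.0 * directions + rng.normal(scale=0.02, size=(2000, 3))
pe, r = probing_error(cloud)
print(f"best-fit radius = {r:.3f} mm, probing error = {pe:.3f} mm")
```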
This paper describes a strategy for accurate robot calibration using close-range photogrammetry. A 5-DoF robot has been designed to place two web cameras relative to an object. To ensure correct camera positioning, the robot is calibrated using the following strategy. First, the Denavit-Hartenberg method is used to generate a general kinematic robot model: a set of reference frames is defined relative to each joint and each camera, and transformation matrices are produced to represent the change in position and orientation between frames in terms of joint positions and unknown parameters. The complete model is obtained by multiplying these matrices. Second, photogrammetry is used to estimate the postures of both cameras: a set of images of a calibration fixture is captured from different robot poses, and the camera postures are then estimated using bundle adjustment. Third, the kinematic parameters are estimated using weighted least squares: for each pose, a set of equations is extracted from the model, and the unknown parameters are estimated in an iterative procedure. Finally, these values are substituted back into the original model. This final model is tested using forward kinematics by comparing the model’s predicted camera postures for given joint positions to the values obtained through photogrammetry. Inverse kinematics is performed using both least squares and particle swarm optimisation, and these techniques are contrasted. Results demonstrate that this photogrammetric approach produces a reliable and accurate model of the robot that can be used with both least squares and particle swarm optimisation for robot control.
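To make the Denavit-Hartenberg forward-kinematics step concrete, here is a minimal sketch: one homogeneous transform per joint, chained by matrix multiplication to give the camera pose in the robot base frame. The 5-DoF parameter table is an illustrative assumption, not the paper's calibrated values.

```python
# Minimal sketch of standard Denavit-Hartenberg forward kinematics.
# The DH table below is hypothetical, not the paper's calibrated model.
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Standard DH homogeneous transform between consecutive joint frames."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(joint_angles, dh_params):
    """Multiply the per-joint transforms to get the end-effector (camera) pose."""
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(joint_angles, dh_params):
        T = T @ dh_transform(theta, d, a, alpha)
    return T  # 4x4 pose of the camera frame in the robot base frame

# Illustrative 5-DoF table: (d, a, alpha) per joint, lengths in mm.
DH = [(90.0, 0.0, np.pi / 2),
      (0.0, 250.0, 0.0),
      (0.0, 200.0, 0.0),
      (0.0, 0.0, np.pi / 2),
      (60.0, 0.0, 0.0)]
pose = forward_kinematics([0.1, -0.4, 0.6, 0.2, 0.0], DH)
print(np.round(pose, 3))
```

In the calibration described above, the unknown entries of such a table would be estimated by weighted least squares against the camera postures recovered from bundle adjustment, then substituted back into the model.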