This paper describes a strategy for accurate robot calibration using close range photogrammetry. A 5-DoF robot has been designed for placement of two web cameras relative to an object. To ensure correct camera positioning, the robot is calibrated using the following strategy. First, the Denavit-Hartenberg method is used to generate a general kinematic robot model. A set of reference frames is defined relative to each joint and each camera; transformation matrices are then produced to represent the change in position and orientation between frames in terms of joint positions and unknown parameters. The complete model is obtained by multiplying these matrices. Second, photogrammetry is used to estimate the postures of both cameras. A set of images of a calibration fixture is captured from different robot poses, and the camera postures are then estimated using bundle adjustment. Third, the kinematic parameters are estimated using weighted least squares. For each pose a set of equations is extracted from the model, and the unknown parameters are estimated in an iterative procedure. Finally, these values are substituted back into the original model. The final model is tested using forward kinematics by comparing its predicted camera postures for given joint positions to the values obtained through photogrammetry. Inverse kinematics is performed using both least squares and particle swarm optimisation, and the two techniques are contrasted. Results demonstrate that this photogrammetric approach produces a reliable and accurate model of the robot that can be used with both least squares and particle swarm optimisation for robot control.
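As an illustration of the kinematic modelling step, the sketch below chains standard Denavit-Hartenberg transforms into a forward-kinematics model. This is a minimal, generic formulation, not the paper's implementation: the parameter layout is assumed, and in the paper's strategy the unknown DH parameters would be the quantities estimated by the weighted least-squares stage.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform between consecutive frames under the
    standard Denavit-Hartenberg convention."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(joint_angles, dh_params):
    """Multiply the per-joint transforms to obtain the camera-mount
    pose in the robot base frame (five joints for a 5-DoF robot)."""
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(joint_angles, dh_params):
        T = T @ dh_transform(theta, d, a, alpha)
    return T
```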
Multi-View Stereo (MVS), as a low-cost technique for precise 3D reconstruction, can rival laser scanners if the scale of the model is resolved. A fusion of stereo imaging equipment with photogrammetric bundle adjustment and MVS methods, known as photogrammetric MVS, can generate correctly scaled 3D models without using any known object distances. Although a large set of stereo images (e.g. 200 high-resolution images of a small object) contains redundant data that allows detailed and accurate 3D reconstruction, capture and processing time increase when a vast number of high-resolution images is employed. Moreover, some parts of the object are often missing due to incomplete coverage. These problems demand a logical selection of the most suitable stereo camera views from the large image dataset. This paper presents a method for clustering and choosing optimal stereo or, optionally, single images from a large image dataset. The approach focuses on the two key steps of image clustering and iterative image selection. The method is developed within a software application called Imaging Network Designer (IND) and tested by the 3D recording of a gearbox and three metric reference objects. A comparison is made between IND and CMVS, a free package for selecting vantage images. The final 3D models obtained from the IND and CMVS approaches are compared with datasets generated with an MMDx Nikon laser scanner. Results demonstrate that IND can provide a better image selection for MVS than CMVS in terms of surface coordinate uncertainty and completeness.
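IND's actual clustering and selection criteria are specific to the paper; purely as a stand-in illustration of the clustering-then-selection idea, the hypothetical sketch below groups camera stations by position and viewing direction with k-means and keeps the image closest to each cluster centre.

```python
import numpy as np
from sklearn.cluster import KMeans

def select_views(camera_centers, view_dirs, n_clusters=20):
    """Cluster camera stations on (position, viewing direction) and
    keep one representative image per cluster. Illustrative only."""
    feats = np.hstack([camera_centers, view_dirs])  # (N, 6) feature vectors
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(feats)
    selected = []
    for k in range(n_clusters):
        members = np.where(km.labels_ == k)[0]
        dists = np.linalg.norm(feats[members] - km.cluster_centers_[k], axis=1)
        selected.append(members[np.argmin(dists)])  # closest to cluster centre
    return sorted(selected)
```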
Current research conducted by the Institute for Photogrammetry at the Universitaet Stuttgart aims at the determination of a cylinder head's pose using a single monochromatic camera. The work is related to the industrial project RoboMAP, where the recognition result will be used as initial information for a robot to position other sensors over the cylinder head. For this purpose a commercially available view-based algorithm is applied, which itself needs the object's geometry as a priori information. We describe the general functionality of the approach and present the results of our latest experiments. The results we achieved show that the accuracy as well as the processing time suit the project's requirements very well, provided the image acquisition is prepared properly.
We propose a point-based environment model (PEM) to represent the absolute coordinate frame in which camera motion is to be tracked. The PEM can easily be acquired by laser scanning, both indoors and outdoors, even over long distances. The approach avoids any expensive modeling step and instead uses the raw point data for scene representation. It also requires no additional artificial markers or active components as orientation cues. Using intensity feature detection techniques, key points are automatically extracted from the PEM and tracked across the image sequence. The orientation procedure of the imaging sensor is based solely on spatial resection.
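Spatial resection from known 3D points and their image measurements is the classical perspective-n-point problem. The sketch below is a minimal stand-in for the paper's resection step, assuming a calibrated camera matrix K and 2D tracks of the extracted PEM key points, and using OpenCV's solvePnP rather than the authors' own solver.

```python
import numpy as np
import cv2

def resect_camera(pem_points_3d, image_points_2d, K, dist_coeffs=None):
    """Estimate camera pose from 3D PEM key points (N, 3) and their
    tracked 2D image locations (N, 2) by spatial resection (PnP)."""
    if dist_coeffs is None:
        dist_coeffs = np.zeros(5)  # assume negligible lens distortion
    ok, rvec, tvec = cv2.solvePnP(
        pem_points_3d.astype(np.float64),
        image_points_2d.astype(np.float64),
        K, dist_coeffs, flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        raise RuntimeError("spatial resection failed")
    R, _ = cv2.Rodrigues(rvec)  # rotation: world -> camera
    C = -R.T @ tvec             # camera centre in the PEM frame
    return R, tvec, C
```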
This paper aims at developing a model-based system for the recognition of three-dimensional objects with curved surfaces using range images. The model data is represented using a CAD model, providing a mathematically precise and reliable description of arbitrary shapes. The proposed method is based on model-based range image segmentation, using curvatures as invariant features. By integrating model information into the segmentation stage, the segmentation process is guided to provide a partitioning corresponding to that of the CAD model. The work provides a way to detect objects in arbitrary positions and derive the transformation onto a CAD model. It thereby contributes to the development of automated systems in the areas of inspection, manufacturing and robotics.
We address the problem of automating the processing of dense range data, specifically the automated interpretation of such data containing curved surfaces. This is a crucial step in the automated processing of range data for applications in object recognition, measurement, re-engineering and modeling. We propose a two-stage process using model-based curvature classification as the first step. Features based on differential geometry, mainly curvature features, are ideally suited to processing objects of arbitrary shape, including curved surfaces. The second stage uses a modified region growing algorithm to perform the final segmentation. The results of the proposed approach are demonstrated on different range data sets.
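As a rough sketch of the curvature-classification idea (not the paper's exact procedure), the code below estimates Gaussian (K) and mean (H) curvature of a range image from finite-difference derivatives and labels each pixel by its (sign H, sign K) combination, in the style of classical HK surface-type classification.

```python
import numpy as np

def hk_classify(z, eps=1e-4):
    """Label each pixel of a depth image z(x, y) by the signs of its
    mean (H) and Gaussian (K) curvature, treating |.| < eps as zero."""
    zy, zx = np.gradient(z)        # first derivatives (rows = y, cols = x)
    zxy, zxx = np.gradient(zx)     # second derivatives
    zyy, _ = np.gradient(zy)
    g = 1.0 + zx**2 + zy**2
    K = (zxx * zyy - zxy**2) / g**2
    H = ((1 + zy**2) * zxx - 2 * zx * zy * zxy
         + (1 + zx**2) * zyy) / (2 * g**1.5)
    sH = np.where(np.abs(H) < eps, 0, np.sign(H))
    sK = np.where(np.abs(K) < eps, 0, np.sign(K))
    return 3 * (sH + 1) + (sK + 1)  # integer surface-type label per pixel
```

A region-growing pass over these labels, as in the paper's second stage, would then merge pixels of the same surface type into coherent patches.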
We present our work on the implementation and calibration of a multisensor measuring system. The work is part of a large-scale research project on optical measurement using sensor-actuator coupling and active exploration. This project is a collaboration of researchers from seven institutes of the University of Stuttgart, including photogrammetry, mechanical engineering and computer science. The system consists of optical sensors whose position and orientation can be manipulated by robot actuators, and light sources which control illumination. The system performs different tasks including object recognition, localization and gauging. Flexibility is achieved by replacing the common serial measurement chain with nested control loops involving autonomous agents which perform basic tasks in a modular fashion. The system is able to inspect and gauge several parts from a set of parts stored in a 3D model database. The paper gives an overview of the entire system and details some of the photogrammetry-related aspects, such as the calibration of the different sensors, the calibration of the measurement robot using photogrammetric measurements, and data processing steps like segmentation, object pose determination, and gauging.
This work focuses on the extraction of features from dense range images for object recognition. The object recognition process is based on a CAD model of the subject. Curvature information derived from the CAD model is used to support the feature extraction process. We perform a curvature-based classification of the range image to achieve a segmentation into meaningful surface patches, which are subsequently matched with the surfaces of the CAD model.
Presently there is a growing demand for fast and precise 3D computer vision systems for a wide variety of industrial applications such as reverse engineering, quality control and industrial gauging. One important aspect of any vision system is data acquisition. If the principle of triangulation is used, the correspondence problem must be solved. The coded light approach offers a fast way to overcome this problem and to provide dense range data. In order to obtain high-accuracy range images, the system needs to be calibrated. In this paper, we compare two calibration techniques: polynomial depth calibration and photogrammetric calibration. We carried out both methods independently. To obtain results about the accuracy in object space, we measured the surface of a plane table.
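A minimal sketch of the polynomial variant, assuming the depth calibration is a single global polynomial fitted to reference measurements at known depths (e.g. a plane moved to calibrated positions); the polynomial degree and the global rather than per-pixel fit are assumptions, not necessarily the paper's parameterisation.

```python
import numpy as np

def fit_depth_polynomial(raw_values, true_depths, degree=3):
    """Fit z = p(raw): map the raw coded-light measurement to metric
    depth using reference observations at known distances."""
    coeffs = np.polyfit(raw_values, true_depths, degree)
    return np.poly1d(coeffs)

# Usage: calibrate against a plane at known positions, then convert
# new raw sensor output to metric depth.
# depth_model = fit_depth_polynomial(raw_ref, z_ref, degree=3)
# z = depth_model(raw_new)
```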
A research group at the University of Stuttgart has set up an experimental measurement robot for industrial close range inspection. During a test run, the feasibility of a multi-sensor/actuator system has been shown. The system uses optical sensors to perform different tasks including object recognition, localization and gauging. It is a step towards systems which are able to inspect and gauge several parts from a set of parts stored in a 3D model database. This paper describes the results which have been obtained so far and were demonstrated during a test run. It then focuses on our latest developments concerning 3D data acquisition, registration, segmentation, model generation from CAD data and object recognition.