Active learning in camera calibration through vision measurement application
Xiaoqin Li, Jierong Guo, Xianchun Wang, Changqing Liu, Binfang Cao
Proceedings Volume 10452, 14th Conference on Education and Training in Optics and Photonics: ETOP 2017; 104524O (16 August 2017); https://doi.org/10.1117/12.2269011
Event: 14th Conference on Education and Training in Optics and Photonics, ETOP 2017, 2017, Hangzhou, China
Abstract
Since cameras are increasingly used in scientific applications as well as in applications requiring precise visual information, effective calibration of such cameras is becoming more important. There are many reasons why measurements of objects may be inaccurate. The largest is lens distortion. Another detrimental influence on the evaluation accuracy is caused by perspective distortions in the image, which occur whenever the camera cannot be mounted perpendicularly to the objects to be measured. Overall, it is very important for students to understand how to correct lens distortions, that is, how to calibrate the camera. If the camera is calibrated, the images can be rectified, and it is then possible to obtain undistorted measurements in world coordinates. This paper presents how students can develop a sense of active learning for the mathematical camera model in addition to the theoretical scientific basics. The authors present theoretical and practical lectures whose goal is to deepen the students' understanding of the mathematical models of area scan cameras and to have them build a practical vision measurement process by themselves.

1. INTRODUCTION

Machine vision is a very important part of the optical measurement course for students majoring in Optoelectronics [1] at Hunan University of Arts and Science who plan to work in the optical field. A broad range of teaching activities is offered to convey the most important aspects of machine vision camera calibration to senior undergraduate students. It is a good mixture of theoretical lessons, active-learning workshops, and practical projects. All the theoretical scientific basics needed to learn the camera model and the calibration process are conveyed intensively. The practical task, and a considerable challenge, is to calibrate the camera with a movable calibration target.

2. ACTIVE LEARNING IN MACHINE VISION CAMERA CALIBRATION

A mixture of theoretical and practical lectures aims at a deeper understanding of machine vision camera calibration. To understand the underlying ideas, the students need broad theoretical scientific know-how. Practical tests and projects are essential to convey and deepen the important theoretical basics.

The students need a practical part to develop a better comprehension, so the design and development of a suitable learning environment with the required infrastructure is crucial. For the camera calibration project, we use 1351UM cameras (1280 × 1024 pixels, 8 bit) manufactured by the DaHeng Image Company in Beijing. The lens used in the vision system is the M0814-MP, with a standard focal length of 16 mm. The students are confronted with practical problems, and by solving them they deepen their theoretical scientific background.

3. THEORETICAL BACKGROUND

To calibrate a camera, a model for the mapping of three-dimensional (3D) points of the world to the 2D image generated by the camera and lens is necessary [2]. Figure 1 displays the perspective projection performed by a pinhole camera. The world point Pw is projected through the projection center of the lens to the point p in the image plane. Lens distortions cause the point p to lie at a position different from the intersection of the straight line from Pw through the projection center with the image plane.

Figure 1. The perspective projection performed by a pinhole camera.

3.1 ANALYSIS OF THE WORLD COORDINATE AND OF THE CAMERA COORDINATE

The transformation from the world coordinate system to the camera coordinate system is a rigid transformation, i.e., a rotation followed by a translation. The point Pw = (xw, yw, zw) in world coordinates is mapped to the point Pc = (xc, yc, zc) in camera coordinates by

$$\begin{pmatrix} x_c \\ y_c \\ z_c \end{pmatrix} = R(\alpha, \beta, \gamma) \begin{pmatrix} x_w \\ y_w \\ z_w \end{pmatrix} + \begin{pmatrix} t_x \\ t_y \\ t_z \end{pmatrix}$$

Here, T = (tx, ty, tz) is a translation vector and R = R(α,β, γ) is a rotation matrix.

$$R(\alpha, \beta, \gamma) = R_x(\alpha)\, R_y(\beta)\, R_z(\gamma) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\alpha & -\sin\alpha \\ 0 & \sin\alpha & \cos\alpha \end{pmatrix} \begin{pmatrix} \cos\beta & 0 & \sin\beta \\ 0 & 1 & 0 \\ -\sin\beta & 0 & \cos\beta \end{pmatrix} \begin{pmatrix} \cos\gamma & -\sin\gamma & 0 \\ \sin\gamma & \cos\gamma & 0 \\ 0 & 0 & 1 \end{pmatrix}$$
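To make the rigid transformation tangible, a small numerical sketch can be worked through alongside the equations. The following NumPy fragment is illustrative only, not the course code; the rotation order Rx·Ry·Rz and the example point are assumptions, while the exterior orientation values are taken roughly from Table 2.

```python
# Illustrative sketch of the world-to-camera rigid transformation
# Pc = R(alpha, beta, gamma) * Pw + T (rotation order assumed to be Rx Ry Rz).
import numpy as np

def rotation_matrix(alpha, beta, gamma):
    """Build R from rotations about the x, y and z axes (angles in radians)."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    Rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    Rz = np.array([[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]])
    return Rx @ Ry @ Rz

def world_to_camera(p_w, angles_deg, t):
    """Transform a world point into camera coordinates."""
    R = rotation_matrix(*np.deg2rad(angles_deg))
    return R @ np.asarray(p_w, dtype=float) + np.asarray(t, dtype=float)

# Example: a point on the calibration target plane, using rough values of the
# exterior orientation from Table 2 (angles in degrees, translations in mm).
p_c = world_to_camera([50.0, 20.0, 0.0],
                      [358.551, 0.786, 111.867],
                      [-10.758, -33.755, 277.921])
print(p_c)
```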

3.2 ANALYSIS OF THE CAMERA COORDINATE AND OF THE IMAGE PLANE COORDINATE

For the pinhole camera, the projection of the 3D point Pc into the image plane coordinate system is a perspective projection, which is given by

$$\begin{pmatrix} u \\ v \end{pmatrix} = \frac{f}{z_c} \begin{pmatrix} x_c \\ y_c \end{pmatrix}$$

After the projection to the image plane, lens distortions cause the coordinates (u, v) to be modified. This is a transformation that can be modeled in the image plane alone. For most lenses, the distortion can be approximated sufficiently well by radial distortion:

$$\tilde{u} = \frac{2u}{1 + \sqrt{1 - 4k(u^2 + v^2)}}\,, \qquad \tilde{v} = \frac{2v}{1 + \sqrt{1 - 4k(u^2 + v^2)}}$$

where (ũ, ṽ) denote the distorted image plane coordinates.

Here, if k is positive the radial distortion is barrel-shaped, while negative k corresponds to pincushion-shaped distortion.
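The projection and the distortion step can be prototyped in a few lines. The sketch below assumes the division model of radial distortion, as used by Halcon-style calibration; it is an illustration with made-up values, not the course implementation.

```python
# Sketch of the perspective projection and of division-model radial distortion.
# Units are kept consistent: camera coordinates and f in mm, so (u, v) is in mm.
import numpy as np

def project(p_c, f):
    """Project a camera-coordinate point (xc, yc, zc) onto the image plane."""
    x_c, y_c, z_c = p_c
    return np.array([f * x_c / z_c, f * y_c / z_c])      # ideal (u, v)

def distort(uv, k):
    """Map the ideal (u, v) to the radially distorted (u~, v~), division model."""
    u, v = uv
    r2 = u * u + v * v
    scale = 2.0 / (1.0 + np.sqrt(1.0 - 4.0 * k * r2))     # assumes 4*k*r2 < 1
    return scale * np.array([u, v])

uv_ideal = project([50.0, 20.0, 280.0], 16.0)   # p_c in mm, f in mm
uv_dist = distort(uv_ideal, -1e-3)              # k in 1/mm^2 (illustrative value)
print(uv_ideal, uv_dist)
```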

3.3 ANALYSIS OF THE IMAGE PLANE COORDINATE AND OF THE IMAGE COORDINATE

The distorted point (ũ, ṽ) is transformed from the image plane coordinate system into the image coordinate system, which is given by

$$\begin{pmatrix} c \\ r \end{pmatrix} = \begin{pmatrix} \tilde{u}/s_x + c_x \\ \tilde{v}/s_y + c_y \end{pmatrix}$$

where (c, r) are the column and row coordinates of the pixel.

Here, sx and sy are scaling factors, which represent the size of a pixel in world coordinates. The point (cx, cy) is the principal point of the image, which is the perpendicular projection of the projection center onto the image plane.

The six parameters (α, β, γ, tx, ty, tz) are the exterior camera parameters and the six parameters (f, k, sx, sy, cx, cy) are the interior camera parameters.
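The full chain from a world point to a pixel is obtained by composing the three steps. The fragment below sketches only the last step; the pixel size, principal point, and input coordinates are illustrative assumptions, and the column/row convention matches the one assumed in Section 3.3.

```python
# Sketch of the final step of the camera model: distorted image-plane
# coordinates (u~, v~) to pixel coordinates, using the scaling factors sx, sy
# and the principal point (cx, cy).
import numpy as np

def image_plane_to_pixel(uv_tilde, sx, sy, cx, cy):
    u_t, v_t = uv_tilde
    return np.array([u_t / sx + cx,      # column
                     v_t / sy + cy])     # row

# Chaining world_to_camera -> project -> distort -> image_plane_to_pixel from
# the sketches above reproduces the full mapping governed by the six exterior
# parameters (alpha, beta, gamma, tx, ty, tz) and the six interior parameters
# (f, k, sx, sy, cx, cy). Pixel size and principal point below are illustrative.
pixel = image_plane_to_pixel([2.83, 1.12], sx=5.2e-3, sy=5.2e-3, cx=640, cy=512)
print(pixel)
```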

4. VISION MEASUREMENT APPLICATION

In the workshop “Vision Measurement” the students have to develop several approaches to solve the camera calibration problem and use it to determine the length of a surface scratch on an object, or to rectify distorted images by themselves. The workshop takes five hours. The construction process is divided into different segments, and between the segments the theoretical basics of the calibration technology are repeated. The combination of these theoretical basics with the construction steps carried out leads to a deeper understanding.

4.1 THE PRACTICAL PART

The machine vision measurement system used in the practical lectures is shown in Figure 2. A typical machine vision system [3] in our lab comprises the object, which is illuminated by suitably chosen illumination; a lens suitably selected for the application; a camera that delivers the image to a computer through a camera-computer interface (e.g., a frame grabber or USB 2.0); and the machine vision software (e.g., Halcon) that is used to inspect the objects and return an evaluation of them.

Figure 2. The machine vision system.

Firstly, a calibration target should be chosen for camera calibration. MVTec provides targets for fields of view ranging from about 10 mm to 300 mm, which covers most applications; the targets can be printed and used. Here are some tips for choosing a proper calibration target. The size of the target should generally be between 100% and 200% of the field of view. When working with low-resolution images, make sure the diameter of the individual dots is no less than 20 pixels; otherwise the mark extraction may fail.
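The dot-size rule can be checked with a quick back-of-the-envelope calculation before printing a target; the field-of-view value below is illustrative, not taken from the course setup.

```python
# Quick check of the minimum printed dot diameter for a calibration target:
# with the rule of thumb that each dot should span at least 20 pixels, the
# minimum diameter follows from the field of view and the image resolution.
fov_width_mm = 150.0      # horizontal field of view imaged by the camera (example)
image_width_px = 1280     # horizontal resolution of the 1351UM camera
min_dot_px = 20           # recommended minimum dot diameter in pixels

mm_per_pixel = fov_width_mm / image_width_px
min_dot_mm = min_dot_px * mm_per_pixel
print(f"Each dot should be at least {min_dot_mm:.1f} mm in diameter")  # ~2.3 mm here
```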

Secondly, after taking images of the calibration target (see Figure 3), the students can write the calibration algorithm. The calibration procedure can be realized with the camera calibration toolbox in Halcon. Afterwards, the camera intrinsic and extrinsic parameters are recovered using a closed-form solution, as seen in Table 1 and Table 2.

Figure 3. The images of a calibration target made by the students.

Table 1. The intrinsic camera parameters calculated by the students

f | sx (μm) | sy (μm) | Cx (μm) | Cy (μm) | k
17.7786 | 8.3471 | 58.334 | 0.754 | 24.861 | 290.666

Table 2. The extrinsic camera parameters calculated by the students

α | β | γ | tx | ty | tz
358.551 | 0.78629 | 111.867 | −10.7576 | −33.755 | 277.921
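The calibration itself is done with the Halcon toolbox in the course. For comparison, an equivalent closed-form-plus-refinement calibration can be sketched with OpenCV; the grid layout, dot spacing, and file paths below are placeholder assumptions rather than the actual course data.

```python
# Hedged OpenCV sketch of calibrating a camera from images of a dot-grid
# target (an alternative to the Halcon toolbox used in the course).
import glob
import cv2
import numpy as np

rows, cols = 7, 7           # number of dots on the target (assumed layout)
spacing_mm = 10.0           # center-to-center dot distance (assumed)

# World coordinates of the dot centers; the target is planar, so z = 0.
obj = np.zeros((rows * cols, 3), np.float32)
obj[:, :2] = np.mgrid[0:cols, 0:rows].T.reshape(-1, 2) * spacing_mm

obj_points, img_points, image_size = [], [], None
for path in glob.glob("calib_images/*.png"):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    image_size = gray.shape[::-1]
    found, centers = cv2.findCirclesGrid(gray, (cols, rows),
                                         flags=cv2.CALIB_CB_SYMMETRIC_GRID)
    if found:
        obj_points.append(obj)
        img_points.append(centers)

# Closed-form initialization followed by nonlinear refinement of the
# intrinsic and extrinsic parameters.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None)
print("RMS reprojection error:", rms)
print("Camera matrix:\n", K)
```

The recovered focal length and distortion coefficient can then be compared with the Halcon results in Table 1.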

Finally, the calibration results can be used for image rectification and measurement applications. Figure 4 shows an application example which measures the length of scratches in world coordinates in a perspectively distorted image. Figure 4(a) shows the image with a long surface scratch. The rectified image, obtained with the camera parameters calculated above, is shown in Figure 4(b). The parameters (sx, sy) define the conversion from pixels to meters. Figure 4(c) shows that the length of the long scratch can be calculated in world coordinates.

Figure 4. (a) Distorted image with scratch; (b) Rectified image; (c) Measurement result.
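In its simplest form, the measurement step reduces to removing the lens distortion and converting a pixel distance into a metric one; the full perspective rectification onto the measurement plane, as done in the course, additionally uses the exterior orientation. The sketch below illustrates the simplified case with placeholder values: the camera matrix, distortion coefficients, scale, and scratch endpoints are made up for illustration and are not the values from the experiment.

```python
# Illustrative sketch of measuring a scratch length in world units after
# calibration: undistort the image, pick the scratch endpoints, convert pixels
# to millimetres with the calibrated scale of the measurement plane.
import cv2
import numpy as np

K = np.array([[2100.0,    0.0, 640.0],          # camera matrix (placeholder values)
              [   0.0, 2100.0, 512.0],
              [   0.0,    0.0,   1.0]])
dist = np.array([-0.12, 0.05, 0.0, 0.0, 0.0])   # distortion coefficients (placeholder)

image = cv2.imread("scratch.png", cv2.IMREAD_GRAYSCALE)
rectified = cv2.undistort(image, K, dist)       # remove the lens distortion

# Endpoints of the scratch picked in the rectified image (pixel coordinates).
p1 = np.array([412.0, 301.0])
p2 = np.array([948.0, 355.0])
length_px = np.linalg.norm(p2 - p1)

mm_per_pixel = 0.117                            # scale of the measurement plane (placeholder)
print(f"Scratch length: {length_px * mm_per_pixel:.1f} mm")
```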

4.2 THE THEORETICAL PART

During the application, all important theoretical basics of the camera calibration and rectification technology are explained. The spectral composition of the light and of the object should be considered so that the important features of the object are emphasized. The types of light sources should also be considered: LEDs feature longevity, fast reaction times, and low power consumption, so LEDs are the primary illumination technology in our application study. The camera models for area scan cameras are mainly explained during the calibration procedure, after which we can find the correspondence between the world points and their projection points in the image. There are a few issues the students have to deal with during every calibration process. For example, to obtain accurate camera parameters, it is important that the calibration target covers the entire field of view and that the images cover the range of exterior orientations as fully as possible.

5. CONCLUSION

With our composite form of theoretical and practical lectures, we achieved very positive results. During the vision measurement workshop, the students develop a sense of active learning for the mathematical camera model in addition to the theoretical scientific basics. To deepen the important theoretical basics, practical exercises are needed. During the camera calibration application, the students gain a deep understanding of the mathematical models of area scan cameras and build a practical vision measurement process by themselves; they have positive experiences, and their theoretical scientific backgrounds are deepened through the practical exercises. However, it is still necessary to improve the learning infrastructure further, in both quality and quantity.

6. ACKNOWLEDGEMENTS

This work was supported by the Teaching Research and Reform Project of Hunan University of Arts and Science (JGZD1611), the Teaching Research and Reform Project of Hunan Provincial Education Department (247_396(2014)), Hunan Provincial Natural Science Foundation (2016JJ5002), Hunan Education Science “12th Five-Year” Plan Project (XJK014QGD009), and the School-enterprise Cooperated Innovation and Entrepreneurship Education Base of Hunan Province Optoelectronic Information Technology (394_2016), China.

REFERENCES

[1] Li, X.Q., “Using Halcon in practical teaching in Machine Vision Course,” Science & Technology Information, 30, 172–173 (2014).

[2] Xiao, K., Li, X.Q., and Qiao, N.S., “Camera Calibration based on Halcon,” Science & Technology Information, 10, 212–214 (2016).

[3] Steger, C., Ulrich, M., and Wiedemann, C., “Machine Vision Algorithms and Applications,” 22–47, Wiley-VCH (2008).
© 2017 Society of Photo-Optical Instrumentation Engineers (SPIE).
Keywords: Cameras, Calibration, Distortion, Machine vision, Mathematical modeling, Imaging systems
