Specular and mirror-like surfaces are difficult to reconstruct because they reflect light from the surrounding environment. Obtaining a fast and accurate reconstruction of specular objects is still a challenge in many application fields. 3D acquisition by deflectometry estimates the 3D profile of a specular surface by analyzing the image of a light source or a pattern captured by a camera after reflection on the surface. These techniques require knowledge of the location and orientation of the camera and of the orientation of the light source. While camera calibration is a widely known, well-controlled process that guarantees good accuracy, estimating the orientation of the light source remains an issue. We propose to overcome it by using a light source whose color changes with orientation. We implement, in simulation with the 3D computer graphics software Blender, a vision system that recognizes the light source's position from its color; we define a calibration technique; and we suggest possible light sources for transferring our simulated vision system to reality. In addition, we explore the potential benefit of light fields for modeling such a system.
In this paper, we address a quality control problem in the context of industrial serial production of lower plates (wheel suspensions) for the automotive industry. These frame parts are produced by a 2000-ton stamping machine that can reach 1800 parts per hour. The quality of these parts is assessed by a visual quality control operation. This operation is time-consuming. Moreover, many factors can affect its performance, such as the attention of the operators in charge or an overly short inspection time, and undetected defects lead to high supplementary costs. To address this issue and automate the inspection, a system based on a vision setup coupled with a pre-trained convolutional neural network (Mask R-CNN) has been designed and implemented. In addition, an artificial enlargement of the reference image base is proposed to improve the robustness of the identification and to reduce the sensitivity of the results to potential imaging artefacts caused by uncontrolled environmental factors such as overexposure, blur, shadows, or oil fog.
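The artificial enlargement of the reference image base can be sketched as simple photometric perturbations of each reference image. The functions below are an illustrative numpy-only sketch of this idea, not the augmentation pipeline actually used in the paper; the function names and parameter values are assumptions.

```python
import numpy as np

def simulate_overexposure(img, gain=1.8):
    """Scale intensities and clip to [0, 255] to mimic overexposure."""
    return np.clip(img.astype(np.float64) * gain, 0, 255).astype(np.uint8)

def simulate_blur(img, k=3):
    """Apply a simple k x k box blur to mimic defocus or oil-fog haze."""
    pad = k // 2
    padded = np.pad(img.astype(np.float64), pad, mode="edge")
    out = np.zeros(img.shape, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return (out / (k * k)).astype(np.uint8)

# Enlarge a reference base by generating perturbed copies of each image.
base = [np.random.randint(0, 256, (64, 64), dtype=np.uint8)]
augmented = [f(img) for img in base for f in (simulate_overexposure, simulate_blur)]
```

In practice, such perturbed copies would be added alongside geometric transforms (rotations, crops) before fine-tuning the network.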
This paper presents a novel scheme for automatic and intelligent 3D digitization using robotic cells. The advantage of our procedure is that it is generic: it is not tied to a specific scanning technology, nor does it depend on the methods used to perform the tasks associated with each elementary process. The comparison between manual and automatic scanning of complex objects shows that our digitization strategy is very efficient and faster than trained experts. The 3D models of the different objects are obtained with a strongly reduced number of acquisitions while moving the ranging device efficiently.
A nonparametric method to define a pixel neighborhood in catadioptric images is presented. The method is based on an accurate modeling of the mirror shape by means of polarization imaging. Unlike most processing methods in the literature, this method is nonparametric and respects the catadioptric image's anamorphosis. The neighborhood is derived directly from the two polarization parameters: the angle and the degree of polarization. Regardless of the shape of the catadioptric sensor's mirror (including noncentral configurations), image processing techniques such as image derivation, edge detection, interest point detection, and image matching can be performed efficiently.
Fashion and design greatly influence the conception of manufactured products, which now exhibit complex forms and shapes. Two-dimensional quality control procedures (e.g., shape, textures, colors, and 2D geometry) are progressively being replaced by 3D inspection methods (e.g., 3D geometry, colors, and texture on the 3D shape), therefore requiring a digitization of the object surface. Three-dimensional surface acquisition has been studied to a large extent, and a significant number of techniques for acquiring 3D shapes have been proposed, leading to a wide range of commercial solutions on the market. These systems cover scales ranging from micro-scale objects (e.g., shape-from-focus and shape-from-defocus techniques) to objects several meters in size (time-of-flight techniques). Nevertheless, such systems still encounter difficulties when dealing with non-diffuse (non-Lambertian) surfaces, as is the case for transparent, semi-transparent, or highly reflective materials (e.g., glass, crystals, plastics, and shiny metals). We review and compare various systems and approaches recently developed for the 3D digitization of transparent objects.
A novel reconstruction scheme using ultraviolet (UV) structured light for the three-dimensional (3D) measurement of transparent objects was reported in our previous works. We address two main approaches behind a low-cost system within two suggested configurations (according to the structured light generation source: UV spot and UV line) in order to improve the accuracy of the results, which gives our system its adaptability for industrial applications. The first approach consists of determining the optimal configuration of the practical setup while controlling the modeling error (related to the 3D reconstruction approach) through an additional validation method in the reconstruction process. The second approach deals with a method that tracks the fluorescence points in the presence of the noise inherent to our acquisition system; the specificity of this tracking method relies on spectroscopic analysis. Experimental investigations have also been carried out to characterize the application. Some digitized objects are presented, with an accuracy unmatched by previously reported works on the digitization of transparent objects without any prior surface preparation.
KEYWORDS: 3D modeling, Scanners, Solid modeling, Data acquisition, 3D acquisition, Sensors, Data modeling, Inspection, Computer aided design, Visibility
The goal of this work is to develop a complete and automatic scanning system with minimum prior information. We aim to establish a methodology for the automation of the 3D digitization process. The paper presents a method based on the evolution of the bounding box of the object during the acquisition. The registration of the data is improved through the modeling of the positioning system. The obtained models are analyzed and inspected in order to evaluate the robustness of our method. Tests with real objects have been performed, and digitization results are provided.
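A bounding-box-based completeness criterion of this kind can be sketched very simply: stop acquiring new views once an additional scan no longer grows the axis-aligned bounding box of the accumulated point cloud. This is an illustrative sketch under that assumption, not the paper's actual criterion, and the tolerance value is arbitrary.

```python
import numpy as np

def bbox_volume(points):
    """Axis-aligned bounding-box volume of an (N, 3) point cloud."""
    extents = points.max(axis=0) - points.min(axis=0)
    return float(np.prod(extents))

def scanning_complete(prev_points, new_points, tol=0.01):
    """Stop when a new acquisition grows the bounding box by less than tol
    (relative volume change)."""
    v_prev = bbox_volume(prev_points)
    v_new = bbox_volume(np.vstack([prev_points, new_points]))
    return (v_new - v_prev) / v_prev < tol
```

A view whose points already lie inside the current box adds no volume and triggers the stop condition; a view revealing an unseen side of the object does not.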
We present an efficient measure of overlap between two co-linear segments which considerably decreases the overall computational time of a segment-based motion estimation and reconstruction algorithm existing in the literature. We also discuss the special cases where sparse sampling of the motion space for the initialization of the algorithm does not yield a good solution, and suggest using dense sampling instead to overcome the problem. Finally, we demonstrate our work on a real data set.
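Since the two segments are co-linear, their overlap reduces to a 1D interval intersection along the common line direction. The sketch below shows one straightforward way to compute such a measure; it is illustrative and not necessarily the measure proposed in the paper.

```python
import numpy as np

def overlap_length(seg_a, seg_b):
    """Overlap between two co-linear 3D segments, measured along their line.

    Each segment is a (2, 3) array of endpoints. The segments are assumed
    to lie on the same line, so a 1D interval intersection suffices.
    """
    a0, a1 = np.asarray(seg_a, dtype=float)
    b0, b1 = np.asarray(seg_b, dtype=float)
    d = a1 - a0
    d /= np.linalg.norm(d)                  # common line direction
    ta = sorted([0.0, np.dot(a1 - a0, d)])  # interval of segment A
    tb = sorted([np.dot(b0 - a0, d), np.dot(b1 - a0, d)])
    return max(0.0, min(ta[1], tb[1]) - max(ta[0], tb[0]))
```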
The task of recovering the camera motion relative to the environment (ego-motion estimation) is fundamental to many computer vision applications, and the field has witnessed a wide range of approaches to this problem. Usual approaches are based on point or line correspondences, optical flow, or the so-called direct methods. We present an algorithm for determining 3D motion and structure from one line correspondence between two perspective images. Classical methods that use supporting lines need at least three images. In this work, however, we show that a single supporting line correspondence belonging to a planar surface in space is enough to estimate the camera ego-translation, provided the texture on the surface close to the line is sufficiently discriminative. Only one line correspondence is required, and it is not necessary that the two matched line segments contain the projection of a common part of the corresponding line segment in space. We first recover the camera rotation by matching vanishing points using methods existing in the literature, and then recover the camera translation. Experimental results on both synthetic and real images demonstrate the effectiveness of the proposed method.
Classical 3D inspection systems require users to coat transparent objects before measurement. Experimental non-contact measurement techniques suggested in the literature do not handle inter-reflections. The aim of our work is to develop a non-contact 3D measurement system for transparent objects using a polarimetric imaging method in the far-infrared range. The classical approach relies on the orthographic model produced by a telecentric lens in the practical setup. However, telecentric lenses operating in the far-infrared range are not available. Therefore, we adapt the pinhole model corresponding to non-telecentric lenses for shape from polarization. In this paper, we introduce a 3D reconstruction method that exploits polarimetric imaging with a perspective model. We also propose two mathematical approaches to reduce the reconstruction error: a data analysis method to better estimate the Stokes parameters, and a validation method applied after Stokes parameter estimation. These techniques are applicable irrespective of the chosen model and of the linear-system solver.
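For context, the linear Stokes parameters (S0, S1, S2) are commonly estimated from intensities measured behind a linear polarizer at several angles a, using the relation I(a) = ½(S0 + S1·cos 2a + S2·sin 2a), which is linear in the unknowns and can be solved by least squares. The sketch below illustrates this standard estimation step; it is not the paper's improved data analysis method.

```python
import numpy as np

def estimate_stokes(angles, intensities):
    """Least-squares fit of (S0, S1, S2) from intensities observed behind a
    linear polarizer: I = 0.5 * (S0 + S1*cos(2a) + S2*sin(2a))."""
    a = np.asarray(angles, dtype=float)
    A = 0.5 * np.column_stack([np.ones_like(a), np.cos(2 * a), np.sin(2 * a)])
    s, *_ = np.linalg.lstsq(A, np.asarray(intensities, dtype=float), rcond=None)
    return s  # [S0, S1, S2]

def dolp(s):
    """Degree of linear polarization."""
    return np.hypot(s[1], s[2]) / s[0]

def aolp(s):
    """Angle of linear polarization."""
    return 0.5 * np.arctan2(s[2], s[1])
```

The degree and angle of polarization derived from the fitted parameters are the quantities that shape-from-polarization methods relate to the surface normal.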
The ability of a robot to localise itself and simultaneously build a map of its environment (Simultaneous Localisation and Mapping, or SLAM) is a fundamental characteristic required for autonomous operation of the robot. Vision sensors are very attractive for SLAM because of their rich sensory output and cost effectiveness. Different issues are involved in the problem of vision-based SLAM, and many different approaches exist to solve them. This paper gives a classification of state-of-the-art vision-based SLAM techniques in terms of (i) imaging systems used for performing SLAM, which include single cameras, stereo pairs, multiple camera rigs, and catadioptric sensors, (ii) features extracted from the environment in order to perform SLAM, which include point features and line/edge features, (iii) initialisation of landmarks, which can be either delayed or undelayed, (iv) SLAM techniques used, which include Extended Kalman Filtering, Particle Filtering, biologically inspired techniques like RatSLAM, and other techniques like Local Bundle Adjustment, and (v) use of wheel odometry information. The paper also presents the implementation and analysis of stereo-pair-based EKF SLAM on synthetic data. Results show that the technique works successfully in the presence of considerable sensor noise. We believe that the state of the art presented in this paper can serve as a basis for future research in the area of vision-based SLAM, allowing further work in the area to be carried out in an efficient and application-specific way.
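At the heart of the EKF SLAM variant mentioned above is the standard Kalman measurement update applied to the joint robot-plus-landmark state. The generic update step can be sketched as follows; this is an illustrative textbook formulation, not the specific implementation analysed in the paper.

```python
import numpy as np

def ekf_update(x, P, z, h, H, R):
    """One EKF measurement update.

    x : state mean, P : state covariance, z : measurement,
    h : measurement function h(x), H : its Jacobian at x,
    R : measurement noise covariance.
    """
    y = z - h(x)                        # innovation
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```

In SLAM, x stacks the camera pose and all landmark positions, and each stereo observation of a landmark supplies one such update.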
We present a new efficient method for the calibration of catadioptric sensors. The method is based on an accurate measurement of the three-dimensional parameters of the mirror through polarization imaging. By inserting a rotating polarizer between the camera and the mirror, the system is automatically calibrated without any calibration patterns. Moreover, this method permits most of the constraints related to the calibration of catadioptric systems to be relaxed. We show that, contrary to our system, the traditional methods of calibration are very sensitive to misalignment between the camera axis and the symmetry axis of the mirror. From the measurement of the three-dimensional parameters, we apply the generic calibration concept to calibrate the catadioptric sensor. We also show the influence of disturbed measurements of the parameters on the reconstruction of a synthetic scene. Finally, experiments prove the validity of the method, with some preliminary results on three-dimensional reconstruction.
Augmented reality can improve color segmentation on the human body or on precious artifacts that must not be touched. We propose a technique to project a synthesized texture onto a real object without contact; it can be used in medical or archaeological applications. By projecting a suitable set of light patterns onto the surface of a 3D real object and capturing images with a camera, a large number of correspondences can be found and the 3D points can be reconstructed. We aim to determine these points of correspondence between the cameras and the projector from a scene without explicit points and normals. We then project an adjusted texture onto the real object's surface. We propose a global and automatic method to virtually texture a 3D real object.
A new efficient method of calibration for catadioptric sensors is presented in this paper. It is based on an accurate measurement of the three-dimensional parameters of the mirror by means of polarization imaging. By inserting a rotating polarizer between the camera and the mirror, the system is automatically calibrated without any calibration patterns. Moreover, it relaxes most of the constraints related to the calibration of catadioptric systems. We show that, contrary to our system, the traditional methods of calibration are very sensitive to misalignment between the camera axis and the symmetry axis of the mirror. From the measurement of the three-dimensional parameters, we apply the generic calibration concept to calibrate the catadioptric sensor. The influence of disturbed measurements of the parameters on the reconstruction of a synthetic scene is also presented. Finally, experiments prove the validity of the method, with some preliminary results on three-dimensional reconstruction.
Augmented reality can improve color segmentation on the human body or on precious artefacts that must not be touched. We propose a technique based on structured light to project texture onto a real object without any contact with it. Such techniques can be applied to medical applications, archaeology, industrial inspection, and augmented prototyping. Coded structured light is an optical technique based on active stereovision which allows shape acquisition. By projecting a light pattern onto the surface of an object and capturing images with a camera, a large number of correspondences can be found and 3D points can be reconstructed by means of triangulation.
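The triangulation step mentioned above is commonly implemented as a linear (DLT) solve: each camera-projector correspondence, together with the two 3x4 projection matrices, yields a small homogeneous system whose least-squares solution is the 3D point. This is a generic sketch of that standard technique, not the paper's specific implementation.

```python
import numpy as np

def triangulate(P_cam, P_proj, x_cam, x_proj):
    """Linear (DLT) triangulation of one 3D point from a camera pixel and
    the corresponding projector pixel, given both 3x4 projection matrices
    (the projector is modeled as a camera acting in reverse)."""
    A = np.vstack([
        x_cam[0] * P_cam[2] - P_cam[0],
        x_cam[1] * P_cam[2] - P_cam[1],
        x_proj[0] * P_proj[2] - P_proj[0],
        x_proj[1] * P_proj[2] - P_proj[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # null vector of A (homogeneous 3D point)
    return X[:3] / X[3]
```

Running this over every decoded pattern correspondence produces the dense point cloud of the scanned surface.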
Surgical operations of the shoulder joint are guided by various principles: osteosynthesis in the case of fracture, osteotomy to correct a deformation or modify the functioning of the joint, or implantation of an articular prosthesis. At the end of the twentieth century, many innovations were achieved in the domains of biomechanics and orthopedic surgery. Nevertheless, theoretical and practical problems may appear during the operation (the surgeon's visual field is very limited; the quality and shape of the bone vary from patient to patient). Biomechanical criteria of success are defined for each intervention. For example, the successful installation of a prosthetic implant is assessed according to the degree of mobility of the new articulation, the movements of this articulation being a function of the shape of the prosthesis and of its position on its osseous support. It is not always easy to optimize the preparation of the surgical operation for every patient, and a preliminary computer simulation would help the surgeon in the choices made while preparing the intervention. Virtual reality techniques allow a high degree of immersion and make it possible to envisage a navigation device for use during the operation.
Nowadays, visual inspection is very important for quality control in many industrial applications. However, the complexity of most 3D objects constrains the registration of range images: a complete surface is required to compare the acquired surface to the model. Range finders are widely used to digitize free-form objects at high resolution. However, a single view is not enough to reconstruct the whole surface because of occlusions, shadows, etc. In these situations, the motion between the reconstructed partial views is required to integrate all surfaces into a single model. The use of positioning systems is not always available or adequate, mainly because of the size of the objects or because their precise mechanics suffer from the vibrations present in industrial environments. To solve this problem, a 3D hand-held sensor is developed to reconstruct 3D objects so that they can be compared with the original model.
The modelling of the shoulder joint is an important step in setting up a computer-aided surgery system for shoulder prosthesis placement. Our approach mainly concerns the bone structures of the scapulo-humeral joint. Our goal is to develop a tool that allows the surgeon to extract morphological data from medical images in order to interpret the biomechanical behaviour of a prosthesised shoulder for preoperative and peroperative virtual surgery. To provide a light and easy-to-handle representation of the shoulder, a geometrical model composed of quadrics, planes, and other simple forms is proposed.
Industrial reproduction techniques, such as stereography or lithography, lack texture information, as they deal only with 3D reconstruction. In this paper, we provide a new technique to map texture onto real 3D objects by synthesizing a novel view from two camera images into the projector frame, the projector being considered as a camera acting in reverse. No prior information on the pose or the shape of the 3D object is necessary; however, hard calibration of the complete system is needed.
A complete and practical range image sensor development is presented in this paper, from the mathematical modeling to the shape reconstruction. This scanner is intended for integration into a larger collaborative project whose final goal is to provide a framework allowing historians to easily compare ancient wooden items. Motivations and expected results are clearly stated with respect to financial and ease-of-use constraints. To alleviate the calibration process, a new calibration pattern is proposed that allows calibration of both the camera and the projector; the method is validated with experimental results. Experimental results are given for the calibration process and the range image acquisition. They have been obtained on both real and synthetic data, which allows us to comment on quantitative as well as qualitative performance. The results are quite encouraging and satisfactory.
In computer vision, many applications are based on 3D vision: object modeling for reverse engineering in manufacturing, map building, industrial inspection, and so on. However, the surface acquired by most sensors represents only a part of the object. To solve this problem, several images of the same object are acquired from different positions, and all views are then transformed into the same coordinate system. This process is known as range image registration, and it is the goal of this work.
This work surveys the most common registration methods, which are used to reconstruct complete 3D models of objects. Moreover, a classification of the registration methods, based on the accuracy of their results, is presented. In this survey the principal methods are classified and commented on. To compare them, experiments are performed using synthetic and real data. The quality of some results indicates that the registration can be used to compare a real object with its 3D model, for instance in a manufacturing process to inspect the quality of the produced objects.
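Most fine registration methods covered by such surveys build on the iterative closest point (ICP) scheme, whose core step is the closed-form least-squares alignment of two matched point sets (the Kabsch/SVD solution). The sketch below shows that core step; it is a generic illustration, not a specific method from the survey.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t such that dst ~ src @ R.T + t,
    for matched (N, 3) point sets (Kabsch/SVD); the core of one ICP iteration."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)      # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:           # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t
```

A full ICP loop alternates this solve with re-estimating the closest-point correspondences until the alignment error stops decreasing.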
This paper proposes a comparative survey of vision techniques based on invisible structured light. We classify them into three distinct families: InfraRed Structured Light (IRSL), Imperceptible Structured Light (ISL), and Filtered Structured Light (FSL). For each family, the definition, the minimal configuration, and the main applications found in the literature are given. We then compare them with respect to several criteria: required equipment, light pattern coding, color analysis, texture analysis, motion analysis, security, and use in non-controlled environments. The description of the IRSL, ISL, and FSL sensors sums up these techniques; the comparison evaluates the performance and efficiency of each of them. We think this study could be useful to researchers looking for a compromise between stereovision and structured light vision, combining the extensive processing tools of the former with the point-matching reliability and processing simplicity of the latter.
This paper presents various applications of machine vision systems used at four strategic points in a company manufacturing pipes for the nuclear industry. For each system, the vision problem is presented together with the industrial constraints; the proposed solution is then detailed (acquisition conditions, image processing algorithms, etc.); finally, the implementation on the industrial line is described and the results are discussed. The first system, used in the R&D department, controls tube deformation under high-pressure and high-temperature conditions. The second vision system deals with the inspection of the outer and inner surfaces of the tubes, detecting scratches as well as oxidation marks. After lamination, tubes are heated to release the mechanical stresses introduced during the lamination process; during this heating, oxidation may occur, and a machine vision system based on color analysis was developed to measure the oxidation time. Once manufactured, tubes are thoroughly cleaned by air-propelled plugs and packaged in boxes; a system which detects any missing or occluded tubes was realized. The results show that the nuclear industry can draw important benefits from machine vision systems. The four validated and implemented applications give satisfactory results and are currently used in the factory.