In clinical environments and pharmaceutical production, early detection of microbial contamination is a key quality-control requirement. Contamination is conventionally determined from cultures in Petri dishes, in which colonies of microorganisms are visually detected and counted. These manual methods, however, are slow and tedious. To overcome these shortcomings, a method based on image analysis and deep learning is proposed to improve both the detection of microorganism colonies in Petri dishes and the quality of the quantitative determination of contamination levels.
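The abstract does not detail the deep network, so the following is only a minimal classical sketch of the detection-and-counting task it addresses, using Otsu thresholding and connected components from OpenCV; `petri_dish.png` and the area bounds are hypothetical, and this baseline is what the learned detector would be expected to outperform.

```python
# Minimal classical baseline for colony counting in a Petri-dish image.
# This is NOT the paper's deep-learning method, only an illustration of the task.
import cv2

def count_colonies(image_path, min_area=20, max_area=5000):
    """Count roughly colony-sized blobs against the agar background."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    # Otsu thresholding separates colonies from the background automatically.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Connected components assign one label per candidate colony.
    n_labels, _, stats, _ = cv2.connectedComponentsWithStats(binary)
    areas = stats[1:, cv2.CC_STAT_AREA]  # skip label 0 (background)
    return sum(1 for a in areas if min_area <= a <= max_area)

print(count_colonies("petri_dish.png"))  # hypothetical input image
```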
3D modeling of scene contents is of increasing importance for many computer-vision-based applications. Industrial applications in particular require efficient tools for computing this 3D information. Stereo vision is a well-established technique for recovering the 3D outlines of imaged objects from the corresponding 2D images; however, since it yields only outlines, it provides a sparse and partial description of the scene contents. Structured-light reconstruction techniques, on the other hand, can often compute the 3D surfaces of imaged objects with high accuracy, but the resulting active range data poorly characterize the object edges. To benefit from the strengths of both acquisition techniques, we introduce in this paper promising approaches for computing a complete 3D reconstruction through the cooperation of two complementary acquisition and processing techniques, in our case stereoscopic and structured-light-based methods, which provide two 3D data sets describing, respectively, the outlines and the surfaces of the imaged objects. Accordingly, we present the principles of three fusion techniques and compare them using evaluation criteria related to the nature of the workpiece and the type of application addressed. The proposed fusion methods rely on geometric characteristics of the workpiece, which favors the quality of the registration. The results obtained demonstrate that the developed approaches are well suited to the 3D modeling of manufactured parts including free-form surfaces and, consequently, to quality-control applications using these 3D reconstructions.
KEYWORDS: Light sources, 3D modeling, Image processing, 3D image processing, 3D image reconstruction, Image quality, Sensors, Control systems, 3D acquisition, Cameras
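The three fusion techniques themselves are not reproduced here, but they share a core step: rigidly aligning the stereo outline points with the structured-light surface points. The sketch below shows that step via the standard Kabsch/Procrustes solution, assuming point correspondences between the two data sets are already established (an assumption; the paper derives them from geometric characteristics of the workpiece).

```python
# Rigid registration of two corresponding 3-D point sets (Kabsch method).
import numpy as np

def rigid_align(P, Q):
    """Return rotation R and translation t minimizing sum ||R @ p + t - q||^2,
    where rows of P (source, e.g. stereo outline points) correspond to rows
    of Q (target, e.g. structured-light surface points)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                 # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # Correct an improper rotation (reflection) if det would be -1.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t

# Usage: aligned_outline = (R @ outline_pts.T).T + t
```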
The accuracy of 3D vision-based reconstruction tasks depends both on the complexity of the analyzed objects and on good viewing/illumination conditions, which ensure image quality and consequently minimize measurement errors after processing of the acquired images. In this contribution, as a complement to an autonomous cognitive vision system that automates 3D reconstruction and uses Situation Graph Trees (SGTs) as a planning/control tool, these graphs are optimized in two steps. The first (off-line) step addresses the placement of the lighting sources, with the aim of finding positions that minimize processing errors during the subsequent reconstruction steps. In the second step, on-line application of the SGT-based control module focuses on the adjustment of the illumination conditions (e.g., intensity), possibly leading to process re-planning, and further enabling optimal extraction of the contour data required for 3D reconstruction. The whole illumination-optimization procedure has been fully automated and included in the dynamic (re-)planning tool for vision-based reconstruction tasks, e.g., in view of quality-control applications.
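As a rough illustration of the on-line step only, the sketch below abstracts the SGT control module to a measure-adjust loop on source intensity; `capture`, `set_intensity`, the contrast score, and all thresholds are placeholders of my own, not the system's API.

```python
# Hypothetical sketch of on-line illumination adjustment: tune intensity
# until an edge-contrast proxy is good enough for contour extraction.
import numpy as np

def edge_contrast(img):
    """Mean gradient magnitude, normalized, as a crude contour-quality proxy."""
    gy, gx = np.gradient(img.astype(float))
    return float(np.hypot(gx, gy).mean() / 255.0)

def adjust_intensity(capture, set_intensity, target=0.6, tol=0.05,
                     level=0.5, gain=0.4, max_iter=10):
    """Proportional adjustment of the source intensity in [0, 1]."""
    for _ in range(max_iter):
        set_intensity(level)
        score = edge_contrast(capture())
        if abs(score - target) < tol:
            return level                # conditions acceptable: proceed
        level = float(np.clip(level + gain * (target - score), 0.0, 1.0))
    return None                         # failure would trigger re-planning
```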
This paper presents an approach for error estimation in the various steps of an automated 3D vision-based reconstruction procedure for manufactured workpieces. The process is based on a priori planning of the task and built around a cognitive intelligent sensory system using so-called Situation Graph Trees (SGTs) as a planning tool. Such an automated quality-control system requires the coordination of a set of complex processes that sequentially perform data acquisition, its quantitative evaluation, and the comparison with a reference model (e.g., a CAD object model) in order to evaluate the object quantitatively. To ensure efficient quality control, the aim is to be able to state whether or not the reconstruction results fulfill the tolerance rules. Thus, the goal is to evaluate the error of each step of the stereo-vision-based 3D reconstruction independently (e.g., calibration, contour segmentation, matching, and reconstruction) and then to estimate the error of the whole system. In this contribution, we particularly analyze the segmentation error due to localization errors of extracted edge points assumed to belong to the lines and curves composing the outline of the workpiece under evaluation. The fitting parameters describing these geometric features are used as a quality measure to determine confidence intervals and, finally, to estimate the segmentation errors. These errors are then propagated through the whole reconstruction procedure, making it possible to evaluate their effect on the final 3D reconstruction result, specifically on position uncertainties. Lastly, the analysis of these error estimates enables an evaluation of the quality of the 3D reconstruction, as illustrated by the experimental results shown.
KEYWORDS: 3D modeling, Light sources and illumination, Image segmentation, 3D image reconstruction, 3D image processing, Computer aided design, 3D acquisition, Solid modeling, Cognitive modeling, Visual process modeling
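To make the confidence-interval step concrete, here is a minimal worked example (not the paper's exact formulation) for edge points fitted to a line y = a·x + b: the least-squares residual variance yields a covariance on (a, b), from which confidence intervals follow; first-order propagation of such covariances through the reconstruction chain would then give the position uncertainties.

```python
# Least-squares line fit with parameter covariance, as used for
# confidence intervals on outline segments (illustrative only).
import numpy as np

def line_fit_with_covariance(x, y):
    """Fit y = a*x + b; return (a, b) and their 2x2 covariance matrix."""
    A = np.column_stack([x, np.ones_like(x)])      # design matrix
    params, residuals, *_ = np.linalg.lstsq(A, y, rcond=None)
    sigma2 = residuals[0] / (len(x) - 2)           # residual variance
    cov = sigma2 * np.linalg.inv(A.T @ A)          # OLS parameter covariance
    return params, cov

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(0, 0.1, 50)         # noisy edge points
(a, b), cov = line_fit_with_covariance(x, y)
ci = 1.96 * np.sqrt(np.diag(cov))                  # 95% half-widths on (a, b)
```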
This paper presents an original approach for the optimal 3D reconstruction of manufactured workpieces based on a priori planning of the task, enhanced on-line through dynamic adjustment of the lighting conditions, and built around a cognitive intelligent sensory system using so-called Situation Graph Trees. The system explicitly takes into account structural knowledge related to the image acquisition conditions, the type of illumination sources, the contents of the scene (e.g., CAD models and tolerance information), etc. The approach relies on two steps. First, a so-called initialization phase, leading to the a priori task plan, collects this structural knowledge. This knowledge is conveniently encoded, as a sub-part, in the Situation Graph Tree that builds the backbone of the planning system and exhaustively specifies the behavior of the application. Second, the image is iteratively evaluated under the control of this Situation Graph Tree. The information describing the quality of the piece to analyze is thus extracted and further exploited for, e.g., inspection tasks. Lastly, the approach enables dynamic adjustment of the Situation Graph Tree, allowing the system to adapt itself to the actual run-time conditions of the application and thus providing it with a self-learning capability.
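The internal structure of the Situation Graph Trees is not given in the abstract; the sketch below is therefore only a simplified, assumed reading in which each situation pairs a state predicate with an action, and control follows the first successor whose predicate holds.

```python
# Simplified situation-graph-style control node (assumed structure, not the
# authors' formalism): predicate + action + ordered successors.
from dataclasses import dataclass, field
from typing import Callable, List, Optional

@dataclass
class Situation:
    name: str
    holds: Callable[[dict], bool]        # predicate on the current scene state
    act: Callable[[dict], None]          # action executed when the situation holds
    successors: List["Situation"] = field(default_factory=list)

def traverse(root: Situation, state: dict) -> Optional[str]:
    """Run each situation whose predicate holds; return the last one reached."""
    node, last = root, None
    while node is not None and node.holds(state):
        node.act(state)
        last = node.name
        node = next((s for s in node.successors if s.holds(state)), None)
    return last
```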
Industrial applications of computer vision, such as dimensional inspection, require, among other things, automated procedures for the analysis of gray-level images in order to derive application-dependent 3-D descriptions of their contents. In view of the quantitative 3-D reconstruction of an imaged object, and to fully automate the evaluation process, we describe two segmentation and interpretation methods for the automated delineation of regions of interest that belong to the object and are associated with free-form surfaces. The selection of the procedure to apply, either a mean-shift-based or a level-set-based analysis, depends essentially on the a priori information available on the localization and shape of the object in the scene. The two approaches are part of a 3-D vision-based on-line inspection system. Results on images of manufactured parts acquired under realistic conditions illustrate the use of the two approaches.
KEYWORDS: Image segmentation, Image processing, Inspection, 3D modeling, 3D image processing, Manufacturing, Binary data, 3D acquisition, 3D metrology, Structured light
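As a hedged illustration of the mean-shift branch only (the paper's exact pipeline and the level-set branch are not reproduced), region delineation can be approximated with OpenCV's pyramid mean-shift filtering followed by thresholding and connected components; the radii below are arbitrary example values.

```python
# Mean-shift-based region delineation sketch (illustrative, not the
# authors' implementation).
import cv2

def mean_shift_regions(image_path, spatial_radius=15, color_radius=30):
    img = cv2.imread(image_path)                       # 8-bit BGR image
    # Mean-shift filtering flattens color/intensity within homogeneous regions.
    filtered = cv2.pyrMeanShiftFiltering(img, spatial_radius, color_radius)
    gray = cv2.cvtColor(filtered, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    n, labels = cv2.connectedComponents(binary)
    return n - 1, labels                               # region count, label map
```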
In this paper, we describe a segmentation and interpretation method for the automated delineation of regions of interest belonging to an object in gray-level images, in view of the quantitative 3D reconstruction of the imaged object. The proposed approach is part of a three-dimensional vision-based on-line inspection system. Results on images of manufactured parts acquired under realistic acquisition conditions illustrate the approach.
This paper presents an original approach for a vision-based quality-control system built around a cognitive intelligent sensory system. The approach relies on two steps. First, a so-called initialization phase leads to structural knowledge of the image acquisition conditions, the type of illumination sources, etc. Second, the image is iteratively evaluated using this knowledge and complementary information (e.g., CAD models and tolerance information). Finally, the information describing the quality of the piece under evaluation is extracted. A further aim of the approach is to enable building strategies that determine, for instance, the "next best view" required for completing the currently extracted object description, through dynamic adjustment of the knowledge base including this description. Such techniques primarily require investigation of three areas, dealing respectively with intelligent self-reasoning 3D sensors, 3D image processing for accurate reconstruction, and evaluation software for the comparison of image-based measurements with CAD data. However, an essential prior step, dealing with the modeling of lighting effects, is required. As a starting point, we first modeled point light sources. After introducing the objectives and principles of the approach in Sections 1 and 2, we present the implementation and the illumination-modeling approach in Sections 3 and 4. First results illustrating the approach are presented in Section 5. Finally, we conclude with some future directions for improving this approach.
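The point-light-source model itself is not given in the abstract; a standard Lambertian formulation, which the sketch below assumes, combines the inverse-square falloff with the cosine of the angle between the surface normal and the light direction.

```python
# Assumed point-light irradiance model: E = I * max(0, n.l) / d^2.
import numpy as np

def point_source_irradiance(source_pos, source_intensity, surf_point, surf_normal):
    """Irradiance at a surface point from an isotropic point source."""
    v = np.asarray(source_pos, float) - np.asarray(surf_point, float)
    d2 = v @ v                                   # squared distance to the source
    l = v / np.sqrt(d2)                          # unit direction toward the light
    n = np.asarray(surf_normal, float)
    n = n / np.linalg.norm(n)
    return source_intensity * max(0.0, float(n @ l)) / d2

# Example: source 1 unit above the origin, surface facing straight up.
E = point_source_irradiance([0, 0, 1], 100.0, [0.2, 0.0, 0.0], [0, 0, 1])
```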
This contribution describes a computer-based structured-light imaging system for the automated recovery of quantitative 3D information on sculptured surfaces, intended to handle (industrial) inspection/3D reconstruction tasks. Recovery is based on the evaluation of images of the light pattern induced by projecting a specifically devised parallel grid into the scene. The system has been designed for direct use in industrial environments, e.g., for integration into on-line quality-control systems. Consequently, particular emphasis has been put on fulfilling the requirements usually implied by this type of application, such as simplicity of set-up, real-time operation, high accuracy, and low cost. This paper describes the realized system, including the algorithms specifically designed and implemented for calibration, unambiguous labeling of the imaged fringes, and subpixel evaluation of their locations. The integration of the system into an on-line inspection system for 100% control of manufactured parts illustrates its application. Inspection is based on the comparison of extracted features against a CAD model of the part that includes tolerance information. Currently, a measurement accuracy on the order of 25 micrometers is routinely achieved.
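The system's own calibration model is not reproduced in the abstract; the sketch below shows the common ray-plane triangulation used in fringe-projection setups, under the assumption that each labeled fringe defines a calibrated light plane in the camera frame. The intrinsic matrix and plane parameters are example values.

```python
# Ray-plane triangulation for a labeled fringe (common formulation, assumed).
import numpy as np

def triangulate(pixel, K, plane_n, plane_d):
    """Intersect the camera ray through `pixel` with the fringe's light plane
    n.X + d = 0, everything expressed in the camera frame (center at origin)."""
    u, v = pixel
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])   # back-projected ray direction
    t = -plane_d / (plane_n @ ray)                   # ray parameter at the plane
    return t * ray                                   # 3-D point, camera coordinates

K = np.array([[800.0, 0, 320],                       # example intrinsics
              [0, 800.0, 240],
              [0, 0, 1]])
P = triangulate((350, 260), K, np.array([0.0, 0.7, -0.7]), 0.5)
```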