The HSI colour space offers numerous advantages for the analysis and processing of colour images: the H and S attributes are insensitive to variations in lighting, colour perception is analogous to that of the eye, and chromatic segmentations are robust and well adapted to real images. Meanwhile, cameras and other image sensors deliver standard RGB signals, and RGB-to-HSI conversion at video rate is still far from mastered (incomplete coverage of the space, loss of meaning of the H and S attributes in some zones, aberrations owing to quantization of the signals upstream). To resolve some of these problems, we have chosen an unconventional transformation method that results in a mixed analog-digital electronic structure. All the functions necessary for the conversion (except the trigonometric relations) are performed by wide-band analog operators (linear functions, minimum-of-n-signals detection, normalization). In this way, digitization does not occur until the end of the processing that determines the hue. The results of this method are limited only by the signal-to-noise ratio of the incoming RGB signals. The improvements over fully digital methods are, on the one hand, a considerable enlargement of the chromatic zone of the HSI space; on the other hand, the chromatic/achromatic transition exhibits improved continuity, allowing acceptable colour differentiation even at low levels. Lastly, a final 3 × 8 bit HSI format offers a total number of discernible colours comparable to that obtained by identical quantization of the RGB cube.
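For reference, the trigonometric form of the conversion that the analog front end implements can be sketched in software as follows. This is a minimal pure-Python illustration of the standard RGB-to-HSI relations, not the authors' mixed analog-digital circuit:

```python
import math

def rgb_to_hsi(r, g, b):
    """Textbook RGB -> HSI conversion; r, g, b in [0, 1].
    Returns hue in degrees, saturation in [0, 1], intensity in [0, 1]."""
    i = (r + g + b) / 3.0
    if i == 0:
        return 0.0, 0.0, 0.0              # black: hue and saturation undefined
    s = 1.0 - min(r, g, b) / i            # minimum-of-signals detection
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    if den == 0:
        h = 0.0                           # achromatic axis: hue undefined
    else:
        h = math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
        if b > g:
            h = 360.0 - h
    return h, s, i
```

The loss of meaning of H and S mentioned above corresponds to the `den == 0` and `i == 0` branches: near the achromatic axis, quantized RGB inputs make the hue numerically unstable.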
Experience from actual web inspection systems has shown that image quality is crucial for the performance of the whole inspection system. Even the most sophisticated image processing methods or computer hardware cannot compensate for low-quality images. In this paper, the key characteristics affecting image formation, and test set-ups to measure their numerical values, are introduced. The determination of the imaging properties is based on imaging known test targets and subsequent image analysis. Modulation transfer function, vignetting, noise, and pixel-to-pixel non-uniformity values for typical image forming systems are given. The sensitivity of these characteristics is estimated by measuring their numerical values as a function of key imaging parameters such as f-number or the spectral range of the illumination. The information content of defective surface samples picked from a steel manufacturing process is compared with the results of the test set-up measurements to demonstrate the relation between the quality of image formation in web inspection and the measured imaging characteristics. In conclusion, an image formation evaluation procedure is proposed, and the possibility of optimizing the performance of the image formation using information from this procedure is discussed.
Keywords: visual surface inspection, web inspection, imaging.
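As a simple illustration of one such measurement, the modulation transfer at a single spatial frequency can be estimated by comparing the Michelson contrast of an imaged sinusoidal test target with the known contrast of the target itself. This is a sketch of the general idea, not the paper's specific test set-up:

```python
def michelson_contrast(samples):
    """Michelson contrast (Imax - Imin) / (Imax + Imin) of a signal."""
    lo, hi = min(samples), max(samples)
    return (hi - lo) / (hi + lo)

def mtf_at_frequency(imaged_samples, target_contrast):
    """MTF at the target's spatial frequency: ratio of the contrast
    measured in the image to the known contrast of the test target."""
    return michelson_contrast(imaged_samples) / target_contrast
```

Repeating the measurement with targets of increasing spatial frequency traces out the full MTF curve; vignetting and pixel non-uniformity are measured analogously from flat-field targets.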
Depth calculation of an object allows computer reconstruction of the object's surface in three dimensions. Such information provides human operators with 3D measurements for visualization, diagnostics, and manipulation. It can also provide the necessary coordinates for semi- or fully automated operations. This paper describes a microscopic imaging system with computer vision algorithms that obtains depth information by exploiting the shallow depth of field of microscope lenses.
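The core idea, that only in-focus regions produce high local contrast, can be sketched as follows. This toy version scores whole frames with a Laplacian-based sharpness measure and returns the best focal position; a real depth-from-focus system would apply the same measure per pixel neighbourhood to build a full depth map. Names and the choice of focus measure are illustrative, not taken from the paper:

```python
def focus_measure(img):
    """Sum of squared Laplacian responses: large where the image is sharp."""
    h, w = len(img), len(img[0])
    total = 0.0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y - 1][x] + img[y + 1][x] + img[y][x - 1]
                   + img[y][x + 1] - 4 * img[y][x])
            total += lap * lap
    return total

def depth_from_focus(stack, z_positions):
    """Return the focal position whose frame scores sharpest."""
    scores = [focus_measure(frame) for frame in stack]
    return z_positions[scores.index(max(scores))]
```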
We have developed a system which confirms the identity of an object based on its three-dimensional surface shape. The system's two key elements are a computer routine which manipulates a desired two-dimensional output pattern and a light projector which illuminates test objects with this pattern. The operation of the system is based on prior knowledge of the surface shape for the object of interest, as determined by a previously developed machine vision system. A projection pattern is chosen to be the identifying cue to the observer for that object. For example, a set of parallel lines, a circle, or the word "pass" could all be used to signify that a particular object is being examined. This pattern is then distorted by the computer in a way which is determined by the surface shape data and projection and viewing angles for the system. This new pattern is, in a sense, an encrypted form of the original pattern. Only the original object holds the key to "undistorting" this projection pattern so that it may take on its original form. Any other object placed into the system to be examined only further distorts the pattern. Thus, by examining the projected patterns on objects being inspected, an object of interest can be distinguished from others. The simplicity of this system gives it potential for inspection and security applications in which the key issue may not be the actual surface shape, but rather a quick verification of an object's or person's identity.
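A one-dimensional toy model of this encrypt/undistort idea, assuming each camera pixel x sees the projector sample displaced by a height-induced parallax shift s[x]. All names are hypothetical; the real system works on full 2-D patterns with measured surface-shape data:

```python
def predistort(pattern, shifts):
    """Build the projector pattern q so that a camera pixel x, which picks
    up projector sample x - shifts[x] on the known surface, sees pattern[x]."""
    q = [0] * len(pattern)
    for x, s in enumerate(shifts):
        j = x - s
        if 0 <= j < len(q):
            q[j] = pattern[x]
    return q

def observe(q, shifts):
    """Simulate what the camera sees when q is projected onto a surface
    whose per-pixel parallax shifts are `shifts`."""
    return [q[x - s] if 0 <= x - s < len(q) else 0
            for x, s in enumerate(shifts)]
```

Projected onto the surface that generated `shifts`, the pattern reappears in its original form; projected onto any other surface (e.g. a flat one, all shifts zero), it stays scrambled, which is exactly the verification cue.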
Coherent fiber optic image guides are often utilized in remote imaging systems when space is restricted, articulation is required, or for operation in hazardous environments. Conventionally, the object under test is illuminated with incoherent light, and the resulting image is transmitted to a linear or area sensor array where the image is scanned. A compact and rugged fiber-optic-coupled "full field" laser scanning system could also provide benefits for certain demanding inspection tasks, particularly for acquisition of three-dimensional information or for meeting very wide dynamic range requirements. In such applications the object is scanned and a point sensor or linear array is used for light gathering. In this paper fundamental factors limiting image quality and resolution are identified, with empirical results used to establish a performance benchmark. The image sampling approach is critical for obtaining useful data in a high-speed system. A method for real-time image sampling is proposed which may also prove useful for multi-fiber beam delivery systems. The sampling method also provides several practical benefits required for high-speed operation in potentially rugged and harsh environments.
Keywords: laser scanning, remote sensing, 3D imaging, fiber optics.
Machine vision applications frequently require uniform or near-uniform illumination of the object being imaged. Optimizing the illumination uniformity requires a precise definition of the criterion to be optimized. Previous work has considered the smoothest possible illumination at the center of the field of illumination. In this paper, we find the lighting placement that maximizes the surface area within which the illumination is uniform to within some tolerance. This definition of optimality comes closer to what is desired in practice, since one generally wishes to illuminate objects having non-zero extents. Several different types of lighting are considered: area, line, point, and ring sources. Symmetry arguments are used to reduce the complexity of the analysis in many cases. Of particular interest is how to select a lighting design to optimize the illumination uniformity across regions that can be described by simple geometric models, such as circles, squares, rectangles, regular polygons, etc.
Keywords: illumination, machine vision
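The optimality criterion described above, the largest surface area within which illumination stays inside a tolerance band, can be evaluated numerically for a candidate lighting layout. This sketch assumes ideal point sources over a plane with cos(θ)/d² falloff; the sampling grid and function names are illustrative, not from the paper:

```python
def irradiance(x, y, sources):
    """Irradiance at (x, y) on the z=0 plane from unit-intensity point
    sources at (sx, sy, h): cos(theta)/d^2 = h/d^3 falloff."""
    e = 0.0
    for sx, sy, h in sources:
        d2 = (x - sx) ** 2 + (y - sy) ** 2 + h * h
        e += h / d2 ** 1.5
    return e

def uniform_fraction(sources, half_width, tol, n=41):
    """Fraction of a square of side 2*half_width whose irradiance lies
    within +/-tol (relative) of the value at the center."""
    e0 = irradiance(0.0, 0.0, sources)
    inside = 0
    for i in range(n):
        for j in range(n):
            x = -half_width + 2 * half_width * i / (n - 1)
            y = -half_width + 2 * half_width * j / (n - 1)
            if abs(irradiance(x, y, sources) - e0) <= tol * e0:
                inside += 1
    return inside / (n * n)
```

Maximizing `uniform_fraction` over source positions (e.g. the radius of a ring of point sources) is the discrete analogue of the optimization posed in the paper.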
Illumination, Sensor, and Image Modeling and Analysis
Gaussian curvature is an intrinsic local shape characteristic of a smooth object surface that is invariant to orientation of the object in 3D space and viewpoint. Accurate determination of the sign of Gaussian curvature at each point on a smooth object surface (i.e., the identification of hyperbolic, elliptical and parabolic points) can provide very important information for both recognition of objects in automated vision tasks and manipulation of objects by a robot. We present a multiple illumination technique that directly identifies elliptical, hyperbolic, and parabolic points from diffuse reflection from a smooth object surface. This technique is based upon a photometric invariant involving the behavior of the image intensity gradient under varying illumination, under the assumption of the image irradiance equation. The nature of this photometric invariant allows direct segmentation of a smooth object surface according to the sign of Gaussian curvature, independent of knowledge of local surface orientation, independent of diffuse surface albedo, and with only approximate knowledge of the geometry of the multiple incident illuminations. In comparison with photometric stereo, this new technique determines the sign of Gaussian curvature directly from image features without having to derive local surface orientation, and does not require calibration of the reflectance map from an object of known shape and similar material, or precise knowledge of all incident illuminations. We demonstrate how this segmentation technique performs under conditions of simulated image noise, and present actual experimental imaging results.
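For comparison, the classical differential-geometric route that the paper avoids, classifying points from an already-reconstructed height map by the sign of the Hessian determinant, looks like this. The paper's contribution is precisely that it obtains the sign photometrically, without this reconstruction step:

```python
def gaussian_curvature_sign(z, x, y):
    """Classify height-map point (x, y) by the sign of fxx*fyy - fxy^2,
    which has the same sign as the Gaussian curvature of the graph z(x, y)
    (the positive denominator (1 + fx^2 + fy^2)^2 cannot flip the sign)."""
    fxx = z[y][x + 1] - 2 * z[y][x] + z[y][x - 1]        # central differences,
    fyy = z[y + 1][x] - 2 * z[y][x] + z[y - 1][x]        # unit grid spacing
    fxy = (z[y + 1][x + 1] - z[y + 1][x - 1]
           - z[y - 1][x + 1] + z[y - 1][x - 1]) / 4.0
    det = fxx * fyy - fxy * fxy
    if det > 1e-12:
        return "elliptic"
    if det < -1e-12:
        return "hyperbolic"
    return "parabolic"
```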
The performance of the Gibbs Classifier over a statistically heterogeneous image can be improved if the locally stationary regions in the image are disassociated from each other through the mechanism of the interaction parameters defined at the local neighborhood level. This usually involves the construction of a line process for the image. In this paper we describe a method for constructing a line process for multisignature images based on the differential (total derivative) of the image which, when expressed statistically, can provide an a priori estimate of the line field.
The design of a high-speed range camera capable of providing on-the-fly range map assessment of printed circuit boards is described. The system uses a HeNe laser and time-space coding to achieve 180,000 range measurements per second. System spatial resolution is 0.001 × 0.001 in., and the range resolution is 0.00066 in., with a working depth range of 3/8 in. We discuss the electro-optic design and present results from various imaging experiments using the instrument.
A proprietary mechano-optical system was used to quantitatively characterize the topographies of aluminum adherends with the aim of identifying topographic parameters that relate to joint properties. The data obtained by the system are the coordinates of the topography function, i.e., z = z(x,y), where z are the heights on the surface as a function of their x,y coordinates in the midplane bisecting the surface undulations of the test region. The spatial interval between coordinates is an experimental variable. The topographies and surface oxidation states of the aluminum substrates were altered by irradiation from a KrF excimer laser, where the radiant intensity incident on the surface was 1.67 × 10^13 W/m^2 per pulse. In this procedure, mm^2 regions were irradiated for either 5 or 20 pulses (p) per region, with no overlap between regions. Irradiation with 20 p/region resulted in a 24% increase in ultimate joint strength and approximately 1.7 times the strain at fracture relative to the control, where the adhesive used was the same. Two experimentally determined topographic parameters are proposed to explain these results: the fractions of solid and void volume within the topographic surface region (i.e., between z_max and z_min), and the angular distributions of surface inclinations. These, we propose, affect both the probabilities of crack initiation within the topographic surface region and the extents to which crack propagation to failure is inhibited. A preliminary elastic model of joint deformation is given in which these parameters are shown to affect both crack initiation rates and crack propagation paths.
Illumination subsystems are critical elements of most machine vision and inspection systems. The linear response of most optical detectors generally requires much more uniform illumination than the logarithmic response of the human eye to achieve a similar level of performance. Excessive illumination can saturate optical detector elements, while insufficient illumination causes excessive shot noise that can cripple system performance. Speckle from coherent or partially coherent illumination can also affect system performance. Computer simulations of machine vision and inspection systems, like SPARTA's SENSORSIM, permit accurate analysis of various illumination designs to determine their suitability for a particular application. These simulations permit rapid analysis of system configurations and variation of system parameters to identify optimal designs on a price/performance curve. SENSORSIM can also provide images or other signatures at intermediate stages of the generation process for isolating and analyzing sources of degradation in a sensor system. This sort of analysis often is not possible in laboratory experiments. Although models of more limited scope may be useful for some analyses, such models cannot support analysis of phenomena that depend upon interaction among several components or subsystems. Thus, simulations like SENSORSIM provide an invaluable capability for optimizing the cost and performance of optical sensor systems.
Tools are now available for measurement imaging system designers to model optical distortion thoroughly and reduce it through design choices. In the case of modern automated systems, however, it is frequently more feasible to calibrate out the non-linearities and perform a real-time error correction operation. This paper presents, for the case of triangulation line scan or line-of-light imagers, some design trade-offs affecting distortion and general measurement linearity and analyzes the form of post-image correction method required. The 2D, cubic nature of a general distortion characteristic might typically mandate using a 2D correction value matrix, but one can exploit the inherent 1D nature of line-scan triangulation imaging in many cases to reduce the correction array to a 1D vector. Graphics of distortion simulations and empirical mappings are presented, and applicability of the analysis to several triangulation implementations is discussed.
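The reduction from a 2D correction matrix to a 1D vector can be sketched as a per-axis calibration: fit a smooth mapping from measured pixel position to true position through a few calibration targets (here a cubic via Lagrange interpolation, since the text notes the distortion is cubic in nature), then tabulate it once per pixel column. The fitting procedure is illustrative; the paper's actual method may differ:

```python
def fit_correction(pixels, true_pos):
    """Correction function pixel -> true position by Lagrange interpolation
    through the calibration pairs (a cubic for four pairs)."""
    def correct(p):
        total = 0.0
        for i, (pi, ti) in enumerate(zip(pixels, true_pos)):
            term = ti
            for j, pj in enumerate(pixels):
                if j != i:
                    term *= (p - pj) / (pi - pj)
            total += term
        return total
    return correct

def build_lut(correct, width):
    """Tabulate the correction once per pixel: the 1-D vector that replaces
    a full 2-D correction matrix for line-scan triangulation imagers."""
    return [correct(float(p)) for p in range(width)]
```

Because a cubic interpolant reproduces any cubic distortion exactly, four well-placed calibration targets suffice in this idealized 1D model.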
Selecting a sensor for product inspection depends upon a myriad of factors, among them motion, minimum resolvable feature size, field of view, contrast, and speed. These factors must be matched against the basic elements of the sensor. These basic elements, amplifier speed, readout registers, and pixel layout, size, and count, determine the performance of the sensor. It is this performance that determines the sensor's ability to correctly image a scene. The following creates a guideline based on the major sensor features and sensor performance, and shows how they relate to the scene to be imaged.
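Two of these matching rules reduce to simple arithmetic: enough pixels that the smallest feature is sampled at or above the Nyquist rate, and a line rate that keeps web motion under one pixel per exposure. A generic back-of-envelope sketch, not a formula from the paper:

```python
import math

def pixels_required(fov_mm, min_feature_mm, samples_per_feature=2.0):
    """Minimum pixel count across the field of view so the smallest
    resolvable feature spans at least `samples_per_feature` pixels
    (Nyquist sampling corresponds to 2)."""
    return math.ceil(fov_mm / min_feature_mm * samples_per_feature)

def line_rate_hz(speed_mm_s, object_pixel_mm):
    """Line-scan rate needed so the product moves at most one
    object-plane pixel per acquired line."""
    return speed_mm_s / object_pixel_mm
```

For example, a 300 mm field with 0.2 mm minimum features needs at least 3000 pixels, and at 1 m/s with 0.1 mm object-plane pixels the readout must sustain a 10 kHz line rate, which in turn constrains amplifier speed and the number of readout registers.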
Verification of computer vision theories is facilitated by the development and implementation of computer simulation systems. Computer simulations avoid the necessity of building actual camera systems; they are fast, flexible, and can be easily duplicated for use by others. In our previous work, we proposed a useful computational model to explore the image sensing process. This model decouples the photometric information and the geometric information of objects in the scene. In this paper, we further extend the proposed image sensing model to simulate the image formation of moving objects (motion) and of stereo vision systems. The simulation algorithms for curved objects, moving objects, and stereo imaging are presented. Based on the proposed model and algorithms, a computer simulation system called Active Vision Simulator (AVS) has been implemented. AVS can be used to simulate the image formation process in a monocular (MONO mode) or binocular (STEREO mode) camera system to synthesize images. It is useful for research on image restoration, motion analysis, depth from defocus, and algorithms for solving the correspondence problem in stereo vision. The implementation of AVS is efficient, modular, extensible, and user-friendly.
3D Sensing Methods and Systems I: Interferometry and Fringe Analysis
An experimental fringe projection system, composed of a video projector, a CCD camera, and a reference plane, has been developed for profiling large diffuse surfaces. A test object is placed on the reference plane, which is considered to be at zero height. The video projector projects PC-generated electronic images as fringes on the surface of the test object. The CCD camera is then used to acquire images of the test surface without viewing through a reference grating. The only constraint on the system's optical setup is non-parallel projection and viewing axes. This system is easy to set up, adaptable for inspecting test objects of various sizes, and offers high-precision phase shifting for phase measurement. However, flexibility in the placement of system components introduces geometrical distortions in images of test surfaces, such as translation, rotation, skewing, scaling, and perspective. As a result, surface contours cannot be determined simply from the phases and periods of the projected fringes. The current development utilizes a simple calibration procedure to establish quantitative linear geometrical relationships between the camera, projector, and reference plane. Those relationships, coupled with phase measurement, are used to determine true perspective surface contours.
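The phase-shifting step at the heart of such a system is standard: with four projected fringe patterns shifted in steps of 90 degrees, the wrapped phase at each pixel follows from a single arctangent. A generic sketch of four-step phase shifting, not the paper's full calibration pipeline:

```python
import math

def four_step_phase(i1, i2, i3, i4):
    """Wrapped phase from four frames with 0, 90, 180, 270 degree shifts,
    where frame k records I_k = A + B*cos(phi + k*pi/2).
    Then I1 - I3 = 2B*cos(phi) and I4 - I2 = 2B*sin(phi)."""
    return math.atan2(i4 - i2, i1 - i3)
```

The background A and fringe modulation B cancel in the ratio, which is why the method tolerates non-uniform illumination and surface reflectance; the calibration described above then converts phase to height.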
Precision measurement using a projection moire system requires the projection of a high-contrast grating pattern. This requirement is difficult to meet for very fine contour intervals: either an extremely fine resolution image must be created, or the grating must be projected at a large triangulation angle. Fine-resolution imaging is difficult to achieve while retaining significant depth range, and projection at a large triangulation angle using standard geometries does not produce uniform magnification of the grating over the viewing field. We present a unique geometry for achieving a large triangulation angle with uniform magnification of the grating over the viewing field.
Experiments using Liquid Crystal Televisions (LCTVs) as spatial light modulators for optical correlators and as optical input devices have been widely reported, and applications of these devices in target recognition and automatic inspection systems are well documented. These systems often require computer pre- and post-processing for image filtering and target recognition, which handicaps real-time optical processing applications. It is possible to construct custom reference gratings that form a desired moire pattern when mixed with images of structurally illuminated targets. The moire patterns can take any form, from equal-depth contours, to error maps, to any arbitrary pattern desired. We have demonstrated video methods to generate such error maps in real time. Furthermore, we have removed restrictions on the shape of the output moire contours, thus developing a real-time automated inspection system based on the optical processing of arbitrary moire contours. We chose the moire pattern to be in the form of a Fresnel zone plate, which is sent to an LCTV. Illumination of this zone plate with parallel coherent light results in a diffracted beam that produces a focused line on a detector. The result is a mixed video-optical processing system that could be used for real-time quality-level sorting or other automated inspection requirements.
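Generating the zone-plate target for the LCTV is straightforward: a binary Fresnel zone plate has ring boundaries at radii r_n = r1·sqrt(n), so transmission alternates each time (r/r1)² crosses an integer. A sketch with hypothetical parameter names:

```python
def zone_plate(size, r1):
    """Binary Fresnel zone plate as a size-by-size image of 0/1 values.
    Zone boundaries lie at radii r1*sqrt(n), so the zone index of a pixel
    at squared radius r2 is int(r2 / r1**2); even zones are transparent."""
    c = (size - 1) / 2.0
    img = []
    for y in range(size):
        row = []
        for x in range(size):
            r2 = (x - c) ** 2 + (y - c) ** 2
            row.append(1 if int(r2 / r1 ** 2) % 2 == 0 else 0)
        img.append(row)
    return img
```

Under collimated coherent illumination such a plate acts as a diffractive lens with focal length set by r1 and the wavelength, which is what produces the focused line on the detector.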
The optical moire effect, created by the interference of two grating patterns, is a proven means to extend the range of some structured light contouring methods. As has been exploited for years in glass-scale encoders, the moire effect allows small displacements to be measured from lines well below the resolution limit of the camera system used to view the patterns. This paper presents an overview of how this effect can be used effectively. The analysis includes a formulation to quantify the optical leverage effect of moire for 3D contouring applications, and a quantitative comparison of resolution limits for moire versus straight structured light. The benefits and limitations of optical moire leveraging are discussed for applications where the technique is well suited and where it is not. Finally, optical methods that can be employed to improve the images of moire patterns are presented.
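The leverage comes from the beat between the two pitches: superposed parallel gratings of pitch p1 and p2 produce moire fringes of period p1·p2/|p1 - p2|, which can be far coarser than either grating and hence resolvable by a camera that cannot resolve the gratings themselves:

```python
def moire_period(p1, p2):
    """Beat period of two superposed parallel gratings of pitch p1 and p2,
    from the difference of their spatial frequencies: 1/p1 - 1/p2."""
    if p1 == p2:
        return float("inf")          # identical pitches: no beat fringes
    return p1 * p2 / abs(p1 - p2)
```

For example, gratings of pitch 1.00 and 1.10 (arbitrary units) beat at a period of 11, an elevenfold optical leverage over the finer grating.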
3D Sensing Methods and Systems II: Structured Light, Laser Radar and Three Dimensional Scanning Systems
A variety of techniques have been previously developed for surface topography reconstruction using CCD video images and various image processing algorithms. These include passive stereo disparity estimation using sub-area image correlation, confocal imaging (depth from focus), as well as structured light techniques. These approaches are compared on the basis of theoretical height error and algorithm complexity. A simple post-processing scheme based on the use of Fourier phase of structured light is then described, and results are shown from height measurements of tens of microns over areas on the order of a centimeter. A diode laser source is used in conjunction with a fan-beam refractive element, appropriate optical filtering, microscope, and CCD camera. This approach has application to long working-distance microscopy, closed-loop numerical control of machining, and retinal surface topography for early disease detection. Our approach offers a simple, low-cost, and real-time method of surface topography visualization and closed-loop machine tool control. Reflected laser beam quality and associated digital image filtering are considered with respect to the nature of possible surface materials measured.
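The Fourier-phase post-processing can be illustrated in one dimension: with a projected carrier I[x] = a + b·cos(2π·f0·x + φ), demodulation at the known carrier frequency recovers the phase, which is proportional to surface height in a triangulation geometry. This stdlib-only sketch assumes a constant φ and an integer number of carrier periods; real profilometry isolates the carrier sideband in the Fourier domain to recover a spatially varying phase map:

```python
import cmath
import math

def demodulate_phase(signal, f0):
    """Phase of a fringe signal I[x] = a + b*cos(2*pi*f0*x + phi) by
    synchronous demodulation at the carrier frequency f0 (cycles/sample).
    Exact when len(signal)*f0 is an integer, so the DC and conjugate
    terms sum to zero over the record."""
    z = sum(s * cmath.exp(-2j * math.pi * f0 * x)
            for x, s in enumerate(signal))
    return cmath.phase(z)
```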
An optical fiber is illuminated at a point (V) by a collimated laser beam forming an angle θ with the fiber axis. A light cone (divergence 2θ, vertex V, axis coincident with the fiber) is thus generated. The 3D curve (C) resulting from the intersection of the light cone with a given opaque surface is recorded by a CCD camera coaxial with the optical fiber. Numerical processing of the curve image allows determination of the polar coordinates (with origin at point V) of each point of curve (C). This device allows internal topographic inspection of concave surfaces, so it may be considered the counterpart, in polar coordinates, of classic methods that use lateral projection of a light plane to determine contour lines of convex surfaces in Cartesian orthogonal coordinates. Application of this method to inspection of the human buccal cavity shows a strong influence of laser light diffusion within human tissue on the resolution of the resulting images.
A sensing technique for making continuous geometric measurements of the surfaces of 3D objects is described. The sensor has several advantages over other 3D sensing technologies, including the ability to strobe the light source quickly (thereby freezing any relative motion) and high-precision measurement capability. The system is also robust to ambient lighting, able to measure across surface discontinuities, and capable of measuring moving objects. The technique uses a fan of laser planes to illuminate the scene and multiple solid-state video cameras to measure the stripes in the scene. The methods for disambiguating and triangulating the stripes into 3D coordinates are given, and an example reconstructed scene is presented.
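Once each stripe is disambiguated, the triangulation step reduces to intersecting each camera ray with the identified laser plane. A minimal sketch with the camera at the origin, a ray direction d, and a plane n·X = c (names illustrative):

```python
def triangulate(ray_dir, plane_n, plane_c):
    """Intersect the camera ray X = t*ray_dir (camera center at the origin)
    with the laser plane n . X = c; returns the 3-D point, or None if the
    ray is parallel to the plane."""
    denom = sum(n * r for n, r in zip(plane_n, ray_dir))
    if abs(denom) < 1e-12:
        return None
    t = plane_c / denom
    return tuple(t * r for r in ray_dir)
```

In the multi-camera system, the same stripe triangulated from two viewpoints must agree, which is one way stripe identities can be disambiguated.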
Light sectioning computes the object profile using the line deformation seen by a CCD camera. Due to the low magnification between the object and the CCD plane, the image line deformation is small and leads to low resolution on profile measurements. To obtain a larger image deformation, we propose to introduce an additional cylindrical lens that will magnify exclusively the horizontal direction (which contains the profile information) without having any effect on the vertical direction. The limitations of this anamorphic system, such as maximum magnification and maximum object depth, are theoretically examined. The advantages of using a cylindrical lens are mentioned and typical profiles (pyramid, cylinder...) show a gain in depth resolution of approximately 2 to 3 when compared to an earlier light sectioning system. The depth resolution, which was 0.36 mm with the previous system, is now improved to 0.15 mm for an object average distance of 350 mm.
There is a large class of machine vision inspection problems that requires detecting and characterizing voids in mostly diffuse materials. Typical examples include surface chips or gouges in ceramic parts, voids in powdered-metal components prior to sintering, and surface distress in construction and paving materials. There are two interrelated difficulties: achieving lighting that provides enough image contrast for reliable detection, and developing image processing algorithms that can distinguish reliably between voids and image artifacts. The problems are particularly acute when inspecting composite materials containing dark inclusions that can be confused with voids, such as the aggregates in cementitious materials. We have examined this problem theoretically to understand the contrast-forming mechanisms. In this paper we present theoretical methods for modeling image contrast in images of small voids. We show how to use these methods to design appropriate illumination systems and image processing algorithms, and we validate the approach by comparing the theoretical results with experimental measurements. The prototype system we discuss uses machine vision methods to measure air voids in cured and sectioned samples of portland cement concrete. These measurements allow estimation of air entrainment, a material property which, when properly controlled, can enhance the concrete's ability to resist microcracking and structural deterioration during repeated cycles of freezing and thawing.
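One simple contrast-forming model, consistent with the diffuse-material setting above (but an illustrative assumption of this sketch, not the paper's derivation), treats a surface void as a cavity that returns only a fraction of the incident light. In this model the Michelson contrast depends only on the cavity escape fraction, which hints at why a dark inclusion (low reflectance, but no light trapping) behaves differently from a true void under suitably designed illumination:

```python
def void_michelson_contrast(surface_reflectance, cavity_escape_fraction):
    """Michelson contrast between a diffuse surface and a void modeled as a
    light-trapping cavity: the void radiance is the surface radiance scaled
    by the fraction of light that escapes the cavity."""
    i_surface = surface_reflectance
    i_void = surface_reflectance * cavity_escape_fraction
    return (i_surface - i_void) / (i_surface + i_void)

# A void that traps 80% of the light on a concrete-like surface:
c = void_michelson_contrast(0.5, 0.2)
```

Note that the reflectance cancels in this simplified model; the contrast is (1 - f) / (1 + f) for escape fraction f.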
Dimensional measurement techniques are required for distance and length measurement, for two- and three-dimensional profiling, and for volume determination. Different measurement principles are available to meet the requirements of the many tasks in process monitoring and quality control. Among the 3-D methods, the optical radar reported here offers particular advantages: a short data-acquisition period, co-axial optics, high accuracy in distance determination, and quick, straightforward data evaluation. In this context, a system based on the optical radar method is described in more detail. The system provides data for three-dimensional scene analysis, with the spatial coordinates of a surface determined within a pre-defined volume. A complete 3-D picture with 500 x 500 pixels is recorded and evaluated in less than 2 seconds. The role of speckle in limiting the measurement accuracy is also described in some detail.
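The distance determination in an optical radar reduces to a time-of-flight relation: range is half the round-trip path of the light. A minimal sketch (the function name and the pixel-budget arithmetic are illustrative):

```python
C = 299_792_458.0  # speed of light, m/s

def range_from_round_trip(t_seconds):
    """Range from a measured round-trip time of flight: R = c * t / 2."""
    return C * t_seconds / 2.0

# A target 1 m away produces a round-trip delay of about 6.67 ns:
r = range_from_round_trip(2.0 / C)

# Pixel budget: a 500 x 500 picture in under 2 s implies at least
# 125,000 range measurements per second.
rate = 500 * 500 / 2.0
```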
Recent advances in fiber-optic components and digital processing hardware have enabled the development of a new 3D vision system based on a fiber-optic FMCW coherent laser radar. The approach includes a compact scanner with no moving parts that is capable of randomly addressing all pixels. The system maintains the immunity to lighting and surface-shading conditions that is characteristic of coherent laser radar. The random pixel addressability allows scanning and processing to be concentrated on the active areas of a scene, as is done by the human eye-brain system.
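In an FMCW laser radar, range is encoded in the beat frequency between the transmitted chirp and the return: the beat frequency equals the chirp slope times the round-trip delay. A minimal sketch of that standard relation (the parameter values are illustrative, not this system's specifications):

```python
C = 299_792_458.0  # speed of light, m/s

def fmcw_range(f_beat_hz, chirp_bandwidth_hz, chirp_period_s):
    """Range from the measured beat frequency of a linear FMCW chirp.

    f_beat = slope * tau, with slope = B / T and tau = 2R / c,
    so R = c * f_beat * T / (2 * B).
    """
    return C * f_beat_hz * chirp_period_s / (2.0 * chirp_bandwidth_hz)

# With a 1 GHz chirp swept over 1 ms, a 1 m target beats at about 6.67 kHz:
f_beat = (1e9 / 1e-3) * (2.0 * 1.0 / C)
r = fmcw_range(f_beat, 1e9, 1e-3)
```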
We describe the design and implementation of an integrated imaging sensor that acquires registered depth and intensity data to determine the 3D geometry and natural reflectance of a scene. Since a linear translation stage is used for object scanning, image sequences of constant motion along the x-axis may also be derived. The complementary information from these sources can resolve uncertainty and ambiguity in data from a single source, and it provides multiple processing paths for subsequent image segmentation and understanding.
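The complementary use of registered channels can be illustrated with a simple fusion rule: mark a discontinuity wherever either the depth map or the intensity image shows a jump, so an edge invisible to one channel (a reflectance edge with no depth change, or a depth step with uniform reflectance) is still recovered from the other. The thresholds, arrays, and function name below are illustrative assumptions, not the paper's segmentation method.

```python
import numpy as np

def fused_edges(depth, intensity, depth_step, intensity_step):
    """Column-wise edge map: True where either registered channel jumps."""
    d_jump = np.abs(np.diff(depth, axis=1)) > depth_step
    i_jump = np.abs(np.diff(intensity, axis=1)) > intensity_step
    return d_jump | i_jump

# A reflectance edge with flat depth, then a depth step with uniform intensity:
depth = np.array([[1.0, 1.0, 5.0, 5.0]])
intensity = np.array([[0.2, 0.9, 0.9, 0.9]])
edges = fused_edges(depth, intensity, depth_step=1.0, intensity_step=0.5)
```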
3D Sensing Methods and Systems I: Interferometry and Fringe Analysis
A robotic vision system design for industrial inspection is proposed. It uses a 2D color CCD camera with a modified focusing lens: a diffractive optic element (DOE) is etched into a small, off-axis region of the lens, and a corresponding region of the detector is dedicated to the DOE image. This image, after some processing, provides the third dimension. The output information includes a standard 2D image, the distance to the component under test, and, if the object is in motion, the direction and velocity of its motion toward or away from the detector plane. The system can be designed to operate at visible or infrared wavelengths for a variety of inspection applications.
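Given the distance that the DOE channel delivers at successive frames, the axial motion described above follows from a finite difference: the sign gives the direction (toward or away from the detector plane) and the magnitude gives the speed. A minimal sketch (the frame interval and range values are illustrative):

```python
def axial_velocity(range_prev_mm, range_curr_mm, frame_interval_s):
    """Axial velocity from two successive range measurements.
    Positive values mean the object is moving away from the detector plane."""
    return (range_curr_mm - range_prev_mm) / frame_interval_s

# An object receding from 350 mm to 352 mm between frames 40 ms apart:
v = axial_velocity(350.0, 352.0, 0.04)  # mm/s
```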