Visual inspection is, by far, the most widely used method in aircraft surface inspection. We are currently developing a prototype remote visual inspection system, designed to test the hypothesized feasibility and advantages of remote visual inspection of aircraft surfaces. In this paper, we describe several experiments with image understanding algorithms developed to aid remote visual inspection by enhancing and recognizing surface cracks and corrosion in live imagery of an aircraft surface. Also described are the supporting mobile robot platform that delivers the live imagery and the inspection console through which the inspector accesses the imagery for remote inspection. We discuss preliminary results of the image understanding algorithms and speculate on their future use in aircraft surface inspection.
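A hedged sketch of one plausible crack-enhancement step of the kind the abstract alludes to: a morphological black-hat filter that emphasizes thin dark structures, followed by a simple statistical threshold. The function name, structuring-element size and threshold factor are illustrative assumptions, not the paper's method.

```python
# Minimal sketch: enhance thin dark crack-like structures in a grayscale image.
import numpy as np
from scipy import ndimage

def enhance_cracks(gray, struct_size=9, k=3.0):
    """Return a crack-emphasis image and a candidate-crack mask."""
    gray = gray.astype(float)
    # Black-hat: grey closing minus original highlights dark details narrower
    # than the structuring element.
    closed = ndimage.grey_closing(gray, size=(struct_size, struct_size))
    blackhat = closed - gray
    # Simple global threshold: mean plus k standard deviations (illustrative).
    mask = blackhat > blackhat.mean() + k * blackhat.std()
    return blackhat, mask
```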
Automated tools for semiconductor wafer defect analysis are becoming more necessary as device densities and wafer sizes continue to increase. Trends towards larger wafer formats and smaller critical dimensions have caused an exponential increase in the volume of defect data that must be analyzed and stored. To accommodate these changing factors, automatic analysis tools are required that can efficiently and robustly process the increasing amounts of data, and thus quickly characterize manufacturing processes and accelerate yield learning. During the first year of this cooperative research project between SEMATECH and the Oak Ridge National Laboratory, a robust methodology for segmenting signature events prior to feature analysis and classification was developed. Based on the results of this segmentation procedure, a feature measurement strategy was designed from interviews with process engineers coupled with the analysis of approximately 1500 electronic wafermap files. In this paper, the authors present an automated procedure to rank and select relevant features for use with a fuzzy pair-wise classifier and give examples of the efficacy of the approach taken. Results of the feature selection process are given for two distinctly different types of class data to demonstrate a general improvement in classifier performance.
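A hedged sketch of a simple feature-ranking step, standing in for the automated ranking and selection procedure described above: each candidate feature is scored by a Fisher-style ratio of between-class to within-class spread and the top-k are kept. Function names and the choice of score are illustrative assumptions.

```python
# Minimal sketch: rank feature columns by class separability, keep the best k.
import numpy as np

def fisher_rank(X, y):
    """Score each feature column of X by between-class vs. within-class spread."""
    classes = np.unique(y)
    overall_mean = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        between += len(Xc) * (Xc.mean(axis=0) - overall_mean) ** 2
        within += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
    return between / (within + 1e-12)

def select_top_k(X, y, k=10):
    scores = fisher_rank(X, y)
    return np.argsort(scores)[::-1][:k]      # indices of the k best features
```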
In the textile industry, the degree of fabric pilling is determined subjectively by human inspectors, resulting in inconsistent quality control. The observed resistance to pilling is reported on an arbitrary scale ranging from No. 5 (no pilling) to No. 1 (very severe pilling). This paper presents a system and a methodology that automatically counts the number of pillings on textile fabric samples and classifies each sample into one of the pre-defined classes with repeatable accuracy, while accounting for human judgment by providing the degree of confidence assigned to the sample's membership in each class. The system consists of an imaging apparatus, an image and data processing procedure for counting the pillings, and a classification methodology. A CCD camera captures successive grayscale images of the fabric sample. A series of segmentation, Radon transform, morphological filtering, and detrending operations is applied to the fabric images to determine the true pilling count. The structuring element for the morphological operations is designed so that fuzz balls (which are not pillings) are filtered out. Using fuzzy membership functions, the fabric pilling count is mapped to a fabric pilling resistance rating. The system has been successfully tested on a large number of fabric samples with different shades and textures provided by the textile industry.
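A hedged sketch of the counting-and-rating idea: a morphological top-hat with a pill-sized structuring element to suppress fuzz, blob counting, and triangular fuzzy membership functions that map the count to the 1-5 rating scale. The structuring-element radius, thresholds and membership breakpoints are illustrative assumptions; the paper's Radon-transform and detrending steps are not reproduced here.

```python
# Minimal sketch: count pill-like blobs and assign fuzzy memberships to ratings.
import numpy as np
from skimage import morphology, measure, filters

def count_pills(gray, pill_radius=5):
    # White top-hat keeps bright blobs smaller than the disk (candidate pills).
    tophat = morphology.white_tophat(gray, morphology.disk(pill_radius))
    mask = tophat > filters.threshold_otsu(tophat)
    labels = measure.label(mask)
    return int(labels.max())                 # number of connected components

def pilling_rating_memberships(count, breakpoints=(0, 5, 15, 30, 60)):
    """Fuzzy membership of the count in ratings 5 (no pilling) .. 1 (severe)."""
    ratings = [5, 4, 3, 2, 1]
    memberships = {}
    for r, centre in zip(ratings, breakpoints):
        width = 10.0                         # illustrative spread of each class
        memberships[r] = max(0.0, 1.0 - abs(count - centre) / width)
    return memberships
```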
This paper describes a framework for fast object recognition and its application in a system that recognizes certain electronic components on printed circuit boards (PCBs) for recycling purposes. Objects in the database and in the image are represented as attributed graphs, where vertices are regions with attributes (color, shape) and edges are spatial relations between the regions (adjacent, surrounds). The task of finding model objects in the input data thus becomes a problem of inexact subgraph isomorphism. The suggested algorithm finds all occurrences of all model graphs in the input graph in the presence of low-level processing errors and model uncertainty. Using the ideas of the inexact network algorithm (INA), we build a network from the model graphs, so that when the models share identical substructures these substructures have to be matched only once. Because different models share the same substructures mostly when they belong to the same, more general class, we incorporate the possibility of attribute refinement in our model network. To further speed up the matching, we introduce the notion of a 'key' vertex, so that recognition proceeds from easily recognizable substructures to more ambiguous ones. The algorithm was applied to real images of PCBs. The results show the effectiveness of the INA and the suggested modifications in this application.
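A hedged sketch illustrating the graph representation only: regions become attributed vertices, spatial relations become attributed edges, and a component model is sought in the scene graph. The sketch uses networkx's exact attributed subgraph matcher as a baseline; the paper's inexact network algorithm, shared substructures and "key" vertices are not reproduced.

```python
# Minimal sketch: attributed-graph representation and exact subgraph matching.
import networkx as nx
from networkx.algorithms import isomorphism

def find_component(model, scene):
    """Yield mappings of the model graph onto subgraphs of the scene graph."""
    node_ok = isomorphism.categorical_node_match(["color", "shape"], [None, None])
    edge_ok = isomorphism.categorical_edge_match("relation", None)
    gm = isomorphism.GraphMatcher(scene, model, node_match=node_ok, edge_match=edge_ok)
    yield from gm.subgraph_isomorphisms_iter()

# Example model: a resistor as two pads adjacent to a body region (illustrative).
model = nx.Graph()
model.add_node("body", color="black", shape="rect")
model.add_node("pad1", color="silver", shape="rect")
model.add_node("pad2", color="silver", shape="rect")
model.add_edge("body", "pad1", relation="adjacent")
model.add_edge("body", "pad2", relation="adjacent")
```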
The observation of the whole surrounding space over a 360° angle, as well as the reconstruction of a 3D scene presumed unknown, is of clear interest in the field of robotic vision. The object of this paper is the description of a system dedicated to binocular peripheral vision, whose design is specifically tailored to the problem at hand. The architecture of our image-capture system rests on one key principle: simplification of the computation.
Automated visual surface inspection is an important task in industrial quality control, and a plethora of optical systems with imaging or scanning configurations are at work in industry. Surface inspection of ceramic tiles, however, is a task that is still performed manually by most manufacturing companies. The main reason the ceramic tile industry has not yet fully automated its inspection procedures is that its surfaces are produced in an extremely wide range of textures, colours, dimensions and patterns, whereas visual surface inspection systems can normally handle only a limited range of surface textures. The system described here detects defects such as dents and scratches using dark field illumination [1]; the two classes of defect are detected by imaging deflected and scattered light, respectively. Dark field illumination is a technique that in principle can only be used with specular surfaces. To cope with tile surfaces that are not always specular (surface roughnesses ranging from 0.08 µm to 1.5 µm), grazing incidence illumination is used to increase the ratio of specular to diffuse reflection [2]. The surfaces are imaged on a line scan camera placed at a large angle relative to the surface normal. This can be up to 23° away from the specular direction; for wider separation angles the signal from scratches is too weak to be useful for an industrial application. In the resulting images the surfaces appear dark, while defects appear bright. Illumination and imaging take place in a plane perpendicular to the direction of motion of the conveyor belt. The imaging is oblique, and the image plane forms an angle with the lens plane; this angle is specified by the Scheimpflug imaging condition. Tilting the camera relative to the lens makes it possible to use a wide-aperture imaging lens (low F-number) despite its small depth of field, which increases the total light throughput of the system. Forming the image on a line scan camera limits the perspective distortion due to oblique imaging to one direction. The surface flaws that can be detected with the presented configuration are topographic defects (bumps, dents, inclusions) as well as digs and scratches. The two types of defects are detected by imaging deflected and scattered light, respectively, on two separate camera heads.
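A hedged sketch of the detection step that follows such an optical arrangement: in dark-field imagery the background is dark and defects appear bright, so a robust per-line threshold is often enough to flag candidate defect pixels in each line-scan line. The threshold factor and robust-spread estimate are illustrative assumptions.

```python
# Minimal sketch: flag bright outliers in one line-scan line of a dark-field image.
import numpy as np

def flag_defects(line, k=6.0):
    """Return a boolean mask of candidate defect pixels in a 1-D scan line."""
    median = np.median(line)
    mad = np.median(np.abs(line - median)) + 1e-6   # robust spread estimate
    return line > median + k * mad                  # True where a defect candidate lies
```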
Lighting, an important aspect of quality control by artificial vision, is often neglected. Scientists have to spend time and run several experiments before finding a good solution. We have chosen to study the lighting of metallic objects according to the shape and the roughness of the object, with the aim of optimal defect detection. The first step of our study was to find a theoretical model of light scattering. To account for reflection from smooth as well as rough surfaces, a physical approach is required in the manner of Beckmann and Spizzichino, who based their model on electromagnetic wave theory. The general expression for surface radiance is a fairly complicated function of the angles of incidence and reflection and of the surface roughness parameters. Radiance diagrams of this model give the light intensity reflected by an elementary surface in all directions. Thanks to this model, we are able to optimize the size and position of the source according to the shape and roughness of the object and the type of defects to be detected. In the field of artificial vision, the conceivable applications are numerous: for instance, defect detection (scratches, knocks, etc.) or dimensional control of objects (swell or undulation measurement, etc.).
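A hedged sketch for plotting radiance-lobe shapes: the Beckmann microfacet distribution in the simplified form popularised by Cook and Torrance, which is often used as a practical stand-in for the full Beckmann-Spizzichino scattering model. The RMS slope values are illustrative, and this is a slope distribution only, not the paper's complete radiance expression.

```python
# Minimal sketch: Beckmann facet-slope distribution for two roughness values.
import numpy as np

def beckmann(alpha, m=0.2):
    """Distribution for angle alpha (radians) between surface normal and half-vector;
    m is the RMS facet slope."""
    cos_a = np.cos(alpha)
    return np.exp(-(np.tan(alpha) / m) ** 2) / (np.pi * m ** 2 * cos_a ** 4)

alphas = np.linspace(0, np.pi / 2 - 0.01, 200)
smooth = beckmann(alphas, m=0.1)    # nearly specular surface: narrow lobe
rough = beckmann(alphas, m=0.4)     # rougher surface: broader lobe
```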
In this study the relation between the performance of the imaging unit of a web inspection system and the final image quality is discussed. The basic idea is to analyze the results of the segmentation and feature extraction of defects in sample images as a function of the imaging parameters. Determination of the quality of imaging and examples of the performance of a typical imaging unit are reviewed. The effect of image quality on defect segmentation and feature extraction is analyzed in two cases: (1) the detection of small, low-contrast defects in paper inspection and (2) depth-of-field considerations in steel inspection. Samples picked from the industrial manufacturing process are imaged using different imaging parameters, and the defect areas in the images are segmented in order to illustrate the dependence of the system performance on the quality of imaging. Several segmentation methods are applied, including direct thresholding, edge-based filtering, matched filtering and morphological filtering. The contrast of certain types of defects can be improved before segmentation by averaging the input data line by line. The signal processing methods presented here are computationally simple, owing to the need for high-speed real-time implementation in practical inspection.
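A hedged sketch of the line-averaging idea mentioned above: averaging successive scan lines suppresses uncorrelated noise and raises the contrast of streak-like, low-contrast defects before a simple threshold is applied. The window size, background estimate and threshold factor are illustrative assumptions.

```python
# Minimal sketch: average successive scan lines, then threshold the residual.
import numpy as np
from scipy.signal import convolve2d

def segment_streaks(image, window=16, k=4.0):
    """Detect low-contrast, streak-like defects in a web image."""
    kernel = np.ones((window, 1)) / window          # running average over scan lines
    smoothed = convolve2d(image.astype(float), kernel, mode="same", boundary="symm")
    # Deviation from the per-column background after averaging.
    residual = smoothed - np.median(smoothed, axis=0, keepdims=True)
    return np.abs(residual) > k * residual.std()
```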
A robot-mounted camera is useful in many machine vision tasks, as it allows control over view direction and position. In this paper we report a technique for calibrating both the robot and the camera using only a single corresponding point. All existing head-eye calibration systems we have encountered rely on pre-calibrated robots, pre-calibrated cameras, special calibration objects, or combinations of these. Our method avoids large-scale non-linear optimizations by recovering the parameters in small dependent groups. This is done by performing a series of planned, but initially uncalibrated, robot movements. Many of the kinematic parameters are obtained using only camera views in which the calibration feature is at, or near, the image center, thus avoiding errors that could be introduced by lens distortion. The calibration is shown to be both stable and accurate. The robotic system we use consists of a camera with pan-tilt capability mounted on a Cartesian robot, providing a total of 5 degrees of freedom.
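A hedged sketch of one small supporting step: a proportional correction that drives the calibration feature towards the image centre, the condition under which the paper takes its measurements to stay clear of lens distortion. The gain and the interface to the pan-tilt head are illustrative assumptions; the calibration procedure itself is not reproduced.

```python
# Minimal sketch: proportional centring of a tracked feature in the image.
def centre_feature(feature_xy, image_size, gain=0.1):
    """Return (d_pan, d_tilt) commands that nudge the feature towards the centre."""
    cx, cy = image_size[0] / 2.0, image_size[1] / 2.0
    ex, ey = feature_xy[0] - cx, feature_xy[1] - cy
    # Small planned moves, iterated until the centring error is small.
    return -gain * ex, -gain * ey
```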
A high-speed, high-resolution surface defect image acquisition system has been designed and implemented. The system was designed to acquire images of a wide, moving metal surface; its goal is to show factory operators the location and shape of surface defects. Since defects are very small and the inspected area is very large, a great amount of pixel data must be handled within a short time, which requires substantial data reduction. The system consists of a set of CCD line-scan cameras, a line image processing system, and a workstation. Among the components of the system, the lighting and the multi-camera arrangement are the most important parts. Six high-resolution line-scan cameras are used to obtain more than twenty thousand pixels per line. The line image processing unit consists of a camera controller, frame grabbers, and defect image modules. To reduce the data drastically, a run-length-encoding scheme is used, encoding only the locations of possible defects. This logic is implemented in FPGA chips. We present the implementation details in this paper.
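A hedged sketch of the data-reduction step in software form: threshold one scan line and emit run-length records (start, length) for the bright candidate-defect pixels, mirroring the kind of encoding the paper implements in FPGA hardware. Function and parameter names are illustrative.

```python
# Minimal sketch: run-length encode the above-threshold pixels of one scan line.
import numpy as np

def encode_defect_runs(line, threshold):
    """Return (start_index, run_length) pairs for pixels above threshold."""
    above = np.asarray(line) > threshold
    padded = np.concatenate(([False], above, [False]))
    edges = np.flatnonzero(np.diff(padded.astype(int)))   # rising/falling edges
    starts, ends = edges[0::2], edges[1::2]
    return list(zip(starts.tolist(), (ends - starts).tolist()))

# A 20,000-pixel line containing two small defects compresses to two short records.
```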
This paper deals with a class of textures that can be represented by the Markov Random Field (MRF) model. It is well known that by changing the MRF parameters, an extremely wide range of textures can be generated. However, it is not easy to model and classify a textured image, since there is no clear-cut mathematical definition of texture. Although many classification methods exist in the literature, the success of the results depends heavily on the data type. Thus, appropriate measures that give a visually meaningful representation of texture are highly desirable. In this study a new set of texture measures, namely Mean Clique Length (MCL) and Clique Standard Deviation (CSD), is introduced. These measures are defined using new concepts that agree with the human visual system. Simulation experiments are performed on a binary MRF texture alphabet to quantify the data by the MCL and CSD measures. Experimental results indicate that the introduced measures identify visually similar textures much better than the mathematical distance measures.
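A hedged sketch of how a binary MRF texture alphabet of the kind used in these experiments can be generated: a Gibbs sampler over an Ising-type four-neighbour model. The coupling value and sweep count are illustrative; the MCL and CSD measures themselves are the paper's contribution and are not reproduced here.

```python
# Minimal sketch: sample a binary MRF (Ising) texture with a Gibbs sampler.
import numpy as np

def sample_binary_mrf(shape=(64, 64), beta=0.8, sweeps=50, rng=None):
    rng = rng or np.random.default_rng(0)
    x = rng.choice([-1, 1], size=shape)
    rows, cols = shape
    for _ in range(sweeps):
        for i in range(rows):
            for j in range(cols):
                # Sum of the four nearest neighbours (toroidal boundary).
                s = (x[(i - 1) % rows, j] + x[(i + 1) % rows, j]
                     + x[i, (j - 1) % cols] + x[i, (j + 1) % cols])
                p_plus = 1.0 / (1.0 + np.exp(-2.0 * beta * s))
                x[i, j] = 1 if rng.random() < p_plus else -1
    return (x + 1) // 2          # return as a 0/1 texture
```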
Automatic image processing systems suffer from the high engineering costs necessary to adapt the configuration of the system to the problem under consideration. This paper therefore presents an approach to reduce these costs by applying optimization methods to the setup and configuration of an image processing system. We report on experiments in which we automatically select adequate subsets of textural features from a large set of potential candidates. In addition, we explain how and why training-pattern selection is used as part of the optimization process. Finally, we show how genetic programming can be used to construct new feature sets. This method has no conventional counterpart and offers an interesting way to complement human-designed algorithms.
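A hedged sketch of the feature-subset-selection part only: a minimal genetic search over boolean feature masks driven by a user-supplied fitness function (for example, a classifier's validation accuracy on the selected subset). GA parameters and names are illustrative; the genetic-programming construction of new features is not reproduced.

```python
# Minimal sketch: genetic search over feature subsets.
import numpy as np

def ga_select(n_features, fitness, pop=20, gens=30, p_mut=0.05, rng=None):
    """fitness(mask) -> float; higher is better. Returns the best boolean mask."""
    rng = rng or np.random.default_rng(0)
    population = rng.random((pop, n_features)) < 0.5
    for _ in range(gens):
        scores = np.array([fitness(ind) for ind in population])
        parents = population[np.argsort(scores)[::-1][: pop // 2]]   # truncation selection
        children = []
        for _ in range(pop - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_features)
            child = np.concatenate([a[:cut], b[cut:]])                # one-point crossover
            child ^= rng.random(n_features) < p_mut                   # bit-flip mutation
            children.append(child)
        population = np.vstack([parents, np.array(children)])
    scores = np.array([fitness(ind) for ind in population])
    return population[np.argmax(scores)]
```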
Ultrasonic non-destructive inspection is today a routine method in industry for the detection, localization and sizing of surface and buried defects in engineering structures. However, analyzing the huge amount of data obtained during an ultrasonic non-destructive inspection is not a simple task and is usually time consuming. This is why the data are displayed in the form of images, to take advantage of the power of visual representation, and why image processing tools are used to speed up the analysis. In ultrasonic non-destructive inspection the data are displayed mainly as three types of images, known as B-SCAN, C-SCAN and D-SCAN displays. The work presented herein concerns the application of an image processing algorithm to B-SCAN displays in order to detect crack defects in thick engineering structures. The algorithm is based on the Hough transform, in which a gradient analysis is performed during the computation of the Hough space. To decide whether a defect is present in the structure, we set a threshold and analyze the Hough space: any maximum in the Hough space that is greater than the threshold represents a defect in the structure. Due to the diversity of B-SCAN displays, which may or may not contain defects, the threshold is not fixed and depends on the Hough space obtained.
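A hedged sketch of a gradient-guided straight-line Hough transform in the spirit of the description above: each strong edge pixel votes only for orientations close to its local gradient direction, with the vote weighted by the gradient magnitude. Grid sizes and the angular tolerance are illustrative, and the paper's adaptive threshold on the Hough space is not shown.

```python
# Minimal sketch: line Hough accumulation restricted by the local gradient direction.
import numpy as np

def gradient_hough(image, n_theta=180, tol_deg=10.0):
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    grad_dir = np.arctan2(gy, gx)                       # edge-normal direction
    h, w = image.shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.linspace(-np.pi / 2, np.pi / 2, n_theta, endpoint=False)
    acc = np.zeros((2 * diag, n_theta))
    ys, xs = np.nonzero(mag > mag.mean() + 2 * mag.std())   # strong edge pixels only
    for y, x in zip(ys, xs):
        for t_idx, theta in enumerate(thetas):
            # Vote only where the line normal agrees with the local gradient (mod pi).
            diff = np.angle(np.exp(1j * (theta - grad_dir[y, x])))
            if abs(diff) < np.deg2rad(tol_deg) or abs(abs(diff) - np.pi) < np.deg2rad(tol_deg):
                rho = int(round(x * np.cos(theta) + y * np.sin(theta))) + diag
                acc[rho, t_idx] += mag[y, x]
    return acc, thetas
```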
Combinatorial Probabilistic Hough Transforms (CPHTs) are a class of HTs that transform the minimal subsets of points required to define an instance of the sought shape to single parameter cells, thus reducing redundant evidence. Existing CPHTs discard valuable information contained in the gradient of the object outlines. This research proposes a novel HT technique for the detection of circular instances, called the C2PHT. The key idea of the C2PHT is the incorporation of gradient information, which results in a further reduction of redundant evidence by transforming point-tuples to very small sets of parameter cells. Thus, the complexity of sampling is decreased to O(N²), enabling much more thorough sampling and faster detection. An additional characteristic of the C2PHT is its strict conditional transformation scheme, meaning that only a very small fraction of the feature space is eligible to vote; hence an even stronger suppression of correlated noise is achieved. The C2PHT allows very economical accumulator architectures to be used and, in line with the large reduction in redundant votes, greatly mitigates the burden of the peak detection process. The performance of the technique is evaluated with synthetic and real-world underwater bubble images.
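A hedged sketch in the spirit of the reduced-voting idea: a gradient-guided circle Hough accumulation in which each strong edge pixel votes for centre candidates only along its gradient direction, rather than over a full circle of candidates. The radius list and thresholds are illustrative; this is not the C2PHT itself.

```python
# Minimal sketch: circle Hough votes cast only along each edge pixel's gradient.
import numpy as np

def gradient_circle_hough(image, radii):
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    h, w = image.shape
    acc = np.zeros((len(radii), h, w))
    ys, xs = np.nonzero(mag > mag.mean() + 2 * mag.std())   # strong edge pixels only
    ux, uy = gx[ys, xs] / mag[ys, xs], gy[ys, xs] / mag[ys, xs]
    for r_idx, r in enumerate(radii):
        for sign in (+1, -1):            # the centre may lie on either side of the edge
            cx = np.round(xs + sign * r * ux).astype(int)
            cy = np.round(ys + sign * r * uy).astype(int)
            ok = (cx >= 0) & (cx < w) & (cy >= 0) & (cy < h)
            np.add.at(acc[r_idx], (cy[ok], cx[ok]), mag[ys, xs][ok])
    return acc
```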
The diffusion equation has received increasing attention in the fields of image analysis and computer vision for tasks such as image smoothing, image enhancement, feature extraction and dominant point detection. In this paper, the diffusion equation is applied to the inspection of surface-mounted devices. It is shown experimentally that diffusion-equation-based methods can yield very good discriminating features. The inspection experiments show that the correct inspection rate of the diffusion-equation-based method is very high for both training boards and test boards.
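A hedged sketch of one common diffusion-equation smoother of the kind such methods build on: a basic Perona-Malik anisotropic diffusion iteration. The conduction parameter, step size and boundary handling are illustrative assumptions, not the paper's exact scheme.

```python
# Minimal sketch: Perona-Malik anisotropic diffusion of a grayscale image.
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=30.0, lam=0.2):
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # Finite differences to the four neighbours (periodic border for brevity).
        dn = np.roll(u, 1, axis=0) - u
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # Edge-stopping conduction coefficients.
        g = lambda d: np.exp(-(d / kappa) ** 2)
        u += lam * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```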
A geometric model-driven strategy for measuring an object's dimensions in three dimensions using image processing is discussed. A laser range scanner is used to collect 3D data points from the scene. By constructing a geometric model from a CAD database or from a sample object during the setup phase, an inspection index is established according to the user's inspection requirements. Useful information about the object is recorded in the inspection index to guide image processing during the later inspection phase, when many similar objects are measured. The inspection index is generated off-line by analyzing the object's geometric features and their relationships with dimensions and other attributes. Examples of the inspection index as well as experimental results of rapid, on-line measurement are presented.
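A hedged sketch of how an "inspection index" entry might be organised as a data structure, recording which geometric features and scanner views are needed for each requested dimension. All field names are illustrative assumptions, not the paper's format.

```python
# Minimal sketch: a possible organisation of inspection-index records.
from dataclasses import dataclass, field

@dataclass
class InspectionItem:
    dimension_name: str            # e.g. "bore_diameter" (illustrative)
    nominal: float                 # nominal value taken from the CAD model
    tolerance: float               # allowed deviation
    features: list = field(default_factory=list)    # geometric features involved
    scan_views: list = field(default_factory=list)  # scanner poses that observe them

@dataclass
class InspectionIndex:
    part_id: str
    items: list = field(default_factory=list)       # one InspectionItem per dimension
```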
We propose Hierarchical Distributed Template Matching, which reduces the computational cost of template matching, while maintaining the same reliability as conventional template matching. To achieve cost reduction without loss of reliability, we first evaluate the correlation of shrunken images in order to select the maximum depth of the hierarchy. Then, for each level of hierarchy, we choose a small number of template points in the original template and build a sparse distributed template. The locations of the template points are optimized, so that they yield a distinct peak in the correlation score map. Experimental results demonstrate that our method can reduce the computational cost to less than 1/10 that of conventional hierarchical template matching. We also confirmed that the precision is 0.6 pixels.
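A hedged sketch of the conventional baseline this method improves on: plain coarse-to-fine template matching with normalized cross-correlation, locating the pattern at a shrunken scale and refining at full resolution. The level count and refinement window are illustrative; the optimised sparse (distributed) template points themselves are not reproduced here.

```python
# Minimal sketch: coarse-to-fine normalized cross-correlation matching.
import numpy as np
from skimage.feature import match_template
from skimage.transform import rescale

def coarse_to_fine_match(image, template, levels=2):
    scale = 0.5 ** levels
    small = match_template(rescale(image, scale), rescale(template, scale))
    cy, cx = np.unravel_index(np.argmax(small), small.shape)
    # Refine in a window around the coarse estimate at full resolution.
    y0, x0 = int(cy / scale), int(cx / scale)
    pad = 8
    win = image[max(0, y0 - pad): y0 + template.shape[0] + pad,
                max(0, x0 - pad): x0 + template.shape[1] + pad]
    fine = match_template(win, template)
    dy, dx = np.unravel_index(np.argmax(fine), fine.shape)
    return max(0, y0 - pad) + dy, max(0, x0 - pad) + dx
```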
In this paper, a robust-statistics-based method is proposed to implement template matching. The proposed method has two notable advantages. First, it is computationally efficient because only a very small fraction of the template pixels is used for matching. Second, it generates very high mainlobes and very low sidelobes because it accumulates only the gradient magnitudes of edges when the template is moved to the object center (signal focusing (SF) accumulation), and accumulates the gradient magnitudes of homogeneous regions when the template lies at other positions (interference avoiding (IA) accumulation). It is shown experimentally that, compared with the normalized correlation method, the SFIA method increases the DSNR by approximately 11 to 30 dB and decreases the computational cost by a factor of approximately 20 to 50.
Machine-vision-based alignment is a fundamental feature of many types of semiconductor manufacturing equipment, and normalized-correlation-based pattern finding remains commercially popular as the core of such alignment systems. Despite its strengths, normalized correlation search (NCS) alone often fails to find the correct pattern when faced with the kinds of degradations and nonlinear image variations seen in some semiconductor processes. This paper discusses the use of image processing prior to execution of the NCS algorithm as a way to improve alignment reliability.
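A hedged sketch of one common preprocessing choice of the kind discussed above: correlating gradient-magnitude images instead of raw grey levels, so that nonlinear intensity changes between the trained model and the run-time image disturb the correlation less. The choice of Sobel filtering is an illustrative assumption, not the paper's specific preprocessing.

```python
# Minimal sketch: edge-magnitude preprocessing before normalized correlation search.
import numpy as np
from scipy import ndimage
from skimage.feature import match_template

def edge_preprocessed_search(image, model):
    def edge_mag(a):
        a = a.astype(float)
        return np.hypot(ndimage.sobel(a, axis=0), ndimage.sobel(a, axis=1))
    score = match_template(edge_mag(image), edge_mag(model))
    best = np.unravel_index(np.argmax(score), score.shape)
    return best, float(score.max())          # location of the best match and its score
```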
We describe here a holographic non-destructive inspection (NDI) technology developed by Physical Optics Corporation. It is based on real-time holographic dye polymer materials and a shearographic camera, with neural network defect classification software. Holograms can be recorded in or erased from the new dye polymer material in a millisecond without wet processing, making real-time holographic NDI feasible. The shearographic NDI system, based on laser speckle interferometry, compensates for low-light conditions. Both holographic and shearographic fringes are input to the neural network system to perform automatic defect type classification.