Direct attachment of semiconductor die to circuit substrates enables the manufacture of smaller, higher-performance electronic packages than has historically been possible. In the preferred 'flip chip' assembly process, the die and the substrate connect through a set of solder bumps on the die. The pattern of these bumps must align accurately with the corresponding attachment sites on the substrate. For reasons to be discussed, direct determination of the bump pattern location is required for quality assembly. We present a new and robust method for accurately locating the solder bump pattern directly. Individual solder bumps are isolated from the background and from each other using vector correlation (or generalized Hough transform) image segmentation. A two-tier classification process aligns the sample's representation with blueprint vector models; these vectors represent the individual bumps in the test die and the blueprint. The die location is referenced through two arbitrary but convenient anchor points in the model, and the displacement vectors between each bump model (class) and the anchor points are determined. After classification, each sample bump is collapsed through two rotated vectors, namely those of the corresponding class to the two anchor points. The result yields two clusters of points whose centers are taken as the anchor points for the sample bump pattern. Correspondence of those points in the sample and blueprint spaces yields the desired location.
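The collapsing step can be sketched as follows, under a translation-only assumption (the paper also rotates the class vectors); the function name, array layout, and all parameters are illustrative:

```python
import numpy as np

def locate_anchors(bumps, classes, disp_a, disp_b):
    """Collapse every detected bump through its class's blueprint displacement
    vectors; the two resulting point clusters centre on the sample's anchor
    points.  Translation-only sketch; names and layout are illustrative.
    bumps: (N, 2) bump centres; classes: (N,) class index per bump;
    disp_a, disp_b: (K, 2) class-to-anchor displacements from the blueprint."""
    pts_a = bumps + disp_a[classes]      # each bump votes for anchor A
    pts_b = bumps + disp_b[classes]      # ... and for anchor B
    return pts_a.mean(axis=0), pts_b.mean(axis=0)
```

Averaging is the simplest cluster-centre estimate; a robust system would reject outlier votes before averaging.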
Recent manufacturing technologies require automatic inspection systems in place of human verification. This is especially true for the analysis of printed circuit boards with complex conductor patterns and fine pitches. This paper presents a methodology for the automatic inspection and evaluation of printed circuit boards. We use topological information on the conductors and insulators of the boards, incorporated in a feature graph consisting of skeletons with several types of nodes and branches, their locations, and other attributes. Inspection is performed by comparing the standard graph created from CAD data with the inspection graph of the printed circuit board. We discuss fundamental but important preprocessing of the optical image, optimum setting of the parameters needed for comparison, and a fast comparison method using variable-length inspection points.
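A drastically simplified, hypothetical slice of such graph comparison is to match only the counts of skeleton node types, ignoring branch geometry and locations entirely:

```python
from collections import Counter

def graph_signature(nodes):
    """Count skeleton node types (endpoints, T-junctions, ...) in a feature
    graph.  Node positions are carried but ignored in this first-pass check."""
    return Counter(kind for kind, _pos in nodes)

def conforms(standard_nodes, inspected_nodes):
    """The inspected board must reproduce the node-type counts of the
    standard graph built from CAD data."""
    return graph_signature(standard_nodes) == graph_signature(inspected_nodes)
```

A real comparison would go on to match node locations and branch connectivity; a count mismatch, however, already flags an open or bridged trace cheaply.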
This paper describes a SIMM (Single Inline Memory Module) inspection system called SIMMI. The system inspects the front 'C-Gap' of the SIMMs and the back 'True Position' of the solder tails at 1.07 seconds per connector, using three synchronized shuttered cameras with telecentric optics. It is a very high-speed (75 frames/second) inspection system designed from off-the-shelf hardware for inspecting SIMMs to a high degree of accuracy (< 4 micron repeatability). The system uses standard linear and area techniques to process 700 measurements/second on an EISA-based framestore hosted in a 486-based PC. The paper describes the system and the techniques used to test and debug it, which is not a trivial problem when the system is processing 75 frames/second. In particular, the paper describes the techniques used to synchronize the camera and SIMM driver mechanics, and the evolution of the lighting techniques.
Measuring the crowdedness of a public area can be very useful for preventing overcrowding in advance and for properly scheduling the frequency of services. We have developed a vision-based crowdedness measuring system for Taejon Expo '93. The system identifies human bodies using a vision technique that detects moving objects through a series of differencing processes and, in turn, estimates the distribution of humans over wide regions. To ensure robustness in a real outdoor environment, the human detection algorithm exploits three key concepts: a multiple-feature fusion approach, image sequence generation with varied time intervals, and high-level knowledge about the geometry of the scene. The entire venue is divided into several meaningful regions, and each region is further divided into several scenes for real-time analysis. Each scene is obtained from one of twenty-five CCD cameras covering the critical areas of the venue. The crowdedness analysis algorithm calculates the crowdedness of each scene and combines the results into a region-level crowdedness. The system was fully functional during the entire period of Taejon Expo '93.
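The elementary step behind such a series of differencing processes is a single thresholded frame difference; the real system additionally varies the time interval and fuses several features:

```python
import numpy as np

def motion_mask(prev_frame, curr_frame, thresh=25):
    """Moving-object mask from one frame difference.  Signed arithmetic is
    done in int16 so that darkening motion is detected as well as brightening."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > thresh
```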
This paper describes the design and performance of an automated system for inspection of multilayered plastic bags recently developed at Industrial Research Limited. The system is capable of detecting and classifying sub-millimeter defects and applying grading criteria at production rates. This system will shortly be developed further for installation in a packaging manufacturing line. Unlike other plastic web inspection systems, this system was required to discriminate between two quite distinct, and optically dissimilar, types of defect. This necessitated, in particular, careful lighting design. Furthermore, the grading process is required to accommodate seams, serial numbers and other artefacts introduced by the bag manufacturing process.
This paper describes the key elements of a system for detecting quality defects on leather surfaces. The inspection task must handle defects such as scars, mite nests, warts, open fissures, healed scars, holes, pin holes, and fat folds. Industrial detection of these defects is difficult because of the large dimensions of the leather hides (2 m × 3 m) and the small dimensions of the defects (150 micrometers × 150 micrometers). Pattern recognition approaches suffer from the fact that defects are hidden on an irregularly textured background and can hardly be seen even by human graders. We describe the methods tested for automatic classification using image processing, which include preprocessing, local feature description of texture elements, and final segmentation and grading of defects. We conclude with a statistical evaluation of the recognition error rate and an outlook on the expected industrial performance.
This paper presents an experimental system for the combination of three kinds of visual cues to aid recognition. The research is aimed at investigating the possibility of using this combination of information for scene description for the visually impaired. The cues identified as suitable are motion, shape, and color. Their combination provides a significant amount of information for recognition and description by machine vision equipment and also allows the possibility of giving the user a more complete description of their environment. Research and development in the application of machine vision to rehabilitative technologies has generally concentrated on utilizing a single visual cue. A novel method for combining techniques and technologies proven in machine vision is being explored. Work to date has concentrated on the integration of shape recognition, motion tracking, color extraction, speech synthesis, symbolic programming, and auditory imaging of colors.
Path planning for a vehicle running in a structured environment requires evaluation of the road boundaries, both for mapping the vehicle's position and for reducing the search area for obstacle detection. This paper describes a real-time system that has been developed in the framework of the EUREKA PROMETHEUS European project and is presently under test on a Mobile Laboratory (MOBLAB). The road boundaries are detected by highlighting the large homogeneous region that lies at the bottom of the image, in front of the vehicle. Edge detection, local thresholding, and morphological filtering techniques are used to define this region. Its boundaries are tracked through the sequence, relying on hypotheses of continuity of the color and shape of the road to overcome drawbacks due to shadows, intersections, and hidden boundaries. The proposed technique has been implemented on an integrated system based on a real-time imaging processor and a workstation.
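The "large homogeneous region at the bottom of the image" idea can be sketched as a region growing from a bottom-centre seed; this is a much-simplified stand-in for the paper's edge detection, local thresholding, and morphological filtering chain:

```python
import numpy as np
from collections import deque

def road_region(gray, tol=10):
    """Grow the homogeneous region touching the bottom-centre of the frame,
    where the road is assumed to lie (4-connected flood fill against a
    fixed intensity tolerance)."""
    h, w = gray.shape
    seed = (h - 1, w // 2)
    ref = int(gray[seed])
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < h and 0 <= nx < w and not mask[ny, nx]
                    and abs(int(gray[ny, nx]) - ref) <= tol):
                mask[ny, nx] = True
                queue.append((ny, nx))
    return mask
```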
In the food industry there is an ever-increasing need to control and monitor food quality. In recent years, fully automated x-ray inspection systems have been used to inspect food on-line for foreign-body contamination. These systems involve a complex integration of x-ray imaging components with state-of-the-art high-speed image processing. The quality of the x-ray image obtained by such systems is very poor compared with images obtained in other inspection processes; this makes reliable detection of very small, low-contrast defects extremely difficult. It is therefore extremely important to optimize the x-ray imaging components to give the very best image possible. In this paper we present a method of analyzing the x-ray imaging system in order to evaluate the contrast obtained when viewing small defects.
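The kind of first-order contrast analysis involved can be sketched with the Beer-Lambert law under monochromatic, scatter-free assumptions; parameter names are illustrative and the paper's treatment of the imaging chain is richer:

```python
import math

def defect_contrast(mu_food, t_food, mu_defect, t_defect):
    """Relative x-ray contrast of a small inclusion, from I = I0*exp(-mu*t).
    The defect displaces part of the food thickness along the beam path."""
    i_background = math.exp(-mu_food * t_food)
    i_defect = math.exp(-(mu_food * (t_food - t_defect) + mu_defect * t_defect))
    return (i_background - i_defect) / i_background
```

For a thin inclusion this reduces to roughly (mu_defect - mu_food) * t_defect, which is exactly why small, weakly attenuating defects yield such low contrast.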
We present here the development of a texture-like measure to aid the quantification of rock face stability using two familiar transforms in a novel combination. It is shown that the Fourier and Hough transforms together can yield accurate quantitative information relating to the texture of an image. With respect to rock faces, the textural quality of the image is a direct measure of the stability index, since the orientation, distribution, and number of fissures indicate its stability. Stability of rock faces for mining operations is currently estimated manually, prior to further excavation. Manual inspection is often undesirable due to the subjective nature of, and potential hazard to, the human inspector. This provides the motivation to develop an automated system which can survey the scene via some sensors and process the resulting data to compute a preliminary stability index before further detailed inspection and subsequent excavation.
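A hedged sketch of how the Fourier half of such a measure can expose fissure orientation (the paper's actual Fourier/Hough combination is more elaborate):

```python
import numpy as np

def dominant_orientation(img, bins=18):
    """Energy-weighted histogram of spectral angles: the angle bin holding
    the most FFT power gives the dominant frequency direction; stripe-like
    fissures in the image run perpendicular to it."""
    F = np.fft.fftshift(np.fft.fft2(img - img.mean()))   # centre DC, drop mean
    P = np.abs(F) ** 2                                   # power spectrum
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    ang = np.arctan2(ys - h // 2, xs - w // 2) % np.pi   # fold to [0, pi)
    hist, edges = np.histogram(ang.ravel(), bins=bins,
                               range=(0, np.pi), weights=P.ravel())
    i = int(hist.argmax())
    return 0.5 * (edges[i] + edges[i + 1])               # bin-centre angle
```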
This paper describes the Kiwivision Multi-Transputer Module, which has been developed at Industrial Research Limited. It is a MIMD architecture tightly coupled to external hardware compatible with the MAXbus pipeline standard. The features of Kiwivision are described, with particular emphasis on its inter-processor connectivity and data-handling abilities. An implementation of the 2D fast Fourier transform is used to illustrate the data-movement strengths of the design.
The role of multicamera machine vision systems for the inspection of complex artifacts and assemblies, as well as the monitoring of manufacturing processes, is briefly reviewed. There is a need for a vision system which allows images obtained using several different cameras to be digitized and then described in abstract symbolic terms. Representing an image in this way enables logical inferences to be made about it, prior to inter-relating it to similar data derived from other cameras and/or the same camera at a different time. A distributed computing system, intended specifically for this type of task, is described in this article. It comprises several standard computers, connected together using a conventional data network. Each of these computers controls dedicated image processing hardware attached to it, using a Prolog program. These computers are called slaves and are controlled by another computer, termed the master, which provides overall control of all slaves attached to the network. There may be as many as 32 slaves, each one able to operate up to 32 cameras. A similar network configuration could be used to control a set of image processing sub-systems, each one implemented in software. A prototype network, incorporating three computers, has been built and demonstrated by the authors, who are now developing a model manufacturing system, with the intention of demonstrating the effectiveness of the network in monitoring and controlling industrial processes.
While the view of constructive and hierarchical vision prevails, the issues of cooperation and competition among individual modules become crucial. These issues are directly related to one of the most important aspects of computer vision research: integration. A major source of difficulty in developing a consistent and systematic integration formalism is the heterogeneity of modules, of information, and of knowledge. In this paper, we exploit, through the central theme of grouping, the homogeneous characteristics of vision problem solving and propose a general framework, called Hierarchical Token Grouping, that facilitates vision problem solving by providing a consistent and systematic environment for integrating modules, cues, and knowledge in a globally coherent mechanism.
Complex object models require multiple components affixed to each other in specific, and possibly variable, geometric relationships. This paper expands upon earlier research to present a unified approach for relating the coordinate systems of components within the same model. In particular, we show that rather complex relationships, such as ball joints and geometric transformations about arbitrary axes, are no more complicated than describing the model base in terms of the camera coordinate system: they require only simple rotations and translations about the major axes. This modeling approach was then integrated with the verification module of a model-based vision system. From a single 2D image we recovered the original model and camera parameters that align the projected model edges with the image segments by solving a nonlinear least-squares system. A specific example of the theory is implemented: a lamp head is affixed to its base by a ball joint with three parameters of rotational freedom. From a wide range of initial-guess errors, the numerical system converged to the correct set of model and camera parameters. Thus, the theory of parameterized affixments and the numerical implementation that recovers these values from 2D images will aid in associated recognition tasks and in real-time tracking of complex conglomerate objects.
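The claim that a ball joint reduces to simple rotations and translations about major axes can be sketched with homogeneous transforms; the rotation order and function names here are an illustrative convention, not the paper's notation:

```python
import numpy as np

def rot(axis, angle):
    """Homogeneous rotation about a major axis ('x', 'y' or 'z')."""
    c, s = np.cos(angle), np.sin(angle)
    i, j = {"x": (1, 2), "y": (2, 0), "z": (0, 1)}[axis]
    m = np.eye(4)
    m[i, i] = c; m[j, j] = c; m[i, j] = -s; m[j, i] = s
    return m

def trans(t):
    """Homogeneous translation."""
    m = np.eye(4)
    m[:3, 3] = t
    return m

def ball_joint(offset, ax, ay, az):
    """Affixment with three rotational parameters: translate to the joint
    location, then rotate about z, y and x in turn."""
    return trans(offset) @ rot("z", az) @ rot("y", ay) @ rot("x", ax)
```

The three joint angles (ax, ay, az) are exactly the kind of affixment parameters the nonlinear least-squares solver recovers alongside the camera pose.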
This paper is concerned with the robust estimation of optical flow from time-varying images. Most existing methods for estimating image motion fall within two general classes. The gradient-based method uses a relationship between the motion of surfaces and the spatial/temporal derivatives of image brightness. The feature-matching approach examines the dynamic variation of image structures such as contours. Each motion estimation technique has its strengths and weaknesses. The goal of this paper is to devise a model that combines the feature-matching and gradient-based methods using multi-resolution images so that a more accurate optical flow field is produced. Our optical flow estimation algorithm is basically a coarse-to-fine multi-resolution scheme with iterative registration at each resolution. First, the optical flow component along the direction of the spatial gradient, i.e., the normal flow, is estimated. Based upon a confidence measure for the normal flow, which represents the accuracy of the estimate, the full flow is obtained by an iterative weighted least-squares estimation. To improve the quality of the full flow, iterative registration is applied to reduce the displaced frame difference, based on the Gaussian and Laplacian-of-Gaussian images. With the proposed fusion technique, which applies feature matching to the band-pass filtered image and the gradient-based method to the low-pass filtered image, we pursue the possibility of combining two independent optical flow estimation methods through weighted multi-constraints.
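The weighted least-squares step can be sketched for one neighbourhood at a single resolution (no pyramid, no iterative registration); the weights stand in for the confidence measure, and the array names are illustrative:

```python
import numpy as np

def full_flow(Ix, Iy, It, weights):
    """One weighted least-squares solve for the full flow (u, v) from the
    gradient constraints Ix*u + Iy*v + It = 0, one constraint per pixel."""
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    w = weights.ravel()
    ATA = A.T @ (A * w[:, None])                 # weighted normal equations
    ATb = A.T @ (b * w)
    return np.linalg.solve(ATA, ATb)             # [u, v]
```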
The design of a low-cost, high-speed, RAM-based neural network PC-expansion board using standard components is presented. The board implements an architecture similar to the WISARD architecture. The system is based upon a number of look-up tables addressed by the input patterns. In order to obtain a fast implementation, some constraints are imposed on the addressing scheme of the look-up tables. Test results have shown that the classification performance is not significantly degraded by these constraints. The hardware provides a new classification every 180 microseconds. The application of the board to image processing tasks is discussed: it has successfully been applied to the visual guidance of an evisceration robot and to handwritten character recognition. Results for these tasks are briefly presented.
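A minimal RAM-based n-tuple classifier in the spirit of WISARD can be sketched in software; this illustrates the look-up-table principle only and does not reproduce the board's constrained addressing scheme:

```python
import numpy as np

class Wisard:
    """One discriminator per class, each a bank of look-up tables addressed
    by fixed random n-tuples of binary input pixels."""

    def __init__(self, n_inputs, n_tuple=4, n_classes=2, seed=0):
        rng = np.random.default_rng(seed)
        order = rng.permutation(n_inputs)
        usable = n_inputs - n_inputs % n_tuple
        self.tuples = order[:usable].reshape(-1, n_tuple)   # random input groups
        self.pow2 = 2 ** np.arange(n_tuple)
        self.ram = [[set() for _ in self.tuples] for _ in range(n_classes)]

    def _addresses(self, x):
        return (x[self.tuples] * self.pow2).sum(axis=1)     # one address per table

    def train(self, x, label):
        for table, a in zip(self.ram[label], self._addresses(x)):
            table.add(int(a))                               # 'write a 1' at address

    def classify(self, x):
        addrs = self._addresses(x)
        scores = [sum(int(a) in t for t, a in zip(cls, addrs))
                  for cls in self.ram]
        return int(np.argmax(scores))                       # most responding tables
```

Training is a single pass of table writes and classification a single pass of table reads, which is what makes hardware look-up-table implementations of this scheme so fast.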
The paper presents a Petri net approach to modelling, monitoring and control of the behavior of an FMS cell. The cell described comprises a pick-and-place robot, a vision system, a CNC milling machine, and three conveyors. The work illustrates how block diagrams in a hierarchical structure can be used to describe events at different levels of abstraction. It focuses on fuzzy Petri nets (fuzzy logic with Petri nets) incorporating an artificial neural network (fuzzy neural Petri nets) to model and control vision system decisions and robot sequences within the FMS cell. This methodology can be used as a graphical modelling tool to monitor and control imprecise, vague and uncertain situations, and to determine the quality of the output product of an FMS cell.
A catalogue of nearly 150 different lighting and viewing techniques for industrial machine vision has been established, in the form of a set of interconnected HyperCard stacks. For each technique there are two cards: one provides notes in the form of plain text, while the other shows the optical layout. The Lighting Advisor is also linked to other stacks under development, which will eventually include details about cameras, references to the technical literature, preparing a sample for visual inspection, and controlling a 'general-purpose' lighting system.
A simple, low-cost device is described, which the authors have developed for prototyping industrial machine vision systems. The unit provides facilities for controlling the following devices via a single serial (RS232) port connected to a host computer: (a) twelve ON/OFF mains devices (lamps, laser stripe generator, pattern projector, etc.); (b) four ON/OFF pneumatic valves (mounted on board the hardware module); (c) one 8-way video multiplexor; (d) six programmable-speed serial (RS232) communication ports; (e) six opto-isolated 8-way parallel I/O ports. Using this unit, it is possible for software running on the host computer, containing only the most rudimentary I/O facilities, to operate a range of electro-mechanical devices. For example, a HyperCard program can switch lamps and pneumatic air lines ON/OFF, control the movements of an (X, Y, θ)-table, and select different video cameras. These electro-mechanical devices form part of a flexible inspection cell, which the authors have built recently. This cell is being used to study the inspection of low-volume batch products, without the need for detailed instructions. The interface module has also been used to connect an image processing package, based on the Prolog programming language, to a gantry robot; this system plays dominoes against a human opponent.
The flexible inspection cell comprises computer-controlled lights and cameras, located around an (X, Y, θ)-table. Image processing is currently performed in two small hardware units. However, a more versatile software system for image processing is under development and will be integrated with the remainder of the cell, in the near future. The authors' inspection cell is intended as a research tool for analyzing the problems encountered when inspecting small-batch and complex products. This article draws together the developments reported in several other contributions to this conference.
The use of color as a basis for segmenting images is attractive for a wide variety of industrial inspection applications, especially in the manufacturing of domestic goods, food, pharmaceuticals, toiletries and electronics. Colors are defined by human beings, not by formulae or computer programs. Moreover, no two people have an identical view of what a color set such as 'canary yellow' is. The article argues that teaching by showing is more relevant than the accepted methods of color science in the design of factory-floor vision systems. Fast hardware for color recognition has been available for several years but has not yet received universal acceptance. This article explains how this equipment can be used in conjunction with symbolic processing software based on the Artificial Intelligence language Prolog. Using this hardware-software system, a programmer is able to express ideas about colors in a natural way. The concepts of color set union, intersection, generalization and interpolation are all discussed.
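Teaching by showing can be sketched as follows, treating a colour class as the set of quantised RGB cells observed in example pixels, so that union and intersection follow from ordinary set operations; this is a software sketch of the idea only, not the cited recognition hardware:

```python
class ColourClass:
    """A taught colour class: the set of quantised RGB cells seen in
    example pixels."""

    def __init__(self, example_pixels=(), q=32):
        self.q = q                       # quantisation step per channel
        self.cells = {self._cell(p) for p in example_pixels}

    def _cell(self, pixel):
        r, g, b = pixel
        return (r // self.q, g // self.q, b // self.q)

    def contains(self, pixel):
        return self._cell(pixel) in self.cells

    def union(self, other):
        out = ColourClass(q=self.q)
        out.cells = self.cells | other.cells
        return out

    def intersection(self, other):
        out = ColourClass(q=self.q)
        out.cells = self.cells & other.cells
        return out
```

Generalization and interpolation would correspond to dilating the taught cell set and filling gaps between taught cells, respectively.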
This paper describes a general methodology for designing and testing a defect classification system for uniform web materials; a very difficult case study is used to illustrate the specific algorithms. We show that proper selection of the sensing strategy can greatly simplify the inspection problem and increase the efficacy of the inspection system. This is demonstrated by comparing the performance of two configurations of the inspection system, one incorporating a smart sensor and the other a conventional sensor.
An application of machine vision, incorporating neural networks, which aims to fully automate real-time radiographic inspection of the welding process is described. The current methodology comprises two distinct stages: segmentation of the weld from the background content of the radiographic image, and segmentation of suspect defect areas inside the weld region itself. In the first stage, a backpropagation neural network is employed to adaptively and accurately segment the weld region from a given image. The network is trained with a single image showing a typical weld in the run to be inspected, coupled with a very simple schematic weld 'template'. The second processing stage utilizes a further backpropagation network, trained on a set of image data previously segmented by a conventional adaptive-threshold method. It is shown that the two techniques can be combined to fully segment radiographic weld images.
The motivation for this work is the desire to maximize the use of visual inspection defect data by moving from quality insurance to quality assurance through active feedback into a process control system. The objective of such machine vision integration is to minimize the response time from defect detection to fault correction. The authors have previously published papers discussing the overall requirements and elements of such a closed loop system. This paper concentrates on identifying methods for establishing the causes of the visible product defects in the manufacturing process. These methods will form the diagnostic element of the overall system. The inspection data provide the symptoms which initiate and direct the diagnostic process. A qualitative model of the manufacturing process is used to generate the diagnostic hypotheses. A systems approach is required to discriminate among these competing plausible hypotheses. The approach adopted tries to relate the physical nature of how the defects in the product are actually formed with the physical manufacturing process. The various manufacturing machines are modeled in terms of their structure, behavior and function. The aim is to implicitly include the causal relationships between the visible physical defects and their causes in the model. Such relationships are normally developed based on experience in maintaining and troubleshooting the manufacturing process. Such experiential-based approaches suffer all the problems associated with expert systems for large scale complex manufacturing plants. Knowledge representation is crucial for the success of this approach. This approach has been developed for discrete event manufacturing processes and attribute defect inspection data.
Machine vision systems routinely utilize structured light techniques for identifying the shapes of defects on the objects under inspection. The basic principle of the method is that any height difference from a reference plane causes a shift in the projected line of light, either left or right and up or down, in the image plane of the recording camera. A height difference due to a defect on an otherwise regular surface will result in a deformed light pattern corresponding to the dimensions of the defect. Moire patterns generated from this deformed light pattern can quantify the defect's size, depth, and shape. Existing machine vision systems use these techniques for the inspection of flat surfaces. Curved-surface inspection, although significant, remains more or less unexplored. This paper presents the application of a TDI (Time Delay and Integration) camera for defect visualization on curved objects. The TDI operation and some applications of high-speed TDI imaging will also be discussed.
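The shift-to-height relation underlying the structured-light principle can be sketched in a few lines. This is a minimal triangulation illustration; the pixel size, projection angle, and function names are assumptions, not values from the paper:

```python
import math

def height_from_shift(shift_px, pixel_size_mm, projection_angle_deg):
    """Recover a height difference from the lateral shift of a projected
    light line (simple triangulation sketch)."""
    shift_mm = shift_px * pixel_size_mm
    # With the line projected at angle theta from the surface normal and the
    # camera viewing along the normal, a height change h displaces the line
    # by h * tan(theta) in the image plane.
    return shift_mm / math.tan(math.radians(projection_angle_deg))

# A 12-pixel shift with 0.05 mm pixels and a 45-degree projector
# corresponds to a 0.6 mm height difference.
print(round(height_from_shift(12, 0.05, 45.0), 3))  # 0.6
```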
A Depth-from-Defocus method named STM was presented recently for stationary objects. Here we extend STM to continuous focusing of moving objects; the method is named Continuous STM, or CSTM. Focusing is done by moving the lens with respect to the image detector. Two variations of CSTM, CSTM1 and CSTM2, are presented. CSTM1 is a straightforward extension of STM. It involves calibration of the camera for a number (about 6 in our implementation) of discrete lens positions. In CSTM2 the camera is calibrated for only one lens position; the calibration data corresponding to other lens positions are obtained by transforming the data of the one lens position for which the camera is calibrated. In the experimental results presented here, the focusing error in lens position was about 2.25% for CSTM1 and about 3% for CSTM2.
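The calibration-table lookup at the heart of such focusing can be sketched as follows. The table values and names are hypothetical; real CSTM calibration data would come from the camera itself:

```python
import bisect

# Hypothetical calibration table for one lens position: each entry maps a
# measured defocus quantity (from STM's two images) to the lens displacement
# (mm) needed to bring the object into focus.
CALIB = [(-0.8, -1.2), (-0.4, -0.6), (0.0, 0.0), (0.4, 0.6), (0.8, 1.2)]

def lens_correction(ratio):
    """Linearly interpolate the focusing correction from the table,
    clamping outside the calibrated range."""
    xs = [r for r, _ in CALIB]
    ys = [d for _, d in CALIB]
    i = bisect.bisect_left(xs, ratio)
    if i == 0:
        return ys[0]
    if i == len(xs):
        return ys[-1]
    x0, x1, y0, y1 = xs[i - 1], xs[i], ys[i - 1], ys[i]
    return y0 + (y1 - y0) * (ratio - x0) / (x1 - x0)

print(lens_correction(0.2))  # halfway between 0.0 and 0.6 -> 0.3
```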
Illumination-invariant image processing is an extension of the classical technique of homomorphic filtering using a logarithmic point transformation. In this paper, traditional approaches to illumination-invariant processing are briefly reviewed and then extended using newer image processing techniques. Relevant hardware considerations are also discussed including the number of bits per pixel required for digitization, minimizing the dynamic range of the data for image processing, and camera requirements. Three applications using illumination-invariant processing techniques are also provided.
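The classical homomorphic scheme the paper extends can be sketched on a single scan line. This is a minimal illustration; the window size, data, and names are assumptions:

```python
import math

def illumination_invariant(row, win=3):
    """Homomorphic filtering on one scan line: a log point transform turns
    multiplicative illumination into an additive bias, a moving-average
    high-pass removes the slowly varying bias, and exp() maps back."""
    logs = [math.log(v) for v in row]
    half = win // 2
    out = []
    for i in range(len(logs)):
        lo, hi = max(0, i - half), min(len(logs), i + half + 1)
        local_mean = sum(logs[lo:hi]) / (hi - lo)
        out.append(math.exp(logs[i] - local_mean))
    return out

# The same reflectance pattern under 1x and 4x illumination gives the same
# filtered output, illustrating the illumination invariance.
pattern = [10, 20, 10, 20, 10, 20]
bright = [4 * v for v in pattern]
a = illumination_invariant(pattern)
b = illumination_invariant(bright)
print(all(abs(x - y) < 1e-9 for x, y in zip(a, b)))  # True
```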
This paper discusses very simple but effective one-dimensional morphological techniques for identifying the primary and secondary peak locations associated with light patterns reflected from glass surfaces. A common optical technique for measuring glass thickness and related properties is to observe light reflected from the glass surfaces. Two reflections can be observed when an appropriate structured light source is used to illuminate a glass surface: a very bright primary reflection from the front surface, along with a much fainter secondary reflection from the back surface. The secondary reflection is difficult to detect reliably given the large difference in magnitude between the two peaks, the presence of noise, and the varying amounts of overlap between the two peaks that can occur. The methods described in the paper have been implemented successfully for two vision applications, using images acquired with standard matrix and linear cameras. The signal is preprocessed using one-dimensional morphological and linear methods to normalize the background and remove noise. Further morphological operations are performed to identify the peaks associated with the primary and secondary reflections.
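One core morphological idea here, removing the background with an opening so that both narrow peaks stand out regardless of their relative magnitude (a white top-hat), can be sketched in one dimension. The structuring-element size and signal below are illustrative assumptions:

```python
def erode(sig, k):
    h = k // 2
    return [min(sig[max(0, i - h):i + h + 1]) for i in range(len(sig))]

def dilate(sig, k):
    h = k // 2
    return [max(sig[max(0, i - h):i + h + 1]) for i in range(len(sig))]

def top_hat(sig, k):
    """White top-hat: signal minus its opening. Peaks narrower than the
    structuring element survive; the background is removed."""
    opened = dilate(erode(sig, k), k)
    return [s - o for s, o in zip(sig, opened)]

# A bright primary peak at index 3 and a faint secondary peak at index 8
# on a flat background of 1; both survive the top-hat.
sig = [1, 1, 2, 9, 2, 1, 1, 1, 3, 1, 1]
th = top_hat(sig, 5)
peaks = [i for i, v in enumerate(th) if v > 1]
print(peaks)  # [3, 8]
```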
The quest for ever-increasing productivity in a modern manufacturing environment has spurred the application of computer technology at an ever-increasing pace. Due in part to the desire to have these applications perform in a real-time, closed-loop control environment, many applications have dictated the incorporation of dedicated, embedded, hardwired processors for high-data-rate tasks, relegating the general computational capabilities of a host processor (typically PC-based) to supervisory tasks or low-data-rate acquisition and analysis. Manufacturers are fast approaching the natural limits of these traditional architectures (driven more by cost issues than technological barriers) and are seeking alternatives that will increase the scope of applicability of these technologies, as well as provide increasing flexibility in applying their manufacturing resources to a wider array of tasks. In this regard a truly independent, yet tightly coupled, multiprocessing architecture offers the greatest potential for reducing the barriers to high-data-rate acquisition (digital image processing) based closed-loop manufacturing systems. This paper presents one such alternative architecture applied in a Video Feedback Closed Loop (VFCL) manufacturing environment. The author presents the underlying theoretical basis for such systems, in addition to real-world examples of the implementation of several VFCL-enabled systems and the benefits achieved through their adoption.
In this paper we present an overview of the real-time imaging effort at the Center for Interfacial Engineering. Our approach to real-time imaging in the context of interfacial engineering is presented first. We distinguish between visualization and analysis of interfacial processes and products. Novel uses for real-time imaging are mentioned, including real-time analysis of images from the newest near-field microscopes. The potential of real-time image analysis applied to the understanding of process evolution is discussed next, with an example pertaining to the coating industry. Two examples of current applications of real-time image processing are then presented. The first is a detailed discussion of the use of normalized cross-correlation to stabilize the video stream from a camera subject to vibration; a procedure for maximizing the precision of the image processing operations within the fixed 8-bit hardware architecture is presented. The second example pertains to the growth of thin films via Chemical Beam Epitaxy and the analysis of RHEED patterns, which aims at optimizing the growth process.
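Normalized cross-correlation stabilization of this kind can be sketched on a single scan line; the paper's 2D, fixed-point 8-bit implementation is more involved, and the names and data here are illustrative:

```python
import math

def ncc(a, b):
    """Normalized cross-correlation of two equal-length patches."""
    ma = sum(a) / len(a)
    mb = sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) *
                    sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0

def best_shift(template, line, max_shift):
    """Slide the template over a scan line; the best-scoring offset is the
    frame-to-frame displacement used to re-register (stabilize) the image."""
    best, best_s = -2.0, 0
    for s in range(max_shift + 1):
        patch = line[s:s + len(template)]
        score = ncc(template, patch)
        if score > best:
            best, best_s = score, s
    return best_s

template = [3, 8, 4, 1]
line = [1, 1, 3, 8, 4, 1, 1, 1]   # template shifted right by 2
print(best_shift(template, line, 4))  # 2
```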
The complex log polar transform is implemented on a multiresolution foveating sensor. The foveating sensor is a new device that can be programmed to capture an image with variable-size pixels (super pixels) at very high speed, providing data reduction at the sensor stage. A structured scanning pattern is suggested that approximates the log polar mapping, and an algorithm is presented that describes how to group the scanning super pixels into the transform. Simulations of the approximate log polar transform show that changes of scale (about 4:1) and rotation (0 to 360 degrees) of an object in the input image are converted into, respectively, horizontal and cyclically vertical shifts in an output image (a computational map). Therefore, the task of pattern recognition is greatly simplified and can be performed on the computational map by correlation. Finally, the sensitivity of the placement of the scanning pattern to the object, or centroid mismatch, is discussed.
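The key property, that scaling about the center becomes a pure shift in the log-r coordinate while rotation shifts only theta, can be checked directly. A minimal sketch with illustrative names:

```python
import math

def log_polar(x, y, cx, cy):
    """Map an image point to (log r, theta) about the center (cx, cy)."""
    dx, dy = x - cx, y - cy
    return math.log(math.hypot(dx, dy)), math.atan2(dy, dx)

# Scaling a point by 2 about the center shifts log r by log 2 and leaves
# theta unchanged; a rotation would shift theta only.
u1, v1 = log_polar(30, 40, 0, 0)
u2, v2 = log_polar(60, 80, 0, 0)
print(round(u2 - u1, 6))  # ~ 0.693147 (= log 2)
```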
Time delay and integration (TDI) line scan cameras generate continuous, high resolution images of surfaces moving at very high speeds and are ideally suited to continuous surface inspection. However, continuous inspection applications that use TDI line scan cameras present formidable challenges to machine vision processors. For instance, line scan images are very wide and flow continuously from the camera with no frame breaks. These infinitely long images must undergo complex 2D processing in real-time. Multiple camera outputs must be elegantly managed and processed concurrently. Frame-based vision processors have limited ability to process continuous line scan video. This paper describes a new machine vision processor based on a novel hardware architecture that has been specifically designed for real-time processing of continuous line scan images. Pipelined and parallel, this architecture uses a series of modular processing elements to continuously process very wide, infinitely long line scan images.
A dataflow environment based on a client/server approach, called WiT, enables dataflow graphs to be executed efficiently with little overhead. Data tokens are managed by reference and reside on servers until either data is requested for viewing or required by another server. An enhanced fire-on-any behavior greatly simplifies the design of many simple graph constructs such as multiplexors or crossbars which are overly complicated when implemented with classical dataflow constraints. Sync tokens are used to accommodate the need for synchronizing data, especially useful when controlling hardware. A hierarchical scheduler maintains execution sequence in a logical progression across multiple subgraphs to provide a server an opportunity to generate well structured standalone code suitable for real-time target hosts. An example of WiT using hundreds of nodes and links to model Datacube devices for a realistic application is presented. The use of hierarchical operators serves to reduce such a complex application to a manageable level.
The following list is a compilation of observations, comments, suggestions, etc. based upon the authors' direct and their colleagues' experiences. It is offered in a light-hearted manner but encapsulates some important lessons that we have learned but which are unfortunately not universally acknowledged throughout the industry. We hope it will bring enlightenment and promote discussion among our colleagues. By its very nature, this list is dynamic and additions to it are always welcome. Readers who have points to add to this list are invited to contact the authors.
This paper presents a fundamentally new way to carry out many standard image processing operations. In comparison with conventional hardware-based and software-based approaches, SKIPSM (Separated-Kernel Image Processing using Finite State Machines) allows implementation at higher speeds and/or lower hardware cost. The key features of SKIPSM are (1) the separation of a large class of neighborhood image processing operations (generally considered not to be separable) into a row operation followed by a column operation, (2) the formulation of these row and column operations in a form compatible with pipelined operation, (3) the implementation of the resulting operations as simple finite-state machines, and (4) the automated generation of the finite-state machine configuration data. Speed increases and/or neighborhood size increases by factors of 100 or more are achieved using conventional pipelined hardware in this new way. Alternatively, inexpensive off-the-shelf 'chips' can be configured to carry out the same operations as conventional hardware. Corresponding 'speedups' are achieved in software-based implementations. Furthermore, it is often possible to use SKIPSM to carry out 10 or more different image processing operations simultaneously, with no additional processing steps or hardware.
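The row-then-column finite-state-machine idea can be illustrated in software with a 3x3 binary erosion. This is a reference sketch, not the paper's pipelined hardware; note that the output is shifted to the window's trailing corner, as is natural for a streaming machine:

```python
def row_pass(img, k=3):
    """Row FSM: the state is the current run length of 1s; emit 1 once the
    run reaches k (i.e., the k pixels ending here are all 1)."""
    out = []
    for row in img:
        state, orow = 0, []
        for p in row:
            state = state + 1 if p else 0
            orow.append(1 if state >= k else 0)
        out.append(orow)
    return out

def col_pass(img, k=3):
    """Column FSM: one state per column, updated as each row streams by."""
    states = [0] * len(img[0])
    out = []
    for row in img:
        orow = []
        for c, p in enumerate(row):
            states[c] = states[c] + 1 if p else 0
            orow.append(1 if states[c] >= k else 0)
        out.append(orow)
    return out

# 3x3 erosion of a 4x4 block of 1s inside a 6x6 image; the row-then-column
# FSM passes need no 2D neighborhood access at all.
img = [[1 if 1 <= r <= 4 and 1 <= c <= 4 else 0 for c in range(6)]
       for r in range(6)]
eroded = col_pass(row_pass(img))
print(sum(map(sum, eroded)))  # 4 pixels survive (the eroded 2x2 core)
```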
This paper describes the application of SKIPSM (Separated-Kernel Image Processing using Finite State Machines) to binary morphology. In comparison with conventional hardware-based and software-based approaches, SKIPSM allows implementation at higher speeds and/or lower hardware cost. The key theoretical developments upon which this improved performance is based are the separation of 2D binary morphological image processing operations into a row operation followed by a column operation, the formulation of these row and column operations in a form compatible with pipelined operation, the implementation of the resulting operations as simple finite-state machines, and the automated generation of the finite-state machine configuration data. Some features of SKIPSM, as applied to binary morphology, are as follows: (1) The structuring elements (SEs) can be large (25 X 25 and larger) and arbitrary (with 'holes' and nonconvex shapes). (2) All types of morphology operations can be performed. (3) Multiple related or unrelated SEs can be applied simultaneously in a single pipeline pass. (4) Speed increases and/or neighborhood size increases by factors of 100 or more can be achieved. (5) Corresponding 'speedups' can be achieved in software-based implementations. (6) Inexpensive off-the-shelf 'chips' can be configured to carry out the same operations as expensive conventional hardware. (7) The user specifies the SE or set of simultaneous SEs; all other steps are automated. This paper includes some simple examples of the results and gives implementation guidelines based on SE size and shape.
The principles of SKIPSM (Separated-Kernel Image Processing using Finite State Machines), a powerful new way to implement many standard image processing operations, are presented here and in a group of companion papers. This paper describes the application of SKIPSM to the computation of the Grassfire Transform (GT), the mapping of a binary image into a grey-level image in such a way that the output grey level of each interior pixel of each individual blob is proportional to the distance of that pixel from the blob boundary. Distance can be defined in terms of various norms: Euclidean distance, elliptical distance, 'boxcar' distance, etc. While potentially very useful, the GT has seen limited application because of the many computational steps required to calculate it. In comparison with conventional hardware-based and software-based approaches, SKIPSM allows implementation of the GT at higher speeds and/or lower hardware cost. The key developments upon which this improved performance is based are (1) the separation of the 2D binary erosions on which the GT is based into row operations followed by column operations, (2) the formulation of these row and column operations in a form compatible with pipelined operation, (3) the implementation of the resulting operations as simple finite-state machines, (4) the automated generation of the finite-state machine configuration data for structuring elements (SEs), and (5) the simultaneous application of all these nested SEs in a single pipeline pass. Some key features of SKIPSM, as applied to the GT, are listed below: (1) Because the SEs can be large and arbitrary, any distance measure can be used; there is no penalty involved in using true circles or ellipses rather than the octagons or squares resulting from sequential application of 3 X 3 SEs. (2) The simultaneous application of six circular erosion stages (SEs of size 3 X 3, 5 X 5, ..., 13 X 13) has already been demonstrated; eight or more simultaneous circular erosion stages may be possible (sizes 3 X 3, 5 X 5, ..., 17 X 17, ...). (3) The user specifies the SE or SEs; all other steps are automated. These results can be achieved using conventional pipelined hardware in this new way. Alternatively, inexpensive off-the-shelf 'chips' can be configured to carry out the same operations as conventional image processing hardware. Corresponding 'speedups' are achieved in software-based implementations.
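The nested-erosion view of the Grassfire Transform can be sketched with a sequential reference implementation; SKIPSM applies the stages concurrently in one pass, and the stage count and 'boxcar'-style 3x3 SE here are illustrative:

```python
def erode3x3(img):
    """Binary erosion with a full 3x3 SE (pixels beyond the border are 0)."""
    h, w = len(img), len(img[0])
    def px(r, c):
        return img[r][c] if 0 <= r < h and 0 <= c < w else 0
    return [[1 if all(px(r + dr, c + dc) for dr in (-1, 0, 1)
                                          for dc in (-1, 0, 1)) else 0
             for c in range(w)] for r in range(h)]

def grassfire(img, stages=6):
    """Sum of nested erosions: each pixel's grey level counts how many
    erosion stages it survives, i.e., its 'boxcar' distance from the blob
    boundary. (This reference version runs the stages sequentially for
    clarity.)"""
    out = [[0] * len(img[0]) for _ in img]
    cur = img
    for _ in range(stages):
        for r, row in enumerate(cur):
            for c, v in enumerate(row):
                out[r][c] += v
        cur = erode3x3(cur)
    return out

blob = [[1] * 5 for _ in range(5)]
gt = grassfire(blob)
print(gt[2][2], gt[0][0])  # center 3, corner 1
```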
An overview of SKIPSM (Separated-Kernel Image Processing using Finite State Machines), a powerful new way to implement many standard image processing operations, is presented in a companion paper. Other applications are presented in four other companion papers. In comparison with conventional hardware-based and software-based approaches, SKIPSM allows implementation at higher speeds and/or lower hardware cost. The key theoretical developments upon which this improved performance is based are the separation of 2-D binary image processing operations into a row operation followed by a column operation, the formulation of these row and column operations in a form compatible with pipelined operation, the implementation of the resulting operations as simple finite-state machines, and the automated generation of the finite-state machine configuration data. This paper presents a general method for carrying out binary template matching, which is useful for image analysis in general and automated visual inspection and quality control in particular. Some key features of SKIPSM, as applied to binary template matching, are as follows: (1) Binary template matching with large, arbitrary templates can be implemented; templates up to 35x35 and even larger are readily applied in a single pipelined pass. (2) Multiple templates can be applied simultaneously in a single pass. (3) The user specifies the template or templates; all other steps can be automated. Speed increases and/or neighborhood size increases by factors of 100 or more can be achieved using conventional pipelined hardware in this new way. Alternatively, inexpensive off-the-shelf "chips" can be configured to carry out the same operations as more expensive conventional image processing hardware. Corresponding "speedups" are achieved in software-based implementations. This paper includes some simple examples of the results and gives implementation feasibility guidelines.
KEYWORDS: image processing, binary template matching, real-time, implementations, finite-state machines, inspection
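Plain exhaustive binary template matching, the operation SKIPSM accelerates, can be sketched as follows. This is a reference version for clarity, not the finite-state-machine formulation; the data are illustrative:

```python
def match_template(img, tpl):
    """Exhaustive binary template matching: return the (row, col) offsets
    where the template matches the image exactly."""
    H, W = len(img), len(img[0])
    h, w = len(tpl), len(tpl[0])
    hits = []
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            if all(img[r + i][c + j] == tpl[i][j]
                   for i in range(h) for j in range(w)):
                hits.append((r, c))
    return hits

tpl = [[1, 0],
       [0, 1]]
img = [[0, 1, 0, 0],
       [0, 0, 1, 0],
       [1, 0, 0, 0],
       [0, 1, 0, 0]]
print(match_template(img, tpl))  # [(0, 1), (2, 0)]
```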
An overview of SKIPSM (Separated-Kernel Image Processing using Finite State Machines), a powerful new way to implement many standard image processing operations, is presented in two companion papers. This paper describes the application of SKIPSM to grey-level morphology, which involves, in some cases, the reformulation of the grey-level morphology problem as a set of binary morphology operations; the separation of 2-D morphological operations into a row operation followed by a column operation; the formulation of these row and column operations in a form compatible with pipelined operation; the implementation of the resulting operations as simple finite-state machines; and the automated generation of the finite-state machine configuration data. Grey-level morphology presents some difficulties to the SKIPSM paradigm having to do with word length. In spite of this, some very useful results can be obtained. Some key features of SKIPSM, as applied to grey-level morphology, are as follows: (1) There is a tradeoff between structuring element (SE) size and number of grey levels. (2) The SEs can be arbitrary. (3) With currently available components, SEs up to 5x5 and larger can be obtained. (4) In certain special cases, SEs up to 9x9 and larger can be obtained. (5) Multiple SEs can be applied simultaneously in a single pipeline pass. (6) The user specifies the SE or SEs; all other steps can be automated. This paper includes some simple examples of the results and gives implementation feasibility guidelines based on SE size and number of grey levels. The limitations of SKIPSM in this application all relate to the capabilities of the available RAM chips. As chip capabilities expand, larger SE sizes and greater numbers of grey levels will become feasible.
KEYWORDS: image processing, separability, real time, implementations, finite-state machines, grey-level morphology
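The reformulation of flat grey-level morphology as a set of binary operations (threshold decomposition) can be checked in one dimension; the names and data below are illustrative:

```python
def threshold_sets(sig, levels):
    """Decompose a grey-level signal into binary slices: slice t is 1
    wherever the signal is >= t."""
    return [[1 if v >= t else 0 for v in sig] for t in range(1, levels + 1)]

def binary_erode_1d(b, k=3):
    h = k // 2
    n = len(b)
    return [1 if all(b[j] for j in range(max(0, i - h), min(n, i + h + 1)))
            else 0 for i in range(n)]

def grey_erode_via_binary(sig, levels, k=3):
    """Flat grey-level erosion rebuilt by stacking eroded binary slices,
    the reformulation that lets binary machinery handle grey images."""
    slices = [binary_erode_1d(s, k) for s in threshold_sets(sig, levels)]
    return [sum(sl[i] for sl in slices) for i in range(len(sig))]

sig = [2, 3, 1, 4, 2]
direct = [min(sig[max(0, i - 1):i + 2]) for i in range(len(sig))]
print(grey_erode_via_binary(sig, levels=4) == direct)  # True
```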
An overview of SKIPSM (Separated-Kernel Image Processing using Finite State Machines) and some of its applications are presented in a set of companion papers. This paper describes the application of SKIPSM to certain global image processing operations that are normally considered to be difficult or impossible to perform in a pipelined configuration. These expanded capabilities for pipelined systems are based on the following key theoretical developments: the separation of certain 2-D image processing operations into a row operation followed by a column operation; the formulation of these row and column operations in a form compatible with pipelined operation; the implementation of the resulting operations as simple finite-state machines; and the automated generation of the finite-state machine configuration data. The operations discussed in this paper are listed below; many other operations are also possible. (1) Column, row, and area summation, either over whole images or over sub-regions. (2) Generation of standard images, such as grey-level wedges with various repeat cycles and directions. (3) Blob fill and patterned blob fill with arbitrary binary or grey-level texture patterns. (4) Binary run-length encoding on the rows or columns of an image. (5) Multi-level run-length encoding on the rows or columns of an image. Speed increases and/or neighborhood size increases by factors of 100 or more can be achieved using conventional pipelined hardware in this new way. Alternatively, inexpensive off-the-shelf "chips" can be configured to carry out the same operations as conventional real-time image processing hardware. Corresponding "speedups" are achieved when the SKIPSM approach is implemented in software.
KEYWORDS: image processing, real time, implementations, finite-state machines, global, run-length encoding
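The per-pixel state update behind row-wise run-length encoding can be sketched directly. This is a software reference, not the pipelined machine; the data are illustrative:

```python
def rle_rows(img):
    """Binary run-length encoding of each row with a two-variable state
    (current value, current run length), the kind of per-pixel update a
    streaming row machine can perform."""
    encoded = []
    for row in img:
        runs, val, length = [], row[0], 0
        for p in row:
            if p == val:
                length += 1
            else:
                runs.append((val, length))
                val, length = p, 1
        runs.append((val, length))
        encoded.append(runs)
    return encoded

img = [[0, 0, 1, 1, 1, 0],
       [1, 1, 1, 1, 0, 0]]
print(rle_rows(img))
# [[(0, 2), (1, 3), (0, 1)], [(1, 4), (0, 2)]]
```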
The World Wide Web initiative has provided a means of delivering hypertext- and multimedia-based information across the whole Internet. Many applications have been developed on such http servers. At Cardiff we have developed an http hypertext-based multimedia server, the Cardiff Information Server, using the widely available Mosaic system. The server provides a variety of information, ranging from teaching modules, on-line documentation, and timetables for departmental activities to more light-hearted hobby interests. One important and novel development of the server has been the provision of courseware facilities, ranging from on-line lecture notes, exercises, and their solutions to more interactive teaching packages. A variety of disciplines have benefitted, notably Computer Vision and Image Processing, but also C programming, X Windows, Computer Graphics, and Parallel Computing. This paper addresses the issues of the implementation of the Computer Vision and Image Processing packages, the advantages gained from using a hypertext-based system, and practical experiences of using the packages in a class environment. The paper addresses how best to provide information in such a hypertext-based system and how interactive image processing packages can be developed and integrated into courseware. The suite of tools developed facilitates a flexible and powerful courseware package that has proved popular in the classroom and over the Internet. The paper also details many future developments we see as possible. One of the key points raised in the paper is that the hypertext markup language (HTML) used by Mosaic is extremely powerful and yet relatively straightforward to use. It is also possible to link in Unix calls so that programs and shells can be executed. This provides a powerful suite of utilities that can be exploited to develop many packages.
In this study, the HAusdorff-Voronoi NETwork (HAVNET) developed at the UMR Smart Engineering Systems Lab is tested on the recognition of mounted circuit components commonly used in printed circuit board assembly systems. The automated visual inspection system used consists of a CCD camera, neural-network-based image processing software, and a data acquisition card connected to a PC. The experiments are run in the Smart Engineering Systems Lab in the Engineering Management Dept. of the University of Missouri-Rolla. The performance analysis shows that the vision system is capable of recognizing different components under uncontrolled lighting conditions without being affected by rotation or scale differences. The results obtained are promising, and the system can be used in real manufacturing environments. Currently the system is being customized for a specific manufacturing application.