Automatic industrial visual inspection systems require high speed video output and the ability to discern fine detail under an extreme range of illumination conditions. In particular, the field of robotic vision requires high speed readout rates in order to process very large volumes of data in real time. A new CCD image sensor technology (DYNASENSOR™ CCD technology) has been developed which provides a very wide dynamic range and can discern fine spatial detail at light contrast ratios of greater than a million. This technology has been applied to the realization of linear and area high speed image sensor arrays. Further, these image sensors do not exhibit saturation effects and are free of blooming even at extremely high illumination levels. The basic high speed photoelement will be described and its theory of operation will be presented. For high speed this photoelement can operate in the conductive or integration mode. The transient analysis of the device will be described. This photoelement, which can be used to form linear arrays, will be compared to a conventional photodiode operating in the integration mode. Architectures of high speed linear image sensor arrays will be discussed. Fabricated silicon high speed low noise linear image sensors of various lengths (128x1 and 1024x1) which employ the DYNASENSOR CCD photoelement, as well as a random access array, will be described. These arrays are low noise devices, with 50 to 150 noise equivalent electrons at room temperature. Effective horizontal video data rates of 250 to 400 MHz can be achieved if the detector is configured in a linear tapped architecture. The basic photoelement, which makes use of an optimized ion implanted doped profiled channel region, can detect variations in light intensity on an object of over seven orders of magnitude. This is 10³ to 10⁴ times better than any reported CCD image sensor array. The typical noise equivalent power of these arrays is less than 10⁻¹⁰ W/cm² at a wavelength of 0.632 μm, which is ideally suited for industrial applications.
Many machine vision systems use cameras as measuring devices, for example to determine the geometry of an object, and hence the resolution of the camera and any distortions in the image are very important. In this paper, the factors creating distortion and limiting resolution in a typical CCD imaging system are listed, and a method of calibration for some of these factors is outlined. Two techniques for increasing camera resolution by 'jittering' (taking multiple images displaced by a known sub-pixel amount) are described and some initial results are presented.
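As an illustration of the second technique, the sketch below shows one way 'jittering' might be exploited; it is an assumption about the combining step, not the authors' implementation. Four images shifted by known half-pixel offsets are interleaved into one grid of twice the sampling density (names and offsets are hypothetical):

```python
import numpy as np

def interleave_jittered(images, offsets, factor=2):
    """Combine factor**2 images, each displaced by a known
    sub-pixel offset (expressed in output-grid units of
    1/factor pixel), into one image with factor times the
    sampling density in each direction."""
    h, w = images[0].shape
    out = np.zeros((h * factor, w * factor), dtype=float)
    for img, (dy, dx) in zip(images, offsets):
        out[dy::factor, dx::factor] = img
    return out

# Four images shifted by half a pixel in y, x, and both;
# offsets are in output-grid (half-pixel) units.
imgs = [np.random.rand(64, 64) for _ in range(4)]
hires = interleave_jittered(imgs, [(0, 0), (0, 1), (1, 0), (1, 1)])
```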
A method for calculating sensor resolution for a variety of robotic applications is presented. This method is based on mapping resolution requirements specified in the "task space" into the "sensor space." In general, a solution is not unique, and a criterion must be applied to solve for the sensor resolution. Two criteria are of particular interest: (1) the minimization of the maximum sensor resolution for all sensors and (2) uniform sensor resolution in the task space. This method, which is very general and may be applied to many other types of problems, is described by means of two example problems: (1) selecting joint sensors for a 2-degree-of-freedom (dof) lightweight robot, and (2) selecting linear position sensors for the positioning of planar polygonal parts.
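To make the task-space-to-sensor-space mapping concrete, here is a minimal sketch in the spirit of the first example, assuming a planar two-link arm and a minimax-style criterion; the link lengths, pose grid, and the conservative spectral-norm bound are illustrative choices, not the paper's formulation:

```python
import numpy as np

def planar_2dof_jacobian(theta1, theta2, l1=0.5, l2=0.4):
    """Jacobian of end-effector position w.r.t. joint angles
    for a planar two-link arm (link lengths in metres)."""
    s1, c1 = np.sin(theta1), np.cos(theta1)
    s12, c12 = np.sin(theta1 + theta2), np.cos(theta1 + theta2)
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def required_joint_resolution(task_tol, poses):
    """Coarsest uniform joint resolution q that keeps the
    task-space error below task_tol at every pose: a joint
    error of at most q per joint moves the end effector by
    at most ||J||_2 * q * sqrt(2), so bound q by the worst pose."""
    worst = max(np.linalg.norm(planar_2dof_jacobian(t1, t2), 2)
                for t1, t2 in poses)
    return task_tol / (worst * np.sqrt(2))

poses = [(a, b) for a in np.linspace(0, np.pi, 10)
                for b in np.linspace(0.1, np.pi - 0.1, 10)]
print(required_joint_resolution(1e-3, poses))  # radians per count
```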
This paper describes a VLSI processor that computes the local surface orientation of range images. An orientation extraction algorithm has been chosen and enhanced for hardware realization. The processor uses a reconfigurable two-dimensional memory architecture in order to access kernel data in a parallel fashion. The intensive calculations needed by the algorithm are implemented on CMOS VLSI integrated circuits using a library of computer arithmetic cells. Multiplication, division, squaring and square-rooting are the operations designed as cellular arrays. However, the processor design is not dependent on this type of memory and can therefore be integrated in other architectures. The capability of real-time orientation computation is seen as an important step towards the development of higher level 3D sensors. The original algorithm and its enhancement are described. The dataflow, memory architecture, and the VLSI design of the cellular arrays are presented. A discussion of the approach and results concludes the paper.
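A minimal software sketch of the underlying computation (a stand-in for the enhanced hardware algorithm, not a reproduction of it): local surface orientation from a range image via neighbourhood gradients, which exercises exactly the multiply, divide, and square-root operations the cellular arrays implement:

```python
import numpy as np

def surface_normals(z, step=1.0):
    """Per-pixel unit surface normals of a range image z(y, x),
    from central-difference gradients over a local neighbourhood.
    `step` is the lateral sample spacing."""
    dzdy, dzdx = np.gradient(z, step)
    n = np.stack([-dzdx, -dzdy, np.ones_like(z)], axis=-1)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)

# A tilted plane has a constant normal everywhere:
z = np.fromfunction(lambda y, x: 0.1 * x + 0.05 * y, (128, 128))
print(surface_normals(z)[64, 64])
```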
This paper will describe PIPE® and how it can be used to implement an image understanding system. Image understanding is the process of developing a description of an image in order to make decisions about its contents. The tasks of image understanding are generally split into low level vision and high level vision. Low level vision is performed by PIPE, a high performance parallel processor with an architecture specifically designed for processing video images at up to 60 fields per second. High level vision is performed by one of several types of serial or parallel computers, depending on the application. An additional processor called ISMAP performs the conversion from iconic image space to symbolic feature space. ISMAP plugs into one of PIPE's slots and is memory mapped into the high level processor. Thus it forms the high speed link between the low and high level vision processors. The mechanisms for bottom-up, data driven processing and top-down, model driven processing are discussed.
A modular pipelined image processing system called Kiwivision has been developed for high-speed machine vision applications. The architecture of Kiwivision is discussed and a description is given of the operating software. Results of timing comparisons between Kiwivision and three other image processing systems (a DEC LSI 11/23, a Motorola MC68000 microprocessor and a 256-element SIMD array) are presented. Finally, current developments aimed at improving the performance of Kiwivision are described. These involve interfacing the pipeline processor to an array of transputers to produce a hybrid architecture, the structure of which matches that of the machine vision algorithms.
The Difference-of-Low-Pass (DOLP) Transform uses a hierarchy of bandpass filters to perform size discrimination and pattern matching of objects and features in a visual field. Like the Discrete Fourier Transform (DFT), it "sorts" entities according to their size or spatial frequencies; but unlike the DFT, it also retains positional information. This positional information is essential for the very common industrial web inspection problem in which a "flaw map" must be produced - mere flaw detection (as provided by the DFT) is not enough. The DOLP Transform is usually implemented using finite-impulse-response difference-of-Gaussian (DOG) filters of progressively increasing kernel size. Various potential industrial applications have been described and demonstrated, but implementations have been hampered by the heavy computational burden involved in the generation of the Transform. This paper describes a fast implementation of Crowley's resampled DOLP Transform using commercially-available board-level hardware. With a moderate investment in hardware modules, a nine-band DOLP Transform can be computed for a 485 by 512 image in about one second. Additional hardware modules could be added to bring the speed up to 30 complete 9-band Transforms per second, if desired. Additional bands beyond the first nine, while seldom needed, require very little additional time, because the image has been repeatedly resampled down to a small size.
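A minimal software sketch of a resampled DOLP pyramid, assuming Gaussian low-pass kernels and a subsample-every-other-band schedule; the exact resampling schedule of Crowley's transform may differ, so treat this as illustrative of why the later bands are cheap:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dolp_pyramid(image, n_bands=9, sigma=1.0):
    """Resampled DOLP pyramid: each band is the difference of two
    low-pass (Gaussian) images, so it is band-pass and, unlike a
    Fourier coefficient, keeps positional information. Periodic
    2:1 subsampling keeps the higher bands small and fast."""
    bands, current = [], image.astype(float)
    for k in range(n_bands):
        blurred = gaussian_filter(current, sigma)
        bands.append(current - blurred)   # band-pass layer
        current = blurred
        if k % 2 == 1:                    # resample every other level
            current = current[::2, ::2]
    return bands

layers = dolp_pyramid(np.random.rand(485, 512))
print([b.shape for b in layers])          # shrinking band sizes
```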
A method for recovering a depth map from an active focus analysis is presented. A sharpness map can be calculated for many small regions in an image using a Gaussian pyramid to sum the output of a Laplacian. The depth of each region can be recovered by examining the sharpness map over a range of focal positions. Real-time performance is achieved through hardware that computes Gaussian and Laplacian pyramids.
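A minimal sketch of the focus analysis, with SciPy operators standing in for the pyramid hardware (an assumption: the Gaussian-pyramid summation is approximated here by smoothing plus regional pooling): a Laplacian focus measure is pooled over small regions, and depth is taken as the focal position that maximizes regional sharpness:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, laplace, uniform_filter

def sharpness_map(image, sigma=1.0, region=16):
    """Regional focus measure: squared Laplacian response,
    averaged over small regions."""
    response = laplace(gaussian_filter(image.astype(float), sigma)) ** 2
    return uniform_filter(response, region)

def depth_from_focus(stack, focal_positions):
    """For each region, pick the focal position that maximizes
    sharpness; `stack` holds one image per focal position."""
    sharp = np.stack([sharpness_map(img) for img in stack])
    return np.asarray(focal_positions)[np.argmax(sharp, axis=0)]

stack = [np.random.rand(128, 128) for _ in range(5)]
depth = depth_from_focus(stack, [0.2, 0.3, 0.4, 0.5, 0.6])  # metres
```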
A single board parallel processor designed for operations on images is described. The system is based on the Multiple Instruction, Multiple Data (MIMD) model with four independent signal processing nodes and a large global image memory. Image acquisition is provided for four simultaneous video inputs, and video output for color or monochrome display. Examples of the system's application for two dimensional FFTs, color transformations, and texture modeling are given.
The ability to reason with information from a variety of sources is critical to the development of intelligent autonomous systems. Multisensor integration, or sensor fusion, is an area of research that attempts to provide a computational framework in which such perceptual reasoning can quickly and effectively be applied, enabling autonomous systems to function in unstructured, unconstrained environments. In this paper, the fundamental characteristics of the sensor fusion problem are explored. A hierarchical sensor fusion software architecture is presented as a computational framework in which information from complementary sensors is effectively combined. The concept of a sensor fusion pyramid is introduced, along with three unique computational abstractions: virtual sensors, virtual effectors, and focus of attention processing. The computing requirements of this sensor fusion architecture are investigated, and the blackboard system model is proposed as a computational methodology on which to build a sensor fusion software architecture. Finally, the Butterfly Parallel Processor is presented as a computer architecture that provides the computational capabilities required to support these intelligent systems applications.
Mathematical morphology applied to image processing, which deals directly with shape, is a more direct and faster approach to feature measurement than traditional techniques. It has grown to include many applications and architectures in image analysis. Binary morphology has been successfully extended to greyscale morphology, which allows a new set of applications. In this paper, the distance transformation, skeletonization, and reconstruction algorithms using the greyscale morphology approach are described and proven to be remarkably simple. The distance transformation assigns to each inner point of an object its minimum distance to the background. The algorithm is a recursive greyscale erosion of the image with a small structuring element. The distance can be Euclidean, chessboard, or city-block distance, depending on the selection of the structuring element. The skeleton extracted is the Medial Axis Transformation (MAT), which is produced from the result of the distance transformation. The values of the distance transform along the skeleton are maintained to represent distance to the closest boundary. We can easily reconstruct the distance transform from the skeleton by iterative greyscale dilations with the same structuring element. In order for this method to be useful for grey level images, a simple adaptive threshold algorithm using greyscale erosion with a non-linear structuring element has been developed [21]. A decomposition technique which reduces the large non-linear structuring element to a recursive operation with a small window allows real-time implementation.
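The three operations are simple enough to sketch directly with SciPy's greyscale morphology. The flat 3x3 structuring element below yields the chessboard metric, as the abstract notes; this is an illustrative reading of the algorithms, not the authors' real-time implementation:

```python
import numpy as np
from scipy.ndimage import grey_erosion, grey_dilation

def distance_transform(binary, size=3):
    """Chessboard distance by recursive greyscale erosion with a
    flat 3x3 element: each erosion peels one layer, and the sum
    of the surviving layers is the distance."""
    dist, layer = np.zeros_like(binary, dtype=int), binary.astype(int)
    while layer.any():
        dist += layer
        layer = grey_erosion(layer, size=(size, size))
    return dist

def medial_axis(dist, size=3):
    """MAT: local maxima of the distance transform, i.e. points
    whose value is not exceeded anywhere in their neighbourhood."""
    return (dist == grey_dilation(dist, size=(size, size))) & (dist > 0)

def reconstruct(dist_on_skeleton, size=3):
    """Rebuild from skeleton values by iterated greyscale dilation
    with the same flat element, losing 1 per step (chessboard)."""
    rec = dist_on_skeleton.astype(int)
    while True:
        grown = np.maximum(rec, grey_dilation(rec, size=(size, size)) - 1)
        if np.array_equal(grown, rec):
            return np.maximum(rec, 0)
        rec = grown

obj = np.zeros((32, 32), dtype=int)
obj[8:24, 8:24] = 1
d = distance_transform(obj)
skel = medial_axis(d)
assert np.array_equal(reconstruct(d * skel) > 0, d > 0)  # object recovered
```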
We propose a novel technique to accurately measure submicron linewidths on photomasks and wafers. We do this by translating a phase-shifting mask across the surface containing the line whose width we wish to measure. With a Fourier-transform lens, we detect the intensity of the zero-order spatial component of the light coming from the surface as a function of the mask position. We show that the detected intensity curve varies dramatically and exhibits sharp changes in direction corresponding to the boundaries of the line. From this information the linewidth is readily apparent. We present a theoretical analysis and several computer simulations, showing that the technique is relatively independent of variations in the optical reflectance and in the height between the patterned feature and any substrate. Unlike other optical imaging methods for measuring linewidths, a high-resolution microscope and precise calibration are not needed. Using a laser, lateral resolution of 0.1 μm, well beyond the limit predicted by the Rayleigh criterion, is theoretically achievable. Preliminary experimental results agree well with the theoretical prediction.
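A hedged sketch of the final, purely numerical step: locating the sharp changes of direction in the detected intensity curve. The second-difference peak-picking below is an illustrative stand-in for the authors' analysis, not their method:

```python
import numpy as np

def linewidth_from_curve(intensity, positions, n_edges=2):
    """Find the strongest 'kinks' (sharp direction changes) in the
    zero-order intensity curve via the magnitude of its second
    difference; the linewidth is the spacing of the two strongest
    breakpoints."""
    kink = np.abs(np.diff(np.asarray(intensity, dtype=float), 2))
    idx = np.sort(np.argsort(kink)[-n_edges:]) + 1  # map back to samples
    edges = np.asarray(positions)[idx]
    return edges[-1] - edges[0], edges
```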
An interactive language, intended for developing intelligent image processing procedures and called SuperVision, is described. This is based on the Prolog language and incorporates facilities for controlling an interactive image processor and various external devices, such as an (X,Y)-table, camera (pan, tilt, focus and zoom), relays, solenoids, computer-controlled lighting, etc. Apart from vision, input data can be derived from a range of sensors. The application of the language will be discussed in relation to matching the skeletons derived from partially occluded flat components on a table. In addition, plans for a flexible inspection cell, intended for examining complex artifacts and those made in small quantities, will also be described.
An interactive language, based on Prolog and intended for developing intelligent image processing heuristics and software, has been described elsewhere. The present article describes an interactive operating environment which is intended to assist in writing applications in this new language. This makes use of the pull-down menus available on the Apple Macintosh computer. In the present implementation, menus are provided for a wide range of utility functions, certain basic image manipulation and image measurement operations, the control of electromechanical devices and the writing of predicates for intelligent image processing. A list of predicates for image understanding is also defined. This forms another important part of the environment within which SuperVision applications are developed.
This contribution discusses various aspects of a promising new approach to the problem of texture analysis which is based on the concept of fractal geometry. The starting point of these considerations is the close resemblance of certain irregular fractal sets to textures whose characteristic property is their degree of roughness. This observation forms the basis for the application of fractal dimensions as a measure for the roughness of textures. The various definitions of fractal dimensions are more or less variations of the concept of the Hausdorff dimension as used in the mathematical theory, which in practice, however, can be determined only with difficulty. Various definitions of fractal dimensions and procedures for their numerical calculation are presented. A further application of fractal geometry results from the simulation of growth processes, e.g. crack propagation, by means of the model of diffusion limited aggregation.
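One widely used numerical procedure of the kind the paper surveys is the box-counting estimate, sketched below for a binary image; the box sizes and the log-log fit are conventional choices, not taken from the contribution itself:

```python
import numpy as np

def box_counting_dimension(binary, sizes=(1, 2, 4, 8, 16, 32)):
    """Box-counting estimate of fractal dimension: count occupied
    s x s boxes for several s, then fit the slope of
    log N(s) versus log(1/s)."""
    counts = []
    for s in sizes:
        h, w = (binary.shape[0] // s) * s, (binary.shape[1] // s) * s
        boxes = binary[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(boxes.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)),
                          np.log(counts), 1)
    return slope

# A filled square should come out near dimension 2:
img = np.zeros((64, 64), dtype=bool)
img[8:56, 8:56] = True
print(box_counting_dimension(img))
```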
A system was developed which enables fully automated surface defect detection and evaluation using either magnetic particle or liquid penetrant testing. The different system components are:
- robotics for object handling or for handling of illumination and camera,
- magnetic particle or liquid penetrant test procedure,
- illumination with visible and/or UV light,
- image reception with a video camera,
- data acquisition and data processing,
- on-line defect detection and accept/reject decision.
The contribution describes the system as well as practical experiences gained in industrial installations and application demonstrations.
The development of a CIM (Computer Integrated Manufacturing) environment requires that inspection processes be linked to databases containing the design data. Increased emphasis on flexibility must not be bought at the expense of increased human involvement in producing process directives. The RPI CIM program has developed an automated 3-D inspection system which integrates a CAD database to develop preliminary inspection directives. The design of the inspection system is characterized by real-time operation, an ability to utilize 3-D data originating from both a CAD model and from points on a test piece, the automatic generation of inspection programs based on model features, and the system's sensitivity to tolerancing issues. Inspection of manufactured parts and assemblies often requires large amounts of information in the form of test probe point locations and large amounts of time to perform the inspection. By optimally locating the probe points it is possible to maintain inspection reliability using fewer test probes in a reduced amount of time. A class of sampling schemes has been developed which use part model and manufacturing process information to generate an improved probe-point location set for routine inspection in a model-based, open-loop mode. The objective of the proposed sampling scheme is to optimize the placement of the sample points on the ideal model to minimize the chance of missing a discrepancy. The algorithm which generates the inspection program bases its test-point placement on an estimate of the likelihood of a machining discrepancy at any given point on the part surface. Implemented for evaluation in 2-D, the algorithm has been selected for its ability to be extended to 3-D. Test results show that it performs favorably on a large class of surfaces.
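A hedged sketch of the flavour of such a sampling scheme: greedy probe placement on a discretized surface, driven by a map of discrepancy likelihood. Both the likelihood map and the greedy rule are illustrative stand-ins for the paper's model-based estimate:

```python
import numpy as np

def place_probes(likelihood, n_probes, radius):
    """Repeatedly probe the point with the highest remaining
    discrepancy likelihood, then discount a neighbourhood of
    `radius` samples around it, since a probe there would have
    caught those discrepancies."""
    remaining = likelihood.astype(float).copy()
    y, x = np.mgrid[:likelihood.shape[0], :likelihood.shape[1]]
    probes = []
    for _ in range(n_probes):
        i, j = np.unravel_index(np.argmax(remaining), remaining.shape)
        probes.append((i, j))
        remaining[(y - i) ** 2 + (x - j) ** 2 <= radius ** 2] = 0.0
    return probes

surface = np.random.rand(100, 100)        # hypothetical likelihood map
print(place_probes(surface, n_probes=10, radius=8))
```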
Some of the basic ideas of a new approach to pattern recognition, based on the collective dynamical properties of neural networks, are reviewed and discussed.
Automatic optical inspection and guidance in industry requires high speed, low-cost image processing if it is to be practical. Recent developments in high speed signal processing are leading to improvements in the capabilities of machine vision systems. However, there are still a number of tasks that lie ahead if cost effective performance is to be achieved. These tasks are discussed in the context of an example. The example chosen is that of the inspection of a hypothetical industrial part. A number of the technical and economic aspects of this inspection are outlined including specification of the manufacturing requirements, development of the illumination, algorithms and hardware architecture, integration of the system components and installation of the system into the factory. The relevance of this particular example to the general problems of industrial machine vision is discussed.
The advancements in machine vision technology have been substantial in recent years with the introduction of faster processors and the improvements in sensor technology. One area that is often neglected in machine vision applications is lighting considerations. For inspection of moving or non-moving parts, strobe lighting can offer unsurpassed advantages over other types of lighting. The objective of this article is to familiarize the reader with some of the techniques used in strobe light imaging with references made to real applications.
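A small worked example (with illustrative numbers, not taken from the article) of why a strobe freezes motion: image blur is simply part velocity times effective exposure time.

```python
# Blur = velocity x exposure, converted to pixels.
part_speed = 2.0          # m/s on the conveyor (assumed)
field_of_view = 0.10      # m imaged across 512 pixels (assumed)
pixel_size = field_of_view / 512

blur_shutter = part_speed * (1 / 60)   # ~1/60 s video field time
blur_strobe = part_speed * 10e-6       # 10 microsecond flash

print(blur_shutter / pixel_size)  # ~170 pixels of smear
print(blur_strobe / pixel_size)   # ~0.1 pixel: motion frozen
```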
The digitizing of an analog video camera signal requires special techniques to accurately sample the signal. Careful attention must be paid to both amplitude and timing considerations. Specifications exist which define the amplitude and timing parameters of so-called "standard" cameras. Recent advances in CCD technology have led to the development of high resolution line scan and area cameras. Unfortunately these cameras do not conform to any published standard. Hardware designed to digitize these "non-standard" cameras must have a flexible architecture to allow for each camera's particular interface requirements.
The process of blob analysis, or connectivity, is well documented; however, I will briefly review the mechanism in order to provide a basis for further discussion. More extensive treatments may be found in references (1) and (2).
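For reference, a minimal sketch of the classic two-pass connectivity algorithm with union-find label resolution; this is the textbook mechanism, not necessarily the exact variant reviewed here:

```python
import numpy as np

def label_blobs(binary):
    """Two-pass 4-connected labeling: pass 1 assigns provisional
    labels and records equivalences via union-find; pass 2
    rewrites each pixel with its resolved label."""
    labels = np.zeros(binary.shape, dtype=int)
    parent = [0]                         # union-find over labels

    def find(a):
        while parent[a] != a:
            a = parent[a]
        return a

    next_label = 1
    h, w = binary.shape
    for i in range(h):
        for j in range(w):
            if not binary[i, j]:
                continue
            up = labels[i - 1, j] if i else 0
            left = labels[i, j - 1] if j else 0
            if up and left:
                labels[i, j] = find(up)
                parent[find(left)] = find(up)    # merge equivalence
            elif up or left:
                labels[i, j] = find(up or left)
            else:
                labels[i, j] = next_label        # new provisional blob
                parent.append(next_label)
                next_label += 1
    for i in range(h):                   # second pass: resolve labels
        for j in range(w):
            if labels[i, j]:
                labels[i, j] = find(labels[i, j])
    return labels
```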
This paper describes a real-time vision target tracking system developed by Adaptive Automation, Inc. and delivered to NASA's Launch Equipment Test Facility, Kennedy Space Center, Florida. The target tracking system is part of the Robotic Application Development Laboratory (RADL), which was designed to provide NASA with a general purpose robotic research and development test bed for the integration of robot and sensor systems. One of the first RADL system applications is the closing of a position control loop around a six-axis articulated arm industrial robot using a camera and dedicated vision processor as the input sensor, so that the robot can locate and track a moving target. The vision system is inside the loop closure of the robot tracking system; therefore, tight throughput and latency constraints are imposed on the vision system that can only be met with specialized hardware and a concurrent approach to the processing algorithms. State-of-the-art VME-based vision boards capable of processing the image at frame rates were used with a real-time, multi-tasking operating system to achieve the performance required. This paper describes the high speed vision based tracking task, the system throughput requirements, the use of a dedicated vision hardware architecture, and the implementation design details. Important to the overall philosophy of the complete system was the hierarchical and modular approach applied to all aspects of the system, hardware and software alike, so special emphasis is placed on this topic in the paper.
In some applications of machine vision, space limitations force the placement of the camera at an angle other than normal to the surface being inspected, resulting in perspective distortion of the image. This makes precision measurement difficult or impossible, since a given number of pixels corresponds to different distances in different parts of the image. In other applications, the object has more than one facet, so that it is impossible to get orthogonal views of all facets with a single camera. Perspective correction algorithms involve massive amounts of calculation, making them either prohibitively slow or prohibitively expensive for most inspection applications. This paper describes a method for carrying out precision perspective correction for 485 by 512 grey-level images at 30 frames per second (or 242 by 512 images at 60 fields per second) using moderately-priced commercially-available board-level hardware to do sub-pixel interpolation. In addition, a scheme for carrying out arbitrary predefined sub-pixel warping on a pixel-by-pixel basis in real time is described, using the same hardware and a relatively simple adapter/connector. Thus, lens distortion (e.g., pincushion and barrel distortion) and camera nonuniformity (e.g., nonlinear raster scan) can be "calibrated out" in real time at moderate cost.
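A software sketch of the per-pixel operation the hardware performs, assuming a known inverse homography: inverse-map every output pixel and interpolate bilinearly between its four source neighbours (the sub-pixel step). The matrix and shapes are hypothetical:

```python
import numpy as np

def warp_perspective(image, H_inv, out_shape):
    """Perspective correction by inverse mapping: for every output
    pixel, apply the inverse homography H_inv to find its source
    coordinate, then interpolate bilinearly."""
    hy, wx = out_shape
    ys, xs = np.mgrid[:hy, :wx]
    src = H_inv @ np.stack([xs, ys, np.ones_like(xs)]).reshape(3, -1)
    u, v = src[0] / src[2], src[1] / src[2]   # homogeneous divide
    u = np.clip(u, 0, image.shape[1] - 1.001)
    v = np.clip(v, 0, image.shape[0] - 1.001)
    u0, v0 = u.astype(int), v.astype(int)
    fu, fv = u - u0, v - v0
    img = image.astype(float)
    out = (img[v0, u0] * (1 - fu) * (1 - fv)        # blend the four
           + img[v0, u0 + 1] * fu * (1 - fv)        # neighbours by
           + img[v0 + 1, u0] * (1 - fu) * fv        # sub-pixel
           + img[v0 + 1, u0 + 1] * fu * fv)         # fractions
    return out.reshape(out_shape)

img = np.random.rand(485, 512)
corrected = warp_perspective(img, np.eye(3), (485, 512))  # identity warp
```

The same inverse-mapping table, precomputed per pixel, is what allows arbitrary predefined warps (lens distortion, scan nonlinearity) to be "calibrated out" at the same cost as the perspective correction itself.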
A system for inspecting metal parts in a production line at a rate of 300 parts per minute is described. During inspection, the parts are classified according to a wide range of predefined defect types, consisting of both structural defects (dents, bulges, scratches, splits) and textural defects (acid stains, paint, anneal, etc.). Each flaw has its own rejection criterion, which is not directly correlated to its size, shape or contrast. The image is modeled by utilizing a priori information concerning the nature of the defects and the specific illumination configuration. We apply low level feature detection at several resolutions in order to derive the specific signature of each defect. Classification is then done on the reduced feature space for flaw identification and severity decision. The algorithms are implemented with dedicated image processing hardware, working in a pipeline fashion on a dedicated synchronized video bus to achieve the high speed requirements of the system.
This paper describes a method of real-time visual attention processing for robots performing visual guidance. This robot attention processing is based on a novel vision processor, the multi-window vision system, developed at the University of Tokyo. The multi-window vision system is unique in that it only processes visual information inside local area windows. These local area windows are quite flexible in their ability to move anywhere on the visual screen, change their size and shape, and alter their pixel sampling rate. By using these windows for specific attention tasks, it is possible to perform high speed attention processing. The primary attention skills of detecting motion, tracking an object, and interpreting an image are all performed at high speed on the multi-window vision system. A basic robotic attention scheme using these attention skills was developed. The attention skills involved detection and tracking of salient visual features. The tracking and motion information thus obtained was utilized in producing the response to the visual stimulus. The response of the attention scheme was quick enough for the real-time vision tasks of playing a video 'pong' game and, later, of using an automobile driving simulator. By detecting the motion of a 'ball' on a video screen and then tracking the movement, the attention scheme was able to control a 'paddle' in order to keep the ball in play. The response was faster than that of a human, allowing the attention scheme to play the video game at higher speeds. Further, in the application to the driving simulator, the attention scheme was able to control both direction and velocity of a simulated vehicle following a lead car. These two applications show the potential of local visual processing in its use for robotic attention processing.
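A minimal sketch of one such attention skill, assuming a simple bright-target model: threshold only the pixels inside a local window, then re-centre the window on the centroid of what was found. This is a software stand-in for the multi-window hardware's tracking mode, not its actual interface:

```python
import numpy as np

def track_in_window(frame, center, size):
    """One step of window-based attention: segment only inside the
    window, then move the window to the target's centroid."""
    cy, cx = center
    half = size // 2
    y0, x0 = max(cy - half, 0), max(cx - half, 0)
    patch = frame[y0:y0 + size, x0:x0 + size]
    ys, xs = np.nonzero(patch > patch.mean() + 2 * patch.std())
    if len(ys) == 0:
        return center                     # target lost: hold position
    return (y0 + int(ys.mean()), x0 + int(xs.mean()))

frame = np.zeros((240, 320)); frame[100:104, 150:154] = 255.0  # 'ball'
print(track_in_window(frame, center=(96, 146), size=32))
```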
This paper introduces a pattern recognition system based on a template matching method for industrial applications. The system can inspect various objects after correcting their orientation. The processing time is less than 80 ms, which includes determining the orientation of the objects, correcting the orientation, and the template matching. This is almost the same as, or less than, that of other systems which allow no rotation. In order to correct the orientation at high speed, the following three techniques have been developed. Firstly, a high speed affine transformation circuit executes the rotation operation in less than 8 ms for a 256 x 256 image. Secondly, other special circuits extract features of a binary image to determine the orientation of the object. Thirdly, local-scan hardware applied to the first and second circuits scans the effective area at a scanning rate of 8 MHz. The system has three algorithms for determining the orientation: by means of the direction of the principal axis of inertia, the binary image along a circumference, or a pair of straight lines. These algorithms are selectable according to the objects. The orientation is determined with an accuracy better than ±0.5 degrees.
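The first of the three orientation algorithms is easy to sketch in software: the direction of the principal axis of inertia follows from the second-order central moments of the binary image (a standard formula, shown here for illustration rather than as the paper's circuit design):

```python
import numpy as np

def principal_axis_angle(binary):
    """Orientation of a binary object from its second-order central
    moments: the principal axis of inertia makes an angle
    0.5 * atan2(2*mu11, mu20 - mu02) with the x axis."""
    ys, xs = np.nonzero(binary)
    xbar, ybar = xs.mean(), ys.mean()
    mu20 = ((xs - xbar) ** 2).mean()
    mu02 = ((ys - ybar) ** 2).mean()
    mu11 = ((xs - xbar) * (ys - ybar)).mean()
    return 0.5 * np.arctan2(2 * mu11, mu20 - mu02)  # radians

# A diagonal bar should report roughly 45 degrees:
img = np.eye(64, dtype=bool)
print(np.degrees(principal_axis_angle(img)))
```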
The manufacture of high quality color CRTs demands high precision registration of a black matrix and three color phosphor coatings on the front panel. The fine structure of the matrix and phosphor dots must be closely controlled or brightness, white balance, and color purity will be lost. We describe here one system which measures the phosphor and matrix patterns on CRT panels before evacuation and sealing, and another system which can estimate electron-beam spot size and color convergence on completed CRTs. Both systems are designed around a commercial vision processor and CCD camera. Both observe areas of about 1/8 inch at 2X magnification, and both have automatic focusing capability. In the CRT panel measuring system, run-length encoding and moments algorithms are used to measure phosphor and matrix parameters and registration to a precision (one sigma) of typically 1 micron, corresponding to a sub-pixel precision of 0.1 pixels. This has been confirmed by a rigorous measurement capability study. Such precision permits process control long before low quality CRTs are produced. In the electron-beam spot size and convergence estimator system, which observes the beams through the internal shadow-mask of the CRT, principal moments ellipses and minimum covering ellipses are used to model electron-beam shape in all three colors. The precision (one sigma) of this system is about 0.05 mm in both spot size and convergence, for spots about 3/4 mm in size. This is comparable to results for the best trained human observer.
An optoelectronic sensor system will be presented that measures distances in the range of micrometers. It is based on a triangulation principle and is expected to be integrated into coordinate measurement systems for high precision measurement tasks. The system consists of a laser, a linear CCD-array and a special optical system. Thus it is possible to measure complex workpieces that could not be evaluated by tactile means. Special attention is directed to surface roughness and surface tilt. These factors influence the results obtained during a scanning cycle.
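A worked example of the underlying relation, assuming a simplified geometry (laser along the surface normal, camera viewing at angle theta); this is the generic triangulation formula, not the authors' specific optical system:

```python
import numpy as np

def height_change(spot_shift_pixels, pixel_pitch, magnification,
                  triangulation_angle):
    """Small-displacement triangulation: a surface height change dz
    displaces the imaged laser spot along the linear CCD by
    s = m * dz * sin(theta), so dz = s / (m * sin(theta))."""
    s = spot_shift_pixels * pixel_pitch
    return s / (magnification * np.sin(triangulation_angle))

# One-pixel spot shift, 13 um pixel pitch, 2x optics, 30 degrees:
print(height_change(1, 13e-6, 2.0, np.radians(30)) * 1e6, "um")
```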
The vision guided manipulation of rigid objects has been widely studied, but the robotic handling of flexible materials such as cloth has received far less attention. Draper Laboratories has developed a vision guided, fully-automated folding-sewing device for the construction of suit sleeves. A set of robot motions requiring only 2-D vision data has been developed. A specially designed robot manipulator working in tandem with a vacuum table allows the sleeve to be held flat for image acquisition. The robot motion sequence also allows robot moves to be based on locations on the workpiece contour rather than its interior. Binary image analysis software models the contour as smooth curves connected at discontinuity points called breakpoints. Syntactic pattern recognition techniques are used to match breakpoints to a stored database and return location data on those breakpoints required by the robot or sewing modules. Work has also begun to use grey scale values to achieve sub-pixel accuracy.
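A hedged sketch of one way such breakpoints might be found, assuming curvature is approximated by the turning angle between chords on either side of each contour point; the window and threshold are illustrative, not the Draper system's parameters:

```python
import numpy as np

def find_breakpoints(contour, window=5, thresh=0.3):
    """Mark contour points where the direction turns sharply:
    compare the incoming and outgoing chords over `window` points
    and keep local maxima of the turning angle above `thresh`."""
    pts = np.asarray(contour, dtype=float)
    n = len(pts)
    angles = np.empty(n)
    for i in range(n):
        back = pts[i] - pts[(i - window) % n]
        fwd = pts[(i + window) % n] - pts[i]
        cross = back[0] * fwd[1] - back[1] * fwd[0]
        angles[i] = abs(np.arctan2(cross, back @ fwd))
    return [i for i in range(n)
            if angles[i] > thresh
            and angles[i] >= angles[(i - 1) % n]
            and angles[i] >= angles[(i + 1) % n]]
```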
A new concept will be presented that makes use of a standard coordinate measurement system into which an image processing unit is integrated to achieve high precision measurement. The mechanical displacement system provides the large measurement range, while the image processing system enables the measurement of complex workpieces with high precision. Two different realizations emphasize the advantageous features of this concept. They also show the influence of the different elements in arriving at problem-adapted systems with high performance.
A new approach for the extraction of flying objects in the presence of a perturbed background is presented. The approach is based on a steadiness analysis of moving objects from image sequences and has been implemented on the Pipelined Image Processing Engine (PIPE). Trees are "steadier" than flying airplanes, as a tree's top moves in a confined area, whereas an airplane typically moves in a fixed direction for an extended period of time. This simple constraint is exploited as the basis for utilizing an object's "steadiness" in the extraction of flying objects. The algorithm proceeds in three passes. First, an image-differencing operation is used to extract flying objects and swinging objects (e.g., trees); secondly, a mask covering a swinging object's moving area is created by studying the steadiness of flying objects and swinging objects over a couple of frames; thirdly, the mask created in the second pass is used to guide the extraction of flying objects from subsequent frames. The performance of this approach has been tested on a number of sequences of synthetic and real-world images. It has been found that the algorithm is accurate and robust for extracting flying objects. A number of limitations of the algorithm have been identified and their effects on performance have been studied.
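A minimal sketch of the three passes, with illustrative thresholds (the PIPE implementation and its parameters are not reproduced here): frame differencing finds all movers, repeated activity in the same area marks swinging regions, and the resulting mask screens motion in subsequent frames:

```python
import numpy as np

def steadiness_mask(frames, diff_thresh=20, steady_thresh=0.5):
    """Passes 1-2: difference consecutive frames to find movers,
    then mark as 'swinging' the pixels that are active in a large
    fraction of frames; a tree top keeps revisiting the same area,
    a flying airplane does not."""
    moving = [np.abs(frames[i + 1].astype(int) - frames[i].astype(int))
              > diff_thresh for i in range(len(frames) - 1)]
    activity = np.mean(moving, axis=0)
    return activity > steady_thresh        # mask of swinging regions

def extract_flyers(prev, curr, mask, diff_thresh=20):
    """Pass 3: motion in the new frame, with swinging areas masked."""
    motion = np.abs(curr.astype(int) - prev.astype(int)) > diff_thresh
    return motion & ~mask
```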
An accepted and widely used method of inspecting cast metal parts involves a visual interpretation of X-ray film by a qualified radiographer. The defective areas of the casting are compared to a set of reference standards, and a casting grade is assigned. This method of quality assessment is often too crude for modern applications. An automated inspection technique is presented which provides a quantitative assessment of casting quality. The quantified results of the inspection are advantageous in providing information which can be used to establish an acceptance criterion that can be directly related to part performance. Careful control of geometry, exposure parameters, and inclusion of a calibration wedge during radiography enable calibrated, quantitative analysis of the defective area to be performed. Level preserved smoothing is introduced as a method of smoothing the digitized radiograph while retaining thickness calibration. This technique, combined with region labeling, allows the x, y, and z dimensions of individual defects to be measured and defect statistics to be generated. The use of CAD model data to reduce the number of radiographic views required for accurate defect measurement is also discussed.
The current state of BILDLIB, the image analysis software at IPA, the Fraunhofer Institute for Manufacturing Engineering and Automation, is reported. In particular, the implementation of two additional fundamental sublibraries and the extension of already existing packages are discussed. With the new packages, fractal dimensions can be calculated and two-dimensional images can be mapped onto the unit interval by Peano or Hilbert scanning.
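As an illustration of the idea behind the new mapping package (not of BILDLIB's actual interface), the sketch below samples a 2^n x 2^n image in Hilbert order using the standard bit-manipulation construction of the curve; the 1-D sequence preserves 2-D locality:

```python
import numpy as np

def hilbert_xy(index, order):
    """Coordinate of the index-th point on an order-n Hilbert curve
    covering a 2**order x 2**order grid (standard construction)."""
    x = y = 0
    t, s = index, 1
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                       # rotate the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

def hilbert_scan(image):
    """Map a 2**n x 2**n image onto the unit interval by reading
    its pixels in Hilbert order."""
    n = image.shape[0]
    order = n.bit_length() - 1
    return [image[y][x] for x, y in
            (hilbert_xy(d, order) for d in range(n * n))]

print(hilbert_scan(np.arange(16).reshape(4, 4)))
```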
An introduction to the importance of the cork industry in Portugal and to the problems detected in the manual classification and inspection of cork agglomerate mosaics is presented. The present state of feasibility studies for the design of an automated visual classification system, carried out in collaboration with a cork mosaic manufacturer, is described, as well as the results of spectrometric studies of the reflectance, in the optical spectrum, of samples belonging to the various classes of mosaics. Results of preliminary studies of the vision algorithms to be used, made with a vidicon camera in a controlled illumination environment, are presented. To speed up the classification process, the system will use only simple global features extracted from a portion of the image inside a window. The behaviour of the classification error rate with the size of the window was studied. A short discussion of the inspection problem is also presented. Many of the flaws can be detected using standard techniques, but fractal-based models may prove helpful for some special cases.