Some of the new state-of-the-art video cameras offer features that enhance vision-based sensor performance and simplify the task of integrating the cameras into a manufacturing environment. Other new features make applications possible that previously were not. One of these new features, video frame rates greater than 30 Hz, is described in the context of an application of a six-degree-of-freedom (DOF) sensor. Some new video cameras, such as the EG&G Reticon MC4256 and MC6464, offer very high video frame rates, which make the proposed sensor system appropriate for an application in a suspension kinematic and compliance measurement machine.
This paper presents a video-rate morphological processor for automated visual inspection of printed circuit boards, integrated circuit masks, and other complex objects. Inspection algorithms are based on gray-scale mathematical morphology. The hardware complexity of the known methods for real-time implementation of gray-scale morphology--the umbra transform and threshold decomposition--has prompted us to propose a novel technique that applies an arithmetic system without carry propagation. After considering several arithmetic systems, a redundant number representation has been selected for implementation. Two options are analyzed here. The first is a pure signed digit number representation (SDNR) with base 4. The second is a combination of base-2 SDNR (to represent gray levels of images) and the conventional two's complement code (to represent gray levels of structuring elements). The operating principle of the morphological processor is based on the concept of a digit-level systolic array. Individual processing units and small memory elements form a pipeline. The memory elements store current image windows (kernels). All operation primitives of the processing units apply a unified direction of digit processing: most significant digit first (MSDF). The implementation technology is based on field-programmable gate arrays from Xilinx. The paper also justifies a new approach to logic design: decomposition of Boolean functions instead of Boolean minimization.
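The paper's contribution is the carry-free, digit-serial hardware; the image operations themselves are standard gray-scale erosion and dilation. The sketch below (plain NumPy, explicit window loops for clarity, not the SDNR/systolic implementation) only shows what those primitives compute:

```python
import numpy as np

def gray_erode(img, se):
    """Gray-scale erosion: min over the window of (image - structuring element)."""
    H, W = img.shape
    h, w = se.shape
    pad = np.pad(img.astype(float), ((h // 2,), (w // 2,)), mode="edge")
    out = np.empty_like(img, dtype=float)
    for y in range(H):
        for x in range(W):
            out[y, x] = np.min(pad[y:y + h, x:x + w] - se)
    return out

def gray_dilate(img, se):
    """Gray-scale dilation: max over the window of (image + reflected SE)."""
    H, W = img.shape
    h, w = se.shape
    pad = np.pad(img.astype(float), ((h // 2,), (w // 2,)), mode="edge")
    out = np.empty_like(img, dtype=float)
    for y in range(H):
        for x in range(W):
            out[y, x] = np.max(pad[y:y + h, x:x + w] + se[::-1, ::-1])
    return out
```

An opening, the typical defect-detection primitive, is then simply gray_dilate(gray_erode(img, se), se).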
Autonomous vehicles have become a multidisciplinary field whose evolution takes advantage of recent technological progress in computer architectures. As development tools become more sophisticated, the trend is toward more specialized, or even dedicated, architectures. In this paper, we focus on a parallel vision subsystem integrated in the overall system architecture. The system modules work in parallel, communicating through a hierarchical blackboard, an extension of the 'tuple space' from the LINDA concepts, where they may exchange data or synchronization messages. The general-purpose processing elements have different roles: they are built around 40 MHz Intel i860 RISC processors for high-level processing, and around pipelined systolic array processors based on PLAs or FPGAs for low-level processing.
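As a point of reference for the communication model, here is a minimal, hypothetical sketch of LINDA-style tuple-space primitives (out/rd/in) in Python. It illustrates the exchange of data and synchronization messages between modules, not the paper's hierarchical blackboard implementation:

```python
import threading

class TupleSpace:
    """Minimal LINDA-style tuple space: out() posts a tuple, rd() reads a
    matching tuple without removing it, in_() removes it (both block until
    a match appears).  None in a pattern acts as a wildcard."""
    def __init__(self):
        self._tuples = []
        self._cond = threading.Condition()

    def out(self, tup):
        with self._cond:
            self._tuples.append(tup)
            self._cond.notify_all()

    def _match(self, pattern):
        for t in self._tuples:
            if len(t) == len(pattern) and all(p is None or p == v
                                              for p, v in zip(pattern, t)):
                return t
        return None

    def rd(self, pattern):
        with self._cond:
            while (t := self._match(pattern)) is None:
                self._cond.wait()
            return t

    def in_(self, pattern):
        with self._cond:
            while (t := self._match(pattern)) is None:
                self._cond.wait()
            self._tuples.remove(t)
            return t

# Hypothetical usage: a low-level module posts an edge map, a high-level module consumes it.
# space.out(("edges", frame_id, edge_image))
# tag, fid, edges = space.in_(("edges", None, None))
```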
Verification means different things to different people. In Roberts' initial attempt, the vision problem as a whole was a simple, direct flow chart. The verification step was the answer to the question, 'Within the vision system's capabilities, can this grouping of matches be identified (recognized) as this particular model?' There was no information feedback from this step to the other parts of the vision system, whether a model was found or a hypothesis was wrongly generated. Yet much information can be derived at the verification stage. When we have gained sufficient confidence in a hypothesis describing image primitives (accept), pose parameter refinement will point to additional model features in the image, which can be corroborated and later removed from the data so that other models in the image can be correctly recognized. (This includes other instances of the same model. Since we view all parameters in a similar manner, the instantiation of the model and camera viewpoint parameters defines the model; hence, each instantiation really does represent a different model.) If we reject individual matches, then decisions must be made based on probabilistic error analysis to determine which part of the data was wrongly interpreted and which matches are generally inconsistent with the rest of the hypothesis. This paper presents current trends in verification vision systems and suggests an approach that considers all model parameters equally, extends easily to generally curved models, and incorporates world knowledge.
To utilize the full potential of CCD cameras, careful design must be performed. The main contribution to the final precision and reliability comes from camera calibration. Both the precision of the estimated parameters (or any functions of them, e.g., object coordinates) and the sensitivity of the system to undetected model errors are of importance. The paper describes a flexible calibration method that can give an accuracy of 0.05 pixels (a posteriori standard error of the observations). No external reference system or initial parameter values are needed. A distance bar provides the scale information and helps stabilize the network. Particular attention is paid to the sensitivity analysis, which is shown to be very important. This is demonstrated with several network examples that illustrate fatal or weak configurations resulting from careless design. Finally, some recommendations for proper design are given.
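For reference, the accuracy figure quoted above is an a posteriori standard error of unit weight, which can be estimated from the image-space residuals of the adjustment. A minimal sketch, with hypothetical function name and example counts:

```python
import numpy as np

def a_posteriori_sigma0(residuals_px, n_observations, n_unknowns):
    """Estimate sigma_0 (pixels) from image residuals v:
    sigma_0 = sqrt(v^T v / r), with redundancy r = n_observations - n_unknowns."""
    v = np.asarray(residuals_px, dtype=float).ravel()
    redundancy = n_observations - n_unknowns
    if redundancy <= 0:
        raise ValueError("network has no redundancy")
    return float(np.sqrt(v @ v / redundancy))

# Hypothetical example: two residual components per measured image point.
# residuals = reprojected_points - measured_points     # shape (N, 2)
# sigma0 = a_posteriori_sigma0(residuals, 2 * len(residuals), n_unknowns=63)
```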
Mathematical morphology is an approach to image analysis based on the geometric concepts of shape and size, and it therefore provides a very effective tool for shape analysis. In this paper, we present a new morphological shape segmentation algorithm which decomposes a 2-D binary shape into a class of convex polygonal components. In this algorithm, shape information is extracted by using a number of different small shape patterns as structuring elements to probe the given shape through basic morphological operations. Basic morphological operations are also used to transform the given image so that global shape information is retained in the transformed image and can be extracted efficiently using only 'neighborhood' operations. The resulting algorithm is simple and yet very effective. The decomposition examples presented show good agreement between the decomposition results and the natural structures of the given shapes. The shape segments produced can be used to construct structural shape descriptions and for other shape analysis purposes.
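The sketch below is not the paper's algorithm; it only illustrates the generic idea of probing a binary shape with a few small structuring elements via openings and greedily peeling off components. The probe set, greedy rule, and stopping condition are assumptions:

```python
import numpy as np
from scipy import ndimage

# A few small convex probes (structuring elements): square, plus, horizontal bar.
PROBES = {
    "square": np.ones((5, 5), bool),
    "plus":   ndimage.generate_binary_structure(2, 1),
    "h_bar":  np.ones((1, 7), bool),
}

def greedy_morphological_decomposition(shape, probes=PROBES, max_parts=10):
    """Repeatedly open the remaining shape with each probe and peel off the
    largest opening as one component (a generic probing sketch)."""
    remaining = shape.astype(bool).copy()
    parts = []
    for _ in range(max_parts):
        openings = {name: ndimage.binary_opening(remaining, se)
                    for name, se in probes.items()}
        name, best = max(openings.items(), key=lambda kv: kv[1].sum())
        if best.sum() == 0:
            break
        parts.append((name, best))
        remaining &= ~best
    return parts, remaining
```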
Architectures for the development of image recognition algorithms must support the implementation of systematic procedures for solving image recognition problems. All too often, designers develop image recognition architectures in an ad hoc fashion that lacks the structure to meet long-term needs. Vendors typically supply customers with standard image processing libraries and display tools. Combining these tools and formulating development strategies have remained stumbling blocks in the design of complete image recognition algorithm development environments. In this paper, an architecture is presented which provides a well-defined framework and at the same time is sufficiently flexible to accommodate images of multiple sensor and data types. The primary components of the architecture are: ground-truthing, preprocessing (which includes image processing and segmentation), feature extraction, classification, and performance analysis. Powerful and well-defined data structures are exploited for each of the primary components. Groups of programs called tasks manipulate one or more of these data structures, each task belonging to one of the primary components. Multiple tasks can be executed in an unsupervised mode over an entire database of images. Results are then subjected to performance analysis and feedback. A description of the primary components and how they are integrated to facilitate rapid prototyping and development of image recognition algorithms is presented.
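A minimal sketch of the task/data-structure idea, assuming hypothetical task names (ground_truth_loader, preprocess, segment, extract_features, classify, score_against_truth); it shows how tasks sharing a common record can be batched unsupervised over an image database:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Record:
    """Shared data structure passed between tasks for one image."""
    image_path: str
    data: Dict[str, object] = field(default_factory=dict)   # e.g. "segments", "features", "label"

Task = Callable[[Record], Record]

def run_pipeline(image_paths: List[str], tasks: List[Task]) -> List[Record]:
    """Run every task, in order, over an entire database of images."""
    results = []
    for path in image_paths:
        rec = Record(path)
        for task in tasks:
            rec = task(rec)
        results.append(rec)
    return results

# Hypothetical composition matching the architecture's components:
# run_pipeline(paths, [ground_truth_loader, preprocess, segment,
#                      extract_features, classify, score_against_truth])
```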
Linear convolvers can be used to perform a wide variety of important linear image processing functions, and vendors have therefore provided a range of fast hardware linear convolvers. No comparable range of hardware is available to implement binary morphological operations, even though the importance of these operations is now widely recognized. Linear convolvers have sometimes been used to perform morphology in limited ways. This paper describes a general and flexible approach to the use of commercially available linear convolver hardware to carry out binary template matching, with binary erosion and dilation as special cases. The size of the structuring element (or template) is limited only by the size of the convolver kernel. Thus, while typical morphology hardware is limited to 3 × 3 structuring elements, linear convolvers are available to implement structuring elements up to 16 × 16 in one video frame time. Furthermore, the same convolver hardware can be used for both linear and nonlinear operations during different frame times by simple software reprogramming.
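A software sketch of the mapping the paper relies on: thresholding the output of a linear correlation gives the morphological result. The scipy-based functions below are illustrative, not the hardware implementation:

```python
import numpy as np
from scipy import ndimage

def binary_erosion_by_convolution(img, se):
    """Erosion: the SE fits entirely in the foreground wherever the
    correlation equals the number of 1s in the SE."""
    counts = ndimage.correlate(img.astype(int), se.astype(int),
                               mode="constant", cval=0)
    return counts == int(se.sum())

def binary_dilation_by_convolution(img, se):
    """Dilation: at least one cell of the reflected SE overlaps foreground."""
    counts = ndimage.correlate(img.astype(int), se[::-1, ::-1].astype(int),
                               mode="constant", cval=0)
    return counts >= 1

def binary_template_match(img, foreground, background):
    """Template (hit-or-miss style) matching: weight required-foreground cells
    +1, required-background cells -1, don't-cares 0.  The response equals the
    foreground count only at exact matches."""
    weights = foreground.astype(int) - background.astype(int)
    counts = ndimage.correlate(img.astype(int), weights, mode="constant", cval=0)
    return counts == int(foreground.sum())
```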
The manipulation of objects under visual control is one of the key tasks in the implementation of robotic, automated assembly, and flexible manufacturing systems. This paper considers one problem in this area: automating the packing of 2-dimensional arbitrary shapes into a 2-dimensional bounded region, also of arbitrary shape. Two techniques are examined and compared. The first is based on the concept of packing shapes in terms of their minimum-area bounding rectangle. The second is based on image morphological transform techniques using interval coding, whereby the problem translates to the morphological manipulation of large structuring elements within the binary image domain. We show that the morphological technique implemented, which we call the modified opening packing algorithm, enables efficient packing of arbitrary shapes.
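The sketch below is not the paper's modified opening packing algorithm; it only illustrates the underlying morphological idea: eroding the remaining free space by a shape (used as a structuring element) yields every feasible centre position for that shape. The function names and the greedy placement rule are assumptions:

```python
import numpy as np
from scipy import ndimage

def place_shape(free, shape):
    """Return (row, col) where `shape` (odd-sized binary mask) can be centred
    inside the binary free-space map `free`, or None.  Valid centres are the
    pixels surviving erosion of the free space by the shape."""
    valid = ndimage.binary_erosion(free, structure=shape, border_value=0)
    ys, xs = np.nonzero(valid)
    if len(ys) == 0:
        return None
    return int(ys[0]), int(xs[0])          # take the top-most, left-most fit

def pack_greedy(region, shapes):
    """Greedy packer: place each shape at the first feasible centre and carve
    it out of the free space."""
    free = region.astype(bool).copy()
    placements = []
    for shape in shapes:
        mask = shape.astype(bool)
        pos = place_shape(free, mask)
        placements.append(pos)
        if pos is None:
            continue
        r, c = pos
        h, w = mask.shape
        free[r - h // 2:r + h // 2 + 1, c - w // 2:c + w // 2 + 1] &= ~mask
    return placements, free
```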
The Radon transform is an important method for identifying linear features in a digital image. However, the images the Radon transform generates are complex and require intelligent interpretation if lines in the input image are to be identified correctly. This article describes how the images can be pre-processed to make the spots in the Radon transform image easier to identify, and describes Prolog programs which can recognize constellations of points in the Radon transform image and thereby identify geometric figures within the input image.
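The interpretation stage described is written in Prolog; purely to illustrate where its input comes from, here is a hedged Python sketch that computes a Radon transform with scikit-image and picks out its brightest spots as candidate (angle, offset) line parameters. Real use would pre-process the sinogram (smoothing, local-maximum detection) first:

```python
import numpy as np
from skimage.transform import radon

def strongest_lines(image, n_lines=5):
    """Return (angle_deg, offset_px) for the n_lines brightest Radon spots."""
    theta = np.linspace(0.0, 180.0, 180, endpoint=False)
    sinogram = radon(image, theta=theta, circle=True)   # rows: offsets, cols: angles
    flat = np.argsort(sinogram, axis=None)[::-1][:n_lines]
    rows, cols = np.unravel_index(flat, sinogram.shape)
    centre = sinogram.shape[0] // 2
    return [(theta[c], r - centre) for r, c in zip(rows, cols)]
```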
A statistical vision system is proposed for feature detection and evidence combination. It has been successfully applied to locating segments and polygons in images. Each feature is modeled by a random vector X with a multivariate normal distribution, X ~ N(μ_X, Σ_X). After the transformation f(X) = (X − μ_X)ᵀ Σ_X⁻¹ (X − μ_X), this model becomes a random variable with a χ² distribution, and a χ² test is applied to measure the similarity between the data and the expectation vector of each model. Multiple statistics from the tests of local features, such as edges and corners, are combined by summation into statistics for larger features such as segments and polygons. This is justified because the sum of a set of independent χ² random variables is also a χ² random variable, and the geometric meaning of the sum equals the integration of its addends. Therefore, information is coherently combined by summation, and χ² tests are applied consistently throughout the vision system for feature detection.
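A minimal sketch of the statistical machinery described (scipy-based; the significance level and dimensions are placeholders, not the paper's settings):

```python
import numpy as np
from scipy.stats import chi2

def chi_square_feature_test(x, mu, sigma, alpha=0.01):
    """d2 = (x - mu)^T Sigma^-1 (x - mu) is chi-square with dim(x) degrees of
    freedom when x ~ N(mu, Sigma).  Accept the feature hypothesis if d2 falls
    below the (1 - alpha) quantile."""
    diff = np.asarray(x, float) - np.asarray(mu, float)
    d2 = float(diff @ np.linalg.solve(np.asarray(sigma, float), diff))
    return d2, d2 <= chi2.ppf(1.0 - alpha, df=len(diff))

def combine_tests(d2_values, dofs, alpha=0.01):
    """Sum of independent chi-square statistics is chi-square with summed dof,
    which is how local edge/corner evidence is pooled into segment/polygon tests."""
    total = float(np.sum(d2_values))
    return total, total <= chi2.ppf(1.0 - alpha, df=int(np.sum(dofs)))
```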
Deriving generalized representations of 3-D objects for analysis and recognition is a very difficult task. Three types of representation, chosen according to the type of object, are used in this paper. Objects which have well-defined geometrical shapes are segmented using a fast edge/region-based segmentation technique. The segmented image is represented by the plan and elevation of each part of the object if the object parts are symmetrical about their central axis. The plan and elevation concept enables such objects to be represented and analyzed quickly and efficiently. The second type of representation is used for objects having parts which are not symmetrical about their central axis. The segmented surface patches of such objects are represented by the 3-D boundary and the surface features of each segmented surface. Finally, the third type of representation is used for objects which do not have well-defined geometrical shapes (for example, a loaf of bread). These objects are represented and analyzed from their features, which are derived using a multiscale contour-based technique. An anisotropic Gaussian smoothing technique is introduced to segment the contours at various scales of smoothing. A new merging technique is used which provides the current best estimate of break points at each scale. This technique eliminates the loss of localization accuracy at coarser scales without resorting to a scale-space tracking approach.
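As background for the multiscale contour step, here is a generic sketch: ordinary isotropic 1-D Gaussian smoothing of a closed contour at several scales plus a curvature threshold for break-point candidates. The filter, threshold, and scale set are assumptions; this is not the paper's anisotropic filter or merging rule:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def contour_break_points(x, y, sigmas=(2, 4, 8), k_thresh=0.2):
    """For each scale sigma, smooth the closed contour (x, y) and return the
    indices where curvature magnitude exceeds k_thresh."""
    candidates = {}
    for s in sigmas:
        xs = gaussian_filter1d(np.asarray(x, float), s, mode="wrap")
        ys = gaussian_filter1d(np.asarray(y, float), s, mode="wrap")
        dx, dy = np.gradient(xs), np.gradient(ys)
        ddx, ddy = np.gradient(dx), np.gradient(dy)
        kappa = (dx * ddy - dy * ddx) / np.maximum((dx**2 + dy**2) ** 1.5, 1e-9)
        candidates[s] = np.nonzero(np.abs(kappa) > k_thresh)[0]
    return candidates
```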
Almost all existing noise-removal operators are constructed on the assumption that the noise follows some known distribution. However, this assumption is usually questionable, or even incorrect, for real images. This paper presents a learning technique for generating the structuring elements of morphological operators used to remove noise in fingerprint images. The learning technique is based on a genetic procedure, with each chromosome representing the structuring elements of the morphological operators. On each iteration of the genetic procedure, new structuring elements are generated. The usefulness of these elements is evaluated by the quality of the images that result from applying the corresponding morphological operators, followed by a thinning operation, to the fingerprints. Several factors, such as the number of branching points, the number of end points, and the speed of convergence of the thinning operation, are considered in the evaluation formula. The best structuring elements are then selected as the desired ones. No assumption about the noise distribution is made. Although the domain is very specific, the technique is general enough for learning structuring elements of morphological operators used in other applications. The generality is achieved by changing the evaluation rule so that all factors potentially affecting the result of applying the morphological operators can be considered.
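A toy sketch of this kind of genetic search. The fitness function (which would score the thinned fingerprint by branch points, end points, and thinning convergence) is left as a user-supplied callable, and the population size, selection, crossover, and mutation settings are all assumptions, not the paper's parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def evolve_structuring_element(fitness, pop_size=20, generations=50, p_mut=0.05):
    """Evolve a 3x3 binary structuring element maximizing `fitness(se)`."""
    pop = rng.integers(0, 2, size=(pop_size, 9))
    for _ in range(generations):
        scores = np.array([fitness(ind.reshape(3, 3)) for ind in pop])
        parents = pop[np.argsort(scores)[::-1][:pop_size // 2]]   # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, 9)
            child = np.concatenate([a[:cut], b[cut:]])            # one-point crossover
            flip = rng.random(9) < p_mut                          # bit-flip mutation
            children.append(np.where(flip, 1 - child, child))
        pop = np.vstack([parents, children])
    best = pop[np.argmax([fitness(ind.reshape(3, 3)) for ind in pop])]
    return best.reshape(3, 3)
```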
An expert system has been written in the Prolog language which enables a non-expert user to specify the method of manufacture and visual appearance of small decorated cakes. The expert system has been designed to be ergonomically acceptable to non-expert users. It formulates the inspection procedures necessary by relating the method of manufacture to the appearance of a product, and from this the image processing operations are generated in the form of a Prolog+ program. Such a method enables the system to inspect new and varied products without sacrificing the complexity and robustness of inspection.
The purpose of this paper is to describe the development, design, and application of a general-purpose testing, inspection, and surveillance platform for one-of-a-kind production. Products are, for example, ships and aeroplanes, or complex systems like nuclear power plants. In these areas, the earliest possible detection of faults could reliably assure the quality of products and constructions. The inspection and surveillance platform contains a flexible five-axis industrial robot combined with a multi-sensor array. The multi-sensor systems are modular and divided, for example, into thermographic (non-visible) techniques, gray-scale (visible) techniques, and case-specific sensor-data processing systems. To improve the capacity of nondestructive testing (NDT) and inspection procedures, the image processing interface is connected to a local area network. Thus the evaluation of test results by several human or machine-vision interpreters can be done very effectively. Two applications are described. The first is the surface inspection of composite and ceramic-coated materials; here the detection of flaws and the suppression of image structures arising from object irregularities are discussed. The second deals with the automatic detection and location of leaks in vessel parts; in this field the thermographic sensor system is combined with a gray-scale detector. This method is very suitable for engines, jet propulsion units, and other similar components.
Optical inspection systems are readily available for the bench-top inspection of a variety of subjects, including cutting tools. However, the integration of optical tool inspection techniques into precision machining operations requires the consideration of several factors. Some of the questions that must be answered include: What kinds of tools will be used? What tool characteristics are important to measure? How are these characteristics expressed in a meaningful form that will enhance the quality of the manufacturing process? What will be done with the tool inspection data? Will the inspection be performed on-line, in real time, and to what resolution and accuracy? This paper describes the integration of an on-machine optical tool inspection/compensation system (OTICS) into a precision turning machine at the Oak Ridge Y-12 Plant. OTICS is an IBM personal computer (PC) based system that uses a vision interface board to collect cutting tool form data. This information is used by the PC to prepare a compensated part program that avoids the workpiece errors associated with imperfect cutting tools. Machining tests have demonstrated the system's ability to produce workpiece contour accuracies of 0.0002 in. when using cutting tools with errors as large as 0.0046 in.
Surface inspection systems are widely used in many industries, including steel, tin, aluminum, and paper. These systems generally use machine vision technology to detect defective surface regions and can generate very high data output rates, which can be difficult for line operators to absorb and use. A graphical, windowing interface is described which provides the operators with an overview of the surface quality of the inspected web while still allowing them to select individual defective regions for display. A touch screen is used as the only operator input; this required alterations to some screen widgets due to subtle ergonomic differences between touch screen and mouse input. The interface, although developed for inspecting coated steel, has been designed to be adaptable to other surface inspection applications. Facilities are provided to allow the detection, classification, and display functions of the inspection system to be readily changed. Modifications can be implemented on two main levels: changes that reflect the configuration of the hardware system and control the detection and classification components of the surface inspection system are accessible only to authorized staff, while those affecting the display and alarm settings of defect types may be changed by operators, generally dynamically.
The flexible inspection cell consists of an (X, Y, θ)-table surrounded by a set of computer-controlled lamps. Several cameras are placed around the cell, and a pick-and-place manipulator is able to put objects onto the table and/or remove them from it. The physical organization of the cell and the architecture of the system controller are both described. The latter consists of a set of up to eight slave image processing modules, whose actions are organized by a Prolog program. The system provides a convenient user interface, with the following features: pull-down menus, pop-up menus, on-line HELP, cursor (for investigation of image features), interactive mode (for algorithm development), pre-recorded speech output, and speech recognition (for working in hands-off mode). The flexible inspection cell provides a general purpose facility for inspecting complex objects, small-batch artifacts, and assemblies of components.
High development costs of machine vision systems can be reduced by designing more general algorithms that can be used in a wide range of applications, by using modular system architectures in which new algorithms and more computational power can be added easily, depending on the needs of the given applications, and by using powerful tools for the design of, for example, new algorithms, hardware, software, optics, and illumination. This paper overviews the related research in progress at the University of Oulu. The topics to be discussed include hybrid computer architecture for machine vision, a color segmentation algorithm based on hierarchical connected components analysis, an interactive tool for performance analysis of parallel vision programs, and an object-oriented image processing library.
Over the past two years, Laserdot has developed a new machine vision system for obstacle detection by mobile robots. A pulsed laser illuminates the road in a zone from 100 to 150 meters in front of the vehicle, and the backscatter is analyzed by a linear array of photodetectors connected to a computer. Each obstacle is detected and its position determined. The distance is calculated by measuring the pulse time-of-flight, producing a complete three-dimensional image without scanning. The system was jeep-mounted for testing in a military environment at the Angers Technical Center (ETAS) in France. This article describes the Laserdot vision system and its design features, as well as the test results from ETAS. The system is capable of providing information on the shape of the road in front of the vehicle, including slope and banking measurements. Lastly, the future integration of the detector in a mobile robot is detailed. This work has been supported by the Direction des Recherches, Etudes et Techniques (DRET).
Traffic imaging covers a range of current and potential applications. These include traffic control and analysis; license plate finding, reading, and storage; violation detection and archiving; vehicle sensors; and toll collection/enforcement. Experience from commercial installations and knowledge of the system requirements have been gained over the past 10 years. Recent improvements in system component cost and performance now allow products to be applied that provide cost-effective solutions to the requirements for truly intelligent vehicle/highway systems (IVHS). The United States is a country that loves to drive. The infrastructure built in the 1950s and 1960s, along with the low price of gasoline, created an environment where the automobile became an accessible and integral part of American life. The United States has spent $103 billion to build 40,000 highway miles since 1956, the start of the interstate program, which is nearly complete. Unfortunately, a situation has arisen where the options for dramatically improving the ability of our roadways to absorb the increasing amount of traffic are limited. This is true in other countries as well as in the United States. The number of vehicles in the world increases by over 10,000,000 each year. In the United States there are about 180 million cars, trucks, and buses, and this number is estimated to double in the next 30 years. Urban development, and development in general, pushes out from the edges of our roadways, leaving little room to increase the physical amount of roadway. Americans now spend more than 1.6 billion hours a year waiting in traffic jams. It is estimated that this congestion wastes 3 billion gallons of oil, or 4% of the nation's annual gas consumption. The way out of the dilemma is to increase road-use efficiency as well as to improve mass transportation alternatives.
A method is described for the segmentation of color images that have been pre-processed with a previously described data-reduction method. The proposed segmentation is based on the split-and-merge algorithms of Pavlidis, but uses fractal methods (Peano curves) rather than the quad-trees usually associated with the technique. The method shows a speed improvement over the original due to reduced computational complexity.
This paper presents an approach for extracting three-dimensional information using an autofocus lens coupled to a single-camera vision system. The camera gives a plain view of the subject under surveillance. The use of the autofocus lens yields the third dimension (depth), needed in guidance and control operations for robotic applications. The robotic machine vision system is combined with the range determination feature, and the complete system is seen as providing the essential ingredients for flexible and robust robotic guidance.
The development of new approaches for wheat hardness assessment may impact the grain industry in marketing, milling, and breeding. This study used image texture features for wheat hardness evaluation. Application of digital imaging to grain for grading purposes is principally based on morphometrical (shape and size) characteristics of the kernels. A composite sample of 320 kernels for 17 wheat varieties was collected after testing and crushing with a single-kernel hardness characterization meter. Six wheat classes were represented: HRW, HRS, SRW, SWW, Durum, and Club. In this study, parameters which characterize the texture, or spatial distribution of gray levels, of an image were determined and used to classify images of crushed wheat kernels. The texture parameters of crushed wheat kernel images differed with the class, hardness, and variety of the wheat. Image texture analysis of crushed wheat kernels shows promise for use in class, hardness, milling quality, and variety discrimination.
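The texture parameters in question are spatial gray-level statistics. A generic sketch of one such measure, a gray-level co-occurrence matrix with contrast and energy features computed in plain NumPy; the displacement and quantization choices are assumptions, not the study's exact parameter set:

```python
import numpy as np

def glcm_features(gray, levels=32, dx=1, dy=0):
    """Co-occurrence matrix for one displacement (dx, dy) plus two classic
    texture parameters (contrast, energy)."""
    denom = float(gray.max()) or 1.0
    q = (gray.astype(float) / denom * (levels - 1)).astype(int)   # quantize gray levels
    a = q[:q.shape[0] - dy, :q.shape[1] - dx]                     # reference pixels
    b = q[dy:, dx:]                                               # displaced neighbours
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (a.ravel(), b.ravel()), 1)
    p = glcm / glcm.sum()
    i, j = np.indices(p.shape)
    return {"contrast": float(np.sum(p * (i - j) ** 2)),
            "energy": float(np.sum(p ** 2))}
```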
The paper describes a 3-D real-time computer vision system used to support telerobotics applications in hostile and partially unstructured environments. The human operator is always supposed to supervise and control the movement and operation of the robot from a remote site, using appropriate sensing facilities (force reaction, vision, etc.). The main task considered here is the effective use of 3D vision information for the operator in terms of response time and the quality and quantity of the displayed information. The proposed modular hardware architecture has been designed and realized to accomplish the most severe tasks of image preprocessing, feature extraction, and 3D stereo matching of segment features in the scene. It represents a powerful 3D front-end processor able to provide adequate computational power for the subsequent intermediate- and high-level processing stages.
Photogrammetric stations are used for vision-based dynamic control of 3-D related phenomena. The vision sensors are fixed solid-state cameras which are permanently mounted and set up for a specific control task. The on-site calibration of the station allows continuous processing of the 3-D space coordinates of all object points according to their actual 2-D image locations. For automated control processes, the object points are targeted using predefined templates extracted from the perspective images. The precision of an object point measured by the station is better than 1:10,000 of the object volume in all three coordinates. The vision application presented here is the locating of car bodies in the 3-D space of a robotic sealing cell.
Often, little effort is spent on setting up a camera for an inspection or measurement system. Even though people may know the theoretical formulas, they rarely bother to use them; a trial-and-error approach is more common, particularly when selecting a lens and imaging configuration for a camera. This paper examines the optical portion of a camera system. Some non-threatening and easy-to-use tools are described. These tools encapsulate imaging theory in a convenient format, so quick estimates of basic optical designs can be obtained by entering a few parameters. Features like field of view, resolution, and depth of field can be obtained quickly before mounting a lens onto a camera.
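In the same spirit, a minimal thin-lens "calculator" sketch; the formulas are the standard first-order approximations, while the function name and the default circle of confusion (two pixel widths) are assumptions, not the paper's tool:

```python
def imaging_setup(focal_mm, object_dist_mm, sensor_w_mm, pixel_um,
                  f_number, coc_um=None):
    """Quick thin-lens estimates of field of view, object-space pixel size,
    and depth of field for a candidate lens/camera configuration."""
    m = focal_mm / (object_dist_mm - focal_mm)        # magnification (thin lens)
    fov_mm = sensor_w_mm / m                          # horizontal field of view
    pixel_on_object_um = pixel_um / m                 # object-space pixel footprint
    coc_um = coc_um if coc_um is not None else 2 * pixel_um   # circle of confusion
    dof_mm = 2 * f_number * (coc_um / 1000.0) * (m + 1) / m**2
    return {"magnification": m, "fov_mm": fov_mm,
            "object_pixel_um": pixel_on_object_um, "dof_mm": dof_mm}

# Example: 25 mm lens, 400 mm working distance, 8.8 mm sensor, 8 um pixels, f/8.
# print(imaging_setup(25, 400, 8.8, 8, 8))
```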
Manufacturers must increase production rates and simultaneously tighten quality/process controls in order to meet ever-increasing competition and consumer demands for high-quality products. This requires that products be manufactured more efficiently, at reduced cost, and with minimum scrap/waste. This in turn demands higher-speed inspection with higher accuracy and consistency, as well as intelligence. Achieving these goals will require highly parallel systems that perform image processing and pattern recognition in real time in various manufacturing environments. This paper presents a hybrid architecture combining state-of-the-art optical processing with conventional digital processing. A Solid Optical Correlator (SOC) system has been built and validated. The SOC incorporates rigidity, stability, and manufacturability--attributes which facilitate the use of the optical correlator in real-world industrial machine vision applications.
Machine vision systems are characterized by a requirement for (at least) two disparate kinds of processing: low-level, highly repetitive, data-independent processing and high(er)-level, data-dependent processing, often involving decision making. To date, the most efficient implementations of low-level processing, typically at the pixel level, are found in special-purpose board- and chip-level devices. At the higher level, however, increasingly abstract or symbolic representations are required, and at present the capability of appropriate single processors is insufficient to match the low-level component. Here parallel processing technology is used to provide the required processing speed. This paper presents the design and implementation of one such system, in which the low-level component consists of a number of Datacube image processing boards and the high-level component is provided by an array of transputers. We show how the design criteria motivate the choice of hardware and how flexible the resulting system actually is. The utility of the system and some achievable performance figures are presented in the context of Canny edge detection and a decentralized target tracker. The future development of the system is considered in the light of the forthcoming T9000 Transputer.
This paper describes a special architecture which detects edges of an image using the Laplacian of Gaussian (LoG) operator. Since edge detection with the LoG operator is a computation-bound problem, the special architecture is designed for parallel processing. The parallelism is achieved by using the residue number system (RNS) and the systolic concept. The architecture consists of an output converter, which converts the residue numbers to binary numbers, and eight processing elements, one for each modulus. Both the processing elements and the output converter are designed as systolic arrays. A 2-micrometer CMOS technology is used to lay out the basic logic gates. Using the delay times of these gates, the architecture is simulated with Verilog-XL. As a result of the simulation, a 50 MHz clock is selected as the system clock, which is fast enough to detect the edges of an image frame in a TV frame time. Hence, the architecture can be applied to real-time vision systems.
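The hardware accelerates a fixed numerical computation; for reference, here is that computation in software form: a sampled LoG kernel, convolution, and zero-crossing detection (a plain scipy sketch, not the RNS/systolic design):

```python
import numpy as np
from scipy import ndimage

def log_kernel(sigma, size=None):
    """Sampled Laplacian-of-Gaussian kernel, adjusted to sum to zero."""
    size = size or int(2 * round(3 * sigma) + 1)
    r = size // 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    s2 = sigma ** 2
    g = np.exp(-(x**2 + y**2) / (2 * s2))
    log = (x**2 + y**2 - 2 * s2) / (s2**2) * g
    return log - log.mean()                       # flat regions then give zero response

def log_edges(image, sigma=2.0):
    """Convolve with the LoG operator and mark zero crossings as edges."""
    response = ndimage.convolve(image.astype(float), log_kernel(sigma))
    sign = response > 0
    zc = np.zeros_like(sign)
    zc[:-1, :] |= sign[:-1, :] != sign[1:, :]     # vertical sign changes
    zc[:, :-1] |= sign[:, :-1] != sign[:, 1:]     # horizontal sign changes
    return zc
```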
Calibration is the process of establishing the relationship between camera and global coordinate systems. In the case of stereoscopic vision, the relationship between two cameras and a global coordinate system must be established. Many techniques have been proposed to perform the calibration process, most requiring a substantial amount of programming and special test fixtures. This paper proposes a backpropagation neural network to estimate the transformation between two camera systems and a global coordinate system. The approach requires minimal programming and no special test fixtures. This paper describes the artificial neural network architecture along with the procedures used in training. Encouraging results are obtained from preliminary test runs.
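A hedged sketch of the general idea: a small backpropagation regressor mapping stereo pixel coordinates directly to world coordinates, here built with scikit-learn. The network size, activation, and training data are assumptions, not the paper's architecture:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def train_stereo_mapping(pixels_left, pixels_right, world_xyz):
    """Fit a backpropagation network mapping (u1, v1, u2, v2) -> (X, Y, Z).
    Training pairs would come from imaging a handful of known reference points."""
    inputs = np.hstack([pixels_left, pixels_right])      # shape (N, 4)
    net = MLPRegressor(hidden_layer_sizes=(20, 20), activation="tanh",
                       max_iter=5000, random_state=0)
    net.fit(inputs, world_xyz)                           # world_xyz: shape (N, 3)
    return net

# Afterwards, new points are "triangulated" by prediction:
# xyz = net.predict(np.hstack([uv_left, uv_right]))
```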
Vision information processing involves extracting, describing, and explaining information from images of 3D environments. It can be divided into three levels according to technical complexity and the methods used: low-level vision, middle-level vision, and high-level vision. Based on the characteristics and requirements of each level, we propose a DSP-based low-level vision processing module, a middle-level vision processing module (PIPE), and a high-level vision processing module, a parallel graph reduction machine (PGR). Hosted on the same platform, an IBM PC/AT, these three modules compose a hybrid vision computer. It may be an efficient solution for processing images with large data volumes and critical real-time requirements.
While linear cameras offer substantial advantages over standard television cameras for many vision applications, the lack of suitable high-performance image-processing hardware has significantly limited their potential benefits. Very powerful image-processing hardware is available for matrix cameras and, although many of these systems have provisions for interfacing to linear cameras, they restrict the inherent flexibility and power of linear cameras. The very large images and high data rates associated with linear cameras pose significant problems for existing linear-camera hardware designs, which have very limited processing capability. Hence, a highly desirable objective is new hardware that will provide significantly improved processing capability without the need for frame buffers and their inherent restrictions on image size and format. A variety of approaches are being evaluated for enhancing linear-camera processing architectures, using processing power, cost effectiveness, flexibility, and ease of programming as the primary criteria.
Each year at harvest time, millions of seed potatoes are checked for the presence of viruses by means of an ELISA test. The Potato Operation aims at automating the potato manipulation and pulp sampling procedure, starting from bunches of harvested potatoes and ending with the deposit of potato pulp into ELISA containers. Automating these manipulations raises several issues linking robotics and computer vision. The paper reports on the current status of this project. It first summarizes the robotic aspects, which consist of locating a potato in a bunch, grasping it, positioning it in the camera field of view, pumping the pulp sample, and depositing it into a container. The computer vision aspects are then detailed. They concern locating particular potatoes in a bunch and finding the position of the best germ, where the drill has to sample the pulp. The emphasis is put on the germ location problem. A general overview of the approach is given, which combines the processing of both frontal and silhouette views of the potato with movements of the robot arm (active vision). Frontal and silhouette analysis algorithms are then presented. Results are shown that confirm the feasibility of the approach.
Many commercial materials sold in bulk form occur as particulates or pellets at some intermediate stage in their production. Detection of defects and foreign particles at this stage is a useful quality control function. This paper describes a concept and implementation for measuring contaminant count and removing undesirable material from a product stream. The system in its present form is restricted to materials that exhibit low optical loss. Many polymers and other particulates fall into this category or are sufficiently close that the system functions effectively. An optical scanner and material transport system are integrated with a computer system to perform the detection and sorting functions. Throughputs of 60 lbs/hour have been demonstrated and higher rates are possible.
We have developed a prototype system for leakage detection using image processing. The system detects oil, water, or vapor leaks from plant components at power plants. Its features are summarized as follows: (1) By setting the first sampled image of the scene as the reference image and storing another type of reference image in the form of an x-projection, leakage detection for oil or water, even in a steady flow, was realized; (2) False detections of oil or water leaks due to vibrations of plant components and cameras could be eliminated by using non-vibrating monitoring regions; and (3) Leakage detection for oil or water required at least 200 lx of illuminance at the measured objects and a 550 × 418 mm field of view, while vapor leakage detection could be done over a 5000 × 3800 mm field of view.
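A minimal sketch of the reference-image scheme in feature (1) and the monitoring regions in feature (2). The class name, thresholds, and the exact way the x-projection reference is used are assumptions:

```python
import numpy as np

class LeakDetector:
    """The first sampled frame is the pixel reference; its column sums form an
    x-projection reference, which tolerates steady flows whose profile barely
    shifts.  Only a non-vibrating monitoring region is examined."""
    def __init__(self, first_frame, region, pixel_thresh=25, area_thresh=200):
        r0, r1, c0, c1 = region
        self.region = region
        self.ref = first_frame[r0:r1, c0:c1].astype(float)
        self.ref_xproj = self.ref.sum(axis=0)
        self.pixel_thresh = pixel_thresh
        self.area_thresh = area_thresh

    def check(self, frame):
        """Return (leak_flag, mean shift of the x-projection) for one frame."""
        r0, r1, c0, c1 = self.region
        roi = frame[r0:r1, c0:c1].astype(float)
        changed = np.abs(roi - self.ref) > self.pixel_thresh
        xproj_shift = float(np.abs(roi.sum(axis=0) - self.ref_xproj).mean())
        return bool(changed.sum() > self.area_thresh), xproj_shift
```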
Machine vision tasks are highly computation-intensive and take a very long time on conventional computers. Tightly coupled shared-memory multiprocessor systems are very suitable for such tasks. However, memory access conflicts pose a serious problem in shared-memory multiprocessor systems, limiting system size and reducing performance. This paper describes a Multifunction Distributed Shared Memory (MFDSM) architecture, which reduces memory access conflicts to a minimum and increases the equivalent bandwidth of the memory system by up to N times (where N is the number of MFDSM segments mapped to the same system address). Only critical resources have to be accessed mutually exclusively. A classical correlation image matching problem is taken as an example to describe the principle and to analyze the performance of MFDSM. A software simulation package is introduced and simulation results are given.
Color images, just like gray-scale images, are formed through a series of noise-contaminating processes. To obtain reliable results in subsequent image analysis, the noise-suppression problem in the original data must be addressed and signal distortion minimized. Generally, noise is suppressed individually in each data channel. In this paper, we first illustrate that this technique generally cannot produce optimum results and may even introduce unpredictable color spots (artifacts). In order to preserve color appearance while not smearing object boundaries, two algorithms using median-type operations are then introduced: the scalar median filter and the vector median filter. Experiments and comparisons of both schemes are included.
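A plain-NumPy sketch of the vector median filter (window size and distance norm are choices, not necessarily the paper's): each output pixel is an actual pixel from the input window, which is why this filter cannot introduce new color artifacts.

```python
import numpy as np

def vector_median_filter(img, k=3):
    """Vector median filter for a color image (H, W, 3): the output pixel is
    the window pixel whose summed distance to all other window pixels is smallest."""
    r = k // 2
    H, W, C = img.shape
    pad = np.pad(img.astype(float), ((r, r), (r, r), (0, 0)), mode="edge")
    out = np.empty_like(img)
    for y in range(H):
        for x in range(W):
            win = pad[y:y + k, x:x + k].reshape(-1, C)
            d = np.linalg.norm(win[:, None, :] - win[None, :, :], axis=2).sum(axis=1)
            out[y, x] = win[np.argmin(d)]
    return out
```

The scalar median filter, by contrast, simply takes the median independently in each channel (e.g. np.median over the window per channel), which is exactly what can create colors not present in the original window.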
This paper presents a new approach to stereo correspondence: stereo backprojection using the Hough transform. The usual projection process is reversed, so that the stereo images are backprojected onto the actual objects in the original scene. Since indiscriminate backprojection would produce numerous false targets together with the actual objects, a Hough method is employed to decide the correct backprojected surfaces of the objects. When ambiguities occur among candidate matchings, the accumulated votes of their background planes serve as the resolving criterion for the best matchings. The local processing nature of the Hough transform is exploited for parallel implementation on transputers. Experimental results are presented, and feasible extensions to this work are also discussed.
This paper presents an efficient system for recovering the structural properties of objects with unknown 3-D shape. In this approach, we build adaptive maps for interpreting the object structure under uncertainty. We decompose the system into four major modules: the sensing interface, the knowledge base, map building and decision making, and the controller. The sensing interface acquires visual data and performs low- and intermediate-level vision tasks for enhancing the acquired image sequences. The knowledge base contains the different visual primitives and the possible exploratory actions. The map building and decision making module utilizes the different predicates stored in the knowledge base in order to resolve possible uncertainties. The decisions made in this module are then utilized by the controller module, which resolves possible inconsistencies. The above process is repeated until the map stored in the third module contains a minimal number of three-dimensional interpretations that cannot be reduced further. Motion primitives are used as sensing actions. Uncertainty models are developed for the sensor and for the image processing techniques being used. Further filtering and a rejection mechanism are then developed to discard unrealistic motion and structure estimates. This system is capable of recognizing the object structure efficiently and adaptively.
Shape recognition for arbitrarily shaped objects is a very difficult problem in machine vision and in remote sensing. The radiometric technique described can be used for arbitrary illumination and detection coordinates to obtain surface shape. Using a high-resolution CCD-based imager, the local surface curvature is shown to produce significant changes in the detected signal-to-noise ratio, sufficient to reconstruct the object's piecewise-continuous shape from its slope as a function of projected pixel position. The technique is applied to remote sensing for digital map generation and to obstacle shape determination for hazard avoidance by autonomous vehicles.
In this paper, a discrete-finite element method (DFEM) is formulated and then applied to solve problems arising in the excavation of underground tunnels in rock mass with faults, which involve the elasto-visco-plasticity and brittleness of rock mass as well as large deformation and discontinuous displacement (sliding, toppling, fractures, etc.). The idea employed in the DFEM is as follows. The domain of interest Ω is divided 'adaptively' into two parts according to certain fracture rules: the continuum domain Ω_con and the discrete block domain Ω_dis, in which large displacement and discontinuity (fractures) may occur and propagate according to certain fracture criteria. The method uses both the local static relaxation (SR) method and the dynamic relaxation (DR) method in Ω_dis, while the traditional FEM, which can be interpreted as a global static relaxation method, is used within each discrete block in Ω_dis as well as in the continuum domain Ω_con. By coupling these two methods within boundary elements in Ω_dis and on the shared boundary ∂Ω_dis ∩ ∂Ω_con, we obtain a global iteration scheme that converges to an equilibrium state. The method has almost the same complexity as the hybrid DEM/FEM method; however, it models the fracture process more naturally. A mathematical explanation of the result is that the method produces a min-max solution to the problem.