The analysis of images can take advantage of existing knowledge; this may be denoted as data-driven or knowledge-based image analysis. One example is the use of topographic maps in the study of aerial imagery. We report on a software system for object recognition using map guidance. Three algorithms have been developed to find areally extended objects in a digital image. The results can be applied to compare a map database with an image, to monitor changes, to perform geometric and radiometric rectification, and to support classification with training areas, among other tasks.
For an in-depth study of features of the celestial surface, the stereoscopic imaging technique serves as a powerful and efficient tool, as it brings depth information into the displayed images. A system that registers a stereo pair of satellite images, processes the dual images for enhancement and restoration, and displays them on twin picture tubes for stereo visualization is described. As an example, edge enhancement, median filtering, and histogram equalization techniques are applied to an experimental stereo pair of images to show a considerable improvement in the quality of the visualized 3D image. A bicircularly polarized viewing system is used in the stereoscopic display, enabling mass viewing. A method of generating the range image and the stereoscopic image pair is also indicated.
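Two of the enhancement steps named above, median filtering and histogram equalization, can be sketched in a few lines of NumPy. This is an illustrative sketch, not the paper's implementation; the image is assumed to be a 2-D uint8 array.

```python
import numpy as np

def median_filter3(img):
    """3x3 median filter (edges handled by replicate padding)."""
    p = np.pad(img, 1, mode="edge")
    stack = [p[r:r + img.shape[0], c:c + img.shape[1]]
             for r in range(3) for c in range(3)]
    return np.median(np.stack(stack), axis=0).astype(img.dtype)

def hist_equalize(img):
    """Map grey levels through the normalized cumulative histogram."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())   # scale to 0..1
    return (cdf * 255).astype(np.uint8)[img]

noisy = np.full((8, 8), 100, dtype=np.uint8)
noisy[4, 4] = 255                     # a single noise spike
clean = median_filter3(noisy)         # spike removed, flat areas untouched
eq = hist_equalize(clean)
```

The median filter removes the isolated spike without blurring the flat background, which is why it is preferred to linear smoothing in this kind of pipeline.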
As a consequence of the increasing number of multi-temporal and multi-source images in remote sensing, the need for new concepts and techniques to exploit the time dimension is growing rapidly. The forthcoming French SPOT satellites for Earth observation will speed up the flow of high-resolution, repetitive data. This paper focuses on the multitemporal segmentation, extraction, and analysis of remote sensing images, as a part of geometric reasoning and scene understanding. In the context of an agricultural experiment, the "Lauragais project", the following features are described: how to individualize entities (parcels of land) on each mono-temporal image, using a non-exhaustive multispectral segmentation based on a fuzzy-sets approach; how to give a geometric description of the spatial relations between the segmented entities, using a geometric database to access image data on an entity-by-entity basis; and how to compare these geometric descriptions from date to date and give a multitemporal description of the landscape, by combining all these segmentation results into a training set for a new classification scheme.
The problem of segmenting aerial photographs can generally not be solved in a reasonable manner by using the information in the image alone. In this paper we present a structured approach to the problem which, in addition to the image, uses simple and explicit knowledge about the geometry of objects and their spatial relations. The method is relaxation-like: it works with hypotheses which are given weights and iterates to a solution. The incorporation of evidence for a hypothesis is, however, explicit and uses no complex compatibilities. The approach contains three processing levels. At the lowest level, hypotheses are formed about the material type of a pixel based on intensity and variation of intensity. At the intermediate level, these hypotheses are refined into region hypotheses by use of size and shape constraints; this level also works on pixel labels. One of the main problems is that certain regions are highly nonuniform with respect to the low-level features; nonetheless, they are well defined by neighboring homogeneous regions. At the highest level, region labels are considered: region hypotheses are refined on the basis of relations between regions. At the present stage the method is not used for complete segmentation; instead, it is a step toward finding regions that can be matched with a map. Experimental results exist mainly for the two highest levels.
Some non-linear digital filtering techniques are presented for fast image processing, in particular for real-time recognition of moving objects. It is outlined how these filtering techniques can reduce noise components and disturbances, giving highly efficient extraction of edges and boundaries in the processed images: one filter can be applied before edge detection to perform a non-linear smoothing; other filters can be used after edge detection to eliminate noise spikes and enhance useful patterns. The application of the above digital image processing techniques in a system for moving object recognition is shown. Experimental results are reported.
The ability to electronically collect, compile, and translate large arrays of statistical information into animated visual character representations provides a new dimension for processing and communicating information. Interfacing remote sensing instrumentation and electronic digital recording devices with computerized image processing techniques makes it possible to produce computer-generated videographic animated simulation models from remotely sensed information. This electronic graphic format creates a new multidimensional approach for exploring and enhancing the process of communicating information.
A review of the application possibilities of digital image processing of remotely sensed phenomena of the Earth is given. Several application examples, processed on DIBIAS, DFVLR's digital interactive Bavarian image analysis system, and representing different disciplines, are discussed, showing different possibilities of image processing. Among them are geologic, oceanographic, and cartographic examples, as well as applications in the field of atmospheric physics, a correlation of Seasat SAR to a LANDSAT scene, and an evaluation of a commercial compression algorithm.
This paper describes two new techniques of image registration as applied to scenes consisting of natural terrain. The first technique is a syntactic pattern recognition approach which combines the spatial relationships of a point pattern with point classifications to perform image registration accurately. In this approach, a preprocessor analyzes each image to identify points of interest and to classify these points based on statistical features. A classified graph possessing perspective-invariant properties is created and converted into a classification-based grammar string. A local match analysis is performed and the best global match is constructed. A probability-of-match metric is computed to evaluate match confidence. The second technique described is an isomorphic graph matching approach called Mean Neighbors (MN). A MN graph is constructed from a given point pattern, taking into account the elliptical projections of real-world scenes onto a two-dimensional surface. This approach exploits the spatial relationships of the given points of interest but neglects the point classifications used in syntactic processing. A projective, perspective-invariant graph is constructed for both the reference and sensed images, and a mapping of the coincidence edges occurs. A probability-of-match metric is used to evaluate the confidence of the best mapping.
A new algorithm is proposed to decompose a set of binary pictures into a limited number of elementary patterns. The method consists of an expansion on an orthogonal basis. The output images are examined according to the following three criteria: energy, entropy, and visual quality. The algorithm has been applied to samples of road signs and multi-font characters.
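The decomposition idea can be sketched as follows. The abstract does not say which orthogonal basis is used, so the SVD basis below is an assumption made for illustration; the energy criterion is then just the fraction of squared singular values retained.

```python
import numpy as np

rng = np.random.default_rng(0)
pics = rng.integers(0, 2, size=(10, 8, 8)).astype(float)  # ten binary 8x8 pictures
X = pics.reshape(10, -1)                                  # one picture per row

# orthonormal basis for the picture set via SVD: X = U @ diag(s) @ Vt
U, s, Vt = np.linalg.svd(X, full_matrices=False)

k = 4                                                     # keep k elementary patterns
Xk = (U[:, :k] * s[:k]) @ Vt[:k]                          # rank-k reconstruction

energy_kept = (s[:k] ** 2).sum() / (s ** 2).sum()         # energy criterion
```

The rows of `Vt[:k]` play the role of the elementary patterns; entropy and visual quality would be evaluated on `Xk` in the same spirit.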
It is considered reasonable to suppose that highly developed sensory systems such as the human visual system have become roughly optimised by evolution. It is therefore desirable to attempt to understand the mechanisms of such systems and to consider their application in digital image processing. Over the last few years new techniques have been employed by neurophysiologists and anatomists to explore the building blocks of the 'image processing' which precedes the act of perception. The author has pieced together various fragments of data to devise a computer simulation of these preperceptual processes. The simulation produces fragmentary line and edge information which is fully coded in terms of location, strength, and orientation. It also associates connected groups of fragmentary lines and edges and analyses them statistically in terms of number, mean strength, and fluctuation of strength. The coding of the fragmentary perceptual input data is such that virtually any question may be addressed to facilitate recognition of partially obscured or complex objects. The perceptual input plane is basically quiet, containing only profile data in many situations; it is therefore admirably suited to the extraction of the dynamic behaviour of associated profiles. Spectral coding may also be incorporated, if desired. The paper discusses in some depth the components of interactive processing which have been used in the simulation and demonstrates the variety of forms of 'perceptual' data available.
Information theory is used to assess the performance of sensor-array imaging systems, with emphasis on the performance obtained with image-plane signal processing. By electronically controlling the spatial response of the imaging system, as suggested by the mechanism of human vision, it is possible to trade off edge enhancement for sensitivity, increase dynamic range, and reduce data transmission. Computational results show that signal information density varies little with large variations in the statistical properties of random radiance fields; that most information (generally about 85 to 95%) is contained in the signal intensity transitions rather than in the levels; and that performance is optimized when the OTF of the imaging system is nearly limited to the sampling passband, minimizing aliasing at the cost of blurring, and the SNR is very high, permitting the retrieval of small spatial detail from the extensively blurred signal. Shading the lens aperture transmittance to increase depth of field and using a regular hexagonal sensor array instead of a square lattice to decrease sensitivity to edge orientation also improve the signal information density, by up to about 30% at high SNRs.
Linear octrees have been introduced recently [1] as a data structure capable of compacting the ten fields normally required by an octree node into one. Operations like projection, superposition, adjacency, and mapping from and to a 2^n x 2^n x 2^n digital array have also been described in the above reference. In this paper the authors propose an approximate solution to the determination of the 3D border of an object represented by a linear octree. The corresponding algorithm is shown to be executable in O(nN) time and to require fewer than 4N memory locations, where N is the number of black nodes of the linear octree and n is the resolution of the binary image.
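The border notion itself is easy to illustrate if the octree is flattened to its black voxels: a black voxel belongs to the 3D border when at least one of its six face-neighbours is white. This sketch works on a plain voxel set rather than on locational codes, so it is a simplification of the paper's algorithm, not a reproduction of it.

```python
# a 4x4x4 solid cube inside a 2^3 x 2^3 x 2^3 binary volume
black = {(x, y, z)
         for x in range(8) for y in range(8) for z in range(8)
         if 2 <= x < 6 and 2 <= y < 6 and 2 <= z < 6}

faces = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

# border voxels: black voxels with at least one white face-neighbour
border = {v for v in black
          if any((v[0] + dx, v[1] + dy, v[2] + dz) not in black
                 for dx, dy, dz in faces)}
```

For the 4x4x4 cube, only the 2x2x2 core is interior, so 56 of the 64 black voxels lie on the border.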
We propose a hybrid method to detect simple periodic structures in real time. The method is based on a video-optical feedback system which is well adapted to the problem since, in each feedback loop, the pattern under study is shifted in a given way. The global transfer function of such a system depends on both time and space parameters. After a theoretical study of the stability conditions of the loop, we analyse the effect of nonlinearities on the global contrast as a function of spatial frequency. An experimental verification of these properties is obtained with a test pattern.
A thinning algorithm of the banana-peel type is presented. In each iteration pixels are attacked from all directions (there are no sub-iterations), and the deletion criteria depend on the 24 nearest neighbours.
The growing interest in visual information systems has given rise to several investigations of efficient image data analysis, storage, and retrieval systems, characterized by the following features:
Most edge extraction techniques are local operators, providing only local information without any structural information. Therefore edge points by themselves are not adequate as primitive descriptors in computer vision, and local edge points need to be linked into long, straight or slowly curving, line segments. In this paper, a simple and efficient curvilinear feature extraction algorithm using minimum spanning trees is described. The new algorithm is based on the minimum spanning trees found from the edge points. The purpose of finding minimum spanning trees is to link edge points, thus filling gaps and providing structural information. An approximation technique which transforms curvilinear features into straight lines is also described.
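The linking step can be sketched as a minimum spanning tree over the detected edge points; Prim's algorithm with Euclidean weights is used here as a plausible stand-in, since the abstract does not name the MST algorithm.

```python
import math

def mst_edges(points):
    """Return the n-1 edges (i, j) of an MST over a list of (x, y) points."""
    n = len(points)
    # best[j] = (distance from j to the tree, nearest tree vertex)
    best = {j: (math.dist(points[0], points[j]), 0) for j in range(1, n)}
    edges = []
    while best:
        j = min(best, key=lambda k: best[k][0])   # closest outside vertex
        _, i = best.pop(j)
        edges.append((i, j))
        for k in best:                            # relax distances via j
            dk = math.dist(points[j], points[k])
            if dk < best[k][0]:
                best[k] = (dk, j)
    return edges

# two collinear fragments with a gap between them
pts = [(0, 0), (1, 0), (2, 0), (10, 0), (11, 0)]
tree = mst_edges(pts)   # 4 edges; the gap between (2,0) and (10,0) is bridged once
```

The MST bridges the gap between the two fragments with exactly one edge, which is precisely the gap-filling behaviour the paper exploits.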
A predictive Kalman-Bucy filter controls the matched scanning of an optical contour sensor to track contours and extract contour parameters. Contours are disassembled into segments with a steady parameter set. The parameter data are composed into a relational graph structure for syntactic pattern recognition in the field of robot vision. Immediate applications of contour tracking include the analysis of threadlike molecules and aerodynamic flow patterns, control of edge welding, and visual inspection. Matched scanning and Kalman filtering cope with weak and broken contours in highly structured backgrounds. Selection of pictorial data reduces the computer workload to the real-time capacity of a PDP 11/23 minicomputer.
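A minimal sketch of the predictive filtering idea: a discrete 1-D constant-velocity Kalman filter that predicts the next contour ordinate so the sensor can be pointed ahead of the measurement. The noise covariances below are illustrative assumptions, not values from the paper.

```python
import numpy as np

F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition: position, velocity
H = np.array([[1.0, 0.0]])               # we observe position only
Q = np.eye(2) * 1e-4                     # process noise (assumed)
R = np.array([[0.25]])                   # measurement noise (assumed)

x = np.array([[0.0], [0.0]])             # initial state estimate
P = np.eye(2)                            # initial covariance

for z in [0.9, 2.1, 2.9, 4.2, 5.0]:      # noisy ordinates of a contour of slope 1
    # predict ahead of the next measurement
    x = F @ x
    P = F @ P @ F.T + Q
    # update with the measured ordinate
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x = x + K @ (np.array([[z]]) - H @ x)
    P = (np.eye(2) - K @ H) @ P

velocity = float(x[1, 0])                # converges toward the true slope of 1
```

A break in the contour simply means skipping the update step and coasting on the prediction, which is how the filter copes with broken contours.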
An image coding technique, based on a simplified description of the regions composing the image, is presented. Each region of the image is made of the maximum number of adjacent picture elements (pels) whose grey-level evolution contains no sharp discontinuities. The pels within the regions provide the texture information, whereas the region boundary points represent the contour information. Image coding is carried out by approximating the contour information and the texture information in each region, using different global analytical functions for each component. This adaptive image coding scheme leads to compression ratios greater than 50 to 1.
Picture bandwidth reduction employing DPCM and orthogonal transform (OT) coding for TV transmission has been widely discussed in the literature; both techniques have their own advantages and limitations in terms of compression ratio, implementation, sensitivity to picture statistics, and sensitivity to channel noise. Hybrid coding, introduced by Habibi as a cascade of the two techniques, offers excellent performance and proves attractive, retaining the special advantages of both. In recent times interest has shifted to hybrid coding and, in the absence of a report on the relative performance of hybrid coders in different configurations, an attempt has been made to collate this information. Fourier, Hadamard, Slant, Sine, Cosine, and Haar transforms have been considered for the present work.
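The Habibi-style cascade can be sketched as a 1-D cosine transform along each row followed by DPCM (previous-row prediction) on the coefficients down the columns. The quantizer is omitted here, which makes the cascade exactly invertible and gives an easy correctness check; this is an illustrative sketch, not any of the specific coder configurations compared in the paper.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix."""
    k = np.arange(n)[:, None]
    m = 2 * np.arange(n)[None, :] + 1
    C = np.cos(np.pi * k * m / (2 * n)) * np.sqrt(2.0 / n)
    C[0] /= np.sqrt(2.0)
    return C

img = np.arange(64, dtype=float).reshape(8, 8)
C = dct_matrix(8)

coef = img @ C.T                                         # transform each row
resid = np.diff(coef, axis=0, prepend=np.zeros((1, 8)))  # DPCM down the columns

# decoder: undo the DPCM, then invert the transform
coef_rec = np.cumsum(resid, axis=0)
img_rec = coef_rec @ C                                   # C orthonormal: inverse = transpose
```

In a real coder the residuals `resid` would be quantized and entropy coded; the trade-offs between transforms enter through how the coefficient statistics interact with that quantizer.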
Transform coding has been generally accepted as one of the best methods for data compression. In transform processing, the image data are mapped into a new plane, and the elements with minimum variances in this domain are discarded so as to achieve data compression for a given quality. In this paper, a rectangular transform is defined for a given data compression. Studies made on different types of kernels show that the distortion is mainly due to rounding off of the data in the transform domain. This is minimized by proper selection of the inverse transform. Taking into consideration the correlation properties of the image, an optimization procedure is adopted to determine the best kernel. Results clearly indicate that the rectangular transform can be profitably used for a restricted class of pictures.
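The variance-based selection step can be sketched on an ensemble of 2x2 blocks: transform each block, measure the variance of every coefficient over the ensemble, and zero the lowest-variance coefficient before inverting. The 2x2 orthonormal Hadamard kernel below is a stand-in assumption, not the paper's rectangular transform.

```python
import numpy as np

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)   # orthonormal 2x2 kernel

rng = np.random.default_rng(1)
# correlated blocks: a common level per block plus small texture noise
blocks = rng.normal(size=(100, 1, 1)) * np.ones((1, 2, 2)) \
         + 0.1 * rng.normal(size=(100, 2, 2))

coef = np.einsum("ij,njk,lk->nil", H, blocks, H)          # Y = H X H^T per block
var = coef.var(axis=0)                                    # variance per coefficient
keep = var >= np.sort(var.ravel())[1]                     # discard the lowest-variance one

rec = np.einsum("ji,njk,kl->nil", H, coef * keep, H)      # X = H^T Y H
mse = float(((rec - blocks) ** 2).mean())                 # distortion from discarding
```

Because the blocks are strongly correlated, almost all ensemble variance sits in the DC coefficient, so discarding a low-variance coefficient costs very little distortion.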
Probabilistic relaxation labeling has been shown to be useful in image processing, pattern recognition, and artificial intelligence. The approaches taken to date have been encumbered with computationally extensive summations which generally prevent real-time operation and/or easy hardware implementation. This paper presents a new approach to the relaxation labeling problem using modular, VLSI-oriented hierarchical complex operators. One of the fundamental concepts of this work is the representation of the probability distribution of the possible labels for a given object (pixel) as an ellipse, which may be summed with neighboring objects' distribution ellipses, resulting in a new, relaxed label space. The mathematical development of the elliptical approach is presented and compared to more classical approaches, and a hardware block diagram showing the implementation of the relaxation scheme using VLSI chips is presented. Finally, results are shown which illustrate iterative application of the modular scheme to both edges and lines.
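For contrast with the elliptical scheme, here is a sketch of one step of the classical summation form the paper seeks to replace: each pixel's label distribution is nudged by the compatibility-weighted support of its neighbours and renormalized (the Rosenfeld-Hummel-Zucker update). The compatibility values are illustrative assumptions.

```python
import numpy as np

# two labels ("edge", "no-edge"); compatibility r[a, b] in [-1, 1] (assumed)
r = np.array([[1.0, -0.5],
              [-0.5, 1.0]])

p = np.array([0.5, 0.5])          # label probabilities at the pixel of interest
neighbours = np.array([[0.9, 0.1],  # label probabilities at two neighbours,
                       [0.8, 0.2]]) # both of which favour "edge"

for _ in range(5):
    q = (neighbours @ r).mean(axis=0)   # neighbourhood support for each label
    p = p * (1.0 + q)                   # classical relaxation update
    p = p / p.sum()                     # renormalize to a distribution
```

After a few iterations the undecided pixel follows its neighbours and commits to the "edge" label; it is exactly these repeated sums and normalizations that the ellipse representation is designed to avoid.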
In image processing, segmentation is a very important step, mainly because of its influence on feature evaluation. During the segmentation process, problems occur either due to possible ambiguities in the pixel labelling or due to frequent contacts between different components. This paper describes an iterative method designed to segment convex, poorly contrasted, or touching objects, based on the association of two notions, one related to the object colour and the other to its shape, in terms of convexity. Starting from an initial connected set of points, pixels are iteratively aggregated to that set using a colour distance. The processing stops when convexity of the iterated set is reached. An application to a particular class of convex objects, blood and bone marrow cells, is presented.
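The aggregation step can be sketched as follows: starting from a seed, 4-neighbours are added while their colour distance to the current region mean stays below a threshold. The convexity stopping test is omitted here for brevity (its place is marked in the code), and the threshold is an illustrative assumption.

```python
import numpy as np
from collections import deque

img = np.full((7, 7), 200.0)          # bright background
img[2:5, 2:5] = 50.0                  # a dark 3x3 "cell"
img[3, 3] = 60.0                      # slight internal colour variation

def grow(img, seed, thresh):
    region = {seed}
    mean = img[seed]
    frontier = deque([seed])
    while frontier:
        y, x = frontier.popleft()
        for ny, nx in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
            if (0 <= ny < img.shape[0] and 0 <= nx < img.shape[1]
                    and (ny, nx) not in region
                    and abs(img[ny, nx] - mean) < thresh):
                region.add((ny, nx))
                frontier.append((ny, nx))
                mean = np.mean([img[p] for p in region])  # update region colour
        # (the paper's convexity check on `region` would go here)
    return region

cell = grow(img, (3, 3), thresh=30.0)   # recovers exactly the 3x3 dark cell
```

The colour-distance criterion keeps the bright background out even though the cell itself is not perfectly uniform.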
This paper describes a procedure developed for the automated segmentation of multiple side-looking airborne radar (SLAR) images, i.e. the detection of the boundaries of agricultural fields. The segmentation procedure is based on a split-and-merge algorithm which may process up to three input images simultaneously. The input images must be in registration. A registration method is proposed which uses coarse structural descriptions of the input images as represented by attributed quartic picture trees (QPT).
A new image segmentation technique based on minimum spanning trees is proposed. The motivation for using minimum spanning trees is their apparent ability to perform Gestalt clustering, thus relating the segmentation algorithm to Gestalt principles of perceptual organization. Several examples of segmentation using the new algorithm demonstrate the closeness between the results and human perception. The new algorithm is extremely flexible in accommodating different objectives and criteria of segmentation.
In hybrid signal processing, the transformation is performed using optical masks, which makes the negative elements in the transform matrix unrealizable. A transform matrix composed of all positive terms is non-orthogonal and non-unitary. Although optical masks realizing various shades of gray are available, there are definite advantages in using binary masks; these can be used to realize a saturated form of the Hadamard transform. We have analyzed the use of non-orthogonal transforms in general, and saturated Hadamard transforms specifically, for compressing the bandwidth of digital imagery. Contrary to the current belief that the use of saturated transforms results in a loss of performance, we have shown that saturated transforms perform as well as the standard transforms. This is based on a new approach, which we believe has not been considered previously, for reconstructing the signal from an incomplete set of transform coefficients: a pseudoinverse of the saturated transform matrices is used in reconstructing the signal values. Computer simulations verify the results for Hadamard and cosine transforms.
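The reconstruction idea can be sketched directly: replace the ±1 Hadamard matrix with its "saturated" 0/1 version (realizable as a binary optical mask) and invert with the pseudoinverse instead of the transpose, which is no longer valid for the non-orthogonal mask matrix. This sketch uses the complete coefficient set, where the pseudoinverse recovers the signal exactly.

```python
import numpy as np

def hadamard(n):
    """Sylvester-construction Hadamard matrix, n a power of two."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

H = hadamard(8)
S = (H + 1.0) / 2.0                   # saturated mask: -1 -> 0, +1 -> 1

x = np.arange(8, dtype=float)         # a test signal
y = S @ x                             # measurements through the binary masks

x_rec = np.linalg.pinv(S) @ y         # pseudoinverse reconstruction
```

With an incomplete coefficient set, as in the paper, `pinv` of the retained rows of `S` gives the least-squares reconstruction instead of an exact one.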
In this paper an adaptive differential pulse code modulation (ADPCM) coder with adaptive prediction is presented. We first present the DPCM model of the original image, then consider the proposed adaptive DPCM picture coding system. Finally, we discuss the main conclusions of this study.
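A minimal DPCM sketch of the kind of coder described: previous-sample prediction, a uniform quantizer on the prediction error, and the matching decoder. The step size is an illustrative assumption; an adaptive coder would vary the predictor and step size with local image statistics.

```python
def dpcm_encode(x, step):
    pred, codes = 0.0, []
    for s in x:
        e = s - pred
        q = int(round(e / step))       # quantized prediction error
        codes.append(q)
        pred = pred + q * step         # track the decoder's reconstruction
    return codes

def dpcm_decode(codes, step):
    pred, out = 0.0, []
    for q in codes:
        pred = pred + q * step
        out.append(pred)
    return out

row = [10, 12, 15, 15, 14, 9]
codes = dpcm_encode(row, step=1.0)     # small error codes instead of raw samples
recon = dpcm_decode(codes, step=1.0)   # step 1 on integer input: lossless
```

Note that the encoder predicts from its own reconstructed values, not from the original samples, so quantization errors do not accumulate at the decoder.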
The basic set-up consists of a xenon short-arc source illuminating a double monochromator. The xenon arc spectrum is spread over a rectangular area in the output plane of the first monochromator by sphero-cylindrical optics. In this plane a mask is inserted; its shape is computed from data about the power spectra of both the xenon source and the desired final light beam. The mask spatially selects the proportion of each input spectral component which must be transmitted to obtain the final spectral distribution. Then the second monochromator (symmetrical to the first) reconstructs a parallel light beam. In this way, any power spectrum can be produced. Two applications are presented here. First, a precise solar simulator is proposed. The mask is drawn after photographic recording and microdensitometric analysis of both the sun and xenon lamp power spectra. The spectral densities are converted into spectral irradiance functions, whose ratio gives the shape of the mask. With this method we can generate the solar power spectrum from that of a xenon lamp, with a spectral resolution of less than a few nm. The second application deals with optical data processing. A previous work showed that, in order to increase the amount of optical information extracted from Latin inscriptions, the illuminating power spectrum must be optimized. We show that this can be carried out with such a system.
Image analysis is a powerful means of solving many problems automatically in industrial applications. For that purpose, the realization of an efficient prototype for colour-based sorting of fruit has been undertaken in the laboratory. The prototype has to sort apples, distinguishing different kinds of red apples as well as different kinds of green apples. The device is able to sort apples simultaneously on several conveyor belts. Unlike most existing devices, which merely average the radiometry of part of the fruit by analog means, our system analyses the whole surface. By taking each pixel into account after digitization of the image, we avoid the drawback of ignoring colour non-homogeneity.
Two ways of representing 2D signals as 4D spectrograms and their relation to the Wigner Distribution Function (WDF) are discussed. Some methods for their coherent optical generation and suitable display, e.g. partially sampled, are given. Furthermore, it is shown how the Local Power Spectra (LPS) representation serves for the computation of a kind of texture gradient and how the Spectral Power Excerpts (SPE) representation proved useful in automatic interferogram evaluation.
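A local power spectrum assigns to each image position the power spectrum of a small window around it. A digital sketch of this idea (the paper generates the representation optically; the window size and function name here are illustrative assumptions):

```python
import numpy as np

def local_power_spectrum(img, i, j, win=8):
    """Local power spectrum at pixel (i, j): squared magnitude of the
    Fourier transform of a win x win patch anchored at (i, j)."""
    patch = img[i:i + win, j:j + win]
    return np.abs(np.fft.fft2(patch)) ** 2
```

Evaluating this at every pixel produces the 4D spectrogram (two spatial and two frequency coordinates) discussed above.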
A new method for the inspection of textile webs is proposed. The normal web structure is characterized by the micro-texture of 3x3 neighbourhoods, which is extracted by principal-components analysis. Local 'rectification' of the principal-component images yields feature planes which can be fed to a classifier. Preliminary results are presented.
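The micro-texture extraction step can be sketched as a principal-components analysis over all 3x3 neighbourhoods of the image. This is a generic sketch of that idea, not the authors' implementation; function name and component count are assumptions:

```python
import numpy as np

def microtexture_features(img, n_components=4):
    """Feature planes from the principal components of 3x3 neighbourhoods.

    Each interior pixel is described by its 3x3 neighbourhood (a 9-vector);
    the leading principal axes of these vectors define the feature planes.
    """
    h, w = img.shape
    # Collect all 3x3 neighbourhoods as rows of an (N, 9) matrix.
    patches = np.array([
        img[i - 1:i + 2, j - 1:j + 2].ravel()
        for i in range(1, h - 1) for j in range(1, w - 1)
    ], dtype=float)
    patches -= patches.mean(axis=0)            # centre the data
    cov = patches.T @ patches / len(patches)   # 9x9 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
    top = eigvecs[:, ::-1][:, :n_components]   # leading principal axes
    feats = patches @ top                      # project each patch onto them
    return feats.reshape(h - 2, w - 2, n_components)
```

On an H x W image this returns an (H-2) x (W-2) x n_components stack of feature planes, one per principal component.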
After discussing the conditions under which binary image analysis techniques can be used, three new applications of the fast binary image analysis system S.A.M. (Sensorsystem for Automation and Measurement) are reported: (1) the human viewing direction is measured at TV frame rate while the subject's head is free to move; (2) industrial parts hanging on a moving conveyor are classified prior to spray painting by robot; (3) in automotive wheel assembly, the eccentricity of the wheel is minimized by turning the tyre relative to the rim in order to balance the eccentricity of the components.
Some methods for detecting the orientation of mechanical parts, based on the generalized Hough transform, are described. Beyond these methodological descriptions, in order to take into account some application requirements in the robotics field, we describe techniques for implementing, on a digital computer, efficient algorithms capable of detecting part orientation, in addition to recognizing and locating the part in the scene. Some preliminary results about the methodology used are given in the last part of the paper.
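The idea behind Hough-style orientation detection can be illustrated with a deliberately simplified voting scheme. This toy, which is not the authors' algorithm, assumes the scene points are the template points in the same order; a real generalized Hough transform instead indexes displacement vectors by local gradient direction:

```python
import numpy as np

def best_rotation(template_pts, scene_pts, angles):
    """Toy Hough-style orientation detector (illustrative sketch only).

    For each candidate angle, every scene point casts a vote for the
    location of the template's reference point; the angle whose votes
    cluster most tightly (smallest spread) is returned.
    """
    template_pts = np.asarray(template_pts, float)
    scene_pts = np.asarray(scene_pts, float)
    ref = template_pts.mean(axis=0)            # reference point of the template
    best_angle, best_score = None, -np.inf
    for a in angles:
        c, s = np.cos(a), np.sin(a)
        R = np.array([[c, -s], [s, c]])
        disp = (ref - template_pts) @ R.T      # rotated point-to-reference vectors
        votes = scene_pts + disp               # each point's vote for the reference
        score = -votes.var(axis=0).sum()       # tight cluster => high score
        if score > best_score:
            best_score, best_angle = score, a
    return best_angle
```

At the correct angle all votes coincide at the (translated) reference point, so the vote spread vanishes, which is exactly the peak-in-accumulator behaviour the Hough transform exploits.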
After a brief description of the robot for picking white asparagus, a statistical study of the different shapes of asparagus tips allowed us to determine certain discriminating parameters to detect the tips as they appear on the silhouette of the mound of earth. The localisation was done stereometrically with the help of two cameras. As the robot carrying the vision-localisation system moves, the images are altered and the decision criteria modified. A study of the images of mobile objects produced by both tube and CCD cameras was carried out. A simulation of this phenomenon was performed in order to determine the modifications concerning object shapes, thresholding levels and decision parameters as a function of the robot speed.
Interactive image processing has proved to be a valuable aid to prototype development for industrial inspection systems. This paper advocates extending its use to exploratory analysis of robot vision applications. Preliminary studies have shown that it is equally effective in this role, although it is not usually possible to achieve the computational speeds needed for real-time control of the robot using a software-based image processor. Its use, as in inspection research, is likely to be limited to algorithm design/selection. The Autoview image processor (British Robotic Systems Ltd.) has recently been interfaced to a Placemate 5 robot (Pendar Robotics Ltd.), and further programmable manipulation devices, including an xy-coordinate table and a stepping turntable, are currently being connected. Using these and similar devices, research will be conducted into such tasks as assembly, palletising and robot-assisted inspection.
This communication describes a prototype system designed for the visual inspection of sintered carbide inserts used for metal-working applications: it detects the presence of chips on their cutting edges and performs a tight dimensional control. The insert to be inspected is placed in front of a patented laser lighting system. The laser beam is shaped, focused and directed onto the cutting edge of the insert, where it forms a luminous spot. By monitoring the position, the shape and the intensity of this spot while the insert rotates in front of the lighting system, it is possible to measure the dimensions of the insert, to check the sharpness of its cutting edge, and to detect the presence of chippage on the latter. A linear CCD array of 2048 elements is used for image sensing. The line-scan rate of this linear camera is synchronized with the stepper motor which rotates the insert, thus allowing the acquisition of a video line for each angular step. Each line represents a diameter of the inspected insert. The video signal is digitized with four bits and preprocessed in real time in order to filter out spurious information and to reduce the volume of the data prior to its transmission to an LSI-11/23 microcomputer. The latter carries out the computation necessary for the dimensional control of the inspected part and the detection and classification of the chips that may be present on the cutting edges. These operations are based on image comparison and matching techniques, allowing the detection of differences between the incoming data and a reference stored in advance during an interactive teaching phase. The whole inspection procedure has been optimized for speed, in order to perform a full inspection cycle in approximately two seconds.
A vision system has been developed at North Carolina State University to identify the orientation and three dimensional location of steam turbine blades that are stacked in an industrial A-frame cart. The system uses a controlled light source for structured illumination and a single camera to extract the information required by the image processing software to calculate the position and orientation of a turbine blade in real time.
The recent progress in digital radiographic imaging systems with high-speed acquisition will allow, in the near future, studies of moving organs such as the beating heart. Even when obtained at a high rate, these images remain conventional radiographic images giving integrated information about the organ. We consider a technique using coded sources where the resulting radiographic coded image contains all the information necessary for the 3-D dynamic reconstruction of the object. In the recording step the organ is imaged by a planar array of X-ray sources arranged according to an appropriate code. The sources are flashed simultaneously during a fraction of a second. The coded picture results from the superposition of the elementary radiographs that would be obtained if the X-ray sources were flashed separately. An adapted decoding process applied to this image allows the spatial reconstruction of the 3-D radiographed organ. In practice, artifacts generally occur during the reconstruction process. These artifacts are due to side peaks in the cross-correlation between the coding and decoding functions. We present here a method allowing perfect decoding of the image within an arbitrarily large region. Based on this method, a computer simulation is used to decode a 3-D object made of two separated planes. This method is interesting because it may be used for any code of sources even if side peaks occur in the correlation. It can also be used for non-perfect decoding of objects more complex than two separated planes. In this case the improvement of the decoded image depends mainly on the cross-correlation properties of the code. In conclusion, it would be interesting to look for an optimal code producing both good spatial resolution of the object and proper characteristics for the decoding procedure.
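The decoding step is a cross-correlation of the coded image with a decoding function. A minimal digital sketch using circular correlation via the FFT (the function names are illustrative; the paper's contribution is precisely the suppression of the side-peak artifacts that this naive version exhibits):

```python
import numpy as np

def correlate_decode(coded, decoder):
    """Circular cross-correlation of a coded radiograph with a decoding
    function, computed via the FFT.  For a code whose autocorrelation is a
    sharp peak this reconstructs one object layer; side peaks in the
    correlation produce the reconstruction artifacts discussed above."""
    return np.real(np.fft.ifft2(np.fft.fft2(coded) * np.conj(np.fft.fft2(decoder))))
```

With an ideal decoder whose correlation with the code is a single delta peak, the layer is recovered exactly; any secondary correlation peak superimposes a shifted ghost of the object.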
The conventional reconstruction techniques in positron emission tomography are based on deterministic principles, i.e., the stochastic nature of the measurements is completely ignored. This approach is quite acceptable in situations where the number of detector counts is sufficiently high. In practice, however, there are limitations on both the amount of activity administered to the patient and the acquisition time, which usually results in a relatively low number of counts. It is known that the statistical fluctuations of these measurements can give rise to severe reconstruction artefacts. Stochastic methods are preferable in this context, as they use the information contained in the measurements more efficiently. In this paper, we present a modified version of Rockmore and Macovski's maximum-likelihood formulation (for Poisson measurement statistics), which takes into account the attenuation of the subject. Our implementation, based on an iterative algorithm, produces excellent images with low noise content and virtually no artefacts. Furthermore, the resolution comes close to the theoretical limit imposed by the sampling distance. A suboptimal version of the algorithm is also presented. Comparisons are made between the error performance of both algorithms and that of the filtered backprojection method. In order to make a fair comparison, we include two curves (corresponding to different convolution filters) for the conventional method. The absence of artefacts and the lower noise, especially in low-intensity areas of the images, are distinct advantages for diagnostic purposes. Alternatively, acceptable reconstructions can be obtained with shorter acquisition times or lower radionuclide doses for the patient.
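A generic iterative maximum-likelihood (EM-type) update for Poisson count data can be sketched as follows, with attenuation assumed to be folded into the system matrix. This is the standard MLEM iteration, offered as an illustration of the approach, not the authors' exact algorithm:

```python
import numpy as np

def mlem(A, counts, n_iter=200):
    """Maximum-likelihood EM reconstruction for Poisson-distributed counts.

    A      : (m, n) system matrix; A[i, j] = probability that an emission
             from voxel j is detected in bin i (attenuation folded in).
    counts : (m,) measured detector counts.
    Returns the (n,) voxel activity estimate.
    """
    x = np.ones(A.shape[1])                    # flat initial estimate
    sens = np.maximum(A.sum(axis=0), 1e-12)    # per-voxel sensitivity
    for _ in range(n_iter):
        proj = A @ x                           # forward projection
        ratio = np.where(proj > 0, counts / proj, 0.0)
        x = x * (A.T @ ratio) / sens           # multiplicative EM update
    return x
```

The multiplicative update keeps the estimate non-negative, and on noiseless, consistent data it converges to the exact activity distribution.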
The possibility of analyzing neuroanatomical images (such as those from computerized tomography) in three-dimensional form represents a significant improvement in diagnostic accuracy over conventional analysis of 2-D images. Especially when the task consists of finding correspondences and/or differences between different measurements of the same organ, only a 3-D comparison gives reliable results by avoiding misregistration errors. In this paper we present an example of 3-D analysis and representation of cerebral structures in functional stereotactic neurosurgery. This problem is briefly introduced in section 1, with reference to the state of the art and future trends of the medical research. Digital image processing techniques are used to obtain a 3-D description of some diencephalic structures primarily involved in stereotactic neurosurgery. The input data consist of parallel sections of suitably stained brain slices which are displayed in a stereotactic atlas [1]. The 3-D reconstruction has been accomplished in several steps: first, the contour lines of the cerebral structures were digitized as a 1-D description of the cerebral structures. Then a contour-filling procedure was implemented to collect these elements into a finite number of regions, labelled by fixed amplitude values (allowing easy discrimination using a pseudocolor display). Due to the non-uniform spacing of the atlas sections, three-dimensional interpolation has been performed on these 2-D images. These processing techniques are described in section 2, with examples of application to actual stereotactic data. The main advantage of dealing with neuroanatomical data in 3-D form is the possibility of verifying any probe trajectory that does not lie in the atlas planes, which is usually the case. Very simple techniques have been developed to evaluate arbitrarily slanted sections of the volume as well as to obtain an approximate 3-D axonometric display, using a standard minicomputer [2].
A brief discussion of these solutions is given in section 3.
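The inter-slice interpolation needed for non-uniformly spaced atlas sections can be sketched as linear interpolation between adjacent sections. This is an illustrative simplification: label images often call for nearest-neighbour or shape-based interpolation rather than grey-value blending, and the function name here is an assumption:

```python
import numpy as np

def interpolate_slice(slice_a, slice_b, z_a, z_b, z):
    """Linearly interpolate an intermediate section at height z between two
    parallel sections at heights z_a < z < z_b."""
    t = (z - z_a) / (z_b - z_a)
    return (1.0 - t) * slice_a + t * slice_b
```

Repeating this between every pair of adjacent atlas sections resamples the stack onto a uniform z-grid, after which slanted sections can be extracted by resampling the resulting volume.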
Maximal information about the potential distribution across the human chest wall can be obtained by sampling it at a large number of points, distributed according to a predefined geometric pattern on the thorax. The classical representation is a set of maps of equipotential lines, taken at different moments of the cardiac cycle. There are important reasons why only specialised cardiologists use this visualisation technique and why it belongs to the exclusive domain of experimental research: its noise sensitivity and its poor dynamic information content. To overcome these problems, we use a representation of sequences of computer-generated images on a video screen, where potential values are mapped to pseudo-colours or gray levels. The use of this visualisation technique as an aid in the study of diagnostic parameters derived from body surface potential maps will be illustrated.
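The mapping from potential values to display levels can be sketched as a simple linear rescaling. This stand-in for the pseudo-colour lookup is an assumption; a real display would fix the scale across the whole image sequence rather than per frame:

```python
import numpy as np

def to_gray_levels(frame, n_levels=256):
    """Map one frame of body-surface potentials onto display gray levels by
    linearly rescaling the frame's potential range onto 0 .. n_levels-1."""
    lo, hi = frame.min(), frame.max()
    if hi == lo:
        return np.zeros(frame.shape, dtype=np.uint8)
    return ((frame - lo) / (hi - lo) * (n_levels - 1)).astype(np.uint8)
```

Applying the same lookup to every frame of the cardiac cycle yields the image sequence whose playback conveys the dynamic information that static equipotential maps lack.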
Digital image processing offers the possibility of realizing novel special effects in computer-aided art. In this paper image processing methods are described to alter static scenes or to realize image transitions in animation. Transformations include conversion to dot patterns and line patterns and various enhancement and resolution effects. Examples are the production of a standard Dutch postage stamp, contributions to an experimental television programme presented by the Dutch broadcasting corporation, and some poster designs.
The theory of rate distortion is applied to the problem of image registration in this paper. The reference and sensed images are treated as the two ends of a communication channel with the correlation function of the two images assigned the role of the cost or distortion measure. Then it is shown that rate distortion theory can be used to derive the best possible registration. The significance of this approach is that it leads to the design of optimum matching systems and the evaluation of the performance of those systems that are suboptimum.
Infrared thermography has been used in wind tunnels to study the transition from laminar to turbulent boundary layer. Different methods have been tested to increase the legibility of infrared images, such as signal integration, histogram modification and colour coding. Some examples of detection of the transition are given.
We describe an algorithm for three-dimensional display of discrete objects in a discrete space. There are no restrictions on the objects: they may be concave or convex, have holes (including interior holes) and can consist of disjoint parts. The algorithm permits independent and combinative visualization of surfaces of anatomical structures, such as bones or organs, inside the human body. The input for our algorithm is a binary scene derived from a consecutive set of segmented and interpolated computed tomography (CT) cross sections.
The recognition of patterns independently of their size is a fundamental requirement in many applications, such as computational or robotic vision. In particular, the need for scale-invariant recognition exists when, according to a simplified experimental scheme, the patterns to be recognized are 2-D representations of objects whose distance from the acquisition system is unknown. Several scale-independent recognition systems have already been developed and presented in the literature: some of them are based on iconic, i.e. template matching, techniques, some on syntactic procedures. Quite often, however, the images to be recognized must first go through some kind of preprocessing, e.g. for noise cleaning or edge reinforcement. In such cases one has to operate so that the global processing is still scale-invariant. This requires that the preprocessing itself be scale-invariant, in the sense that scale changes of the input image are simply translated into scale changes of the output, with no shape distortion taking place. This paper shows that no classical linear shift-invariant filter guarantees such a property and presents the most general class of linear scale-invariant filters. Several interesting subclasses of such (shift-variant) filters are discussed, as well as the related implementation problems.
In the construction of several new quantitative microscope systems we have developed techniques for the measurement and calibration of spatial resolution, image sampling density, signal-to-noise ratio, photometric response, and shading. The approach makes use of several commercially available test specimens, a specially developed test slide, and a number of algorithms to provide calibration of the complete system from the input specimen to the stored digital image, including optical, electro-optical, analog, and digital sources of signal degradation.
An original image processing procedure that reconstructs the surface topography of a heavy-metal-shadowed replica from a digitized micrograph is presented. The theory of this method is established and its limits are discussed. The usefulness and validity of the median filtering algorithm used in the data reduction are shown. The entire procedure has been implemented on a small 16-bit minicomputer. The method has been used to study the rough surfaces of copper, silver, magnesium and gold deposits by means of autocovariance functions of surface profiles. The results show that good reconstructions of extended surface replicas are obtained. Several new methods of improving the reconstruction process have been tested and are briefly presented. Under good experimental conditions the lateral resolution was 50 Å for an overall field size of 40,000 Å. This shows that our method may have important applications besides the study of alterable metallic surfaces.
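The autocovariance function used to characterize the surface profiles can be sketched directly from its definition. The function name is an assumption; this is the textbook estimator, not the authors' code:

```python
import numpy as np

def autocovariance(profile, max_lag):
    """Autocovariance C(k) of a 1-D surface height profile:
    C(k) = mean over x of (h(x) - <h>) * (h(x+k) - <h>)."""
    h = np.asarray(profile, float)
    h = h - h.mean()
    n = len(h)
    return np.array([np.mean(h[:n - k] * h[k:]) for k in range(max_lag + 1)])
```

C(0) is the height variance (roughness), and the lag at which C(k) decays gives a correlation length of the surface texture.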
The analysis of a moving object which changes its form and light intensity as a function of time is described. The objects presented as an example of such an analysis are solar eruptions (flares) observed with a digital CCD camera. It is discussed how the information in every picture of the set is transformed into a single parameter in order to obtain time profiles. It is shown how the images are calibrated. Remaining noise was eliminated by Fourier-filtering techniques in space and time.
A method of acquiring spatial data of a real object via a stereometric system is presented. Three-dimensional (3-D) data of an object are acquired by: (1) camera calibration; (2) stereo matching; (3) multiple stereo views covering the whole object; (4) geometrical computations to determine the 3-D coordinates for each sample point of the object. The analysis and the experimental results indicate the method implemented is capable of measuring the spatial data of a real object with satisfactory accuracy.
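Step (4), the geometrical computation of 3-D coordinates, can be illustrated for the simplest case of a calibrated, rectified parallel-camera pair; the general method described above handles arbitrary calibrated views, so this is only a sketch under that simplifying assumption:

```python
def triangulate(xl, xr, y, focal, baseline):
    """Depth from a rectified parallel-camera stereo pair.

    xl, xr : horizontal image coordinates (pixels, principal point at 0) of
             the matched point in the left and right images.
    y      : vertical image coordinate (equal in both views after rectification).
    focal  : focal length in pixels; baseline: camera separation.
    Returns (X, Y, Z) in the left-camera frame; Z = focal * baseline / disparity.
    """
    disparity = xl - xr
    Z = focal * baseline / disparity
    return xl * Z / focal, y * Z / focal, Z
```

The accuracy of the recovered depth degrades as disparity shrinks, which is why multiple stereo views around the object (step 3) help cover it with well-conditioned measurements.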
The varying intensity of the diffraction halo affects the positions of the fringe maxima and minima in speckle photography. Significant fringe shifts appear if this background is not removed prior to fringe spacing evaluation. A logarithm method is used to simplify data acquisition of speckle fringe patterns, so that recording of diffraction halos for each point on the specklegram is not necessary. Experimental results show the accuracy of the processing method.
Most problems in image processing have structural aspects which are difficult to resolve using general methods. The use of heuristic methods is limited to cases where the structural aspects are easy to resolve. An approach to hierarchical processing of images is described whereby structural aspects are resolved simultaneously with the analysis of data values. Effective use of a hierarchical structure puts strong restrictions on information representation and operations. Information is represented in terms of compatibility of events, combined with a measure of confidence. The operations are symmetry-type operations, which allow data compression and context control and have good descriptive power. The usefulness of these methods in image analysis and image processing is illustrated.
The ICOS 20000 system is able to process gray-level image data of a high spatial resolution at a high speed. A large number of data reduction and feature extraction algorithms are hardware implemented. This allows the use of the system in a broad range of automatic visual inspection problems and other areas such as robot vision. The essentially modular system architecture makes it possible to tailor the configuration to the specific requirements of a given application.
The paper describes a system achieving the comparison of high-definition images (typically 2000 x 2000 pixels) in real time (i.e. without image storage) in a typical time of 1 second. The image comparator is composed of two devices: the "TRANSLATOR SYSTEM" and the "TEST STATION". The "TRANSLATOR SYSTEM" provides the means to develop a software "REFERENCE IMAGE" during a preliminary "off-line" phase, using data obtained from CAD systems or from digitized drawings. The "TEST STATION" comprises the "COMPARATOR SYSTEM" and the "MEASURING HEAD". The former performs the real-time comparison of a "REAL IMAGE" (provided by a high-definition linear solid-state sensor) and the corresponding "REFERENCE IMAGE". This unit is controlled by a microprocessor which feeds the hardware "pixel comparators" with the necessary reference data. Simultaneously, the real image data obtained by scanning the scene are given pixel by pixel to those comparators. The error-detection hardware of the "COMPARATOR SYSTEM" is programmable according to the criteria defining the reject conditions (density and/or patterns of erroneous pixels detected) imposed by the user. Registration errors within certain tolerances may occur without hampering the comparison process.
A microprocessor-based, low-cost, general-purpose digital image processing unit has been developed, capable of handling pictures of up to 576x768 pixels. A video camera, a teleprinter and a TV picture tube are interfaced to the system. Using a timer and associated devices, column-wise sampling is adopted to reduce the speed requirements of the A/D converter. The image memory is organized in four files, enabling faster readout to meet the CRT refresh rate. The system can handle any image processing task suited to still pictures and accepts programs written in machine language for the 8085 CPU. Of the software routines developed, edge detection, pixel population variation and mean-square-error computation are dealt with here.
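The edge-detection routine mentioned above was written in 8085 machine language; as a readable stand-in, a generic gradient-magnitude (Sobel) edge detector can be sketched in a few lines. The kernel choice and threshold are assumptions, not details from the paper:

```python
import numpy as np

def sobel_edges(img, thresh):
    """Binary edge map from the Sobel gradient magnitude, computed with an
    explicit 3x3 neighbourhood loop (as a small CPU would)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()   # horizontal gradient
            gy[i, j] = (patch * ky).sum()   # vertical gradient
    return np.hypot(gx, gy) > thresh
```

On a vertical intensity step the detector marks the two output columns straddling the step and leaves flat regions untouched.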
At the Graz Research Centre, an instruction library for a deAnza IP 6400 Image Array Processor has been developed. This library provides easy access to the image processing facilities and speeds up algorithm implementation considerably. The paper presents the application of these instructions to some image processing tasks and discusses arithmetic problems arising from the use of eight-bit operands.
One of the main problems of digital image processing is the vast amount of calculation required to implement even the simplest operations. Usually, the time available to implement these operations is also limited, due to the demands of real-time processing or system interactivity. A dedicated hardware implementation of these operations offers only a partial (and temporary) solution, as the field of digital image processing is still changing profoundly. In this paper we present a digital image processing system satisfying the following main objectives. First, the system possesses a high degree of flexibility, allowing it to execute a broad range of possible applications. This flexibility is obtained by using general-purpose microprogrammable logic (such as bit-sliced ALUs and memories). Second, the system is able to execute simple pointwise operations in real time (20 to 40 ms/image), while more complicated operations on an entire image (such as the calculation of the discrete Fourier transform (DFT)) require only a time of the order of one second (interactive operation). A number of benchmark problems, such as DFT calculations, digital filtering by means of number-theoretic transforms and histogram determination, illustrate the possibilities of the system. Third, the system possesses fast and flexible interfaces, allowing it to operate in a great number of different environments. Typically, the system has an interface with a minicomputer. This minicomputer generates macro-commands with a high information content that are then interpreted and executed by the microprogram of the image processor. Additional interfaces allow high-speed data communication with other image processors and with the external world.
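Histogram determination, one of the benchmark problems named above, can be stated in a few lines; the 256-level default is an assumption matching typical eight-bit images:

```python
def histogram(image, levels=256):
    # Grey-level histogram: count the occurrences of each grey level
    # in the image (given here as a flat sequence of pixel values).
    h = [0] * levels
    for p in image:
        h[p] += 1
    return h
```

The benchmark interest lies in how fast the microprogrammed hardware performs this per-pixel tally, not in the algorithm itself.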
VAP is a novel digital video array processor for local image transforms, working at TV pixel rates in conjunction with an image memory having simultaneous but independent read/write capability. A programmable system of delay lines is used to access 16 selectable pixels within a local neighbourhood. This information is reduced in a pipeline structure similar to a binary tree, the nodes of which are look-up tables with two inputs and one output. Transformations on whole images may thus be iterated at the TV frame rate. Applications include image preprocessing to allow segmentation by texture, edge detection, etc., but also binary image transformations such as erosion, dilation, thinning and others.
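The binary-tree reduction can be sketched as follows; modelling every node by the same two-input function is a simplifying assumption, since in VAP each node is an independently programmable look-up table:

```python
def lut_tree_reduce(pixels, node):
    # Reduce the 16 selected neighbourhood pixels through a binary tree
    # whose nodes are two-input/one-output look-up tables: 16 values
    # become 8, then 4, then 2, then 1. Here all nodes are modelled by
    # the same Python function `node` (a simplification).
    level = list(pixels)
    while len(level) > 1:
        level = [node(level[2 * i], level[2 * i + 1])
                 for i in range(len(level) // 2)]
    return level[0]
```

With `max` as the node table the tree computes a grey-value dilation over the neighbourhood; with `min`, an erosion. Programming the node tables differently yields the other transforms the abstract lists.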
A modular real-time video processing system has been constructed and is currently being used for angiographic applications. The novel feature of this system is the multiple video bus and the accompanying modular design. The system consists of a number of independent modules: memory, control, input/output, and arithmetic processors. These modules communicate with one another over a number of video pathways, under the control of an 8-bit microprocessor. The system architecture and control structure are described. At present, the system is configured to perform certain operations which have been recognized to be useful for digital subtractive angiography. These operations are carried out for each pixel at video rates, that is, at 14 x 10^6 pixels per second. The operations include video averaging, first-order recursive filtering, subtraction, and some non-linear filtering. Examples from clinical trials are provided.
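First-order recursive filtering, one of the per-pixel operations listed, can be illustrated as follows; the gain `k` and the frame representation are assumptions for the sketch:

```python
def recursive_filter(frames, k=0.25):
    # First-order recursive (exponential) temporal filter applied per
    # pixel across successive video frames: y <- k*x + (1 - k)*y.
    # `frames` is a sequence of equal-length pixel lists; `k` is a
    # hypothetical filter gain.
    out = [float(p) for p in frames[0]]
    for frame in frames[1:]:
        out = [k * x + (1 - k) * y for x, y in zip(frame, out)]
    return out
```

The recursion needs only one stored frame and one multiply-accumulate per pixel, which is why it suits hardware running at video rates.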
A CMOS programmable picture processor with its own photodiode array is presented. The circuit is novel in that it includes an instruction set that permits most low-level picture processing operations. The chip is a parallel machine with a processing unit for each picture element. Images are binarized and are processed line by line. An experimental chip has been designed and is under fabrication.
Cellular-logic operations like erosion, dilation, contour extraction, skeletonization, local majority voting and pepper-and-salt noise removal are essential in processing binary images. We show that these operations, like some homomorphic filters, can be constructed from a 3 x 3 convolution and non-linear table look-up, features of many image processing systems on the market. The method proposed extends their field of application from grey-value to binary images.
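The convolution-plus-look-up construction can be sketched directly: weighting the nine neighbours by powers of two makes the convolution output a unique code for each binary neighbourhood, which a 512-entry table then maps to the desired result. The software form below is an illustration of the scheme, not the hardware implementation:

```python
NEIGHBOURS = [(-1, -1), (-1, 0), (-1, 1),
              (0, -1),  (0, 0),  (0, 1),
              (1, -1),  (1, 0),  (1, 1)]

def cell_logic(image, table):
    # Convolve the binary image with weights 1, 2, 4, ..., 256 so every
    # 3 x 3 neighbourhood maps to a unique code in 0..511, then
    # translate that code through a 512-entry look-up table.
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            code = sum((1 << k) * image[y + dy][x + dx]
                       for k, (dy, dx) in enumerate(NEIGHBOURS))
            out[y][x] = table[code]
    return out

# Erosion: output 1 only when the whole neighbourhood is set (code 511).
erosion_table = [1 if code == 511 else 0 for code in range(512)]
```

Every cellular-logic operation in the abstract's list corresponds to a different 512-entry table; the convolution stage never changes.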
One method of capturing sectional images of a 3-D object for video transmission is to use an assembly of cameras whose focal planes are adjusted to fall in different lateral regions of the object. The image registered by each camera has regional blur due to defocusing of the other lateral regions of the object. Two image processing techniques are described to reduce this blur effect. The first is based on negative neighborhood averaging on threshold, and the second on spatial modulation and filtering.
When computer vision is used for the real-time control of dynamic systems, one difficulty arises from the fact that TV cameras integrate the incoming light over a time interval equivalent to one frame period. If an observed object moves fast enough, its image deteriorates in two ways: the edges are blurred and the contrast diminishes. This causes conventional edge detectors or thresholding methods to break down. This paper describes a novel algorithm, based on nonlinear filtering, that overcomes these difficulties and has succeeded in locating with high accuracy objects whose images move at speeds of 1000 pixels/sec. Experimental results obtained by simulation and by controlling real objects are reported. A similar algorithm can be used to estimate the velocity of an object from the degree of motion blur of its image.
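The paper's nonlinear filtering algorithm is not detailed in the abstract; the toy model below only illustrates the degradation it must cope with. A step edge moving at v pixels/s during a T-second exposure is smeared into a ramp roughly v*T pixels wide, so the ramp width itself carries the velocity information the last sentence refers to. All names and parameters here are hypothetical:

```python
def motion_blurred_edge(width, blur):
    # 1-D profile of an ideal step edge smeared over `blur` pixels by
    # motion during the frame integration time: the step becomes a
    # linear ramp (edge position 2 is arbitrary).
    return [min(1.0, max(0.0, (x - 2) / blur)) for x in range(width)]

def ramp_extent(profile, lo=0.05, hi=0.95):
    # Estimate the blur extent as the number of samples whose
    # normalized value lies strictly inside the ramp (between lo and hi).
    mn, mx = min(profile), max(profile)
    return sum(1 for p in profile if lo < (p - mn) / (mx - mn) < hi)
```

Dividing the estimated extent by the frame period gives a crude speed estimate; the actual algorithm also restores the edge location despite the reduced contrast.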
A theory of multispectral restoration is applied to the development of a fast approximate procedure for restoration of conventional color photography.
A fast algorithm is presented in this paper to restore non-uniform background in astronomical images with starlike or contrasted objects. The digitized image is divided into adjacent squares of equal area of (2^n x 2^n) pixels. A definition of the local contrast measure is given. For a fixed area, the histogram of local contrast measures is established for the whole image. The influence of image characteristics (background structure, extended sources, starlike objects, film defects, etc.) on the shape of this histogram is discussed. It is shown that a threshold may be found on this histogram to select the background areas. From these areas we are able to determine the background level over the entire image. Some typical astronomical applications show that the background is estimated accurately and that film defects may be removed without damage. With this algorithm, and unlike the median algorithm, the restored background is not biased even in the neighborhood of the removed objects. However, sharp edges in the background itself are smoothed. Experience shows that this algorithm is easy to use; it has been implemented on a 16-bit minicomputer.
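The selection step can be sketched as follows. The paper defines its own local contrast measure and derives the threshold from the histogram; here max-minus-min contrast and a caller-supplied threshold are simplifying assumptions, and a single global mean stands in for the spatially varying background map:

```python
def local_contrast(square):
    # One plausible local contrast measure: max minus min grey level.
    # (The paper defines its own measure, not reproduced here.)
    flat = [p for row in square for p in row]
    return max(flat) - min(flat)

def background_level(image, size, threshold):
    # Tile the image into size x size squares, keep those whose local
    # contrast falls below the threshold (background squares), and
    # average them to estimate the background level.
    h, w = len(image), len(image[0])
    levels = []
    for y in range(0, h, size):
        for x in range(0, w, size):
            square = [row[x:x + size] for row in image[y:y + size]]
            if local_contrast(square) < threshold:
                flat = [p for row in square for p in row]
                levels.append(sum(flat) / len(flat))
    return sum(levels) / len(levels)
```

Because squares containing stars or film defects exceed the contrast threshold, they never contribute to the estimate, which is why the result is unbiased near removed objects, unlike a median filter.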
A computationally efficient Fourier domain maximum a posteriori (MAP) algorithm is derived and implemented for restoration of a class of non-linearly degraded images. The proposed Fourier domain MAP restoration algorithm is shown to be approximately twice as efficient computationally as the extant spatial domain MAP restoration algorithm.
The visibility of erased inscriptions on Roman stones is improved by illuminating them with a lateral source of light. The optimal angular location and incidence of the source depend on the directional content of the graven letters as well as their degree of erosion. A general multidirectional illumination scheme has been worked out by statistically matching the respective intensities of a set of n primary sources surrounding the stone. The amount of directional information accounted for by each primary source is evaluated with respect to the corresponding primary image of the stone disclosed by that source. Multispectral information compression techniques can therefore be applied to the n-dimensional space spanned by the general source. Principal images are computed from the analysis of variance of the directional information in this space. Global or local criteria have to be implemented, depending on the degree of erosion of the inscriptions. Full-resolution principal images are synthesized by linear combination of photographs of the primary images. Applications to the deciphering and reading of Roman inscriptions are presented. The principle of a real-time scanner based on the multiplexing properties of white light is introduced as a conclusion. It takes advantage of the achromaticity of the studied objects for coding a spatial frequency scanner upon the chromatic variable. The principle of a generalized source, illuminating a scene in a pattern recognition context, results from combining two features: the directional content is statistically matched to that of the scene, and the chromatic content allows sampling of spatial frequencies.
Various methods for image sampling and interpolation are considered. The use of Nyquist sampling is discussed; interpolation of sampled data by the use of pp-functions (piecewise polynomials) and B-splines is introduced; and the various methods are compared for the case of image sampling. An elementary introduction to B-splines and their applications is given in a manner that lacks rigor but which should appeal to engineers.
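The cubic B-spline kernel at the heart of such interpolation schemes can be written down directly; the 1-D evaluation below is a standard construction, offered here as an illustration rather than the paper's own presentation:

```python
def cubic_bspline(t):
    # Cubic B-spline basis function (support |t| < 2); the pieces are
    # cubic polynomials joined with continuous second derivatives.
    t = abs(t)
    if t < 1:
        return (4 - 6 * t * t + 3 * t ** 3) / 6
    if t < 2:
        return (2 - t) ** 3 / 6
    return 0.0

def bspline_eval(coeffs, x):
    # Evaluate the 1-D cubic B-spline curve at x from its coefficients.
    # Note: for true interpolation the coefficients must first be
    # obtained by prefiltering the samples; using raw samples as
    # coefficients yields a smoothing approximation instead.
    i0 = int(x)
    return sum(coeffs[i] * cubic_bspline(x - i)
               for i in range(max(0, i0 - 1), min(len(coeffs), i0 + 3)))
```

Because the shifted basis functions sum to one, a constant signal is reproduced exactly, which is the partition-of-unity property that makes the kernel attractive for image resampling.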
Two methods of segmentation called the Histogram Optimization Segmentation and the Histogram Compression Segmentation are presented. These algorithms use local histograms in a recursive manner to arrive at a self-consistent segmentation. The first two sections explain the specific operation of the algorithms and the last section examines the validity of one of the segmentation criteria. Examples of the segmentations are also given.
A classification algorithm called the Perturbing and Iterating Classifier (PIC) is presented. This algorithm is a heuristic classifier that determines the classification of a segment by examining the number of self-consistent perturbations necessary for the segment's descriptor vector to become very close to a model descriptor vector. Unlike many other classifiers, this algorithm does not rely on the initial closeness or similarity of descriptor vectors. The theory of PIC is first explained, an application of PIC to two-dimensional shape matching is given, and then the physical interpretation of the algorithm is presented. An example of how PIC can discriminate shape over a wide range is also presented.