This PDF file contains the front matter associated with Proceedings of SPIE Volume 6356, including the Title Page, Copyright information, Table of Contents, Conference Committees listing, and Introduction.
This paper presents an original approach for the optimal 3D reconstruction of manufactured workpieces based on a priori planning of the task, enhanced on-line through dynamic adjustment of the lighting conditions, and built around a cognitive intelligent sensory system using so-called Situation Graph Trees. The system explicitly takes into account structural knowledge related to image acquisition conditions, the type of illumination sources, the contents of the scene (e.g., CAD models and tolerance information), etc. The approach relies on two steps. First, a so-called initialization phase, leading to the a priori task plan, collects this structural knowledge. This knowledge is encoded, as a sub-part, in the Situation Graph Tree that forms the backbone of the planning system and exhaustively specifies the behavior of the application. Second, the image is iteratively evaluated under the control of this Situation Graph Tree. The information describing the quality of the part under analysis is thus extracted and further exploited for, e.g., inspection tasks. Lastly, the approach enables dynamic adjustment of the Situation Graph Tree, so that the system adapts itself to the actual run-time conditions of the application, providing it with a self-learning capability.
Although the first work on automating the digitization of machine elements dates back approximately 25 years, digitizing parts with a non-contact sensor remains a complex process, and it is not completely solved today, in particular from a metrological point of view. In this article, we consider trajectory planning within the framework of checking dimensional and geometrical specifications. The sensor used in this application is a laser-plane scanner with a CCD camera, oriented and moved by a CMM.
For this purpose, we focus on the methodology used to determine the best possible viewpoints for digitizing a mechanical part. The developed method is based on the concept of visibility: for each facet of the part's CAD model (STL), a set of orientations, called the real visibility chart, is calculated while accounting for measurement uncertainties. By applying several optimization criteria, the real visibility chart is reduced to a viewpoint set from which the path plan is built.
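The visibility computation at the core of such a method can be illustrated with a minimal sketch, assuming a simplified criterion (incidence angle only, no occlusion handling or sensor-specific constraints, which the full method addresses); the facet normals, candidate directions, and angle threshold below are hypothetical inputs.

```python
import numpy as np

def visibility_chart(facet_normals, view_dirs, max_incidence_deg=60.0):
    """For each STL facet, flag candidate sensor orientations whose incidence
    angle (between the facet normal and the direction toward the sensor)
    stays below a threshold chosen from the measurement uncertainty budget.

    facet_normals : (F, 3) unit normals of the STL facets
    view_dirs     : (V, 3) unit vectors pointing from the part toward the sensor
    Returns a boolean (F, V) matrix: True if facet f is visible from direction v.
    """
    cos_min = np.cos(np.radians(max_incidence_deg))
    # cosine of the angle between each normal and each viewing direction
    cosines = facet_normals @ view_dirs.T          # shape (F, V)
    return cosines >= cos_min

# toy usage: two facets, three candidate sensor directions
normals = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 0.0]])
views = np.array([[0.0, 0.0, 1.0], [0.0, 1.0, 0.0], [0.7071, 0.0, 0.7071]])
print(visibility_chart(normals, views))
```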
Most of the automation for 3D acquisition concerns objects with simple shapes, such as mechanical parts. For cultural heritage artefacts, the process is more complex, and no general solution exists today. This paper presents a method to generate a complete 3D model of cultural heritage artefacts. In a first step, MVC is used to solve the view planning problem. Then, holes remaining in the 3D model are detected and their features are computed in order to complete the acquisition. Different post-processing steps are applied to each view to increase the quality of the 3D model. The procedure has been tested with a simulated scanner before being implemented on a motion system with five degrees of freedom.
3D modelling is becoming an important research topic for visual inspection in automatic quality control. Through
visual inspection it is possible to determine whether a product fulfills the required specifications or whether it
contains surface or volume imperfections. Although some processes, such as color analysis, can be handled by 2D techniques, more challenging tasks such as volume inspection of large and complex objects/scenes may require accurate 3D registration techniques. 3D Simultaneous Localization and Mapping has become a very
important research topic not only in the computer vision community for quality control applications but also
in the robotics field for solving problems such as robot navigation and registration of large surfaces. Although
their techniques differ slightly depending on the application, both communities tend to solve similar problems
by means of different approaches. This paper presents a survey of the techniques used by the robotics and
computer vision communities, in which each approach is compared and its pros and cons and potential applications are pointed out.
In order to achieve better quality in their products, manufacturers increasingly use artificial vision systems in their processes. For transparent objects, the task is not trivial and requires control of the entire scene lighting. This paper deals with a polarization imaging method and its application to shape measurement of transparent objects. Our aim is to develop a low-cost system based on a single viewpoint and built from industrial components. We show how to overcome the ambiguities that appear during the measurement process.
This paper presents a model for the automatic generation of image acquisition conditions. It is based on a model for inspecting specular surfaces. The method for handling vision in adverse conditions consists of increasing system sensitivity by reducing the distance between the capture and camera conditions or by changing the slope of the calibration curve of the capture conditions. It requires knowledge of the conditions in which the image is to be captured: focus distance, viewing angles, chromaticity and other lighting characteristics, etc. A simulator recreating the conditions of the model, and enabling inspection architectures to be validated rapidly and at low cost, has been developed. The proposed solution contributes a methodology suitable for providing general solutions and could be used systematically to design quality control vision systems. The tests for generating acquisition conditions were carried out for the inspection of metallic and dielectric surfaces.
A new, efficient calibration method for catadioptric sensors is presented in this paper. It is based on an accurate measurement of the three-dimensional parameters of the mirror by means of polarization imaging. By inserting a rotating polarizer between the camera and the mirror, the system is calibrated automatically, without any calibration pattern. Moreover, the method relaxes most of the constraints related to the calibration of catadioptric systems. We show that, contrary to our system, traditional calibration methods are very sensitive to misalignment between the camera axis and the symmetry axis of the mirror. From the measured three-dimensional parameters, we apply the generic calibration concept to calibrate the catadioptric sensor. The influence of perturbed parameter measurements on the reconstruction of a synthetic scene is also presented. Finally, experiments prove the validity of the method, with preliminary results on three-dimensional reconstruction.
Rough surface relief extraction is generally performed mechanically with a tactile sensor or with an auto-focus laser sensor. With these sensors, the surface relief is estimated from the analysis of a series of profiles. Since these measurements are time-consuming, we aim to determine the relief by image processing. Several image processing methods have been proposed for relief extraction, such as shape from shading, optical flow, shape from focus and photometric stereovision. Our work is based on photometric stereovision. In 1980, Woodham showed that the relief of a Lambertian surface can be determined by exploiting a photometric model that takes into account the positions of the camera and light source relative to the surface plane. The proposed model expresses the gray level in the image as a function of the local relief variations. Three images of the same relief, obtained under different lighting angles, are used to reconstruct the surface relief. Starting from Woodham's method, several important improvements have been proposed by other researchers. However, a limit study in Section 2.1.3 shows that these methods, which rely on Lambert's model, are suited to diffuse reflection but not to specular reflection.
We therefore propose another method to extract the relief of rough, textured, reflective surfaces. In the proposed method, we show that the acquired images can be decomposed into two independent components, diffuse and specular. The diffuse component can be processed with Lambert's model, while the specular component can be processed using knowledge of the facet positions. Finally, Section 3 presents the experimental results obtained with this method and compares the measurement precision with the experimental results obtained with Lambert's model alone.
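Woodham's Lambertian photometric stereo, which the above methods take as a starting point, can be sketched as follows; the three images and the light direction matrix are hypothetical inputs, and no specular handling is attempted here.

```python
import numpy as np

def photometric_stereo(images, light_dirs):
    """Lambertian photometric stereo (Woodham, 1980).

    images     : list of three grayscale images (H, W) of the same relief,
                 acquired under three different lighting directions
    light_dirs : (3, 3) matrix, one unit lighting direction per row
    Returns (normals, albedo): per-pixel unit normals (H, W, 3) and albedo (H, W).
    """
    I = np.stack([im.reshape(-1) for im in images], axis=0)    # (3, H*W) intensities
    G = np.linalg.inv(light_dirs) @ I                          # G = albedo * normal
    albedo = np.linalg.norm(G, axis=0)
    normals = (G / np.maximum(albedo, 1e-8)).T                 # normalize to unit length
    h, w = images[0].shape
    return normals.reshape(h, w, 3), albedo.reshape(h, w)
```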
A modern heavy plate rolling mill can process more than 20 slabs and plates simultaneously. To avoid material mix-ups under compact occupancy and the continual discharging and re-entering of parts, the identity and position of each part must be known at every moment. One possibility for determining the identity and position of each slab and plate is a comprehensive vision-based tracking system. Compared to a tracking system that calculates the position of a plate from the diameter and the revolutions of the transport rolls, a visual system is not corrupted by position- and material-dependent transmission slip.
In this paper we therefore present a vision-based material tracking system for the 2-dimensional tracking of glowing material in a harsh environment. It covers the production area from the plant's descaler to the pre-stand of the rolling mill and consists of four independent, synchronized, overlapping cameras. The paper first presents the conceptual design of the tracking system and then continues with the camera calibration, the determination of pixel contours, the data segmentation, and the fitting and modelling of the object bodies. Next, the work describes the testing setup: how the material tracking system was integrated into the control system of the rolling mill and how the delivered tracking data was checked for correctness. Finally, the paper presents some results. It is shown that the position of moving plates was estimated with a precision of approximately 0.5 m. The results are analyzed, and it is explained where the inaccuracies come from and how they can eventually be removed. The paper ends with a conclusion and an outlook on future work.
Automated visual inspection of metal castings is defined as a quality control task that automatically determines, using visual data, whether a casting deviates from a given set of specifications. Many research directions in this field have been explored, some very different principles have been adopted, and a wide variety of algorithms have appeared in the literature. However, the developed approaches are tailored to the inspection task, i.e., there is no common approach applicable to all cases because the development is an ad hoc process. Additionally, detection accuracy should be improved, because there is a fundamental trade-off between false alarms and missed detections. For these reasons, we proposed a novel methodology, called Automated Multiple View Inspection, that uses redundant views of the test object to perform the inspection task. The method opens up new possibilities in the inspection field by taking into account the useful information about the correspondence between the different views. It is very robust because it first identifies potential defects in each view and then finds correspondences between these potential defects; only those that are matched in different views are reported as real defects. In this paper, we review the advances made in this field, giving an overview of multiple view inspection and showing experimental results obtained on metal castings.
This paper presents the design and realization of an imaging system intended for use as a reference method for the
accurate measurement of cotton fiber length. The prototype system is composed of an off-the-shelf scanner that
generates a grayscale image of multiple individualized fibers, followed by customized image processing algorithms that
compute the length of each fiber in the image. Although the system requires some degree of separation between the
individual fibers at scan time, it is shown to produce highly accurate length measurements that are invariant to fiber
orientation, shape, inter-fiber intersections, and intra-fiber crimps and crossovers. Hence, in its present state, the
proposed system serves as an excellent reference method for assessing the efficacy of commercially available length
measurement systems.
A mura quantification method for mura inspection is reported. Mura is a local unevenness of lightness, without a clear contour, on a surface manufactured to be uniform, and it gives viewers an unpleasant sensation. Mura has traditionally been inspected by human inspectors; however, a measurement that quantifies mura strength is needed. We report a method using multilevel sliced images, which yields an index for evaluating mura intensity.
This work presents a method that detects aspect flaws occurring on the colored surfaces of drinking glasses decorated by an industrial silk-screen process. As the pattern printed on the glasses varies slightly between two successively produced glasses, a simple comparison between a reference image representing a flawless glass and the current image containing the glass to be inspected provides poor flaw detection results. We therefore propose an original color image segmentation scheme in order to compare the segmentation of the reference image with that of the current image to be inspected. This procedure iteratively constructs the pixel classes by histogram multi-thresholding. For this purpose, the most discriminating color spaces are automatically selected during an off-line supervised learning stage, so that the color image segmentation is achieved by pixel classification.
Interferometric imaging has the potential to extend the usefulness of optical microscopes by encoding small phase shifts
that reveal information about topology and materials. At the Oak Ridge National Laboratory (ORNL), we have
developed an optical Spatial Heterodyne Interferometry (SHI) method that captures reflection images containing both
phase and amplitude information at high speed. By measuring the phase of a wavefront reflected off or
transmitted through a surface, the relative surface heights and some materials properties can be measured. In this paper
we briefly review our historical application of SHI in the semiconductor industry, but the focus is on new research to
adapt this technology to the inspection of MEMS devices, in particular to the characterization of motion elements such
as microcantilevers and deformable mirror arrays.
Twenty-five years after the seminal work of Jean Morlet, the wavelet transform, multiresolution analysis, and other space-frequency or space-scale approaches are considered standard tools by researchers in image processing, and many applications have been proposed that demonstrate the value of these techniques. This paper proposes a review of recently published works dealing with industrial applications of wavelets and, more generally, multiresolution analysis. More than 180 recent papers are presented.
The management of mineral fertilisation using centrifugal spreaders requires the development of spread pattern
characterisation devices to improve the quality of fertiliser spreading. In order to predict the spread pattern deposition
using a ballistic flight model, several parameters need to be determined and especially the velocity of the granules when
they leave the spinning disc. This paper demonstrates that a motion-blurred image acquired in the vicinity of the disc with a low-cost imaging system can provide the three-dimensional components of the outlet velocity of the particles. A
binary image is first obtained using a recursive linear filter. Then an original method based on the Hough transform is
developed to identify the particle trajectories and to measure their horizontal outlet angles, not only in the case of
horizontal motion but also in the case of three dimensional motion. The method combines a geometric approach and
mechanical knowledge derived from spreading analysis. The outlet velocities are deduced from the outlet angle
measurements using kinematic relationships.
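The trajectory-identification step can be illustrated with a minimal sketch using OpenCV's probabilistic Hough transform; the file name is hypothetical, and Otsu thresholding stands in for the recursive linear filter of the actual method.

```python
import cv2
import numpy as np

# hypothetical motion-blurred image acquired in the vicinity of the spinning disc
blurred = cv2.imread("disc_vicinity.png", cv2.IMREAD_GRAYSCALE)

# crude binarization standing in for the paper's recursive linear filter
_, binary = cv2.threshold(blurred, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# probabilistic Hough transform: each detected segment is a candidate particle streak
segments = cv2.HoughLinesP(binary, rho=1, theta=np.pi / 180,
                           threshold=50, minLineLength=40, maxLineGap=5)

if segments is not None:
    for x1, y1, x2, y2 in segments[:, 0]:
        # horizontal outlet angle of the streak in the image plane
        angle = np.degrees(np.arctan2(y2 - y1, x2 - x1))
        print(f"segment ({x1},{y1})-({x2},{y2}): angle = {angle:.1f} deg")
```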
In the context of precision agriculture, we present a robust and automatic method based on simulated images for evaluating the efficiency of any crop/weed discrimination algorithm for a given inter-row weed infestation rate. Simulating these images requires two steps: 1) modeling a crop field from the spatial distribution of plants (crop and weeds); 2) projecting the created field through an optical system to simulate photographing. An application is then proposed that investigates the accuracy and robustness of a crop/weed discrimination algorithm combining line detection (Hough transform) and plant discrimination (crop and weeds). The accuracy of the weed infestation rate estimate for each image is computed by direct comparison with the initial weed infestation rate of the simulated images. It reveals a performance better than 85%.
In the context of precision agriculture, we have developed a machine vision system for a real-time precision sprayer. From a monochrome CCD camera located in front of the tractor, crop and weeds are discriminated by image processing based on spatial information using a Gabor filter. This method separates the periodic signal from the non-periodic one and enhances the crop rows, whereas weeds have a patchy distribution. Weed patches are then clearly identified by a blob-coloring method. Finally, we use a pinhole model to transform the weed patch image coordinates into world coordinates in order to activate the right electro-pneumatic valve of the sprayer at the right moment.
In the agronomic domain, simplifying crop counting, which is necessary for yield prediction and agronomic studies, is an important project for technical institutes such as Arvalis. Although the main objective of our overall project is to design a mobile robot for acquiring natural images directly in the field, Arvalis first asked us to detect wheat ears in images by image processing and then count them, which provides the first component of the yield. In this paper, we compare different texture-based image segmentation techniques relying on feature extraction with first- and higher-order statistical methods, applied to our images. The extracted features are used for unsupervised pixel classification to obtain the different classes in the image: the K-means algorithm is applied, followed by the choice of a threshold to highlight the ears; a minimal sketch of this step is given below. Three methods have been tested in this feasibility study, with an average error of 6%. Although the quality of the detection is currently evaluated visually, automatic evaluation algorithms are being implemented. Moreover, other higher-order statistical methods will be implemented in the future, jointly with methods based on spatio-frequency transforms and specific filtering.
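A minimal sketch of unsupervised pixel classification on texture features, assuming simple first-order statistics (local mean and standard deviation) as stand-ins for the statistical features used in the paper; window size and number of classes are hypothetical.

```python
import numpy as np
import cv2
from sklearn.cluster import KMeans

def segment_texture(gray, n_classes=3, win=15):
    """Cluster pixels on first-order texture statistics (local mean and
    standard deviation) with K-means, returning a label image."""
    gray = gray.astype(np.float32)
    mean = cv2.blur(gray, (win, win))
    sq_mean = cv2.blur(gray * gray, (win, win))
    std = np.sqrt(np.maximum(sq_mean - mean * mean, 0))

    features = np.stack([mean.ravel(), std.ravel()], axis=1)
    labels = KMeans(n_clusters=n_classes, n_init=10, random_state=0).fit_predict(features)
    return labels.reshape(gray.shape)
```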
Simple representation of complex 3D data sets is a fundamental problem in computer vision. From a quality control perspective, it is crucial to use efficient and simple techniques to define a reference model for further recognition or comparison tasks. In this paper, we focus on reverse engineering 3D data sets by recovering rational supershapes to build an implicit function representing mechanical parts. We extend existing superquadric recovery techniques to supershapes and adapt the concepts introduced for the ratioquadrics to introduce rational supershapes. The main advantage of rational supershapes over standard supershapes is that the radius is expressed as a rational fraction instead of sums and compositions of powers of sines and cosines, which allows simpler and faster computations during the optimization process. We present reconstruction results for complex 3D data sets that are represented by an implicit equation with a small number of parameters, which can then be used to build an error measure.
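For reference, the radius of a standard (non-rational) supershape follows the Gielis superformula, whose nested powers of sines and cosines are precisely what the rational variant replaces with a rational fraction (here m controls the rotational symmetry, a and b the scale, and n1, n2, n3 the shape exponents):

$$ r(\theta) = \left( \left| \frac{1}{a}\cos\frac{m\theta}{4} \right|^{n_2} + \left| \frac{1}{b}\sin\frac{m\theta}{4} \right|^{n_3} \right)^{-1/n_1} $$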
In this paper we present an innovative way to simultaneously perform feature extraction and classification for
the quality control issue of surface grading by applying two well known multivariate statistical projection tools
(SIMCA and PLS-DA). These tools have been applied to compress the color texture data describing the visual
appearance of surfaces (soft color texture descriptors) and to directly perform classification using statistics and
predictions computed from the extracted projection models.
Experiments have been carried out using an extensive image database of ceramic tiles (VxC TSG). This
image database comprises 14 different models, 42 surface classes and 960 pieces. A factorial experimental
design has been carried out to evaluate all the combinations of several factors affecting the accuracy rate. Factors
include tile model, color representation scheme (CIE Lab, CIE Luv and RGB) and compression/classification
approach (SIMCA and PLS-DA). In addition, a logistic regression model is fitted from the experiments to
compute accuracy estimates and study the factors' effects.
The results show that PLS-DA performs better than SIMCA, achieving a mean accuracy rate of 98.95%. These
results outperform those obtained in a previous work where the soft color texture descriptors in combination
with the CIE Lab color space and the k-NN classifier achieved an accuracy of 97.36%.
This paper presents optimized signal and image processing libraries from Intel Corporation. Intel Performance
Primitives (IPP) is a low-level signal and image processing library developed by Intel Corporation to optimize
code on Intel processors. Open Computer Vision library (OpenCV) is a high-level library dedicated to computer
vision tasks. This article describes the use of both libraries to build flexible and efficient signal and image processing applications.
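As an illustration of the kind of pipeline these libraries support, here is a minimal sketch using OpenCV's modern Python bindings (the paper targets the C/C++ libraries themselves); the image file name is hypothetical.

```python
import cv2

# a minimal OpenCV pipeline: load, denoise, and extract edges
image = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)      # hypothetical input image
smoothed = cv2.GaussianBlur(image, (5, 5), sigmaX=1.5)     # Gaussian denoising
edges = cv2.Canny(smoothed, threshold1=50, threshold2=150) # Canny edge map
cv2.imwrite("part_edges.png", edges)
```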
The characterisation and subsequent detection of speckle in ultrasound (US) has been an important research topic in US imaging, mainly focusing on two specific applications: improving the signal-to-noise ratio by removing the speckle noise distribution and, secondly, detecting fully developed speckle patterns in order to perform a 3D reconstruction using only image content information from freehand, sensorless images.
The main novelty of this work is to show that speckle detection can be improved by finding optimally discriminant low-order speckle statistics. We describe a fully automatic method for speckle detection and propose and validate a framework that can be applied efficiently to real B-scan data, which has not been published to date. Different experiments have been carried out to validate the speckle detection methodology using both real and simulated data.
Surgical operations on the shoulder joint are guided by various principles: osteosynthesis in the case of fracture, osteotomy to correct a deformation or modify the functioning of the joint, or implantation of an articular prosthesis. At the end of the twentieth century, many innovations appeared in the domains of biomechanics and orthopedic surgery. Nevertheless, theoretical and practical problems may arise during the operation (the surgeon's visual field is very limited, and the quality and shape of the bone vary from patient to patient). Biomechanical criteria of success are defined for each intervention. For example, the successful installation of a prosthetic implant is assessed according to the degree of mobility of the new joint, the movements of this joint being a function of the shape of the prosthesis and of its position on its bony support. It is not always easy to optimize the preparation of the surgical operation for every patient, and a preliminary computer simulation would help the surgeon in making choices and preparing the intervention. Virtual reality techniques offer a high degree of immersion and make it possible to envisage the development of a navigation device for use during the operation.
Snakes, or active contours, are used extensively in computer vision and image processing applications, particularly to locate object boundaries. Problems associated with initialization and poor convergence to boundary concavities have arisen, which restricts their utility. This paper presents a new approach to the problem of estimating defect contours in radiographic images using parametric active contours. In this approach, we exploit the performance of the GVF as an external force and enhance it by adding an adaptive external pressure force, which speeds up the snake's progression, makes it less sensitive to initialization, and provides the capability of tracking concavities.
This paper investigates the method for object fingerprinting in the context of element specific x-ray imaging. In
particular, the use of spectral descriptors that are illumination invariant and viewpoint independent for pattern
identification was examined in some detail. To improve generation of the relevant "signature", the constructed spectral descriptor is enhanced with a differentiator that has built-in noise filtration capability and good localisation
properties, thus facilitating the extraction of element specific features at a coarse-grained level. In addition to the
demonstrable efficacy in identifying significant image intensity transitions that are associated with the underlying
physical process of interest, the method has the distinct advantage of being conceptually simple and computationally
efficient. These latter properties allow the descriptor to be further utilised by an intelligent system capable of performing
a fine-grained analysis of the extracted pattern signatures. The performance of the spectral descriptor has been studied in
terms of the quality of the signature vectors that it generated, quantitatively based on the established framework of
Spectral Information Measure (SIM). Early results suggest that such a multiscale approach to image sequence analysis offers considerable potential for real-time applications.
Biometrics performs personal authentication using individual bodily features such as fingerprints, faces, etc. These technologies have been studied and developed for many years. In particular, fingerprint authentication has evolved over many years and is currently one of the world's most established biometric authentication techniques. Not long ago this technique was only used for personal identification in criminal investigations and high-security facilities. In recent years, however, various biometric authentication techniques have appeared in everyday applications. While providing great convenience, they have also raised a number of technical issues concerning their operation.
Generally, fingerprint authentication is comprised of a number of component technologies: (1) sensing technology for
detecting the fingerprint pattern; (2) image processing technology for converting the captured pattern into feature data
that can be used for verification; (3) verification technology for comparing the feature data with a reference and
determining whether it matches. Current fingerprint authentication issues, revealed in research results, originate with
fingerprint sensing technology. Sensing methods for detecting a person's fingerprint pattern for image processing are
particularly important because they impact overall fingerprint authentication performance. The following problems with current sensing methods occur in some cases: some fingerprints are difficult to detect with conventional sensors, and fingerprint patterns are easily affected by the finger's surface condition, so that noise such as discontinuities and thin spots can appear in fingerprint patterns obtained from wrinkled or sweaty fingers.
To address these problems, we proposed a novel fingerprint sensor based on new scientific knowledge. A
characteristic of this new method is that obtained fingerprint patterns are not easily affected by the finger's surface
condition because it detects the fingerprint pattern inside the finger using transmitted light.
We examined optimization of the illumination system of this novel fingerprint sensor so as to obtain a high-contrast fingerprint pattern over a wide area and to improve the image processing in step (2).
This paper introduces an algorithm dedicated to the detection of the axes of cylindrical objects in a 3-D block. The proposed algorithm performs the 3-D axis detection without prior segmentation of the block. This approach is particularly appropriate when the grey levels of the cylindrical object are not homogeneous and are thus difficult to distinguish from the background. The method relies on gradient and curvature estimation and operates in two main steps: the first selects candidate voxels for the axis, and the second refines the determination of the axis of each cylindrical object. Applied to fiber-reinforced composite materials, this algorithm makes it possible to detect the axes of fibers in order to obtain the geometrical characteristics of the reinforcement. Knowing the reinforcement characteristics is important not only for the quality control of the material but also for predicting its thermal and mechanical behavior. In this paper, the various steps of the algorithm are detailed. Then, results obtained with synthetic blocks and with blocks acquired by synchrotron X-ray microtomography on actual carbon-fiber-reinforced carbon (C/C) composites are presented.
In this article, a new implementation of active curve algorithms is proposed: an active region algorithm based on the stationary states of a nonlinear diffusion principle. Its originality is to obtain a set of geometric envelopes in one pass, with a correspondence between the threshold level of the grayscale result and a regularity scale close to the original shape. This set of geometric envelopes gives a multiscale representation, from a very regular approximation to a fully detailed, rougher representation. This property is used in a new subpixel circle center estimator developed for the case of distorted contours. The results are very promising, as precision is noticeably improved compared to a least-mean-squares estimator. The proposed estimator is therefore well suited to limiting the effect of distortions caused by industrial processes.
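For context, the least-squares baseline against which the proposed estimator is compared can be sketched as the classical algebraic (Kåsa) circle fit; the contour points are a hypothetical input.

```python
import numpy as np

def circle_fit_lsq(points):
    """Algebraic least-squares circle fit (Kasa method): solves
    x^2 + y^2 + A*x + B*y + C = 0 in the least-squares sense.

    points : (N, 2) array of contour points
    Returns (cx, cy, radius).
    """
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = -(x**2 + y**2)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    cx, cy = -a / 2.0, -b / 2.0
    radius = np.sqrt(cx**2 + cy**2 - c)
    return cx, cy, radius
```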
We propose a method for quantifying the design of the automotive frontal view, based on research on human visual impressions of facial expressions. We evaluated the automotive frontal "face" using facial words and perceived age. We then verified experimentally how effectively line-drawing images and coche-PICASSO images could be used as image stimuli. As a result, some of the facial words were strongly correlated with both the facial expressions and the perceived age in the line-drawing images. In addition, the perceived age for the coche-PICASSO images was always younger than that for the line-drawing images.
An architecture for fast video object recognition is proposed. This architecture is based on an approximation of a feature-extraction function, Zernike moments, and an approximation of a classification framework, Support Vector Machines (SVM). We review the principles of the moment-based method and of the approximation method, dithering. We evaluate the performance of two moment-based methods, Hu invariants and Zernike moments, and the implementation cost of the better one. We review the principles of the classification method and present the combination algorithm, which consists of rejecting ambiguities in the learning set using the SVM decision before applying the learning step of the hyperrectangle-based method. We present results obtained on a standard database, COIL-100, evaluated in terms of hardware cost as well as classification performance.
This paper proposes a method for detecting moving objects by background subtraction using normalized correlation matching. Normalized correlation matching is a general-purpose template matching method and is robust against changes in brightness. It can therefore be expected to provide stable detection of moving objects even when the background brightness changes. The proposed method regards the background image as the template image and evaluates correlation rates between the background image and the scene image in order to extract moving objects. We also adopt an integration technique for the correlation rate to achieve more stable detection.
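A block-wise sketch of the idea, under the assumption that the normalized correlation between corresponding background and scene patches drops where a moving object appears, regardless of a global brightness change; block size and threshold are hypothetical.

```python
import numpy as np

def ncc(a, b, eps=1e-8):
    """Zero-mean normalized cross-correlation of two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

def detect_moving(background, frame, block=16, thresh=0.7):
    """Mark blocks whose correlation with the background falls below a threshold."""
    h, w = background.shape
    mask = np.zeros((h, w), dtype=bool)
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            bg = background[y:y + block, x:x + block].astype(np.float64)
            fr = frame[y:y + block, x:x + block].astype(np.float64)
            if ncc(bg, fr) < thresh:
                mask[y:y + block, x:x + block] = True
    return mask
```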
Quality testing of optics used in laboratories is one of the most important tasks, and many procedures have been proposed and used. These testing procedures are based on measuring the laser wavefront reflected from optical component surfaces. Using the Shack-Hartmann method, we can measure a simple curved laser beam wavefront. To achieve this, we first reduce the optical noise that may disturb our optical data. We improved peak-location and sum-location algorithms to introduce a simple new algorithm based on adaptive thresholding. The proposed algorithm scans the image to identify the approximate locations of the focal spots by looking for local optical centers on the CCD.
This paper mainly discusses two problems: object focusing and depth measurement. First, we propose a novel and robust image focusing scheme by introducing a new measure of focus based on Orientation Code Matching (OCM). A new evaluation function, named the Complemental Pencil Volume (CPV), is defined and calculated to represent the local sharpness of images, either in or out of focus, by comparing the similarity between patterns extracted at the same position within their own scenes. It yields an identifiable, unique maximum, or peak, even for ill-conditioned scenes with low-contrast observations. Experiments show that OCM-based focusing is very robust to changes in brightness and to further irregularities of the real imaging system, such as dark conditions. Second, based on this robust focusing technique, we apply it to an image sequence of an object surface to measure the depth of its profile. A simple planar object surface was used to demonstrate the basic approach, and the results show successful and precise depth measurement of this object.
This paper deals with image quality assessment. This field plays nowadays an important role in various image
processing applications. A number of objective image quality metrics, which may or may not correlate with subjective quality, have been developed during the last decade. Two categories of metrics can be distinguished: full-reference and no-reference. A full-reference metric tries to evaluate the distortion introduced to an image with regard to a reference. A no-reference approach attempts to model the judgment of image
quality in a blind way. Unfortunately, the universal image quality model is not on the horizon and empirical
models established on psychophysical experimentation are generally used. In this paper, we focus only on the
second category to evaluate the quality of color reproduction where a blind metric, based on human visual system
modeling is introduced. The objective results are validated by single-media and cross-media subjective tests.
Nowadays, many image-processing-based evaluation methods for the food industry have been proposed. These methods are becoming a new evaluation approach alongside the sensory tests and physical measurements traditionally used for quality evaluation. The goal of our research is the structural evaluation of sponge cake using image processing.
In this paper, we propose a method for extracting features of the bubble structure in sponge cake. The bubble structure is one of the important properties for understanding the characteristics of the cake from an image. To obtain the cake images, we first cut the cakes and scanned the cut surfaces with a CIS scanner; because the depth of field of this type of scanner is very shallow, the bubble regions of the surface appear dark and blurred. We extract the bubble regions from the surface images based on these features: the input image is binarized, and the bubble features are extracted by morphological analysis.
To evaluate the result of the feature extraction, we compared its correlation with the "bubble size" score of the sensory test. The results show that bubble extraction using morphological analysis correlates well with the sensory score, indicating that our method performs comparably to the subjective evaluation.
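A sketch of the binarization and morphological step with OpenCV, assuming dark, blurred bubble regions as described above; the file name, threshold choice, and structuring-element size are hypothetical.

```python
import cv2

# hypothetical scanned image of the cut cake surface (CIS scanner, grayscale)
surface = cv2.imread("cake_surface.png", cv2.IMREAD_GRAYSCALE)

# bubbles appear dark: inverted Otsu binarization isolates them
_, dark = cv2.threshold(surface, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# morphological opening removes small noise while preserving bubble blobs
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
bubbles = cv2.morphologyEx(dark, cv2.MORPH_OPEN, kernel)

# one connected component per bubble; the area statistic serves as a size feature
n, labels, stats, centroids = cv2.connectedComponentsWithStats(bubbles)
areas = stats[1:, cv2.CC_STAT_AREA]   # skip label 0 (background)
print(f"{n - 1} bubbles, mean area {areas.mean():.1f} px" if n > 1 else "no bubbles")
```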
For many years, image processing researchers have devoted tremendous effort to developing visual inspection systems. Developing such systems requires long and demanding work by image processing experts, yet in many cases human operators can detect the defects very easily. Here, a visual inspection system incorporating a simulator of the human operator's sensitivity is discussed. To develop the simulator, a model of human sensitivity for visual inspection must be established, and building this model requires several experiments to evaluate human sensitivity. In this paper, some experiments for evaluating human sensitivity are presented.
In recent years, non-negative matrix factorization (NMF) methods for reduced image data representation have attracted the attention of the computer vision community. These methods are considered a convenient part-based representation of image data for recognition tasks with occluded objects. In this paper, two novel modifications of NMF are proposed that utilize the matrix sparseness control introduced by Hoyer. We have analyzed the influence of sparseness on recognition rates (RR) for various dimensions of subspaces generated for two image databases, and we have studied the behaviour of four types of distances between a projected unknown image object and the feature vectors in NMF subspaces generated from training data. For occluded ORL face data, Euclidean and diffusion distances perform better than Riemannian ones, contrary to the general expectation that the Euclidean metric is suitable only for orthogonal basis vectors. In the case of occluded USPS digit data, the RR obtained with the modified NMF algorithm are very close to those of the conventional NMF algorithms for all four distances over all dimensions and sparseness constraints; in this case, Riemannian distances provide higher RR than Euclidean and diffusion ones. The proposed modified NMF method has a relevant computational benefit, since it does not require a separate calculation of the feature vectors, which are generated explicitly during the NMF optimization process.
We propose a Cellular Nonlinear Network (CNN) governed by reaction-diffusion equations for quality control by artificial visual inspection. We show that using a specific nonlinearity makes it possible to extract regions of interest from a noisy, weakly contrasted image without any tuning of the processing time. We finally present the electronic realization of an elementary cell of the CNN with a view to a possible electronic integration.
Defect detection in images is a common task in quality control and is often integrated into partially or fully automated systems. Assessing the performance of defect detection algorithms is thus of great interest. However, being application- and context-dependent, it remains a difficult task. This paper describes a methodology for measuring the performance of such algorithms on large images in a semi-automated defect inspection situation. Considering standard problems occurring in real cases, a comparison of typical performance evaluation methods is made. This analysis leads to the construction of a simple and practical ROC-based method. The algorithm extends pixel-level ROC analysis to an object-based approach by dilating the ground truth and the set of detected pixels before calculating the true positive and false positive rates. These dilations are computed using a priori knowledge of a human-defined ground truth and give more consistent values to the true positive and false positive rates in the semi-automated inspection context. Moreover, the dilation process is designed to adapt automatically to the object shapes so that it can be applied to all types of defects.
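A sketch of such an object-tolerant rate computation, using a fixed-size binary dilation from SciPy as a stand-in for the shape-adaptive dilation of the actual method; the tolerance radius is a hypothetical parameter.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def dilated_rates(ground_truth, detection, dilate_px=3):
    """True/false positive rates with a tolerance margin around objects.

    ground_truth, detection : boolean masks of defect pixels
    dilate_px               : tolerance radius (a shape-adaptive value in the paper)
    """
    struct = np.ones((2 * dilate_px + 1, 2 * dilate_px + 1), dtype=bool)
    gt_dil = binary_dilation(ground_truth, structure=struct)
    det_dil = binary_dilation(detection, structure=struct)

    # a ground-truth pixel counts as found if a detection lies within the margin
    tpr = (ground_truth & det_dil).sum() / max(ground_truth.sum(), 1)
    # a detected pixel is false only if it is far from any ground-truth pixel
    fpr = (detection & ~gt_dil).sum() / max((~gt_dil).sum(), 1)
    return tpr, fpr
```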
This article proposes an algorithm that uses information deduced from a pair of wide-baseline (or sparse-view)
stereo images to enhance the accuracy of camera rotation angles measured by inaccurate sensors. The so-called
JUDOCA operator, a fast junction detector, is used to extract salient interest points. From the output of that
operator, an affine transformation is estimated and used to guide a variance-normalized correlation process that
yields a set of candidate matches. The RANSAC scheme is then used to estimate the fundamental matrix; hence, the
essential matrix can be estimated and decomposed by SVD. In addition to a translation vector, this decomposition
yields an accurate rotation matrix and therefore accurate rotation angles. A mathematical derivation is given to
extract and express the angles in terms of different rotation systems.
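The geometric back end of such a pipeline can be sketched with OpenCV as below. The JUDOCA junction detection and the variance-normalized correlation matching are not reproduced; the matched point arrays and the intrinsic matrix K are placeholders obtained elsewhere.

```python
# Sketch of the geometric stage: RANSAC fundamental matrix -> essential
# matrix -> rotation. pts1, pts2 are Nx2 float arrays of matched image points
# and K is the 3x3 camera intrinsic matrix (both assumed given).
import cv2
import numpy as np

def rotation_from_matches(pts1, pts2, K):
    # Fundamental matrix estimated robustly with RANSAC
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
    # Essential matrix from the calibrated geometry, then SVD-based decomposition
    E = K.T @ F @ K
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    # Express the rotation as three Euler angles (degrees), one possible convention
    angles = cv2.RQDecomp3x3(R)[0]
    return R, t, angles
```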
This paper presents a new photogrammetric approach to automatically reconstruct and measure the imprecision or
deformations of metallic parts composed of curved edges and circular holes. This approach uses images provided by a
CCD camera moving around the part. The main purpose of the approach is to reconstruct circular holes and curved
edges automatically and accurately. For that, the solution uses data from the computer aided design model (CAD) and
information extracted from the images. Experimental results on several parts demonstrate the precision and
robustness of the process. They show that the proposed approach has promising potential for the automatic 3D
control of industrial parts.
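One small building block of such a pipeline can be illustrated as follows: a circular hole seen by the camera projects to an ellipse in the image, so fitting ellipses to extracted contours provides the 2D measurements that a reconstruction can work from. This sketch is illustrative only and does not reproduce the paper's CAD-guided approach; thresholds are assumptions.

```python
# Illustrative ellipse extraction for projected circular holes, using OpenCV.
import cv2

def detect_hole_ellipses(gray, min_points=20):
    edges = cv2.Canny(gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
    ellipses = []
    for c in contours:
        if len(c) >= min_points:                 # fitEllipse needs at least 5 points
            ellipses.append(cv2.fitEllipse(c))   # ((cx, cy), (major, minor), angle)
    return ellipses
```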
This study attempts to relate poultry feeding behavior to the visual and tactile characteristics of the feed. The aim of
the work is to make it possible to control the visual and tactile aspects of the feed (food pellets) by means of image
analysis. These aspects are often suspected of explaining the undesirable behavior of poultry, which can reject a feed
even though it has optimal nutritional characteristics. Such incidents have serious negative consequences for both the
animals and the breeder, with a major degradation of technical and economic performance. Many zootechnical studies
and breeding observations testify to the sensitivity of poultry to the visual and tactile aspects of feed, but the
measurements classically used to characterize them do not explain this phenomenon. Color, texture and shape features
extracted from images of pellets constitute effective and practical measures to describe their visual and tactile aspects.
We show that a pellet classification based on visual features and supervised by a set of poultry feeding-behavior labels
makes it possible to select a set of discriminating features.
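A supervised feature-ranking step of this kind could be sketched as below. The feature names, the label encoding and the random-forest importances are illustrative assumptions, not the study's actual features or selection procedure.

```python
# Illustrative sketch: classify pellets from image features under supervision
# of feeding-behaviour labels, then rank the features by discriminating power.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

feature_names = ["mean_hue", "mean_saturation", "contrast", "homogeneity",
                 "elongation", "roundness"]           # hypothetical feature set
X = np.random.rand(200, len(feature_names))           # one row per pellet image (placeholder)
y = np.random.randint(0, 2, 200)                      # accepted / rejected label (placeholder)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
for importance, name in sorted(zip(clf.feature_importances_, feature_names), reverse=True):
    print(f"{name}: {importance:.3f}")
```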
The starting point for all successful system development is simulation. Performing high-level simulation of a system
can help to identify, isolate and fix design problems. This work presents Uranus, a software tool for the simulation and
evaluation of image processing algorithms, with support for migrating them to an FPGA environment for algorithm
acceleration and embedded processing. The tool includes an integrated library of previously coded software operators
and provides the support needed to read and display image sequences as well as video files. The user can use the
precompiled software operators in a high-level process chain and code his or her own operators. In addition to the
prototyping tool, Uranus offers an FPGA-based hardware architecture with the same organization as the software
prototyping part. The hardware architecture contains a library of FPGA IP cores for image processing connected to a
PowerPC-based system. The Uranus environment is intended for the rapid prototyping of machine vision algorithms
and their migration to an FPGA accelerator platform, and it is distributed for academic purposes.
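The operator-chain concept can be illustrated generically as follows. This is not the Uranus API, whose interface is not described here; it only shows how previously coded operators can be composed into a high-level process chain and applied frame by frame.

```python
# Generic illustration of a process chain of precoded image operators.
import cv2

def to_gray(frame):
    return cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

def blur(frame):
    return cv2.GaussianBlur(frame, (5, 5), 0)

def edges(frame):
    return cv2.Canny(frame, 50, 150)

pipeline = [to_gray, blur, edges]      # the "process chain"

def run_chain(frame, chain=pipeline):
    for op in chain:                   # apply each operator in turn
        frame = op(frame)
    return frame
```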
Augmented reality is used to improve color segmentation on the human body or on precious artefacts that must not be
touched. We propose a technique based on structured light to project a texture onto a real object without any contact
with it. Such techniques can be applied to medical applications, archaeology, industrial inspection and augmented
prototyping. Coded structured light is an optical technique based on active stereovision that allows shape acquisition.
By projecting a light pattern onto the surface of an object and capturing images with a camera, a large number of
correspondences can be found and 3D points can be reconstructed by means of triangulation.
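The final triangulation step can be sketched as below. The camera and projector projection matrices are placeholders assumed to come from a prior calibration, and the pattern decoding that produces the correspondences is not shown.

```python
# Minimal triangulation sketch: given matched pixel coordinates between the
# camera and the projector, each 3D point follows from the two 3x4 projection
# matrices P_cam and P_proj (assumed calibrated beforehand).
import cv2
import numpy as np

def triangulate(P_cam, P_proj, cam_pts, proj_pts):
    """cam_pts, proj_pts: 2xN float arrays of matched pixel coordinates."""
    X_h = cv2.triangulatePoints(P_cam, P_proj, cam_pts, proj_pts)  # 4xN homogeneous
    return (X_h[:3] / X_h[3]).T                                    # Nx3 Euclidean points
```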
When observing an object horizontally at a long distance, degradations due to atmospheric turbulence often
occur. In our previous work, we tried different methods to remove these degradations from infrared sequences.
We showed that the Wiener filter applied locally to each frame of a sequence yields good results in terms of
edges, while regularization by the Laplacian operator applied in the same way gives good results in terms of
noise removal in uniform areas. In this article, we try to combine the results of these two methods in order to
obtain a better restored image.
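One possible fusion of the two restorations is sketched below: keep the Wiener result near edges and the smooth result in uniform areas, switching with a weight derived from the local gradient magnitude. The weighting scheme and the Gaussian stand-in for the Laplacian-regularized result are assumptions for illustration, not necessarily the combination used in the article.

```python
# Edge-weighted fusion of two frame restorations (illustrative only).
import numpy as np
from scipy.signal import wiener
from scipy.ndimage import gaussian_filter, sobel

def fuse_restorations(frame, window=5, sigma=2.0):
    frame = frame.astype(float)
    rest_edges = wiener(frame, (window, window))   # local Wiener: good on edges
    rest_flat = gaussian_filter(frame, sigma)      # stand-in for the Laplacian-regularised result
    grad = np.hypot(sobel(frame, axis=0), sobel(frame, axis=1))
    w = grad / (grad.max() + 1e-12)                # ~1 near edges, ~0 in uniform areas
    return w * rest_edges + (1.0 - w) * rest_flat
```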
This paper proposes a new method for improving improperly exposed images. When images are taken under bad
conditions such as backlighting, color information is lost and brightness and contrast deteriorate. Generally,
improperly exposed images have the characteristic that the color differences are small and their color frequencies are
larger than those of a properly exposed image. In the proposed method, this characteristic is extracted using a new
color histogram space based on human perception, and the visibility of over- or under-exposed images is improved.
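For illustration only, a related standard technique is sketched below: adjusting the lightness channel in a perceptual color space (CIELAB) to recover visibility in badly exposed images. This does not reproduce the paper's own color histogram space.

```python
# Related illustration: CLAHE on the L channel of CIELAB (OpenCV), applied to
# an 8-bit BGR image to stretch a compressed lightness range.
import cv2

def improve_exposure(bgr):
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    l_eq = clahe.apply(l)                       # equalise the lightness channel locally
    return cv2.cvtColor(cv2.merge((l_eq, a, b)), cv2.COLOR_LAB2BGR)
```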
This article describes a new method and approach to texture characterization. Using a complex-network representation of an image together with classical and derived (hierarchical) measurements, we show how to obtain good performance in texture classification. The image is represented as a complex network, with one pixel per node. The node degree and the clustering coefficient, used with traditional and extended hierarchical measurements, characterize the "organisation" of textures.
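The pixel-graph construction can be sketched as below. The neighbourhood radius, the grey-level threshold and the measurements computed are illustrative assumptions; the hierarchical extensions of the paper are not reproduced.

```python
# Minimal sketch: each pixel is a node, and two pixels within a small radius
# are linked when their grey-level difference is below a threshold; degree and
# clustering-coefficient statistics then describe the texture.
import numpy as np
import networkx as nx

def texture_graph(gray, radius=2, t=10):
    h, w = gray.shape
    g = nx.Graph()
    g.add_nodes_from(range(h * w))
    for y in range(h):
        for x in range(w):
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y + dy, x + dx
                    if (dy, dx) != (0, 0) and 0 <= yy < h and 0 <= xx < w:
                        if abs(int(gray[y, x]) - int(gray[yy, xx])) <= t:
                            g.add_edge(y * w + x, yy * w + xx)
    return g

gray = np.random.randint(0, 256, (32, 32), dtype=np.uint8)
g = texture_graph(gray)
degrees = np.array([d for _, d in g.degree()])
clust = np.array(list(nx.clustering(g).values()))
print("mean degree:", degrees.mean(), "mean clustering:", clust.mean())
```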
Today, vision systems for robots are widely applied in many important applications, but 3-D vision systems for industrial use face many practical problems. Here, a vision system for bio-production is introduced.
Cloned seedling plants are one of the important applications of biotechnology. Most of the production processes for cloned seedlings are highly automated, but the transplanting of the small seedlings cannot be automated because their shapes are not stable, and handling them requires observing the shape of each seedling. In this research, a robot vision system is introduced for the transplanting process in a plant factory.