Fabio Remondino,1 Mark R. Shortis,2 Jürgen Beyerer,3 Fernando Puente León4
1Fondazione Bruno Kessler (Italy) 2RMIT Univ. (Australia) 3Fraunhofer-Institut für Optronik, Systemtechnik und Bildauswertung and Karlsruhe Inst. of Technology (Germany) 4Karlsruher Institut für Technologie (Germany)
Proceedings Volume Videometrics, Range Imaging, and Applications XII; and Automated Visual Inspection, 879101 (2013) https://doi.org/10.1117/12.2028778
This PDF file contains the front matter associated with SPIE Proceedings Volume 8791, including the Title Page, Copyright information, Table of Contents, Introduction, and Conference Committee listing.
Proceedings Volume Videometrics, Range Imaging, and Applications XII; and Automated Visual Inspection, 879103 (2013) https://doi.org/10.1117/12.2020484
This paper describes a strategy for accurate robot calibration using close-range photogrammetry. A 5-DoF robot has been designed to position two web cameras relative to an object. To ensure correct camera positioning, the robot is calibrated using the following strategy. First, the Denavit-Hartenberg method is used to generate a general kinematic robot model. A set of reference frames is defined relative to each joint and each of the cameras; transformation matrices are then produced to represent the change in position and orientation between frames in terms of joint positions and unknown parameters. The complete model is obtained by multiplying these matrices. Second, photogrammetry is used to estimate the postures of both cameras. A set of images of a calibration fixture is captured from different robot poses, and the camera postures are then estimated using bundle adjustment. Third, the kinematic parameters are estimated using weighted least squares: for each pose a set of equations is extracted from the model and the unknown parameters are estimated in an iterative procedure. Finally, these values are substituted back into the original model. The final model is tested using forward kinematics by comparing its predicted camera postures for given joint positions to the values obtained through photogrammetry. Inverse kinematics is performed using both least squares and particle swarm optimisation, and the two techniques are contrasted. Results demonstrate that this photogrammetric approach produces a reliable and accurate model of the robot that can be used with both least squares and particle swarm optimisation for robot control.
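The Denavit-Hartenberg step described in the abstract can be sketched as follows. This is a minimal stdlib-Python illustration of the standard DH convention, not the authors' code; the parameter ordering and function names are illustrative:

```python
import math

def dh_matrix(theta, d, a, alpha):
    """Homogeneous transform between consecutive joint frames
    using standard Denavit-Hartenberg parameters."""
    ct, st = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ]

def mat_mul(A, B):
    """4x4 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def forward_kinematics(dh_params, joint_angles):
    """Chain the per-joint transforms, as in 'the complete model is
    obtained by multiplying these matrices'. dh_params holds (d, a, alpha)
    per joint; returns the base-to-end-effector transform."""
    T = [[float(i == j) for j in range(4)] for i in range(4)]
    for (d, a, alpha), theta in zip(dh_params, joint_angles):
        T = mat_mul(T, dh_matrix(theta, d, a, alpha))
    return T
```

In the paper's calibration, the unknowns inside these matrices would be estimated by weighted least squares against camera postures from bundle adjustment.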
Proceedings Volume Videometrics, Range Imaging, and Applications XII; and Automated Visual Inspection, 879104 (2013) https://doi.org/10.1117/12.2020899
The paper presents a metric investigation of the Fuji FinePix Real 3D W1 stereo photo-camera. The stereo-camera uses a synchronized twin lens-CCD system to acquire two images simultaneously through two Fujinon 3x optical zoom lenses arranged in an aluminum die-cast frame integrated in a very compact body. The nominal baseline is 77 mm and the resolution of each CCD is 10 megapixels. Given the short baseline and the presence of two optical paths, the investigation aims to evaluate the accuracy of the 3D data that can be produced and the stability of the camera. From a photogrammetric point of view, the interest in this camera lies in its capability to acquire synchronized image pairs that contain important 3D metric information for many close-range applications (human body parts measurement, rapid prototyping, surveying of archeological artifacts, etc.). Calibration values for the left and right cameras at different focal lengths, derived with an in-house software application, are reported together with accuracy analyses. The object coordinates obtained from the bundle adjustment computation at each focal length were compared to reference coordinates of a test range by means of a similarity transformation. Additionally, the article reports on an investigation of the asymmetrical relative orientation between the left and right cameras.
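The comparison against reference coordinates via a similarity transformation can be illustrated in its simplest planar form. This is a hedged sketch: the paper's comparison is presumably a 3D (7-parameter) similarity transform, while the 4-parameter 2D Helmert variant below only shows the least-squares principle:

```python
def helmert_2d(src, dst):
    """Least-squares 4-parameter (2D) similarity transform:
    dst ~= [[a, -b], [b, a]] @ src + [tx, ty]."""
    # Normal equations N x = rhs for unknowns (a, b, tx, ty).
    N = [[0.0] * 4 for _ in range(4)]
    rhs = [0.0] * 4
    for (x, y), (X, Y) in zip(src, dst):
        for row, l in (([x, -y, 1.0, 0.0], X), ([y, x, 0.0, 1.0], Y)):
            for i in range(4):
                rhs[i] += row[i] * l
                for j in range(4):
                    N[i][j] += row[i] * row[j]
    # Gaussian elimination with partial pivoting.
    for c in range(4):
        p = max(range(c, 4), key=lambda r: abs(N[r][c]))
        N[c], N[p] = N[p], N[c]
        rhs[c], rhs[p] = rhs[p], rhs[c]
        for r in range(c + 1, 4):
            f = N[r][c] / N[c][c]
            for j in range(c, 4):
                N[r][j] -= f * N[c][j]
            rhs[r] -= f * rhs[c]
    sol = [0.0] * 4
    for i in range(3, -1, -1):
        sol[i] = (rhs[i] - sum(N[i][j] * sol[j]
                               for j in range(i + 1, 4))) / N[i][i]
    return sol  # a, b, tx, ty
```

The residuals after such a fit are what accuracy statements about the stereo camera would be based on.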
Luigi Barazzetti, Alberto Giussani, Fabio Roncoroni, Mattia Previtali
Proceedings Volume Videometrics, Range Imaging, and Applications XII; and Automated Visual Inspection, 879106 (2013) https://doi.org/10.1117/12.2019997
This paper presents the use of laser tracking technology for structure monitoring. In this field the use of this precise instrument is innovative, and new investigations are therefore needed for civil structures, especially for applications carried out under unstable environmental conditions. On the other hand, as laser trackers are today widely used in industrial applications aimed at collecting data at high speed with precisions better than ±0.05 mm, they seem quite promising for those civil engineering applications where numerous geodetic tools, often coupled with mechanical and electrical instruments, are usually used to inspect structure movements. This work illustrates three real civil engineering monitoring applications where laser tracking technology was used to detect object movements. The first is a laboratory test for the inspection of a beam (bending moment and shear). The second experiment is the stability inspection of a bridge. The last experiment is one of the first attempts to substitute laser trackers for traditional high-precision geometric leveling in monitoring an important historical building: the Cathedral of Milan. The achieved results, pros and cons, along with some practical issues, are described.
Proceedings Volume Videometrics, Range Imaging, and Applications XII; and Automated Visual Inspection, 879107 (2013) https://doi.org/10.1117/12.2020472
Multi-View Stereo (MVS), as a low-cost technique for precise 3D reconstruction, can rival laser scanners if the scale of the model is resolved. A fusion of stereo imaging equipment with photogrammetric bundle adjustment and MVS methods, known as photogrammetric MVS, can generate correctly scaled 3D models without using any known object distances. Although a large number of stereo images captured of the object (e.g. 200 high resolution images of a small object) contains redundant data that allows detailed and accurate 3D reconstruction, the capture and processing time increases when a vast amount of high resolution images is employed. Moreover, some parts of the object are often missing due to incomplete coverage. These problems demand a logical selection of the most suitable stereo camera views from the large image dataset. This paper presents a method for clustering and choosing optimal stereo, or optionally single, images from a large image dataset. The approach focusses on the two key steps of image clustering and iterative image selection. The method is developed within a software application called Imaging Network Designer (IND) and tested by the 3D recording of a gearbox and three metric reference objects. A comparison is made between IND and CMVS, a free package for selecting vantage images. The final 3D models obtained from the IND and CMVS approaches are compared with datasets generated with an MMDx Nikon laser scanner. Results demonstrate that IND can provide a better image selection for MVS than CMVS in terms of surface coordinate uncertainty and completeness.
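The iterative image selection problem described here is, at its core, a coverage problem. Neither IND's nor CMVS's actual algorithm is reproduced below; this is only a generic greedy-coverage heuristic, with illustrative names, showing why a small subset of views can retain most of the redundancy of a large dataset:

```python
def select_views(coverage, k):
    """Greedily pick up to k views, each adding the most not-yet-covered
    surface points. `coverage` maps a view id to the set of surface
    point ids visible in that view."""
    chosen, covered = [], set()
    remaining = dict(coverage)
    while remaining and len(chosen) < k:
        best = max(remaining, key=lambda v: len(remaining[v] - covered))
        if not remaining[best] - covered:
            break  # no remaining view adds new coverage
        chosen.append(best)
        covered |= remaining.pop(best)
    return chosen, covered
```

A real selector would also weight views by expected intersection geometry and image scale, not coverage alone.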
Proceedings Volume Videometrics, Range Imaging, and Applications XII; and Automated Visual Inspection, 879108 (2013) https://doi.org/10.1117/12.2019985
Many modern thermal cameras capture images that can be described by a pinhole camera model. This paper shows how multiple images of flat-like objects or 3D bodies can be mapped and mosaicked through a mathematical formulation between image and object space. This work demonstrates that both the geometric and radiometric parts need proper mathematical models that allow the user to obtain a global product (orthophotos or 3D models) in which accurate and detailed photogrammetric models and thermal images are registered, combining geometry and thermal information.
Proceedings Volume Videometrics, Range Imaging, and Applications XII; and Automated Visual Inspection, 879109 (2013) https://doi.org/10.1117/12.2021512
Automated close-range photogrammetric network orientation has traditionally been performed with the use of coded targets in the object space to allow for initial point correspondence determination and subsequent network orientation. Feature-based matching (FBM) techniques have recently offered an alternative procedure for point correspondence calculation between image pairs. FBM algorithms, however, do not come free of complications. Because FBM establishes point correspondences based on the similarity of feature descriptors, a considerable number of mismatches (outliers) can be anticipated, especially with increasing angles of convergence between images. For the critical component of initial relative orientation, it is essential that outliers are detected and largely removed from the matched point data. This paper reports on the application of a machine learning approach to outlier detection in FBM. The method of Support Vector Regression (SVR) is evaluated and compared to other outlier removal algorithms for cases of convergent image configurations. Various experimental tests were conducted in controlled networks and with other real datasets using the ‘Identifying point correspondences by Correspondence Function’ (ICF) algorithm, employing different SVR kernel functions. The paper also reports on optimisations made to achieve better results when highly convergent imaging geometries are adopted.
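The SVR/ICF machinery evaluated in the paper is not reproduced here. As a much simpler baseline for the same task, outlier flagging on match residuals can be done robustly with the median absolute deviation; this sketch is only a generic illustration of residual-based rejection, not any method from the paper:

```python
import statistics

def flag_outliers(residuals, k=3.0):
    """Flag matches whose residual deviates from the median by more than
    k robust standard deviations (MAD scaled to sigma for Gaussian noise)."""
    med = statistics.median(residuals)
    mad = statistics.median(abs(r - med) for r in residuals)
    sigma = 1.4826 * mad  # MAD -> standard deviation under normality
    if sigma == 0.0:
        return [False] * len(residuals)
    return [abs(r - med) > k * sigma for r in residuals]
```

Unlike a plain k-sigma rule, the median-based estimate is not itself dragged off by the outliers it is trying to detect, which matters when convergence angles make mismatches frequent.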
Proceedings Volume Videometrics, Range Imaging, and Applications XII; and Automated Visual Inspection, 87910B (2013) https://doi.org/10.1117/12.2019468
The modeling of real-world scenarios through captured 3D digital data has proven applicable in a variety of industrial applications, ranging from security to robotics to fields in the medical sciences. These different scenarios, along with variable conditions, present a challenge in finding flexible and appropriate solutions. In this paper, we present a novel approach based on a human cognition model to guide processing. Our method turns traditional data-driven processing into a new strategy based on a semantic knowledge system. Robust and adaptive methods for object extraction and identification are modeled in a knowledge domain created from purely numerical strategies. The goal of the present work is to select and guide algorithms in an adaptive and intelligent manner for detecting objects in point clouds. Results show that our approach succeeds in identifying the objects of interest while using various data types.
Proceedings Volume Videometrics, Range Imaging, and Applications XII; and Automated Visual Inspection, 87910C (2013) https://doi.org/10.1117/12.2020254
The data obtained by 3D scanners, even at the required high accuracy and density, contain disturbing noise; this noise complicates data processing, mainly by means of triangulated irregular networks using automated procedures. The paper presents a new method of noise reduction based on the natural redundancy of continuous objects and surfaces, in which, however, some deformation of the object shape occurs. The method involves a gradual choice of a selected number of nearest points for each point of a scan; a selected surface is fitted to them, and a new (smoothed) position of the point is obtained by elongating or shortening a ray with a given horizontal direction and zenith angle onto its intersection with the surface. Planes and polynomials of 2nd, 3rd and 4th degree are used as fitting surfaces. For better calculation stability, bivariate Chebyshev orthogonal polynomials are used. These surfaces are complemented by a method using the mean. The surface fitting may apply the least squares method with uniform weights or with weights depending on distance, but also a robust method: the minimisation of the sum of absolute values of corrections (L1 norm).
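The simplest instance of the fitting step above is a least-squares plane through a point's neighbourhood. The sketch below is a hedged simplification: it projects the point vertically onto the fitted plane, whereas the paper intersects along the measurement ray (horizontal direction and zenith angle), and the higher-degree Chebyshev surfaces are omitted:

```python
def det3(M):
    """Determinant of a 3x3 matrix."""
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def fit_plane(points):
    """Least-squares fit of z = a*x + b*y + c via normal equations,
    solved with Cramer's rule."""
    N = [[0.0] * 3 for _ in range(3)]
    rhs = [0.0] * 3
    for x, y, z in points:
        row = (x, y, 1.0)
        for i in range(3):
            rhs[i] += row[i] * z
            for j in range(3):
                N[i][j] += row[i] * row[j]
    D = det3(N)
    sol = []
    for c in range(3):
        M = [r[:] for r in N]
        for i in range(3):
            M[i][c] = rhs[i]
        sol.append(det3(M) / D)
    return sol  # a, b, c

def smooth_point(p, neighbours):
    """Move p onto the plane fitted to its neighbours (vertical
    projection; the paper uses the scanner ray instead)."""
    a, b, c = fit_plane(neighbours)
    return (p[0], p[1], a * p[0] + b * p[1] + c)
```

Replacing the uniform least squares with distance-dependent weights or an L1 objective, as the paper discusses, changes only the fitting step, not the projection.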
Proceedings Volume Videometrics, Range Imaging, and Applications XII; and Automated Visual Inspection, 87910D (2013) https://doi.org/10.1117/12.2020537
In this paper we focus on integrating multi-resolution data from different range sensors into a complete 3D model. To simplify the process of building a high-resolution model, we propose to create a hierarchical data structure containing measurements collected with both a time-of-flight scanner and a structured light projection system. Two approaches to view integration are compared to determine whether combining data from different range sensors improves the integration process or merely leads to data redundancy. In the first approach, data at higher resolution are mapped onto those at lower resolution according to interest features extracted from the datasets. Interest points are calculated with a Harris detector on the basis of curvature and texture (when available) and described with purpose-designed descriptors. The second approach assumes that only data at the highest resolution are used and no reference is involved in the integration process. Finally, summarizing remarks are formulated based on tests conducted on real 3D measurements.
Proceedings Volume Videometrics, Range Imaging, and Applications XII; and Automated Visual Inspection, 87910F (2013) https://doi.org/10.1117/12.2020514
Within the paper, we present an approach for the alignment of point clouds collected by the RGB-D sensor Microsoft Kinect, using a MEMS IMU and a coarse 3D model derived from a photographed evacuation plan. In this approach, the alignment of the point clouds is based on the sensor pose, which is computed from the analysis of the user's track, the normal vectors of the ground points, and the information extracted from the coarse 3D model. The user's positions are derived from a foot-mounted MEMS IMU based on zero velocity updates, together with the information extracted from the coarse 3D model. We then estimate the accuracy of point cloud alignment using this approach and discuss the applications of this method in indoor modeling of buildings.
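Zero velocity updates rest on detecting stance phases in the foot-mounted IMU signal. The paper's detector is not described here; the sketch below is a generic magnitude-threshold stance detector with illustrative threshold and window values:

```python
def zero_velocity_mask(accel_norms, gravity=9.81, threshold=0.3, window=5):
    """Flag sample i as a stance phase when every accelerometer magnitude
    in a sliding window stays within `threshold` of gravity, i.e. the
    foot is momentarily at rest and velocity can be reset to zero."""
    mask = []
    n = len(accel_norms)
    for i in range(n):
        lo = max(0, i - window // 2)
        hi = min(n, i + window // 2 + 1)
        mask.append(all(abs(a - gravity) < threshold
                        for a in accel_norms[lo:hi]))
    return mask
```

During each flagged interval, the strapdown velocity estimate is reset to zero, which bounds the otherwise quadratic drift of MEMS position integration.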
Image-based Reconstruction, Tracking and Monitoring
Mark R. Shortis, Mehdi Ravanbakskh, Faisal Shaifat, Euan S. Harvey, Ajmal Mian, James W. Seager, Philip F. Culverhouse, Danelle E. Cline, Duane R. Edgington
Proceedings Volume Videometrics, Range Imaging, and Applications XII; and Automated Visual Inspection, 87910G (2013) https://doi.org/10.1117/12.2020941
Underwater stereo-video measurement systems are used widely for counting and measuring fish in aquaculture, fisheries and conservation management. To determine population counts, spatial or temporal frequencies, and age or weight distributions, snout to fork length measurements are captured from the video sequences, most commonly using a point and click process by a human operator. Current research aims to automate the measurement and counting task in order to improve the efficiency of the process and expand the use of stereo-video systems within marine science. A fully automated process will require the detection and identification of candidates for measurement, followed by the snout to fork length measurement, as well as the counting and tracking of fish. This paper presents a review of the techniques used for the detection, identification, measurement, counting and tracking of fish in underwater stereo-video image sequences, including consideration of the changing body shape. The review will analyse the most commonly used approaches, leading to an evaluation of the techniques most likely to be a general solution to the complete process of detection, identification, measurement, counting and tracking.
Proceedings Volume Videometrics, Range Imaging, and Applications XII; and Automated Visual Inspection, 87910H (2013) https://doi.org/10.1117/12.2020464
The article presents an innovative methodology for the 3D surveying and modeling of floating and semi-submerged objects. Photogrammetry is used for surveying both the underwater and emerged parts of the object, and the two surveys are combined by means of special rigid orientation devices. The proposed methodology is first applied to a small pleasure boat (approximately 6 meters long), hence a free-floating case, and then to a large shipwreck (almost 300 meters long) affected by a 52 m long leak at the waterline. The article covers the entire workflow, from camera calibration and data acquisition to the assessment of the achieved accuracy, the realization of the digital 3D model by means of dense image matching procedures, and deformation analyses and comparison with the craft's original plans.
Proceedings Volume Videometrics, Range Imaging, and Applications XII; and Automated Visual Inspection, 87910I (2013) https://doi.org/10.1117/12.2021015
Central to our investigation is the determination of the dynamic behaviour of a highly reflective platform floating on water, as well as the derivation of parameters defining the instantaneous water state. The imaging setup consists of three off-the-shelf DSLR cameras capable of video recording at a 30 Hz frame rate. In order to observe change, the non-rigid and non-diffuse bodies impose the adoption of artificial targeting and custom measurement algorithms. Attention is given to an in-house software tool implemented to carry out point measurement, correspondence search, tracking and outlier detection in the presence of specular reflections and a multimedia scene. A methodology for the retrieval of wave parameters in regular wave conditions is also handled automatically by the software and is discussed. In the context of the performed measurements and achieved results, we point out the extent to which consumer-grade cameras can fulfil the automation and accuracy demands of industrial applications, and the pitfalls entailed. Lastly, we elaborate on the visual representation of computed motion and deformations.
Proceedings Volume Videometrics, Range Imaging, and Applications XII; and Automated Visual Inspection, 87910J (2013) https://doi.org/10.1117/12.2020922
The article reports the development of an off-line, low-cost videogrammetric system for measuring the six degrees of freedom (6DOF) of scaled models in a ship model basin. Sub-millimeter accuracy is required to measure the floating rigid-body movements. To meet this requirement, in-depth analyses, presented in this paper, are performed to choose the most appropriate number of cameras, their configuration, and a proper technique for camera synchronization. The proposed system, composed of three consumer-grade Full HD video cameras, is used to record interlaced video sequences at a frequency of 50 frames per second. A special device that simultaneously emits sounds at a known frequency and flashes an LED is used to introduce a common event for automatic a-posteriori synchronization of the video sequences to within 1 ms. The video sequences are synchronized using matching procedures based on cross-correlation between the audio signals recorded by the camcorders. The ship model carries retro-illuminated (LED) targets whose positions in the ship reference frame are also measured with photogrammetry. The 6DOF of the ship model are estimated on the basis of rigid transformations computed through the image sequences with the tracked active targets. An error analysis is performed under the rigid-body assumption, using the target coordinates known from photogrammetry. The measured synchronization error is used to correct the image trajectories of tracked points. An accuracy improvement of a factor of 5 was observed for the trial with the highest velocity of tracked points (up to 0.35 m/s).
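The audio-based synchronization step amounts to finding the lag that maximizes the cross-correlation between two tracks. The brute-force sketch below illustrates the principle only; a production implementation would work on real audio sample rates and use FFT-based correlation for speed:

```python
def best_lag(a, b, max_lag):
    """Lag (in samples) maximizing the cross-correlation of tracks a and b.
    A positive result means b starts `lag` samples earlier than a."""
    def xcorr(lag):
        s = 0.0
        for i, x in enumerate(a):
            j = i - lag
            if 0 <= j < len(b):
                s += x * b[j]
        return s
    return max(range(-max_lag, max_lag + 1), key=xcorr)
```

Dividing the recovered sample lag by the audio sample rate gives the time offset used to align the video sequences.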
Proceedings Volume Videometrics, Range Imaging, and Applications XII; and Automated Visual Inspection, 87910K (2013) https://doi.org/10.1117/12.2020510
This paper reports on a method for the generation of synthetic image data for almost arbitrary static or dynamic 3D scenarios. Image data generation is based on pre-defined 3D objects, object textures, camera orientation data and their imaging properties. The procedure does not focus on the creation of photo-realistic images under consideration of complex imaging and reflection models as used by common computer graphics programs. In contrast, the method is designed with the main emphasis on geometrically correct synthetic images without radiometric impact. The calculation process includes photogrammetric distortion models, hence cameras with arbitrary geometric imaging characteristics can be applied. Consequently, image sets can be created that are consistent with mathematical photogrammetric models, to be used as sub-pixel accurate data for the assessment of high-precision photogrammetric processing methods. In the first instance the paper describes the process of image simulation under consideration of colour value interpolation, MTF/PSF and so on. Subsequently the geometric quality of the synthetic images is evaluated with ellipse operators. Finally, simulated image sets are used to investigate matching and tracking algorithms as developed at IAPG for deformation measurement in car safety testing.
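A photogrammetric distortion model of the kind such a simulator must apply can be sketched with the widely used Brown parameterisation. This is an assumption about the model class, not the IAPG implementation, and only the first two radial and the decentring terms are shown:

```python
def distort(x, y, k1, k2, p1, p2):
    """Apply Brown radial (k1, k2) and decentring (p1, p2) distortion
    to normalized image coordinates (x, y)."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    xd = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    yd = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return xd, yd
```

Because the simulator, rather than a calibration, chooses these coefficients, the synthetic images come with exact ground-truth distortion against which processing methods can be assessed.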
Proceedings Volume Videometrics, Range Imaging, and Applications XII; and Automated Visual Inspection, 87910L (2013) https://doi.org/10.1117/12.2020425
Recently, a microscopic understanding of individual pedestrian behavior in public space has become significant. Observation data from diverse sensors have increased, and some simulation models of human behavior have made progress. This paper proposes a method for tracking multiple humans in complex situations by integrating the various observation data with simulation. The key concept is that multiple human tracking can be regarded as stochastic process modeling; a data assimilation technique is employed for this purpose. The data assimilation technique consists of observation, forecasting and filtering steps. For the modeling, a state vector is defined as an ellipsoid and its coordinates, representing human positions and shapes. An observation vector is defined as the observations from a stereo video camera, namely color and range information. A system model representing the dynamics of the state vectors is then formulated using a discrete choice model, which decides the next step of each pedestrian stochastically and deals with interaction between pedestrians. An observation model is also formulated for the filtering step: the likelihood of color is modeled on the basis of color histogram matching, and that of range is calculated by comparing the ellipsoidal model with the observed 3D data. The proposed method is applied to data acquired at the ticket gate of a station, and the high performance of the method is confirmed. We compare the results with other models and show the advantage of integrating the behavior model into the tracking method.
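The forecast-filter cycle described above is the skeleton of a bootstrap particle filter. In this sketch the paper's discrete choice system model and histogram/range likelihoods are abstracted into generic callables; everything else is a minimal, assumption-laden illustration:

```python
import math
import random

def particle_filter_step(particles, move, likelihood, rng=random):
    """One forecast-filter cycle: `move` perturbs a state according to
    the system model (e.g. a discrete choice of the next step), and
    `likelihood` scores a predicted state against the observation."""
    predicted = [move(p) for p in particles]          # forecasting
    weights = [likelihood(p) for p in predicted]      # filtering
    total = sum(weights)
    if total == 0.0:
        return predicted  # uninformative observation: keep the forecast
    weights = [w / total for w in weights]
    # multinomial resampling proportional to the weights
    return [predicted[_pick(weights, rng.random())] for _ in predicted]

def _pick(weights, u):
    """Index of the cumulative-weight bin containing u in [0, 1)."""
    c = 0.0
    for i, w in enumerate(weights):
        c += w
        if u <= c:
            return i
    return len(weights) - 1
```

With an identity motion model and a Gaussian likelihood centred on an observation, repeated steps concentrate the particle cloud around the observed state.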
Proceedings Volume Videometrics, Range Imaging, and Applications XII; and Automated Visual Inspection, 87910M (2013) https://doi.org/10.1117/12.2021037
Manufacturers often express the performance of a 3D imaging device in various non-uniform ways, owing to the lack of internationally recognized standard requirements for metrological parameters able to identify the capability of capturing a real scene. For this reason, several national and international organizations have been developing protocols for verifying such performance over the last ten years. Ranging from VDI/VDE 2634, published by the Association of German Engineers and oriented to the world of mechanical 3D measurements (triangulation-based devices), to the ASTM technical committee E57, working also on laser systems based on direct range detection (TOF, phase shift, FM-CW, flash LADAR), this paper presents the state of the art in the characterization of active range devices, with special emphasis on measurement uncertainty, accuracy and resolution.
Most of these protocols are based on special objects whose shape and size are certified with a known level of accuracy. By capturing the 3D shape of such objects with a range device, a comparison between the measured points and the theoretical shape they should represent is possible. The actual deviations can be analyzed directly, or derived parameters can be obtained (e.g. angles between planes, distances between the barycenters of rigidly connected spheres, frequency domain parameters, etc.).
This paper shows theoretical aspects and experimental results of some novel characterization methods applied to different categories of active 3D imaging devices, based on both triangulation and direct range detection principles.
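One of the derived parameters mentioned above, the angle between planes, is straightforward once planes have been fitted to the measured points. This is a generic geometric helper, not a procedure from any particular protocol; it assumes unit-length plane normals:

```python
import math

def angle_between_planes(n1, n2):
    """Angle in degrees between two planes, given their unit normals.
    The absolute value of the dot product makes the result independent
    of normal orientation (0..90 degrees)."""
    dot = abs(sum(a * b for a, b in zip(n1, n2)))
    return math.degrees(math.acos(min(1.0, dot)))
```

Comparing such an angle against the certified value of the artefact yields one of the deviation figures a characterization protocol reports.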
Proceedings Volume Videometrics, Range Imaging, and Applications XII; and Automated Visual Inspection, 87910N (2013) https://doi.org/10.1117/12.2020299
Exposed natural surfaces such as landslides, stream beds and fault scarps can provide us with valuable insight into
natural processes and their interaction with the Earth’s surface. By studying the texture left behind on geological media,
we can improve our models for natural processes and our estimation of risk. Research on the surface morphology of
natural materials has been substantially aided in the past decade through the application of remote geodetic data
collection methods including Light Detection and Ranging (LiDAR) which provides high resolution surface geometry
information. Terrestrial LiDAR scanning (TLS) instruments are particularly suited to geological targets due to portability
and high measurement rates. It has long been understood that natural surface roughness is a scale variant phenomenon.
Therefore, accurate modeling of the processes responsible for its generation relies upon accurate morphological
information at the scales under study, without contamination of the data by other morphological scales. Empirical
analysis of the application of TLS to the task of natural surface roughness estimation has indicated that the standard
deviation of surface heights orthogonal to a local planar datum, a commonly employed descriptor of roughness, lacks
stationarity across changes in scan parameters and target scene geometry. A scale-dependent bias resulting from underestimation of surface asperity heights has been found to reduce measured roughness by over 20% of its expected value. To minimize the biases imposed on estimated roughness values by scale-dependent aspects of the TLS data collection process, multiresolution analysis is applied. A two-dimensional discrete wavelet transform extracts surface
height information present at distinct scales within the data. Roughness is estimated from the reconstructed dataset, with
high frequency noise removed and low frequency surface topography preserved. Using this approach, results show that
surfaces may be compared on the basis of smallest acceptable common textural wavelength and roughness at scales
appropriate to the phenomena being modeled can be isolated and estimated with enhanced accuracy.
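The roughness estimate described above — the standard deviation of heights about a local planar datum, after wavelet suppression of fine-scale noise — can be sketched as follows. This is a minimal illustration on synthetic data; a single-level Haar approximation (equivalent to 2×2 block averaging) stands in for the paper's full multiresolution analysis:

```python
import numpy as np

def plane_detrend(z):
    """Remove a best-fit plane (the local planar datum) from a height grid."""
    ny, nx = z.shape
    y, x = np.mgrid[:ny, :nx]
    A = np.c_[x.ravel(), y.ravel(), np.ones(z.size)]
    coef, *_ = np.linalg.lstsq(A, z.ravel(), rcond=None)
    return z - (A @ coef).reshape(z.shape)

def haar_denoise(z):
    """Single-level 2D Haar approximation: zeroing the finest-scale detail
    coefficients and reconstructing equals 2x2 block averaging."""
    blocks = z.reshape(z.shape[0] // 2, 2, z.shape[1] // 2, 2)
    return np.repeat(np.repeat(blocks.mean(axis=(1, 3)), 2, 0), 2, 1)

rng = np.random.default_rng(1)
y, x = np.mgrid[:64, :64]
surface = 0.02 * x + 0.01 * y + np.sin(x / 6.0)     # tilted datum + topography
noisy = surface + rng.normal(scale=0.3, size=surface.shape)

raw_rough = plane_detrend(noisy).std()                # contaminated by noise
den_rough = plane_detrend(haar_denoise(noisy)).std()  # fine-scale noise suppressed
```

Averaging the finest scale out reduces the noise contribution while leaving the longer-wavelength topography (the roughness of interest) largely intact.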
Proceedings Volume Videometrics, Range Imaging, and Applications XII; and Automated Visual Inspection, 87910P (2013) https://doi.org/10.1117/12.2020500
Fringe projection is an established method for measuring the 3D structure of macroscopic objects. To achieve both high accuracy and robustness, a certain number of images with pairwise different projection patterns is required. Over this sequence, each 3D object point must correspond to the same image point at every instant. This condition no longer holds for measurements under motion. One way to solve this problem is to restore the static situation: the acquired camera images have to be realigned, and the degree of fringe shift has to be estimated. A further variable is change in lighting. Compensating for these variations is a difficult task that can only be realized under several assumptions, but the lighting change has to be at least approximately determined and integrated into the 3D reconstruction process. We propose a method to estimate these lighting changes for each camera pixel with respect to its neighbors at each point in time. The algorithms were validated on simulated data, in particular with rotating measurement objects. For translational motion, lighting changes have no severe effect in our applications. Taken together, without using high-speed hardware, our method yields a motion-compensated dense 3D point cloud that is suitable for three-dimensional measurement of moving objects or of setups with sensor systems in motion.
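Static fringe projection, which the motion compensation above seeks to restore, rests on the standard N-step phase-shifting evaluation. A minimal sketch of that baseline (not the authors' motion-compensation algorithm):

```python
import numpy as np

def phase_from_shifts(images):
    """Recover the wrapped phase from N phase-shifted fringe images
    I_n = A + B*cos(phi + 2*pi*n/N) via the standard N-step formula."""
    N = len(images)
    delta = 2 * np.pi * np.arange(N) / N
    num = -sum(I * np.sin(d) for I, d in zip(images, delta))
    den = sum(I * np.cos(d) for I, d in zip(images, delta))
    return np.arctan2(num, den)          # wrapped to (-pi, pi]

# Synthetic 4-step sequence over a phase ramp
phi_true = np.linspace(-np.pi + 0.1, np.pi - 0.1, 200)
imgs = [5.0 + 2.0 * np.cos(phi_true + 2 * np.pi * n / 4) for n in range(4)]
phi = phase_from_shifts(imgs)
```

Any motion between exposures breaks the assumption that `phi_true` is the same in every image, which is exactly the problem the abstract addresses.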
Proceedings Volume Videometrics, Range Imaging, and Applications XII; and Automated Visual Inspection, 87910Q (2013) https://doi.org/10.1117/12.2021461
This work investigates the possibility of using a hexapod system for optical microscopy investigations and measurements. An appropriate hexapod stage has been developed, calibrated, and used for several different optical microscopy applications. The construction of the stage is based on the classic Stewart platform and thus represents a parallel robot with six degrees of freedom. Software transforms the three position coordinates of the moving plate and the three Euler angles into the positions, velocities and accelerations of the plate motion. An embedded microcontroller implements the motion plan and the PID controller regulating the kinematics. In contrast to hexapods available on the market, the proposed solution offers lower precision but is significantly cheaper and simpler to maintain. The repeatability obtained with the current implementation is 0.05 mm and 0.001 rad. A specialized DSP-based video processing engine is used both for feedback computation and for application-specific image processing in real time. To verify the concept, several applications have been developed for specific tasks and used for specific measurements.
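The pose-to-actuator transformation mentioned above is, for a classic Stewart platform, a closed-form inverse kinematics: each leg length is the distance between its base anchor and the rigidly transformed plate anchor. A sketch with a hypothetical hexagonal anchor layout and an assumed Z-Y-X Euler convention (the actual stage geometry is not given in the abstract):

```python
import numpy as np

def rot_zyx(yaw, pitch, roll):
    """Rotation matrix from Euler angles (Z-Y-X convention, an assumption)."""
    cz, sz = np.cos(yaw), np.sin(yaw)
    cy, sy = np.cos(pitch), np.sin(pitch)
    cx, sx = np.cos(roll), np.sin(roll)
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    return Rz @ Ry @ Rx

def leg_lengths(base_pts, plate_pts, t, R):
    """Stewart-platform inverse kinematics: each actuator length is the
    distance from its base anchor to the transformed plate anchor."""
    return np.linalg.norm(t + plate_pts @ R.T - base_pts, axis=1)

# Hypothetical hexagonal anchor layout (not the authors' geometry), units mm
ang_b = np.deg2rad([0, 60, 120, 180, 240, 300])
ang_p = ang_b + np.deg2rad(30)
base  = np.c_[100 * np.cos(ang_b), 100 * np.sin(ang_b), np.zeros(6)]
plate = np.c_[60 * np.cos(ang_p), 60 * np.sin(ang_p), np.zeros(6)]

L = leg_lengths(base, plate, np.array([0.0, 0.0, 120.0]),
                rot_zyx(0.0, 0.0, np.deg2rad(2.0)))
```

Because the inverse kinematics is closed-form, the embedded controller only needs to evaluate six norms per pose update; the forward problem is the hard one for parallel robots.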
Proceedings Volume Videometrics, Range Imaging, and Applications XII; and Automated Visual Inspection, 87910R (2013) https://doi.org/10.1117/12.2021021
In this work a single-pixel time-of-flight (TOF) range finder is presented. The sensor is fabricated in a 0.35 μm 1P4M CMOS process, occupying an area of 45 × 60 μm2 at ~50% fill factor. It takes advantage of the integrated PIN photodiode, representing, to the best knowledge of the author, the first reported TOF device realized in this technology with a PIN detector. The measurement results show a standard deviation of 1 cm for a total integration time of 2.2 ms and a received optical power of 10 nW. Furthermore, the maximal measured integration time per single phase step is slightly below 1 ms, an improvement by a factor of 40 over previous work using a similar approach. As the measurements show, the influence of background light on the measured distance can be neglected even if the DC light is a factor of 600 larger than the modulation signal.
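The continuous-wave phase evaluation underlying such a sensor is commonly the four-phase algorithm: correlation samples taken at 0°, 90°, 180° and 270° phase offsets yield the wrapped phase and hence the distance. A generic sketch (conventions assumed, not taken from the paper):

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def tof_distance(a0, a1, a2, a3, f_mod):
    """Distance from four correlation samples at 0/90/180/270 degree phase
    offsets (standard continuous-wave ToF four-phase algorithm)."""
    phi = np.arctan2(a1 - a3, a0 - a2) % (2 * np.pi)   # wrapped phase
    return C * phi / (4 * np.pi * f_mod)

# Synthetic samples for a target at 1.5 m with 20 MHz modulation
f_mod, d_true = 20e6, 1.5
phi = 4 * np.pi * f_mod * d_true / C
samples = [10 + 4 * np.cos(phi - n * np.pi / 2) for n in range(4)]
d = tof_distance(*samples, f_mod)
```

The differences a0−a2 and a1−a3 cancel the DC (background-light) component, which is why strong ambient light mainly raises shot noise rather than biasing the distance.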
J. M. Parkhurst, G. J. Price, P. J. Sharrock, J. Stratford, C. J. Moore
Proceedings Volume Videometrics, Range Imaging, and Applications XII; and Automated Visual Inspection, 87910T (2013) https://doi.org/10.1117/12.2021533
Patient motion during treatment is well understood as a prime factor limiting radiotherapy success, with the risks most pronounced in modern safety critical therapies promising the greatest benefit. In this paper we describe a real-time visual feedback device designed to help patients to actively manage their body position, pose and motion. In addition to technical device details, we present preliminary trial results showing that its use enables volunteers to successfully manage their respiratory motion. The device enables patients to view their live body surface measurements relative to a prior reference, operating on the concept that co-operative engagement with patients will both improve geometric conformance and remove their perception of isolation, in turn easing stress related motion. The device is driven by a real-time wide field optical sensor system developed at The Christie. Feedback is delivered through three intuitive visualization modes of hierarchically increasing display complexity. The device can be used with any suitable display technology; in the presented study we use both personal video glasses and a standard LCD projector. The performance characteristics of the system were measured, with the frame rate, throughput and latency of the feedback device being 22.4 fps, 47.0 Mbps, 109.8 ms, and 13.7 fps, 86.4 Mbps, 119.1 ms for single and three-channel modes respectively. The pilot study, using ten healthy volunteers over three sessions, shows that the use of visual feedback resulted in both a reduction in the participants’ respiratory amplitude, and a decrease in their overall body motion variability.
Proceedings Volume Videometrics, Range Imaging, and Applications XII; and Automated Visual Inspection, 87910U (2013) https://doi.org/10.1117/12.2021006
Most existing face recognition systems are based on two-dimensional images, and the quality of recognition is rather high for frontal images of a face. For other kinds of images, however, the quality decreases significantly. For such systems to operate correctly, the effect of a change in the person's posture (the camera angle) must be compensated. There are methods for transforming a 2D image of a person to a canonical orientation, but their efficiency depends on the accuracy with which specific anthropometric points are determined, and problems can arise when the person's face is partly occluded. Another approach is to keep a set of images of the person at different view angles for further processing, but the need to store and process a large number of two-dimensional images makes this method considerably time-consuming. The proposed technique uses a stereo system for fast generation of a 3D model of the person's face, and obtains a face image in a given orientation from this 3D model. Real-time performance is provided by implementing graph-cut methods for 3D reconstruction of the face surface and by applying the CUDA software library for parallel computation.
Proceedings Volume Videometrics, Range Imaging, and Applications XII; and Automated Visual Inspection, 87910V (2013) https://doi.org/10.1117/12.2020533
With electroencephalography (EEG), a person’s brain activity can be monitored over time and sources of activity localized. With this information, brain regions showing pathological activity, such as epileptic spikes, can be delineated. In cases of severe drug-resistant epilepsy, surgical resection of these brain regions may be the only treatment option. This requires a precise localization of the responsible seizure generators. They can be reconstructed from EEG data when the electrode positions are known. The standard method employs a "digitization pen" and has severe drawbacks: It is time consuming, the result is user-dependent, and the patient has to hold still. We present a novel method which overcomes these drawbacks. It is based on the optical "Flying Triangulation" (FlyTri) sensor which allows a motion-robust acquisition of precise 3D data. To compare the two methods, the electrode positions were determined with each method for a real-sized head model with EEG electrodes and their deviation to the ground-truth data calculated. The standard deviation for the current method was 3.39 mm while it was 0.98 mm for the new method. The influence of these results on the final EEG source localization was investigated by simulating EEG data. The digitization pen result deviates substantially from the true source location and time series. In contrast, the FlyTri result agrees with the original information. Our findings suggest that FlyTri might become a valuable tool in the field of medical brain research, because of its improved precision and contactless handling. Future applications might include co-registration of multimodal information.
Proceedings Volume Videometrics, Range Imaging, and Applications XII; and Automated Visual Inspection, 87910X (2013) https://doi.org/10.1117/12.2020000
Time-of-flight-based three-dimensional cameras are the state-of-the-art imaging modality for acquiring rapid 3D position information. Unlike any other technology on the market, this technology can deliver 2D images co-located with distance information at every pixel location, without any shadows. Recent technological advancements have begun miniaturizing such technology to be more suitable for laptops and eventually cellphones. This paper explores the systematic errors inherent to the new PMD CamBoard nano camera. As the world's most compact 3D time-of-flight camera, it has applications in a wide domain, such as gesture control and facial recognition. To model the systematic errors, a one-step point-based and plane-based bundle adjustment method is used. It simultaneously estimates all systematic errors and unknown parameters by minimizing the residuals of image measurements, distance measurements, and amplitude measurements in a least-squares sense. The presented self-calibration method only requires a standard checkerboard target on a flat plane, making it a suitable candidate for on-site calibration. In addition, because distances are only constrained to lie on a plane, the raw pixel-by-pixel distance observations can be used. This makes it possible to increase the number of distance observations in the adjustment with ease. The results from this paper indicate that amplitude-dependent range errors are the dominant error source for the nano under low-scattering imaging configurations. After user self-calibration, the RMSE of the range observations was reduced by almost 50%, delivering range measurements at a precision of approximately 2.5 cm within a 70 cm interval.
Proceedings Volume Videometrics, Range Imaging, and Applications XII; and Automated Visual Inspection, 87910Y (2013) https://doi.org/10.1117/12.2020493
Time-of-flight (TOF) 3D cameras determine distance information by means of a propagation delay measurement. The delay value is acquired by correlating the sent and received continuous-wave signals at discrete phase delay steps. To reduce the measurement time as well as the resources required for signal processing, the number of phase steps can be decreased. However, such a change gives rise to a significant systematic, distance-dependent distance error. In the present publication we investigate this phase-dependent error systematically by means of a fiber-based measurement setup. Furthermore, the phase shift is varied with an electrical delay line rather than by moving an object in front of the camera. This procedure allows the phase-dependent error to be investigated in isolation from other error sources, such as the amplitude-dependent error. In other publications this error is corrected by means of a look-up table stored in a memory device. In this paper we demonstrate an analytical correction method that dramatically reduces the required memory size. For four phase steps, this approach reduces the error by 89.4% to 13.5 mm at a modulation frequency of 12.5 MHz; for 20.0 MHz, a reduction of 86.8% to 11.5 mm could be achieved.
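The distance-dependent "wiggling" error discussed above can be reproduced in simulation: with square-wave signals the correlation function is triangular, and its odd harmonics alias into the first DFT bin when only four phase steps are sampled. The sketch below assumes that simple triangular-correlation model (the paper's fiber-based setup and its analytical correction are not reproduced here):

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def est_phase(corr_samples):
    """Phase estimate from N equally spaced correlation samples via the
    first DFT bin (the standard N-phase-step evaluation)."""
    N = len(corr_samples)
    n = np.arange(N)
    z = np.sum(corr_samples * np.exp(-2j * np.pi * n / N))
    return np.angle(z) % (2 * np.pi)

def triangle(psi):
    """Triangular correlation of two square waves (period 2*pi, peak at 0)."""
    return 2 * np.abs(((psi / (2 * np.pi)) % 1.0) - 0.5)

f_mod, N = 12.5e6, 4
true_phi = np.linspace(0, 2 * np.pi, 400, endpoint=False)
err = []
for phi in true_phi:
    samples = triangle(phi + 2 * np.pi * np.arange(N) / N)
    dphi = (est_phase(samples) - phi + np.pi) % (2 * np.pi) - np.pi
    err.append(C * dphi / (4 * np.pi * f_mod))  # phase error -> distance error
err = np.array(err)  # periodic, distance-dependent "wiggling" error
```

Because the error is a smooth periodic function of the true phase, it can be modeled analytically instead of being tabulated, which is the memory saving the abstract reports.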
Proceedings Volume Videometrics, Range Imaging, and Applications XII; and Automated Visual Inspection, 87910Z (2013) https://doi.org/10.1117/12.2021002
Correlation-based time-of-flight systems suffer from a temperature-dependent distance measurement error induced by the illumination source of the system. A change in the temperature of the illumination source changes the bandwidth of the light emitters used, which are typically light-emitting diodes (LEDs). For typical illumination sources this can result in a drift of the measured distance in the range of ~20 cm, especially during the heat-up phase. Because the bandwidth of the LEDs changes, the shape of the output signal changes as well. In this paper we propose a method to correct this temperature-dependent error by examining this change in the shape of the output signal. Our measurements show that the presented approach is capable of correcting the temperature-dependent error over a large range of operation without the need for additional hardware.
Poster Session for Videometrics, Range Imaging, and Applications XII
Proceedings Volume Videometrics, Range Imaging, and Applications XII; and Automated Visual Inspection, 879110 (2013) https://doi.org/10.1117/12.2020321
Many large-scale plants are currently being planned or constructed worldwide. Clients require contractors to minimize construction costs and work periods. We are striving to streamline construction work and thus reduce construction costs by focusing on simplifying installation work, shortening installation periods, standardizing all on-site work, and improving quality and safety. When pipes are installed at large-scale plant construction sites, pipes for adjustment called final spools are sometimes inserted between facilities that have been installed and piping that has been fastened. They are delivered to sites in a state that allows for on-site processing. After delivery, on-site matching and adjustment of the amount of processing based on the result of the on-site matching are repeated, and then the final spools are fitted into the spaces between facilities and piping. We have researched and developed a virtual fitting system to streamline fitting work. This paper describes the details of this system and the results of its application.
Proceedings Volume Videometrics, Range Imaging, and Applications XII; and Automated Visual Inspection, 879112 (2013) https://doi.org/10.1117/12.2020644
The complete reconstruction of a historical object is a complicated process consisting of several partial steps. One of these steps is acquiring high-quality data for the preparation of the project documentation. If such data are not available from previous periods, it is necessary to carry out a detailed measurement of the object and to create the required drawing documentation. A new measurement of the object brings, besides its costs, several advantages, such as content and form of the drawings that exactly match the requirements, together with high accuracy. The paper describes the measurement of a Baroque church by laser scanning extended by terrestrial and aerial photogrammetry. It deals with processing the measured data and creating the final outputs: 2D drawing documentation, orthophotos and a 3D model. Attention is focused on the problematic parts, such as the interconnection of measurement data acquired by different technologies, the creation of orthophotos, and the creation of the detailed combined 3D model of the church exterior. The results of this work were used in preparing the planned reconstruction of the object.
Proceedings Volume Videometrics, Range Imaging, and Applications XII; and Automated Visual Inspection, 879113 (2013) https://doi.org/10.1117/12.2020682
Unmanned aerial mapping has become more and more popular in recent years, mostly because of advances in 3D reconstruction from images and its affordability. In some cases, the results of 3D reconstruction from images come close to those of laser scanning in terms of resolution and accuracy. However, mobile laser scanning still has advantages in reliability and in the ease of processing the measured data. For this reason we have chosen an airship as a carrier capable of carrying a laser scanning unit. Most laser scanners used in mobile mapping work in profiler (2D, planar) mode. Because of its favorable properties, we decided to modify the Sick LD-LRS1000 laser scanner for scanning in a conical pattern. The realization of this modification is described in the paper.
Proceedings Volume Videometrics, Range Imaging, and Applications XII; and Automated Visual Inspection, 879114 (2013) https://doi.org/10.1117/12.2020974
In this paper we investigate the determination of relative camera orientation in videos from a time-of-flight (ToF) range imaging camera. The task of estimating the relative orientation is realized by fusing range flow and optical flow constraints, which integrates the range and intensity channels in a single framework. We demonstrate our approach on videos from a ToF camera involving translational and rotational camera motion and compare the results with ground-truth data. Furthermore, we distinguish camera motion from an independently moving object using a robust adjustment.
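The fusion of optical flow and range flow presumably builds on the standard per-pixel constraints; in their usual textbook forms (a sketch, not necessarily the authors' exact formulation), with image flow $(u, v)$ and out-of-plane velocity $w$:

$$I_x u + I_y v + I_t = 0 \qquad \text{(brightness constancy)}$$

$$Z_x u + Z_y v + Z_t = w \qquad \text{(range flow constraint)}$$

Here $I$ is the intensity channel and $Z$ the range channel of the ToF camera; solving both constraints jointly at each pixel is what couples the two channels in a single estimation framework.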
Proceedings Volume Videometrics, Range Imaging, and Applications XII; and Automated Visual Inspection, 879116 (2013) https://doi.org/10.1117/12.2021012
Accurate calibration of a 3D profile measurement system based on structured light projection is important for precision measurement; however, system calibration is usually complicated and time-consuming. An improved, fast method is proposed to calibrate the measurement system. First, an LCD monitor serving as the calibration plate displays a computer-generated chessboard pattern, and the camera captures one image. The LCD monitor then displays a white pattern, the projector projects horizontal and vertical color-encoded fringes onto the monitor, and the camera collects two images. A phase-shifting algorithm is used to establish a highly accurate correspondence between camera pixels and projector pixels, from which projector images are generated. Next, the LCD monitor is moved to eight other positions to obtain the camera and projector image sets used for camera and projector calibration, respectively, with Zhang's calibration method. Compared with common techniques that use expensive equipment such as two or three orthogonal plates, the LCD monitor is easy to use and flexible, and experiments show that the calibration accuracy is improved by a factor of 5. In comparison with the traditional projector calibration method, this method decreases the number of captured images at each position from 8 to 2 and increases the processing speed. By combining camera calibration and projector calibration, the complex calculation process of integrating the two traditional calibrations can be simplified. Experiments based on the proposed technique have yielded good results.
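Once horizontal and vertical absolute phase maps are available, the camera-to-projector correspondence is a direct rescaling of phase to projector pixel coordinates. A minimal sketch under assumed conventions (the fringe counts and projector resolution below are illustrative, not the paper's values):

```python
import numpy as np

def cam_to_proj(abs_phase_x, abs_phase_y, proj_w, proj_h,
                periods_x, periods_y):
    """Map each camera pixel to projector coordinates from absolute
    (unwrapped) phases of vertical and horizontal fringe patterns:
    a full 2*pi*periods sweep spans the projector width/height."""
    xp = abs_phase_x / (2 * np.pi * periods_x) * proj_w
    yp = abs_phase_y / (2 * np.pi * periods_y) * proj_h
    return xp, yp

# Synthetic check: a camera pixel whose phase sits at half the sweep
# should map to the projector's center
xp, yp = cam_to_proj(np.array([np.pi * 16]), np.array([np.pi * 10]),
                     proj_w=1024, proj_h=768, periods_x=16, periods_y=10)
```

These per-pixel projector coordinates are what turn the camera's chessboard views into synthetic "projector images", letting the same Zhang-style calibration run for both devices.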
Proceedings Volume Videometrics, Range Imaging, and Applications XII; and Automated Visual Inspection, 879117 (2013) https://doi.org/10.1117/12.2018913
In this work a miniature photometric stereo system is presented, targeting the three-dimensional structural reconstruction of various fabric types. It is a supporting module for a robot system attempting to solve the well-known "laundry problem". The miniature device has been designed for mounting onto the robot gripper. It is composed of a low-cost off-the-shelf camera, operating in macro mode, and eight light-emitting diodes. Synchronization between image acquisition and lighting direction is controlled by an Arduino Nano board and software triggering, and ambient light is blocked by a cylindrical enclosure. The direction of illumination is recovered by locating the reflection, i.e. the brightest point, on a mirror sphere, while a flat-fielding process compensates for non-uniform illumination. For the evaluation of this prototype, the classical photometric stereo methodology has been used. The preliminary results on a large number of textiles are very promising for the successful integration of the miniature module into the robot system. The required interaction with the robot is implemented through the estimation of Brenner's focus measure; this metric successfully assesses focus quality with reduced time requirements compared with other well-accepted focus metrics. Beyond the target application, the small size of the developed system makes it a very promising candidate for applications with space restrictions, such as quality control in industrial production lines or object recognition based on structural information, and for applications where ease of operation and light weight are required, as in the biomedical field and especially in dermatology.
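The classical photometric stereo evaluation mentioned above solves, per pixel, a small least-squares system relating observed intensities to known light directions, while Brenner's focus measure is a simple gradient-energy score. A sketch with synthetic data and hypothetical LED directions (not the device's actual geometry):

```python
import numpy as np

def photometric_stereo(intensities, light_dirs):
    """Classical Lambertian photometric stereo: per pixel solve
    I = L @ (rho * n) in the least-squares sense; the unit part of the
    solution is the normal, its magnitude the albedo."""
    L = np.asarray(light_dirs, float)                   # (k, 3) unit vectors
    I = np.stack([im.ravel() for im in intensities])    # (k, npix)
    G, *_ = np.linalg.lstsq(L, I, rcond=None)           # (3, npix)
    albedo = np.linalg.norm(G, axis=0)
    normals = (G / np.maximum(albedo, 1e-12)).reshape(3, *intensities[0].shape)
    return normals, albedo.reshape(intensities[0].shape)

def brenner_focus(img):
    """Brenner's focus measure: sum of squared differences between pixels
    two columns apart; sharper images score higher."""
    return float(np.sum((img[:, 2:] - img[:, :-2]) ** 2))

# Synthetic flat tilted surface under four hypothetical LED directions
n_true = np.array([0.2, -0.1, 0.97]); n_true /= np.linalg.norm(n_true)
dirs = np.array([[0.5, 0, 0.866], [-0.5, 0, 0.866],
                 [0, 0.5, 0.866], [0, -0.5, 0.866]])
imgs = [np.full((8, 8), max(d @ n_true, 0.0)) for d in dirs]
normals, albedo = photometric_stereo(imgs, dirs)
```

With more than three lights, as in the eight-LED device, the per-pixel system is overdetermined, which makes the recovered normals robust to noise in individual exposures.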
Branislav Holländer, Svorad Štolc, Reinhold Huber-Mörk
Proceedings Volume Videometrics, Range Imaging, and Applications XII; and Automated Visual Inspection, 879118 (2013) https://doi.org/10.1117/12.2019464
We demonstrate the design, setup, and results of a line-scan stereo image acquisition system using a single area-scan sensor, a single lens, and two planar mirrors attached to the acquisition device. The acquired object moves relative to the acquisition device and is observed under three different angles at the same time. Depending on the specific configuration, it is possible to observe the object under a straight view (i.e., looking along the optical axis) and two skewed views. The relative motion between the object and the acquisition device automatically fulfills the epipolar constraint in stereo vision. The choice of lines to be extracted from the CMOS sensor depends on various factors such as the number, position and size of the mirrors, the optical and sensor configuration, and other application-specific parameters such as the desired depth resolution. The acquisition setup presented in this paper is suitable for the inspection of printed matter, small parts, and security features such as optically variable devices and holograms. The image processing pipeline applied to the extracted sensor lines is explained in detail. The effective depth resolution achieved by the presented system, assembled from off-the-shelf components only, is approximately equal to the spatial resolution and can be smoothly controlled by changing the positions and angles of the mirrors. The actual performance of the device is demonstrated on a 3D-printed ground-truth object as well as on two real-world examples: (i) the EUR-100 banknote, a high-quality printed matter, and (ii) the hologram on the EUR-50 banknote, an optically variable device.
Proceedings Volume Videometrics, Range Imaging, and Applications XII; and Automated Visual Inspection, 87911B (2013) https://doi.org/10.1117/12.2020517
Reconstruction of specular objects from camera images is a notoriously difficult task. We propose a novel approach to recovering specular surfaces based on a probabilistic voxel carving model. The energy-functional formulation of the problem might eventually enable the fusion of all sorts of available information, such as a priori shape hypotheses, knowledge of motion parameters, or the reflected environment. The model and the specular energy components are discussed in detail. As a first application, we demonstrate the fusion of multiple regularization-free deflectometric datasets recorded from different camera positions.
Proceedings Volume Videometrics, Range Imaging, and Applications XII; and Automated Visual Inspection, 87911C (2013) https://doi.org/10.1117/12.2018999
The inspection of the surface quality of optical components is an essential characterization method for high-power laser applications. We report on two different mapping methods based on the measurement of Total Scattering (TS) and phase contrast microscopy. The mappings are used to determine the defect density distribution of optically flat surfaces. The mathematical procedure relating data points to a defect area and to the form of objects is illustrated in detail. The involved differential operators and the optimized subroutines adapted to a large number of defects are outlined. For the decision about the form of the objects, a parameter set including the “fill factor”, “edge ratio”, and “polar distance” is discussed with respect to its applicability to the basic forms. The calculated distribution is expressed in terms of affine probability compared to the basic forms. The extracted size and form distribution function of the defects is presented for selected high-reflective and anti-reflective coating samples.
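As a toy illustration of two of the form parameters named above, the sketch below computes a “fill factor” and an “edge ratio” for a binary defect blob. The abstract does not state the exact definitions, so the formulas used here (area over bounding-box area, boundary pixels over area) and the synthetic blob are assumptions:

```python
import numpy as np

# Synthetic binary defect blob: a 5x3 rectangle inside a 9x9 image.
blob = np.zeros((9, 9), dtype=bool)
blob[2:7, 3:6] = True

ys, xs = np.nonzero(blob)
area = blob.sum()
bbox_area = (np.ptp(ys) + 1) * (np.ptp(xs) + 1)
fill_factor = area / bbox_area               # 1.0 for a rectangle, assumed definition

# Edge pixels: blob pixels with at least one 4-neighbour outside the blob.
padded = np.pad(blob, 1)
interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
            padded[1:-1, :-2] & padded[1:-1, 2:])
edge = blob & ~interior
edge_ratio = edge.sum() / area               # assumed definition

print(fill_factor, round(float(edge_ratio), 3))   # → 1.0 0.8
```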
Proceedings Volume Videometrics, Range Imaging, and Applications XII; and Automated Visual Inspection, 87911D (2013) https://doi.org/10.1117/12.2020568
A continuous increase in production speed and manufacturing precision raises the demand for automated detection of small image features on rapidly moving surfaces. An example is wire drawing, where kilometers of cylindrical metal surfaces moving at 10 m/s have to be inspected in real time for defects such as scratches, dents, grooves, or chatter marks with a lateral size of 100 μm. Up to now, complex eddy current systems have been used for quality control instead of line cameras, because the ratio between lateral feature size and surface speed is limited by the data transport between camera and computer. This bottleneck is avoided by “cellular neural network” (CNN) cameras, which enable image processing directly on the camera chip. This article reports results achieved with a demonstrator based on this novel analogue camera–computer system. The results show that the computational speed and accuracy of the analogue computer system are sufficient to detect and discriminate the different types of defects. Area images with 176 x 144 pixels are acquired and evaluated in real time at frame rates of 4 to 10 kHz, depending on the number of defects to be detected. These frame rates correspond to equivalent line rates of 360 to 880 kHz, far beyond the capability of available line cameras. Using the relation between lateral feature size and surface speed as a figure of merit, the CNN-based system outperforms conventional image processing systems by an order of magnitude.
Proceedings Volume Videometrics, Range Imaging, and Applications XII; and Automated Visual Inspection, 87911E (2013) https://doi.org/10.1117/12.2021660
In optical inspection systems such as automated bulk sorters, hyperspectral images in the near-infrared range are increasingly used for the identification and classification of materials. However, the possible applications are limited by the coarse spatial resolution and low frame rate. By adding a multispectral image with higher spatial resolution, the missing spatial information can be acquired. In this paper, a method is proposed to fuse the hyperspectral and multispectral images by jointly unmixing the image signals. To this end, the linear mixing model, well known from remote sensing applications, is extended to describe the spatial mixing of signals originating from different locations. Different spectral unmixing algorithms can be used to solve the problem. The benefit of the additional sensor and the properties of the unmixing process are presented and evaluated, as well as the quality of the unmixing results obtained with different algorithms. With the proposed extended mixing model, an improved result can be achieved, as shown with different examples.
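The extended model builds on the standard linear mixing model, which can be sketched as follows: a measured spectrum is a nonnegative, sum-to-one combination of endmember (pure-material) spectra. The sketch below uses synthetic spectra, SciPy's NNLS solver, and a common sum-to-one augmentation trick; it illustrates the baseline model only, not the authors' extended algorithm:

```python
import numpy as np
from scipy.optimize import nnls

# Linear mixing model: y = E @ a, where columns of E are endmember spectra
# and a holds nonnegative abundances that sum to one. Spectra are synthetic.
rng = np.random.default_rng(0)
E = rng.uniform(0.1, 1.0, size=(8, 3))       # 8 spectral bands, 3 materials

a_true = np.array([0.6, 0.3, 0.1])           # ground-truth abundances
y = E @ a_true                                # noise-free mixed pixel

# Fully constrained unmixing: enforce the sum-to-one constraint by
# appending a weighted row of ones to E; nonnegativity comes from NNLS.
delta = 10.0                                  # weight of the sum-to-one row
E_aug = np.vstack([E, delta * np.ones((1, 3))])
y_aug = np.append(y, delta)
a_est, _ = nnls(E_aug, y_aug)

print(np.round(a_est, 3))                     # → [0.6 0.3 0.1]
```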
Thomas Stephan, Peter Frühberger, Stefan Werling, Michael Heizmann
Proceedings Volume Videometrics, Range Imaging, and Applications XII; and Automated Visual Inspection, 87911F (2013) https://doi.org/10.1117/12.2021990
The inspection of offshore parks, dam walls, and other underwater infrastructure is expensive and time consuming, because such constructions must be inspected manually by divers. Underwater buildings have to be examined visually to find small cracks, spallings, or other deficiencies. The automation of underwater inspection depends on established waterproof imaging systems. Most underwater imaging systems are based on acoustic sensors (sonar). The disadvantage of such an acoustic system is the loss of the complete visual impression: all information embedded in texture and surface reflectance gets lost. Acoustic sensors are therefore mostly insufficient for this kind of visual inspection task. Imaging systems based on optical sensors offer enormous potential for underwater applications, ranging from the inspection of underwater buildings via marine biological applications to the exploration of the seafloor. The reason for the lack of established optical systems for underwater inspection tasks lies in the technical difficulties of underwater image acquisition and processing: poor lighting and highly degraded images make computational postprocessing absolutely essential.
Proceedings Volume Videometrics, Range Imaging, and Applications XII; and Automated Visual Inspection, 87911G (2013) https://doi.org/10.1117/12.2020744
The purpose of this research is to develop a new means of identifying and extracting geometrical feature statistics from a non-contact precision-measurement 3D profilometer. Autonomous algorithms have been developed to search through large-scale Cartesian point clouds to identify and extract geometrical features. These algorithms are developed with the intent of providing real-time production quality control of cold-rolled steel wires. The steel wires in question are prestressing steel reinforcement wires for concrete members. The geometry of the wire is critical to the performance of the overall concrete structure.

For this research a custom 3D non-contact profilometry system has been developed that utilizes laser displacement sensors for submicron-resolution surface profiling. Optimizations in the control and sensory system allow data points to be collected at up to approximately 400,000 points per second. In order to achieve geometrical feature extraction and tolerancing with this large volume of data, the algorithms employed are optimized for parsing large data quantities. The methods used provide a unique means of maintaining high-resolution data of the surface profiles while keeping algorithm running times within practical bounds for industrial application.

By a combination of regional sampling, iterative search, spatial filtering, frequency filtering, spatial clustering, and template matching, a robust feature identification method has been developed. These algorithms provide an autonomous means of verifying tolerances in geometrical features. The key method of identifying the features is a combination of downhill simplex optimization and geometrical feature templates. By performing downhill simplex through several procedural programming layers of different search and filtering techniques, very specific geometrical features can be identified within the point cloud and analyzed for proper tolerancing. Being able to perform this quality control in real time provides significant opportunities for cost savings in both equipment protection and waste minimization.
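The downhill-simplex-plus-template idea can be sketched in one dimension: a parametric template is fitted to a measured profile by minimizing the squared error with Nelder-Mead. The Gaussian dent template, its parameters, and the use of SciPy's Nelder-Mead implementation are illustrative assumptions, not the authors' code:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical 1-D surface profile containing one dent; the template is a
# Gaussian depression parameterized by (position, depth, width). Downhill
# simplex (Nelder-Mead) fits the template to the measured profile.
x = np.linspace(0.0, 10.0, 500)

def template(p):
    pos, depth, width = p
    return -depth * np.exp(-0.5 * ((x - pos) / width) ** 2)

profile = template([4.2, 0.8, 0.35])          # synthetic "measured" profile

def sse(p):
    # objective: sum of squared errors between template and measurement
    return float(np.sum((template(p) - profile) ** 2))

res = minimize(sse, x0=[4.5, 0.6, 0.4], method="Nelder-Mead")
pos, depth, width = res.x
print(round(pos, 2), round(depth, 2))          # ≈ 4.2 0.8
```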
Proceedings Volume Videometrics, Range Imaging, and Applications XII; and Automated Visual Inspection, 87911H (2013) https://doi.org/10.1117/12.2020268
The computational prediction of the effective macroscopic material behavior of fiber-reinforced composites is a goal of research to exploit the potential of these materials. Besides the mechanical characteristics of the material components, extensive knowledge of the mechanical interaction between these components is necessary in order to set up suitable models of the local material structure. For example, an experimental investigation of the micromechanical damage behavior of simplified composite specimens can help to understand the mechanisms which cause matrix and interface damage in the vicinity of a fiber fracture. To realize an appropriate experimental setup, a novel semi-automatic measurement system based on the analysis of digital images using photoelasticity and image correlation was developed. Applied to specimens with a birefringent matrix material, it is able to provide global and local information on the damage evolution and the stress and strain state at the same time. The image acquisition is accomplished using a long-distance microscopic optic with an effective resolution of two micrometers per pixel. While the system is moved along the domain of interest of the specimen, the acquired images are assembled online and used to interpret optically extracted information in combination with global force-displacement curves provided by the load frame. The illumination of the specimen with circularly polarized light and the projection of the transmitted light through different configurations of polarizers and quarter-wave plates enables the synchronous capturing of four images at the quadrants of a four-megapixel image sensor. A fifth image is decoupled from the same optical path and projected onto a second camera chip, to get a non-polarized image of the same scene at the same time. The benefit of this optical setup is the opportunity to extract a wide range of information locally, without influencing the progress of the experiment.
The four images are used to obtain information on the stress distribution based on photoelasticity, while the fifth image delivers the local strain as the outcome of an image correlation algorithm and enables the observation and documentation of the visible damage phenomena. The acquisition of five different images at a time allows for the application to materials with time-dependent mechanical behavior, which is an important added value of the developed measurement optics. The experimental setup is applied to the so-called single fiber fragmentation test, a common test procedure to study the damage phenomena of single long-fiber reinforced specimens in a transparent matrix material. When a tension load is applied to the specimen at a low strain rate, damage of the fiber arises without a complete failure of the matrix material. As a result of the local failure of the fiber, a load transfer to the surrounding matrix material and the appearance of a characteristic stress distribution as well as evolving matrix and interface cracks can be observed. Using the described measurement system, it is possible to estimate the stress and strain distribution of the matrix material in the vicinity of the fractured fiber. In combination with the documentation and classification of the damage phenomena, this enables the interpretation of the stress redistribution process inside the composite. This knowledge can be used to analyze the correlation between micromechanical phenomena and the effective macroscopic material behavior, as well as to identify parameters of constitutive models for interface failure. The article demonstrates the potential of the measurement system and presents the results of its application to the single fiber fragmentation test. To illustrate the conclusions, the results of differently manipulated specimens of epoxy matrix material with an embedded glass fiber are compared.
Proceedings Volume Videometrics, Range Imaging, and Applications XII; and Automated Visual Inspection, 87911I (2013) https://doi.org/10.1117/12.2020543
We describe the automated application of an area-based registration method to the surface inspection of steel industry products, as a tool to solve an intermediate mosaicing problem. The main problem of area-based methods is the high probability that the result of a matching process will be incorrect if a region of interest without any relevant detail is used; the selection of a region of interest with relevant content remains an open problem. We propose a method to select a salient area when using a zero-mean normalised cross correlation metric and a block as the region of interest. The selection of the size and position of the block is focused on ensuring a smooth, unimodal similarity surface around the maximum-similitude point. Experiments show a correlation between the surface kurtosis of the block autocovariance and the same coefficient measured over the correlation surface around the maximum-similitude point for the three different steel products analysed. We verify that for blocks containing non-relevant information, the maximum correlation value is reached abruptly, within a small range of pixels around the maximum-similitude point. Salient blocks, on the other hand, usually lead to unimodal, smooth similarity surfaces with low sensitivity to noise, in contrast with those obtained from non-remarkable blocks. The proposed method also allows the application of fast search algorithms based on the unimodality of the correlation surface, yielding a large reduction in computational time compared with full-search strategies using fast normalised cross correlation algorithms.
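For reference, the zero-mean normalised cross correlation (ZNCC) of a block against a search image can be computed as below. The synthetic image and block position are illustrative, and the brute-force loop stands in for the fast search strategies discussed in the paper:

```python
import numpy as np

# ZNCC between a template block and every candidate position in a search
# image; a minimal reference implementation on synthetic data.
rng = np.random.default_rng(1)
image = rng.standard_normal((60, 80))
block = image[20:36, 30:46].copy()           # 16x16 block cut out at (20, 30)

def zncc(a, b):
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

h, w = block.shape
scores = np.array([[zncc(image[i:i+h, j:j+w], block)
                    for j in range(image.shape[1] - w + 1)]
                   for i in range(image.shape[0] - h + 1)])

i_best, j_best = np.unravel_index(scores.argmax(), scores.shape)
print(i_best, j_best)                        # → 20 30
```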
Proceedings Volume Videometrics, Range Imaging, and Applications XII; and Automated Visual Inspection, 87911J (2013) https://doi.org/10.1117/12.2020972
The recycling of plastic products is becoming increasingly attractive, not only from an environmental point of view but also economically. To obtain recycled (engineering) plastic products of the highest possible quality, plastic sorting technologies must provide clean and virtually mono-fractional compositions from a mixture of many different types of (shredded) plastics. In order to put such high-quality sorting into practice, the labeling of virgin plastics with specific fluorescent markers at very low concentrations (ppm level or less) during their manufacturing process is proposed. The emitted fluorescence spectra represent “optical fingerprints”, each unique to a particular plastic, which we use for plastic identification and classification purposes. In this study we quantify the classification performance using our prototype measurement system and 15 different plastic types when various influence factors most relevant in practice disturb the fluorescence spectra emitted from the labeled plastics. The results of these investigations help optimize the development and incorporation of appropriate fluorescent markers, as well as the classification algorithms and the overall measurement system, in order to achieve the lowest possible classification error rates.
Proceedings Volume Videometrics, Range Imaging, and Applications XII; and Automated Visual Inspection, 87911K (2013) https://doi.org/10.1117/12.2021604
Increasing demands for product quality and the outsourcing of production in the automobile industry lead to increasingly tight tolerances for components. In the area of metal mechanics these are largely dimensional and frequently require uncertainties in the micron region; for optical instruments this means microscopic resolution. Dimensional measurement with uncertainties of a few microns is nothing new; state-of-the-art equipment in fact goes far below. The task becomes difficult when the measurements have to be carried out in an industrial production environment, and deep inside a bore hole. This paper describes the development of an automatic measurement system for the internal dimensions of brake master cylinders, specifically the development of endoscopes, illumination for edge detection, and integration with other sensors, actuators, and controllers. The most demanding part was the endoscope development because, surprisingly, no commercial product for microscopic viewing and precision measurement was found on the market. As the market for such measurement machines is very small, and as the requirements were different for each endoscope, the budget allowed only the development of prototypes using readily available optical components. Borders between faces with different orientations of metallic structures can be difficult to detect: satisfactory metrological performance can be achieved only with carefully shaped illumination, even if the source is a simple LED (light-emitting diode). The automation was responsible for the largest part of the overall cost, stemming from the desire for a high throughput of the measurement machine even when operated by personnel who are not highly qualified. With the safety requirements satisfied, such a device ends up as rather complex equipment. Nevertheless, these aspects are mentioned only for completeness, because standard components and methods were applied.
Proceedings Volume Videometrics, Range Imaging, and Applications XII; and Automated Visual Inspection, 87911L (2013) https://doi.org/10.1117/12.2020149
In this paper we propose a method for the classification of moving objects of “human” and “car” types in computer vision systems using statistical hypotheses and the integration of the results using two different decision rules. FAR-FRR graphs for all criteria and the decision rule are plotted, and the confusion matrix for both ways of integration is presented. An example of applying the method to public video databases is provided, and ways of improving accuracy are proposed.
Proceedings Volume Videometrics, Range Imaging, and Applications XII; and Automated Visual Inspection, 87911M (2013) https://doi.org/10.1117/12.2020337
In our previous study we showed that the identification of bacteria species using Fresnel diffraction patterns is possible with high accuracy and at low cost. The Fresnel diffraction patterns were recorded with an optical system with converging spherical wave illumination. The experimental results showed that colonies of specific bacteria species generate unique diffraction signatures. The features used for building classification models, and thus for identification, were simply the mean value and standard deviation of pixel intensities within regions of interest called rings. This work presents new, interpretable features describing morphological and textural properties of the Fresnel diffraction patterns, and their verification with a statistical analysis workflow developed specifically for bacteria species identification. For such a data set of diffraction patterns it is very important to find the features that differentiate the species best. This task involves two steps: the first is finding and extracting new, interpretable features that are potentially better for differentiating bacteria species than the ones used before; the second is deciding which of them are best for identification purposes. The new features are calculated based on normalized diffraction patterns and central statistical moments. For the verification, an analysis workflow is applied based on ANOVA for feature selection; LDA, QDA, and SVM models for classification and identification; and CV, sensitivity, and specificity for performance assessment of the identification process. Additionally, the Fisher divergence method, also known as the signal-to-noise ratio (SNR), was exploited for feature selection.
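As a toy illustration of ring-based features of the kind mentioned above (the earlier mean and standard deviation, extended with higher central moments), the following computes central statistical moments of the pixel intensities inside one annular region of a synthetic pattern. The ring radii and the random image are assumptions:

```python
import numpy as np

# Feature extraction sketch: central moments of pixel intensities inside an
# annular region ("ring"). In the paper the rings come from recorded Fresnel
# diffraction images; here the pattern is synthetic.
rng = np.random.default_rng(2)
pattern = rng.random((128, 128))

yy, xx = np.mgrid[0:128, 0:128]
r = np.hypot(yy - 63.5, xx - 63.5)
ring = pattern[(r >= 20) & (r < 30)]          # annulus 20 <= r < 30

mean = ring.mean()
# 2nd, 3rd, 4th central moments (variance-, skewness-, kurtosis-related)
moments = [np.mean((ring - mean) ** k) for k in (2, 3, 4)]
features = [mean] + moments
print([round(float(f), 4) for f in features])
```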
Proceedings Volume Videometrics, Range Imaging, and Applications XII; and Automated Visual Inspection, 87911N (2013) https://doi.org/10.1117/12.2020460
Due to the depletion of solid mineral ore reserves and the increasing use of poor and refractory ores in production, minerals are continuously appreciating in value. Optical sorters from various manufacturers are currently well represented on the market for enrichment equipment. These sorters differ substantially in throughput, the particle size classes of the processed raw material, the details of the decision algorithm, and the color model (RGB, YUV, HSB, etc.) chosen to describe the color of the mineral samples being separated. At the same time, there is no method for estimating the dressability of mineral raw materials without a direct semi-industrial test on an existing type of optical sorter, nor any equipment implementing such a dressability estimation method. The lack of criteria for choosing a particular manufacturer (or type) of optical sorter should also be noted. A direct consequence of this situation is the "opacity" of the color sorting method and its rejection by potential customers. The proposed solution to these problems is to develop a dressability estimation method and to create an optical-electronic system for the express analysis of the dressability of mineral raw materials by the color sorting method. This paper describes the structure and operating principles of an experimental model of such an optical-electronic system, and presents a comparison between the proposed system and a real color sorter.
Proceedings Volume Videometrics, Range Imaging, and Applications XII; and Automated Visual Inspection, 87911O (2013) https://doi.org/10.1117/12.2022133
In this paper we assess the impact of different error sources on deflectometric measurements. We provide an overview of previous work in this field and fill the gaps to arrive at a unified measurement model. The focus is on the parameters of a deflectometric setup, with the objective of giving practice-oriented guidelines for optimizing deflectometric data acquisition. We differentiate between systematic error sources, which can be anticipated and compensated for, and errors which are intrinsic to the deflectometric measurement method itself. In the latter case, possible trade-offs between parameters are highlighted to enable the optimization of a setup for a specific application.
Proceedings Volume Videometrics, Range Imaging, and Applications XII; and Automated Visual Inspection, 87911P (2013) https://doi.org/10.1117/12.2020374
In this paper, an accurate and efficient method for measuring the refractive index of a transparent plate is developed. The refractive index is evaluated using the Fourier Transform Method (FTM) from a fringe pattern generated by digital speckle photography. The validity and accuracy of the method were confirmed with a standard reference material. Furthermore, the method is insensitive to environmental perturbations and simple to implement, compared to conventional index measurement methods providing similar accuracy.
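For reference, the Fourier Transform Method on a 1-D fringe signal can be sketched as follows: transform, isolate the carrier sideband, inverse-transform, and recover the phase from the complex result. The carrier frequency, band limits, and test phase below are illustrative, not the paper's parameters:

```python
import numpy as np

# FTM sketch: fringe I(x) = a + b*cos(2*pi*f0*x + phi(x)); the slowly
# varying phase phi(x) is recovered from the +f0 sideband of the spectrum.
N = 1024
x = np.arange(N)
f0 = 64 / N                                    # carrier frequency (cycles/sample)
phi = 0.8 * np.sin(2 * np.pi * x / N)          # phase to recover
fringe = 1.0 + 0.5 * np.cos(2 * np.pi * f0 * x + phi)

spec = np.fft.fft(fringe)
mask = np.zeros(N)
mask[32:96] = 1.0                              # band-pass around the +f0 sideband (bin 64)
analytic = np.fft.ifft(spec * mask)

wrapped = np.angle(analytic) - 2 * np.pi * f0 * x   # remove the carrier
phi_est = np.unwrap(wrapped)
phi_est -= phi_est.mean() - phi.mean()         # fix the unknown phase offset

err = np.max(np.abs(phi_est - phi))
print(err < 0.05)                              # → True
```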
Proceedings Volume Videometrics, Range Imaging, and Applications XII; and Automated Visual Inspection, 87911Q (2013) https://doi.org/10.1117/12.2020431
A ball-based intermediary target technique is presented to position a moving machine-vision measurement system and to realize data registration across different positions. Large work-piece measurement based on machine vision faces several problems: the viewing angle is limited, and measurement range and accuracy are inversely related. To measure the whole work-piece conveniently and precisely, the use of balls as registration targets is proposed in this paper. Only a single image of the ball target is required from each camera, after which the vision system is fully calibrated (intrinsic and extrinsic camera parameters). When the vision system has to be moved to measure the whole work-piece, one snapshot of the ball target in the common view positions the system, and data registration can then be performed. To locate the ball's center more accurately, an error correction model is established.
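The ball-center estimation step can be illustrated with an algebraic least-squares sphere fit, one common way to locate a ball target's center from measured surface points. The formulation below and the synthetic, noise-free data are assumptions, not necessarily the paper's error-corrected model:

```python
import numpy as np

# Algebraic sphere fit: for points p on the sphere, ||p||^2 = 2 p.c + d
# with d = r^2 - ||c||^2, which is linear in the unknowns (c, d).
rng = np.random.default_rng(3)
c_true, r_true = np.array([10.0, -4.0, 25.0]), 2.5

# random points on the sphere surface
v = rng.standard_normal((200, 3))
pts = c_true + r_true * v / np.linalg.norm(v, axis=1, keepdims=True)

A = np.hstack([2 * pts, np.ones((200, 1))])
b = (pts ** 2).sum(axis=1)
sol, *_ = np.linalg.lstsq(A, b, rcond=None)
c_est = sol[:3]
r_est = np.sqrt(sol[3] + c_est @ c_est)

print(np.round(c_est, 3), round(float(r_est), 3))   # → [10. -4. 25.] 2.5
```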
Proceedings Volume Videometrics, Range Imaging, and Applications XII; and Automated Visual Inspection, 87911R (2013) https://doi.org/10.1117/12.2020528
To track walking persons inside a surveillance area we use LIDAR (LIght Detection And Ranging) sensors with a small number N of spatially stationary LIDAR beams in order to keep the sensor costs to a minimum. To achieve high target detectability and tracking performance, the coverage of the surveillance area by the N LIDAR beams must be large, which is why the beamwidth is to be set to a practically feasible maximum. As a result, the lateral localization error inside these wide LIDAR beams is high, while the area of surveillance can still not be entirely covered by LIDAR beams. Thus, the accurate tracking of persons walking inside the area of surveillance is challenging. In the classical tracking approach, the axial position of a target inside a LIDAR beam is obtained from time-of-flight measurements. However, the lateral deviation of the target position from the optical beam axis remains unknown. In this paper, a novel approach to reduce the lateral localization error is proposed and investigated. From consecutively measured (axial) distances to the target while it moves through a LIDAR beam, the target velocity vector is estimated and used as an observation for a Kalman-based tracking algorithm. The localization and tracking performance of the novel approach is determined and compared with that of the classical approach.
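A minimal sketch of the kind of Kalman filter described above, where the estimated target velocity vector serves as the observation: a constant-velocity state model in 2-D with illustrative noise levels and a simulated track, not the authors' parameterization:

```python
import numpy as np

# Constant-velocity Kalman filter; the observation is the target's velocity
# vector (state layout: [x, y, vx, vy]).
dt = 0.1
F = np.block([[np.eye(2), dt * np.eye(2)],
              [np.zeros((2, 2)), np.eye(2)]])   # state transition
H = np.hstack([np.zeros((2, 2)), np.eye(2)])    # observe velocity only
Q = 1e-4 * np.eye(4)                            # process noise
R = 1e-2 * np.eye(2)                            # measurement noise

rng = np.random.default_rng(4)
x_true = np.array([0.0, 0.0, 1.0, 0.5])
x_est = np.zeros(4)
P = np.eye(4)

for _ in range(200):
    x_true = F @ x_true                         # simulate the target
    z = x_true[2:] + rng.normal(0, 0.1, 2)      # noisy velocity observation
    # predict
    x_est = F @ x_est
    P = F @ P @ F.T + Q
    # update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_est = x_est + K @ (z - H @ x_est)
    P = (np.eye(4) - K @ H) @ P

print(np.round(x_est[2:], 2))                   # velocity estimate ≈ [1.0, 0.5]
```

Note that with velocity-only observations the absolute position stays unobservable; the filter smooths the velocity track, which is exactly the quantity the approach above derives from consecutive axial distance measurements.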