Kevin G. Harding¹, Song Zhang², Jae-Sang Hyun³, Beiwen Li⁴
¹Optical Metrology Solutions (United States); ²Purdue Univ. (United States); ³Orbbec 3D Technology International, Inc. (United States); ⁴Iowa State Univ. of Science and Technology (United States)
This PDF file contains the front matter associated with SPIE Proceedings Volume 12098, including the Title Page, Copyright information, Table of Contents, and Conference Committee listings.
This paper presents a method that can accurately determine the mapping between the phase and three-dimensional (3D) coordinates for digital fringe projection systems. The method first extracts the rotation and translation of each calibration target pose, calculates 3D coordinates for each pixel, and then establishes a pixel-wise relationship between each coordinate and the phase. Experimental results demonstrate that the proposed method achieves higher calibration accuracy compared with the traditional structured light system calibration method.
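As a minimal illustration of a pixel-wise phase-to-coordinate mapping (not the authors' implementation), the sketch below fits an independent polynomial z(φ) for every camera pixel from K calibration poses, assuming the absolute phase maps and the per-pixel target depths have already been recovered; the function and variable names are ours.

```python
import numpy as np

def fit_pixelwise_phase_to_z(phases, depths, order=2):
    """Fit z = c_n*phi^n + ... + c_1*phi + c_0 independently for every pixel.

    phases : (K, H, W) absolute phase maps from K calibration poses
    depths : (K, H, W) per-pixel z coordinates for the same poses
    Returns coefficients of shape (order+1, H, W).
    """
    K, H, W = phases.shape
    coeffs = np.empty((order + 1, H, W))
    for i in range(H):
        for j in range(W):
            # np.polyfit returns the highest-order coefficient first
            coeffs[:, i, j] = np.polyfit(phases[:, i, j], depths[:, i, j], order)
    return coeffs

def phase_to_z(coeffs, phase_map):
    """Evaluate the per-pixel polynomial on a measured phase map."""
    order = coeffs.shape[0] - 1
    z = np.zeros_like(phase_map, dtype=float)
    for n in range(order + 1):
        z += coeffs[n] * phase_map ** (order - n)
    return z
```

The same idea extends to the x and y coordinates by fitting separate per-pixel mappings.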
Phase-shifting methods have been extensively employed in high-resolution and high-speed absolute three-dimensional (3D) measurements. In the process of 3D reconstruction, one of the important tasks is to recover the absolute phase pixel by pixel. In this paper, we propose an absolute 3D shape measurement method that combines the digital image correlation (DIC) algorithm with the phase-shifting method on a one-camera, one-projector structured light system. Compared to the conventional multi-wavelength unwrapping method, the proposed method only needs one random binary pattern for phase unwrapping. Experimental results demonstrate that the proposed method can successfully measure complex scenes with high quality.
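For reference, the wrapped phase that such a pipeline starts from can be computed with the standard three-step phase-shifting algorithm (phase shifts of -2π/3, 0, +2π/3); the DIC-based unwrapping with the random binary pattern is the paper's contribution and is not reproduced here.

```python
import numpy as np

def wrapped_phase_3step(I1, I2, I3):
    """Wrapped phase from three fringe images with phase shifts of
    -2*pi/3, 0, +2*pi/3 (standard three-step algorithm)."""
    return np.arctan2(np.sqrt(3.0) * (I1 - I3), 2.0 * I2 - I1 - I3)
```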
The requirements in optical dimensional coordinate metrology are often not only to fulfill the measurement task but also to define the measurement strategy. One application of relevance is the automatic quality assurance of punching and stamping tools, which is often directly integrated into the manufacturing process. The measurement of surface finishes and micro-geometries with tolerances down to the sub-μm range has a decisive influence on the quality, reliability, and durability of such high-precision components. The challenge of the measuring task consists of measuring complex micro-geometries on free-form surfaces on the top as well as the vertical reference surfaces on the side. Existing tactile methods often do not allow high-resolution measurements of such free-form surfaces or require long measuring times, and existing optical measurement systems typically have limited performance with respect to measurable slope angles. For an automatic measurement process, measurement planning, e.g., the number of measurements, their positions, and the probing directions, is essential for highly accurate and repeatable measurements. The proposed measurement solution, based on an optical micro-coordinate measurement machine (μCMM) using advanced focus variation, offers advantages for the complete application including the measurement strategy. Focus variation and vertical focus probing enable the measurement of the free-form top surface and the measurement of the vertical surfaces by optical lateral probing. The μCMM metrology software provides algorithms that support the user in technology-based optimal measurement planning on CAD data, e.g., by suggesting the measurement method and probing direction or by offline collision detection using a digital twin of the μCMM.
Digital fringe projection (DFP) methods are commonly used to obtain high-accuracy shape measurements. However, many measured objects have high-contrast texture caused by the edges of black- and white-colored sections of the object. In these high-contrast areas there is consistently a phase artifact, which in turn creates measurement error, sometimes referred to as “discontinuity-induced measurement artefacts” (DMA). Our study indicated that this error is generally shaped like a Gaussian curve. Based on this finding, we developed a method for removing the error via Gaussian curve fitting on the affected regions. These regions can be found by locating large spikes in the image intensity gradient, which directly correspond to the edge of the Gaussian artifact. We propose to use this error-removal method in two ways: to remove errors on a checkerboard calibration target in order to increase calibration accuracy, and to directly remove errors in high-contrast areas to decrease shape measurement error. Experimental results demonstrate that the proposed method successfully decreases calibration error for a checkerboard calibration target and also significantly decreases shape measurement error.
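A rough sketch of the described error-removal idea, under our own assumptions about data layout (floating-point 1-D cross sections through the edge, a fixed fitting window, and a fixed gradient threshold): locate intensity-gradient spikes, fit a Gaussian to the local phase residual, and subtract it.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, a, mu, sigma):
    return a * np.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))

def remove_gaussian_artifact(phase_line, intensity_line, grad_thresh=0.2, half_width=10):
    """Remove a Gaussian-shaped phase artifact around high-contrast edges.

    phase_line, intensity_line : float 1-D cross sections through the edge.
    An edge is located by a large spike in the intensity gradient; a Gaussian
    is then fitted to the local phase residual and subtracted.
    (Adjacent gradient spikes would be merged in a full implementation.)
    """
    corrected = phase_line.astype(float).copy()
    grad = np.abs(np.gradient(intensity_line))
    for idx in np.flatnonzero(grad > grad_thresh):
        lo, hi = max(idx - half_width, 0), min(idx + half_width + 1, len(phase_line))
        x = np.arange(lo, hi)
        # local baseline: straight line through the window end points
        baseline = np.interp(x, [lo, hi - 1], [phase_line[lo], phase_line[hi - 1]])
        residual = phase_line[lo:hi] - baseline
        try:
            popt, _ = curve_fit(gaussian, x, residual,
                                p0=[residual.max(), idx, half_width / 3.0])
            corrected[lo:hi] -= gaussian(x, *popt)
        except RuntimeError:
            pass  # fit failed; leave this window untouched
    return corrected
```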
Laser triangulation gages have been in wide use in industry for over 40 years but have changed very little in the mechanism of data extraction. The common method of measurement involves projecting a laser spot onto a target, imaging that spot onto a detector array or position-sensing detector, and then using triangulation based on the center of the spot image to determine the range. In recent years, new means of detecting range for 3D measurements have included phase stepping with square waves, coded patterns, and phase shifting of sine waves. This paper presents a comparison of various means of using and analyzing patterns generated from the laser spot on the target so as to provide extended capabilities to the point laser gage. The mechanisms explored include shadows of square and sine waves from gratings as well as interference patterns produced interferometrically. The results provide insights for both 1D and 3D measurement applications.
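For context, the classic point-triangulation relation such gages start from, in a simplified parallel-axis geometry (camera axis parallel to the laser beam, baseline b, focal length f, no Scheimpflug tilt or distortion), is z = f·b/x, where x is the lateral offset of the spot image on the sensor; a hedged numerical sketch:

```python
def triangulation_range(spot_offset_px, pixel_pitch_mm, focal_mm, baseline_mm):
    """Range from the lateral spot-image offset for a parallel-axis
    triangulation geometry (simplified; no lens tilt, no distortion)."""
    x = spot_offset_px * pixel_pitch_mm   # offset on the sensor in mm
    return focal_mm * baseline_mm / x     # z = f*b/x by similar triangles
```

For example, a spot imaged 120 px off axis with 5 µm pixels, a 25 mm lens, and a 100 mm baseline gives z ≈ 25 × 100 / 0.6 ≈ 4167 mm.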
This paper presents a novel method that does not require additional images for 3D video imaging, using a phase unwrapping technique based on a geometric constraint and the gradient field. Specifically, we create an artificial absolute phase map Φmin at a given depth z = zmin taken from the previous frame. We optimize zmin by validating the continuity of the computed unwrapped phase with the gradient field and pass the true minimum z value zmin to the next frame. The first frame is reconstructed by a static method. Experiments demonstrate that only three phase-shifted fringe patterns are required to measure moving objects.
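The pixel-wise unwrapping step against an artificial minimum phase map follows the usual geometric-constraint rule (the fringe order is the ceiling of the phase difference divided by 2π); a minimal sketch, with the construction of Φmin and the gradient-field validation omitted:

```python
import numpy as np

def unwrap_with_min_phase(phi_wrapped, phi_min):
    """Pixel-wise unwrapping against an artificial minimum phase map
    Phi_min generated at depth z = z_min (geometric-constraint rule)."""
    k = np.ceil((phi_min - phi_wrapped) / (2.0 * np.pi))  # fringe order
    return phi_wrapped + 2.0 * np.pi * k
```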
Seeking and tracking high thermal signature targets requires extreme precision to protect assets and prevent misidentification of threats. In these situations, the midwave infrared (MWIR) region of the electromagnetic spectrum is the ideal wavelength range for optical detection. Systems used in these scenarios have stringent transmitted wavefront and performance requirements, often needing optical alignment tools that can be adapted to the wavelength specifications and test configurations unique to the optical system being built and tested. Discussed here is a high-performance MWIR Twyman-Green-style interferometer with a dual-port configuration. This system was designed to allow for multiple simultaneous test setups, including expanded beams, while maintaining high-accuracy wavefront measurements in both well-controlled and turbulent environments. This paper presents the design methodology and performance of the interferometer, with special consideration for cost, usability, and maintaining test configurations and functionality.
White Light Scanning Interferential Microscopy (WLSI) is a widely used technique for determining the 3D topography of surfaces with nanometer resolution. However, although the topography is obtained with adequate resolution, precise information about the object's reflectance is lost because the microscopy images are degraded by interference fringes. These fringes make it challenging to obtain an extended focus image (EFI) to inspect details of the entire surface, as is done in standard microscopy. The typical procedure to estimate the reflected intensity of the object is to average the depth interference intensity signal. However, when many samples of the intensity signal are averaged, blurring becomes noticeable, whereas with too few samples, remnant artifacts of the interference fringe patterns remain. In this work, we determine an adequate axial range that represents an optimal window for averaging and estimating the intensity of an EFI. A series of WLSI interference images were simulated, and EFI images were calculated by averaging over axial lengths normalized to the depth of field. Each EFI was compared with the reference image using the signal-to-noise ratio (SNR) and the universal quality index (UQI) metrics, with the highest values of 44.332 and 0.9997, respectively, obtained for an axial range of 0.28 DOF.
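A simplified sketch of the averaging step under our own assumptions (the per-pixel envelope peak is approximated by the intensity maximum of the scan; real envelope detection is more involved), with the averaging window expressed as a fraction of the depth of field:

```python
import numpy as np

def extended_focus_image(stack, z_step_um, dof_um, window_in_dof=0.28):
    """Estimate an extended focus image from a WLSI intensity stack by
    averaging around the per-pixel envelope maximum over a window of
    window_in_dof * DOF.

    stack : (Z, H, W) interference intensity images along the scan axis
    """
    half = max(int(round(0.5 * window_in_dof * dof_um / z_step_um)), 1)
    peak = np.argmax(stack, axis=0)        # crude envelope-peak estimate
    Z = stack.shape[0]
    efi = np.empty(stack.shape[1:])
    for i in range(stack.shape[1]):
        for j in range(stack.shape[2]):
            lo = max(peak[i, j] - half, 0)
            hi = min(peak[i, j] + half + 1, Z)
            efi[i, j] = stack[lo:hi, i, j].mean()
    return efi
```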
In structured-light systems, the lens distortions of the camera and the projector reduce the measurement accuracy when calibrated as a standard stereo-vision system. The conventional compensation via distortion coefficients reduces the error, but still leaves a significant residual. Recently, we proposed a hybrid calibration procedure that leverages the standard calibration approach to improve measurement accuracy. This hybrid procedure consisted of building a pixel-wise phase-to-coordinate mapping based on adjusted 3D data obtained from the standard stereo-vision method. Here, we show experimentally that the measurement accuracy can be significantly improved, even using the linear pinhole model and linear mapping functions. We then move to consider the nonlinear model to improve the measurement accuracy further. Encouraging results show that this new calibration method increases the measurement accuracy without requiring elaborate calibration procedures or sophisticated ancillary equipment.
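As an illustrative choice of nonlinear mapping function (not necessarily the paper's), one could fit a rational form z(φ) = (a + bφ)/(1 + cφ) per pixel and compare its residuals against a linear fit; a minimal per-pixel sketch:

```python
import numpy as np
from scipy.optimize import curve_fit

def rational_map(phi, a, b, c):
    # Illustrative nonlinear (rational) mapping; the paper's exact
    # mapping functions may differ.
    return (a + b * phi) / (1.0 + c * phi)

def fit_pixel(phis, zs):
    """Fit z(phi) for a single pixel from K calibration poses."""
    p0 = [zs.mean(), 0.0, 0.0]
    popt, _ = curve_fit(rational_map, phis, zs, p0=p0, maxfev=5000)
    return popt
```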
Industrial robots have been an essential part of production facilities for many years. They allow fast and precise positioning of even the largest loads with very high repeatability. However, there are still many processes which are performed better or more economically by humans. If a component requires several work steps, some of which are better suited to a robot and some to a human worker, cooperation between humans and robots would be beneficial. Due to the enormous power and speed of industrial robots, this poses a considerable risk to the worker. Therefore, tasks to be performed by humans and robots are usually completely decoupled in terms of space or time. We suggest an approach which allows a human worker to interact safely with a fast industrial robot. We achieve this by constantly monitoring the positions of both robot and human and adjusting the robot's velocity according to its proximity to the worker. We present an interaction booth which can be entered by a robot arm from the back and by a worker from the front, such that both can access the machinery within. A multi-camera sensor based on the shape-from-silhouette principle constantly observes the booth to monitor its occupancy. We demonstrate that within 50 ms, our sensor can (1) detect a change in occupancy in the booth, (2) classify sub-volumes as “robot”, “human”, or “other object”, (3) calculate the distance between human and robot, and (4) output this information to the robot controller. The working speed of the robot is then adjusted according to its distance to the worker.
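The distance-based velocity adjustment can be pictured as a simple stop/ramp/full-speed rule; the distances and maximum speed below are placeholders, not the authors' parameters:

```python
def robot_speed_limit(distance_m, v_max=1.5, stop_dist=0.5, full_dist=2.0):
    """Scale the allowed robot speed with the measured human-robot distance:
    stop inside stop_dist, full speed beyond full_dist, linear ramp between.
    All numbers are illustrative."""
    if distance_m <= stop_dist:
        return 0.0
    if distance_m >= full_dist:
        return v_max
    return v_max * (distance_m - stop_dist) / (full_dist - stop_dist)
```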
To ensure the safe operation of aircraft, regular endoscopy of the engines is mandatory. Since the blade stages are particularly susceptible to defects, they must be inspected especially frequently. In the process, a worker must inspect each blade individually. All findings must be carefully documented and assigned to the respective blade. Since there are no individual markings to identify the blades, the operator must count all blades as they pass through the endoscope image. Although electric rotary devices with automatic blade counting are available for some engines, manual counting is often necessary. Simultaneously inspecting and counting blades is tedious and error-prone. In this paper, we present a novel algorithm for automatic blade counting during jet engine inspection. The algorithm's central part is a Pearson correlation of individual video frames as the blades pass in front of the endoscope while the rotor is turned. Adaptive thresholding of the correlation function is used to count the blades. Rotation direction and speed are determined using the Farneback optical flow method. By using correlation instead of classical image features, the algorithm is highly robust to metallic reflections and smooth blade surfaces without significant image features. In addition, the algorithm is robust to different rotation speeds and directions. Compared to existing approaches, it is robust and universally applicable for counting engine blades on almost any engine without the need for customization.
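A stripped-down sketch of the counting core, assuming grayscale frames and a fixed relative threshold in place of the paper's adaptive thresholding (optical-flow direction and speed estimation not shown):

```python
import numpy as np

def count_blades(frames, ref_index=0, rel_threshold=0.8):
    """Count blades in an endoscope video by correlating every frame with a
    reference frame and counting threshold crossings of the correlation.

    frames : (N, H, W) grayscale frames; rel_threshold is applied to the
    correlation range as a simple stand-in for adaptive thresholding.
    """
    ref = frames[ref_index].ravel().astype(float)
    corr = np.array([np.corrcoef(ref, f.ravel().astype(float))[0, 1]
                     for f in frames])
    thresh = corr.min() + rel_threshold * (corr.max() - corr.min())
    above = corr > thresh
    # each rising edge of the thresholded correlation = one blade passing
    return int(np.count_nonzero(above[1:] & ~above[:-1]))
```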
Stereo vision is used in many application areas, such as robot-assisted manufacturing processes. Recently, many efficient stereo matching algorithms based on deep learning have been developed to overcome the limitations of traditional correspondence-point analysis, among them texture-poor objects and optically non-cooperative objects. One of these end-to-end learning algorithms is the Adaptive Aggregation Network (AANet/AANet+), which is divided into five steps: feature extraction, cost volume construction, cost aggregation, disparity computation, and disparity refinement. By combining different components, it is easy to create an individual stereo matching model. Our goal is to develop efficient learning methods for robot-assisted manufacturing processes on cross-domain data streams, in order to improve recognition tasks and process optimisation. To this end, we investigated the AANet+ in terms of usability and efficiency on our own test dataset with different measurement setups (passive stereo system). The inputs to the AANet+ are rectified stereo pairs from the test dataset and a pre-trained model. Instead of generating our own training dataset, we used two pre-trained models based on the KITTI-2015 and SceneFlow datasets. Our research has shown that the pre-trained model based on the SceneFlow dataset predicts disparities with better object delineation. Due to out-of-distribution inputs, reliable disparity predictions from the AANet+ are only possible for test datasets with a parallel measurement setup. We compared the results with two traditional stereo matching algorithms (semi-global block matching and DAISY). Compared to the traditionally computed disparity maps, the AANet+ is able to robustly detect texture-poor and optically non-cooperative objects.
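The semi-global block matching baseline referred to above is available in OpenCV; a typical configuration looks like the following (the parameter values and file names are our assumptions, not the paper's):

```python
import cv2

block = 5
cn = 1  # grayscale images
sgbm = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,            # must be divisible by 16
    blockSize=block,
    P1=8 * cn * block ** 2,        # smoothness penalties as recommended
    P2=32 * cn * block ** 2,       # in the OpenCV documentation
    uniquenessRatio=10,
    speckleWindowSize=100,
    speckleRange=2,
)

left = cv2.imread("left_rectified.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right_rectified.png", cv2.IMREAD_GRAYSCALE)
# StereoSGBM returns fixed-point disparities scaled by 16
disparity = sgbm.compute(left, right).astype("float32") / 16.0
```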
Next-Generation Spectroscopic Technologies and Computational Imaging I
Optically Super-resolved InfraRed Imaging micro-Spectroscopy (OSIRIS) is a novel technique to break the tension between spectroscopy (wavelength) and microscopy (spatial resolution) inherent in the diffraction limit. In OSIRIS, modulated long wavelength “pump” light is directed onto the sample, while a short wavelength “probe” beam senses the resultant modulation in local temperature. Spectra are collected by varying the wavelength of the modulated light. We describe a method of spectral de-mixing based on a Bayesian approach to identify the statistically distinct chemical fingerprints (spectra) in the hyperspectral image. This result also reveals the amount of each material at each point in space, which can then be used to digitally stain the image to present an easily interpretable image to the end user.
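The paper's de-mixing is Bayesian; as a much simpler stand-in that conveys the underlying linear-mixing idea (each pixel spectrum is a non-negative combination of component spectra), here is a non-negative least-squares unmixing with known component spectra:

```python
import numpy as np
from scipy.optimize import nnls

def unmix(hypercube, endmembers):
    """Per-pixel abundances for a linear mixing model with non-negativity.

    hypercube  : (H, W, B) hyperspectral image with B spectral bands
    endmembers : (B, M) matrix whose columns are the M component spectra
    Returns (H, W, M) abundance maps that can be used to 'digitally stain'
    the image.
    """
    H, W, B = hypercube.shape
    M = endmembers.shape[1]
    abundances = np.empty((H, W, M))
    for i in range(H):
        for j in range(W):
            abundances[i, j], _ = nnls(endmembers, hypercube[i, j])
    return abundances
```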
A hybrid Fourier transform infrared (FTIR) / quantum cascade laser (QCL) spectrometer is introduced for the analysis of gas-phase chemical kinetics, including the study of alkyl halide photolysis reactions. The FTIR provides broadband spectral survey information, and the QCL system provides improved detection limits and acquisition speeds, albeit over limited wavelength domains. A kinetic model for the photolysis of methyl iodide is introduced, which suggests that both the steady-state products, such as methanol, and transient intermediates may be monitored using the hybrid setup. Preliminary results use an external-cavity QCL to rapidly measure the spectrum of methanol from 2200 to 1960 cm⁻¹ in ~2 seconds, which is sufficiently fast to capture the chemical dynamics predicted by the model to occur during the first several seconds of photolysis.
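Purely to illustrate the kind of kinetics such a model describes, a two-step first-order scheme (CH3I → transient intermediate → stable product) can be integrated with SciPy; the rate constants below are arbitrary placeholders, not values from the paper:

```python
import numpy as np
from scipy.integrate import solve_ivp

k1, k2 = 0.8, 0.3  # placeholder first-order rate constants (1/s)

def rhs(t, y):
    # y = [CH3I, transient intermediate, stable product]
    a, b, c = y
    return [-k1 * a, k1 * a - k2 * b, k2 * b]

sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0, 0.0],
                t_eval=np.linspace(0.0, 10.0, 101))
# sol.y[1] (the intermediate) peaks within the first few seconds, which is
# why a ~2 s spectral acquisition is fast enough to follow the transient.
```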
Next-Generation Spectroscopic Technologies and Computational Imaging II
To increase the throughput of image-based wafer quality inspection tools, we propose to use computational imaging methods to address two main bottlenecks: the mechanical alignment of the wafer with the imaging plane and the pixel size of the imager. The former requires significant time but is crucial for a reliable quality check, and the latter introduces a trade-off between the wafer scanning speed and the minimum detectable defect size. We demonstrate the application of our recently developed SANDR algorithm for obtaining a wafer image with sub-pixel resolution from a series of imperfectly aligned low-resolution images. The wafer misalignment creates a depth of focus that varies over the field of view, which poses a significant obstacle for state-of-the-art methods but is successfully handled by SANDR. The method is tested on simulated images.
The goal of this study is to provide relative location, posture change, and spatial features that describe a space for applications requiring spatial awareness, such as SLAM (Simultaneous Localization And Mapping), robotics, AR (Augmented Reality), and VR (Virtual Reality). For odometry with a depth camera, feature-correspondence ICP (Iterative Closest Point) is employed. However, if a feature lies on the edge of an object, its distance may change with viewing angle, so features are initially extracted only on a single plane obtained globally with PCA (Principal Component Analysis). The ICP algorithm then obtains the rotation and translation of the moving agent from the extracted features, and this process runs over a sliding window of N sets. For the description of the space, the floor and ceiling planes are first semantically recognized with PCA; they are relatively easy to distinguish by estimating the pose from the agent's IMU (Inertial Measurement Unit) and the camera's tilt. The tiles recognized as floor are then expanded using unique vectors, colors, and textures to obtain occupancy (whether a cell is an obstacle or not). The object recognition module is inspired by PointNet: it takes points and normal vectors as input and classifies pre-trained shape primitives (box, cylinder, sphere, cone, etc.). The positions of recognized objects are reused to correct the drift of the odometry, and recognition is more robust because a shape, rather than a specific class, is recognized. The experiments are carried out on a rover with a battery that operates for 10-12 hours on a single charge; a depth camera with an integrated IMU collects the data, and an edge device with Wi-Fi transmits it to a server for continuous training. If the agent keeps training in the same place, semi-supervised learning is possible with a few confirmations from a supervisor.
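The PCA-based plane extraction mentioned above amounts to taking the direction of least variance of the point cloud as the plane normal; a minimal sketch (our own, not the study's code):

```python
import numpy as np

def dominant_plane_normal(points):
    """Estimate the dominant plane (e.g., floor or ceiling) of a point cloud
    by PCA: the direction of least variance is the plane normal.

    points : (N, 3) array of 3-D points from the depth camera
    """
    centered = points - points.mean(axis=0)
    # rows of vt are ordered by decreasing singular value
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]  # direction of least variance
    return normal / np.linalg.norm(normal)
```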
Novel thin-film solar cells based on Copper Indium Gallium Selenide (CIGS) are an alternative to standard crystalline silicon cells. This work tests whether two proposed optical methods, micro-Raman spectroscopy (RS) and photoluminescence (PL) imaging, can measure quality parameters of CIGS PV plates during their manufacture. The investigation followed three steps. Step 1: semi-finished CIGS cells were deposited on a soda-lime glass carrier and measured with Raman and PL. The test cells consisted of a molybdenum (Mo) back contact, a CIGS layer with varied absorber thickness, and a CdS layer. The measurements were used to train models for predictive quality monitoring. Step 2: the plates were finished by adding an iZnO buffer layer and a ZnO:Al (AZO) front electrode, divided into 32 cells by scribing down to the Mo layer, and electrically tested. Parameters such as the open-circuit voltage VOC, the shunt resistance Rsh, and the external quantum efficiency (EQE) were measured. Step 3: the finished cells were again measured using the two proposed methods to estimate the composition, efficiency, and VOC of the thin-film cells. Our results show that the proposed methods can non-destructively predict the absorber composition and cell electrical parameters and can therefore be used to exclude samples with poor cell performance at an early production stage.
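The predictive-quality-monitoring step can be pictured as a regression from spectral features to the later-measured electrical parameters; a hedged sketch using scikit-learn (the feature extraction, file names, and model choice are our assumptions, not the authors'):

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

# X: one row of Raman/PL features per measured plate position (e.g., peak
# positions and intensities); y: the electrical parameter measured after
# finishing (e.g., VOC). Feature extraction and data are not reproduced here.
X = np.load("spectral_features.npy")  # hypothetical file names
y = np.load("voc_measured.npy")

model = Ridge(alpha=1.0)
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print("cross-validated R^2:", scores.mean())
```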