Accurate recognition of the cutting path is crucial for the manufacture of invisible aligners. Current methods for cutting path recognition either rely on manual point selection and line drawing, which is inefficient, or on automatic recognition that suffers from low robustness and high complexity. This paper proposes an accurate and rapid algorithm for the automatic recognition of cutting paths for invisible aligners. Based on the curvature at the mesh vertices, the algorithm extracts the feature region containing the cutting path. An initial cutting path is obtained via three-dimensional morphological operations and an improved skeleton extraction algorithm. The final cutting path is obtained by unfolding the initial path onto a two-dimensional plane with a conformal mapping algorithm, smoothing it there, and mapping the smoothed two-dimensional path back to three-dimensional space. Experimental results on 80 digital dental models verify the accuracy and robustness of the proposed method.
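As a rough illustration of the first step, the sketch below thresholds per-vertex mean curvature to isolate a candidate feature region on a dental mesh. This is a minimal sketch, not the authors' implementation: the trimesh library, the input file name, the ball radius, and the percentile threshold are all assumptions.

```python
# Minimal sketch of the curvature-based feature-region step (not the
# paper's implementation). Assumes the trimesh library; the file name,
# radius, and threshold are illustrative.
import numpy as np
import trimesh
from trimesh.curvature import discrete_mean_curvature_measure

mesh = trimesh.load("dental_model.stl")  # hypothetical input file

# Mean curvature measured in a small ball around each vertex.
radius = 0.5  # assumed model units (mm)
curv = discrete_mean_curvature_measure(mesh, mesh.vertices, radius)

# High-curvature vertices approximate the gingival-margin region
# through which the cutting path runs.
threshold = np.percentile(curv, 90)  # illustrative choice
feature_mask = curv > threshold
feature_vertices = mesh.vertices[feature_mask]
```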
Phase unwrapping plays an important role in phase measurement profilometry, as the unwrapping results directly affect the measurement accuracy. The development of deep learning theory has opened a new direction for phase unwrapping algorithms. In this paper, a new neural network model based on an improved generative adversarial network (iGAN) is proposed for phase unwrapping. Compared with traditional methods, it effectively suppresses the influence of noise such as shadows and requires no reference grating information. In addition, it performs phase unwrapping from a single image. The algorithm is verified through structured-light three-dimensional reconstruction on simulated data. The results indicate that the proposed method can successfully unwrap the phase from a single image and suppresses the influence of fringe frequency and shadows well.
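For context, the snippet below illustrates the underlying wrapping problem in one dimension with NumPy: the measured phase is only known modulo 2π, and unwrapping must recover the absolute phase. The synthetic phase profile is an assumption; the iGAN itself operates on two-dimensional wrapped-phase images and is not reproduced here.

```python
# 1D illustration of the phase-wrapping problem the network learns to
# invert (a classical example, not the iGAN itself).
import numpy as np

x = np.linspace(0, 1, 500)
phi_true = 40 * x**2                           # synthetic absolute phase (rad)
phi_wrapped = np.angle(np.exp(1j * phi_true))  # wrapped into (-pi, pi]

# Classical unwrapping integrates the 2*pi jumps; the iGAN instead maps
# the wrapped phase image directly to the absolute phase.
phi_unwrapped = np.unwrap(phi_wrapped)
assert np.allclose(phi_unwrapped, phi_true, atol=1e-6)
```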
Three-dimensional (3D) point cloud segmentation plays an important role in autonomous navigation systems, such as mobile robots and autonomous cars. However, segmentation is challenging because of data sparsity, uneven sampling density, irregular format, and the lack of color texture. In this paper, we propose a sparse 3D point cloud segmentation method based on 2D image feature extraction with deep learning. First, we jointly calibrate the camera and lidar to obtain the external parameters (rotation matrix and translation vector). Then, we apply Convolutional Neural Network (CNN)-based object detectors to generate 2D object region proposals in the RGB image and classify the objects. Finally, using the external parameters from the joint calibration, we extract the points from the 16-line RS-LIDAR-16 scanner that project into each 2D object region, and perform fine segmentation on the extracted point cloud according to prior knowledge of the classification features. Experiments demonstrate the effectiveness of the proposed sparse point cloud segmentation method.
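A minimal sketch of the projection step is given below, assuming a pinhole camera model: lidar points are transformed into the camera frame with the calibrated rotation matrix R and translation vector t, projected with an intrinsic matrix K, and kept if they fall inside a detected 2D box. The function name and all matrices are illustrative placeholders, not calibrated values.

```python
# Sketch of selecting lidar points that fall inside a 2D detection box
# using the extrinsics (R, t) from joint calibration. Pinhole model
# assumed; R, t, K, and the box are placeholders.
import numpy as np

def points_in_box(pts_lidar, R, t, K, box):
    """pts_lidar: (N,3) points; R: (3,3); t: (3,); K: (3,3) intrinsics;
    box: (u_min, v_min, u_max, v_max) from the CNN detector."""
    pts_cam = pts_lidar @ R.T + t          # lidar frame -> camera frame
    in_front = pts_cam[:, 2] > 0           # keep points ahead of the camera
    uvw = pts_cam[in_front] @ K.T          # pinhole projection
    uv = uvw[:, :2] / uvw[:, 2:3]          # perspective divide -> pixels
    u_min, v_min, u_max, v_max = box
    inside = ((uv[:, 0] >= u_min) & (uv[:, 0] <= u_max) &
              (uv[:, 1] >= v_min) & (uv[:, 1] <= v_max))
    return pts_lidar[in_front][inside]     # candidate object points
```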
Calibration, which defines the relationship between the phase and depth data, is an essential part of fringe projection profilometry. In practice, without a telecentric lens, the inherently nonlinear and spatially varying relationship between the absolute phase of the projected fringe and the object surface depth makes calibration problematic for the measurement of small objects. To address this problem, a flexible and simple telecentric three-dimensional measurement system is proposed. Because object size does not change with depth in telecentric imaging, the absolute phase is linear with depth and the calibration process becomes simpler. Experimental results indicate that the standard deviation of the calibration result is within 5 μm in the z coordinate and within 3 μm in the x and y coordinates. Three-dimensional shape reconstruction of a ¥1 coin and measurement of the central circle points of the calibration target further verify the validity of the proposed calibration method.
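As an illustration of why the telecentric property simplifies calibration, the sketch below fits the linear model z = a·φ + b independently at every pixel from a few reference planes at known depths. The data layout and function names are assumptions for illustration, not the authors' code.

```python
# Per-pixel linear phase-to-depth calibration made possible by
# telecentric imaging: z = a*phi + b, fitted from reference planes at
# known depths. Shapes and names are assumptions.
import numpy as np

def fit_linear_calibration(phases, depths):
    """phases: (M, H, W) absolute phase maps of M reference planes;
    depths: (M,) known z positions. Returns per-pixel a and b maps."""
    M = len(depths)
    phi = phases.reshape(M, -1)                     # (M, H*W)
    A = np.stack([phi, np.ones_like(phi)], axis=2)  # (M, H*W, 2)
    z = np.asarray(depths, dtype=float)
    a = np.empty(phi.shape[1])
    b = np.empty(phi.shape[1])
    # Least-squares fit of z = a*phi + b independently at each pixel.
    for i in range(phi.shape[1]):
        (a[i], b[i]), *_ = np.linalg.lstsq(A[:, i, :], z, rcond=None)
    return a.reshape(phases.shape[1:]), b.reshape(phases.shape[1:])

def phase_to_depth(phi_map, a, b):
    # Convert a measured absolute phase map to a depth map.
    return a * phi_map + b
```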