Robust image feature points are a critical component of image matching. To detect feature points that are robust to illumination and viewpoint changes, an improved self-supervised learning framework for feature point detection is proposed. First, the feature point detector is trained on a simple synthetic dataset. Then, a labeled dataset is generated by applying Homographic Adaptation to automatically label the unlabeled images. Finally, the fully convolutional network is trained on the labeled dataset. In this paper, the convolutional neural network in the self-supervised learning framework is improved, mainly by increasing its depth from the original 8 layers to 11 layers. Experiments on the HPatches dataset show that the improved self-supervised feature point detector achieves good results.
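The Homographic Adaptation step described above can be sketched as follows: warp the image under several random homographies, run the detector on each warp, and average the responses back in the reference frame. This is a minimal numpy sketch, not the paper's implementation; the detector, the homography-sampling range, and the nearest-neighbour warping are placeholder assumptions.

```python
import numpy as np

def warp_image(img, H):
    """Nearest-neighbour warp of a grayscale image by homography H (src -> dst)."""
    h, w = img.shape
    Hinv = np.linalg.inv(H)
    ys, xs = np.mgrid[0:h, 0:w]
    dst = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])  # 3 x N homogeneous
    src = Hinv @ dst
    sx = np.round(src[0] / src[2]).astype(int)
    sy = np.round(src[1] / src[2]).astype(int)
    valid = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    out = np.zeros(h * w, dtype=float)
    out[valid] = img.ravel()[sy[valid] * w + sx[valid]]
    return out.reshape(h, w), valid.reshape(h, w)

def homographic_adaptation(image, detector, num_homographies=10, seed=0):
    """Average the detector's response heatmap over random homographic warps."""
    rng = np.random.default_rng(seed)
    h, w = image.shape
    acc = detector(image).astype(float)
    count = np.ones((h, w))
    for _ in range(num_homographies):
        # Small random perspective perturbation around the identity
        # (the perturbation scale here is a placeholder).
        H = np.eye(3) + rng.normal(scale=1e-4, size=(3, 3))
        H[2, 2] = 1.0
        warped, _ = warp_image(image, H)
        heat = detector(warped)
        # Warp the response back to the reference frame and accumulate it.
        back, mask = warp_image(heat, np.linalg.inv(H))
        acc += back
        count += mask
    return acc / count
```

The averaged heatmap then serves as the pseudo-ground-truth label for training the detector on real images.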
In a structured-light-based 3D scanning system, the complete 3D information of the object to be measured cannot be retrieved automatically in a single pass. Current 3D registration algorithms can be divided into auxiliary-object-based methods and feature-point-based methods. The former require extra calibration objects or positioning platforms, which limits their application in free-form 3D scanning tasks. The latter can be conducted automatically; however, most of them try to recover the motion matrix from extracted 2D features, which has been proved inaccurate. This paper proposes an automatic and accurate full-view registration method for a 3D scanning system. Instead of using the 3D information of detected feature points to estimate the coarse motion matrix, the 3D points reconstructed by the scanning system are utilized. First, robust SIFT features are extracted from each image, and corresponding matching point pairs are obtained from two adjacent left images. Second, all 3D point clouds are re-projected onto the image plane of each left camera to obtain the corresponding 2D image points, and correct matches are filtered from the 2D reprojection points under the guidance of the extracted SIFT matches. Then, the covariance method is adopted to estimate the coarse registration matrix between adjacent positions; this procedure is repeated for every pair of adjacent viewing positions of the 3D scanning system. Lastly, a fast ICP algorithm is performed for fine registration of the multi-view point clouds. Experiments conducted on real data verify the effectiveness and accuracy of the proposed method.
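The "covariance method" for coarse registration most likely refers to the classic closed-form rigid alignment from the SVD of the cross-covariance matrix of corresponding 3D points (the Arun/Horn method). A minimal numpy sketch under that assumption, with known point correspondences:

```python
import numpy as np

def rigid_transform(P, Q):
    """Closed-form rigid alignment: find R, t such that Q ~= P @ R.T + t,
    via SVD of the 3x3 cross-covariance matrix of centered point sets.
    P, Q: (N, 3) arrays of corresponding 3D points."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)               # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```

In the pipeline above, P and Q would be the 3D reconstructions of the filtered SIFT match pairs from two adjacent viewing positions; the resulting coarse transform is then refined by fast ICP.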