Robust image feature points are a critical component of image matching. To detect feature points that are robust to illumination and viewpoint changes, an improved self-supervised learning framework for feature point detection is proposed. First, the feature point detector is trained on a simple synthetic dataset. Then, a labeled dataset is generated by applying Homographic Adaptation to automatically label unlabeled images. Finally, a fully convolutional network is trained on the labeled dataset. In this paper, the convolutional neural network in the self-supervised learning framework is improved, mainly by increasing its depth from the original 8 layers to 11 layers. Experiments on the HPatches dataset show that the improved self-supervised feature point detector achieves good results.