6D object pose estimation is essential for many applications with high demands on accuracy and speed. Compared with end-to-end approaches, the pixel-wise voting network (PVNet), a vector-field-based two-stage approach, shows superior accuracy but inferior speed because of its time-consuming RANSAC-based voting strategy. To resolve this problem, we propose an efficient deep architecture, VP-KLNet, which consists of an enhanced vector-field prediction network (VPNet) and a keypoint localization network (KLNet). Specifically, KLNet replaces PVNet's time-consuming voting scheme by directly regressing 2D keypoints from the vector fields, which significantly improves inference speed. Furthermore, to capture multiscale contextual information, we embed a pyramid pooling module between the encoder and decoder of VPNet, yielding more accurate object segmentation and vector-field prediction with negligible speed loss. Experiments demonstrate that our method runs more than 50% faster than the baseline PVNet and achieves accuracy comparable to state-of-the-art methods on the LINEMOD and Occlusion LINEMOD datasets.
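The abstract names a pyramid pooling module inserted between VPNet's encoder and decoder but does not give its configuration. Below is a minimal PyTorch sketch of a standard PSPNet-style pyramid pooling module for illustration; the class name `PyramidPoolingModule`, the bin sizes, and the channel reduction are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PyramidPoolingModule(nn.Module):
    """Pools the encoder feature map at several scales, projects each pooled
    map with a 1x1 conv, upsamples back, and concatenates with the input.
    This is a generic PSPNet-style module, not necessarily VP-KLNet's exact one."""

    def __init__(self, in_channels, bin_sizes=(1, 2, 3, 6)):
        super().__init__()
        branch_channels = in_channels // len(bin_sizes)
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.AdaptiveAvgPool2d(bin_size),                      # pool to bin_size x bin_size
                nn.Conv2d(in_channels, branch_channels, 1, bias=False),
                nn.BatchNorm2d(branch_channels),
                nn.ReLU(inplace=True),
            )
            for bin_size in bin_sizes
        )

    def forward(self, x):
        h, w = x.shape[2:]
        pooled = [
            F.interpolate(branch(x), size=(h, w), mode="bilinear", align_corners=False)
            for branch in self.branches
        ]
        # Output has roughly 2x the input channels: original features + pooled context.
        return torch.cat([x] + pooled, dim=1)
```

Because the multiscale branches operate on heavily pooled maps, the extra computation is small relative to the backbone, which is consistent with the claim of negligible speed loss.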