Estimating three-dimensional (3D) human poses from the positions of two-dimensional (2D) joints has shown promising results. However, using 2D joint coordinates as input discards more information than image-based approaches and introduces ambiguity. To overcome this problem, we combine bone lengths and camera parameters with the 2D joint coordinates as input. This combination is more discriminative than 2D joint coordinates alone: it improves the accuracy of the model's depth predictions and alleviates the ambiguity that arises from projecting 3D coordinates into 2D space. Furthermore, we introduce direction constraints, which better measure the difference between the ground truth and the output of the proposed model. Experimental results on the Human3.6M dataset show that the proposed method outperforms other state-of-the-art 3D human pose estimation approaches. The code is available at: https://github.com/XTU-PR-LAB/ExtraPose/.
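The abstract does not spell out the exact formulation, so the following is only a minimal sketch, assuming a PyTorch setting, of the two ideas it names: concatenating 2D joint coordinates, bone lengths, and camera intrinsics into a single input vector, and a direction constraint expressed as a cosine loss over bone direction vectors. The names `build_input`, `direction_loss`, and `PARENTS`, and the (fx, fy, cx, cy) intrinsics layout, are hypothetical and not taken from the paper.

```python
import torch
import torch.nn.functional as F

# Hypothetical skeleton: parent index of each joint (-1 marks the root).
PARENTS = [-1, 0, 1, 2, 0, 4, 5, 0, 7, 8, 9, 8, 11, 12, 8, 14, 15]

def build_input(joints_2d, bone_lengths, cam_intrinsics):
    """Concatenate 2D joints, bone lengths, and camera parameters
    into one flat input vector per sample.

    joints_2d:      (B, J, 2) pixel coordinates
    bone_lengths:   (B, J-1)  length of each bone
    cam_intrinsics: (B, 4)    assumed layout (fx, fy, cx, cy)
    """
    b = joints_2d.shape[0]
    return torch.cat(
        [joints_2d.reshape(b, -1), bone_lengths, cam_intrinsics], dim=1
    )

def direction_loss(pred_3d, gt_3d, parents=PARENTS, eps=1e-8):
    """Penalize the angular difference between predicted and
    ground-truth bone direction (unit) vectors, averaged over bones.

    pred_3d, gt_3d: (B, J, 3) 3D joint coordinates
    """
    child = torch.tensor([j for j, p in enumerate(parents) if p >= 0])
    parent = torch.tensor([p for p in parents if p >= 0])
    pred_dir = F.normalize(pred_3d[:, child] - pred_3d[:, parent], dim=-1, eps=eps)
    gt_dir = F.normalize(gt_3d[:, child] - gt_3d[:, parent], dim=-1, eps=eps)
    # 1 - cosine similarity vanishes when the directions agree exactly.
    return (1.0 - (pred_dir * gt_dir).sum(dim=-1)).mean()
```

In this reading, the direction term complements a standard per-joint coordinate loss: it is invariant to bone length, so it isolates errors in limb orientation rather than scale.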