Person-related vision tasks face many challenges, such as insufficient dataset size or diversity, and the difficulty of learning discriminative, identity-sensitive, and view-invariant features in the presence of large pose variations. To address these issues, this paper proposes a human posture transfer method based on an improved CycleGAN. The method transfers a given person's posture to a target posture while keeping the person's identity unchanged, thereby augmenting the diversity of datasets. The generator of the network contains a sequence of transfer blocks with similar structures; each transfer block transforms a different part of the body as a local transfer. This avoids learning the complex structure of the global manifold and addresses the large spatial misalignment induced by transformation to the target pose. The discriminator of the network comprises two convolutional neural networks that judge appearance and shape, respectively. In quantitative comparisons with the state of the art, the proposed method generates images with the highest scores on the metrics and yields a performance boost for Re-ID through data augmentation.
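The idea of composing a sequence of similarly structured transfer blocks, each responsible for a local transformation of one body part, can be illustrated with a minimal sketch. This is not the paper's implementation: the band-based body-part partition, the roll-based "transfer", and all function names here are illustrative assumptions standing in for learned convolutional blocks.

```python
import numpy as np

def make_transfer_block(row_slice):
    """Toy 'transfer block': transforms only one horizontal band of the
    image (standing in for one body part) and leaves the rest untouched.
    In the actual method each block would be a learned sub-network."""
    def block(img, pose_shift):
        out = img.copy()
        # Local transfer: shift only this band by a pose-dependent amount,
        # so no single block has to model the global pose manifold.
        out[row_slice] = np.roll(img[row_slice], pose_shift, axis=1)
        return out
    return block

def generator(img, pose_shifts):
    """Chain similarly structured transfer blocks, one per body part,
    so the overall pose change is a composition of local transfers."""
    h = img.shape[0]
    bands = [slice(0, h // 3), slice(h // 3, 2 * h // 3), slice(2 * h // 3, h)]
    x = img
    for band, shift in zip(bands, pose_shifts):
        x = make_transfer_block(band)(x, shift)
    return x

img = np.arange(36, dtype=float).reshape(6, 6)
out = generator(img, pose_shifts=[1, 0, -1])  # move head right, legs left
```

The point of the decomposition is visible even in this toy: each block only handles its own region, so a large global misalignment between source and target pose is broken into several small, local ones.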