Completeness of the replaced face and retention of fine detail are difficult technical problems in face swapping. This paper proposes an efficient and flexible face-swapping model that produces high-quality swaps and improves the output quality of existing networks. The proposed network is built on AEI-Net, which consists of three subnetworks: an identity encoder, a multi-level attribute encoder, and an ADD generator. After training, AEI-Net generates the face-swapped image and recovers abnormal regions in a self-supervised way. In our experiments, we train both our model and the compared face-swapping networks on the same training sets, CelebA and FFHQ, and evaluate them on the same test set. The results show that, compared with the state of the art, our model achieves good performance in the realism of the generated faces and in the degree of detail preserved.
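The data flow through the three AEI-Net subnetworks described above can be sketched as follows. This is a minimal illustrative stand-in, not the paper's implementation: the real subnetworks are deep convolutional networks with multi-level attribute features, while here each is reduced to a single random linear map, and all dimensions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for illustration only.
D_IMG, D_ID, D_ATTR = 64, 16, 32

# Toy linear stand-ins for the three subnetworks.
W_id = rng.standard_normal((D_ID, D_IMG)) * 0.1       # identity encoder
W_attr = rng.standard_normal((D_ATTR, D_IMG)) * 0.1   # attribute encoder (one level here)
W_gen = rng.standard_normal((D_IMG, D_ID + D_ATTR)) * 0.1  # ADD generator

def encode_identity(source_face):
    """Extract an identity embedding from the source face."""
    return W_id @ source_face

def encode_attributes(target_face):
    """Extract attribute features (pose, lighting, background) from the target face."""
    return W_attr @ target_face

def add_generator(z_id, z_attr):
    """Fuse the source identity with the target attributes into the swapped face."""
    return W_gen @ np.concatenate([z_id, z_attr])

source_face = rng.standard_normal(D_IMG)
target_face = rng.standard_normal(D_IMG)

swapped = add_generator(encode_identity(source_face),
                        encode_attributes(target_face))
print(swapped.shape)  # same shape as an input face vector: (64,)
```

The key design point the sketch preserves is that identity comes only from the source face and attributes only from the target face; the generator sees both and produces an image-shaped output.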