Images generated by existing makeup transfer methods suffer from problems such as heavy loss of facial structure and a color distribution inconsistent with the reference image. This paper proposes a makeup transfer method that preserves facial structure information. First, we build an efficient generator by combining the characteristics of U-Net, which fuses up-sampling and down-sampling information to extract image features, with those of SE-Net, which emphasizes useful features and suppresses useless ones. In addition, a loss function is designed to constrain the facial color distribution of the generated image so that it matches the color distribution of the reference image as closely as possible. Experiments show that the proposed method not only captures the color distribution of the reference face, but also better preserves the facial features of the target image, achieving an average SSIM of 0.8740.
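The abstract's generator relies on SE-Net-style channel attention to emphasize useful features and suppress useless ones. The paper's exact architecture is not given here, but the standard squeeze-and-excitation operation it refers to can be sketched as follows; the function name, tensor shapes, and weight layout below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def se_block(features, w1, b1, w2, b2):
    """Squeeze-and-Excitation channel attention (illustrative sketch):
    squeeze spatial dimensions to a per-channel descriptor, pass it
    through a two-layer bottleneck, and rescale each input channel."""
    # features: (C, H, W); w1: (C//r, C); w2: (C, C//r) for reduction ratio r
    squeezed = features.mean(axis=(1, 2))              # squeeze: global average pool -> (C,)
    hidden = np.maximum(0.0, w1 @ squeezed + b1)       # excitation: FC + ReLU
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden + b2)))   # FC + sigmoid -> per-channel weights in (0, 1)
    return features * gate[:, None, None]              # reweight channels by their learned importance

# toy usage: 4 channels, reduction ratio r = 2, random (untrained) weights
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8, 8))
w1, b1 = rng.standard_normal((2, 4)), np.zeros(2)
w2, b2 = rng.standard_normal((4, 2)), np.zeros(4)
y = se_block(x, w1, b1, w2, b2)
print(y.shape)  # same shape as the input feature map: (4, 8, 8)
```

Because the gate is a sigmoid output, each channel is multiplied by a weight strictly between 0 and 1, which is what lets the network attenuate less informative channels while keeping the feature map's shape unchanged.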