Existing style transfer methods fail to preserve the texture structure of style images under aesthetic guidance, resulting in the loss of substantial texture detail and degraded visual quality. To address this, we propose a style transfer model based on multi-adaptive generative adversarial networks (MA-GAN). Specifically, the aesthetic ability learned by the discriminator is used for feature extraction, and the resulting, more generalized features are passed into a multi-attention aesthetic module comprising a collaborative self-attention (CSA) module and a self-attention normalization (SAN) module. The CSA module computes the correlation between entangled style features and aesthetic features, capturing texture details and the geometric structure of the style. The SAN module balances the semantic structure of the content with the geometric structure of the style, integrating the style pattern harmoniously into the content image and thereby achieving more effective style transfer. Extensive qualitative and quantitative experiments demonstrate the superiority of MA-GAN in visual quality, enabling the synthesis of art images with smooth brushstrokes and rich colors.
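The abstract does not give implementation details, but the CSA module's role of correlating style features with aesthetic features maps naturally onto a cross-attention formulation. The following is a minimal PyTorch sketch under that assumption; the class name CollaborativeSelfAttention and the inputs style_feat and aes_feat are hypothetical stand-ins, not the authors' code.

```python
import torch
import torch.nn as nn

class CollaborativeSelfAttention(nn.Module):
    """Sketch of a CSA-style block: queries come from style features,
    keys/values from aesthetic features, so the attention map encodes
    the correlation between the two feature sets."""

    def __init__(self, channels: int):
        super().__init__()
        self.q = nn.Conv2d(channels, channels, 1)    # style -> queries
        self.k = nn.Conv2d(channels, channels, 1)    # aesthetic -> keys
        self.v = nn.Conv2d(channels, channels, 1)    # aesthetic -> values
        self.out = nn.Conv2d(channels, channels, 1)  # fuse attended context

    def forward(self, style_feat: torch.Tensor, aes_feat: torch.Tensor) -> torch.Tensor:
        b, c, h, w = style_feat.shape
        q = self.q(style_feat).flatten(2).transpose(1, 2)  # (b, hw, c)
        k = self.k(aes_feat).flatten(2)                    # (b, c, hw)
        v = self.v(aes_feat).flatten(2).transpose(1, 2)    # (b, hw, c)
        attn = torch.softmax(q @ k / c ** 0.5, dim=-1)     # (b, hw, hw) correlation map
        ctx = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return style_feat + self.out(ctx)                  # residual connection
```

A SAN-style block would plausibly follow a similar pattern, using the attention output to modulate normalization statistics so that the style's geometric structure is balanced against the content's semantic structure, but the paper's exact design is not specified here.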
Keywords: Color, Semantics, Feature extraction, Image quality, Education and training, Visualization, Data modeling