In the field of computer vision, style transfer synthesizes the style of one image with the content features of another to generate a new stylized image, thereby creating a unique form of graphic art. In this thesis, image style transfer based on the VGG19 network model and on a pre-trained Pix2Pix network model was investigated by comparison, with the COCO dataset selected for the experiments. The peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM) were used to evaluate the quality of the style-transferred images. The transferred image produced by the VGG19 network model achieves a PSNR of 12.6776 and an SSIM of 0.2981, while that produced by the Pix2Pix network model achieves a PSNR of 12.9153 and an SSIM of 0.3182. These results indicate that the transferred image generated by the Pix2Pix network model is superior to that generated by VGG19 in terms of image quality.
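The two evaluation metrics mentioned above can be sketched in a few lines of NumPy. This is a minimal illustration, not the evaluation code used in the thesis: the function names are assumptions, and the SSIM here uses global image statistics rather than the sliding 11x11 Gaussian window of the standard SSIM definition, so its values will differ slightly from library implementations.

```python
import numpy as np

def psnr(x, y, data_range=255.0):
    # Peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE).
    mse = np.mean((x.astype(np.float64) - y.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)

def ssim_global(x, y, data_range=255.0):
    # Simplified SSIM computed over the whole image (no sliding window).
    # Constants C1, C2 follow the usual choices K1=0.01, K2=0.03.
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    )
```

For both metrics, higher values mean the transferred image is closer to the reference: SSIM is bounded by 1 (reached only for identical images), while PSNR grows without bound as the mean squared error shrinks.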