Deep learning-based image inpainting has made significant progress in recent years. However, many existing methods account for neither the plausibility of the structure nor the fineness of the texture, so the repaired image suffers from fragmented structure or excessive smoothing. To address this problem, we propose a two-stage image inpainting model composed of a structure generation network and a texture generation network. The structure generation network focuses on the structure and color domain: it uses the damaged structure map extracted from the masked image to plausibly fill the masked region and generate a complete structure map. The texture generation network then uses the repaired structure map to guide the refinement process. We train the two-stage network on the public datasets Places2, CelebA, and Paris StreetView, and the experimental results show the superiority of our method over previous methods.
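The two-stage data flow described above can be sketched as follows. Both network bodies here are hypothetical placeholders (simple mean-fill and copy operations), not the paper's trained CNNs; the sketch only illustrates how the structure stage feeds and guides the texture stage.

```python
import numpy as np

def structure_net(structure_map, mask):
    # Placeholder for the structure generation network: fill masked
    # entries of the structure/color map with the mean of the known
    # region. A real model would be a trained generative CNN.
    out = structure_map.copy()
    out[mask == 1] = structure_map[mask == 0].mean()
    return out

def texture_net(image, completed_structure, mask):
    # Placeholder for the texture generation network: copy the
    # completed structure into the hole. A real model would refine
    # fine textures under the guidance of the structure map.
    out = image.copy()
    out[mask == 1] = completed_structure[mask == 1]
    return out

def inpaint(image, structure_map, mask):
    # Two-stage pipeline: complete the structure first, then use it
    # to guide texture refinement in the masked region.
    completed_structure = structure_net(structure_map, mask)
    return texture_net(image, completed_structure, mask)

# Toy example: 4x4 grayscale image with a 2x2 hole.
img = np.arange(16, dtype=float).reshape(4, 4)
struct = img.copy()     # pretend the structure map equals the image
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1      # masked (damaged) region
result = inpaint(img, struct, mask)
```

The design point the sketch preserves is the ordering: the hole is never filled directly from pixels; it is first completed in the structure domain, and the texture stage only refines within that completed structure.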