In this paper, we propose a deep-learning method for reconstructing rough patterns in a texture image while retaining its fine texture information. The method is based on pix2pix, a deep neural network that learns the mapping between input and output images. A previous method built on pix2pix took two inputs: the original texture image to be edited and the editing pattern image to be reflected in it. These two images alone, however, do not accurately reproduce the colors of the original texture. We therefore extend the previous method by additionally inputting a pattern image that links the original texture image to the editing pattern image, which improves the reproducibility of the pattern. The effectiveness of the improved method was verified through several experiments. Future work includes improving the quality of the generated images to reduce blur and to reproduce the input texture information more accurately.
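As a rough illustration of the three-input conditioning described above, the conditioning images can simply be stacked along the channel axis before being fed to a pix2pix-style generator. This is a minimal sketch under that assumption; the function and argument names are illustrative and not taken from the paper:

```python
import numpy as np

def make_generator_input(original, edit_pattern, link_pattern):
    """Stack the three conditioning images along the channel axis.

    Each argument is an H x W x 3 float array (original texture,
    editing pattern, and the linking pattern image); the result is
    the H x W x 9 tensor a pix2pix-style generator would consume.
    """
    for img in (edit_pattern, link_pattern):
        assert img.shape == original.shape, "all inputs must share H x W x C"
    return np.concatenate([original, edit_pattern, link_pattern], axis=-1)

# Example with random 64x64 RGB images:
h = w = 64
x = make_generator_input(np.random.rand(h, w, 3),
                         np.random.rand(h, w, 3),
                         np.random.rand(h, w, 3))
# x has shape (64, 64, 9)
```

Channel concatenation is the standard way pix2pix conditions its U-Net generator on input images, so adding a third conditioning image only widens the first convolution.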
This paper proposes a method for detecting the main persons in a snapshot. The method uses feature maps from Mask R-CNN [1] and depth information from a depth estimation model [2] to detect the main persons in a photo. Our goal is to construct a deep neural network that estimates a main-person degree map from these two inputs. Based on the estimated importance map, we applied post-processing and confirmed that only the important persons are extracted.
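The post-processing step mentioned above could, for example, keep only those person instances whose region scores highly in the estimated importance map. The following is a hedged sketch of that idea; the threshold, function name, and mean-score rule are assumptions for illustration, not details from the paper:

```python
import numpy as np

def extract_main_persons(importance_map, instance_masks, threshold=0.5):
    """Keep instance masks whose mean importance exceeds a threshold.

    importance_map: H x W array in [0, 1], the estimated
        main-person degree map.
    instance_masks: list of boolean H x W person masks
        (e.g. produced by Mask R-CNN).
    Returns the subset of masks judged to be main persons.
    """
    kept = []
    for mask in instance_masks:
        # Average the importance values over the person's pixels.
        if mask.any() and importance_map[mask].mean() > threshold:
            kept.append(mask)
    return kept

# Toy example: one person in a high-importance region, one outside it.
imp = np.zeros((4, 4))
imp[:2, :] = 0.9                      # top half is "important"
main = np.zeros((4, 4), dtype=bool); main[:2, :2] = True
extra = np.zeros((4, 4), dtype=bool); extra[2:, 2:] = True
result = extract_main_persons(imp, [main, extra])
# result contains only the mask overlapping the important region
```

Averaging the importance map over each instance mask is one simple fusion rule; per-pixel weighting or a learned classifier over the same inputs would be natural alternatives.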
This paper presents a method for reconstructing texture patterns using deep learning. The method is based on pix2pix, a generative adversarial network (GAN) that learns the conversion between input and output images. It extends pix2pix by adding constraints to the network so that the underlying image pattern is changed while the fine texture of the input is retained. Using texture images with underlying patterns and fine textures as test data, we verified the effectiveness of our modification through several computational experiments. Although the generated images preserve the input color and edge information, they are blurred, and in some cases the input texture information cannot be reproduced. Addressing these problems is a direction for future research.