In this study, we propose an efficient city-generation method based on user sketches. The proposed framework combines conditional generative adversarial networks (cGANs) with procedural modeling, which we call the Neurosymbolic Model. Because cGAN training requires a dataset of linked input-output pairs, we first generate buildings of random height using Perlin noise, and then extract the building contours by morphological transformation. For training, we use pairs of height maps created from the city data and sketches extracted by morphological transformation, allowing users to generate diverse and satisfying cities from freehand sketches.
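The abstract leaves the pipeline details open; the following is a minimal, hypothetical sketch of how one such training pair might be built, with simple fractal value noise standing in for Perlin noise and OpenCV's morphological gradient standing in for the contour-extraction step. All names are illustrative.

import numpy as np
import cv2

def value_noise(size=256, octaves=4, seed=0):
    """Fractal value noise: a simple stand-in for Perlin noise."""
    rng = np.random.default_rng(seed)
    out = np.zeros((size, size), np.float32)
    for o in range(octaves):
        cells = 2 ** (o + 2)  # finer grid, halved amplitude each octave
        grid = rng.random((cells, cells)).astype(np.float32)
        out += cv2.resize(grid, (size, size),
                          interpolation=cv2.INTER_CUBIC) / 2 ** o
    return (out - out.min()) / (np.ptp(out) + 1e-8)

# Height map with random building heights, thresholded into footprints.
height_map = (value_noise() * 255).astype(np.uint8)
footprint = cv2.threshold(height_map, 128, 255, cv2.THRESH_BINARY)[1]

# Morphological gradient (dilation minus erosion) extracts building
# contours, giving the sketch half of each (sketch, height map) pair.
kernel = np.ones((3, 3), np.uint8)
sketch = cv2.morphologyEx(footprint, cv2.MORPH_GRADIENT, kernel)

In the actual system, such (sketch, height map) pairs would supervise the cGAN that maps freehand contours back to city height maps.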
KEYWORDS: 3D modeling, 3D scanning, Data modeling, Augmented reality, Telecommunications, Light sources, Solid modeling, Cameras, Prototyping, Projection systems
A shadow implicitly conveys the presence of a human or object in many interaction and media-art applications. However, it is challenging to generate a natural artificial shadow in spatial augmented reality, where conventional approaches ignore object dynamics and automatic control. In this work, we propose an interactive shadow generation system that creates a user's interactive shadow with a projector-camera system. With the offline processes of human mesh modeling and virtual-environment registration, the proposed system rigs a 3D model created by scanning the user and uses it to generate the shadow. Finally, the generated shadow is projected into the real environment. We verify the usability of the proposed system and the impression of the generated shadow through a user study.
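The abstract does not specify how the shadow shape is derived from the rigged mesh; one textbook approach, shown here purely as an illustration, is planar shadow projection: each mesh vertex is projected from a virtual light onto the ground plane, and the resulting 2D silhouette becomes the mask sent to the projector.

import numpy as np

def project_to_ground(vertices, light):
    """Project 3D vertices onto the plane y = 0 along rays from a point light."""
    v, l = np.asarray(vertices, float), np.asarray(light, float)
    t = l[1] / (l[1] - v[:, 1] + 1e-9)   # ray parameter where y reaches 0
    shadow = l + t[:, None] * (v - l)    # intersection points on the ground
    return shadow[:, [0, 2]]             # keep (x, z) for the 2D shadow mask

light = np.array([0.0, 3.0, 0.0])        # virtual light above the user
verts = np.array([[ 0.2, 1.7, 0.0],      # toy stand-in for the scanned,
                  [-0.2, 1.7, 0.0],      # rigged human mesh
                  [ 0.0, 0.1, 0.1]])
print(project_to_ground(verts, light))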
KEYWORDS: 3D modeling, Animal model studies, Data modeling, 3D image processing, Image retrieval, 3D displays, Human-machine interfaces, 3D vision, Image processing, Switches
In this work, we propose an interactive drawing guidance interface with 3D animal model retrieval, which aims to help ordinary users draw 2D animal sketches by exploring the desired animal models in a pre-collected dataset. We first construct an animal model dataset and generate line-drawing images of the 3D models from different viewpoints. Then, we develop the drawing interface, which presents the retrieved models by matching freehand sketch inputs with the line-drawing images. We utilize a state-of-the-art sketch-based image retrieval algorithm for sketch matching, which describes the appearance and relative positions of multiple objects by measuring compositional similarity. The proposed system can accurately retrieve similar partial images and provide blended shadow guidance underlying the user’s strokes to guide the drawing process. Our user study verified that the proposed interface can improve the quality of users’ animal sketches.
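The paper's matcher measures compositional similarity; the sketch below replaces it with a much simpler chamfer distance, just to make the retrieval loop concrete. Function names and the drawing set are hypothetical.

import numpy as np
import cv2

def chamfer_score(sketch_edges, drawing_edges):
    """Lower is better: mean distance from sketch pixels to drawing lines."""
    # Distance of every pixel to the nearest edge pixel of the line drawing.
    dist = cv2.distanceTransform(255 - drawing_edges, cv2.DIST_L2, 3)
    ys, xs = np.nonzero(sketch_edges)
    return dist[ys, xs].mean() if len(ys) else np.inf

def retrieve(sketch_edges, drawings, k=3):
    """Return indices of the k line-drawing views closest to the sketch."""
    scores = [chamfer_score(sketch_edges, d) for d in drawings]
    return np.argsort(scores)[:k]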
The normal map is an important and efficient way to represent complex 3D models, and designers can benefit from the automatic generation of high-quality, accurate normal maps from freehand sketches in 3D content creation. This paper proposes a deep generative model for generating normal maps from users' sketches with geometric sampling. Our generative model is based on a conditional generative adversarial network with curvature-sensitive point sampling of the conditional masks. This sampling process helps eliminate the ambiguity of the network input and thus of the generated results. We verify that the proposed framework can generate more accurate normal maps.
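As an illustration of curvature-sensitive point sampling (the exact procedure is not given in the abstract), the hypothetical sketch below draws points on a mask contour with probability proportional to the local turning angle, concentrating samples where the geometry bends.

import numpy as np
import cv2

def curvature_sample(mask, n_points=64, seed=0):
    """Sample contour points of a binary mask, biased toward high curvature."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    pts = max(contours, key=len).reshape(-1, 2).astype(float)
    # Discrete curvature: absolute turning angle at each contour point.
    prev, nxt = np.roll(pts, 1, axis=0), np.roll(pts, -1, axis=0)
    a, b = pts - prev, nxt - pts
    cross = a[:, 0] * b[:, 1] - a[:, 1] * b[:, 0]
    ang = np.abs(np.arctan2(cross, (a * b).sum(1)))
    # Probability proportional to turning angle, with a small floor so
    # straight segments still receive occasional samples.
    prob = (ang + 1e-3) / (ang + 1e-3).sum()
    idx = np.random.default_rng(seed).choice(len(pts), n_points, p=prob)
    return pts[idx]  # points that would condition the cGAN generator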
Rapid progress has been made in both augmented and virtual reality technologies. However, it is still challenging to seamlessly connect the virtual world with the real one, for example by locating virtual three-dimensional models in the real environment and directly interacting with them. In this study, we propose a wearable augmented reality system with a head-mounted device that projects anamorphic images. The proposed system tracks the user's head movements and projects the designated scene into real space in real time. To achieve this, the system performs three steps: room scaling, blur correction of the projected content, and calibration using dynamic mesh generation. We evaluated the proposed system through interaction with virtual content and characters; the interaction with the virtual character was rated highly. The system can be used in a wide range of daily-life and entertainment applications, such as relieving loneliness and serving as a museum guide.
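A minimal sketch of the anamorphic-projection idea, under the assumption that a per-frame homography suffices: the content is warped so that, seen from the tracked head position, it appears undistorted on the surface. The corner correspondences here are toy values, not a real calibration.

import numpy as np
import cv2

content = np.zeros((480, 640, 3), np.uint8)   # scene to display

# Where the image corners should land in projector space; in practice
# these would come from head tracking plus the dynamic room mesh.
src = np.float32([[0, 0], [640, 0], [640, 480], [0, 480]])
dst = np.float32([[40, 60], [600, 20], [630, 470], [10, 420]])

H = cv2.getPerspectiveTransform(src, dst)     # homography for this frame
warped = cv2.warpPerspective(content, H, (640, 480))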
Audiovisual feedback is one of the most essential elements of video games. Although most feedback does not affect game mechanics, it is believed to influence a player’s experience. One concept in audiovisual feedback is “juiciness,” or providing excessive positive feedback. A few empirical studies have analyzed how juiciness affects players’ performance and behavior, yet there is little work on how the kind of feedback and its timing affect player behavior and performance in video games. This paper focuses on the audiovisual feedback given to players when they fail in a game. If players experience many failures during gameplay, their behavior may change depending on the audiovisual feedback they receive at the time of failure. We hypothesized that if the feedback on failure was “feel-good,” a player would be motivated to replay the game and try to improve performance. In the first phase of the experiment, five different feedback patterns were prepared, and their impressions were quantified using the semantic differential method. An A/B test was then conducted using a simple web action game: players in each of five groups were presented with a different audiovisual feedback pattern when they failed. Multiple-comparison analysis revealed no significant differences in spontaneous replay behavior. The test did, however, show differences in players’ average scores, suggesting that audiovisual feedback on failure can affect player performance.
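To make the analysis step concrete, here is a toy sketch that compares mean scores across five feedback groups with a one-way ANOVA; the abstract does not state which multiple-comparison procedure was used, and the score samples below are synthetic.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical score samples for the five audiovisual feedback patterns.
groups = [rng.normal(loc=mu, scale=10, size=40)
          for mu in (50, 52, 55, 51, 58)]

f_stat, p_value = stats.f_oneway(*groups)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")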
A method is given for synthesizing a texture using the interface of a conventional drawing tool. Most conventional texture generation methods are based on the procedural approach and can generate a variety of textures adequate for producing a realistic image. However, it is hard for a user to imagine what kind of texture will be generated simply by looking at its parameters, and it is difficult to design a new texture freely without knowledge of all the procedures for texture generation. Our method addresses these problems and has four merits. First, a variety of textures can be obtained by combining a set of feature lines and attribute functions. Second, data definitions are flexible. Third, the user can preview a texture together with its feature lines. Fourth, users can design their own textures interactively and freely with the interface of a conventional drawing tool. For users who want to build this texture generation method into their own programs, we also give the language specifications for generating a texture. This method can interactively provide a variety of textures and can also be used for typographic design.
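A small sketch of the feature-line idea (the paper's own attribute functions and language are not reproduced here): an attribute function maps each pixel's distance to the nearest feature line to an intensity, so editing the lines in a drawing tool directly reshapes the synthesized texture. The ripple function below is an invented example.

import numpy as np
import cv2

canvas = np.zeros((256, 256), np.uint8)
cv2.line(canvas, (30, 200), (220, 40), 255, 1)   # a user-drawn feature line

# Distance from every pixel to the nearest feature-line pixel.
dist = cv2.distanceTransform(255 - canvas, cv2.DIST_L2, 3)

def attribute(d):
    """Example attribute function: damped ripples around the feature line."""
    return 0.5 + 0.5 * np.cos(d / 4.0) * np.exp(-d / 80.0)

texture = (attribute(dist) * 255).astype(np.uint8)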