Fabric is one of the most common materials in our everyday lives, and accurately simulating the appearance of cloth is an important problem in graphics, design, and virtual prototyping. Modeling and rendering fabric is challenging because its hierarchical structure largely determines its visual appearance: cloth is made of fibers that are twisted into yarns, which are in turn woven into patterns. Light interacting with this complex structure produces the characteristic appearance that humans recognize as silk, cotton, or wool.
In this paper we present an end-to-end pipeline for modeling and rendering fabrics: a new way to acquire volumetric models of fabric at micron resolution using CT imaging coupled with photographs; a technique to synthesize models of user-specified designs from such CT scans; and an efficient algorithm to render these complex volumetric models in practical applications. This pipeline produces the most realistic images of virtual cloth to date and opens the way to bridging the gap between real and virtual fabric appearance.
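As a rough illustration of the kind of data such a pipeline works with, the sketch below (not the authors' renderer) marches a ray through a 3D density grid of the sort a micro-CT scan yields and accumulates Beer-Lambert attenuation. The grid contents, extinction scale, and step size are all illustrative assumptions.

```python
# Minimal, assumed sketch: absorption-only ray marching through a volumetric
# density grid, the kind of model a micro-CT scan of fabric can produce.
import numpy as np

def transmittance_along_ray(density, origin, direction, step=0.5, sigma_t=0.1):
    """Accumulate optical depth through a 3D density grid and return transmittance."""
    pos = np.asarray(origin, dtype=float)
    direction = np.asarray(direction, dtype=float)
    direction /= np.linalg.norm(direction)
    optical_depth = 0.0
    for _ in range(1000):                           # march a bounded number of steps
        idx = np.floor(pos).astype(int)
        if np.any(idx < 0) or np.any(idx >= density.shape):
            break                                   # ray has left the volume
        optical_depth += sigma_t * density[tuple(idx)] * step
        pos += direction * step
    return np.exp(-optical_depth)                   # Beer-Lambert attenuation

# Toy usage: a random stand-in density volume, one ray through its centre.
rng = np.random.default_rng(0)
vol = rng.random((64, 64, 64))
print(transmittance_along_ray(vol, origin=[0.0, 32.0, 32.0], direction=[1.0, 0.0, 0.0]))
```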
In computer graphics, rendering algorithms are used to simulate the appearance of objects and materials in a wide range of applications. Designers and manufacturers rely on these rendered images to previsualize scenes and products before manufacturing them. They need to distinguish between types of fabrics, paint finishes, plastics, and metals that often differ only subtly, for example silk versus nylon, or Formica versus wood. These applications therefore need predictive algorithms that produce high-fidelity images capable of supporting such subtle material discrimination.
KEYWORDS: Visualization, Computer graphics, Light sources and illumination, Visual compression, Algorithm development, Human vision and color perception, Photography, Visual system, 3D modeling, Human-machine interfaces
How do human observers perceive visual complexity in images? This problem is especially relevant for computer graphics,
where a better understanding of visual complexity can aid in the development of more advanced rendering algorithms. In
this paper, we describe a study of the dimensionality of visual complexity in computer graphics scenes. We conducted
an experiment where subjects judged the relative complexity of 21 high-resolution scenes, rendered with photorealistic
methods. Scenes were gathered from web archives and varied in theme, number and layout of objects, material properties,
and lighting.
We analyzed the pooled subject responses using multidimensional scaling (MDS). This analysis embedded the stimulus images in a two-dimensional space, with axes that roughly corresponded to "numerosity" and "material / lighting complexity". In a follow-up analysis, we derived a one-dimensional complexity ordering of the stimulus images.
We compared this ordering with several computable complexity metrics, such as scene polygon count and JPEG-compressed file size, and found that none of them correlated strongly with the perceptual ordering. Understanding the differences between these measures can inform the design of more efficient rendering algorithms in computer graphics.
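One plausible way to run such a comparison, sketched here with placeholder values rather than the study's measurements, is a Spearman rank correlation between the perceptual complexity ordering and each computable metric.

```python
# Assumed illustration: rank correlation between a perceptual complexity
# ordering and a computable metric such as JPEG-compressed file size.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
perceptual_rank = np.arange(21)                            # 1D ordering from the MDS analysis
jpeg_size_bytes = rng.integers(50_000, 500_000, size=21)   # stand-in metric values

rho, p_value = spearmanr(perceptual_rank, jpeg_size_bytes)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```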