“Graphics is a broad field; to understand it, you need information from perception, physics, mathematics, and engineering. Building a graphics application entails user-interface work, some amount of modeling (i.e., making a representation of a shape), and rendering (the making of pictures of shapes). Rendering is often done via a “pipeline” of operations; one can use this pipeline without understanding every detail to make many useful programs. But if we want to render things accurately, we need to start from a physical understanding of light. Knowing just a few properties of light prepares us to make a first approximate renderer.”
Recent research efforts in image synthesis have been directed toward the rendering of believable and predictable images of biological materials. This course addresses an important topic in this area, namely the predictive simulation of skin's appearance. The modeling approaches, algorithms, and data examined during this course can also be applied to the rendering of other organic materials such as hair and ocular tissues.
This paper introduces a shading model for light diffusion in multi-layered translucent materials. Previous work on diffusion in translucent materials has assumed smooth, semi-infinite, homogeneous materials and solved for the scattering of light using a dipole diffusion approximation. This approximation breaks down in the case of thin translucent slabs and multi-layered materials. We present a new efficient technique based on multiple dipoles to account for diffusion in thin slabs. We enhance this multipole theory to account for mismatching indices of refraction at the top and bottom of translucent slabs, and to model the effects of rough surfaces. To model multiple layers, we extend this single-slab theory by convolving the diffusion profiles of the individual slabs. We account for multiple scattering between slabs by using a variant of Kubelka-Munk theory in frequency space. Our results demonstrate diffusion of light in thin slabs and multi-layered materials such as paint, paper, and human skin.
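For context, the classical dipole profile that this multipole model generalizes has a closed form. Below is a minimal sketch of that base case, following the standard formulation for a smooth, semi-infinite homogeneous medium; the function name and the default index of refraction are illustrative choices, not code from the paper.

```python
import numpy as np

def dipole_profile(r, sigma_a, sigma_s_prime, eta=1.4):
    """Classical dipole diffusion profile R_d(r) for a smooth,
    semi-infinite homogeneous medium (the base case the multipole
    model generalizes). r: radial distance from the illumination
    point, in the same units as 1/sigma."""
    sigma_t_prime = sigma_a + sigma_s_prime            # reduced extinction
    alpha_prime = sigma_s_prime / sigma_t_prime        # reduced albedo
    sigma_tr = np.sqrt(3.0 * sigma_a * sigma_t_prime)  # effective transport coefficient
    # internal diffuse reflectance fit and boundary factor A
    F_dr = -1.440 / eta**2 + 0.710 / eta + 0.668 + 0.0636 * eta
    A = (1.0 + F_dr) / (1.0 - F_dr)
    z_r = 1.0 / sigma_t_prime            # depth of the real source
    z_v = z_r * (1.0 + 4.0 / 3.0 * A)    # height of the mirrored virtual source
    d_r = np.sqrt(r**2 + z_r**2)
    d_v = np.sqrt(r**2 + z_v**2)
    return (alpha_prime / (4.0 * np.pi)) * (
        z_r * (sigma_tr * d_r + 1.0) * np.exp(-sigma_tr * d_r) / d_r**3
        + z_v * (sigma_tr * d_v + 1.0) * np.exp(-sigma_tr * d_v) / d_v**3
    )
```

The multipole extension described in the abstract replaces this single real/virtual source pair with a sum over mirrored source pairs offset by multiples of the slab thickness; the sketch covers only the semi-infinite base case.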
We have developed an analytical spectral shading model for human skin. Our model accounts for both subsurface and surface scattering. To simulate the interaction of light with human skin, we have narrowed the number of necessary parameters down to just four, controlling the amount of oil, melanin, and hemoglobin, which makes it possible to match specific skin types. Using these physically-based parameters we generate custom spectral diffusion profiles for a two-layer skin model that account for subsurface scattering within the skin. We use the diffusion profiles in combination with a Torrance-Sparrow model for surface scattering to simulate the reflectance of the specific skin type.
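The Torrance-Sparrow term named here is a standard microfacet BRDF. A minimal sketch follows, assuming a Beckmann distribution, the Cook-Torrance V-cavity shadowing term, and Schlick's Fresnel approximation; the roughness and Fresnel defaults are illustrative stand-ins for the paper's oiliness parameter, not values from the paper.

```python
import numpy as np

def torrance_sparrow(n, l, v, m=0.3, f0=0.028):
    """Minimal Torrance-Sparrow specular BRDF, f = D*G*F / (4 (n.l)(n.v)).
    n, l, v: unit normal, light, and view directions.
    m: Beckmann roughness (stand-in for an oiliness control).
    f0: normal-incidence Fresnel reflectance (~0.028 for eta = 1.4)."""
    h = l + v
    h = h / np.linalg.norm(h)                 # half vector
    nl, nv, nh, vh = n @ l, n @ v, n @ h, v @ h
    if nl <= 0.0 or nv <= 0.0:
        return 0.0
    # Beckmann microfacet distribution
    cos2 = nh * nh
    tan2 = (1.0 - cos2) / cos2
    D = np.exp(-tan2 / (m * m)) / (np.pi * m * m * cos2 * cos2)
    # V-cavity geometric attenuation (Cook-Torrance form)
    G = min(1.0, 2.0 * nh * nv / vh, 2.0 * nh * nl / vh)
    # Schlick approximation to the Fresnel term
    F = f0 + (1.0 - f0) * (1.0 - vh) ** 5
    return D * G * F / (4.0 * nl * nv)
```

In the paper's model this specular term is layered on top of the spectral diffusion profiles; here it stands alone for illustration.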
Most face relighting methods are able to handle diffuse shadows, but struggle to handle hard shadows, such as those cast by the nose. Methods that propose techniques for handling hard shadows often do not produce geometrically consistent shadows since they do not directly leverage the estimated face geometry while synthesizing them. We propose a novel differentiable algorithm for synthesizing hard shadows based on ray tracing, which we incorporate into training our face relighting model. Our proposed algorithm directly utilizes the estimated face geometry to synthesize geometrically consistent hard shadows. We demonstrate through quantitative and qualitative experiments on Multi-PIE and FFHQ that our method produces more geometrically consistent shadows than previous face relighting methods while also achieving state-of-the-art face relighting performance under directional lighting. In addition, we demonstrate that our differentiable hard shadow modeling improves the quality of the estimated face geometry over diffuse shading models.
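The abstract does not spell out the algorithm, so the following is only a sketch of the general idea of differentiable ray-traced hard shadows, assuming a height-map representation of the face and a directional light with a positive vertical component; the sigmoid sharpness and all names are hypothetical. It is written with NumPy for brevity; since every operation is smooth in the height values, the same logic ported to an autodiff framework yields gradients.

```python
import numpy as np

def soft_shadow_mask(height, light_dir, n_steps=32, sharpness=50.0):
    """Differentiable hard-shadow mask for a height-map surface under a
    directional light. height: (H, W) surface heights; light_dir: unit
    (x, y, z) vector with z > 0 pointing toward the light. Returns an
    (H, W) visibility mask in [0, 1] (1 = lit). Nearest-neighbor sampling
    is used for brevity; a real implementation would sample bilinearly."""
    H, W = height.shape
    ys, xs = np.mgrid[0:H, 0:W].astype(np.float64)
    vis = np.ones((H, W))
    for t in range(1, n_steps + 1):
        # position along the shadow ray after t unit steps (pixel units)
        x = xs + t * light_dir[0]
        y = ys + t * light_dir[1]
        z = height + t * light_dir[2]
        xi = np.clip(np.round(x).astype(int), 0, W - 1)
        yi = np.clip(np.round(y).astype(int), 0, H - 1)
        occluder = height[yi, xi]
        # smooth occlusion test: a sigmoid in place of a hard step keeps
        # the mask differentiable with respect to the height values
        blocked = 1.0 / (1.0 + np.exp(-sharpness * (occluder - z)))
        vis = vis * (1.0 - blocked)
    return vis
```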
Related works covered: face relighting; differentiable rendering and ray tracing.
We present an efficient method for joint optimization of topology, materials, and lighting from multi-view image observations. Unlike recent multi-view reconstruction approaches, which typically produce entangled 3D representations encoded in neural networks, we output triangle meshes with spatially varying materials and environment lighting that can be deployed in any traditional graphics engine unmodified. We leverage recent work in differentiable rendering, coordinate-based networks to compactly represent volumetric texturing, and differentiable marching tetrahedra to enable gradient-based optimization directly on the surface mesh. Finally, we introduce a differentiable formulation of the split sum approximation of environment lighting to efficiently recover all-frequency lighting. Experiments show our extracted models being used in advanced scene editing, material decomposition, and high-quality view interpolation, all running at interactive rates in triangle-based renderers (rasterizers and path tracers). Project website: https://nvlabs.github.io/nvdiffrec/
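The split sum approximation mentioned here factors the specular lighting integral E[L * f * cos(theta) / pdf] into E[L] * E[f * cos(theta) / pdf]. The sketch below illustrates that factorization with GGX importance sampling; it is the classic non-differentiable Monte Carlo form (the paper's contribution is a differentiable formulation), and the sampling routine and weights follow common real-time image-based-lighting conventions rather than the paper's code.

```python
import numpy as np

def ggx_sample_half_vector(n, roughness, rng):
    """Importance-sample a half vector from the GGX distribution around n."""
    a = roughness * roughness
    u1, u2 = rng.random(), rng.random()
    phi = 2.0 * np.pi * u1
    cos_t = np.sqrt((1.0 - u2) / (1.0 + (a * a - 1.0) * u2))
    sin_t = np.sqrt(max(1.0 - cos_t * cos_t, 0.0))
    h_local = np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])
    # orthonormal basis around n, then transform h into world space
    up = np.array([0.0, 0.0, 1.0]) if abs(n[2]) < 0.999 else np.array([1.0, 0.0, 0.0])
    tx = np.cross(up, n); tx /= np.linalg.norm(tx)
    ty = np.cross(n, tx)
    return h_local[0] * tx + h_local[1] * ty + h_local[2] * n

def smith_g(n, v, l, roughness):
    """Smith shadowing with the Schlick-GGX k = roughness^2 / 2 used for IBL."""
    k = roughness * roughness / 2.0
    g1 = lambda w: max(n @ w, 1e-6) / (max(n @ w, 1e-6) * (1.0 - k) + k)
    return g1(v) * g1(l)

def split_sum_specular(env_radiance, n, v, roughness, n_samples=256, seed=0):
    """Illustrates the split sum: E[L * f * cos / pdf] ~ E[L] * E[f * cos / pdf].
    env_radiance: callable direction -> RGB radiance (stand-in for an env map)."""
    rng = np.random.default_rng(seed)
    light_sum, brdf_sum, count = np.zeros(3), 0.0, 0
    for _ in range(n_samples):
        h = ggx_sample_half_vector(n, roughness, rng)
        l = 2.0 * (v @ h) * h - v          # reflect the view vector about h
        if n @ l <= 0.0:
            continue
        light_sum += env_radiance(l)       # factor 1: lighting only
        # factor 2: BRDF weight; with GGX importance sampling and F = 1 the
        # estimator reduces to G * (v.h) / ((n.h) * (n.v))
        brdf_sum += smith_g(n, v, l, roughness) * (v @ h) / max((n @ h) * (n @ v), 1e-6)
        count += 1
    if count == 0:
        return np.zeros(3)
    return (light_sum / count) * (brdf_sum / count)
```

In practice the first factor is precomputed into prefiltered environment-map mip levels and the second into a 2D lookup table over roughness and viewing angle, which is what makes the approximation cheap enough for interactive use.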