

Physically Based Shading and Reflection Mapping

Posted by Malcolm

In my latest assignment for my final-year Real-Time Graphics course we had to write a paper discussing physically based shading and reflection mapping in a deferred rendering context. Our target audience was someone who is already comfortable with basic 3D rendering concepts (Blinn-Phong shading, multi-pass rendering, etc.). This article is an adapted version that concentrates on explaining both topics conceptually. I had no previous knowledge of either topic before I started my research, and it took me approximately two weeks to complete this section.
 
In recent years we have seen a shift in 3D rendering towards Physically Based Shading (PBS), because it allows us to achieve a more photo-realistic look in our real-time and offline rendering than previous techniques could.

Background knowledge
 
PBS produces richer, more realistic visuals than traditional shading methods by approaching the implementation of lighting in a way that is closer to the actual physics of light.
 
It postulates that, as photons are scattered from a light source, two types of light will illuminate a scene: direct light and indirect light. Direct light is where the photons take a straight path from the light source to the object(s) in the scene; indirect light arrives at the object(s) after having been reflected (one or more times) around the scene.
 
The process of reflection involves a portion of the light being “absorbed” by the surface and the rest being reflected back out into the scene. The material property describing how much of the light is reflected by a surface is called the “albedo”. Note that this is not the same as the “diffuse albedo” of a material in computer graphics (i.e. the diffuse colour of a Lambertian surface).
 
An important concept to note is that the total energy of light reflected by a surface is never greater than the total energy of light arriving at it; in other words, energy is conserved. This contrasts with traditional shading models, where we need to invent light (such as an ambient term) in order for our rendering to be visually pleasing.
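In terms of the BRDF notation introduced in the next section, energy conservation says that, for any incoming light direction l, the energy reflected over all outgoing directions v in the hemisphere Ω can never exceed the energy received:

$$ \forall l: \quad \int_{\Omega} f(l, v) \, (n \cdot v) \, dv \leq 1 $$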
 
Physically based shading
 
Early attempts at PBS were based on radiosity, which tried to simulate every path light could take, but this proved too computationally expensive and had complex problems with occlusion.
 
Practical implementations of PBS derive from ray-tracing methods where, instead of calculating light as it propagates from the light source to the camera, we calculate the path of the light from the camera back to the light source (including any indirect light that has been reflected from surfaces in the scene). This minimises the computation by allowing us to concern ourselves only with the light we actually see.
 
In order to achieve acceptable real-time PBS we use Bidirectional Reflectance Distribution Functions (BRDFs) to approximate how much light energy is reflected by the materials in the scene. A popular BRDF used in PBS derives from the Cook-Torrance model for specular lighting.


Fig 1. A BRDF function


Here we can see a basic visualisation of a BRDF, which takes the vectors from a point on a surface to the camera and to the light, as well as the surface normal, and outputs how much light is reflected from an opaque surface.
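More formally, the BRDF is the ratio of the radiance reflected towards the viewer to the irradiance arriving from the light:

$$ f(l, v) = \frac{dL_o(v)}{dE(l)} $$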
 
Cook-Torrance Model
 
The basis for most specular terms in physically based shading is microfacet theory, and the Cook-Torrance model is a specular microfacet BRDF. The microfacet model assumes that many tiny portions of a surface (“microfacets”) will contribute to the final colour of a shaded surface (i.e. a pixel). The normal of a microfacet can be defined as the vector halfway between the light and view vectors, which is why it is sometimes referred to as the “half-vector”; it is calculated as shown in figure 2, and we can see its use in the Cook-Torrance model in figure 3.

Fig 2. Microfacet normal
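Written out, the half-vector is simply the normalised sum of the light and view vectors:

$$ h = \frac{l + v}{\lVert l + v \rVert} $$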



Fig 3. Cook-Torrance Model


  • l = negative light direction
  • v = direction of view
  • n = surface normal
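With these definitions, and h as defined in figure 2, the Cook-Torrance specular term takes the standard microfacet form (Karis, 2013):

$$ f(l, v) = \frac{D(h) \, F(v, h) \, G(l, v, h)}{4 \, (n \cdot l)(n \cdot v)} $$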


The Cook-Torrance model is not concerned with diffuse light. A BRDF could also be used to calculate the diffuse term, however the visual quality would not be worth the extra computational cost (Karis, 2013), so we can use a Lambertian diffuse term instead and still produce high visual quality.

Fig 4. Lambertian diffuse where Cdiff is the material diffuse albedo
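The Lambertian term is simply the diffuse albedo divided by π:

$$ f_{diff} = \frac{c_{diff}}{\pi} $$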


It is important to note that specular light is the dominant component of visible lighting and that every material has a specular response to light (Drobot, 2013).
 
Fig 5 shows, in isolation, the component of the Cook-Torrance model that derives from microfacet theory:

Fig 5. Microfacet derivation


Next we will examine the D, F and G terms in more detail. D and G are calculated independently, and an advantage of the Cook-Torrance model is the ability to take different approaches to calculating these terms (for example, from different microfacet models) to achieve the desired visual output.
 
The D term is the normal distribution function (NDF) component; it determines the proportion of microfacets that are oriented so as to reflect light towards the viewer. GGX/Trowbridge-Reitz has been shown to be an efficient method (Karis, 2013; Burley, 2012) that takes the roughness of the material into account.

Fig 6. GGX/Trowbridge-Reitz NDF


  • a = (material roughness)²
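With α defined as the squared roughness, the GGX/Trowbridge-Reitz NDF is:

$$ D(h) = \frac{\alpha^2}{\pi \left( (n \cdot h)^2 (\alpha^2 - 1) + 1 \right)^2} $$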


Figure 6 shows the approach that has been adopted by Epic and Disney; the quality of the visual output it produces has proved to be worth the computational cost (Karis, 2013), and the increased visual quality is preferable to other NDFs (Burley, 2012).
 
The F term represents the Fresnel (pronounced “Fre-nel”) approximation. The Fresnel reflection factor describes how specular light is reflected from a surface with respect to the angle at which the view and light directions meet the surface, and assumes that smooth surfaces will reach 100% specular reflection at grazing angles. Very rough surfaces will still show increased specular light at these grazing angles, but full reflection is not possible. The Schlick-Fresnel method seems to be the most popular (Lazarov, 2013), but Epic introduced a spherical-gaussian approximation to increase efficiency with no visual drawbacks (Karis, 2013).

Fig 7. Schlick-Fresnel method with spherical-gaussian approximation


  • F0 = specular reflectance at normal incidence
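The spherical-gaussian form replaces the power of five in Schlick's original approximation with a cheaper base-2 exponential (Karis, 2013):

$$ F(v, h) = F_0 + (1 - F_0) \, 2^{\left( -5.55473 \, (v \cdot h) - 6.98316 \right) (v \cdot h)} $$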


The G term is the geometric attenuation/self-shadowing factor, also known as the visibility function. It represents the probability that a microfacet, taking into account the microfacet normal, is visible from the direction of both the camera and the light. It has a significant effect on the surface albedo and therefore on surface appearance (Burley, 2012). The most common approach to the G term is based on the Schlick-Smith model, mostly because it is the only feasible model that takes into account the direction of the light and camera as well as the surface roughness (McAuley, 2012).

Fig 8. Karis’ implementation of a Schlick-Smith visibility function (Karis, 2013)


It is worth mentioning that the calculation of k has been modified in order to fit both the Schlick and Smith model implementations.
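Karis' remapping sets k from the material roughness and applies the same factor to both the light and view directions (Karis, 2013):

$$ k = \frac{(roughness + 1)^2}{8}, \qquad G_1(v) = \frac{n \cdot v}{(n \cdot v)(1 - k) + k}, \qquad G(l, v, h) = G_1(l) \, G_1(v) $$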
 
One implementation has proved faster than Schlick-Smith through approximation; while it did not deliver the same visual quality, the speed gain was considered worth the loss (Lazarov, 2013). It is worth noting that this approximation was fitted to a Blinn-Phong based NDF.
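Putting the three terms together, here is a minimal sketch in C of the specular term described above, using the GGX NDF, the spherical-gaussian Schlick-Fresnel term and Karis' Schlick-Smith visibility function. The function and helper names are my own, F0 is treated as a scalar for simplicity (it is usually an RGB triple), and a real implementation would of course live in a pixel shader:

    #include <math.h>

    #define PI_F 3.14159265f

    typedef struct { float x, y, z; } vec3;

    static float dot3(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    static vec3 normalize3(vec3 a)
    {
        float len = sqrtf(dot3(a, a));
        return (vec3){ a.x / len, a.y / len, a.z / len };
    }

    /* D term: GGX/Trowbridge-Reitz NDF, where alpha = roughness^2 (fig 6). */
    static float d_ggx(float n_dot_h, float roughness)
    {
        float a  = roughness * roughness;
        float a2 = a * a;
        float d  = n_dot_h * n_dot_h * (a2 - 1.0f) + 1.0f;
        return a2 / (PI_F * d * d);
    }

    /* F term: Schlick-Fresnel with the spherical-gaussian approximation (fig 7). */
    static float f_schlick(float v_dot_h, float f0)
    {
        float p = (-5.55473f * v_dot_h - 6.98316f) * v_dot_h;
        return f0 + (1.0f - f0) * powf(2.0f, p);
    }

    /* G term: Schlick-Smith visibility with k = (roughness + 1)^2 / 8 (fig 8). */
    static float g_schlick_smith(float n_dot_l, float n_dot_v, float roughness)
    {
        float k   = (roughness + 1.0f) * (roughness + 1.0f) / 8.0f;
        float g_l = n_dot_l / (n_dot_l * (1.0f - k) + k);
        float g_v = n_dot_v / (n_dot_v * (1.0f - k) + k);
        return g_l * g_v;
    }

    /* Full specular term: D * F * G / (4 (n.l)(n.v)), as in figs 3 and 5.
     * n = surface normal, v = view direction, l = negative light direction;
     * all vectors are assumed to be normalised. */
    float cook_torrance_specular(vec3 n, vec3 v, vec3 l, float roughness, float f0)
    {
        vec3  h       = normalize3((vec3){ l.x + v.x, l.y + v.y, l.z + v.z });
        float n_dot_l = fmaxf(dot3(n, l), 0.0f);
        float n_dot_v = fmaxf(dot3(n, v), 0.0f);
        float n_dot_h = fmaxf(dot3(n, h), 0.0f);
        float v_dot_h = fmaxf(dot3(v, h), 0.0f);

        float d = d_ggx(n_dot_h, roughness);
        float f = f_schlick(v_dot_h, f0);
        float g = g_schlick_smith(n_dot_l, n_dot_v, roughness);

        /* The small epsilon guards against division by zero at grazing angles. */
        return (d * f * g) / (4.0f * n_dot_l * n_dot_v + 1e-5f);
    }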
 
Reflection Mapping
 
Reflection mapping, or environment mapping, can be summarised as a process which commits the render of a scene from a camera into a texture for later use. In the case of real-time reflections we can render the scene in all six directions from the camera and store the results in a single texture (Karis, 2013). In this article we will be using a cubemap (a texture which represents the union of all six directions: the positive and negative directions of the x, y and z axes) to represent the reflection map. We can then use this cubemap to store the local reflections around an object and apply it to the reflective surface when we do the final render for the frame from the position of the main player camera (Lagarde, 2012).

Fig 9. Example cube map. The camera position is situated in the center and is surrounded by the shapes seen above.


The process of using a texture to project light onto a surface is called Image-Based Lighting (IBL), and it is especially useful when trying to simulate light from distant objects (Hoffman, 2010) or implement local reflections. A key concept in IBL is the need for a new co-ordinate system in order to translate the position of a pixel on the reflector object to the corresponding position on the cubemap. Fortunately this is easily achieved with a few extra computations in our shaders (Bjorke, 2004).
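As a sketch of the idea: in the simplest case (a cubemap aligned with world space, sampled at the direction of mirror reflection) we just reflect the view ray about the surface normal to obtain the lookup direction. The helper types are the same as in the previous listing; parallax correction and other refinements are covered by Lagarde & Zanuttini (2012):

    /* Sketch: compute the direction used to sample a world-space-aligned
     * cubemap for a mirror reflection. vec3, dot3 and normalize3 are as in
     * the previous listing. */
    vec3 cubemap_lookup_dir(vec3 world_normal, vec3 world_pos, vec3 camera_pos)
    {
        /* Incident ray: from the camera towards the shaded point. */
        vec3 i = normalize3((vec3){ world_pos.x - camera_pos.x,
                                    world_pos.y - camera_pos.y,
                                    world_pos.z - camera_pos.z });

        /* Reflect it about the surface normal: r = i - 2 (n.i) n. */
        float n_dot_i = dot3(world_normal, i);
        return (vec3){ i.x - 2.0f * n_dot_i * world_normal.x,
                       i.y - 2.0f * n_dot_i * world_normal.y,
                       i.z - 2.0f * n_dot_i * world_normal.z };
    }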
 
Since IBL can produce a texture that contains multiple light sources with different properties, such as distance and intensity, the radiance integral (radiance being the light projected away from the texture) needs to be solved when it is used with a physically based shading model. There have been optimised implementations that use a split sum method and importance sampling to solve the required numerical integration of radiance with a pre-filtered cubemap (Karis, 2013).
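Karis' split sum approximation factors this numerical integration into two sums, each of which can be pre-computed independently (Karis, 2013):

$$ \frac{1}{N} \sum_{k=1}^{N} \frac{L_i(l_k) \, f(l_k, v) \, (n \cdot l_k)}{p(l_k, v)} \approx \left( \frac{1}{N} \sum_{k=1}^{N} L_i(l_k) \right) \left( \frac{1}{N} \sum_{k=1}^{N} \frac{f(l_k, v) \, (n \cdot l_k)}{p(l_k, v)} \right) $$

The first sum is stored in the pre-filtered cubemap; the second depends only on the BRDF and can be tabulated ahead of time.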
 
References:
Bjorke, K. (2004) Chapter 19. Image-Based Lighting.
Burley, B. (2012) Physically-Based Shading at Disney. SIGGRAPH 2012 Course Notes.
Carmack, J. (2013) The Physics of Lighting & Rendering at Quakecon 2013.
Drobot, M. (2013) Lighting of Killzone: Shadow Fall.
Hoffman, N. (2010) Background: Physically-Based Shading. SIGGRAPH 2010 Course Notes.
Karis, B. (2013) Real Shading in Unreal Engine 4. SIGGRAPH 2013 Course Notes.
Karis, B. (2013) Specular BRDF Reference.
Lagarde, S. (2013) Water drop 3b – Physically based wet surfaces.
Lagarde, S. & Zanuttini, A. (2012) Local Image-based Lighting With Parallax-corrected Cubemap.
Lazarov, D. (2013) Getting More Physical in Call of Duty: Black Ops II. SIGGRAPH 2013 Course Notes.
McAuley, S. (2012) Calibrating Lighting and Materials in Far Cry 3. SIGGRAPH 2012 Course Notes.
 
Media:
Fig 1. picture taken from http://commons.wikimedia.org/wiki/File:BRDF_Diagram.svg
Fig 2 – 8. Equation pictures generated using http://www.codecogs.com/latex/eqneditor.php
Fig 9. picture adapted from http://commons.wikimedia.org/wiki/File:Panorama_cube_map.png
 
All hyperlinks last accessed 10th January 2014.
