Light Attenuation

Inverse square law (0)

Inverse square law: the arrows represent the flux emanating from the source. The density of arrows per unit area (green square, flux density) decreases with the square of the distance.

$$Intensity = \frac{1}{{distance}^2}$$

The apparent intensity of light from a source of constant luminosity decreases with the square of the distance from it.

The main problems with this physically correct attenuation are:

• It takes forever to reach 0, so it must be culled somehow; Unity culls this type at 1/256.
• Large lights take a lot of processing time (the x < 1 region above).
• In deferred lighting a lot of pixels get processed for very little gain.

Some variants: two are good for us, RED and GREEN. The PURPLE curve has a huge falloff.

One approach to solving this is to window the falloff in such a way that most of the function is unaffected (1). For this we will use a basic linear interpolation to zero based on a distance criterion (2). I also use photometric terms (PBR!):

$$E = lerp\left(\frac{I}{distance^2},\, 0,\, \frac{distance}{lightRadius}\right) = \frac{I}{distance^2}\left(1 - \frac{distance}{lightRadius}\right)$$

which evolves into

$$E_{v1} = \frac{I}{distance^2}\,saturate\left(1 - \frac{distance^n}{lightRadius^n}\right)^2 \quad \text{(Frostbite 3)}$$

• n – tweaks the transition smoothness (Frostbite uses n = 4.0) (1)

$$E_{v2} = \frac{saturate\left(1 - (distance/lightRadius)^4\right)^2}{distance^2 + 1} \quad \text{(Unreal 4)}$$

In real life there is no lightRadius, but in games we can use it to keep a large number of scene objects out of the light's range (and out of its calculations) by reducing the radius.
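The windowed falloff above can be sketched as a small HLSL helper. This is an illustrative sketch of the Unreal-style variant; the function and parameter names are mine, not engine code:

float WindowedAttenuation( float distance, float lightRadius )
{
// (distance / lightRadius)^4
float ratio = distance / lightRadius;
float ratio4 = ratio * ratio * ratio * ratio;
// squaring the window keeps the transition to zero smooth at the radius
float window = saturate( 1.0 - ratio4 );
// +1 in the denominator avoids the singularity at distance = 0
return window * window / ( distance * distance + 1.0 );
}

The result is multiplied by the light intensity I; at distance == lightRadius the attenuation is exactly zero, so the light can be culled safely.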

The second variant, with inverse-square falloff, gives the most natural results; the first one is an old-style falloff.

The near light behaves more naturally, but the light on the side walls of the corridor is still strong; now see the variant below.

Rain Again

The task is simple – add some kind of detail (PBR) to the rain. This is what we have from the start.

Add a special option for materials to get wet (wood, asphalt and other rough surfaces become darker when it's raining).

Change the ground shader – underfoot it is the same bare ground.

Slate in the rain becomes wet and darker, but obviously does not shine like metal.

// Project the surface point into rain-shadow space (used for the rain mask)
float4 position_with_offset = float4( view_space_position + normal * 1.3f, 1.0f );
float4 projected_position = mul( view_to_shadow, position_with_offset );
float roughness = gb.roughness;
float3 fresnel = gb.fresnel; // specular reflectance from the G-buffer
// Treat high specular reflectance as metal (steep step around 0.5)
float metalness = saturate( dot( fresnel, 0.33 ) * 1000 - 500 );
// Rough surfaces are treated as porous and absorb more water
float porosity = saturate( ( roughness - 0.5 ) / 0.1 );
float factor = lerp( 0.1, 1, metalness * porosity );
float3 to_eye = -view_space_position;
// Darken the diffuse where the rain mask marks the surface as wet
diffuse_result *= lerp( 0.85, factor * 5, rain_mask );

Add some details and DOF. Perhaps that's it.

It looks better, and that is good.

In-game result with tone mapping.

Energy Conservation

Energy conservation is a restriction on the reflection model requiring that the total amount of reflected light cannot exceed the incoming light (0).

$$\int_{\Omega} f(\vec{x}, \phi, \theta)\, L_i \cos\theta \,\mathrm{d}\omega \leq L_i$$

• f – the BRDF (any)

We have to ensure energy conservation for the diffuse and specular terms, and then the combined model needs to satisfy:

$$C_d + C_s \leq 1$$

This means that if you want to make a material more specular, you may have to reduce its diffuse.
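A common way to enforce this in a shader is to scale the diffuse term by whatever energy the specular reflectance leaves over. A minimal sketch (the function name is mine, not from any engine):

float3 ConserveDiffuse( float3 diffuseColor, float3 specularColor )
{
// take the largest specular channel as the energy claimed by specular
float maxSpec = max( specularColor.r, max( specularColor.g, specularColor.b ) );
// diffuse gets the remainder, keeping Cd + Cs <= 1
return diffuseColor * saturate( 1.0 - maxSpec );
}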

Left: A sphere with a high diffuse intensity, and low specular intensity. Right: High specular intensity, low diffuse. Middle: Maximum diffuse AND specular intensity. Note how it looks blown out and too bright for the scene (1)

The upper part of the table shows the normalized reflection density function (RDF). This is the probability density that a photon from the incoming direction is reflected to the outgoing direction, and is the BRDF times cos θ. Here, θ is the angle between the reflection vector and the view vector, which is, for the assumed view position, also the angle that appears in the Phong resp. Blinn-Phong lobes.

The lower part of the table shows the normalized normal distribution function (NDF) for a micro-facet model. This is the probability density that the normal of a micro-facet is oriented towards H. It is the same expression in spherical coordinates as that for the Phong RDF, just over a different variable, the angle between N and H. The heightfield distribution does it slightly differently: it normalizes the projected area of the micro-facets to the area of the ground plane (adding yet another cosine term). (3)

With and without energy conservation (division by π) (4)

Be careful not to overdo the result; keep the maximum minimal.

Different decals depending on the smoothness map

When it's raining we need different drops depending on the material. To do this we can use the smoothness map to control the surface: the smoother the surface, the more distinct the drops.

This is the result without using the smoothness map. We see the same type of drops everywhere.

Here we use the smoothness map to determine where the surface is rough and where it is not. We also need to make a dual texture.

And here is the finished result: now the asphalt has its own drops and the puddle its own.
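The selection between the two drop textures can be sketched like this. The texture, sampler and variable names are hypothetical; the real shader may differ:

// Blend two rain-drop detail textures by the smoothness map
float smoothness = t_smoothmap.Sample( s_base, uv ).r;
float3 drops_rough = t_drops_rough.Sample( s_base, uv * tiling ).rgb; // e.g. asphalt
float3 drops_smooth = t_drops_smooth.Sample( s_base, uv * tiling ).rgb; // e.g. puddles
float3 drops = lerp( drops_rough, drops_smooth, smoothness );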

IBL

Image Based Lights (IBLs) allow us to represent the incident lighting surrounding a point. Diffuse and specular cubemaps are often used.

To shade with a common PBR BRDF we need to solve the radiance integral. We will use importance sampling (0). The following equation describes this numerical integration:

$$\int_H L_i(l) f(l,v)\cos\theta_l\,\mathrm{d}l \approx \frac{1}{N}\sum\limits_{k=1}^N\frac{L_i(l_k)f(l_k,v)\cos\theta_{l_k}}{p(l_k,v)}$$

How Radiance traces rays to determine the incident illumination on a surface from an IBL environment (1)

The main problem is that even with importance sampling we still need to take a lot of samples! We can use MIP maps to reduce the sample count, but counts still need to be greater than 16 for appropriate quality. Since local reflections are based on blending between many environment maps per pixel, we can practically afford only a single sample for each.

Unreal 4 Split Sum Approximation

$$\frac{1}{N}\sum\limits_{k=1}^N\frac{L_i(l_k)f(l_k,v)\cos\theta_{l_k}}{p(l_k,v)} \approx \left( \frac{1}{N}\sum\limits_{k=1}^N L_i(l_k) \right) \left( \frac{1}{N}\sum\limits_{k=1}^N\frac{f(l_k,v)\cos\theta_{l_k}}{p(l_k,v)} \right)$$

The first sum is pre-calculated for different roughness values and the results are stored in the mip-map levels of a cubemap (a typical approach). A minor difference in Unreal 4 is that they convolve the environment map with the GGX distribution of their shading model using importance sampling.

The second sum is the environment BRDF – we can treat it as integrating the specular BRDF with $$L_i(l_k) = 1$$

In Remember Me (2) they also approximate it by considering two parts:

• The lighting is pre-integrated offline with the normal distribution function (NDF) of the BRDF and the cosine from Lambert's law for different specular power values.
• The result is stored in a prefiltered mipmapped radiance environment map (PMREM), a cubemap texture where each mipmap stores the result of the pre-integration for a lower roughness (for example, selecting the mip as a linear function of roughness – texCUBElod(uv, float4(R, nMips - gloss * nMips)) (3))

Local cubemaps blending (3)

Let's use the Hammersley sequence (4) to generate pseudo-random 2-D coordinates (this gives us well-spaced points); the GGX importance-sampling function (5) gathers samples into the region that contributes the most for the given roughness. HLSL from Epic's Karis:

float radicalInverse_VdC(uint bits)
{
bits = (bits << 16u) | (bits >> 16u);
bits = ((bits & 0x55555555u) << 1u) | ((bits & 0xAAAAAAAAu) >> 1u);
bits = ((bits & 0x33333333u) << 2u) | ((bits & 0xCCCCCCCCu) >> 2u);
bits = ((bits & 0x0F0F0F0Fu) << 4u) | ((bits & 0xF0F0F0F0u) >> 4u);
bits = ((bits & 0x00FF00FFu) << 8u) | ((bits & 0xFF00FF00u) >> 8u);

return float(bits) * 2.3283064365386963e-10; // / 0x100000000
}
float2 Hammersley(uint i, uint n)
{
return float2( float(i) / float(n), radicalInverse_VdC(i) );
}

ImportanceSampleGGX (6)

float3 ImportanceSampleGGX( float2 Xi, float Roughness, float3 N )
{

float a = Roughness * Roughness; // alpha = Roughness^2; (Roughness + 1)^2 / 8 is the k remap for analytic lights, not IBL
float Phi = 2 * PI * Xi.x;
float CosTheta = sqrt( (1 - Xi.y) / ( 1 + (a*a - 1) * Xi.y ) );
float SinTheta = sqrt( 1 - CosTheta * CosTheta );
float3 H = 0;
H.x = SinTheta * cos( Phi );
H.y = SinTheta * sin( Phi );
H.z = CosTheta;

float3 UpVector = abs(N.z) < 0.999 ? float3(0,0,1) : float3(1,0,0);
float3 TangentX = normalize( cross( UpVector, N ) );
float3 TangentY = cross( N, TangentX );
// Tangent to world space
return TangentX * H.x + TangentY * H.y + N * H.z;
}

For the geometry term we can use, and should try, a couple of approximations.

// Smith approximation
float GGX_Smith(float nDotV, float a)
{
float aa = a * a;
float oneMinusAa = 1 - aa;
float nDotV2 = 2 * nDotV;
float root = aa + oneMinusAa * nDotV * nDotV;
return nDotV2 / (nDotV + sqrt(root));
}

// Schlick approximation
float GGX_Schlick(float NdotV, float a)
{
float k = (a * a) / 2;
return NdotV / (NdotV * (1.0f - k) + k);
}

float G_Smith(float a, float nDotV, float nDotL)
{
// pick one of the two approximations above
return GGX_Smith(nDotL, a) * GGX_Smith(nDotV, a);
}

The finished function:

float2 IntegrateBRDF( float Roughness, float NoV )
{
float3 V;
V.x = sqrt( 1.0f - NoV * NoV ); // sin
V.y = 0;
V.z = NoV; // cos
float A = 0;
float B = 0;
const uint NumSamples = 128;
[loop]
for( uint i = 0; i < NumSamples; i++ )
{
float2 Xi = Hammersley( i, NumSamples );
float3 H = ImportanceSampleGGX( Xi, Roughness, float3(0,0,1) );
float3 L = 2 * dot( V, H ) * H - V;
float NoL = saturate( L.z );
float NoH = saturate( H.z );
float VoH = saturate( dot( V, H ) );
[branch]
if( NoL > 0 )
{
float G = G_Smith( Roughness, NoV, NoL );
float G_Vis = G * VoH / (NoH * NoV);
float Fc = pow( 1 - VoH, 5 );
A += (1 - Fc) * G_Vis;
B += Fc * G_Vis;
}
}
return float2( A, B ) / NumSamples;
}

Example of function call

float2 env_brdf = IntegrateBRDF( roughness, NoV );

Pre-calculate the first sum for different roughness values and store the results in the mip-map
levels of a cubemap.

float3 PrefilterEnvMap( float Roughness, float3 R )
{
float3 N = R;
float3 V = R;
float3 PrefilteredColor = 0;
float TotalWeight = 0;
const uint NumSamples = 128;

for( uint i = 0; i < NumSamples; i++ )
{
float2 Xi = Hammersley( i, NumSamples );
float3 H = ImportanceSampleGGX( Xi, Roughness, N );
float3 L = 2 * dot( V, H ) * H - V;
float NoL = saturate( dot( N, L ) );
if( NoL > 0 )
{
// sample the probe cubemap along L at the mip chosen for this roughness
PrefilteredColor += probe_decode_hdr( t_probe_cubemap.SampleLevel( s_base, L, mip_index ) ).rgb * NoL;
TotalWeight += NoL;
}
}
return PrefilteredColor / TotalWeight;
}

Final function:

float3 ApproximateSpecularIBL( float3 SpecularColor , float Roughness, float3 N, float3 V )
{
float NoV = saturate( dot( N, V ) );
float3 R = 2 * dot( V, N ) * N - V;
float3 PrefilteredColor = PrefilterEnvMap( Roughness, R );
float2 envBRDF = IntegrateBRDF( Roughness, NoV );
return PrefilteredColor * ( SpecularColor * envBRDF.x + envBRDF.y );
}