IBL

Image-Based Lights (IBLs) represent the incident lighting surrounding a point, typically using diffuse and specular cubemaps. To shade with a common PBR BRDF we need to solve the radiance integral, which we do with importance sampling (0). The following equation describes this numerical integration:

$$\int_H L_i(l) f(l,v)\cos\theta_l\,\mathrm{d}l \approx \frac{1}{N}\sum\limits_{k=1}^N\frac{L_i(l_k)f(l_k,v)\cos\theta_{l_k}}{p(l_k,v)}$$ How Radiance traces rays to determine the incident illumination on a surface from an IBL environment (1)
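As a minimal 1-D illustration of this estimator (a Python sketch with a toy integrand, not the radiance integral), importance sampling draws samples from a pdf $p$ and averages $f(x_k)/p(x_k)$:

```python
import random

random.seed(0)

# Toy example: estimate the integral of f(x) = 3x^2 over [0, 1] (exact value 1)
# by importance sampling with pdf p(x) = 2x, sampled via the inverse CDF x = sqrt(u).
N = 100_000
total = 0.0
for _ in range(N):
    u = random.random()
    x = u ** 0.5                     # x distributed with density p(x) = 2x
    total += (3 * x * x) / (2 * x)   # f(x) / p(x), the estimator's summand
estimate = total / N                 # close to the exact value 1.0
```

Because $p$ is roughly proportional to $f$ here, the variance is far lower than with uniform sampling; this is exactly why a GGX-shaped pdf is used for the specular lobe.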

The main problem is that even with importance sampling we still need a lot of samples. We can use MIP maps to reduce the sample count, but it still needs to be greater than 16 for appropriate quality. Since local reflections are based on blending between many environment maps per pixel, we can practically afford only a single sample for each.

Unreal 4 Split Sum Approximation

$$\frac{1}{N}\sum\limits_{k=1}^N\frac{L_i(l_k)f(l_k,v)\cos\theta_{l_k}}{p(l_k,v)} \approx {\left( \frac{1}{N}\sum\limits_{k=1}^N L_i(l_k) \right) }{\left( \frac{1}{N}\sum\limits_{k=1}^N\frac{f(l_k,v)\cos\theta_{l_k}}{p(l_k,v)} \right) }$$
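A quick sanity check on the split (a Python sketch with made-up sample values): for constant incoming radiance the split is exact, because the mean of a product with a constant factors; for real environments the error grows with the correlation between $L_i(l_k)$ and the BRDF weight.

```python
# w_k stands for the per-sample BRDF weight f*cos/p; values are made up.
L = [2.0, 2.0, 2.0, 2.0]   # constant radiance samples L_i(l_k)
w = [0.1, 0.4, 0.3, 0.2]   # BRDF weights

lhs = sum(li * wi for li, wi in zip(L, w)) / len(w)   # mean of the product
rhs = (sum(L) / len(L)) * (sum(w) / len(w))           # product of the means
# lhs equals rhs here; with varying L the two sides drift apart
```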

The first sum is pre-calculated for different roughness values and the results are stored in the mip-map levels of a cubemap (the typical approach). The minor difference in Unreal 4 is that they convolve the environment map with the GGX distribution of their shading model using importance sampling.

The second sum is the environment BRDF: it is the same as integrating the specular BRDF with a solid-white environment, $$L_i(l_k) = 1$$

In Remember Me (2) they also approximate by considering two parts:

• The lighting is pre-integrated offline with the normal distribution function (NDF) of the BRDF and the cosine from Lambert’s law, for different specular power values.
• The result is stored in a prefiltered mipmap radiance environment map (PMREM): a cubemap texture where each successive mipmap stores the result of the pre-integration for a lower gloss (for example, selecting the mip as a linear function of roughness: texCUBElod(uv, float4(R, nMips - gloss * nMips)) (3))
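The mip selection in the snippet above can be sketched as a tiny helper (a hypothetical Python mirror of the texCUBElod argument; `gloss` and `n_mips` follow the shader's naming):

```python
def roughness_mip(gloss: float, n_mips: int) -> float:
    """Linear gloss-to-mip mapping, as in nMips - gloss * nMips:
    gloss 1 -> mip 0 (sharpest), gloss 0 -> the roughest mip."""
    return n_mips - gloss * n_mips
```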

Local cubemap blending (3)

Let’s use the Hammersley sequence (4) to generate pseudo-random 2-D coordinates (this gives us well-spaced points); the GGX importance-sampling function (5) then gathers samples into the region that will contribute the most for the given roughness. HLSL from Epic’s Karis:

float RadicalInverse_VdC(uint bits)
{
bits = (bits << 16u) | (bits >> 16u);
bits = ((bits & 0x55555555u) << 1u) | ((bits & 0xAAAAAAAAu) >> 1u);
bits = ((bits & 0x33333333u) << 2u) | ((bits & 0xCCCCCCCCu) >> 2u);
bits = ((bits & 0x0F0F0F0Fu) << 4u) | ((bits & 0xF0F0F0F0u) >> 4u);
bits = ((bits & 0x00FF00FFu) << 8u) | ((bits & 0xFF00FF00u) >> 8u);
return float(bits) * 2.3283064365386963e-10; // / 0x100000000
}
float2 Hammersley(uint i, uint n)
{
return float2(float(i) / float(n), RadicalInverse_VdC(i));
}
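A Python mirror of the two functions above (masking to 32 bits where Python's unbounded integers would otherwise let the shifts overflow):

```python
def radical_inverse_vdc(bits: int) -> float:
    # Van der Corput radical inverse in base 2, via 32-bit reversal.
    bits &= 0xFFFFFFFF
    bits = ((bits << 16) & 0xFFFFFFFF) | (bits >> 16)
    bits = ((bits & 0x55555555) << 1) | ((bits & 0xAAAAAAAA) >> 1)
    bits = ((bits & 0x33333333) << 2) | ((bits & 0xCCCCCCCC) >> 2)
    bits = ((bits & 0x0F0F0F0F) << 4) | ((bits & 0xF0F0F0F0) >> 4)
    bits = ((bits & 0x00FF00FF) << 8) | ((bits & 0xFF00FF00) >> 8)
    return bits * 2.3283064365386963e-10  # i.e. bits / 2^32

def hammersley(i: int, n: int) -> tuple:
    return (i / n, radical_inverse_vdc(i))
```

For example, `hammersley(1, 4)` returns `(0.25, 0.5)`: the bit reversal of 1 is 0x80000000, which maps to 0.5.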

ImportanceSampleGGX (6)

float3 ImportanceSampleGGX( float2 Xi, float Roughness, float3 N )
{

float a = Roughness * Roughness;
float Phi = 2 * PI * Xi.x;
float CosTheta = sqrt( (1 - Xi.y) / ( 1 + (a*a - 1) * Xi.y ) );
float SinTheta = sqrt( 1 - CosTheta * CosTheta );
float3 H = 0;
H.x = SinTheta * cos( Phi );
H.y = SinTheta * sin( Phi );
H.z = CosTheta;

float3 UpVector = abs(N.z) < 0.999 ? float3(0,0,1) : float3(1,0,0);
float3 TangentX = normalize( cross( UpVector, N ) );
float3 TangentY = cross( N, TangentX );
// Tangent to world space
return TangentX * H.x + TangentY * H.y + N * H.z;
}
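A Python mirror of the sampling math (with N fixed to +Z so no tangent frame is needed, and using the a = Roughness² remapping) makes it easy to check that the returned half-vectors are unit length and concentrate around the normal at low roughness:

```python
import math

def importance_sample_ggx(xi, roughness):
    # Mirrors the HLSL above with N = (0, 0, 1).
    a = roughness * roughness
    phi = 2.0 * math.pi * xi[0]
    cos_theta = math.sqrt((1.0 - xi[1]) / (1.0 + (a * a - 1.0) * xi[1]))
    sin_theta = math.sqrt(1.0 - cos_theta * cos_theta)
    return (sin_theta * math.cos(phi),
            sin_theta * math.sin(phi),
            cos_theta)

h = importance_sample_ggx((0.3, 0.5), 0.1)
# h is unit length, and h.z stays close to 1 for this low roughness
```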

For the geometry term we can use, and should compare, a couple of approximations.

// Smith GGX geometry term (single direction)
float GGX(float nDotV, float a)
{
float aa = a*a;
float oneMinusAa = 1 - aa;
float nDotV2 = 2 * nDotV;
float root = aa + oneMinusAa * nDotV * nDotV;
return nDotV2 / (nDotV + sqrt(root));
}

// Schlick approximation (alternative to the above; use one or the other)
float GGX(float NdotV, float a)
{
float k = (a * a) / 2;
return NdotV / (NdotV * (1.0f - k) + k);
}

float G_Smith(float a, float nDotV, float nDotL)
{

return GGX(nDotL,a) * GGX(nDotV,a);
}
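To see how close the two forms are, here is a quick Python comparison mirroring both `GGX` variants above (the Schlick form with k = a²/2):

```python
import math

def g1_smith(n_dot_v, a):
    # Smith GGX single-direction term
    aa = a * a
    root = aa + (1.0 - aa) * n_dot_v * n_dot_v
    return (2.0 * n_dot_v) / (n_dot_v + math.sqrt(root))

def g1_schlick(n_dot_v, a):
    # Schlick-GGX with k = a^2 / 2
    k = (a * a) / 2.0
    return n_dot_v / (n_dot_v * (1.0 - k) + k)

# The two stay within a couple of percent at moderate angles and roughness:
smith = g1_smith(0.8, 0.5)     # ~0.967
schlick = g1_schlick(0.8, 0.5) # ~0.970
```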

The finished function:

float2 IntegrateBRDF( float Roughness, float NoV )
{
float3 V;
V.x = sqrt( 1.0f - NoV * NoV ); // sin
V.y = 0;
V.z = NoV; // cos
float A = 0;
float B = 0;
const uint NumSamples = 128;
[loop]
for( uint i = 0; i < NumSamples; i++ )
{
float2 Xi = Hammersley( i, NumSamples );
float3 H = ImportanceSampleGGX( Xi, Roughness, float3(0,0,1) );
float3 L = 2 * dot( V, H ) * H - V;
float NoL = saturate( L.z );
float NoH = saturate( H.z );
float VoH = saturate( dot( V, H ) );
[branch]
if( NoL > 0 )
{
float G = G_Smith( Roughness, NoV, NoL );
float G_Vis = G * VoH / (NoH * NoV);
float Fc = pow( 1 - VoH, 5 );
A += (1 - Fc) * G_Vis;
B += Fc * G_Vis;
}
}
return float2( A, B ) / NumSamples;
}

Example of function call

float2 env_brdf = IntegrateBRDF(roughness, NoV);
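As a cross-check, a Python port of IntegrateBRDF (a sketch: it uses uniform random samples in place of Hammersley, fixes N to +Z, and uses the Schlick-GGX geometry term with k = roughness²/2, so the values are illustrative rather than authoritative):

```python
import math
import random

def integrate_brdf(roughness, n_dot_v, num_samples=4096, seed=1):
    rng = random.Random(seed)
    v = (math.sqrt(max(0.0, 1.0 - n_dot_v * n_dot_v)), 0.0, n_dot_v)
    a = roughness * roughness           # Disney/UE4 alpha remapping
    k = (roughness * roughness) / 2.0   # Schlick-GGX k

    def g1(n_dot_x):
        return n_dot_x / (n_dot_x * (1.0 - k) + k)

    A = B = 0.0
    for _ in range(num_samples):
        # GGX importance sample with N = +Z
        phi = 2.0 * math.pi * rng.random()
        x2 = rng.random()
        cos_t = math.sqrt((1.0 - x2) / (1.0 + (a * a - 1.0) * x2))
        sin_t = math.sqrt(1.0 - cos_t * cos_t)
        h = (sin_t * math.cos(phi), sin_t * math.sin(phi), cos_t)
        v_dot_h = v[0] * h[0] + v[1] * h[1] + v[2] * h[2]
        l_z = 2.0 * v_dot_h * h[2] - v[2]   # only L.z (= NoL) is needed
        n_dot_l = max(l_z, 0.0)
        n_dot_h = max(h[2], 0.0)
        v_dot_h = max(v_dot_h, 0.0)
        if n_dot_l > 0.0:
            g = g1(n_dot_l) * g1(max(n_dot_v, 1e-4))
            g_vis = g * v_dot_h / (n_dot_h * max(n_dot_v, 1e-4))
            fc = (1.0 - v_dot_h) ** 5           # Schlick Fresnel factor
            A += (1.0 - fc) * g_vis
            B += fc * g_vis
    return A / num_samples, B / num_samples
```

Here `A` and `B` are the scale and bias later applied to SpecularColor, and their sum stays below 1 for energy conservation.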

Pre-calculate the first sum for different roughness values and store the results in the mip-map
levels of a cubemap.

float3 PrefilterEnvMap( float Roughness, float3 R )
{
float3 N = R;
float3 V = R;
float3 PrefilteredColor = 0;
float TotalWeight = 0;
const uint NumSamples = 128;

for( uint i = 0; i < NumSamples; i++ )
{
float2 Xi = Hammersley( i, NumSamples );
float3 H = ImportanceSampleGGX( Xi, Roughness, N );
float3 L = 2 * dot( V, H ) * H - V;
float NoL = saturate( dot( N, L ) );
if( NoL > 0 )
{
PrefilteredColor += probe_decode_hdr( t_probe_cubemap.SampleLevel( s_base, L, 0 ) ).rgb * NoL; // sample along L
TotalWeight += NoL;
}
}
return PrefilteredColor / TotalWeight;
}

Final function:

float3 ApproximateSpecularIBL( float3 SpecularColor , float Roughness, float3 N, float3 V )
{
float NoV = saturate( dot( N, V ) );
float3 R = 2 * dot( V, N ) * N - V;
float3 PrefilteredColor = PrefilterEnvMap( Roughness, R );
float2 envBRDF = IntegrateBRDF( Roughness, NoV );
return PrefilteredColor * ( SpecularColor * envBRDF.x + envBRDF.y );
}