Prefiltering Cubemaps

1. Create a PMREM (Prefiltered Mipmapped Radiance Environment Map).
a) Get an HDR map, for example pisa.hdr (0) (there is a bunch of them here (1)).

pisa_latlong
b) Download the HDR Shop (2) free edition.
c) Open HDR Shop.

  • open your .hdr – go to Image -> Panorama -> Panoramic Transform

Image

  • settings: Source: Longitude, Dest: Cube Env (Vertical Cross), then press Convert

Transform

  • Save as .. *.HDR -> pisa_cross.hdr

d) Download ModifiedCubeMapGen-1_66 (3), many thanks to Sebastien Lagarde (4).

  • Load Cube Cross (choose pisa_cross.hdr)
  • Click Filter Cubemap (to get a better IBL result, try changing the settings)

Check

  • Check Save MipChain and SaveCubeMap (.dds), then save as pisa_cross.dds

2. Generate SH coefficients.
a) Load our PMREM (pisa_cross.dds) into the engine.
b) Calculate the spherical harmonics coefficients from the cube texture.

Example with functions: evaluate the SH basis for a direction, plus add and scale helpers for coefficient sets.

void sphericalHarmonicsEvaluateDirection(float * result, int order,
                                         const Math::Vector3 & dir)
{
    result[0] = 0.282095f;
    result[1] = 0.488603f * dir.y;
    result[2] = 0.488603f * dir.z;
    result[3] = 0.488603f * dir.x;
    result[4] = 1.092548f * dir.x * dir.y;
    result[5] = 1.092548f * dir.y * dir.z;
    result[6] = 0.315392f * (3.f * dir.z * dir.z - 1.f);
    result[7] = 1.092548f * dir.x * dir.z;
    result[8] = 0.546274f * (dir.x * dir.x - dir.y * dir.y);
}

void sphericalHarmonicsAdd(float * result, int order,
                           const float * inputA, const float * inputB)
{
    const int numCoeff = order * order;
    for (int i = 0; i < numCoeff; i++)
    {
        result[i] = inputA[i] + inputB[i];
    }
}

void sphericalHarmonicsScale(float * result, int order,
                             const float * input, float scale)
{
    const int numCoeff = order * order;
    for (int i = 0; i < numCoeff; i++)
    {
        result[i] = input[i] * scale;
    }
}

void sphericalHarmonicsFromTexture(GLuint cubeTexture,
                                   std::vector<Math::Vector3> & output, const uint order)
{
    const uint sqOrder = order * order;

    // allocate memory for calculations
    output.resize(sqOrder);
    std::vector<float> resultR(sqOrder);
    std::vector<float> resultG(sqOrder);
    std::vector<float> resultB(sqOrder);

    // variables that describe the current face of the cube texture
    GLubyte* data = nullptr;
    GLint width, height;
    GLint internalFormat;
    GLint numComponents;

    // initialize values
    float fWt = 0.0f;
    for (uint i = 0; i < sqOrder; i++)
    {
        output[i].x = 0;
        output[i].y = 0;
        output[i].z = 0;
        resultR[i] = 0;
        resultG[i] = 0;
        resultB[i] = 0;
    }
    std::vector<float> shBuff(sqOrder);
    std::vector<float> shBuffB(sqOrder);

    const GLenum cubeSides[6] = {
        GL_TEXTURE_CUBE_MAP_POSITIVE_Y, // Top
        GL_TEXTURE_CUBE_MAP_NEGATIVE_X, // Left
        GL_TEXTURE_CUBE_MAP_POSITIVE_Z, // Front
        GL_TEXTURE_CUBE_MAP_POSITIVE_X, // Right
        GL_TEXTURE_CUBE_MAP_NEGATIVE_Z, // Back
        GL_TEXTURE_CUBE_MAP_NEGATIVE_Y  // Bottom
    };

    // bind the cube texture we are reading from
    glBindTexture(GL_TEXTURE_CUBE_MAP, cubeTexture);

    const int level = 0;
    // for each face of the cube texture
    for (int face = 0; face < 6; face++)
    {
        // get width and height
        glGetTexLevelParameteriv(cubeSides[face], level, GL_TEXTURE_WIDTH, &width);
        glGetTexLevelParameteriv(cubeSides[face], level, GL_TEXTURE_HEIGHT, &height);

        if (width != height)
        {
            return;
        }

        // get the format of the data in the texture
        glGetTexLevelParameteriv(cubeSides[face], level,
                                 GL_TEXTURE_INTERNAL_FORMAT, &internalFormat);

        // get data from the texture
        if (internalFormat == GL_RGBA)
        {
            numComponents = 4;
        }
        else if (internalFormat == GL_RGB)
        {
            numComponents = 3;
        }
        else
        {
            return;
        }
        data = new GLubyte[numComponents * width * width];
        glGetTexImage(cubeSides[face], level, internalFormat, GL_UNSIGNED_BYTE, data);

        // step between two texels for range [0, 1]
        float invWidth = 1.0f / float(width);
        // initial negative bound for range [-1, 1]
        float negativeBound = -1.0f + invWidth;
        // step between two texels for range [-1, 1]
        float invWidthBy2 = 2.0f / float(width);

        for (int y = 0; y < width; y++)
        {
            // texture coordinate V in range [-1, 1]
            const float fV = negativeBound + float(y) * invWidthBy2;

            for (int x = 0; x < width; x++)
            {
                // texture coordinate U in range [-1, 1]
                const float fU = negativeBound + float(x) * invWidthBy2;

                // determine the direction from the center of the cube texture to the current texel
                Math::Vector3 dir;
                switch (cubeSides[face])
                {
                case GL_TEXTURE_CUBE_MAP_POSITIVE_X:
                    dir.x = 1.0f;
                    dir.y = 1.0f - (invWidthBy2 * float(y) + invWidth);
                    dir.z = 1.0f - (invWidthBy2 * float(x) + invWidth);
                    break;
                case GL_TEXTURE_CUBE_MAP_NEGATIVE_X:
                    dir.x = -1.0f;
                    dir.y = 1.0f - (invWidthBy2 * float(y) + invWidth);
                    dir.z = -1.0f + (invWidthBy2 * float(x) + invWidth);
                    break;
                case GL_TEXTURE_CUBE_MAP_POSITIVE_Y:
                    dir.x = -1.0f + (invWidthBy2 * float(x) + invWidth);
                    dir.y = 1.0f;
                    dir.z = -1.0f + (invWidthBy2 * float(y) + invWidth);
                    break;
                case GL_TEXTURE_CUBE_MAP_NEGATIVE_Y:
                    dir.x = -1.0f + (invWidthBy2 * float(x) + invWidth);
                    dir.y = -1.0f;
                    dir.z = 1.0f - (invWidthBy2 * float(y) + invWidth);
                    break;
                case GL_TEXTURE_CUBE_MAP_POSITIVE_Z:
                    dir.x = -1.0f + (invWidthBy2 * float(x) + invWidth);
                    dir.y = 1.0f - (invWidthBy2 * float(y) + invWidth);
                    dir.z = 1.0f;
                    break;
                case GL_TEXTURE_CUBE_MAP_NEGATIVE_Z:
                    dir.x = 1.0f - (invWidthBy2 * float(x) + invWidth);
                    dir.y = 1.0f - (invWidthBy2 * float(y) + invWidth);
                    dir.z = -1.0f;
                    break;
                default:
                    delete[] data; // avoid leaking the face buffer on the error path
                    return;
                }

                // normalize direction
                dir = Math::Normalize(dir);
                // solid-angle weight depending on the distance from the center of the face
                const float fDiffSolid = 4.0f / ((1.0f + fU * fU + fV * fV) *
                                                 sqrtf(1.0f + fU * fU + fV * fV));
                fWt += fDiffSolid;

                // evaluate the SH basis for the current direction
                sphericalHarmonicsEvaluateDirection(shBuff.data(), order, dir);

                // index of the texel in the texture
                uint pixOffsetIndex = (x + y * width) * numComponents;
                // get the color from the texture and map it to range [0, 1]
                Math::Vector3 clr(
                    float(data[pixOffsetIndex]) / 255,
                    float(data[pixOffsetIndex + 1]) / 255,
                    float(data[pixOffsetIndex + 2]) / 255
                );

                clr.x = clr.x * 0.5f;
                clr.y = clr.y * 0.5f;
                clr.z = clr.z * 0.5f;

                // scale the basis by the weighted color and accumulate per channel
                sphericalHarmonicsScale(shBuffB.data(), order,
                                        shBuff.data(), clr.x * fDiffSolid);
                sphericalHarmonicsAdd(resultR.data(), order,
                                      resultR.data(), shBuffB.data());

                sphericalHarmonicsScale(shBuffB.data(), order,
                                        shBuff.data(), clr.y * fDiffSolid);
                sphericalHarmonicsAdd(resultG.data(), order,
                                      resultG.data(), shBuffB.data());

                sphericalHarmonicsScale(shBuffB.data(), order,
                                        shBuff.data(), clr.z * fDiffSolid);
                sphericalHarmonicsAdd(resultB.data(), order,
                                      resultB.data(), shBuffB.data());
            }
        }

        delete[] data;
    }

    // final scale for the coefficients
    const float fNormProj = (4.0f * Math::PI<float>()) / fWt;
    sphericalHarmonicsScale(resultR.data(), order, resultR.data(), fNormProj);
    sphericalHarmonicsScale(resultG.data(), order, resultG.data(), fNormProj);
    sphericalHarmonicsScale(resultB.data(), order, resultB.data(), fNormProj);

    // save the result
    for (uint i = 0; i < sqOrder; i++)
    {
        output[i].x = resultR[i];
        output[i].y = resultG[i];
        output[i].z = resultB[i];
    }

    //glBindTexture(GL_TEXTURE_CUBE_MAP, 0);
}

Thanks to Reev(5) from gamedev.ru

Example of helper code for the in-game SH shader:

const float PhongRandsConsts[32] =
{
    0,   188, 137, 225, 99,  207, 165, 241,
    71,  198, 151, 233, 120, 216, 177, 248,
    50,  193, 145, 229, 110, 212, 171, 244,
    86,  203, 158, 237, 129, 220, 182, 252
};

inline float tosRGBFloat(float rgba)
{
    // cheap polynomial approximation of a gamma (~2.2) curve
    float srgb = (rgba * rgba) * (rgba * 0.2848f + 0.7152f);
    return srgb;
}

Math::Vector4* GetPhongRands()
{
    static Math::Vector4 rands[32];
    float r1 = 0.032f;

    for (int it = 0, end = 32; it != end; ++it)
    {
        float r2 = tosRGBFloat(PhongRandsConsts[it] / 255.f);

        //float r2 = Platform::Random::RandFloat(0.f, 1.0f);

        rands[it].x = r1;
        rands[it].y = r2;
        rands[it].z = Math::Cos(2.f * Math::PI<float>() * r2);
        rands[it].w = Math::Sin(2.f * Math::PI<float>() * r2);

        r1 += 0.032f;
    }

    return rands;
}

Aberration

In real life there are several kinds of aberration, such as optical aberration, aberration of light, and relativistic aberration (0).
Aberration of light, also known as astronomical or stellar aberration, relates to space objects: it produces an apparent motion of celestial objects about their true locations depending on the velocity of the observer. So we definitely don't need this in a classic shooter game. Relativistic aberration follows from the accepted physics of the relationship between space and time, i.e. Einstein's special theory of relativity (1). It's very interesting, by the way...

Optical aberration also has several variations, but we will implement simple chromatic aberration (2).

chromatic_

In the real world it occurs when a cheap lens with a short focal length is used. Light of different wavelengths focuses at different distances, making it hard to get a fully sharp image, and you end up with color bleeding.

aberration1

In games it usually appears in action moments. We have 3 channels: one of them stays the same and the other two get a small offset which we can control.

Texture2D t_frame; // : register( t0 );
SamplerState t_sampler; // linear clamp sampler

static const float aberration_r = aberration_parameters.x; // controls the red channel offset
static const float aberration_b = aberration_parameters.y; // controls the blue channel offset

float4 screen; // screen resolution (zw: inverse resolution)
float2 uv; // uv coord
float3 frame; // color of the frame
float2 uv_offset = uv - 0.5f; // offset from the screen center

float2 uv_offset_left = uv_offset * ( 1.0 + aberration_r * screen.z ) + 0.5;
float2 uv_offset_right = uv_offset * ( 1.0 - aberration_b * screen.w ) + 0.5;

frame = float3(
    t_frame.SampleLevel( t_sampler, uv_offset_left, 0 ).r,
    t_frame.SampleLevel( t_sampler, uv, 0 ).g,
    t_frame.SampleLevel( t_sampler, uv_offset_right, 0 ).b
);

Shadow problem

This is the in-shadow result with IBL lighting. It looks pretty dull; perhaps in real life it would look like this, but in the game we would like to keep weapon details readable even in shadow.

Shadow1

One idea is to create a special light source that affects only the specular term and use it in shadowed areas.

Shadow2

diffuse_light = light_accum.diffuse * light_diffuse_factor; // diffuse influence factor
specular_light = light_accum.specular * light_specular_factor; // specular influence factor

factors

It’s simple to create and easy to use.

The Cursed Island: Mask of Baragus

ci-icon

Uncover the mystery of the disappearance in this incredible adventure. Explore the ancient city that is full of secrets and wonders. A captivating journey awaits you!

Game Designer, Scripter, Project Manager, Animator

Light Attenuation

Inverse square law (0)
inverse-square-law

Inverse square law: the arrows represent the flux emanating from the source. The density of arrows per unit area (green square, flux density) decreases with the squared distance.

$$
Intensity = \frac{1}{{distance}^2}
$$

The apparent intensity of light from a source of constant luminosity decreases with the square of the distance from the source.

The main problems with this physically correct attenuation are:

  • It takes forever to reach 0, so it has to be culled somehow; in Unity this type gets culled at 1/256.
  • It needs very LARGE lights to cover an area (the x < 1 region above).
  • In deferred lighting a lot of pixels get processed for very little gain.

Some falloff variants: two of them, RED and GREEN, are good for us; the PURPLE curve has a huge falloff.

curves

One approach to solve this is to window the falloff in such a way that most of the function is unaffected (1). For this we will use a basic linear interpolation to zero based on a distance criterion (2). I also use photometric terms (PBR!):

$$
E = \mathrm{lerp}\!\left(\frac{I}{distance^2},\ 0,\ \frac{distance}{lightRadius}\right) = \frac{I}{distance^2}\left(1 - \frac{distance}{lightRadius}\right)
$$

which evolves into:

$$
E_{v1} = \frac{I}{distance^2}\,\mathrm{saturate}\!\left(1 - \frac{distance^n}{lightRadius^n}\right)^2 \quad \text{(Frostbite 3)}
$$

  • n – tweaks the transition smoothness (Frostbite uses n = 4.0), from (1)

$$
E_{v2} = \frac{\mathrm{saturate}\!\left(1 - (distance/lightRadius)^4\right)^2}{distance^2 + 1} \quad \text{(Unreal 4)}
$$

In real life there is no lightRadius, but in games we can use it: reducing the radius limits how many scene objects fall within the light's range (and therefore how many have to be lit).

Falloff_1 Falloff_2

The second image shows the most acceptable variant: with inverse-square falloff we get more natural results. The first one is the old-style falloff.

Falloff_4 Falloff_5

The near light behaves more naturally, but the light on the side walls of the corridor is still strong; now see the variant below.

Falloff_6

Rain Again

The task is simple: add some PBR detail to the rain. This is what we have from the start.

rain_1

Add a special option for materials to get wet (wood, asphalt and other rough surfaces become darker when it’s raining)

rain_2

Change the ground shader; underfoot it is still the same bare ground.

rain_3

Slate in the rain becomes wet and darker, but it obviously should not shine like metal.

float4 position_with_offset = float4( view_space_position + normal * 1.3f, 1.0f );
float4 projected_position = mul( view_to_shadow, position_with_offset );
float rain_mask = saturate( read_rain_mask( projected_position ) ); // rain mask, where rain effect visible
float roughness = gb.roughness;
float fresnel = gb.fresnel;
float metalness = saturate( ( dot( fresnel, 0.33 ) * 1000 - 500 ) );
float porosity = saturate( ( roughness - 0.5) / 0.1 );
float factor = lerp( 0.1, 1, metalness * porosity );
float3 to_eye = -view_space_position;
diffuse_result *= lerp( 0.85, factor * 5, rain_mask );

rain_4

Add some details and DOF. Perhaps that's it.

rain_5

rain_1 rain_5

Looks better, and that's good.

In-game result with tone mapping.

Energy Conservation

Energy conservation is a restriction on the reflection model that requires that the total amount of reflected light cannot be more than the incoming light (0).

$$
\int_{\Omega} f(\vec{x}, \phi, \theta)\, L_i \cos\theta \, d\omega \leq L_i
$$

  • f – the BRDF
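As a quick sanity check, plug in the Lambertian BRDF $f = c_{diff}/\pi$ with constant incoming radiance $L_i$; using $\int_{\Omega} \cos\theta \, d\omega = \pi$, the condition reduces to the albedo being at most one:

$$
\int_{\Omega} \frac{c_{diff}}{\pi}\, L_i \cos\theta \, d\omega = \frac{c_{diff}}{\pi}\, L_i \cdot \pi = c_{diff}\, L_i \leq L_i \iff c_{diff} \leq 1
$$

This is also where the familiar $1/\pi$ in the diffuse term comes from.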

We have to ensure energy conservation for diffuse and specular, and then the combined model needs to satisfy:

$$ C_d + C_s \leq 1 $$

This means that if you want to make a material more specular, you may have to reduce the diffuse.

BallsEnergy

Left: A sphere with a high diffuse intensity, and low specular intensity. Right: High specular intensity, low diffuse. Middle: Maximum diffuse AND specular intensity. Note how it looks blown out and too bright for the scene (1)

Normalization

The upper part of the table shows the normalized reflection density function (RDF). This is the probability density that a photon from the incoming direction is reflected to the outgoing direction, and is the BRDF times $\cos \theta$. Here, $\theta$ is the angle between $\mathbf{N}$ and $\mathbf{L}$, which is, for the assumed view position, also the angle between $\mathbf{V}$ and $\mathbf{L}$, resp. $\mathbf{R}$ and $\mathbf{L}$.

The lower part of the table shows the normalized normal distribution function (NDF) for a micro-facet model. This is the probability density that the normal of a micro-facet is oriented towards $\mathbf{H}$. It is the same expression in spherical coordinates as that of the Phong RDF, just over a different variable, $\alpha$, the angle between $\mathbf{N}$ and $\mathbf{H}$. The heightfield distribution does it slightly differently: it normalizes the projected area of the micro-facets to the area of the ground plane (adding yet another cosine term). (3)

cornellebox_directlighting cornellebox_directlighting_unormalized

With and without energy-conservation normalization (division by π) (4)

Be careful not to overdo the result when tuning the minimum and maximum.

Common1Common2

Different decals depending on the smoothness map

When it's raining we need different drops depending on the material. To do this we can use the smoothness map to control the surface: the smoother the surface, the more the drops differ.

PuddleSmooth1 PuddleSmoothMap

This is the result without using the smoothness map: we see the same type of drops everywhere.

PuddleSmooth2

Here we use the smoothness map to determine where the surface is rough and where it is not. We also need to make a dual texture.

texture

PuddleSmooth3

And here is the finished result: now the asphalt has its own drops and the puddle its own.

PuddleSmooth4

puddle2