Specular BRDF Reference

////////////////////////
//LD - Light direction//
//VD - View direction //
//N  - Normal         //
//a  - Roughness      //	
////////////////////////
float pi	= 3.14159265358979323;
float a 	= roughness * roughness;
float3 H	= normalize( LD + VD );
float NdotH	= saturate( dot( N, H ) );
float VdotH	= saturate( dot( VD, H ) );
float VdotN	= saturate( dot( N, VD ) );
float LdotN	= saturate( dot( LD, N ) );
float LdotH	= saturate( dot( LD, H ) );

//Normal Distribution Function//
////////////BECKMANN////////////
float NDF_Beckmann = ( 1.0f / ( pi * a * a * pow( NdotH, 4.0f ) ) ) * exp( ( pow( NdotH, 2.0f ) - 1.0f ) / ( a * a * pow( NdotH, 2.0f ) ) );

////////////////////////////////
//////////BLINN_PHONG///////////
float NDF_BlinnPhong = ( 1.0f / ( pi * a * a ) ) * ( pow( NdotH, 2.0 / ( a * a ) - 2.0f ) );

////////////////////////////////
////GGX(TROWBRIDGE-REITZ)///////
float NDF_TrowReitz = ( a * a ) / ( pi * pow( pow( NdotH, 2.0f ) * ( a * a - 1.0f ) + 1.0f, 2.0f ) );


////////////////////////////////
////////////G-TERM//////////////
//////SIMPLE(Implicit)//////////
float G_Simple = LdotN * VdotN;

////////////////////////////////
//////////KELEMEN///////////////
float G_Kelemen = ( LdotN * VdotN ) / ( pow( VdotH, 2.0f ) );

////////////////////////////////
////////////NEUMANN/////////////
float G_Neumann = ( LdotN * VdotN ) / max( LdotN, VdotN );

////////////////////////////////
////////BECKMANN////////////////
float G_Beckmann = 0.0;
float c = VdotN / ( a * sqrt( 1.0f - ( pow( VdotN, 2.0f ) ) ) );
if ( c < 1.6) {
	G_Beckmann = ( 3.535f * c + 2.181 * pow(c, 2.0f ) ) / ( 1.0f + 2.276 * c + 2.577 * pow(c, 2.0f ) );
}
else {
	G_Beckmann = 1.0f;
}

//////////////////////////////// 
////////COOK-TORRANCE///////////
float G_CookTorr = min( 1.0f, min( ( 2.0f * NdotH * VdotN ) / VdotH, ( 2.0f * NdotH * LdotN ) / VdotH ) );

////////////////////////////////
/////////GGX(SMITH-WALTER)////////////
float G_GGXL = ( 2.0f * LdotN ) / ( LdotN + sqrt(a * a + ( 1.0f - a * a ) * ( pow(LdotN, 2.0f ) ) ) );
float G_GGXV = ( 2.0f * VdotN ) / ( VdotN + sqrt(a * a + ( 1.0f - a * a ) * ( pow(VdotN, 2.0f ) ) ) );
float G_GGX_SmithWalter = G_GGXV * G_GGXL; 

////////////////////////////////
/////GGX(SMITH-BECKMANN)////////
float G_GGXL = LdotN / ( a * sqrt(1.0f - ( pow(LdotN, 2.0f ) ) ) );
float G_GGXV = VdotN / ( a * sqrt(1.0f - ( pow(VdotN, 2.0f ) ) ) );
float G_GGX_SmithBeckmann = G_GGXV * G_GGXL; 

////////////////////////////////
/////GGX(SMITH-SCHLICK)/////////
float k = a * sqrt( 2.0f / pi );
float G_GGXL = LdotN / ( LdotN * ( 1.0f - k ) + k );
float G_GGXV = VdotN / ( VdotN * ( 1.0f - k ) + k );
float G_GGX_SmithSchlick = G_GGXV * G_GGXL; 

////////////////////////////////
///////////FRESNEL//////////////
//////////SCHLICK//////////////
float F0 = FRESNEL; //Reflectance at normal incidence
float F_Schlick = F0 + ( 1.0f - F0) * pow(1.0f - VdotH, 5.0f );

////////////////////////////////
/////////COOK-TORRANCE//////////
float Eta = ( 1.0f + sqrt(F0 ) ) / ( 1.0f - sqrt(F0 ) );
float c = VdotH;
float g = sqrt(pow(Eta, 2.0f ) + pow(c, 2.0f ) - 1.0f );
float F = 0.5f * (pow( ( g - c ) / ( g + c ) , 2.0f ) * ( 1.0f + (pow( ( ( g + c ) * c - 1.0f ) / ( ( g - c ) * c + 1.0f ), 2.0f ) ) ) );
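
For reference, one possible way to assemble these terms into the full microfacet specular BRDF (a minimal sketch using the Trowbridge-Reitz D, Smith-Schlick G and Schlick F from above; any of the listed D/G/F variants could be substituted):

//Cook-Torrance microfacet form: spec = D * G * F / ( 4 * LdotN * VdotN )
float D = NDF_TrowReitz;
float G = G_GGX_SmithSchlick;
float F = F_Schlick;
float specBRDF = ( D * G * F ) / max( 4.0f * LdotN * VdotN, 1e-4f ); //max() avoids division by zero at grazing angles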

HDR Image Assembly

We have several LDR images with different exposures, and we need to combine them into one HDR image.

Let the LDR images be numbered
$$
j = 1..n
$$
and let the exposure time of image j be
$$
\Delta t_j
$$

Let the response of a point on the sensor element be the exposure X. We rely on the principle of reciprocity, a physical property of electronic imaging systems; based on it, the exposure can be defined as:  $$ X = E \Delta t \\ E \text{ - illuminance,}\: \Delta t \text{ - exposure time} $$

Unfortunately, the pixel value in the photo is not equal to X. Let:
$$
Z_{ij} = \text{pixel value}\: i\: \text{in image}\: j
$$
then
$$
Z_{ij} = f(X_{ij}) = f(E_i\Delta t_j)
$$
f – the camera response function (0) – converts the exposure to the pixel value.

We can recover this function, or it can be assumed to match the sRGB standard gamma-correction curve with
$$
\gamma = 2.2
$$
In this case (when we know this function)
$$
f^{-1}(Z_{ij}) = E_i\Delta t_j
$$
then
$$
E_i = \frac{f^{-1}(Z_{ij})}{\Delta t_j}
$$
At this moment we have a problem: it is impossible to restore luminance from one image, because we lose information in underexposed and overexposed pixels.
In other words, we need to take information from all the images, with different weighting.
$$
w(Z)\: -\: \text{function used to attenuate the contribution of poorly exposed pixels}
$$
$$
E_i = \frac{\sum\limits_{j=1}^n w(Z_{ij})\frac{f^{-1}(Z_{ij})}{\Delta t_j}}{\sum\limits_{j=1}^n w(Z_{ij})}
$$

We assume that the LDR images are captured in sRGB, so we can use the standard luminance computation which I described earlier (1).
Also
$$
Z_{ij} \in [0;1]
$$

float weight1( float lum )
{
    //"tent" weight: highest at mid-gray, falling to zero at both ends of [0, 1]
    float res = 1.0 - pow( ( 2.0 * lum - 1.0 ), 12.0 );
    return res;
}
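
Putting it all together, a hedged HLSL sketch of the merge; t_ldr, s_point, N_IMAGES and c_exposures are hypothetical names for the stacked LDR frames, a sampler, n and the exposure times, and f^{-1} assumes the gamma 2.2 response discussed above:

Texture2DArray t_ldr;          //hypothetical: the n LDR frames as array slices
SamplerState   s_point;        //hypothetical sampler
static const int N_IMAGES = 3; //hypothetical: n
float c_exposures[N_IMAGES];   //hypothetical: delta t_j for each image

float3 assembleHDR( float2 uv )
{
    float3 num   = 0.0;
    float  denom = 0.0;
    for ( int j = 0; j < N_IMAGES; j++ )
    {
        float3 z   = t_ldr.SampleLevel( s_point, float3( uv, j ), 0 ).rgb;
        float  lum = dot( z, float3( 0.2126, 0.7152, 0.0722 ) ); //sRGB luminance (1)
        float  w   = weight1( lum );
        float3 e   = pow( z, 2.2 ) / c_exposures[j];             //f^-1(Z) / delta t_j, assuming gamma = 2.2
        num   += w * e;
        denom += w;
    }
    return num / max( denom, 1e-4 ); //guard against all samples being poorly exposed
}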

Edit UVs Grid in Maya

The UV Editor has a Grid tool that repositions any currently selected UV to its nearest grid intersection in UV texture space.

GridUV

But the problem is that the MAXIMUM number of grid lines you can set for Grid U and Grid V is 1024.

1024_gridUV

But what if we need, or already have, a higher-resolution texture? Something like 2048 or even 4096?

You can download performPolyGridUV4096; just unzip it to the root Maya folder (2014 and 2015 certainly work).

Now you will be able to set values up to 4096.

4096

Prefiltering Cubemaps

1. Create a PMREM (Prefiltered Mipmapped Radiance Environment Map).
a) Get an HDR map, for example pisa.hdr (0) (a bunch of them here (1)).

pisa_latlong
b) Download HDR Shop (2) free edition.
c) Open HDR Shop.

  • open your .hdr, then go to Image -> Panorama -> Panoramic Transform

Image

  • set Source: Longitude, Dest: Cube Env (Vertical Cross), then Convert

Transform

  • Save As… *.HDR -> pisa_cross.hdr

d) Download ModifiedCubeMapGen-1_66 (3), many thanks to Sebastien Lagarde (4).

  • Load Cube Cross (choose pisa_cross.hdr)
  • Click on Filter Cubemap (to receive a better IBL result, try changing the settings)

Check

  • Check Save MipChain, then click SaveCubeMap(.dds) – pisa_cross.dds

2. Generate SH coefficients.
a) Upload our PMREM (pisa_cross.dds) to the engine.
b) Calculate the spherical harmonics coefficients from the cube texture.

Example functions: evaluate the SH basis for a direction (used to restore the lighting value from SH coefficients), plus add and scale helpers.

void sphericalHarmonicsEvaluateDirection(float * result, int order,
    const Math::Vector3 & dir)
{
    // evaluates the real SH basis up to order 3 (9 coefficients)
    result[0] = 0.282095;
    result[1] = 0.488603 * dir.y;
    result[2] = 0.488603 * dir.z;
    result[3] = 0.488603 * dir.x;
    result[4] = 1.092548 * dir.x*dir.y;
    result[5] = 1.092548 * dir.y*dir.z;
    result[6] = 0.315392 * (3.f*dir.z*dir.z - 1.f);
    result[7] = 1.092548 * dir.x * dir.z;
    result[8] = 0.546274 * (dir.x*dir.x - dir.y*dir.y);
}

void sphericalHarmonicsAdd(float * result, int order,
    const float * inputA, const float * inputB)
{
    const int numCoeff = order * order;
    for (int i = 0; i < numCoeff; i++)
    {
        result[i] = inputA[i] + inputB[i];
    }
}

void sphericalHarmonicsScale(float * result, int order,
    const float * input, float scale)
{
    const int numCoeff = order * order;
    for (int i = 0; i < numCoeff; i++)
    {
        result[i] = input[i] * scale;
    }
}

void sphericalHarmonicsFromTexture(GLuint cubeTexture,
    std::vector<Math::Vector3> & output, const uint order)
{
    const uint sqOrder = order*order;

    // allocate memory for calculations
    output.resize(sqOrder);
    std::vector<float> resultR(sqOrder);
    std::vector<float> resultG(sqOrder);
    std::vector<float> resultB(sqOrder);

    // variables that describe current face of cube texture
    GLubyte* data;
    GLint width, height;
    GLint internalFormat;
    GLint numComponents;

    // initialize values
    float fWt = 0.0f;
    for (uint i = 0; i < sqOrder; i++)
    {
        output[i].x = 0;
        output[i].y = 0;
        output[i].z = 0;
        resultR[i] = 0;
        resultG[i] = 0;
        resultB[i] = 0;
    }
    std::vector<float> shBuff(sqOrder);
    std::vector<float> shBuffB(sqOrder);

    const GLenum cubeSides[6] = {
        GL_TEXTURE_CUBE_MAP_POSITIVE_Y, // Top
        GL_TEXTURE_CUBE_MAP_NEGATIVE_X, // Left
        GL_TEXTURE_CUBE_MAP_POSITIVE_Z, // Front
        GL_TEXTURE_CUBE_MAP_POSITIVE_X, // Right
        GL_TEXTURE_CUBE_MAP_NEGATIVE_Z, // Back
        GL_TEXTURE_CUBE_MAP_NEGATIVE_Y  // Bottom
    };

    // bind current texture
    glBindTexture(GL_TEXTURE_CUBE_MAP, cubeTexture);

    int level = 0;
    // for each face of cube texture
    for (int face = 0; face < 6; face++)
    {
        // get width and height
        glGetTexLevelParameteriv(cubeSides[face], level, GL_TEXTURE_WIDTH, &width);
        glGetTexLevelParameteriv(cubeSides[face], level, GL_TEXTURE_HEIGHT, &height);

        if (width != height)
        {
            return;
        }

        // get format of data in texture
        glGetTexLevelParameteriv(cubeSides[face], level,
            GL_TEXTURE_INTERNAL_FORMAT, &internalFormat);

        // get data from texture
        if (internalFormat == GL_RGBA)
        {
            numComponents = 4;
            data = new GLubyte[numComponents * width * width];
        }
        else if (internalFormat == GL_RGB)
        {
            numComponents = 3;
            data = new GLubyte[numComponents * width * width];
        }
        else
        {
            return;
        }
        glGetTexImage(cubeSides[face], level, internalFormat, GL_UNSIGNED_BYTE, data);

        // step between two texels for range [0, 1]
        float invWidth = 1.0f / float(width);
        // initial negative bound for range [-1, 1]
        float negativeBound = -1.0f + invWidth;
        // step between two texels for range [-1, 1]
        float invWidthBy2 = 2.0f / float(width);

        for (int y = 0; y < width; y++)
        {
            // texture coordinate V in range [-1 to 1]
            const float fV = negativeBound + float(y) * invWidthBy2;

            for (int x = 0; x < width; x++)
            {
                // texture coordinate U in range [-1 to 1]
                const float fU = negativeBound + float(x) * invWidthBy2;

                // determine direction from center of cube texture to current texel
                Math::Vector3 dir;
                switch (cubeSides[face])
                {
                case GL_TEXTURE_CUBE_MAP_POSITIVE_X:
                    dir.x = 1.0f;
                    dir.y = 1.0f - (invWidthBy2 * float(y) + invWidth);
                    dir.z = 1.0f - (invWidthBy2 * float(x) + invWidth);
                    //dir = -dir;
                    break;
                case GL_TEXTURE_CUBE_MAP_NEGATIVE_X:
                    dir.x = -1.0f;
                    dir.y = 1.0f - (invWidthBy2 * float(y) + invWidth);
                    dir.z = -1.0f + (invWidthBy2 * float(x) + invWidth);
                    //dir = dir;
                    break;
                case GL_TEXTURE_CUBE_MAP_POSITIVE_Y:
                    dir.x = -1.0f + (invWidthBy2 * float(x) + invWidth);
                    dir.y = 1.0f;
                    dir.z = -1.0f + (invWidthBy2 * float(y) + invWidth);
                    //dir = dir;
                    break;
                case GL_TEXTURE_CUBE_MAP_NEGATIVE_Y:
                    dir.x = -1.0f + (invWidthBy2 * float(x) + invWidth);
                    dir.y = -1.0f;
                    dir.z = 1.0f - (invWidthBy2 * float(y) + invWidth);
                    //dir = dir; //!
                    break;
                case GL_TEXTURE_CUBE_MAP_POSITIVE_Z:
                    dir.x = -1.0f + (invWidthBy2 * float(x) + invWidth);
                    dir.y = 1.0f - (invWidthBy2 * float(y) + invWidth);
                    dir.z = 1.0f;
                    break;
                case GL_TEXTURE_CUBE_MAP_NEGATIVE_Z:
                    dir.x = 1.0f - (invWidthBy2 * float(x) + invWidth);
                    dir.y = 1.0f - (invWidthBy2 * float(y) + invWidth);
                    dir.z = -1.0f;
                    break;
                default:
                    return;
                }

// normalize direction
dir = Math::Normalize(dir);
// scale factor depending on distance from center of the face
const float fDiffSolid = 4.0f / ((1.0f + fU*fU + fV*fV) *
sqrtf(1.0f + fU*fU + fV*fV));
fWt += fDiffSolid;

// calculate coefficients of spherical harmonics for current direction
sphericalHarmonicsEvaluateDirection(shBuff.data(), order, dir);

// index of texel in texture
uint pixOffsetIndex = (x + y * width) * numComponents;
// get color from texture and map to range [0, 1]
Math::Vector3 clr(
float(data[pixOffsetIndex]) / 255,
float(data[pixOffsetIndex + 1]) / 255,
float(data[pixOffsetIndex + 2]) / 255
);

//clr.x = pow(clr.x, 1.1f);
//clr.y = pow(clr.y, 1.1f);
//clr.z = pow(clr.z, 1.1f);

clr.x = clr.x*0.5f;
clr.y = clr.y*0.5f;
clr.z = clr.z*0.5f;

// scale color and add to previously accumulated coefficients
sphericalHarmonicsScale(shBuffB.data(), order,
shBuff.data(), clr.x * fDiffSolid);
sphericalHarmonicsAdd(resultR.data(), order,
resultR.data(), shBuffB.data());

sphericalHarmonicsScale(shBuffB.data(), order,
shBuff.data(), clr.y * fDiffSolid);
sphericalHarmonicsAdd(resultG.data(), order,
resultG.data(), shBuffB.data());

sphericalHarmonicsScale(shBuffB.data(), order,
shBuff.data(), clr.z * fDiffSolid);
sphericalHarmonicsAdd(resultB.data(), order,
resultB.data(), shBuffB.data());
}
}

delete[] data;
}

    // final scale for coefficients
    const float fNormProj = (4.0f * Math::PI<float>()) / fWt;
    sphericalHarmonicsScale(resultR.data(), order, resultR.data(), fNormProj);
    sphericalHarmonicsScale(resultG.data(), order, resultG.data(), fNormProj);
    sphericalHarmonicsScale(resultB.data(), order, resultB.data(), fNormProj);

    // save result
    for (uint i = 0; i < sqOrder; i++)
    {
        output[i].x = resultR[i];
        output[i].y = resultG[i];
        output[i].z = resultB[i];
    }

    //glBindTexture(GL_TEXTURE_CUBE_MAP, 0);
}

Thanks to Reev(5) from gamedev.ru

An example of the in-game SH-coeff shader support code:

const float PhongRandsConsts[32] =
{
    0,   188, 137, 225, 99,  207, 165, 241,
    71,  198, 151, 233, 120, 216, 177, 248,
    50,  193, 145, 229, 110, 212, 171, 244,
    86,  203, 158, 237, 129, 220, 182, 252
};

inline float tosRGBFloat(float rgba)
{
    // cubic polynomial approximating the gamma ~2.2 transfer curve
    float srgb = (rgba*rgba)*(rgba*0.2848f + 0.7152f);
    return srgb;
}

Math::Vector4* GetPhongRands()
{
    static Math::Vector4 rands[32];
    float r1 = 0.032f;

    for (int it = 0, end = 32; it != end; ++it)
    {
        float r2 = tosRGBFloat(PhongRandsConsts[it] / 255.f);

        //float r2 = Platform::Random::RandFloat(0.f, 1.0f);

        rands[it].x = r1;
        rands[it].y = r2;
        rands[it].z = Math::Cos(2.f * Math::PI<float>() * r2);
        rands[it].w = Math::Sin(2.f * Math::PI<float>() * r2);

        r1 += 0.032f;
    }

    return rands;
}
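
For completeness, a hedged HLSL sketch (not the engine's actual shader) of how the nine coefficients produced by sphericalHarmonicsFromTexture can be evaluated per pixel; c_sh is assumed to be uploaded as a constant array:

float3 evaluateSH9( float3 n, float3 c_sh[9] ) //n - unit surface normal
{
    //same basis as sphericalHarmonicsEvaluateDirection, folded with the RGB coefficients
    float3 r = c_sh[0] * 0.282095f;
    r += c_sh[1] * 0.488603f * n.y;
    r += c_sh[2] * 0.488603f * n.z;
    r += c_sh[3] * 0.488603f * n.x;
    r += c_sh[4] * 1.092548f * n.x * n.y;
    r += c_sh[5] * 1.092548f * n.y * n.z;
    r += c_sh[6] * 0.315392f * ( 3.0f * n.z * n.z - 1.0f );
    r += c_sh[7] * 1.092548f * n.x * n.z;
    r += c_sh[8] * 0.546274f * ( n.x * n.x - n.y * n.y );
    return max( r, 0.0f );
}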

Aberration

In real life we have a couple of aberrations, such as optical aberration, aberration of light, and relativistic aberration (0).
Aberration of light, also known as astronomical or stellar aberration, relates to space objects: it produces an apparent motion of celestial objects about their true locations, dependent on the velocity of the observer. So we definitely don't need this in a classic shooter game. Relativistic aberration refers to the accepted physics of the relationship between space and time described by Einstein's special theory of relativity (1). It's very interesting, by the way…

Optical aberration also has several variations, but we will implement simple chromatic aberration (2).

chromatic_

In the real world it occurs when a cheap lens with a short focal length is used. Light of different wavelengths focuses at different distances, making it hard to get a fully sharp image, and you end up with color bleeding.

aberration1

In games it usually appears in action moments. We have 3 channels; one of them stays the same and two get a small offset which we can control.

Texture2D t_frame; // : register( t0 );
SamplerState s_frame;

static const float aberration_r = aberration_parameters.x; //controls the amount of red channel offset
static const float aberration_b = aberration_parameters.y; //controls the amount of blue channel offset

float4 screen; //screen resolution: zw - inverse width/height
float2 uv; // uv coord
float3 fColor; // color of the frame
float2 uv_offset = uv - 0.5f; // offset from the screen center

float2 uv_offset_left  = uv_offset * ( 1.0 + aberration_r * screen.z ) + 0.5;
float2 uv_offset_right = uv_offset * ( 1.0 - aberration_b * screen.w ) + 0.5;

fColor = float3(
    t_frame.SampleLevel( s_frame, uv_offset_left, 0 ).r,
    t_frame.SampleLevel( s_frame, uv, 0 ).g,
    t_frame.SampleLevel( s_frame, uv_offset_right, 0 ).b
);

Shadow problem

This is the in-shadow result with IBL lighting. It looks pretty dull; perhaps in real life it would look exactly like this, but in the game we would like to focus on weapon details even in shadow.

Shadow1

The idea is to create a special light source that affects only the specular term, and to use it in the shade.

Shadow2

diffuse_light = light_accum.diffuse * light_diffuse_factor; //diffuse influence factor
specular_light = light_accum.specular * light_specular_factor; //specular influence factor

 

factors

It’s simple to create and easy to use.

The Cursed Island: Mask of Baragus

ci-icon

Uncover the mystery of the disappearance in this incredible adventure. Explore the ancient city that is full of secrets and wonders. A captivating journey awaits you!

 

Game Designer, Scripter, Project Manager, Animator

Light Attenuation

Inverse square law (0)
inverse-square-law

Inverse square law: the arrows represent the flux emanating from the source. The density of arrows per unit area (green square, flux density) decreases with the square of the distance.

$$
Intensity = \frac{1}{distance^2}
$$

The apparent intensity of light from a source of constant luminosity decreases with the square of the distance from it.

The main problem of this physically correct attenuation is:

  • It takes forever to reach 0, so we need to cull it somehow; in Unity this type gets culled at 1/256.
  • The intensity becomes extremely large close to the source (x < 1 above)
  • In deferred lighting a lot of pixels get processed for very little gain

Some variants: two are good for us, RED and GREEN. The PURPLE curve has a huge falloff.

curves

One approach to solve this is to window the falloff in such a way that most of the function is unaffected (1). For this we will use a basic linear interpolation to zero based on a distance criterion (2). I also use photometric terms (PBR!):

$$
E = lerp\left(\frac{I}{distance^2},\, 0,\, \frac{distance}{lightRadius}\right) = \frac{I}{distance^2}\left(1 - \frac{distance}{lightRadius}\right)
$$

which leads to

$$
E_{v1} = \frac{I}{distance^2}\, saturate\left(1 - \frac{distance^n}{lightRadius^n}\right)^2 \quad \text{(Frostbite 3)}
$$

  • n – tweaks the transition smoothness (Frostbite uses 4.0), from (1)

$$
E_{v2} = \frac{saturate\left(1 - (distance/lightRadius)^4\right)^2}{distance^2 + 1} \quad \text{(Unreal 4)}
$$

In real life there is no lightRadius, but in games we may use it to limit how many scene objects fall within a light's range (and therefore have to be shaded) by reducing the radius.
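
A small HLSL sketch of both windowed falloffs (the function names and parameters d = distance, r = lightRadius, n = smoothness are my own; multiply the result by the light intensity I):

float falloffFrostbite( float d, float r, float n ) //E_v1
{
    float window = saturate( 1.0f - pow( d / r, n ) );
    return ( window * window ) / ( d * d );
}

float falloffUnreal( float d, float r ) //E_v2
{
    float t2 = ( d / r ) * ( d / r );
    float window = saturate( 1.0f - t2 * t2 );
    return ( window * window ) / ( d * d + 1.0f );
}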

Falloff_1 Falloff_2

The second one is the most acceptable variant: with inverse-square falloff we achieve more natural results; the first one is the old-style falloff.

Falloff_4 Falloff_5

The near light behaves more naturally, but the light on the side walls of the corridor is still strong; now see the variant below.

Falloff_6