Awesome stuff for getting random noise! Source code in GLSL.
Prefiltering Cubemaps
1. Create a PMREM (Prefiltered Mipmapped Radiance Environment Map).
a) Get an HDR map, for example pisa.hdr (0) (a bunch more here (1)).
b) Download HDR Shop (2), free edition.
c) Open HDR Shop.
- open your .hdr – go to Image -> Panorama -> Panoramic Transform
- settings: Source: Longitude, Dest: Cube Env (Vertical Cross) – Convert
- Save as .. *.HDR -> pisa_cross.hdr
d) Download ModifiedCubeMapGen-1_66 (3), many thanks to Sebastien Lagarde (4).
- Load Cube Cross (choose pisa_cross.hdr)
- Click on Filter Cubemap (to get a better IBL result, try changing the settings)
- Check Save MipChain and SaveCubeMap (.dds) – pisa_cross.dds
2. Generate SH coefficients.
a) Upload our PMREM (pisa_cross.dds) to the engine.
b) Calculate the spherical harmonics coefficients from the cube texture.
An example with functions to evaluate the SH basis for a direction, and to add and scale coefficient sets:
void sphericalHarmonicsEvaluateDirection(float * result, int order, const Math::Vector3 & dir)
{
    result[0] = 0.282095;
    result[1] = 0.488603 * dir.y;
    result[2] = 0.488603 * dir.z;
    result[3] = 0.488603 * dir.x;
    result[4] = 1.092548 * dir.x * dir.y;
    result[5] = 1.092548 * dir.y * dir.z;
    result[6] = 0.315392 * (3.f * dir.z * dir.z - 1.f);
    result[7] = 1.092548 * dir.x * dir.z;
    result[8] = 0.546274 * (dir.x * dir.x - dir.y * dir.y);
}

void sphericalHarmonicsAdd(float * result, int order, const float * inputA, const float * inputB)
{
    const int numCoeff = order * order;
    for (int i = 0; i < numCoeff; i++)
        result[i] = inputA[i] + inputB[i];
}

void sphericalHarmonicsScale(float * result, int order, const float * input, float scale)
{
    const int numCoeff = order * order;
    for (int i = 0; i < numCoeff; i++)
        result[i] = input[i] * scale;
}

void sphericalHarmonicsFromTexture(GLuint cubeTexture, std::vector<Math::Vector3> & output, const uint order)
{
    const uint sqOrder = order * order;

    // allocate memory for calculations
    output.resize(sqOrder);
    std::vector<float> resultR(sqOrder);
    std::vector<float> resultG(sqOrder);
    std::vector<float> resultB(sqOrder);

    // variables that describe current face of cube texture
    GLubyte* data;
    GLint width, height;
    GLint internalFormat;
    GLint numComponents;

    // initialize values
    float fWt = 0.0f;
    for (uint i = 0; i < sqOrder; i++)
    {
        output[i].x = 0;
        output[i].y = 0;
        output[i].z = 0;
        resultR[i] = 0;
        resultG[i] = 0;
        resultB[i] = 0;
    }
    std::vector<float> shBuff(sqOrder);
    std::vector<float> shBuffB(sqOrder);

    const GLenum cubeSides[6] = {
        GL_TEXTURE_CUBE_MAP_POSITIVE_Y, // Top
        GL_TEXTURE_CUBE_MAP_NEGATIVE_X, // Left
        GL_TEXTURE_CUBE_MAP_POSITIVE_Z, // Front
        GL_TEXTURE_CUBE_MAP_POSITIVE_X, // Right
        GL_TEXTURE_CUBE_MAP_NEGATIVE_Z, // Back
        GL_TEXTURE_CUBE_MAP_NEGATIVE_Y  // Bottom
    };

    // bind current texture
    glBindTexture(GL_TEXTURE_CUBE_MAP, cubeTexture);
    int level = 0;

    // for each face of cube texture
    for (int face = 0; face < 6; face++)
    {
        // get width and height
        glGetTexLevelParameteriv(cubeSides[face], level, GL_TEXTURE_WIDTH, &width);
        glGetTexLevelParameteriv(cubeSides[face], level, GL_TEXTURE_HEIGHT, &height);
        if (width != height)
            return;

        // get format of data in texture
        glGetTexLevelParameteriv(cubeSides[face], level, GL_TEXTURE_INTERNAL_FORMAT, &internalFormat);

        // get data from texture
        if (internalFormat == GL_RGBA)
        {
            numComponents = 4;
            data = new GLubyte[numComponents * width * width];
        }
        else if (internalFormat == GL_RGB)
        {
            numComponents = 3;
            data = new GLubyte[numComponents * width * width];
        }
        else
        {
            return;
        }
        glGetTexImage(cubeSides[face], level, internalFormat, GL_UNSIGNED_BYTE, data);

        // step between two texels for range [0, 1]
        float invWidth = 1.0f / float(width);
        // initial negative bound for range [-1, 1]
        float negativeBound = -1.0f + invWidth;
        // step between two texels for range [-1, 1]
        float invWidthBy2 = 2.0f / float(width);

        for (int y = 0; y < width; y++)
        {
            // texture coordinate V in range [-1 to 1]
            const float fV = negativeBound + float(y) * invWidthBy2;

            for (int x = 0; x < width; x++)
            {
                // texture coordinate U in range [-1 to 1]
                const float fU = negativeBound + float(x) * invWidthBy2;

                // determine direction from center of cube texture to current texel
                Math::Vector3 dir;
                switch (cubeSides[face])
                {
                case GL_TEXTURE_CUBE_MAP_POSITIVE_X:
                    dir.x = 1.0f;
                    dir.y = 1.0f - (invWidthBy2 * float(y) + invWidth);
                    dir.z = 1.0f - (invWidthBy2 * float(x) + invWidth);
                    break;
                case GL_TEXTURE_CUBE_MAP_NEGATIVE_X:
                    dir.x = -1.0f;
                    dir.y = 1.0f - (invWidthBy2 * float(y) + invWidth);
                    dir.z = -1.0f + (invWidthBy2 * float(x) + invWidth);
                    break;
                case GL_TEXTURE_CUBE_MAP_POSITIVE_Y:
                    dir.x = -1.0f + (invWidthBy2 * float(x) + invWidth);
                    dir.y = 1.0f;
                    dir.z = -1.0f + (invWidthBy2 * float(y) + invWidth);
                    break;
                case GL_TEXTURE_CUBE_MAP_NEGATIVE_Y:
                    dir.x = -1.0f + (invWidthBy2 * float(x) + invWidth);
                    dir.y = -1.0f;
                    dir.z = 1.0f - (invWidthBy2 * float(y) + invWidth);
                    break;
                case GL_TEXTURE_CUBE_MAP_POSITIVE_Z:
                    dir.x = -1.0f + (invWidthBy2 * float(x) + invWidth);
                    dir.y = 1.0f - (invWidthBy2 * float(y) + invWidth);
                    dir.z = 1.0f;
                    break;
                case GL_TEXTURE_CUBE_MAP_NEGATIVE_Z:
                    dir.x = 1.0f - (invWidthBy2 * float(x) + invWidth);
                    dir.y = 1.0f - (invWidthBy2 * float(y) + invWidth);
                    dir.z = -1.0f;
                    break;
                default:
                    return;
                }

                // normalize direction
                dir = Math::Normalize(dir);

                // scale factor depending on distance from center of the face
                const float fDiffSolid = 4.0f / ((1.0f + fU*fU + fV*fV) * sqrtf(1.0f + fU*fU + fV*fV));
                fWt += fDiffSolid;

                // calculate coefficients of spherical harmonics for current direction
                sphericalHarmonicsEvaluateDirection(shBuff.data(), order, dir);

                // index of texel in texture
                uint pixOffsetIndex = (x + y * width) * numComponents;

                // get color from texture and map to range [0, 1]
                Math::Vector3 clr(
                    float(data[pixOffsetIndex]) / 255,
                    float(data[pixOffsetIndex + 1]) / 255,
                    float(data[pixOffsetIndex + 2]) / 255
                );
                clr.x = clr.x * 0.5f;
                clr.y = clr.y * 0.5f;
                clr.z = clr.z * 0.5f;

                // scale color and add to previously accumulated coefficients
                sphericalHarmonicsScale(shBuffB.data(), order, shBuff.data(), clr.x * fDiffSolid);
                sphericalHarmonicsAdd(resultR.data(), order, resultR.data(), shBuffB.data());
                sphericalHarmonicsScale(shBuffB.data(), order, shBuff.data(), clr.y * fDiffSolid);
                sphericalHarmonicsAdd(resultG.data(), order, resultG.data(), shBuffB.data());
                sphericalHarmonicsScale(shBuffB.data(), order, shBuff.data(), clr.z * fDiffSolid);
                sphericalHarmonicsAdd(resultB.data(), order, resultB.data(), shBuffB.data());
            }
        }
        delete[] data;
    }

    // final scale for coefficients
    const float fNormProj = (4.0f * Math::PI<float>()) / fWt;
    sphericalHarmonicsScale(resultR.data(), order, resultR.data(), fNormProj);
    sphericalHarmonicsScale(resultG.data(), order, resultG.data(), fNormProj);
    sphericalHarmonicsScale(resultB.data(), order, resultB.data(), fNormProj);

    // save result
    for (uint i = 0; i < sqOrder; i++)
    {
        output[i].x = resultR[i];
        output[i].y = resultG[i];
        output[i].z = resultB[i];
    }
}
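Once the coefficients are computed, restoring the lighting value for a direction is just a dot product of the evaluated SH basis with the per-channel coefficients. A minimal standalone sketch (the `Vec3` type and function names here are simplified stand-ins for the engine's `Math::Vector3` API, not the actual engine code):

```cpp
// Plain stand-in for Math::Vector3, just for this sketch.
struct Vec3 { float x, y, z; };

// Same 9 basis functions (order = 3) as sphericalHarmonicsEvaluateDirection above.
void shEvaluateDir(float* b, const Vec3& d)
{
    b[0] = 0.282095f;
    b[1] = 0.488603f * d.y;
    b[2] = 0.488603f * d.z;
    b[3] = 0.488603f * d.x;
    b[4] = 1.092548f * d.x * d.y;
    b[5] = 1.092548f * d.y * d.z;
    b[6] = 0.315392f * (3.f * d.z * d.z - 1.f);
    b[7] = 1.092548f * d.x * d.z;
    b[8] = 0.546274f * (d.x * d.x - d.y * d.y);
}

// Restore the lighting value for a direction: a dot product of the evaluated
// basis with the per-channel coefficients from sphericalHarmonicsFromTexture.
Vec3 shRestoreLight(const Vec3 coeffs[9], const Vec3& dir)
{
    float b[9];
    shEvaluateDir(b, dir);
    Vec3 r{0.f, 0.f, 0.f};
    for (int i = 0; i < 9; ++i)
    {
        r.x += coeffs[i].x * b[i];
        r.y += coeffs[i].y * b[i];
        r.z += coeffs[i].z * b[i];
    }
    return r;
}
```

A quick sanity check: with only the constant term set (a uniform environment), the restored value is the same 0.282095 * coeff for every direction.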
Thanks to Reev(5) from gamedev.ru
Example in-game SH-coeff shader:
const float PhongRandsConsts[32] = {
    0, 188, 137, 225, 99, 207, 165, 241,
    71, 198, 151, 233, 120, 216, 177, 248,
    50, 193, 145, 229, 110, 212, 171, 244,
    86, 203, 158, 237, 129, 220, 182, 252
};

inline float tosRGBFloat(float rgba)
{
    float srgb = (rgba * rgba) * (rgba * 0.2848f + 0.7152f);
    return srgb;
}

Math::Vector4* GetPhongRands()
{
    static Math::Vector4 rands[32];
    float r1 = 0.032f;
    for (int it = 0, end = 32; it != end; ++it)
    {
        float r2 = tosRGBFloat(PhongRandsConsts[it] / 255.f);
        //float r2 = Platform::Random::RandFloat(0.f, 1.0f);
        rands[it].x = r1;
        rands[it].y = r2;
        rands[it].z = Math::Cos(2.f * Math::PI<float>() * r2);
        rands[it].w = Math::Sin(2.f * Math::PI<float>() * r2);
        r1 += 0.032f;
    }
    return rands;
}
Aberration
In real life we have a couple of aberrations, such as optical aberration, aberration of light, and relativistic aberration (0).
Aberration of light, also known as astronomical or stellar aberration, relates to space objects: it produces an apparent motion of celestial objects about their locations that depends on the velocity of the observer. So we definitely don't need this in a classic shooter game. Relativistic aberration follows from the accepted physics of the relationship between space and time, described by Einstein's special theory of relativity (1). It's very interesting, by the way...
Optical aberration also has several variations, but we will implement simple chromatic aberration (2).
In the real world it occurs when a cheap lens with a short focal length is used. Light of different wavelengths focuses at different distances, making it hard to get a fully sharp image, and you end up with color bleeding.
In games it usually appears in action moments. We have 3 channels; one of them stays the same and two get a small offset which we can control.
Texture2D t_frame; // : register( t0 );
SamplerState t_sampler;

static const float aberration_r = aberration_parameters.x; // controls the red channel offset
static const float aberration_b = aberration_parameters.y; // controls the blue channel offset

float4 screen;   // screen resolution (zw = inverse resolution)
float2 uv;       // uv coord
float3 frame;    // color of the frame

float2 uv_offset = uv - 0.5f; // offset from screen center
float2 uv_offset_left  = uv_offset * ( 1.0 + aberration_r * screen.z ) + 0.5;
float2 uv_offset_right = uv_offset * ( 1.0 - aberration_b * screen.w ) + 0.5;
frame = float3( t_frame.Sample( t_sampler, uv_offset_left ).r,
                t_frame.Sample( t_sampler, uv ).g,
                t_frame.Sample( t_sampler, uv_offset_right ).b );
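To make the channel-splitting explicit, here is the same idea in plain C++ on a single scanline of pixels (the image data, the clamp-to-edge addressing, and the `amount` parameter are illustration-only assumptions, not the engine's frame pipeline):

```cpp
#include <vector>
#include <algorithm>
#include <cmath>

struct RGB { float r, g, b; };

// Clamp-to-edge sample of one channel from a scanline.
float sampleChannel(const std::vector<float>& line, int x)
{
    x = std::max(0, std::min(int(line.size()) - 1, x));
    return line[x];
}

// Green stays in place; red and blue are fetched with small, opposite
// offsets that grow away from the screen center -- same as the shader above.
std::vector<RGB> chromaticAberration(const std::vector<float>& line, float amount)
{
    const int w = int(line.size());
    std::vector<RGB> out(w);
    for (int x = 0; x < w; ++x)
    {
        float centered = (x + 0.5f) / w - 0.5f;          // position in [-0.5, 0.5]
        int shift = int(std::round(centered * amount));  // offset grows toward the edges
        out[x].r = sampleChannel(line, x + shift);
        out[x].g = sampleChannel(line, x);
        out[x].b = sampleChannel(line, x - shift);
    }
    return out;
}
```

At the exact center the three channels coincide (no fringing), while toward the edges red and blue pull apart, which is the visible color-bleeding effect.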
Shadow problem
The in-shadow result with IBL lighting looks pretty dull. Perhaps in real life it would look just like this, but in the game we would like to emphasize weapon details even in shadow.
The idea is to create a special light source that affects only the specular term and use it in shaded areas.
diffuse_light = light_accum.diffuse * light_diffuse_factor;    // diffuse influence factor
specular_light = light_accum.specular * light_specular_factor; // specular influence factor
It’s simple to create and easy to use.
Detail texture
The Cursed Island: Mask of Baragus
Uncover the mystery of the disappearance in this incredible adventure. Explore the ancient city that is full of secrets and wonders. A captivating journey awaits you!
Game Designer, Scripter, Project Manager, Animator
Light Attenuation
Inverse square law (0)
Inverse square law: the arrows represent the flux emanating from the source. The density of arrows per unit area (green square, flux density) decreases with the squared distance.
$$
Intensity = \frac{1}{{distance}^2}
$$
The apparent intensity of light from a source of constant luminosity decreases with the square of the distance from the object.
The main problems of this physically correct attenuation are:
- It takes forever to reach 0, so it needs to be culled somehow; in Unity this type gets culled at 1/256.
- The intensity becomes very large close to the light (x < 1 above).
- In deferred lighting a lot of pixels get processed for very little gain.
Some variants: two of them, RED and GREEN, are good for us. The PURPLE curve has a huge falloff.
One approach to solve this is to window the falloff in such a way that most of the function is unaffected (1). For this we will use a basic linear interpolation to zero based on a distance criterion (2). I also use photometric terms (PBR!):
$$
E= lerp(\frac{I}{{distance^2}}, 0, \frac{distance}{lightRadius})=(\frac{I}{distance^2})(1 – \frac{distance}{lightRadius})
$$
which leads to
$$
E_{v1} = \frac{I}{distance^2}\, saturate\!\left(1 - \frac{distance^n}{lightRadius^n}\right)^2 \quad \text{(Frostbite 3)}
$$
- n – tweaks the transition smoothness (Frostbite uses n = 4.0), from (1)
$$
E_{v2} = \frac{saturate\!\left(1 - (distance/lightRadius)^4\right)^2}{distance^2 + 1} \quad \text{(Unreal 4)}
$$
In real life there is no lightRadius, but in games we may use it to limit how many objects in the scene fall within the light's range (and therefore have to be lit) by reducing the radius.
The second variant, with inverse square falloff, is the most acceptable: with it we achieve more natural results, while the first one is the old-style falloff.
Near the light it behaves more naturally, but there is still strong light on the side walls of the corridor; now see the variant below.
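The falloffs discussed above can be sketched side by side as plain functions (the function names are mine, and n = 4 as in (1)):

```cpp
#include <algorithm>
#include <cmath>

float saturate(float v) { return std::max(0.f, std::min(1.f, v)); }

// Physically based falloff: never actually reaches zero.
float attenInverseSquare(float I, float d)
{
    return I / (d * d);
}

// Frostbite-style: inverse square times a window that hits zero at lightRadius.
float attenFrostbite(float I, float d, float lightRadius, float n = 4.f)
{
    float w = saturate(1.f - std::pow(d / lightRadius, n));
    return (I / (d * d)) * w * w;
}

// Unreal-style: windowed, with +1 in the denominator so it stays finite at d = 0.
float attenUnreal(float I, float d, float lightRadius)
{
    float w = saturate(1.f - std::pow(d / lightRadius, 4.f));
    return I * w * w / (d * d + 1.f);
}
```

Both windowed variants reach exactly zero at d = lightRadius, which is what makes cheap light culling possible, while staying close to the pure inverse square curve over most of the range.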
Rain Again
The task is simple: add some PBR detail in the rain. This is what we have from the start.
Add a special option for materials to get wet (wood, asphalt and other rough surfaces become darker when it's raining).
Change the ground shader; underfoot it is still the same bare ground.
Slate becomes wet and darker in the rain, but obviously should not shine like metal.
float4 position_with_offset = float4( view_space_position + normal * 1.3f, 1.0f );
float4 projected_position = mul( view_to_shadow, position_with_offset );

// rain mask: where the rain effect is visible
float rain_mask = saturate( read_rain_mask( projected_position ) );

float roughness = gb.roughness;
float fresnel = gb.fresnel;
float metalness = saturate( dot( fresnel, 0.33 ) * 1000 - 500 );
float porosity = saturate( ( roughness - 0.5 ) / 0.1 );
float factor = lerp( 0.1, 1, metalness * porosity );
float3 to_eye = -view_space_position;

diffuse_result *= lerp( 0.85, factor * 5, rain_mask );
Add some details and DOF. Perhaps that's it.
It looks better, which is good.
In-game result with tone mapping.
Energy Conservation
Energy conservation is a restriction on the reflection model that requires that the total amount of reflected light cannot be more than the incoming light (0).
$$
\int_{\Omega} f(\vec{x}, \phi, \theta)\, L_i \cos\theta \, d\omega \leq L_i
$$
- f – the BRDF (any)
We have to account for energy conservation in both the diffuse and specular terms, and then the combined model needs to satisfy:
$$ C_d + C_s \leq 1 $$
This means that if you want to make a material more specular, you may have to reduce the diffuse.
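A minimal sketch of enforcing this constraint at material setup time, via a simple renormalization (the struct and function names here are made up for illustration):

```cpp
struct Material { float diffuse, specular; };

// If diffuse + specular exceeds 1, scale both down so the material
// never reflects more energy than it receives.
Material conserveEnergy(Material m)
{
    float total = m.diffuse + m.specular;
    if (total > 1.f)
    {
        m.diffuse /= total;
        m.specular /= total;
    }
    return m;
}
```

Materials that already satisfy the constraint pass through untouched; over-bright ones are scaled down proportionally, which keeps the diffuse/specular balance the artist chose.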
Left: A sphere with a high diffuse intensity, and low specular intensity. Right: High specular intensity, low diffuse. Middle: Maximum diffuse AND specular intensity. Note how it looks blown out and too bright for the scene (1)
The upper part of the table shows the normalized reflection density function (RDF). This is the probability density that a photon from the incoming direction is reflected to the outgoing direction, and is the BRDF times a cosine factor of the angle between the reflection vector and the view direction.
The lower part of the table shows the normalized normal distribution function (NDF) for a micro-facet model. This is the probability density that the normal of a micro-facet is oriented towards the half-vector. It is the same expression in spherical coordinates as that for the Phong RDF, just over a different variable: the angle between the surface normal and the half-vector. The heightfield distribution does it slightly differently; it normalizes the projected area of the micro-facets to the area of the ground plane (adding yet another cosine term). (3)
With and without the 1/π energy conservation factor (4)
Be careful not to overdo the result.
Different decals depending on smoothmap
When it's raining we need different drops depending on the material. To do this we can use the smoothmap to control the surface: the smoother the surface, the more distinct the drops.
This is the result without using the smoothmap. We see the same type of drops everywhere.
Here we use the smoothmap to determine where the surface is rough and where it is not. We also need to make a dual texture.
And here is the finished result: now the asphalt has its own drops and the puddle its own.
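The selection step can be sketched as a simple threshold on the sampled smoothmap value (the threshold value and the two-index "dual texture" convention are illustrative assumptions, not the engine's actual scheme):

```cpp
// Pick which half of the dual decal texture to use for a rain drop,
// based on the surface smoothness sampled from the smoothmap:
// 0 = rough-surface drops (asphalt), 1 = smooth-surface drops (puddle).
int selectRainDecal(float smoothness, float threshold = 0.5f)
{
    return smoothness > threshold ? 1 : 0;
}
```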