Tile frustum calculation for point light culling

Tile-based deferred shading uses the DirectX Compute Shader to achieve high performance with many lights: the light-culling overhead is amortized over screen tiles, and lights are grouped per tile to avoid wasting memory bandwidth.

As you know, we need to calculate a frustum for each tile for our tile-based rendering – see (0), Intel's sample “Deferred Rendering for Current and Future Rendering Pipelines”.

#define COMPUTE_SHADER_TILE_GROUP_DIM 10
// Work out scale/bias from [0, 1]
float2 tileScale = float2(mFramebufferDimensions.xy) * rcp(float(2 * COMPUTE_SHADER_TILE_GROUP_DIM));
float2 tileBias = tileScale - float2(groupId.xy);

// Now work out composite projection matrix
// Relevant matrix columns for this tile frusta
float4 c1 = float4(mCameraProj._11 * tileScale.x, 0.0f, tileBias.x, 0.0f);
float4 c2 = float4(0.0f, -mCameraProj._22 * tileScale.y, tileBias.y, 0.0f);
float4 c4 = float4(0.0f, 0.0f, 1.0f, 0.0f);

// Derive frustum planes
float4 frustumPlanes[6];
// Sides
frustumPlanes[0] = c4 - c1; // (-X direction)
frustumPlanes[1] = c1;      // (+X direction)
frustumPlanes[2] = c4 - c2; // (-Y direction)
frustumPlanes[3] = c2;      // (+Y direction)
// Near/far
frustumPlanes[4] = float4(0.0f, 0.0f, 1.0f, -minTileZ);
frustumPlanes[5] = float4(0.0f, 0.0f, -1.0f, maxTileZ);

// Normalize frustum planes (near/far already normalized)
[unroll] for (uint i = 0; i < 4; ++i) {
    frustumPlanes[i] *= rcp(length(frustumPlanes[i].xyz));
}

In the code above we compute the normals of the six planes that make up the tile's frustum.
The calculation is done in view space, but the plane normals are derived from tile coordinates (measured in tiles), not from view-space distances.

Meaning of the variables:

mFramebufferDimensions = width and height of the frame buffer, in pixels
tileScale = the maximum value of a tile coordinate system whose center is (0, 0) and in which one tile = 1.0
tileBias = the coordinate of the tile currently being processed in that coordinate system (it starts at +tileScale for groupId = 0 and decreases by 1.0 per tile)

In the following description, mFramebufferDimensions = (600, 600) and COMPUTE_SHADER_TILE_GROUP_DIM = 10.
Since tileScale = 600 / (2 * 10) = 30, the tile coordinate system runs from a minimum of -30.0 to a maximum of 30.0:
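To make the numbers concrete, here is the same calculation written out for these assumed values (only the y axis is shown; x behaves the same way):

// tileScale = 600 / (2 * 10) = 30 on both axes
float2 tileScale = float2(600.0f, 600.0f) * rcp(float(2 * 10));   // = (30.0, 30.0)
// tileBias.y then runs from +30 down to -29 across the 60 tile rows:
//   groupId.y = 0   ->  tileBias.y = 30 - 0  =  30
//   groupId.y = 1   ->  tileBias.y = 30 - 1  =  29
//   groupId.y = 59  ->  tileBias.y = 30 - 59 = -29
float2 tileBias = tileScale - float2(groupId.xy);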

[figure: the tile coordinate system, ranging from -30 to 30]

Now let's try to calculate the normal of the +Y side of the frustum.
float4 c2 = float4(0.0f, tileBias.y, mCameraProj._22 * tileScale.y, 0.0f); (this c2 is the tile coordinate drawn in view space: a vector from the eye along the tile boundary in the YZ plane)

The value of mCameraProj._22 is row 2, column 2 of the projection matrix, which equals 1.0f / tan(fovY / 2) (1.0 for a 90-degree vertical field of view, for example).

First, consider the case of groupId = 0.

[figure: c2 for groupId = 0]

Rotating this c2 by 90 degrees gives the normal direction of the +Y side plane of the frustum, and we get exactly the expression from Intel's sample:
float4 c2' = float4(0.0f, -mCameraProj._22 * tileScale.y, tileBias.y, 0.0f);
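The rotation itself is just the 2D rule (y, z) -> (-z, y) applied in the view-space YZ plane, which you can check by hand:

// 90-degree rotation in the YZ plane: (y, z) -> (-z, y)
// edge direction of the +Y boundary: (tileBias.y, mCameraProj._22 * tileScale.y)
// rotated by 90 degrees:             (-mCameraProj._22 * tileScale.y, tileBias.y)
// ...which is exactly the c2 used by the Intel sample.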

[figure: c2 rotated 90 degrees, groupId = 0]

Similarly, consider the case of the neighbouring tile, groupId = 1 (in this case tileBias.y = tileScale - 1.0 = 29.0).

[figure: c2 for groupId = 1]

As before, rotating this c2 by 90 degrees gives the normal direction of the +Y side plane, and again it matches the expression from Intel's sample:
float4 c2' = float4(0.0f, -mCameraProj._22 * tileScale.y, tileBias.y, 0.0f);

[figure: c2 rotated 90 degrees, groupId = 1]

Now try to calculate the normal of the lower side of the frustum (the -Y direction).
float4 c2' = float4(0.0f, -mCameraProj._22 * tileScale.y, tileBias.y, 0.0f);
float4 c4 = float4(0.0f, 0.0f, 1.0f, 0.0f);
(c2' and c4 are the tile coordinates drawn in view space)

The normal direction of the lower side of the frustum (-Y direction) can then be represented by c4 - c2'.

Consider the case of groupId = 0.

[figure: c4 - c2' for groupId = 0]
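A quick numeric check under the 600x600 / 10x10 assumptions above (and the usual symmetric projection, where NDC y = mCameraProj._22 * y / z):

// groupId.y = 0  =>  tileBias.y = 30
// c2'      = (0, -30 * mCameraProj._22,  30, 0)
// c4 - c2' = (0,  30 * mCameraProj._22, -29, 0)
// dot(c4 - c2', v) >= 0   <=>   mCameraProj._22 * y / z >= 29/30   <=>   NDC y >= 29/30
// That is exactly the lower screen edge of tile row 0: pixel rows 0..10 of a
// 600-pixel-tall framebuffer cover NDC y in [29/30, 1].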

Then, for each point light, test its position against this tile frustum (a sphere-vs-frustum check against the six planes).

// Cull lights for this tile
for (uint lightIndex = groupIndex; lightIndex < numPointLights; lightIndex += COMPUTE_SHADER_TILE_GROUP_SIZE) {
    // Transform the light position into view space, since the frustum planes are in view space
    float3 lightPosition = mul(float4(PointLightDataArray[lightIndex].pos.xyz, 1.0f), matView).xyz;
    float cutoffRadius = PointLightDataArray[lightIndex].col_radius.w;

    // Cull: point light sphere vs tile frustum
    bool inFrustum = true;
    [unroll] for (uint i = 0; i < 6; ++i) {
        float d = dot(frustumPlanes[i], float4(lightPosition, 1.0f));
        inFrustum = inFrustum && (d >= -cutoffRadius);
    }

    [branch] if (inFrustum) {
        // Append light to list
        // Compaction might be better if we expect a lot of lights
        uint listIndex;
        InterlockedAdd(sTileNumLights, 1, listIndex);
        sTileLightIndices[listIndex] = lightIndex;
    }
}
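For reference, the loop above relies on some group-shared bookkeeping and synchronization around it. A minimal sketch of what that might look like (the s* names match the snippet; MAX_LIGHTS_PER_TILE and the exact placement are assumptions, not code copied from the sample):

#define MAX_LIGHTS_PER_TILE 256                           // illustrative per-tile light budget

groupshared uint sTileNumLights;                          // written by InterlockedAdd above
groupshared uint sTileLightIndices[MAX_LIGHTS_PER_TILE];  // per-tile list of visible light indices

// Before the culling loop: one thread clears the counter, then the group syncs.
//     if (groupIndex == 0) sTileNumLights = 0;
//     GroupMemoryBarrierWithGroupSync();
// After the loop: sync again so every thread sees the complete list before shading with it.
//     GroupMemoryBarrierWithGroupSync();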

Visual course

I am starting my new course – aimed at 2D/3D artists and technical artists, and it may also be interesting to some programmers. The topics:

  • Color – let's talk about color
    • RGB and other models – how RGB came about
    • Gamma correction – what gamma correction is today, white balance, etc.
    • Linear space – what linear space is and why we use it
    • LDR/HDR – what HDR and LDR are and what the difference is
    • Tonemapping – how to use tonemapping and why we need it
  • Forward vs Deferred
    • Deferred is everywhere – why everybody switched to deferred
    • Problems – what problems we have with deferred
    • Combination – how we can use both of them and why we would do this
    • New approach – Screen space effects
  • PBR workflows – why use PBR
    • Shading models – Lambert, Burley, etc.
    • Specular vs metallic – what the difference is
    • Albedo, glossiness, roughness, specular, AO, cavity – the different maps
    • Metals and dielectrics – how to choose
    • Lighting models – which models there are
  • Lighting – for PBR
    • BRDFs – what a BRDF is
    • Different BRDF/BTDF models – what the differences are
    • Energy conservation – why it's important
    • Fresnel – what the Fresnel term is and how it affects metals
    • How IBL fits here – when we use IBL
    • Area lights – what the difference is between area and directional lights
  • Image-based lighting – what IBL is
    • IBL in games – how and why we use it
    • Irradiance environment map (IEM) – approaches
    • Spherical harmonics – approaches
    • Volumetric lighting
  • PBR Camera – final image post-processing
    • Camera – why it is important (LUV, shutter speed, and other parameters)
    • High values – color space and dynamic range
    • Antialiasing
    • Optical effects – Optic, Lenses, Bokeh, Glare, Anamorphic
    • Motion blur, DOF
  • Custom Lighting – complicated
    • Photon lighting
    • Crytek LPV
    • Unreal voxel cone tracing
  • Ray Tracing and Distance functions
    • Raymarch – what it is and why we need to know about it
    • Approximations – how and why we should do it
    • Examples of approach – ray traced distance field soft shadows
  • Texture Compression
    • Memory, quality, power, performance – the four horsemen of texture compression
    • DXT – S3 compression
    • BCn – compression
    • Atlases – what they are and why we need to use them
    • UV space – from 0 to 1, mirror, wrap and others

The main purpose is to build a solid understanding of the things everybody talks about every day without knowing exactly what they are. This is not the final listing; it is still a work in progress and I keep adding more topics to cover. Everything is in a fairly simple form, without complicated formulas or calculations; the task of the course is to learn the concepts we operate with every day.

3D Fractals

The Mandelbulb is a three-dimensional fractal, constructed by Daniel White and Paul Nylander using spherical coordinates in 2009.

As we know, the canonical 3-dimensional Mandelbrot set doesn’t exist, since there is no 3-dimensional analogue of the 2-dimensional space of complex numbers. It is possible to construct Mandelbrot sets in 4 dimensions using quaternions. However, this set does not exhibit detail at all scales like the 2D Mandelbrot set does (0).

The Mandelbulb is then defined as the set of those points c in ℝ³ for which the orbit of ⟨0, 0, 0⟩ under the iteration v ↦ v^n + c is bounded. For n > 3, the result is a 3-dimensional bulb-like structure with fractal surface detail and a number of “lobes” depending on n. Usually n = 8 is used, or, as an example, you can animate this power value.
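Since these fractals are usually rendered with raymarching (as for the Sierpinski triangle and Menger sponge below), here is a minimal sketch of a Mandelbulb distance estimator in the White/Nylander spherical-coordinate form. The function and constant names are illustrative and not taken from any particular sample:

// Distance estimator for the Mandelbulb, iteration v -> v^POWER + c.
// Returns an underestimate of the distance from point c to the set, usable by a raymarcher.
float MandelbulbDE(float3 c)
{
    const int   MAX_ITER = 8;
    const float POWER    = 8.0f;    // the "n" above; try animating it
    const float BAILOUT  = 2.0f;

    float3 v  = c;                  // the orbit starts at 0, so after one step v = c
    float  dr = 1.0f;               // running derivative |dv/dc| for the distance estimate
    float  r  = length(v);

    for (int i = 0; i < MAX_ITER; ++i)
    {
        if (r > BAILOUT) break;

        // to spherical coordinates
        float theta = acos(v.z / max(r, 1e-6f));
        float phi   = atan2(v.y, v.x);

        // v^POWER in spherical form: raise the radius to POWER, multiply the angles by POWER
        dr = POWER * pow(r, POWER - 1.0f) * dr + 1.0f;
        float zr = pow(r, POWER);
        theta *= POWER;
        phi   *= POWER;

        // back to cartesian, then add c
        v = zr * float3(sin(theta) * cos(phi), sin(theta) * sin(phi), cos(theta)) + c;
        r = length(v);
    }

    // standard distance-estimate formula for escape-time fractals
    return 0.5f * log(max(r, 1e-6f)) * r / dr;
}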

The Sierpinski triangle is a fractal and attractive fixed set with the overall shape of an equilateral triangle, subdivided recursively into smaller equilateral triangles. It is named after the Polish mathematician Wacław Sierpiński but appeared as a decorative pattern many centuries prior to the work of Sierpiński.

There are many different ways of constructing the Sierpinski triangle; here is a 3D way using raymarching.
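As one possible variant, here is a minimal sketch of a distance estimator for the 3D Sierpinski tetrahedron using the common plane-folding trick; it illustrates the idea and is not necessarily the exact shader behind the images:

// Distance estimator for a Sierpinski tetrahedron built from plane folds.
// SCALE = 2 halves the shape every iteration; OFFSET is one vertex of the tetrahedron.
float SierpinskiDE(float3 p)
{
    const int    ITERATIONS = 10;
    const float  SCALE      = 2.0f;
    const float3 OFFSET     = float3(1.0f, 1.0f, 1.0f);

    for (int i = 0; i < ITERATIONS; ++i)
    {
        // fold the point across the symmetry planes of the tetrahedron
        if (p.x + p.y < 0.0f) p.xy = -p.yx;
        if (p.x + p.z < 0.0f) p.xz = -p.zx;
        if (p.y + p.z < 0.0f) p.yz = -p.zy;

        // scale towards the chosen vertex
        p = p * SCALE - OFFSET * (SCALE - 1.0f);
    }
    // undo the accumulated scaling to get a world-space distance
    return length(p) * pow(SCALE, -float(ITERATIONS));
}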

And the last one is the Menger sponge (also known as the Menger universal curve), a fractal curve. It is a three-dimensional generalization of the Cantor set and the Sierpinski carpet. It was first described by Karl Menger in 1926, in his studies of the concept of topological dimension.

The Menger sponge simultaneously exhibits an infinite surface area and zero volume.

The construction of a Menger sponge can be described as follows (a distance-function sketch of the same steps comes after the list):

  • Begin with a cube (first image).
  • Divide every face of the cube into 9 squares, like a Rubik’s Cube. This will sub-divide the cube into 27 smaller cubes.
  • Remove the smaller cube in the middle of each face, and remove the smaller cube in the very center of the larger cube, leaving 20 smaller cubes.
  • Repeat steps 2 and 3 for each of the remaining smaller cubes.
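These carve-out steps map almost directly onto a raymarching distance function: start from the distance to a cube and repeatedly subtract a scaled-down cross of square beams. A minimal sketch along the lines of Inigo Quilez's well-known construction (function names and the iteration count are mine):

// Signed distance to an axis-aligned box with half-size b, centred at the origin.
float sdBox(float3 p, float3 b)
{
    float3 d = abs(p) - b;
    return min(max(d.x, max(d.y, d.z)), 0.0f) + length(max(d, 0.0f));
}

// Distance estimator for a Menger sponge of half-size 1, centred at the origin.
float MengerDE(float3 p)
{
    float d = sdBox(p, float3(1.0f, 1.0f, 1.0f));    // step 1: begin with a cube
    float s = 1.0f;

    for (int i = 0; i < 4; ++i)                      // steps 2-4, repeated
    {
        // map into the current 3x3x3 cell, in [-1, 1) per axis
        // (floor-based mod, because HLSL's fmod keeps the sign of the dividend)
        float3 q = p * s;
        float3 a = q - 2.0f * floor(q * 0.5f) - 1.0f;
        s *= 3.0f;
        float3 r = abs(1.0f - 3.0f * abs(a));

        // c is positive inside the cross of three square beams removed from each cell
        float da = max(r.x, r.y);
        float db = max(r.y, r.z);
        float dc = max(r.z, r.x);
        float c  = (min(da, min(db, dc)) - 1.0f) / s;

        d = max(d, c);                               // carve the cross out of what remains
    }
    return d;
}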