3D Fractals

The Mandelbulb is a three-dimensional fractal, constructed by Daniel White and Paul Nylander in 2009 using spherical coordinates.

As we know, a canonical 3-dimensional Mandelbrot set does not exist, since there is no 3-dimensional analogue of the 2-dimensional space of complex numbers. It is possible to construct Mandelbrot sets in 4 dimensions using quaternions; however, this set does not exhibit detail at all scales the way the 2D Mandelbrot set does (0).

The Mandelbulb is then defined as the set of those \mathbf{c} \in \mathbb{R}^3 for which the orbit of \langle 0, 0, 0\rangle under the iteration \mathbf{v} \mapsto \mathbf{v}^n+\mathbf{c} is bounded. For n > 3, the result is a 3-dimensional bulb-like structure with fractal surface detail and a number of “lobes” depending on n. A common choice is n = 8; you can also animate this power value as an example.
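As a minimal sketch of this iteration (the Vec3 type, function names, escape radius of 2 and iteration cap are my own illustrative assumptions, not from the original), one step of the White–Nylander power formula and a boundedness test might look like this:

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// One Mandelbulb step v -> v^n + c, with the power applied in
// spherical coordinates (the White/Nylander formula).
Vec3 mandelbulbStep(const Vec3& v, const Vec3& c, double n) {
    double r = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
    if (r == 0.0) return c;                    // the origin maps straight to c
    double theta = std::acos(v.z / r);         // polar angle
    double phi   = std::atan2(v.y, v.x);       // azimuth
    double rn    = std::pow(r, n);
    return { rn * std::sin(n*theta) * std::cos(n*phi) + c.x,
             rn * std::sin(n*theta) * std::sin(n*phi) + c.y,
             rn * std::cos(n*theta) + c.z };
}

// c belongs to the bulb if the orbit of (0,0,0) stays bounded.
bool inMandelbulb(const Vec3& c, double n = 8.0, int maxIter = 64) {
    Vec3 v{0.0, 0.0, 0.0};
    for (int i = 0; i < maxIter; ++i) {
        v = mandelbulbStep(v, c, n);
        if (v.x*v.x + v.y*v.y + v.z*v.z > 4.0) return false; // escaped
    }
    return true;
}
```

A raymarcher would normally wrap this step in a distance estimator, but the membership test alone already reproduces the bulb's silhouette.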

The Sierpinski triangle is a fractal and attractive fixed set with the overall shape of an equilateral triangle, subdivided recursively into smaller equilateral triangles. It is named after the Polish mathematician Wacław Sierpiński but appeared as a decorative pattern many centuries prior to the work of Sierpiński.

There are many different ways of constructing the Sierpinski triangle; here is a 3D variant rendered with raymarching.
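A hedged sketch of a distance estimator suitable for such raymarching (this is the well-known fold-and-scale construction for a Sierpinski tetrahedron; the scale constant, iteration count and function name are illustrative assumptions, not taken from the original):

```cpp
#include <cmath>

// Raymarching distance estimator for a Sierpinski tetrahedron:
// fold space across the tetrahedron's symmetry planes, scale, repeat.
float sierpinskiDE(float x, float y, float z) {
    const float scale = 2.0f;
    const int   iters = 16;
    for (int n = 0; n < iters; ++n) {
        // fold across the three symmetry planes
        if (x + y < 0) { float t = -y; y = -x; x = t; }
        if (x + z < 0) { float t = -z; z = -x; x = t; }
        if (y + z < 0) { float t = -z; z = -y; y = t; }
        // scale toward the vertex at (1, 1, 1)
        x = x * scale - (scale - 1.0f);
        y = y * scale - (scale - 1.0f);
        z = z * scale - (scale - 1.0f);
    }
    // undo the accumulated scaling to get a distance bound
    return std::sqrt(x*x + y*y + z*z) * std::pow(scale, -(float)iters);
}
```

The estimator returns a small value near the fractal surface and a large one far away, which is exactly what a sphere-tracing loop needs.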

The last one is the Menger sponge (also known as the Menger universal curve), a fractal curve. It is a three-dimensional generalization of the Cantor set and the Sierpinski carpet. It was first described by Karl Menger in 1926 in his studies of the concept of topological dimension.

The Menger sponge simultaneously exhibits an infinite surface area and zero volume.

The construction of a Menger sponge can be described as follows:

  • Begin with a cube (first image).
  • Divide every face of the cube into 9 squares, like a Rubik’s Cube. This will sub-divide the cube into 27 smaller cubes.
  • Remove the smaller cube in the middle of each face, and remove the smaller cube in the very center of the larger cube, leaving 20 smaller cubes.
  • Repeat steps 2 and 3 for each of the remaining smaller cubes.
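The steps above can be turned into a point-membership test (a sketch; the function name and the base-3 digit formulation are my own): a subcube is removed at some level exactly when at least two of the point's three base-3 digits at that level equal 1.

```cpp
#include <cmath>

// Membership test for the level-`depth` Menger sponge on the unit cube.
bool inMengerSponge(double x, double y, double z, int depth) {
    for (int i = 0; i < depth; ++i) {
        x *= 3.0; y *= 3.0; z *= 3.0;          // zoom into the next level
        int ix = (int)x % 3, iy = (int)y % 3, iz = (int)z % 3;
        if ((ix == 1) + (iy == 1) + (iz == 1) >= 2)
            return false;                      // this subcube was removed
        x -= std::floor(x); y -= std::floor(y); z -= std::floor(z);
    }
    return true;                               // survived every subdivision level
}
```

The very center of the cube has all three digits equal to 1 and is removed in the first step, while corners survive every level.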


Radiosity

Radiosity is the energy leaving a surface per unit area and per unit time. The lighting algorithm of the same name follows from the law of conservation of energy and the “energy balance” hypothesis.


Let’s consider a simple example: the 4 walls of a room in 2D. Suppose one of the walls emitted some amount of energy (light) over a very short period of time and then went dark. The law of conservation of energy says that all the energy radiated by this wall remains inside the room; for the emitted light, this means all of it arrives at the other 3 walls. There, some amount of the light is absorbed (depending on the surface type) and the rest is reflected, and the process repeats again and again. This means that in a closed room (a closed system), all the energy once radiated eventually turns into heat. Thus, the phrase “energy balance in the system” actually just means that the level of illumination in the scene does not change with time, i.e. we can ignore time and compute an “instant” solution!

The radiosity algorithm works in terms of energy transferred between small areas (patches).

  1. The surfaces of the scene are divided into a finite number of patches. Patches are small but of finite size, and the algorithm consists of several consecutive steps.
  2. The light sources are also interpreted as patches that radiate energy.
  3. For each pair of patches the so-called form factor F_{ij} is calculated – the fraction of the energy transferred from patch i to patch j – so we obtain a matrix of form factors. From the law of conservation of energy it follows that all the energy
    emitted by patch i must arrive at the other patches, i.e. the sum of the form factors in each row must equal 1.
  4. Once the form factors are known, the linear radiosity equation is solved: we compute what fraction of the energy actually travels between which patches. There are at least 2 ways to do this – direct
    iteration and solving the system of linear equations.
  5. Once the total energy emitted by each patch is known, the scene can be visualized. Since a Lambertian material radiates energy equally in all directions, to obtain the brightness of a patch its energy should be divided by its area.

As the basic geometric element I propose to use axis-aligned rectangles (Axis Aligned Quads). This primitive defines a rectangle located in 1 of 3 planes – XoY, XoZ or YoZ. It can be specified by a position P and a size in the 2 directions perpendicular to the normal vector n – for example, the normal of an XoY rectangle can be either (0,0,1) or (0,0,-1).


Ray intersection with such a primitive is quite easy. Suppose you have a ray starting at point O with direction D, and a quad in the YoZ plane. Then the parametric coordinate t of the intersection point of the ray and the plane equals t_hit = (P.x – O.x) / D.x. To find out whether the ray actually hits the rectangle, check whether the intersection point (computed from the ray equation P_hit = O + D * t_hit) lies within the boundaries of the rectangle.
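The test above can be sketched as follows for a quad lying in the YoZ plane (the Vec3 struct, the half-size parameters and the epsilon are illustrative assumptions):

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// Ray vs. axis-aligned quad in the YoZ plane: the quad sits at x = P.x,
// centered at (P.y, P.z) with half-sizes sy, sz along Y and Z.
bool hitQuadYoZ(const Vec3& O, const Vec3& D, const Vec3& P,
                double sy, double sz, double& tHit) {
    if (std::fabs(D.x) < 1e-9) return false;   // ray parallel to the plane
    tHit = (P.x - O.x) / D.x;                  // t_hit = (P.x - O.x) / D.x
    if (tHit < 0.0) return false;              // intersection behind the origin
    double y = O.y + D.y * tHit;               // P_hit = O + D * t_hit
    double z = O.z + D.z * tHit;
    return std::fabs(y - P.y) <= sy && std::fabs(z - P.z) <= sz;
}
```

The XoY and XoZ cases are identical up to a permutation of the coordinates.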

The calculation of form factors

The differential form factor for two patches i and j is written as follows:

F_{dA_i\,dA_j} = \frac{\cos\theta_i \cos\theta_j}{\pi r^2}


The relative positions of patches.

The full form factor for a given pair of patches equals the integral of the differential form factor over the areas of both patches.

F_{ij} = \frac{1}{A_i}\int_{x\in P_i}\int_{y\in P_j}\frac{\cos\theta_i \cos\theta_j}{\pi r^2}V(x,y)\,dy\,dx

Thus, we have moved from all points of the scene to a form factor between two specific patches. It remains to learn how to compute the double integral over the patch surfaces. A very simple (but slow) way is ray tracing with the Monte Carlo method: generate N pairs of random points on the two patches, evaluate the differential form factor for each pair, and average the results. The average is then multiplied by the area of the second patch (the 1/A_i factor cancels the integration over patch i).
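As a sketch of that estimator (the geometry here – two parallel unit squares one unit apart – plus the sample count and fixed seed are my own test setup, and visibility V is assumed to be 1):

```cpp
#include <cmath>
#include <random>

// Monte Carlo estimate of F_ij for two parallel unit squares:
// patch i lies in the plane z = 0 (normal +Z), patch j in z = 1 (normal -Z).
double formFactorMC(int N) {
    std::mt19937 rng(42);                      // fixed seed for reproducibility
    std::uniform_real_distribution<double> u(0.0, 1.0);
    const double pi = 3.14159265358979323846;
    double sum = 0.0;
    for (int k = 0; k < N; ++k) {
        double xi = u(rng), yi = u(rng);       // random point on patch i
        double xj = u(rng), yj = u(rng);       // random point on patch j
        double dx = xj - xi, dy = yj - yi, dz = 1.0;
        double r2 = dx*dx + dy*dy + dz*dz;
        double cosTi = dz / std::sqrt(r2);     // angle to the +Z normal of i
        double cosTj = dz / std::sqrt(r2);     // angle to the -Z normal of j
        sum += cosTi * cosTj / (pi * r2);      // differential form factor
    }
    const double Aj = 1.0;                     // area of patch j
    return Aj * sum / N;                       // average, scaled by A_j
}
```

For this configuration the analytic form factor is roughly 0.2, so with enough samples the estimate should land near that value.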


Integration of form factors using the Monte Carlo method and ray tracing

Form factor properties:

  1. Dependence on scene geometry only. The form factor depends solely on the
    scene geometry; it does not depend on the positions and intensities of the light sources
    or on the position of the camera.
  2. Reciprocity. Knowing the ratio of patch areas, you can convert one form factor into the other: F_{ij}A_i = F_{ji}A_j.
  3. Additivity. Knowing the form factor for the pair of patches i and j, and the form factor for the pair
    of patches i and k, we can compute the form factor between patch i and the union of patches j and k. This property is useful in the implementation of hierarchical radiosity, where a group of distant patches can be treated as one big patch.
  4. For each patch, the sum of all its form factors over the scene equals 1: all the energy leaving the patch must arrive at some other patch.

Direct radiosity calculation

The direct-calculation algorithm is quite simple. At each iteration, each patch gathers all of its incoming energy. At the end of the iteration the incoming energy is multiplied by the reflection coefficient and becomes the outgoing energy for the next iteration.

struct PatchRadiosity // separate radiosity params from geometry
{
    float3 emission;
    float3 reflectance;
    float3 incident;
    float3 excident;
    float3 deposit; // sum over all passes
};
// std::vector<PatchRadiosity> patchesR(N_PATCHES);

// init phase
for (PatchRadiosity& p : patchesR)
{
    p.excident = p.emission;
    p.emission = float3(0, 0, 0);
    p.deposit += p.excident;
}

// do radiosity
for (int pass = 0; pass < a_numPasses; pass++)
{
    // each patch collects light from the scene
    for (int i = 0; i < patchesR.size(); i++)
    {
        patchesR[i].incident = float3(0, 0, 0);
        for (int j = 0; j < patchesR.size(); j++)
            patchesR[i].incident += patchesR[j].excident * a_formFactors[i][j];
    }
    // deposit emitted energy for the current bounce
    for (PatchRadiosity& patch : patchesR)
    {
        patch.excident = patch.incident * patch.reflectance;
        patch.deposit += patch.excident;
    }
}

Example of using this algorithm

Random direction vectors within a sphere

How do we get a random direction vector vec3(x, y, z) within a sphere of fixed radius 1.0?

Use three angles:
-180° <= a1 < +180°
0° <= a2 <= 90°
0° <= a3 <= 90°

1. Select a random direction in the XY plane: {cos(a3), sin(a3), 0}.
2. Rotate it by the angle a2 around the Z axis, then tilt it out of the plane by the angle a1 around the X axis.

rotX(p) := matrix(
[ 1, 0, 0 ],
[ 0, cos(p), -sin(p) ],
[ 0, sin(p), cos(p) ]
);
rotZ(p) := matrix(
[ cos(p), -sin(p), 0 ],
[ sin(p), cos(p), 0 ],
[ 0, 0, 1 ]
);
dir(p) := matrix(
[ cos(p) ],
[ sin(p) ],
[ 0 ]
);
calc_dir(p1, p2, p3) := rotX(p1) . rotZ(p2) . dir(p3);

You can try this example in Maxima.
The result of matrix multiplication:
vx = cos(p2) * cos(p3) - sin(p2) * sin(p3)
vy = cos(p1) * (cos(p2) * sin(p3) + sin(p2) * cos(p3))
vz = sin(p1) * (cos(p2) * sin(p3) + sin(p2) * cos(p3))

And this is a generated image; as you can see, there are no visible sphere poles, which is great!


We can use it in ray tracing and GI.
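The same matrix product expanded into a direct C++ formula (the Vec3 struct and function name are my own; the expressions match vx, vy, vz above):

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// rotX(p1) * rotZ(p2) * dir(p3) expanded into closed form.
Vec3 calcDir(double p1, double p2, double p3) {
    double s = std::cos(p2)*std::sin(p3) + std::sin(p2)*std::cos(p3); // = sin(p2+p3)
    return { std::cos(p2)*std::cos(p3) - std::sin(p2)*std::sin(p3),   // vx = cos(p2+p3)
             std::cos(p1) * s,                                        // vy
             std::sin(p1) * s };                                      // vz
}
```

Since the expression reduces to (cos(p2+p3), cos(p1)·sin(p2+p3), sin(p1)·sin(p2+p3)), the result is always a unit vector.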

Unity 5 Mobile Water Shader

Unity 5 Mobile Water Shader – in my opinion it is cheap enough while offering a good set of effects.

Shader "Water/MobileWater" {
Properties {
_WaterContrast("Water Contrast", Range(0, 1)) = 0.3
_WaterSpeed("Water Speed", Range(0.01, 2)) = 0.4
_WaterDepth("Water Depth", Range(0, 1)) = 0.3
_WaterTile("Water Tile", Range(1, 10)) = 2.5
_WaveNormal("Wave Normal", 2D) = "bump" {}
_SpecularGloss("Specular Gloss", Range(0.5, 20)) = 1.5
_SpecularPower("Specular Power", Range(0, 2)) = 0.5
_WaveWind("Wave Wind", Range(0, 1)) = 0.5
_WaveSpeed("Wave Speed", Range(0.01, 1)) = 0.4
_WaveHeight("Wave Height", Range(0, 50)) = 0.5
_WaveTile1("Wave Tile X", Range(10, 500)) = 400
_WaveTile2("Wave Tile Y", Range(10, 500)) = 400
_CoastColor("Coast Color", COLOR) = (0.17647, 0.76863, 0.82353, 1)
_CoastDepth("Coast Depth", COLOR) = (0, 1.0, 0.91373, 1)
_OceanColor("Ocean Color", COLOR) = (0.03529, 0.24314, 0.41961, 1)
_OceanDepth("Ocean Depth", COLOR) = (0.25098, 0.50980, 0.92549, 1)
_IslandMask("Island Mask", 2D) = "white" {}
_FoamDiffuse("Foam texture", 2D) = "white" {}
_FoamTileX("Foam Tile X", Range(0, 2.0)) = 0.5
_FoamTileY("Foam Tile Y", Range(0, 2.0)) = 0.5
}

SubShader {
Tags{ "LightMode" = "ForwardBase" }
Pass {
Blend SrcAlpha OneMinusSrcAlpha
CGPROGRAM
#include "UnityCG.cginc"
#pragma vertex vert
#pragma fragment frag
#pragma target 2.0

uniform sampler2D _FoamDiffuse,_IslandMask,_WaveNormal;
uniform float _WaveHeight,_WaterContrast,_WaveTile1,_WaveTile2,_WaterTile,_FoamTileX,_FoamTileY,_WaveWind,_WaterDepth,_WaveSpeed,_WaterSpeed,_SpecularGloss,_SpecularPower;
uniform float4 _Normal,_IslandFoam, _CoastColor, _CoastDepth, _OceanColor, _OceanDepth;

float4 costMaskF(float2 posUV){ return tex2Dlod(_IslandMask, float4(posUV,1.0,1.0));}

struct vertexOutput {
float4 pos : SV_POSITION;
float3 _Color : TEXCOORD0;
float3 _Depth : TEXCOORD1;
float4 _Normal : TEXCOORD2;
float4 _IslandFoam : TEXCOORD3;
};

vertexOutput vert(appdata_base a)
{
vertexOutput o;
float2 uv = a.texcoord.xy;
float4 pos = a.vertex;
// coast setup
float2 posUV = uv;
float4 coastMask = costMaskF(posUV); // mask for coast in blue channel
float animTimeX = uv.y *_WaveTile1 + _Time.w * _WaveSpeed; // add time for shore X
float animTimeY = uv.y *_WaveTile2 + _Time.w * _WaveSpeed; // add time for shore Y
float waveXCos = cos(animTimeX)+1;
float waveYCos = cos(animTimeY);
// coast waves
pos.z += (waveXCos * _WaveWind * coastMask) * coastMask;
pos.y += (waveYCos * _WaveHeight * _WaveWind * 0.25) * coastMask;
o.pos = mul(UNITY_MATRIX_MVP, pos);
// custom uv
float2 foamUV = float2(a.vertex.x *_FoamTileX, a.vertex.z *_FoamTileY);
float2 normalUV = float2(uv.x * 2.0, uv.y * 2.0) * 4.0;
// reflections
float3 lightPos = float3(-22.0, -180.0, -6.80);
float3 lightDir = float3(15.0, 1.0, 10.0);
float3 lightVec = normalize(lightPos - o.pos.xyz);
float lightRef = (1.0 - (dot(lightDir, lightVec)));
lightRef = lightRef * 0.25 + (lightRef * lightRef); // get rid of left side
// edge and depth water
float step = saturate(_WaterDepth);
float depthX = (a.vertex.x * 0.22 - 1.0); // centering depth area
float depthY = (a.vertex.z * 0.22 - 1.5); // centering depth area
float depth = pow((depthX * depthX + depthY * depthY) * 0.006,3);
float edge = saturate(step - (1.0 - depth) * 0.5);
// Vertex Custom Output
o._Color.rgb = lerp(_CoastColor.rgb, _OceanColor.rgb, edge);
o._Depth.rgb = lerp(_CoastDepth.rgb, _OceanDepth.rgb, edge);
o._IslandFoam.xy = posUV;
o._IslandFoam.zw = foamUV + float2(1-_Time.x, 1-_Time.x)*0.5;
o._Normal.xy = normalUV*_WaterTile + float2(0, _Time.x * _WaterSpeed);
o._Normal.w = 1.0 - saturate(lightRef *(length((lightPos.x - (o.pos.z - 35.0))) * 0.002) * _SpecularGloss); // spec coeff
o._Normal.z = (sin((a.vertex.x - _Time.w) - (a.vertex.z + _Time.x) * 5) + 1.0) * 0.5; // normal coeff
return o;
}

float4 frag(vertexOutput i) : COLOR {
float4 normal = tex2D(_WaveNormal, i._Normal.xy);
float3 foam = float3(tex2D(_FoamDiffuse, float2(i._IslandFoam.z, i._IslandFoam.w - _Time.x)).r, 1.0, 1.0);
float3 mask = tex2D(_IslandMask, i._IslandFoam.xy).rgb*foam;
mask.g += _WaterContrast; // contrast point
float4 color = float4(lerp(i._Depth, i._Color, (normal.x * i._Normal.z) + (normal.y * (1.0 - i._Normal.z))), 0.5)
+ exp2(log2((((normal.z) * i._Normal.z)*(1-mask.b*0.75) // waves light
+ (normal.w * (1.0 - i._Normal.z))*(1-mask.b*0.75) // waves light
)* i._Normal.w ) * 3 ) * _SpecularPower ; // narrowing
color = float4(lerp(color, float4(1, 1, 1, mask.x), mask.x) * mask.yyyy); // blend with foam
color.w *= mask.g; // adjust the alpha to add more colors
return color;
}
ENDCG
}
}
Fallback "Unlit/Texture"
}


_IslandMask – uses 4 channels:

.r – use this to select the areas where you want to have the foam.

.g – use this for light and color information.

.b – use this to select the areas where you want to have the waves.

.a – for alpha.

_WaveNormal – use this for the wave, color and specular effects.

This is an example; you can make your own texture, and you also need to add foam.


Settings example


Amulet Of Dreams


A classic HOPA game! You will have help on this path: the Amulet of Worlds, which serves as a portal connecting the edges of reality, and a small goblin who can squeeze into places no human could.

Game Designer, Product Manager, Scripter, QA

Diffuse approximation for consoles

The final Oren-Nayar diffuse with Fresnel approximation from research.tri-ace.com (0), in HLSL.

	// a = roughness				//
	// f0 = fresnel reflectance at normal incidence //
	// LdN = dot( light_direction, surface_normal ) //
	// VdN = dot( view_direction, surface_normal )  //
	// LdV = dot( light_direction, view_direction ) //
	// VdH = dot( view_direction, half_vector )     //
	const float pi = 3.14159265f;
	float fTheta = f0 + (1.0f-f0) * pow(2.0f, -9.60232f*pow(VdH,8.0f)-8.58092f*VdH);
	float Bp = 0.0f;
	float Bpl = LdV-(VdN*LdN);
	float Fr = (1.0f-(0.542026f*(a*a)+0.303573f*a)/((a*a)+1.36053f))*
	(1.0f-pow(1.0f-VdN,5.0f-4.0f*(a*a))/((a*a)+1.36053f))*
	((-0.733996f*(a*a*a)+1.50912f*(a*a)-1.16402f*a)*pow(1.0f-VdN,1.0f+1.0f/(39.0f*(a*a*a*a)+1.0f))+1.0f);
	float Lm = (max(1.0f-(2.0f*a),0.0f)*(1.0f-(1.0f-pow(LdN,5.0f)))+min(2.0f*a,1.0f))*((1.0f-0.5f*a)*LdN+(0.5f*a)*(pow(LdN,2.0f)));
	float Vd = ((a*a)/(((a*a)+0.09f)*(1.31072f+0.995584f*VdN)))*(1.0f-(pow(1.0f-LdN,(1.0f-0.3726732f*(VdN*VdN))/(0.188566f+0.38841f*VdN))));
	if (Bpl < 0.0f)
		Bp = 1.4f*VdN*LdN*(LdV-(VdN*LdN));
	else
		Bp = LdV-(VdN*LdN);
	float aDiffuse = (21.0f/(20.0f*pi))*(1.0f-f0)*((Fr*Lm)+(Vd*Bp));

HDR Image assemble

We have several LDR images with different exposures and we need to assemble one HDR image.

Let the number of these LDR images be n, and the exposure time of image j be
\Delta t_j

Let the response of a point on the sensor element be the exposure X. We rely on the reciprocity principle, a physical property of electronic imaging systems, on the basis of which the exposure can be defined as:  $$ X = E \Delta t \\ E \text{ – illuminance,}\: \Delta t \text{ – time} $$

Unfortunately, the pixel value in the photo is not equal to X. Let us define:
\text{pixel value}\: i\: \text{in image}\: j = Z_{ij}
Z_{ij} = f(X) = f(E_i\Delta t_j)
f – the camera response function (0) – converts the exposure to the pixel value

We can recover this function, or it can be assumed to match the sRGB standard gamma correction curve with
\gamma = 2.2
In this case (when we know this function):
f^{-1}(Z_{ij}) = E_i\Delta t_j
E_i = \frac{f^{-1}(Z_{ij})}{\Delta t_j}
At this moment we have a problem: it is impossible to restore luminance using one image, because information is lost in underexposed and overexposed pixels.
In other words, we need to take information from all the images, with different weighting.
w(Z)\: -\: \text{function used to attenuate the contribution of poorly exposed pixels}
E_i = \frac{\sum\limits_{j=1}^n w(Z_{ij})\frac{f^{-1}(Z_{ij})}{\Delta t_j}}{\sum\limits_{j=1}^n w(Z_{ij})}

We assume that the LDR images are captured in sRGB, so we can use the standard luminance computation which I described earlier (1).
Z_{ij} \in [0;1]

float weight1 ( float lum )
{
	float res = 1.0 - pow((2.0 * lum - 1.0), 12.0);
	return res;
}
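Putting the weighting formula together, a per-pixel assembly sketch might look like this (the gamma-2.2 inverse response, the function names and the sample vectors are my own assumptions, not from the original):

```cpp
#include <cmath>
#include <vector>

// Inverse camera response, assumed to be a plain gamma 2.2 curve.
double invResponse(double z) { return std::pow(z, 2.2); }

// Weight attenuating poorly exposed pixels (same shape as weight1 above).
double weight(double z) { return 1.0 - std::pow(2.0 * z - 1.0, 12.0); }

// Assemble one HDR value E_i from LDR samples Z_ij and exposure times dt_j.
double assemblePixel(const std::vector<double>& Z, const std::vector<double>& dt) {
    double num = 0.0, den = 0.0;
    for (size_t j = 0; j < Z.size(); ++j) {
        double w = weight(Z[j]);
        num += w * invResponse(Z[j]) / dt[j];   // w * f^-1(Z) / dt
        den += w;
    }
    return den > 0.0 ? num / den : 0.0;         // weighted average of E estimates
}
```

For perfectly consistent inputs, every exposure yields the same illuminance estimate, so the weighted average reproduces it exactly.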