# Knots and Polyhedra

Over at Fractal Forums, DarkBeam came up with a Distance Estimator for a trefoil knot in this thread. Here are a few samples, created using the new ‘soft’ raytracer I’m working on in Fragmentarium:

These kinds of knots are easy to describe by a parametrized curve, but making a distance estimator for them is impressive – I wouldn’t have guessed it was possible at all.

It is also possible to create several variations:

In the same thread, Knighty came up with an impressive figure-8 knot distance estimator:

In another thread at Fractal Forums, Knighty also published an interesting technique (“Fold and Cuts”) for creating a large variety of distance estimated polyhedra:

(If you wonder about the materials, I’ve added some 3D Perlin noise to the distance estimate – this is a simple way to create a structural texture, and it creates true displacements, not just surface normal perturbations.)

The threads linked to above contain Fragmentarium scripts with the relevant distance estimators.

# Pseudo-Kleinian Blue

Developing a new progressive (distributed) ray tracer in Fragmentarium to better capture soft lighting phenomena (depth-of-field, ambient occlusion, soft shadows, HDR).

The fractal system is a Pseudo-Kleinian DE by Knighty (originating from Theli-at).

# Fragmentarium FAQ

Here is a small collection of questions I’ve been asked from time to time about Fragmentarium.

## Why does Fragmentarium crash?

The most common cause for this is that a drawing operation takes too long to complete. On Windows, there is a two-second maximum time limit on jobs executing on the GPU. After that, a GPU watchdog timer will kill the graphics driver and restore it (resulting in an unresponsive, possibly black, display for 5-10 seconds). The host process (here Fragmentarium) will also be shut down by Windows.

If this happens, you will get errors like this:

"The NVIDIA OpenGL driver lost connection with the display driver
and is unable to continue.
The application must close.... Error code: 8"
"Display driver stopped responding and has recovered"


Workarounds:
Try lowering the number of ray steps, the number of iterations, or use the preview feature. Notice that for high-resolution renders the time limit applies to each tile, so it is still possible to render high-resolution images.

Another solution is to change the watchdog timer behaviour via the TDR Registry Keys:

The TDR registry keys were not defined in my registry, but I added a

DWORD TdrDelay = 30

and a

DWORD TdrDdiDelay = 30

key to

HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\GraphicsDrivers

which sets a 30 second window for GPU calculations. You have to restart Windows to apply these settings. Be advised that changing these settings may render your system completely unresponsive if the GPU crashes.
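For convenience, the same settings can be written as a .reg file and merged (a sketch; note that the dword values are hexadecimal, so 1e equals 30 seconds):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\GraphicsDrivers]
"TdrDelay"=dword:0000001e
"TdrDdiDelay"=dword:0000001e
```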

## Why does Fragmentarium not work on my GPU?

Even though GLSL is a well-defined standard, there are differences between vendor implementations and drivers. The computers I use have Nvidia cards, so Fragmentarium is mostly tested on Nvidia’s platform.

Typical problems:

The ATI compiler is stricter about casting. For instance, adding an integer to a float results in an error rather than a warning:

float a = b + 3;


The ATI compiler also seems to suffer from some loop optimization problems: some of the fractals in Fragmentarium do not work on my ATI card if the number of iterations is not locked (or hard-coded in the fragment code). Inserting a condition into the loop also solves the problem.

The Intel GPU compiler also has some issues: for instance, some operations on literal constants result in errors:

vec3 up = normalize(vec3(1.0,0.0,0.0)); // fails on Intel


## I get weird errors about disabled widgets?

If you see warnings like:

Unable to find 'FloorNormal' in shader program. Disabling widget.


it does not indicate a problem. For instance, the warning above appears when the EnableFloor checkbox is locked and disabled. In this case, the GLSL compiler will optimize the floor code away, and the FloorNormal uniform variable will no longer be part of the compiled program – hence the warning. These warnings can be safely ignored.

## Why do your Fragmentarium images look much nicer than the ones I get in Fragmentarium?

The images I post are always rendered at a higher resolution using the High Resolution Render option. I then downscale the images to a lower resolution. This reduces aliasing and rendering artifacts. Use a painting program with a proper downscaling filter – I use Paint.NET, which seems to work okay.

Before rendering in hi-res, use the Tile Preview to zoom in, and adjust the detail level and number of ray steps, so the image looks okay at the Tile Preview resolution.

## Why is Fragmentarium slower than BoxPlorer/Fractals.io/…?

The default ray tracer in Fragmentarium has grown somewhat complex. It is possible to gain speed by locking variables, but this is somewhat tedious. Another solution is to change to another raytracer, e.g. change

#include "DE-Raytracer.frag"


to either

#include "Fast-Raytracer.frag"


(a faster version of the one above, which will remember all settings)

or

#include "Subblue-Raytracer.frag"


(Tom Beddard’s raytracer, which uses a set of different parameters)

## Can I use double precision in my shaders?

Most modern graphics cards support double precision numbers in hardware, so in principle, yes, if your card supports it. In practice, it is much more difficult:

First, the Fragmentarium presets and sliders (including camera settings) will only transfer data (uniforms) to the GPU as single precision floats. This is not the biggest problem, since you might only need double precision for the numbers that accumulate errors. The Qt OpenGL wrappers I use don’t support double types, but it would be possible to work around this if needed.

Second, while newer GLSL versions do support double precision numbers (through types such as double, dvec3, dmat3), not all of the built-in functions support them. In particular, there are no trigonometric or exponential functions. So no cos(double), exp(double), etc. The available functions are described here.

Third, it might be slow: when Nvidia designed their Fermi architecture, used in their recent graphics cards, they built it so that it should be able to process double precision operations at half the speed of single precision operations (which is optimal, given the double size of the numbers). However, they decided that their consumer branch (the GeForce cards) should be artificially limited to run double precision calculations at 1/8 the speed of single precision. Their Tesla line of cards (which shares architecture with the GeForce branch) is not artificially throttled and will run double precision at half the speed of single precision. As for the AMD/ATI cards, I do not think they have similar limitations, but I’m not sure about this.

If you really still want to try, you must insert this command at the top of the script:

#extension GL_ARB_gpu_shader_fp64 : enable


(Or use a #version directive, but this will likely cause problems with most of the existing examples.)

Finally, what about emulating double precision numbers instead of using the hardware versions? While this sounds very slow, it is probably not much slower than the throttled implementation. The downside is that GLSL does not support operator overloading: there is no syntactically nice way to implement such functionality – instead of just changing your data types, you must convert all code from e.g.

A = B*C+D;


to

A = dplus(dmul(B,C),D);

If you are still interested, here is a great introduction to emulating doubles in GLSL.
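To give a flavour of what the emulation involves, here is a sketch (in Python, for illustration only) of the error-free ‘two-sum’ building block that double-float emulation libraries are based on; the function names are mine, not from any particular library:

```python
def two_sum(a: float, b: float):
    """Error-free addition: returns (s, e) with s = fl(a+b) and a + b == s + e exactly."""
    s = a + b
    bb = s - a
    e = (a - (s - bb)) + (b - bb)
    return s, e

# An emulated 'double' is a pair (hi, lo); adding two of them chains two_sum calls:
def df_add(x, y):
    s, e = two_sum(x[0], y[0])
    e += x[1] + y[1]
    hi, lo = two_sum(s, e)
    return (hi, lo)
```

The same idea (error-free transformations on pairs of floats) extends to multiplication and division, which is what the dplus/dmul style rewriting above would call into.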

## How do I report errors, so that it is easiest for you to correct them?

(Well, actually nobody asks this question.)

If you find an error, please report the following:
– Operating system, and the graphics card you are using
– The version of Fragmentarium, and whether you built it yourself
– A reproducible description of the steps that caused the error (if possible).

You may mail errors to me at mikael (at) hvidtfeldts.net.

# Fragmentarium 0.9 Released

I’ve released a new build of Fragmentarium, version 0.9 (“Sun Ra”).

(As of now only Windows builds are available)

### New features

• New high resolution output dialog (with auto-naming and auto-backup of scripts and parameters)
• Locking of parameters (makes rendering considerably faster in some cases by turning uniforms into constants)

### Minor improvements

• Mouse wheel now changes camera position, not FOV
• Default raytracer improvements:
• Removed ‘AntiAliasBlur’, ‘MaxRayStepDiv’, and ‘NormalDetail’ settings
• Added dithering (for banding removal) and very simple RNG functionality.
• Added new ambient occlusion method (similar to Rrola/Subblue – sample along normal for proximity).
• Added a simpler, but faster version of my raytracer. Just include “Fast-Raytracer.frag” instead of “DE-Raytracer.frag”
• Added a port of Subblue’s Fractal Lab (fractal.io) raytracer (GPL). Just include “Subblue-Raytracer.frag” instead of “DE-Raytracer.frag”
• Added vertical scroll bars to user parameter groups.
• Added ‘#define providesInit’, ‘#define providesColor’ (to provide custom coloring)
• Added vec4 GUI slider type
• Now disables uniforms which are not used in shader.
• Compiler warnings are now shown in output, even when there are no errors.
• Bugfix: resize of window now updates aspect ratio.
• Bugfix: handling of specular exponent = 0
• BugFix: fixed fps timer for >500ms renders
• BugFix: Tile Render now works in manual mode.
• BugFix: Using mouse and key movement at the same time would result in the distance between eye and target getting smaller and smaller.

### New fragments

• A whole collection of new fragments from Knighty!
• Some of my own new fragments: GraphPlotter, KaliSet, BurningShip, Spudsville, Mandelbrot2D, Terrain.
• 2DJulia.frag and Complex.frag to make 2D fractals easier.
• New raytracers: Subblue-Raytracer.frag, Fast-Raytracer.frag
• QuilezLib with Iñigo Quilez’s collection of sphere tracing primitives
• Ashima-Noise.frag, a library of Noise functions.

Some of the new utility libraries make fractals very easy to explore – for instance, 2D escape time fractals.

Here is an example:

#include "2DJulia.frag"
#include "Complex.frag"

// 'Ducks' fractal by Samuel Monnier
vec2 formula(vec2 z, vec2 c) {
    return cLog(vec2(z.x,abs(z.y)))+c;
}


which produces this:
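As a cross-check, the formula is easy to evaluate outside the shader; here it is transcribed to Python with cmath (cLog corresponds to the complex logarithm; the helper name is mine):

```python
import cmath

def ducks_formula(z: complex, c: complex) -> complex:
    # 'Ducks' step: take abs of the imaginary part, then the complex log, then add c
    z = complex(z.real, abs(z.imag))
    return cmath.log(z) + c

# One iteration step from z = 1+1j with c = 0.1+0.2j:
w = ducks_formula(1 + 1j, 0.1 + 0.2j)
```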

Other examples:

Terrain example – example of the new noise library

Mandelbrot heightmap example – based on Knighty’s example

For more examples of images generated with the new version, take a look at the Flickr Fragmentarium stream.

### Notice for ATI users

Several fragments fail on ATI cards. This seems to be due to faulty GLSL driver optimizations. A workaround is to lock the ‘iterations’ variable (click the padlock next to it). Adding a bailout check inside the main DE loop (e.g. ‘if (length(z)>1000.0) break;’) also seems to do the job.

# Distance Estimated 3D Fractals (VII): Dual Numbers

Previous parts: part I, part II, part III, part IV, part V, and part VI.

This was supposed to be the last blog post on distance estimated 3D fractals, but then I stumbled upon the dual number formulation, and decided it would blend in nicely with the previous post. So this blog post will be about dual numbers, and the next (and probably final) post will be about hybrid systems, heightmap rendering, interior rendering, and links to other resources.

## Dual Numbers

Many of the distance estimators covered in the previous posts used a running derivative. This concept can be traced back to the original formula for the distance estimator for the Mandelbrot set, where the derivative is described iteratively in terms of the previous values:

$$f'_n(c) = 2f_{n-1}(c)f'_{n-1}(c)+1$$

In the previous post, we saw how the Mandelbox could be described by a running Jacobian matrix, and how this matrix could be replaced by a single running scalar derivative, since the Jacobians for the conformal transformations all have a particularly simple form (and thanks to Knighty, the argument was extended to non-Julia Mandelboxes).

Now, some months ago I stumbled upon automatic differentiation and dual numbers, and after having done some tests, I think this is a very nice framework to complete the discussion of distance estimators.

So what are these dual numbers? The name might sound intimidating, but the concept is very simple: we extend the real numbers with another component – much like the complex numbers:

$$x = (x_r, x_d) = x_r + x_d \epsilon$$

where $$\epsilon$$ is the dual unit, similar to the imaginary unit i for the complex numbers. The square of a dual unit is defined as: $$\epsilon * \epsilon = 0$$.

Now for any function which has a Taylor series, we have:

$$f(x+dx) = f(x) + f'(x)dx + (f''(x)/2)dx^2 + \ldots$$

If we let $$dx = \epsilon$$, it follows:

$$f(x+\epsilon) = f(x) + f'(x)\epsilon$$

because the higher order terms vanish. This means that if we evaluate our function with a dual number $$d = x + \epsilon = (x,1)$$, we get a dual number back, (f(x), f'(x)), where the dual component contains the derivative of the function.

Compare this with the finite difference scheme for obtaining a derivative. Take a quadratic function as an example and evaluate its derivative, using a step size ‘h’:

$$f(x) = x*x$$

This gives us the approximate derivative:

$$f'(x) \approx \frac{f(x+h)-f(x)}{h} = \frac{x^2 + 2xh + h^2 - x^2}{h} = 2x+h$$

The finite difference scheme introduces an error, here equal to h. The error gets smaller as h gets smaller (it converges towards the true derivative), but numerical differentiation introduces inaccuracies.

Compare this with the dual number approach. For dual numbers, we have:

$$x*x = (x_r+x_d\epsilon)*(x_r+x_d\epsilon) = x_r^2 + (2 * x_r * x_d )\epsilon$$.

Thus,

$$f(x_r + \epsilon) = x_r^2 + (2 * x_r)*\epsilon$$

Since the dual component is the derivative, we have f'(x) = 2*x, which is the exact answer.

But the real beauty of dual numbers is that they make it possible to keep track of the derivative during the actual calculation, using forward accumulation. Simply by replacing all numbers in our calculations with dual numbers, we end up with the answer together with the derivative. Wikipedia has a very nice article that explains this in more detail: Automatic Differentiation. The article also lists several arithmetic rules for dual numbers.
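As an illustration (GLSL has no operator overloading, so this is a Python sketch; the class is mine, not from the post):

```python
class Dual:
    """Dual number a + b*eps with eps*eps = 0."""
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b

    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.a + o.a, self.b + o.b)

    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # (a1 + b1*eps)(a2 + b2*eps) = a1*a2 + (a1*b2 + b1*a2)*eps
        return Dual(self.a * o.a, self.a * o.b + self.b * o.a)

def f(x):
    return x * x  # f(x) = x^2, so f'(x) = 2x

y = f(Dual(3.0, 1.0))  # seed the dual part with 1 to get df/dx
# y.a == 9.0 (the value), y.b == 6.0 (the exact derivative)
```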

For the Mandelbox, we have a defining function R(p), which returns the length of the iterated point after a fixed number of iterations of the Mandelbox formula: scale*spherefold(boxfold(z))+p. The DE is then DE = R/DR, where DR is the length of the gradient of R.

R is a scalar-valued vector function. To find the gradient, we need to find the derivative along the x, y, and z directions. We can do this using dual vectors and evaluate the three directions separately, e.g. for the x-direction, evaluate $$R(p_r + \epsilon (1,0,0))$$. In practice, it is more convenient to keep track of all three dual vectors during the calculation, since we can reuse part of the calculations. So we use a 3×3 matrix to track our derivatives during the calculation.

Here is some example code for the Mandelbox:

void sphereFold(inout vec3 z, inout mat3 dz) {
    float r2 = dot(z,z);
    if (r2 < minRadius2) {
        // simply scale the dual vectors
        float temp = (fixedRadius2/minRadius2);
        z *= temp; dz *= temp;
    } else if (r2 < fixedRadius2) {
        float temp = (fixedRadius2/r2);
        dz[0] = temp*(dz[0]-z*2.0*dot(z,dz[0])/r2);
        dz[1] = temp*(dz[1]-z*2.0*dot(z,dz[1])/r2);
        dz[2] = temp*(dz[2]-z*2.0*dot(z,dz[2])/r2);
        z *= temp;
    }
}

// reverse signs for dual vectors when folding
void boxFold(inout vec3 z, inout mat3 dz) {
    if (abs(z.x)>foldingLimit) { dz[0].x*=-1.0; dz[1].x*=-1.0; dz[2].x*=-1.0; }
    if (abs(z.y)>foldingLimit) { dz[0].y*=-1.0; dz[1].y*=-1.0; dz[2].y*=-1.0; }
    if (abs(z.z)>foldingLimit) { dz[0].z*=-1.0; dz[1].z*=-1.0; dz[2].z*=-1.0; }
    z = clamp(z, -foldingLimit, foldingLimit) * 2.0 - z;
}

float DE(vec3 z)
{
    // dz contains our three dual vectors,
    // initialized to x,y,z directions.
    mat3 dz = mat3(1.0,0.0,0.0, 0.0,1.0,0.0, 0.0,0.0,1.0);

    vec3 c = z;
    mat3 dc = dz;
    for (int n = 0; n < Iterations; n++) {
        boxFold(z,dz);
        sphereFold(z,dz);
        z *= Scale;
        dz = mat3(dz[0]*Scale, dz[1]*Scale, dz[2]*Scale);
        z += c*Offset;
        dz += matrixCompMult(mat3(Offset,Offset,Offset), dc);
        if (length(z)>1000.0) break;
    }
    return dot(z,z)/length(z*dz);
}


The 3×3 matrix dz contains our three dual vectors (they are stored as columns in the matrix, dz[0], dz[1], dz[2]).

In order to calculate the dual numbers, we need to know how to calculate the length of z, and how to divide by the length squared (for sphere folds).

Using the definition of the product for dual numbers, we have:

$$|z|^2 = z \cdot z = z_r^2 + (2 z_r \cdot z_d)*\epsilon$$

For the length, we can use the power rule, as defined on Wikipedia:

$$|z_r + z_d \epsilon| = \sqrt{z_r^2 + (2 z_r \cdot z_d)*\epsilon} = |z_r| + \frac{(z_r \cdot z_d)}{|z_r|}*\epsilon$$

Using the rule for division, we can derive:

$$z/|z|^2=(z_r+z_d \epsilon)/( z_r^2 + 2 z_r \cdot z_d \epsilon)$$
$$= z_r/z_r^2 + \epsilon (z_d*z_r^2-2z_r*z_r \cdot z_d)/z_r^4$$
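Both rules can be sanity-checked numerically, since the dual component must equal the directional derivative along $$z_d$$; a small NumPy check (my own, not from the post):

```python
import numpy as np

zr = np.array([0.7, -0.3, 0.5])   # real part
zd = np.array([0.2, 0.4, -0.1])   # dual part
h = 1e-6
r2 = zr.dot(zr)

# Length rule: dual part of |z| should be (zr . zd)/|zr|
num_len = (np.linalg.norm(zr + h * zd) - np.linalg.norm(zr - h * zd)) / (2 * h)
ana_len = zr.dot(zd) / np.linalg.norm(zr)

# Division rule: dual part of z/|z|^2 should be (zd*|zr|^2 - 2*zr*(zr.zd))/|zr|^4
inv = lambda q: q / q.dot(q)
num_div = (inv(zr + h * zd) - inv(zr - h * zd)) / (2 * h)
ana_div = (zd * r2 - 2.0 * zr * zr.dot(zd)) / r2**2
```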

Given these rules, it is relatively simple to update the dual vectors: for the sphereFold, we either multiply by a real number or use the division rule above. For the boxFold, there is both a multiplication (sign change) and a translation by a real number, which is ignored for the dual numbers. The (real) scaling factor is also trivially applied to both real and dual vectors. Then there is the addition of the original vector, where we must remember to also add the original dual vector.

Finally, using the length as derived above, we find the length of the full gradient as:

$$DR = \sqrt{ (z_r \cdot z_x)^2 + (z_r \cdot z_y)^2 + (z_r \cdot z_z)^2 } / |z_r|$$

In the code example, the vectors are stored in a matrix, which makes a more compact notation possible: DR = length(z*dz)/length(z), leading to the final DE = R/DR = dot(z,z)/length(z*dz).
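To sanity-check the scheme outside a shader, here is a NumPy transcription of the dual-vector DE above (a sketch, not the original code: the parameter values Scale = 2, foldingLimit = 1, fixedRadius2 = 1, minRadius2 = 0.25, Offset = (1,1,1) and three iterations are my own assumptions):

```python
import numpy as np

# Assumed parameters (my choices, not from the original post)
SCALE, FOLD_L, FIXED_R2, MIN_R2, ITER = 2.0, 1.0, 1.0, 0.25, 3

def step(z, J, c):
    # Box fold: conditional reflection per component; flip the matching row of J.
    sign = np.where(np.abs(z) > FOLD_L, -1.0, 1.0)
    z = np.clip(z, -FOLD_L, FOLD_L) * 2.0 - z
    J = J * sign[:, None]
    # Sphere fold: linear inner scaling, or sphere inversion with its Jacobian.
    r2 = z.dot(z)
    if r2 < MIN_R2:
        t = FIXED_R2 / MIN_R2
        z, J = z * t, J * t
    elif r2 < FIXED_R2:
        t = FIXED_R2 / r2
        J = t * (J - np.outer(z, 2.0 * z.dot(J) / r2))
        z = z * t
    # Scale and translate; '+ c' adds the identity to the dual vectors.
    return SCALE * z + c, SCALE * J + np.eye(3)

def run(p):
    z, J = p.astype(float), np.eye(3)  # J's columns are the three dual vectors
    for _ in range(ITER):              # bailout check omitted for brevity
        z, J = step(z, J, p)
    return z, J

p = np.array([0.6, 0.2, 0.1])
z, J = run(p)
grad = z.dot(J) / np.linalg.norm(z)       # gradient of R(p) = |z_N(p)|
DE = z.dot(z) / np.linalg.norm(z.dot(J))  # = dot(z,z)/length(z*dz)
```

For points away from the conditional branch boundaries, grad agrees with a numerical gradient of R(p) to within finite-difference error.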

There are some advantages to using the dual numbers approach:

• Compared to the four-point Makin/Buddhi finite difference approach, the arbitrary epsilon (step distance) is avoided – which should give better numerical accuracy. It is also slightly faster computationally.
• Very general – e.g. it works for non-conformal cases, where running scalar derivatives fail. The images here are from a Mandelbox where a different scaling factor was applied to each direction (making it non-conformal). This is not possible to capture with a running scalar derivative.

On the other hand, the method is slower than using running scalar estimators, and it does require code changes. It should be mentioned that libraries exist for languages supporting operator overloading, such as C++.

Since we find the gradient directly in this method, we can also use it as a surface normal – an advantage compared to the scalar derivatives, which normally use a finite difference scheme for the normals. Using the code example, the normal is:

// (Unnormalized) normal
vec3 normal = vec3(dot(z,dz[0]),dot(z,dz[1]),dot(z,dz[2]));


It should be noted that in my experiments, I found the finite difference method produced better normals than the above definition. Perhaps because it smooths them? The problem was somewhat solved by backstepping a little before calculating the normal, but this again introduces an arbitrary distance step.

Now, I said the scalar method was faster – and for a fixed number of ray steps it is – but let us take a closer look at the distance estimator function:

The above image shows a sliced Mandelbox.

The graph in the lower right corner shows a plot of the DE function along a line (two dimensions held fixed): the blue curve is the DE function, and the red line shows the derivative of the DE function. The function is plotted for the dual number derived DE. We can see that our DE is well-behaved here: for a consistent DE, the slope can never be higher than 1, and when we move away from the side of the Mandelbox in a perpendicular direction, the derivative of the DE should be plus or minus one.

Now compare this to the scalar estimated DE:

Here we see that the DE is less optimal – the slope is ~0.5 for this particular line graph. Actually, the slope would be close to one if we omitted the ‘+1’ term for the scalar estimator, but then it overshoots slightly in some places inside the Mandelbox.

We can also see that there are holes in our Mandelbox – this is because for this fixed number of ray steps, we do not get close enough to the fractal surface to hit it. So even though the scalar estimator is faster, we need to crank up the number of ray steps to achieve the same quality.

## Final Remarks

The whole idea of introducing dual derivatives of the three unit vectors seems to be very similar to having a running Jacobian matrix estimator – and I believe the methods are essentially identical. After all, we try to achieve the same thing: keeping a running record of how the R(p) function changes when we vary the input along the axes.

But I think the dual numbers offer a nice theoretical framework for calculating the DE, and I believe they could be more accurate and faster than finite difference four-point gradient methods. However, more experiments are needed before this can be asserted.

Scalar estimators will always be the fastest, but they are probably only optimal for conformal systems – for non-conformal systems, it seems necessary to introduce terms that make them too conservative, as demonstrated by the Mandelbox example.

The final part contains all the stuff that didn’t fit in the previous posts, including references and links.

# Distance Estimated 3D Fractals (VI): The Mandelbox

Previous parts: part I, part II, part III, part IV and part V.

After the Mandelbulb, several new types of 3D fractals appeared at Fractal Forums. Perhaps one of the most impressive and unique is the Mandelbox. It was first described in this thread, where it was introduced by Tom Lowe (Tglad). Similar to the original Mandelbrot set, an iterative function is applied to points in 3D space, and points which do not diverge are considered part of the set.

Tom Lowe has a great site, where he discusses the history of the Mandelbox, and highlights several of its properties, so in this post I’ll focus on the distance estimator, and try to make some more or less convincing arguments about why a scalar derivative works in this case.

The Mandelbulb and Mandelbrot systems use a simple polynomial formula to generate the escape-time sequence:

$$z_{n+1} = z_n^\alpha + c$$

The Mandelbox uses a slightly more complex transformation:

$$z_{n+1} = scale*spherefold(boxfold(z_n)) + c$$

I have mentioned folds before. These are simply conditional reflections.
A box fold is a similar construction: if the point, p, is outside a box with a given side length, reflect the point in the box side. Or as code:

if (p.x>L) { p.x = 2.0*L-p.x; } else if (p.x<-L) { p.x = -2.0*L-p.x; }


(This must be done for each dimension. Notice that in GLSL this can be expressed elegantly in a single operation for all dimensions: p = clamp(p,-L,L)*2.0-p.)

The sphere fold is a conditional sphere inversion. If a point, p, is inside a sphere with a fixed radius, R, we will reflect the point in the sphere, e.g:

float r = length(p);
if (r<R) p=p*R*R/(r*r);


(Actually, the sphere fold used in most Mandelbox implementations is slightly more complex and adds an inner radius, where the length of the point is scaled linearly.)
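For experimentation outside a shader, the two folds are easy to transcribe; a sketch in Python/NumPy (the parameter defaults are arbitrary examples):

```python
import numpy as np

def box_fold(p, L=1.0):
    # Reflect each component in the box side when outside [-L, L]
    return np.clip(p, -L, L) * 2.0 - p

def sphere_fold(p, R=1.0):
    # Reflect the point in the sphere of radius R when inside it
    r = np.linalg.norm(p)
    return p * R * R / (r * r) if r < R else p

q = box_fold(np.array([1.5, 0.2, -2.0]))  # q == [0.5, 0.2, 0.0]
```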

Now, how can we create a DE for the Mandelbox?

Again, it turns out that it is possible to create a distance estimator based on a running scalar derivative. I think the first scalar formula was suggested by Buddhi in this thread at Fractal Forums. Here is the code:

float DE(vec3 z)
{
    vec3 offset = z;
    float dr = 1.0;
    for (int n = 0; n < Iterations; n++) {
        boxFold(z,dr);       // Reflect
        sphereFold(z,dr);    // Sphere Inversion

        z = Scale*z + offset;  // Scale & Translate
        dr = dr*abs(Scale) + 1.0;
    }
    float r = length(z);
    return r/abs(dr);
}


where the sphereFold and boxFold may be defined as:

void sphereFold(inout vec3 z, inout float dz) {
    float r2 = dot(z,z);
    if (r2 < minRadius2) {
        // linear inner scaling
        float temp = (fixedRadius2/minRadius2);
        z *= temp;
        dz *= temp;
    } else if (r2 < fixedRadius2) {
        // this is the actual sphere inversion
        float temp = (fixedRadius2/r2);
        z *= temp;
        dz *= temp;
    }
}

void boxFold(inout vec3 z, inout float dz) {
    z = clamp(z, -foldingLimit, foldingLimit) * 2.0 - z;
}


It is possible to simplify this even further by storing the scalar derivative as the fourth component of a 4-vector. See Rrrola’s post for an example.
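For reference, the scalar estimator is also easy to transcribe to Python/NumPy (a sketch with assumed parameters – Scale = 2, foldingLimit = 1, fixedRadius2 = 1, minRadius2 = 0.25 – and a fixed bailout; these values are my own choices):

```python
import numpy as np

SCALE, FOLD_L, FIXED_R2, MIN_R2, ITER = 2.0, 1.0, 1.0, 0.25, 12

def mandelbox_de(p):
    z, dr = p.astype(float), 1.0
    for _ in range(ITER):
        z = np.clip(z, -FOLD_L, FOLD_L) * 2.0 - z  # box fold
        r2 = z.dot(z)
        if r2 < MIN_R2:                            # linear inner scaling
            t = FIXED_R2 / MIN_R2
            z, dr = z * t, dr * t
        elif r2 < FIXED_R2:                        # sphere inversion
            t = FIXED_R2 / r2
            z, dr = z * t, dr * t
        z = SCALE * z + p                          # scale & translate
        dr = dr * abs(SCALE) + 1.0                 # running scalar derivative
        if z.dot(z) > 1e6:                         # bailout
            break
    return np.linalg.norm(z) / abs(dr)

d = mandelbox_de(np.array([2.0, 2.0, 2.0]))
```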

However, one thing that is missing is an explanation of why this distance estimator works. And even though I do not completely understand the mechanism, I’ll try to justify this formula. It is not a strict derivation, but I think it offers some understanding of why the scalar distance estimator works.

## A Running Scalar Derivative

Let us say that for a given starting point, p, we obtain a length, R, after having applied a fixed number of iterations. If the length is less than $$R_{min}$$, we consider the orbit to be bounded, and the point is thus part of the fractal; otherwise it is outside the fractal set. We want to obtain a distance estimate for this point p. Now, the distance estimate must tell us how far we can go in any direction before the final radius falls below the minimum radius, $$R_{min}$$, and we hit the fractal surface. One approximation would be to find the direction where R decreases fastest, and do a linear extrapolation to estimate when R becomes less than $$R_{min}$$:

$$DE=(R-R_{min})/DR$$

where DR is the magnitude of the derivative along this steepest descent (this is essentially Newton root finding).

In the previous post, we argued that the linear approximation to a vector-function is best described using the Jacobian matrix:

$$F(p+dp) \approx F(p) + J(p)dp$$

The fastest decrease is thus given by the induced matrix norm of J, since the matrix norm is the maximum of $$|J v|$$ for all unit vectors v.

So, if we could calculate the (induced) matrix norm of the Jacobian, we would arrive at a linear distance estimate:

$$DE=(R-R_{min})/\|J\|$$

Calculating the Jacobian matrix norm sounds tricky, but let us take a look at the different transformations involved in the iteration loop: Reflections (R), Sphere Inversions (SI), Scalings (S), and Translations (T). It is also common to add a rotation (ROT) inside the iteration loop.

Now, for a given point, we will end up applying an iterated sequence of operations to see if the point escapes:

$$Mandelbox(p) = (T\circ S\circ SI\circ R\circ \ldots\circ T\circ S\circ SI\circ R)(p)$$

In the previous part, we argued that the most natural derivative for an R^3 to R^3 function is the Jacobian. According to the chain rule for Jacobians, the Jacobian for a function such as Mandelbox(z) will be of the form:

$$J_{Mandelbox} = J_T * J_S * J_{SI} * J_R \cdots J_T * J_S * J_{SI} * J_R$$

In general, all of these matrices will be functions on R^3, which should be evaluated at different positions. Now, let us take a look at the individual Jacobian matrices for the Mandelbox transformations.

Translations

A translation by a constant will simply have the identity matrix as its Jacobian, as can be seen from the definitions.

Reflections

Consider a simple reflection in one of the coordinate system planes. The transformation matrix for this is:

$$T_{R} = \begin{bmatrix} 1 & & \\ & 1 & \\ & & -1 \end{bmatrix}$$

Now, the Jacobian of a transformation defined by multiplying with a constant matrix is simply the constant matrix itself. So the Jacobian is also simply a reflection matrix.

Rotations

A rotation (for a fixed angle and rotation vector) is also a constant matrix, so the Jacobian is simply a rotation matrix.

Scalings

The Jacobian for a uniform scaling operation is:

$$J_S = scale*\begin{bmatrix} 1 & & \\ & 1 & \\ & & 1 \end{bmatrix}$$

Sphere Inversions

Below can be seen how the sphere fold (the conditional sphere inversion) transforms a uniform 2D grid. As can be seen, the sphere inversion is an anti-conformal transformation – the angles are still 90 degrees at the intersections, except at the boundary where the sphere inversion stops.

The Jacobian for sphere inversions is the most tricky. But a derivation leads to:

$$J_{SI} = (r^2/R^2) \begin{bmatrix} 1-2x^2/R^2 & -2xy/R^2 & -2xz/R^2 \\ -2yx/R^2 & 1-2y^2/R^2 & -2yz/R^2 \\ -2zx/R^2 & -2zy/R^2 & 1-2z^2/R^2 \end{bmatrix}$$

Here R is the length of p, and r is the radius of the inversion sphere. I have extracted the scalar front factor, so that the remaining part is an orthogonal matrix (as is also demonstrated in the derivation link).

Notice that the reflection, translation, and rotation Jacobians will not change the length of a vector when multiplied with it. The Jacobian for the scaling will multiply the length by the scale factor, and the Jacobian for the sphere inversion will multiply the length by a factor of (r^2/R^2) (notice that the length of the point must be evaluated at the correct point in the sequence).
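This bookkeeping is easy to verify numerically for the sphere inversion; a NumPy check (my own) that its Jacobian is (r²/R²) times an orthogonal (Householder) matrix, and therefore scales all lengths by that factor:

```python
import numpy as np

p = np.array([1.0, 2.0, 3.0])  # point where the Jacobian is evaluated
r = 1.0                        # inversion sphere radius
R2 = p.dot(p)                  # R^2 = |p|^2

# Jacobian of p -> r^2 * p / |p|^2: a scalar times a Householder reflection
J = (r * r / R2) * (np.eye(3) - 2.0 * np.outer(p, p) / R2)

v = np.array([0.3, -0.7, 0.2])
# |J v| = (r^2/R^2) |v| for any v, since I - 2*p*p^T/R^2 is orthogonal
```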

Now, if we calculated the matrix norm of the Jacobian:

$$\| J_{Mandelbox} \| = \| J_T * J_S * J_{SI} * J_R \cdots J_T * J_S * J_{SI} * J_R \|$$

we can easily do it, since we only need to keep track of a scalar factor whenever we encounter a scaling Jacobian or a sphere inversion Jacobian. All the other matrices simply do not change the length of a given vector and may be ignored. Also notice that only the sphere inversion depends on the point where the Jacobian is evaluated – if this operation was not present, we could simply count the number of scalings performed and multiply the escape length by $$scale^{-N}$$.

This means that the matrix norm of the Jacobian can be calculated using only a single scalar variable, which is scaled whenever we apply the scaling or sphere inversion operation.

This seems to hold for all conformal transformations (strictly speaking, sphere inversions and reflections are not conformal but anti-conformal, since orientations are reversed). Wikipedia also mentions that any function with a Jacobian equal to a scalar times a rotation matrix must be conformal, and it seems the converse is also true: any conformal or anti-conformal transformation in 3D has a Jacobian equal to a scalar times an orthogonal matrix.

## Final Remarks

There are some reasons why I’m not completely satisfied with the above derivation: first, the translational part of the Mandelbox transformation is not really a constant. It would be if we were considering a Julia-type Mandelbox, where we add a fixed vector at each iteration, but here we add the starting point, and I’m not sure how to express the Jacobian of this transformation. Still, it is possible to do Julia-type Mandelbox fractals (they are quite similar), and here the derivation should be more sound. The transformations used in the Mandelbox are also conditional, and not simple reflections and sphere inversions, but I don’t think that matters with regard to the Jacobian, as long as the same conditions are used when calculating it.

Update: As Knighty pointed out in the comments below, it is possible to see why the scalar approximation works in the Mandelbox case too:

Let us go back to the original formula:
$$f(z) = scale*spherefold(boxfold(z)) + c$$

and take a look at its Jacobian:
$$J_f = J_{scale}*J_{spherefold}*J_{boxfold} + I$$

Now by using the triangle inequality for matrix norms, we get:
$$||J_f|| = ||J_{scale}*J_{spherefold}*J_{boxfold} + I||$$
$$\leq ||J_{scale}*J_{spherefold}*J_{boxfold}|| + ||I||$$
$$= S_{scale}*S_{spherefold}*S_{boxfold} + 1$$

where the S’s are the scalars for the given transformations. This argument can also be applied to repeated applications of the Mandelbox transformation. This means that if we add one to the running derivative at each iteration (as in the Mandelbulb case), we get an upper bound on the true derivative. And since our distance estimate is calculated by dividing by the running derivative, this approximation yields a smaller distance estimate than the true one (which is good).

Another point is that it is striking that we end up with the same scalar estimator as for the tetrahedron in part III (except that it has no sphere inversion). But for the tetrahedron, the scalar estimator was based on straightforward arguments, so perhaps it is possible to come up with a much simpler argument for the running scalar derivative for the Mandelbox as well.

There must also be some kind of link between the gradient and the Jacobian norm. It seems that the norm of the Jacobian should equal the length of the gradient of the escape length, $$||J|| = |\nabla |MB(p)||$$, since both describe how fast the length varies along the steepest-descent path. This would also make the link to the gradient-based numerical methods (discussed in part V) clearer.

And finally, if we reuse our argument for using a linear zero-point approximation of the escape length on the Mandelbulb, it just doesn’t work. There it is necessary to introduce a log term ($$DE= 0.5*r*log(r)/dr$$). Of course, the Mandelbulb is not composed of conformal transformations, so the “Jacobian to scalar running derivative” argument is no longer valid, but we already have an expression for the scalar running derivative for the Mandelbulb, and this expression does not seem to work well with the $$DE=(r-r_{min})/dr$$ approximation. So it is not clear under what conditions this approximation is valid. Update: Again, Knighty makes some good arguments below in the comments for why the linear approximation holds here.

The next part is about dual numbers and distance estimation.

# Distance Estimated 3D Fractals (V): The Mandelbulb & Different DE Approximations

Previous posts: part I, part II, part III and part IV.

The last post discussed the distance estimator for the complex 2D Mandelbrot:

(1)  $$DE=0.5*ln(r)*r/dr$$,

with ‘dr’ being the length of the running (complex) derivative:

(2)  $$f’_n(c) = 2f_{n-1}(c)f’_{n-1}(c)+1$$

In John Hart’s paper, he used the exact same form to render a Quaternion system (using four-component Quaternions to keep track of the running derivative). In the paper, Hart never justified why the complex Mandelbrot formula should also be valid for Quaternions. A proof of this was later given by Dang, Kauffman, and Sandin in the book Hypercomplex Iterations: Distance Estimation and Higher Dimensional Fractals (2002).

I used the same distance estimator formula when drawing the 3D hypercomplex images in the last post – it seems to be quite generic and applicable to most polynomial escape-time fractals. In this post we will take a closer look at how this formula arises.

## The Mandelbulb

But first, let us briefly return to the 2D Mandelbrot equation: $$z_{n+1} = z_{n}^2+c$$. Now, squaring complex numbers has a simple geometric interpretation: if the complex number is represented in polar coordinates, squaring the number corresponds to squaring the length, and doubling the angle (to the real axis).
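As a quick illustration of this polar-coordinate view (my own addition), here is a tiny Python check that squaring the modulus and doubling the argument reproduces ordinary complex squaring:

```python
import cmath

# Squaring in polar form: the modulus is squared and the argument doubled.
z = complex(0.6, 0.8)
r, theta = abs(z), cmath.phase(z)
squared_polar = cmath.rect(r * r, 2 * theta)

assert abs(squared_polar - z * z) < 1e-12
```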

This is probably what motivated Daniel White (Twinbee) to investigate what happens when turning to spherical 3D coordinates and squaring the length and doubling the two angles here. This makes it possible to get something like the following object:

On the image above, I made some cuts to emphasize the embedded 2D Mandelbrot.

Now, this object is not much more interesting than the triplex and Quaternion Mandelbrot from the last post. But Paul Nylander suggested that the same approach should be used for a power-8 formula instead: $$z_{n+1} = z_{n}^8+c$$, something which resulted in what is now known as the Mandelbulb fractal:

The power of eight is somewhat arbitrary here. A power-seven or power-nine object does not look much different, but unexpectedly these higher-power objects display a much more interesting structure than their power-two counterpart.

Here is some example Mandelbulb code:

float DE(vec3 pos) {
	vec3 z = pos;
	float dr = 1.0;
	float r = 0.0;
	for (int i = 0; i < Iterations; i++) {
		r = length(z);
		if (r > Bailout) break;

		// convert to polar coordinates
		float theta = acos(z.z/r);
		float phi = atan(z.y,z.x);
		dr = pow(r, Power-1.0)*Power*dr + 1.0;

		// scale and rotate the point
		float zr = pow(r, Power);
		theta = theta*Power;
		phi = phi*Power;

		// convert back to cartesian coordinates
		z = zr*vec3(sin(theta)*cos(phi), sin(phi)*sin(theta), cos(theta));
		z += pos;
	}
	return 0.5*log(r)*r/dr;
}


It should be noted that several versions of the geometric formulas exist. The one above is based on doubling angles for spherical coordinates as they are defined on Wikipedia, and is the same version as Quilez has on his site. However, in several places the following form appears:

float theta = asin( z.z/r );
float phi = atan( z.y,z.x );
...
z = zr*vec3( cos(theta)*cos(phi), cos(theta)*sin(phi), sin(theta) );


which results in a Mandelbulb object where the poles are similar, and where the power-2 version has the nice 2D Mandelbrot look depicted above.

I’ll not say more about the Mandelbulb and its history, because all this is very well documented on Daniel White’s site, but instead continue to discuss various distance estimators for it.

So, how did we arrive at the distance estimator in the code example above?

Following the same approach as for the 4D Quaternion Julia set, we start with our iterative function:

$$f_n(c) = f_{n-1}^8(c) + c, f_0(c) = 0$$

Differentiating this function (formally) with respect to c gives

(3)   $$f’_n(c) = 8f_{n-1}^7(c)f’_{n-1}(c)+1$$

where the functions above are ‘triplex’ (3-component) valued. But we haven’t defined how to multiply two spherical triplex numbers – we only know how to square them! And how do we even differentiate a vector-valued function with respect to a vector?

## The Jacobian Distance Estimator

Since we have three function components, each of which can be differentiated with respect to three number components, we end up with nine possible scalar derivatives. These may be arranged in a Jacobian matrix:

$$J = \begin{bmatrix} \frac {\partial f_x}{\partial x} & \frac {\partial f_x}{\partial y} & \frac {\partial f_x}{\partial z} \\ \frac {\partial f_y}{\partial x} & \frac {\partial f_y}{\partial y} & \frac {\partial f_y}{\partial z} \\ \frac {\partial f_z}{\partial x} & \frac {\partial f_z}{\partial y} & \frac {\partial f_z}{\partial z} \end{bmatrix}$$

The Jacobian behaves similarly to the lower-dimensional derivatives, in the sense that it provides the best linear approximation to F in a neighborhood of p:

(4)   $$F(p+dp) \approx F(p) + J(p)dp$$

For formula (3) above this means that we have to keep track of a running matrix derivative, and use some kind of norm for this matrix in the final distance estimate (formula 1).
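Formula (4) can be checked numerically. The Python sketch below (my own, not from the post) estimates the Jacobian of the power-2 ‘triplex’ squaring map by central differences and verifies the linear approximation; the helper names `triplex_pow2` and `jacobian` are hypothetical.

```python
import numpy as np

def triplex_pow2(p):
    # 'Triplex' squaring: square the length, double the two spherical angles.
    x, y, z = p
    r = np.sqrt(x*x + y*y + z*z)
    theta = np.arccos(z / r)
    phi = np.arctan2(y, x)
    return r*r * np.array([np.sin(2*theta)*np.cos(2*phi),
                           np.sin(2*phi)*np.sin(2*theta),
                           np.cos(2*theta)])

def jacobian(f, p, eps=1e-6):
    # Finite-difference 3x3 Jacobian; column j holds dF/dp_j.
    cols = []
    for j in range(3):
        dp = np.zeros(3); dp[j] = eps
        cols.append((f(p + dp) - f(p - dp)) / (2 * eps))
    return np.column_stack(cols)

p = np.array([0.4, 0.3, 0.5])
dp = np.array([1e-5, -2e-5, 1.5e-5])
J = jacobian(triplex_pow2, p)
# Formula (4): F(p + dp) ~ F(p) + J(p) dp
err = np.linalg.norm(triplex_pow2(p + dp) - (triplex_pow2(p) + J @ dp))
assert err < 1e-6
```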

But calculating the Jacobian matrix above analytically is tricky (read the comments below from Knighty and check out his running matrix derivative example in the Quadray thread). Luckily, other solutions exist.

Let us start by considering the complex case once again. Here we also have a two-component function differentiated with respect to a two-component number. So why isn’t the derivative of a complex function a 2×2 Jacobian matrix?

It turns out that for a complex function to be complex differentiable at every point (holomorphic), it must satisfy the Cauchy-Riemann equations. And these equations reduce the four quantities in the 2×2 Jacobian to just two numbers! Notice that the Cauchy-Riemann equations are a consequence of the definition of the complex derivative at a point p: we require that the derivative (the limit of the difference quotient) is the same no matter from which direction we approach p (see here). Very interestingly, the holomorphic functions are exactly the functions that are conformal (angle preserving) – something which, as I briefly mentioned (see the last part of part III), is considered a key property of fractal transformations.

What if we only considered conformal 3D transformations? This would probably imply that the Jacobian of the transformation would be a scalar times a rotation matrix (see here, but notice they only claim the reverse direction). But since the rotational part of the matrix does not influence the matrix norm, this means we would only need to keep track of the scalar part – a single-component running derivative. Now, the Mandelbulb power operation is not a conformal transformation. But even though I cannot explain why, it is still possible to define a scalar derivative.

## The Scalar Distance Estimator

It turns out the following running scalar derivative actually works:

(5)   $$dr_n = 8|f_{n-1}(c)|^7dr_{n-1}+1$$

where ‘dr’ is a scalar function. I’m not sure who first came up with the idea of using a scalar derivative (it might be Enforcer, in this thread), but it is interesting that it works so well (it also works in many other cases, including for Quaternion Julia systems). Even though I don’t understand why the scalar approach works, there is something comforting about it: remember that the original Mandelbulb was completely defined in terms of the square and addition operators. But in order to use the 3-component running derivative, we need to be able to multiply two arbitrary ‘triplex’ numbers! This bothered me, since it is possible to draw the Mandelbulb using e.g. a 3D voxel approach without knowing how to multiply arbitrary numbers, so I believe it should be possible to formulate a DE approach that doesn’t use this extra information. And the scalar approach does exactly this.
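For illustration, here is a minimal CPU translation of the scalar-derivative DE to Python (my own sketch of the GLSL code shown earlier; the power, bailout, and iteration cap are assumptions):

```python
import math

def mandelbulb_de(pos, power=8, iterations=64, bailout=4.0):
    # Scalar running derivative: dr_n = power * r^(power-1) * dr_{n-1} + 1
    x, y, z = pos
    zx, zy, zz = x, y, z
    dr, r = 1.0, 0.0
    for _ in range(iterations):
        r = math.sqrt(zx*zx + zy*zy + zz*zz)
        if r > bailout:
            break
        # convert to polar coordinates
        theta = math.acos(zz / r)
        phi = math.atan2(zy, zx)
        dr = power * r**(power - 1) * dr + 1.0
        # scale and rotate, then convert back and add pos
        zr = r**power
        theta *= power
        phi *= power
        zx = zr * math.sin(theta) * math.cos(phi) + x
        zy = zr * math.sin(phi) * math.sin(theta) + y
        zz = zr * math.cos(theta) + z
    return 0.5 * math.log(r) * r / dr

# A point well outside the bulb gets a sizeable estimate...
assert mandelbulb_de((2.0, 0.0, 0.0)) > 0.1
# ...while a point near the surface gets a small one.
assert abs(mandelbulb_de((1.0, 0.0, 0.0))) < 0.1
```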

## The escape length gradient approximation

(1)  $$DE=0.5*ln(r)*r/dr$$,

The most interesting part is the running derivative ‘dr’. For the fractals encountered so far, we have been able to find analytical running derivatives (both vector and scalar valued), but as we shall see (when we get to the more complex fractals, such as the hybrid systems) it is not always possible to find an analytical formula.

Remember that ‘dr’ is the length of f’(z) (for complex and Quaternion numbers). In analogy with the complex and Quaternion cases, the function must be differentiated with respect to the 3-component number c. Differentiating a vector-valued function with respect to a vector quantity suggests the use of a Jacobian matrix. Another approach is to take the gradient of the escape length: $$dr=|\nabla |z_{n}||$$ – while it is not clear to me why this is valid, it works in many cases, as we will see:

David Makin and Buddhi suggested (in this thread) that instead of trying to calculate a running, analytical derivative, we could use a numerical approximation, and calculate the above-mentioned gradient using the finite difference method we also used when calculating a surface normal in post II.

The only slightly tricky point is that the escape length must be evaluated for the same iteration count, otherwise you get artifacts. Here is some example code:

int last = 0;
float escapeLength(in vec3 pos)
{
	vec3 z = pos;
	for (int i = 1; i < Iterations; i++) {
		z = mandelbulbIteration(z) + pos; // one z -> z^Power step (helper not shown)
		if ((dot(z,z) > Bailout && last == 0) || (i == last))
		{
			last = i;
			return length(z);
		}
	}
	return length(z);
}

vec3 gradient; // global, so the normalized gradient can be reused as surface normal

float DE(vec3 p) {
	last = 0;
	float r = escapeLength(p);
	if (r*r < Bailout) return 0.0; // did not escape
	gradient = (vec3(escapeLength(p + vec3(EPS, 0.0, 0.0)),
		escapeLength(p + vec3(0.0, EPS, 0.0)),
		escapeLength(p + vec3(0.0, 0.0, EPS))) - r) / EPS;
	return 0.5 * r * log(r) / length(gradient);
}


Notice the use of the ‘last’ variable to ensure that all escapeLength calls are evaluated at the same iteration count. Also notice that ‘gradient’ is a global variable – this is because we can reuse the normalized gradient as an approximation of the surface normal and save some calculations.

The approach above is used in both Mandelbulber and Mandelbulb 3D for the cases where no analytical solution is known. On Fractal Forums it is usually referred to as the Makin/Buddhi 4-point Delta-DE formula.

Now we need to step back and take a closer look at the origin of the Mandelbrot distance estimation formula. There is a lot of confusion about this formula, and unfortunately I cannot claim to completely understand all of it myself. But I’m slowly getting to understand bits of it, and want to share what I have found out so far:

Let us start by the original Hart paper, which introduced the distance estimation technique for 3D fractals. Hart does not derive the distance estimation formula himself, but notes that:

Now, I haven’t talked about this potential function, G(z), that Hart mentions above, but it is possible to define a potential with the property that G(z)=0 inside the Mandelbrot set and G(z)>0 outside. This is the first thing that puzzled me: since G(z) tends toward zero near the border, the “log G(z)” term – and hence the entire expression – becomes negative! As it turns out, the “log” term in the Hart paper is wrong. (And notice that his formula (8) is wrong too – he must take the norm of the complex function f(z) inside the log function, otherwise the distance ends up being complex as well.)

In The Science of Fractal Images (which Hart refers to above) the authors arrive at the following formula, which I believe is correct:

Similarly, in Hypercomplex Iterations the authors arrive at the same formula:

But notice that formula (3.17) is wrong here! I strongly believe it is missing a factor of two (in their derivation they use $$sinh(z) \approx \frac{z}{2}$$ for small z – but this is not correct: $$sinh(z) \approx z$$ for small z).

The approximation going from (3.16) to (3.17) is only valid for points close to the boundary (where G(z) approaches zero). This is no big problem, since for points far away we can restrict the maximum DE step, or put the object inside a bounding box, which we intersect before ray marching.

It can be shown that $$|Z_n|^{1/2^n} \to 1$$ for $$n \to \infty$$. By using this we end up with our well-known formula for the lower bound from above (in a slightly different notation):

(1)  $$DE=0.5*ln(r)*r/dr$$,

Instead of using the above formula, we can work directly with the potential G(z). For $$n \to \infty$$, G(z) may be approximated as $$G(z)=log(|z_n|)/power^n$$, where ‘power’ is the polynomial power (8 for the Mandelbulb). (This result can be found in e.g. Hypercomplex Iterations, p. 37, for quadratic functions.)
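For the 2D quadratic Mandelbrot (power = 2), this potential can be sketched in a few lines of Python (my own illustration; the iteration cap and bailout are arbitrary choices):

```python
import math

def potential(c, max_iter=100, bailout=1e10):
    # G(c) ~ log|z_n| / 2^n for the quadratic Mandelbrot (power = 2)
    z = 0j
    for n in range(1, max_iter + 1):
        z = z * z + c
        if abs(z) > bailout:
            return math.log(abs(z)) / 2.0**n
    return 0.0  # did not escape: treated as inside the set

assert potential(0j) == 0.0                              # origin is inside
assert potential(2 + 0j) > potential(0.5 + 0.5j) > 0.0   # G grows farther out
```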

We will approximate the length of G'(z) as a numerical gradient again. This can be done using the following code:

float potential(in vec3 pos)
{
	vec3 z = pos;
	for (int i = 1; i < Iterations; i++) {
		z = mandelbulbIteration(z) + pos; // one z -> z^Power step (helper not shown)
		if (dot(z,z) > Bailout) return log(length(z))/pow(Power, float(i));
	}
	return 0.0;
}

float DE(vec3 p) {
	float pot = potential(p);
	if (pot == 0.0) return 0.0;
	gradient = (vec3(potential(p + vec3(EPS, 0.0, 0.0)),
		potential(p + vec3(0.0, EPS, 0.0)),
		potential(p + vec3(0.0, 0.0, EPS))) - pot) / EPS;
	return (0.5/exp(pot))*sinh(pot)/length(gradient);
}


Notice that this time we do not have to evaluate the potential for the same number of iterations. And again we can store the gradient and reuse it (when normalized) as a surface normal.

A variant using Subblue’s radiolari tweak

## Quilez’ Approximation

I arrived at the formula above after reading Iñigo Quilez’ post about the Mandelbulb. There are many good tips in this post, including a fast trigonometric version, but for me the most interesting part was his DE approach: Quilez used a potential-based DE, defined as:

$$DE(z) = \frac{G(z)}{|G'(z)|}$$

This puzzled me, since I couldn’t understand its origin. Quilez offers an explanation in this blog post, where he arrives at the formula by using a linear approximation of G(z) to calculate the distance to its zero-region. I’m not quite sure why this approximation is justified, but it looks a bit like an application of Newton’s method for root finding. Also, as Quilez himself notes, he is missing a factor of 1/2.

But if we start from formula (3.17) above, and note that $$sinh(G(z)) \approx G(z)$$ for small G(z) (near the fractal boundary), and that $$|Z_n|^{1/2^n} \to 1$$ for $$n \to \infty$$, we arrive at:

$$DE(z) = 0.5*\frac{G(z)}{|G'(z)|}$$

(And notice that the same two approximations are used when arriving at our well-known formula (1) at the top of the page).

Quilez’ method can be implemented using the previous code example and replacing the DE return value simply by:

return 0.5*pot/length(gradient);
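Translated to the 2D Mandelbrot case, the whole potential-based estimator fits in a few lines of Python (my own illustration; `eps`, the iteration cap, and the bailout are arbitrary choices):

```python
import math

def potential(c, max_iter=100, bailout=1e10):
    # G(c) ~ log|z_n| / 2^n for the quadratic Mandelbrot
    z = 0j
    for n in range(1, max_iter + 1):
        z = z * z + c
        if abs(z) > bailout:
            return math.log(abs(z)) / 2.0**n
    return 0.0

def de(c, eps=1e-6):
    # DE = 0.5 * G / |G'|, with the gradient estimated by central differences
    pot = potential(c)
    if pot == 0.0:
        return 0.0  # inside the set
    gx = (potential(c + eps) - potential(c - eps)) / (2 * eps)
    gy = (potential(c + eps*1j) - potential(c - eps*1j)) / (2 * eps)
    return 0.5 * pot / math.hypot(gx, gy)

# The estimate shrinks as we approach the set along the real axis.
assert de(2.0 + 0j) > de(0.6 + 0j) > 0.0
```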


If you wonder how these different methods compare, here are some informal timings of the various approaches (parameters were adjusted to give roughly identical appearances):

Sinh Potential Gradient (my approach) 1.0x
Analytical 4.1x

The first three methods all use a four-point numerical approximation of the gradient. Since this requires four calls to the iterative function (which is where most of the computational time is spent), they are around four times slower than the analytical solution, which only needs one evaluation.

My approach is slightly slower than the other numerical approaches, but it is also less approximated than the others. The numerical approximations do not behave in the same way: the Makin/Buddhi approach seems more sensitive to choosing the right EPS size in the numerical approximation of the gradient.

As to which function is best, this requires some more testing on various systems. My guess is that they will provide somewhat similar results, but this must be investigated further.

The Mandelbulb can also be drawn as a Julia fractal.

## Some final notes about Distance Estimators

Mathematical justification: first note that the formulas above were derived for complex mathematics and quadratic systems (and extended to Quaternions and some higher-dimensional structures in Hypercomplex Iterations). These formulas were never proved for exotic constructs like the Mandelbulb triplex algebra. The derivations above were included to give a hint of the origin and construction of these DE approximations. To truly understand these formulas, I think the original papers by Hubbard and Douady, and the works by John Willard Milnor, should be consulted – unfortunately I couldn’t find these online. In any case, I believe a rigorous approach would require the attention of someone with a mathematical background.

Using a lower bound as a distance estimator. Formula (3.17) above defines lower and upper bounds for the distance from a given point to the boundary of the Mandelbrot set. Throughout this entire discussion, we have simply used the lower bound as a distance estimate. But a lower bound is not, by itself, good enough as a distance estimate. This is easily realized, since 0 is always a lower bound of the true distance. In order for our sphere-tracing / ray-marching approach to work, the lower bound must converge towards the true distance as it approaches zero! In our case we are safe, because we also have an upper bound which is four times the lower bound (in the limit where the exp(G(z)) term disappears). Since the true distance must lie between the lower and upper bounds, the true distance converges towards the lower bound as the lower bound gets smaller.

DE’s are approximations. All the DE formulas above are only approximations – valid in the limit $$n \to \infty$$, and some only for points close to the fractal boundary. This becomes very apparent when you start rendering these structures – you will often encounter noise and artifacts. Multiplying the DE estimates by a number smaller than 1 may be used to reduce noise (this is the Fudge Factor in Fragmentarium). Another common approach is to oversample – or render images at large sizes and downscale.

Future directions. There is much more to explore and understand about distance estimators. For instance, the methods above use four-point numerical gradient estimation, but perhaps the primary camera-ray marching could be done using directional derivatives (two-point delta estimation), saving the four-point sampling for the non-directional stuff (AO, soft shadows, normal estimation). Automatic differentiation with dual numbers (as noted in post II) may also be used to avoid the finite-difference gradient estimation. It would be nice to have a better understanding of why the scalar gradients work.

The next blog post discusses the Mandelbox fractal.

# Distance Estimated 3D Fractals (IV): The Holy Grail

Previous posts: part I, part II, and part III.

Despite its young age, the Mandelbulb is probably the most famous 3D fractal in existence. This post will examine how we can create a Distance Estimator for it. But before we get to the Mandelbulb, we will have to step back and review a bit of the history behind it.

## The Search for the Holy Grail

The original Mandelbrot fractal is a two dimensional fractal based on the convergence properties of a series of complex numbers. The formula is very simple: for any complex number z, check whether the sequence iteratively defined by $$z_{n+1} = z_{n}^2+c$$ diverges or not. The Mandelbrot set is defined as the set of points which do not diverge, that is, the points with a series that stays bounded within a given radius. The results can be depicted in the complex plane.
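In code, the membership test is just an escape-time loop. Here is a minimal Python sketch (my own; the iteration cap and escape radius are arbitrary choices):

```python
def in_mandelbrot(c, max_iter=500, radius=2.0):
    # c is in the set (up to max_iter) if the orbit of 0 stays bounded
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > radius:
            return False
    return True

assert in_mandelbrot(0j)
assert in_mandelbrot(-1 + 0j)      # period-2 orbit: 0, -1, 0, -1, ...
assert not in_mandelbrot(1 + 0j)   # 0, 1, 2, 5, 26, ... escapes quickly
```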

The question is how to extend this to three dimensions. The Mandelbrot set fits two dimensions, because complex numbers have two components. Can we find a similar number system for three dimensions?

The Mandelbrot formula involves two operations: adding numbers, and squaring them. Creating an n-component number where addition is possible is easy – this is what mathematicians refer to as a vector space. Component-wise addition will do the trick, and seems like the logical choice.

But the Mandelbrot formula also involves squaring a number, which requires a multiplication operator (a vector product) on the vector space. A vector space with a (bilinear) vector product is called an algebra over a field. The numbers in these kind of vector spaces are often called hypercomplex numbers.

To see why a three dimensional number system might be problematic, let us try creating one. We could do this by starting out with the complex numbers and introducing a third component, j. We will try to keep as many of the characteristic properties of the complex and real numbers as possible, such as distributivity, $$a*(b+c)=(a*b)+(a*c)$$, and commutativity, $$a*b=b*a$$. If we assume distributivity, we only need to specify how the units of the three components multiply. This can be illustrated in a multiplication table. Since we also assumed commutativity, such a table must be symmetric:

$$\begin{bmatrix} & \boldsymbol{1} & \boldsymbol{i} & \boldsymbol{j} \\ \boldsymbol{1} & 1 & i & j \\ \boldsymbol{i} & i & -1 & ?\\ \boldsymbol{j} & j & ? & ? \end{bmatrix}$$

For a well-behaved number system, anything multiplied by 1 is unchanged, and if we now require the real and imaginary components to behave as they do for the complex numbers, we only have three entries left – the question marks in the matrix. I’ve rendered a few of the systems I encountered while trying arbitrary choices for the missing entries in the matrix:

(Many people have explored various 3D component multiplication tables – see for instance Paul Nylander’s Hypercomplex systems for more examples).

Unfortunately, our toy system above fails to be associative (i.e. it is not always true that $$a*(b*c) = (a*b)*c$$), as can be seen from the equation $$i*(i*j) = (i*i)*j \Rightarrow i*x = -j$$ (where $$x = i*j$$), which cannot be satisfied no matter how we choose x.
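We can see the failure concretely by completing the table with one arbitrary choice (purely hypothetical: i*j = j*i = -1 and j*j = -1) and multiplying out via distributivity, as in this Python sketch of mine:

```python
def mul(p, q):
    # Triplex product (a + b*i + c*j) for one arbitrary symmetric table:
    # i*i = -1, j*j = -1, i*j = j*i = -1  (a hypothetical choice)
    a1, b1, c1 = p
    a2, b2, c2 = q
    return (a1*a2 - b1*b2 - b1*c2 - c1*b2 - c1*c2,
            a1*b2 + b1*a2,
            a1*c2 + c1*a2)

i = (0.0, 1.0, 0.0)
j = (0.0, 0.0, 1.0)

# (i*i)*j = -j, but i*(i*j) = i*(-1) = -i: associativity fails.
assert mul(mul(i, i), j) == (0.0, 0.0, -1.0)
assert mul(i, mul(i, j)) == (0.0, -1.0, 0.0)
assert mul(mul(i, i), j) != mul(i, mul(i, j))
```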

It turns out that it is difficult to create a consistent number system in three dimensions – there simply is no natural choice. In fact, if we require that our number system allows for a division operator, there is a theorem stating that only four such mathematical spaces are possible: the real numbers (1D), the complex numbers (2D), the quaternions (4D), and the octonions (8D). But no 3D systems.

But what about the 4D Quaternions? Back in 1982, Alan Norton published a paper showing a Quaternion Julia set made by displaying a 3D “slice” of the 4D space. Here is an example of a Quaternion Julia fractal:

Of course, in order to visualize a 4D object, you have to make some kind of dimensional reduction. The most common approach is to make a 3D cross-section, by simply keeping one of the four components at a fixed value.

If you wonder why you never see a Quaternion Mandelbrot image, the reason is simple. It is not very interesting because of its axial symmetry:

If you, however, make a rotation inside the iteration loop, you can get something more like a 3D Mandelbrot.

The Quaternion system (and the 3D hypercomplex systems above) is defined exactly as the 2D system – by checking whether $$z_{n+1} = z_{n}^2+c$$ diverges or not.

But how do we draw a 3D image of these fractals? In contrast to the 2D case, where it is possible to build a 2D grid, and check inside each cell, building a 3D grid and checking each cell would be far too memory and time consuming for images in any decent resolution.

## A distance estimator for quadratic systems.

While Alan Norton used a different rendering approach, a very elegant solution to this was found by John Hart et al. in a 1989 paper: distance estimated rendering. As discussed in the previous posts, distance estimated rendering requires that we are able to calculate a lower bound to the distance from every point in space to our fractal surface! At first, this might seem impossible. But it turns out such a formula was already known for the 2D Mandelbrot set. A distance estimate can be found as:

(1)   $$DE=0.5*ln(r)*r/dr$$

where ‘r’ is the escape-time length, and ‘dr’ is the length of the running derivative. (The approximation is only exact in the limit where the number of iterations goes to infinity.)

In order to define what we mean by the running derivative, we need a few extra definitions. For Mandelbrot sets, we study the sequence $$z_{n+1} = z_{n}^2+c$$ for each point c. Let us introduce the function $$f_n(c)$$, defined as the n’th entry for the sequence for the point c. By this definition, we have the following defining formula for the Mandelbrot set:

$$f_n(c) = f_{n-1}^2(c) + c, f_0(c) = 0$$

Differentiating this function with respect to c gives

(2)   $$f’_n(c) = 2f_{n-1}(c)f’_{n-1}(c)+1$$   (for Mandelbrot formula)

Similarly, the Julia set is defined by choosing a fixed constant, d, in the quadratic formula, and using c only as the first entry of the sequence:

$$f_n(c) = f_{n-1}^2(c) + d, f_0(c) = c$$

Differentiating this function with respect to c gives

(3)   $$f’_n(c) = 2f_{n-1}(c)f’_{n-1}(c)$$   (for Julia set formula)

which is almost the same result as for the Mandelbrot set, except for the unit term. And now we can define the length of $$f_n$$ and of the running derivative $$f’_n$$:

$$r = |f_n(c)|$$ and $$dr = |f’_n(c)|$$

used in the formula (1) above. This formula was found by Douady and Hubbard in a 1982 paper (more info).
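Putting formulas (1) and (2) together, the whole 2D Mandelbrot distance estimator fits in a few lines; this Python sketch is my own illustration (the iteration cap and bailout are arbitrary):

```python
import math

def mandelbrot_de(c, max_iter=200, bailout=1e10):
    # DE = 0.5 * ln(r) * r / dr, with dz' = 2*z*z' + 1 (Mandelbrot form)
    z, dz = 0j, 0j
    for _ in range(max_iter):
        dz = 2 * z * dz + 1   # running derivative, formula (2)
        z = z * z + c
        if abs(z) > bailout:
            break
    r, dr = abs(z), abs(dz)
    if r < bailout:
        return 0.0  # did not escape: treat as inside the set
    return 0.5 * math.log(r) * r / dr

# Positive outside the set, and shrinking as we approach the boundary
# (the cardioid tip is at c = 0.25 on the real axis).
assert mandelbrot_de(2 + 0j) > mandelbrot_de(0.26 + 0j) > 0.0
```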

2D Julia set rendered using a distance estimator approach. This makes it possible to emphasize details, without having to use extensive oversampling.

Due to a constraint in WordPress, this post has reached its maximum length. The next post continues the discussion, and shows how the formula above can be used for other types of fractals than the 2D Mandelbrot.

# Distance Estimated 3D Fractals (III): Folding Space

The previous posts (part I, part II) introduced the basics of rendering DE (Distance Estimated) systems, but left out one important question: how do we create the distance estimator function?

## Drawing spheres

Remember that a distance estimator is nothing more than a function that, for all points in space, returns a length smaller than (or equal to) the distance to the closest object. This means we are safe to march at least this step length without hitting anything – and we use this information to speed up the ray marching.

It is fairly easy to come up with distance estimators for most simple geometric shapes. For instance, let us start with a sphere. Here are three different ways to calculate the distance from a point in space, p, to a sphere with radius R:

(1) DE(p) = max(0.0, length(p)-R)  // solid sphere, zero interior
(2) DE(p) = length(p)-R // solid sphere, negative interior
(3) DE(p) = abs(length(p)-R)    // hollow sphere shell


From the outside all of these look similar. But (3) is hollow – we would be able to position the camera inside it, and it would look different if intersected with other objects.

What about the first two? There is actually a subtle difference: the common way to find the surface normal is to sample the DE function close to the camera ray/surface intersection. But if the intersection point is located very close to the surface (for instance exactly on it), we might sample the DE inside the sphere. And this will lead to artifacts in the normal-vector calculation for (1) and (3). So, if possible, use signed distance functions. Another way to avoid this is to backstep along the camera ray a bit before calculating the surface normal (or to use a ray-step multiplier less than 1.0).

From left to right: Sphere (1), with normal artifacts because the normal was not backstepped. Sphere (2) with perfect normals. Sphere (3) drawn with normal backstepping, and thus perfect normals. The last row shows how the spheres look when cut open.

Notice that distance estimation only tells the distance from a point to an object. This is in contrast to classic ray tracing, which always is about finding the distance from a point to a given object along a line. The formulas for ray-object intersection in classic ray tracing are thus more complex, for instance the ray-sphere intersection involves solving a quadratic equation. The drawback of distance estimators is that multiple ray steps are needed, even for simple objects like spheres.

## Combining objects

Distance fields have some nice properties. For instance, it is possible to combine two distance fields using a simple minimum(a,b) operator. As an example we could draw the union of two spheres the following way:

DE(p) = min( length(p)-1.0 , length(p-vec3(2.0,0.0,0.0))-1.0 );


This would give us two spheres with unit radius, one centered at the origin and another at (2,0,0). In the same way it is possible to calculate the intersection of two objects by taking the maximum value of the two fields. Finally, if you are using signed distance functions, it is possible to subtract one shape from another by inverting one of the fields and calculating the intersection (i.e. taking max(A, -B)).
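These combination rules are one-liners. The following Python sketch (my own, using the signed sphere DE, variant (2) above) checks them at a couple of points:

```python
import math

def sphere(p, center, radius):
    # Signed distance to a sphere (variant (2): negative inside)
    return math.dist(p, center) - radius

def union(a, b):      return min(a, b)
def intersect(a, b):  return max(a, b)
def subtract(a, b):   return max(a, -b)  # A minus B

p = (1.0, 0.0, 0.0)
d1 = sphere(p, (0.0, 0.0, 0.0), 1.0)   # exactly on the first sphere
d2 = sphere(p, (2.0, 0.0, 0.0), 1.0)   # and on the second one too
assert union(d1, d2) == 0.0

# The origin is inside sphere 1 but outside sphere 2, so it is
# outside the intersection but inside the difference (sphere1 - sphere2):
q = (0.0, 0.0, 0.0)
d1 = sphere(q, (0.0, 0.0, 0.0), 1.0)
d2 = sphere(q, (2.0, 0.0, 0.0), 1.0)
assert intersect(d1, d2) > 0.0
assert subtract(d1, d2) < 0.0
```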

So now we have a way to combine objects. And it is also possible to apply local transformations, to get interesting effects:

This image was created by combining the DE’s of a ground plane and two tori while applying a twisting deformation to the tori.

Rendering of (non-fractal) distance fields is described in depth in this paper by Hart: Sphere Tracing: A Geometric Method for the Antialiased Ray Tracing of Implicit Surfaces. This paper also describes distance estimators for various geometric objects, such as tori and cones, and discuss deformations in detail. Distance field techniques have also been adopted by the demoscene, and Iñigo Quilez’s introduction contains a lot of information. (Update: Quilez has created a visual reference page for distance field primitives and transformations)

## Building Complexity

This is all nice, but even if you can create interesting structures, there are some limitations. The above method works fine, but scales very badly as the number of distance fields to be combined increases. Creating a scene with 1000 spheres by finding the minimum of the 1000 fields would already be too slow for real-time purposes. In fact, ordinary ray tracing scales much better – the use of spatial acceleration structures makes it possible for ordinary ray tracers to draw scenes with millions of objects, something that is far from possible using the “find the minimum of all object fields” distance-field approach sketched above.

But fractals are all about detail, and endless complexity, so how do we proceed?

It turns out that there are some tricks that make it possible to add complexity in ways that scale much better.

First, it is possible to reuse (or instance) objects using e.g. the modulo-operator. Take a look at the following DE:

float DE(vec3 z)
{
	z.xy = mod(z.xy, 1.0) - vec2(0.5); // instance on xy-plane
	return length(z) - 0.3;            // sphere DE
}


Which generates this image:

Now we are getting somewhere. Tons of detail, at almost no computational cost. Now we only need to make it more interesting!

## A Real Fractal

Let us continue with the first example of a real fractal: the recursive tetrahedron.

A tetrahedron may be described as a polyhedron with vertices (1,1,1), (-1,-1,1), (1,-1,-1), and (-1,1,-1). Now, for each point in space, let us take the vertex closest to it and scale the system by a factor of 2.0 using this vertex as center. Repeat this operation a number of times, and finally return the distance to the point where we end up. Here is the code:

float DE(vec3 z)
{
vec3 a1 = vec3(1,1,1);
vec3 a2 = vec3(-1,-1,1);
vec3 a3 = vec3(1,-1,-1);
vec3 a4 = vec3(-1,1,-1);
vec3 c;
int n = 0;
float dist, d;
while (n < Iterations) {
c = a1; dist = length(z-a1);   // find the vertex closest to z
d = length(z-a2); if (d < dist) { c = a2; dist=d; }
d = length(z-a3); if (d < dist) { c = a3; dist=d; }
d = length(z-a4); if (d < dist) { c = a4; dist=d; }
z = Scale*z-c*(Scale-1.0);     // scale about that vertex
n++;
}

return length(z) * pow(Scale, float(-n)); // undo the accumulated scaling
}


Which results in the following image:

Our first fractal! Even though we do not have the infinite number of objects of the mod-example above, the number of objects grows exponentially as we crank up the number of iterations. In fact, the number of objects is equal to 4^Iterations. Just ten iterations will result in more than a million objects – something that is easily doable on a GPU in realtime! Now we are getting ahead of the standard ray tracers.
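The DE above can also be checked on the CPU. A Python port, with assumed parameter values Scale = 2.0 and Iterations = 10 (in Fragmentarium these are user-controlled uniforms): a vertex such as (1,1,1) lies on the fractal, so its estimated distance is nearly zero, while a faraway point returns roughly its true distance to the set.

```python
import math

SCALE = 2.0       # assumed value of the Scale parameter
ITERATIONS = 10   # assumed iteration count
VERTICES = [(1, 1, 1), (-1, -1, 1), (1, -1, -1), (-1, 1, -1)]

def de(z):
    z = list(z)
    for _ in range(ITERATIONS):
        # Find the vertex closest to z...
        c = min(VERTICES, key=lambda v: math.dist(z, v))
        # ...and scale the system by SCALE around that vertex.
        z = [SCALE*zi - ci*(SCALE - 1.0) for zi, ci in zip(z, c)]
    # Undo the accumulated scaling to get a distance estimate.
    return math.dist(z, (0, 0, 0)) * SCALE**(-ITERATIONS)
```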

## Folding Space

But it turns out that we can do even better, using a clever trick that exploits the symmetries of the tetrahedron.

Now, instead of scaling about the nearest vertex, we can mirror points in the symmetry planes of the tetrahedron to make sure that we arrive at the same "octant" of the tetrahedron – and then always scale from the vertex it contains.

The following illustration tries to visualize this:

The red point at the top vertex is the scaling center at (1,1,1). Three symmetry planes of the tetrahedron have been drawn in red, green, and blue. By mirroring points when they are on the wrong side (the non-white regions) of a plane, we ensure they get mapped into the white "octant". The operation of mirroring a point, if it is on one side of a plane, is called a 'folding operation', or just a fold.

Here is the code:

float DE(vec3 z)
{
float r;
int n = 0;
while (n < Iterations) {
if(z.x+z.y<0) z.xy = -z.yx; // fold 1
if(z.x+z.z<0) z.xz = -z.zx; // fold 2
if(z.y+z.z<0) z.zy = -z.yz; // fold 3
z = z*Scale - Offset*(Scale-1.0);
n++;
}
return length(z) * pow(Scale, -float(n));
}


These folding operations show up in several fractals. A fold in a general plane with unit normal n1 can be expressed as:

float t = dot(z,n1); if (t<0.0) { z-=2.0*t*n1; }


or in an optimized version (due to AndyAlias):

z-=2.0 * min(0.0, dot(z, n1)) * n1;


Also notice that folds in the xy, xz, or yz planes may be expressed using the 'abs' operator.
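The fold formulas are easy to verify numerically. A Python version of the optimized one-liner (note that `n1` must be a unit normal), checked against 'fold 1' from the DE above:

```python
import math

def fold(z, n1):
    # Mirror z in the plane through the origin with unit normal n1,
    # but only when z is on the negative side (dot(z, n1) < 0).
    t = min(0.0, sum(zi*ni for zi, ni in zip(z, n1)))
    return tuple(zi - 2.0*t*ni for zi, ni in zip(z, n1))

s = 1.0 / math.sqrt(2.0)
# 'fold 1' (z.xy = -z.yx when z.x + z.y < 0) is a fold in the plane
# with normal (1, 1, 0) / sqrt(2):
p = fold((-1.0, 0.0, 0.0), (s, s, 0.0))
```

The folded point (0, 1, 0) is exactly what `z.xy = -z.yx` produces for this input, and points already on the positive side are left untouched.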

That was a lot about folding operations, but the really interesting stuff happens when we throw rotations into the system. This was first introduced by Knighty in the Fractal Forums thread Kaleidoscopic (escape time) IFS. The thread shows recursive versions of all the Platonic Solids and the Menger Sponge - including the spectacular forms that arise when inserting rotations and translations into the system.

The Kaleidoscopic IFS fractals are in my opinion some of the most interesting 3D fractals ever discovered (or created if you are not a mathematical platonist). Here are some examples of forms that may arise from a system with icosahedral symmetry:

Here the icosahedral origin might be evident, but it is possible to tweak these structures beyond any recognition of their origin. Here are a few more examples:

Knighty's fractals are composed using a small set of transformations: scalings, translations, plane reflections (the conditional folds), and rotations. The folds are of course not limited to the symmetry planes of the Platonic Solids; folds in any plane are possible.

The transformations mentioned above all belong to the group of conformal (angle preserving) transformations. It is sometimes said (on Fractal Forums) that for 'true' fractals the transformations must be conformal, since non-conformal transformations tend to stretch out detail and create a 'whipped cream' look, which does not allow for deep zooms. Interestingly, according to Liouville's theorem there are not very many possible conformal transformations in 3D. In fact, if I read the theorem correctly, the only possible conformal 3D transformations are the ones above and the sphere inversions.

Part IV discusses how to arrive at Distance Estimators for fractals such as the Mandelbulb, which originated in attempts to generalize the Mandelbrot formula to three dimensions: the so-called search for the holy grail of 3D fractals.

# Distance Estimated 3D Fractals (II): Lighting and Coloring

The first post discussed how to find the intersection between a camera ray and a fractal, but did not talk about how to color the object. There are two steps involved here: setting up a coloring scheme for the fractal object itself, and the shading (lighting) of the object.

Since we are raymarching our objects, we can use the standard lighting techniques from ray tracing. The most common form of lighting is something like Blinn-Phong: calculate approximated ambient, diffuse, and specular light based on the position of the light source and the normal of the fractal object.

### Surface Normal

So how do we obtain a normal of a fractal surface?

A common method is to probe the Distance Estimator function in small steps along the coordinate system axes and use the numerical gradient obtained from this as the normal (since the normal must point in the direction where the distance field increases most rapidly). This is an example of the finite difference method for numerical differentiation. The following snippet shows how the normal may be calculated:

// xDir = vec3(eps,0,0), etc., for some small sampling distance eps
vec3 n = normalize(vec3(DE(pos+xDir)-DE(pos-xDir),
DE(pos+yDir)-DE(pos-yDir),
DE(pos+zDir)-DE(pos-zDir)));
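As a sanity check, the finite-difference normal can be computed on the CPU for a DE with a known gradient. A Python sketch using a unit sphere, whose true normal at a point p is simply p normalized:

```python
import math

def de(p):
    # DE of a unit sphere at the origin.
    return math.dist(p, (0.0, 0.0, 0.0)) - 1.0

def normal(p, eps=1e-5):
    # Central differences along the coordinate axes give the gradient
    # of the distance field, which points along the surface normal.
    g = []
    for j in range(3):
        hi = [p[i] + (eps if i == j else 0.0) for i in range(3)]
        lo = [p[i] - (eps if i == j else 0.0) for i in range(3)]
        g.append(de(hi) - de(lo))
    length = math.sqrt(sum(c*c for c in g))
    return [c / length for c in g]
```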


The original Hart paper also suggested that alternatively, the screen space depth buffer could be used to determine the normal – but this seems to be both more difficult and less accurate.

Finally, as fpsunflower noted in this thread, it is possible to use Automatic Differentiation with dual numbers to obtain a gradient without having to introduce an arbitrary epsilon sampling distance.

### Ambient Occlusion

Besides the ambient, diffuse, and specular light from Phong shading, one thing that really improves the quality and depth illusion of a 3D model is ambient occlusion. In my first post, I gave an example of how the number of ray steps could be used as a very rough measure of how occluded the geometry is (I first saw this at Subblue’s site – his Quaternion Julia page has some nice illustrations of the effect). This ‘ray step AO’ approach has its shortcomings though: for instance, if the camera ray is nearly parallel to a surface (a grazing incidence), a lot of steps will be used, and the surface will be darkened even if it is not occluded at all.

Another approach is to sample the Distance Estimator at points along the normal of the surface and use this information to put together a measure of the Ambient Occlusion. This is a more intuitive method, but it comes with shortcomings of its own – e.g. new parameters are needed to control the distance between the samplings and their relative weights, with no obvious default settings. A description of this ‘normal sampling AO’ approach can be found in Iñigo Quilez’s introduction to distance field rendering.

In Fragmentarium, I’ve implemented both methods: the ‘DetailAO’ parameter controls the distance at which the normal is sampled for the ‘normal sampling AO’ method. If ‘DetailAO’ is set to zero, the ‘ray step AO’ method is used.
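For illustration, here is a minimal Python sketch of the 'normal sampling AO' idea (the sample count, spacing, and falloff weights are arbitrary choices here, which is exactly the parameter problem mentioned above):

```python
import math

def de(p):
    # Unit sphere DE, standing in for the fractal.
    return math.dist(p, (0.0, 0.0, 0.0)) - 1.0

def ambient_occlusion(p, n, samples=5, delta=0.1):
    # Step outward along the normal. If the DE at the stepped point is
    # smaller than the stepped distance, nearby geometry is occluding
    # the point; contributions are weighted by 1/2^i.
    occlusion = 0.0
    for i in range(1, samples + 1):
        d = i * delta
        q = [p[k] + n[k] * d for k in range(3)]
        occlusion += (d - de(q)) / 2**i
    return max(0.0, min(1.0, 1.0 - occlusion))
```

On the outside of an isolated sphere nothing occludes the point, so the result is 1; pointing the probe into the sphere darkens it.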

### Other lighting effects

Besides Phong shading and ambient occlusion, all the usual tips and tricks in ray tracing may be applied:

Glow – can be added simply by mixing in a color based on the number of ray steps taken (points close to the fractal will use more ray steps, even if they miss the fractal, so pixels close to the object will glow).

Fog – is also great for adding to the depth perception. Simply blend in the background color based on the distance from the camera.

Hard shadows are also straightforward – check whether the ray from the surface point to the light source is occluded.

Soft shadows: Iñigo Quilez has a good description of doing softened shadows.

Reflections are pretty much the same – reflect the camera ray in the surface normal, and mix in the color of whatever the reflected ray hits.

The effects above are all implemented in Fragmentarium as well. Numerous other extensions could be added to the raytracer: environment mapping using HDRI panoramic maps provides very natural lighting and is easy for the user to apply; simulated depth-of-field adds great depth illusion to an image and can be calculated in reasonable time and quality using screen space buffers; and more complex materials could also be added.
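As an example of how simple some of these effects are, fog is just a distance-based blend toward the background color. A Python sketch with an assumed exponential falloff:

```python
import math

def add_fog(color, fog_color, distance, density=0.1):
    # Blend factor goes from 0 at the camera towards 1 far away.
    f = 1.0 - math.exp(-density * distance)
    return tuple((1.0 - f) * c + f * fc for c, fc in zip(color, fog_color))
```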

## Coloring

Fractal objects with a uniform base color and simple colored light sources can produce great images. But algorithmic coloring is a powerful tool for bringing the fractals to life.

Algorithmic coloring uses one or more quantities derived from the orbit, the escape point, or the escape time.

Orbit traps are a popular way to color fractals. This method keeps track of how close the orbit comes to a chosen geometric object. Typical traps keep track of the minimum distance to the coordinate system center, or to simple geometric shapes like planes, lines, or spheres. In Fragmentarium, many of the systems use a 4-component vector to keep track of the minimum distance to the three planes x=0, y=0, and z=0, and to the distance from the origin. These are mapped to color through the X, Y, Z, and R parameters in the ‘Coloring’ tab.
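In its simplest form, an orbit trap just tracks a minimum during iteration. A 2D Python sketch using the standard Mandelbrot iteration, trapping the minimum distance to the origin:

```python
def orbit_trap(c, max_iter=20, escape_radius=4.0):
    # Iterate z -> z^2 + c and record how close the orbit
    # comes to the origin; the trap value is later mapped to a color.
    z = 0j
    trap = float('inf')
    for _ in range(max_iter):
        z = z * z + c
        trap = min(trap, abs(z))
        if abs(z) > escape_radius:
            break
    return trap
```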

The iteration count is the number of iterations it takes before the orbit diverges (becomes larger than the escape radius). Since this is an integer, it is prone to banding, which is discussed later in this post. One way to avoid this is to use a smooth fractional iteration count:

float smoothIter = float(iteration) // 'smooth' is a reserved word in GLSL
- log(log(dot(z,z)))/log(Scale);


(For a derivation of this quantity, see for instance here)

Here ‘iteration’ is the number of iterations, and dot(z,z) is the squared length of z at escape. There are a couple of things to notice. First, the formula involves a characteristic scale, referring to the scaling factor in the problem (e.g. 2 for a standard Mandelbrot, 3 for a Menger sponge). It is not always possible to obtain such a number (e.g. for Mandelboxes or hybrid systems). Secondly, if the smooth iteration count is used to look up a color in a palette, a constant offset may be ignored, which means the second term can be dropped. Finally, which ‘log’ functions should be used? This does not matter as long as they are used consistently: since all log functions are proportional to each other, the ratio of two logs does not depend on the base used. For the inner logs (e.g. log(dot(z,z))), changing the base will add a constant offset to the overall term, so again this just results in an offset in the palette lookup.
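The base-independence is easy to confirm numerically. A Python check with an arbitrary escape value and Scale = 2: changing the base of the outer logs leaves the value unchanged, while changing the inner log only shifts it by a constant:

```python
import math

x, scale = 7.3, 2.0  # dot(z,z) at escape, and the characteristic scale

# Outer logs: any base gives the same ratio.
r_nat = math.log(math.log(x)) / math.log(scale)
r_b10 = math.log10(math.log(x)) / math.log10(scale)

# Inner log: switching base adds a constant (a palette offset).
r_inner10 = math.log(math.log10(x)) / math.log(scale)
offset = math.log(math.log(10.0)) / math.log(scale)
```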

The lower half of this image uses a smooth iteration count.

### Conditional Path Coloring

(I made this name up – I’m not sure there is an official name, but I’ve seen the technique used several times in Fractal Forums posts.)

Some fractals may have conditional branches inside their iteration loop (sometimes disguised as an ‘abs’ operator). The Mandelbox is a good example: the sphere fold performs different actions depending on whether the length of the iterated point is smaller or larger than a set threshold. This makes it possible to keep track of a color variable, which is updated depending on the path taken.

Many other types of coloring are also possible, for example based on the normal of the surface, spherical angles of the escape time points, and so on. Many of the 2D fractal coloring types can also be applied to 3D fractals. UltraFractal has a nice list of 2D coloring types.

### Improving Quality

Some visual effects and colorings are based on integer quantities – for example, glow is based on the number of ray steps. This will result in visible boundaries between the discrete steps, an artifact called banding.

The smooth iteration count introduced above is one way to get rid of banding, but it is not generally applicable. A more generic approach is to add some kind of noise into the system. For instance, by scaling the length of the first ray step for each pixel by a random number, the banding will disappear – at the cost of introducing some noise.
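Why jitter removes banding can be seen in one dimension: quantization with added uniform noise is unbiased, so the bands average out into grain. A small Python sketch (the quantizer is just for illustration):

```python
import random

random.seed(1)

def quantized(x):
    # Plain truncation to an integer level - produces banding.
    return float(int(x))

def dithered(x):
    # Add uniform noise in [0, 1) before quantizing: each sample is
    # noisy, but the average reproduces x.
    return float(int(x + random.random()))

x = 3.7
avg = sum(dithered(x) for _ in range(100_000)) / 100_000
```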

Personally, I much prefer noise to banding – in fact I like the noisy, grainy look, but that is a matter of preference.

Another important issue is aliasing: if only one ray is traced per pixel, the image is prone to aliasing and artifacts. Using more than one sample will remove aliasing and reduce noise. There are many ways to oversample the image – different strategies exist for choosing the samples in a way that optimizes image quality, and there are different ways of weighting (filtering) the samples for each pixel. Physically Based Rendering has a very good chapter on sampling and filtering for ray tracing, and this particular chapter is freely available here:

In Fragmentarium there is some simple oversampling built in – by setting the ‘AntiAlias’ variable, a number of samples are chosen (on a uniform grid) and given the same weight (box filtering). I usually only use this for 2D fractals, because they render faster, which allows for a high number of samples. For 3D renders, I normally render a high resolution image and downscale it in an image editing program – this seems to create better quality images for the same number of samples.

Part III discusses how to derive and work with Distance Estimator functions.