Reaction-Diffusion Systems

Reaction-diffusion systems model the spatial dynamics of chemicals. An interesting early application was Alan Turing’s theory of morphogenesis (Turing’s 1952 paper). Here he suggested that pattern formation in animal skin could be explained by a two-component reaction-diffusion system.

Reaction-diffusion systems are interesting because they display a wide range of self-organizing patterns, and they have been used by several digital artists, both for 2D pattern generation and for 3D structure generation.

The reaction-diffusion model is a great example of how complex large-scale structure may emerge from simple, local rules.

Modelling Reaction-Diffusion on a GPU

As the name suggests, these systems have two driving components: diffusion, which tends to spread out or smooth the concentrations, and reactions, which describe how the chemical species may transform into each other.

For each chemical species, the evolution can be described by a differential equation of the form:

\(\frac {dA}{dt} = K \nabla^2 A + P(A,B)\)
 
where A and B are fields describing the concentration of a chemical species at each point in space. The coefficient \(K\) determines how quickly the concentration spreads out, and \(P(A,B)\) is a polynomial in the concentrations of the different species in the system. There is a similar equation for the B field.
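
For the Gray-Scott system used in the example below, the reaction terms take a particularly simple form, with \(f\) the feed rate, \(k\) the kill rate, and \(D_A\), \(D_B\) the two diffusion coefficients:

\(\frac{dA}{dt} = D_A \nabla^2 A - AB^2 + f(1-A)\)

\(\frac{dB}{dt} = D_B \nabla^2 B + AB^2 - (f+k)B\)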

To model these, we can represent the concentrations on a discrete grid, which fits nicely into a 2D texture on a GPU. The time derivative can be solved in discrete time steps using forward Euler integration (or something more powerful). On a GPU, we need two buffers to do this: we render the next time step into the front buffer using values from the back buffer, and then swap the buffers.
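
With forward Euler, each pixel is simply updated as

\(A_{t+\Delta t} = A_t + \Delta t \left( K \nabla^2 A_t + P(A_t, B_t) \right)\)

which is exactly what the Gray-Scott snippet further below does.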

Buffer swapping is a standard technique, and in Fragmentarium the only thing you need to do is declare a ‘uniform sampler2D backbuffer;’ and Fragmentarium will take care of creating and swapping the buffers. We also use the Fragmentarium host define ‘#buffer RGBA32F’ to ask for four-component 32-bit float buffers instead of the normal 8-bit integer buffers.

The Laplacian may be calculated using a finite differencing scheme, for instance using a five-point stencil:

// pixelSize is a vec2 uniform giving the size of one pixel in texture coordinates
vec3 P = vec3(pixelSize, 0.0);

// Five-point stencil Laplacian
vec4 laplacian5() {
	return
	  texture2D( backbuffer, position - P.zy )
	+ texture2D( backbuffer, position - P.xz )
	- 4.0 * texture2D( backbuffer, position )
	+ texture2D( backbuffer, position + P.xz )
	+ texture2D( backbuffer, position + P.zy );
}

(see the Fragmentarium source for a nine-point stencil).

A simple two-component Gray-Scott system can then be modelled as:

// Time step for the Gray-Scott system:
vec4 v = texture2D(backbuffer, position);  // v.x = A, v.y = B
vec2 lv = laplacian5().xy;                 // Laplacian of A and B
float xyy = v.x*v.y*v.y;                   // the reaction term A*B^2
vec2 dV = vec2( Diffusion.x * lv.x - xyy + f*(1.0-v.x),
                Diffusion.y * lv.y + xyy - (f+k)*v.y );
v.xy += timeStep*dV;

(Robert Munafo has a great page with more information on Gray-Scott systems).

Here is an example of a typical pattern created using the system above, though many other patterns are possible:

It is also possible to enforce some structure by changing the concentrations in certain regions:

You can even use a picture to modify the concentrations:

A template implementation can be found as part of the Fragmentarium source at GitHub: Reaction-Diffusion.frag. Notice that this fragment requires a recent source build from the GitHub repository to run.

Reaction-Diffusion systems used by artists

Several artists have used reaction-diffusion systems in different ways, but the most impressive examples of 2D images I have seen are the works of Jonathan McCabe. For instance his Bone Music series: or his Turing Flow series:

McCabe’s images are created using a more complex multi-scale model. Softology’s blog entry and W:Blut’s post dissect McCabe’s approach (there is even a reference implementation in Processing). Notice that Nervous System sells some of McCabe’s works as jigsaw puzzles.

Reaction-Diffusion systems in WebGL

Felix Woitzel (@Flexi23) has created some beautiful WebGL-based reaction-diffusion demos, such as this Fluid simulation with Turing patterns:


He has also created several other RD-based variants over at the WebGL Playground.

Fabricated 3D Objects

Jessica Rosenkrantz and Jesse Louis-Rosenberg at Nervous System create and sell objects designed and inspired by generative processes. Several of their objects, including these cups, plates, and lamps, are based on reaction-diffusion systems, and can be bought from their webshop.

Be sure to read their blog entries about reaction-diffusion. And don’t forget to take a look at their Cell Cycle WebGL design app, while visiting.

Reaction-Diffusion Software

An easy way to explore reaction-diffusion systems without doing any coding is by using Ready, which uses OpenCL to explore RD systems. It has several interesting features, including the ability to run systems on 3D meshes and to directly interact with and ‘paint’ on the surfaces.

It also lets you run Game of Life on exotic geometries, such as a torus, or even a Penrose tiling.

Double Precision in OpenGL and WebGL

This post talks about double precision numbers in OpenGL and WebGL, and how to emulate them if there is no native hardware support.

In GLSL 4.00.9 (which is OpenGL 4.0) and higher, there is a native double precision floating point type. And if your graphics card is able to run OpenGL 4.0, it most likely has native hardware support for doubles (except for a few ATI/AMD cards). There are some caveats, though:

  1. Not all functions are supported with double precision arguments. For instance, there are no trigonometric or exponential functions. (The available functions may be found here.)
  2. You cannot pass double precision ‘varying’ parameters from the vertex shader to the fragment shader and have the GPU automatically interpolate them. Double precision varyings must be declared flat.
  3. Double precision performance may be artificially limited by the hardware manufacturers. This is the case for Nvidia’s Fermi architecture, where the scientific computing brand, the Tesla series, can execute double precision arithmetic at half the speed of single precision, while the consumer brand, the GeForce series, can only execute double precision arithmetic at 1/8 the speed of single precision. For Nvidia’s brand new Kepler architecture used in the GeForce 600 series, things change again: here the difference between single and double precision will be a whopping factor of 24! Notice that this will also be the case for some cards in the Kepler Tesla branch, such as the Tesla K10.
  4. In Fragmentarium (and in general, in Qt’s OpenGL wrapper classes) it is not possible to set double precision uniforms. This should be easy to circumvent by using the OpenGL API directly, though (see the sketch below for one possible workaround).
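
As a small illustration of points 1, 2, and 4, here is a minimal sketch of a fragment shader using native doubles. The variable names are made up for the example, and the two-float uniform trick is just one possible workaround for point 4:

#version 400

flat in dvec2 coordDP;      // double varyings must be declared 'flat' (point 2)
uniform vec2 offsetHi;      // workaround for point 4: pass a double-precision
uniform vec2 offsetLo;      // value as two single-precision uniforms
out vec4 fragColor;

void main() {
	dvec2 c = coordDP + dvec2(offsetHi) + dvec2(offsetLo);
	double r2 = c.x*c.x + c.y*c.y;   // arithmetic works in double precision,
	                                 // but e.g. sin() or exp() does not (point 1)
	fragColor = vec4(float(r2));     // convert back to float for output
}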


(Non-related Fragmentarium image)

In order to use double precision, you must either specify a GLSL version 4.00 (or higher) or use the extension:

#extension GL_ARB_gpu_shader_fp64 : enable

Older cards, like the GeForce 310M in my laptop, do not support double precision in hardware. Here it is possible to use emulated double precision instead.

I used the functions by Henry Thasler, described in his posts, to emulate a double precision number stored in two single precision floats. The worst part about doing emulated doubles in GLSL is that GLSL does not support operator overloading. This means the syntax gets ugly for simple arithmetic, e.g. ‘z = add(mul(z1,z2),z3)’ instead of ‘z = z1*z2+z3’.
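
To give a flavour of the approach, here is a sketch of what the emulated addition looks like: each number is stored as a vec2 with a high and a low part, and the rounding error of the float addition is captured and carried along. This is the standard double-single ‘two-sum’ construction; see Thasler’s posts for the full set of operations, including multiplication and comparison.

// Emulated double stored as a vec2: .x is the high part, .y the low part.
// Addition of two emulated doubles (two-sum construction).
vec2 ds_add(vec2 dsa, vec2 dsb) {
	float t1 = dsa.x + dsb.x;
	float e = t1 - dsa.x;
	float t2 = ((dsb.x - e) + (dsa.x - (t1 - e))) + dsa.y + dsb.y;
	vec2 dsc;
	dsc.x = t1 + t2;           // renormalize: new high part
	dsc.y = t2 - (dsc.x - t1); // remaining low-order error
	return dsc;
}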

On Nvidia cards, it is necessary to turn off optimization to use Thasler’s code – this can be done using the following pragmas:

#pragma optionNV(fastmath off)
#pragma optionNV(fastprecision off)


(Non-related Fragmentarium image)

Performance

To test performance, I used a Mandelbrot test scene, rendered at 1000×500 with 1000 iterations in Fragmentarium. The numbers show the performance in frames per second. The zoom factor was determined visually, by noticing when pixelation occurred.

                  GeForce 570 GTX   Tesla 2075    Max Zoom
                  (~300 USD)        (~2200 USD)
Single            140               100           10^5
Double            41                70            10^14
Emulated Double   16                11            10^13

Some observations:

  • Emulated double precision is slightly less accurate than true hardware doubles, but not by much in this particular scenario.
  • Emulated doubles run at roughly 1/9th the speed of single precision. Amazingly, this suggests that on the Kepler architecture it might make more sense to use emulated double precision than the built-in hardware support!
  • Hardware doubles on the 570 GTX perform better than expected (they should run at roughly 1/8 the speed). This is probably because double precision arithmetic isn’t the only bottleneck in the shader.

Notice that the Tesla card was running on Windows in WDDM mode, not TCC mode (since you cannot use GLSL shaders in TCC mode). Not that I think performance would change.

WebGL and double precision

WebGL does not support double precision in its current incarnation. This might change in the future, but currently the only choice is to emulate them. This, however, is problematic, since WebGL seems to strip away pragmas! Henry Thasler’s emulation code doesn’t work under the ANGLE layer either. In fact, the only configuration I could get to work was an Intel HD 3000 GPU with ANGLE disabled. I created a sample application to test this, which can be tried out here:
Click to run the WebGL app. The left side is single precision, the right side is emulated double precision. Here shown in Firefox without ANGLE on an Intel HD 3000 card.

It is not clear why the WebGL version does not work on Nvidia cards. Floating point may run at lower precision in WebGL, but I’m using the ‘precision highp’ qualifiers. I also tried querying the precision using glContext.getShaderPrecisionFormat(…), but had no luck – it is only available in Firefox, and on my GPUs it just returns precision=0.

The most likely explanation is that the Nvidia drivers perform some optimizations which spoil the emulation code. This is also the case for desktop OpenGL, but there the pragmas solve the problem.

The emulation code uses constructs like:

z = a - (a - b);

which I suspect the well-meaning compiler might translate to ‘z = b’, since the rounding errors normally would be insignificant. Judging from some comments on Thasler’s original posts, it might be possible to prevent this using constructs such as ‘z = a - float(a - b)’, but I have not pursued this.

Fragmentarium and Double Precision

Except that there are no double precision sliders (uniforms), it is straightforward to use double precision code in Fragmentarium. The only thing to remember is that you cannot pass doubles from the vertex shader to the fragment shader, which is the standard way of passing camera information to the shader in Fragmentarium.

I’ve also included a small port of Thasler’s GLSL code in the distribution (see “Include/EmulatedDouble.frag”). It is quite easy to use (for an example, try the included “Theory/Mandelbrot – Emulated Doubles.frag”).

The Map and the Territory

In Michel Houellebecq’s novel The Map and the Territory, the (fictional) principal character gains some fame by creating a series of photographs of Michelin road maps. Some of the images are described in detail: in particular one taken of a map near the village of Châtelus-le-Marcheix.

I decided to recreate this (using Fragmentarium to simulate the camera optics). And of course I couldn’t resist adding a few singularities to the projection model.

As for the maps, I used the online version of Michelin’s road maps taken from the same region – hopefully this qualifies as fair use.

Click the images to see a larger version – these images work better when viewed large.

WebGL for Shader Graphics

Web applications are becoming popular, not least because of Google’s massive effort to push everything through the browser (with Chrome OS being the most extreme example, where everything runs through a browser interface).

Before WebGL, the only way to create efficient graphics was through plug-ins, such as Adobe’s Flash, Microsoft’s Silverlight, Unity, or Google’s O3D and Native Client. But WebGL is a vendor independent technology, directly integrated with the browser’s JavaScript language and DOM model.

Unfortunately, WebGL browser support is limited. WebGL is not available in Internet Explorer on Windows, and is not enabled by default in Safari on Mac OS X. This means that roughly 50% of all internet users won’t have access to WebGL content. WebGL is not supported on iOS devices either (even though it is accessible for iAds, and can be enabled on jail-broken devices).

What is worse is that Microsoft does not even plan to support WebGL, since they consider it a security threat. Their concerns are reasonable, but their solution is not: it would be much better if they simply showed a dialog box warning the user that executing WebGL poses a security risk, and gave a choice to continue or not – the same way they warn about plugins and downloaded executables.

Some very impressive stuff has been done using WebGL, though: for instance ro.me, Path Tracing (Evan Wallace) , Cars (Altered Qualia), Terrain Editor (Rob Chadwick), Traveling Wavefronts (Felix Woitzel), Hartverdrahtet.

Using WebGL for Fractals

There are already some great tools available for experimenting with WebGL: ShaderToy, GLSLSandbox, WebGL Playground. Their main weakness is that it is difficult to store state information (for instance, if you want a movable camera), since this cannot be done in the shader itself, without using weird hacks. So, I decided to start out from scratch to get a feeling for WebGL.

WebGL (specification) is a JavaScript API based on OpenGL ES 2.0, a subset of the desktop OpenGL version designed for embedded devices such as cell phones.

Being a ‘modern’ OpenGL implementation, there is no support for fixed pipeline rendering: there is no matrix stack, no default shaders, and no immediate mode rendering (you cannot use glBegin(…) – instead you must use vertex buffers). WebGL also misses some of the more advanced features of the desktop OpenGL version, such as 3D textures, multiple render targets, and double precision support. And float texture support is only an optional extension.

The first example I made was this Mandelbrot viewer. It demonstrates how to initialise WebGL and compile shaders, render a full-canvas quad, and process keyboard and mouse events and pass them through uniforms to the fragment shader.
Click the image to try out the WebGL demo.

A few programming comments. First, JavaScript: I’m not very fond of JavaScript’s type system. The loose typing means that you risk finding bugs later, at run-time, instead of when compiling. It also means that it can be hard to read third-party code (what kind of parameters are you supposed to provide to a function like ‘update(ev, ui)’?). As for numerical types, JavaScript only has the Number type: an IEEE 754 double precision type – no integers! Some browsers also silently ignore errors at run-time, which makes it even harder to find bugs. On the positive side are the quick iteration times and the Firebug Firefox plugin, which is an extremely powerful tool for debugging web and JavaScript code.

As for the HTML, I still find it difficult to do table-less layout using floating divs and CSS. I miss the flexible layout managers that many desktop UI kits provide, which make it easy to align components and control how they scale when resized (but I may be biased towards desktop UIs). Also, since HTML was not designed with UI widgets in mind, you have to use a third-party library to display even a simple slider: I chose jQuery UI, which was easy to set up and use.

Finally, the WebGL: the WebGL GLSL shader code is very similar to the desktop GLSL dialect. The biggest difference is the way loops are handled. Only ‘for’ loops are available, and with a very restricted syntax. It seems the iteration count must be determinable at compile time (probably because some implementations unroll all loops), which means you can no longer use uniforms to control the loops (you can, however, ‘break’ out of loops dynamically based on run-time variables). This means that in order to pass the iteration count and number of samples to the Mandelbrot shader, I have to do direct text substitutions in the shader code and recompile.
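
As a sketch of what this looks like in practice (the names here are placeholders, not the actual code from the demo), the host JavaScript replaces the token MAX_ITERATIONS with a literal before compiling the shader, while a uniform is only used for the dynamic break:

precision highp float;
uniform vec2 c;            // Julia/Mandelbrot constant (placeholder uniform)
uniform float iterations;  // run-time iteration count (cannot be the loop bound)

// MAX_ITERATIONS is replaced with a literal (e.g. "100") by the host JavaScript
// before the shader source is compiled.
float escapeCount(vec2 z) {
	float n = 0.0;
	for (int i = 0; i < MAX_ITERATIONS; i++) {
		if (float(i) >= iterations) break;               // dynamic break is allowed
		z = vec2(z.x*z.x - z.y*z.y, 2.0*z.x*z.y) + c;    // complex squaring + c
		if (dot(z, z) > 4.0) break;                      // escape test
		n += 1.0;
	}
	return n;
}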

But my biggest frustration was caused by the ANGLE translation layer. Even for this very simple example, I had several issues with ANGLE – see the notes below.

Feel free to use the example as a starting point for further experiments – it is quite simple to modify the 2D shader code.

Notes about ANGLE

A problem with WebGL is poor graphics driver support for OpenGL. Chrome and Firefox have chosen a very radical approach to solve this: on Windows, they convert all WebGL GLSL shader code into DirectX 9 HLSL code through a converter called ANGLE. Their rationale for doing this is that OpenGL 2.0 drivers are not available on all computers. However, several shaders won’t run due to the ANGLE translation, and the compilation time can be extremely slow. As for drivers, older machines with integrated graphics might be affected, but anything with a less than five-year-old Nvidia, AMD, or Intel HD graphics card should work with OpenGL 2.0.

In my experiments above, I ran into a bug that in some cases makes loops with more than 255 iterations fail (I’ve submitted a bug report).

When debugging ANGLE problems, a good first step is to disable ANGLE and test the shaders. In Chrome, this can be done by starting the executable with the command line argument --use-gl=desktop. You can check your ANGLE version with the URL chrome://gpu-internals/. In Firefox, use the about:config URL, and set webgl.force-enabled=true and webgl.prefer-native-gl=true to disable ANGLE.

It is also possible to get the translated HLSL code using the WEBGL_debug_shaders extension. However, this extension is only available to privileged code, which means Chrome must be started with the command line parameter --enable-privileged-webgl-extensions. After that, the HLSL source can be obtained by calling:

var hlsl = gl.getExtension("WEBGL_debug_shaders").getTranslatedShaderSource(fragmentShader)

I still haven’t found a workaround for this earlier Mandelbulb experiment (using GLSLSandbox), which fails with another ANGLE bug:
Click the image to try out the WebGL demo (fails on ANGLE systems).

But, I’ll try implementing it from scratch to see if I can find the bug.

Distance Estimated 3D Fractals (Part VIII): Epilogue

This is the last post in my introduction to distance estimated 3D fractals (see Part one for an overview). Originally, I intended this to be much shorter and more focused, but different topics kept sneaking up on me.

This final post discusses hybrid systems, and a few things that didn’t fit naturally in the previous posts. It also contains a small collection of links to relevant resources.

Hybrids

All the fractal systems mentioned in the previous parts apply the same transformation to each point for a number of iterations. But there is nothing that prevents applying different transformations at each iteration step. This has led to a number of hybrid systems, using building blocks from different fractals. They are very popular in Mandelbulb 3D, which comes with a huge library of transformations that may be strung together in a vast number of possible combinations.

Spudsville

It is difficult to trace the origin of many of these hybrids, since they are often cloned and modified. One of the more interesting base forms is the Spudsville system by Lenord (see also Hal Tenny’s tutorial on this system).

It is based on the following recipe:

5 x { Mandelbox, i.e. BoxFold, SphereFold, Scaling, Offset }
50 x {BoxFold, Mandelbulb power-2 Squaring }

Pseudo Kleinian

This is another popular base form, based on parameters from Theli-at’s Kleinian Drops. It is based on this formula:

12 x { Scale -1 Mandelbox }
1 x {BoxFold, Mandelbulb power-2 Squaring }
400 x { Scale 2 Mandelbox  }

A version of a similar system is available in Fragmentarium as “Knighty Collection/PseudoKleinian.frag”:

It is also possible to throw some Menger structure into the mix (see “Knighty Collection/PseudoKleinianMenger.frag”):

It is a very diverse system: this is the same formula that I used as a base form for both Time Pieces:

and Pseudo-Kleinian Blue:

There really is no end to the possibilities. Here is another example:

where an octahedral symmetry transformation has been substituted in a Spudsville-like system:

7 x { Mandelbox, i.e. BoxFold, SphereFold, Scaling, Offset }
7 x { Octahedral, Mandelbulb power-2 Squaring } 

The question is how to construct a suitable distance estimator for these hybrid systems. There is no easy answer to this. Mandelbulb 3D and Mandelbulber both use the numerical gradient approximation discussed in part V of this series.

If the system is composed only of conformal transformations, the scalar approach discussed in part VI will be sufficient.

But for general combinations there is no easy way: it is often possible to guess a decent distance estimator, but more often than not, the analytic distance estimator overshoots and needs to be compensated by a fudge factor.
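
To make this concrete, here is a rough sketch of what a Spudsville-like hybrid with a running scalar derivative might look like. The parameters, iteration counts, and the final fudge factor are placeholders for illustration, not the actual Mandelbulb 3D or Fragmentarium implementation:

// Rough sketch of a hybrid DE: a few Mandelbox iterations followed by
// box fold + Mandelbulb power-2 squaring. All parameters are placeholders.
uniform float Scale;        // e.g. around 2-3
uniform float MinRad2;      // e.g. 0.25
uniform float FudgeFactor;  // compensates an overshooting estimate

float hybridDE(vec3 pos) {
	vec3 z = pos;
	float dr = 1.0;                               // running (scalar) derivative
	for (int i = 0; i < 55; i++) {
		z = clamp(z, -1.0, 1.0) * 2.0 - z;        // box fold (used by both blocks)
		if (i < 5) {
			// Mandelbox block: sphere fold, then scale + offset
			float r2 = dot(z, z);
			if (r2 < MinRad2)  { float t = 1.0 / MinRad2; z *= t; dr *= t; }
			else if (r2 < 1.0) { float t = 1.0 / r2;      z *= t; dr *= t; }
			z = Scale * z + pos;
			dr = dr * abs(Scale) + 1.0;
		} else {
			// Mandelbulb power-2 block: 'triplex' squaring
			float r = length(z);
			if (r > 100.0) break;                 // bailout
			float theta = 2.0 * atan(z.y, z.x);
			float phi   = 2.0 * asin(clamp(z.z / r, -1.0, 1.0));
			dr = 2.0 * r * dr + 1.0;
			z = r * r * vec3(cos(theta) * cos(phi),
			                 sin(theta) * cos(phi),
			                 sin(phi)) + pos;
		}
	}
	return FudgeFactor * length(z) / abs(dr);     // scaled down to avoid overstepping
}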

Interior renderings

The Mandelbrot distance estimation formula discussed in part V is only valid for exterior distances. There also exists a formula for the interior distance (for the 2D case), but it is much more complex than the exterior one, since it requires detecting cycles in the orbit.

However, in some cases the exterior distance estimate (or the absolute value of it), also works as an interior estimate (thanks to Visual for pointing this out). Here is an example of the interior of a Mandelbulb:

Geometric Orbit Trapping

Orbit trapping is often used to color fractals. During the orbit calculation, the minimum distance to various geometric objects is stored (often the center, a sphere shell, or the x, y, and z planes).

But it is also possible to use orbit traps to define the geometry of the fractals. Here is a standard Kaleidoscopic IFS-like system, defined by a DE such as:

float DE(vec3 z)
{
	int n = 0;
	while (n < Iterations) {
		// ...do some transformations here
		n++;
	}
	return length(z)*pow(Scale, float(-n));
}

resulting in an image like this:

but by inserting a trap-function and keeping the minimum value, we can create some interesting geometric variations:

float DE(vec3 z)
{
	int n = 0;
	float d = 1000.0;
	while (n < Iterations) {
		// ...do some transformations here
		n++;
		d = min(d, trap(z) * pow(Scale, float(-n)));
	}
	return d;
}
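
The trap function itself can be very simple. For example, a cylinder trap may just measure the distance to an infinite cylinder along one of the axes (a small sketch; ‘Radius’ is an assumed parameter):

uniform float Radius;   // assumed parameter

// Distance to the shell of an infinite cylinder along the y-axis
float trap(vec3 z) {
	return abs(length(z.xz) - Radius);
}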

Using such a cylinder function for trap(z) results in an image like this:

Heightmap renderings

It is also possible to use distance estimated methods to draw heightmaps of fractals, e.g.:


Included in Fragmentarium as 'Knighty Collection/MandelbrotHeightField.frag'

Or use heightmaps to visualize the algebraic structure of poles and zeroes in the complex plane:


Included in Fragmentarium as 'Experimental/LiftedDomainColoring3D.frag'

Heightmaps can also be generated from Perlin Noise, to create more realistic terrains:


Included in Fragmentarium as 'Experimental/Terrain.frag'

Knots, Polytopes, and Honeycombs

It is also possible to use distance estimation techniques to depict mathematical structures other than fractals. I've written about them before, but Knighty has explored DEs for knots and polyhedra:

for four-dimensional polychora:

and even for hyperbolic honeycombs:

(There are several examples included with Fragmentarium)

Resources

Software

The easiest way to start exploring 3D fractals is probably by trying Mandelbulb 3D or Mandelbulber. Both are very powerful and feature-rich applications.

Mandelbulb 3D (by Jesse) is probably the most used 3D fractal creation tool (judging by pictures posted at Fractal Forums). It contains many different formulas and fragments, which can be combined as hybrids. It is free, closed-source, CPU-based, and Windows only.

Mandelbulber (by Buddhi) is open source, and available for Windows, Linux, and Mac. CPU-based, but with OpenCL preview!

GPU Based renderers

Fragmentarium is my own playground for working with GPU (GLSL) based pixel graphics. It is meant to create a modular and interactive environment for working with 2D and 3D graphics. All the images in this series of blog post were made with Fragmentarium, and many of the systems are included as examples.

Rrrola's Boxplorer is a fast interactive Mandelbox explorer. It has been extended by Marius Schilder in Boxplorer2 to include spline animations, stereo view, and many examples of fractal systems.

Subblue's Pixel Bender Mandelbulb script was one of the first GPU implementations. He has made many great fractal animations and images, so be sure to visit his web site. He also created the impressive Fractal Lab WebGL site, which made it possible to explore fractals directly in a browser (the site is currently under reconstruction).

Eiffie's Animandel Pro is a tool for creating fractal animations. It features a GLSL editor and even an integrated C-compiler for dynamically compiled CPU code. It is certainly not the easiest way to get started, but as can be seen from Eiffie's videos it is a powerful tool.

Web sites and papers

Fractal Forums is the place, where all the new development and discoveries can be followed. It's a treasure chest filled with information, but it can be difficult to find it in the archives. A good place to start is the original Mandelbox thread and the thread about DE's for the Mandelbox.

Daniel White’s Mandelbulb site is probably the best account of the history of this fractal. Also see Paul Nylander’s Hypercomplex systems.

Tom Lowe's Mandelbox site has a lot of information on the Mandelbox, collected by the person who discovered it himself.

Hypercomplex Iterations: Distance Estimation and Higher Dimensional Fractals (2002), by Dang, Kaufmann, and Sandin, is a rare mathematical treatment of higher-dimensional fractals and their distance estimates. It is free (but tough!).

J. C. Hart's original paper Ray tracing deterministic 3-D fractals and his sphere tracing papers are must-reads. He has also written many other great papers.

Pouet.net is a web site for demo scene coders. There is a strong emphasis on heavily optimized and efficient code. Several demos feature distance estimation and fractals.

In particular Iñigo Quílez has explored fractals and distance fields in a demo scene context. His Rendering Worlds With Two Triangles is a good introduction to distance field rendering. But be sure to check out Quilez's website - there is an abundance of good stuff, including lots of tutorials.

Fragmentarium 0.9.1 (“Chiaroscuro”) Released

I’ve released a new build of Fragmentarium, version 0.9.1 (“Chiaroscuro”).
It can be downloaded at Github (as of now only Windows builds are available).

The usual caveats apply: Fragmentarium is very much work in progress, and is best suited for people who like to experiment with code.

Dual buffers

The main new feature is support for dual buffers and dual shaders. The front buffer is swapped after each frame to the backbuffer, which can be accessed as: ‘uniform sampler2D backbuffer;’.

Buffers can be created as either 8-bit or 16-bit integer, or 32-bit float. The new buffers make it possible to create accumulated ray tracing, high quality AA for 2D systems, and many types of feedback systems. The easiest way to start exploring these features is by looking at the new tutorials (see below).


Minor improvements:

  • Changed UI a bit to make it easier to change from automatic to continuous rendering.
  • Added context menu option to insert preset based on current settings.
  • The syntax for using ‘2D.frag’ is simpler now. Just implement: “vec3 color(vec2 c);”
  • Bugfix: Fixed error in 2D.frag, where changing aspect ratio would mess up viewport translation.
  • Bugfix: Fixed some errors in the included fragments: Noise, Tetrahedron, and several of Knighty’s examples were missing a ‘providesInit’.
  • Bugfix: Fixed specular bug in standard-raytracer.
  • Bugfix: Copying from web was sometimes weird (should now strip rich text).
  • Bugfix: Autosave now creates a directory with the output files (necessary since the #BufferShader directive broke the old ‘include all in one file’ system).


New fragments:

  • Added a new ‘tutorial’ category, with examples of many features in Fragmentarium.
  • Soft-Raytracer.frag – An example progressive (accumulated) ray-tracer: DOF using a finite aperture, HDR and tone mapping, soft shadows, multiple-ray ambient occlusion, and sub-pixel jittered high-quality anti-aliasing. All very experimental.
  • Progressive2D.frag, Progressive2DJulia.frag – Can be used for high-quality (progressive) anti-alias of 2D systems. Uniform disk sampling, Gaussian and Box filtering, “gamma correct” averaging of samples.
  • A Quilez inspired ‘Domain Distortion’ example.
  • A dual-buffered Game of Life example.
  • Mandelbrot Averaged Stripe Coloring example.
  • Lifted Domain Coloring example (in 2D/3D), see here.
  • New ‘Theory’ category with examples of the dual number and automatic differentiation method.
  • Some great new scripts from Knighty, for polyhedra, knots, polychora, and hyperbolic tessellations. See here and here.

ATI users

Some fragments fail on ATI cards. This seems to be due to faulty GLSL driver optimizations. A workaround is to lock the ‘iterations’ variable (click the padlock next to it). Adding a bailout check inside the main DE loop (e.g. ‘if (length(z)>1000.0) break;’) also seems to do the job. I don’t own an ATI card, so I cannot debug this without people helping.

Mac users

Some Mac users have reported problems with the last version of Fragmentarium. Again, I don’t own a Mac, so I cannot solve these issues without help.

Finally, please read the FAQ, before asking questions.

For more examples of images generated with the new version, take a look at the Flickr Fragmentarium stream.

Spherical Worlds

Recently I saw a description of spherical fractals in a blog post by Samuel Monnier.

These Julia-sets are constructed like ordinary Mandelbrots and Julias: first the argument is squared, but instead of adding a constant afterwards, a Möbius transformation is applied:

\(z = \frac{a z^2 + b}{c z^2 + d}\)

For the right choices of (complex) constants, plane-filling patterns appear.
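
In GLSL, one step of this iteration is easy to write with a couple of complex-number helpers (a sketch; a, b, c, and d are the complex constants, stored as vec2 with real and imaginary parts):

// Complex multiplication and division on vec2 (x = real part, y = imaginary part)
vec2 cMul(vec2 u, vec2 v) { return vec2(u.x*v.x - u.y*v.y, u.x*v.y + u.y*v.x); }
vec2 cDiv(vec2 u, vec2 v) { return cMul(u, vec2(v.x, -v.y)) / dot(v, v); }

// One iteration: square z, then apply the Möbius transformation (a z^2 + b)/(c z^2 + d)
vec2 mobiusStep(vec2 z, vec2 a, vec2 b, vec2 c, vec2 d) {
	vec2 z2 = cMul(z, z);
	return cDiv(cMul(a, z2) + b, cMul(c, z2) + d);
}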

There is an intimate connection between Möbius transformations and spherical geometry: if the plane is stereographically projected onto a sphere, a Möbius transformation corresponds to rotating and moving the sphere, and then projecting stereographically back to the plane (this is nicely visualized in this video).



This connection can be visualized graphically: if the plane-filling patterns are stereographically projected onto a sphere, they fit naturally on it. There are no discontinuities or voids, and no singularities near the poles.
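
Concretely, projecting from the north pole, a point \((x, y, z)\) on the unit sphere corresponds to the point \(\left(\frac{x}{1-z}, \frac{y}{1-z}\right)\) in the plane, so the sphere can be colored simply by evaluating the planar pattern at the projected point.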



Here I’ve used Fragmentarium to create some images of these plane-filling patterns, together with their stereographical projection onto a sphere. It was done by distance estimated ray marching, but in this case we could have used ordinary ray tracing, and calculated the exact intersections.



The Fragmentarium script can be found here.

Lifted Domain Coloring

This year, one of the pictures at the International Science & Engineering Visualization Challenge, caught my interest.

Poelke and Polthier’s Lifted Domain Coloring is a coloring scheme for visualizing properties of complex functions: it maps numbers in the complex plane stereographically to the Riemann sphere, and assigns a hue based on the inclination angle (though I’m not sure much is gained by the stereographic projection, since the polar representation of the complex numbers seems to provide all the needed information). Saturation and brightness are controlled by the modulus of the number: when the modulus goes towards infinity, the color turns white, and for numbers close to zero, the color turns black. The exact radial mapping used by the authors is not specified in the paper, but I think my implementation is quite close:

The visualization scheme makes it possible to visually identify different properties, such as zeroes and poles in complex functions.

One way such a visualization may be improved, I think, is by using a heightmap:

Here I’ve raised the poles and lowered the zeroes: first, I made the poles and zeroes appear symmetric by transforming the modulus: r = abs(r + 1/r). Then I applied a sigmoid function to tame the infinities, and finally another sigmoid transformation was applied to change the sign at the zeroes. This technique will only work for somewhat well-behaved functions (meromorphic functions – functions with a countable number of zeroes and poles).
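
As a rough sketch, such a height mapping could look like the following (the exact sigmoids are not spelled out above, so these particular choices are only an illustration of the idea):

// Height for the lifted domain coloring: poles raised, zeroes lowered.
// The specific sigmoids are assumptions, not the exact original mapping.
float height(vec2 fz) {
	float r = length(fz);                     // modulus of f(z)
	float s = abs(r + 1.0 / max(r, 1e-6));    // symmetric in zeroes and poles
	float tamed = s / (1.0 + s);              // sigmoid: tames the infinities
	float sgn = (r*r - 1.0) / (r*r + 1.0);    // sigmoid in r: negative near zeroes
	return tamed * sgn;
}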

Of course, I’ve also tested the Lifted Domain Coloring scheme on fractals.

Here is a Mandelbrot and Julia plot:

Usually Mandelbrot visualizations focus on coloring the outside of the set, but since the exterior of the Mandelbrot set has infinite modulus, only the interior (with its zeroes) is visualized here. The zeroes are visualized as peaks for better graphical clarity.

I also tested the coloring scheme on Samuel Monnier’s Ducks fractal:


Here, the coloring scheme does a decent job for low iteration counts, but for higher iteration counts the images become messy, so for purely aesthetic purposes there are probably better coloring schemes around.

Distance Estimated Polychora

My last post mentioned some scripts that Knighty (a Fractal Forums member) made for distance estimated rendering of many types of polyhedra, including the Platonic solids. Shortly after, Knighty really raised the bar by finding a distance estimator for four dimensional polytopes. In this post, I’ll show some images of a subset of these, the convex regular polychora.

There are several ways to depict four-dimensional structures. The 4D Quaternion Julia, one of the first distance estimated fractals, simply showed 3D slices of 4D space. Another way would be to project the shadow of a 4D object onto 3D space. Ideally, a proper perspective projection would be preferable, but this seems to be complicated with distance estimation techniques.

The technique Knighty used to create a 3D projection was to place the polychoron boundary on a 3-sphere, and then stereographically project the 3-sphere surface onto 3-dimensional space. For a very thorough and graphical introduction to stereographic projection of higher-dimensional polytopes, see this great movie: Dimensions. It should be noted that the Jenn3D program also uses this projection to depict a variety of polytopes (using polygonal rendering, not distance estimated ray marching).
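
The projection itself is the direct analogue of the familiar 2D case: projecting from the ‘north pole’, a point \((x, y, z, w)\) on the unit 3-sphere is mapped to \(\left(\frac{x}{1-w}, \frac{y}{1-w}, \frac{z}{1-w}\right)\) in ordinary 3D space, with the pole \(w = 1\) itself sent to infinity.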

Back to the structures:

The convex regular 4-polytopes are the four-dimensional analogs of the Platonic solids. They are bounded by three-dimensional cells, which are all Platonic solids of the same kind (similar to the way the Platonic solids are bounded by identical regular 2D polygons). The convex regular polytopes are consistently named by the number of identical cells (Platonic solids) that bound them. In four dimensions, there are six of these, one more than the number of Platonic solids.

5-cell

The 5-cell (or 4-simplex, or hypertetrahedron) is the simplest of the convex regular polytopes. It is composed of 5 three-dimensional tetrahedrons, resulting in a total of 5 vertices and 10 edges. It is the 4D generalisation of a tetrahedron.

The curved lines are a consequence of the stereographic projection. In 4D the lines would be straight.

8-cell

The 8-cell (or hypercube, or Tesseract) is also simple. Composed of 8 3D-cubes, it has 16 vertices, and 32 edges. It is the 4D generalisation of a cube.

16-cell

Things start to get more complicated with the 16-cell (or hyperoctahedron). It is composed of 16 tetrahedrons, and has 8 vertices and 24 edges. It is the 4D generalisation of an octahedron.

Notice that if we rotate the 3-sphere, we can get interesting depictions, with edges getting infinitely long in 3-space:

24-cell

The 24-cell is exceptional, since it has no 3D analogue. It is built from 24 octahedrons, has 24 vertices and 96 edges.

120-cell

The 120-cell is a beast, built from 120 three-dimensional dodecahedra. With 600 vertices and 1200 edges, it is the most complex of the convex, regular polychora. It is the 4D generalisation of a dodecahedron.

Zooming in, it is easier to see the pentagons making up the dodecahedra.

600-cell

Finally, the 600-cell is built from an even larger number of polyhedra: 600 three-dimensional tetrahedrons. However, the simpler polyhedra mean there is only a total of 120 vertices and 720 edges. It is the 4D generalisation of an icosahedron.

Knighty’s great Fragmentarium scripts can be found in this thread at Fractal Forums.

Knighty’s script is not limited to regular convex polychora. Many types of polytopes can be made. Here are the parameters used for the images in this post:

polychora06.frag parameters:

Type,U,V,W,T
3,0,1,0,0 - 5-cell
4,0,1,0,0 - 8-cell
4,0,0,1,0 - 24-cell
4,0,0,0,1 - 16-cell
5,0,1,0,0 - 120-cell
5,0,0,0,1 - 600-cell

And for completeness, here are the parameters for the 3D polyhedron script:

Type,U,V,W
3,0,1,0 Tetrahedron
4,0,0,1 Cube
3,1,0,0 Octahedron
5,0,0,1 Dodecahedron
5,0,1,0 Icosahedron