All posts by Mikael Hvidtfeldt Christensen

Assorted Links

Pixel Bender 3D

Adobe has announced Pixel Bender 3D:

If I understand it correctly, it is a new API for Flash, and not as such a direct extension of the Pixel Bender Toolkit. So what does it do?

As far as I can tell, it is simply a way to write vertex and fragment shaders for Flash. While this is certainly nice, I think Adobe is playing catch-up with HTML5 here – many browsers already support custom shaders through WebGL (in their development builds, at least). Or compare it to a modern 3D browser plugin such as Unity, with deferred lighting, depth-of-field, and occlusion culling…

And do we really need another shader language dialect?

Flash raytracer

Kris Temmerman (Neuro Productions) has created a raytracer in Flash, complete with ambient occlusion and depth-of-field:

Kris has also produced several other impressive works in Flash:

Chaos Constructions

Quite & Orange won the 4K demo competition at Chaos Constructions 2010 with the very impressive ‘CDAK’ demo:

(Link at Pouet, including executable).

Ex-Silico Fractals

This YouTube video shows how to produce fractals without a computer. I’ve seen video feedback before, but this is a clever setup using multiple projectors to create iterated function systems.

Vimeo Motion Graphics Award

‘Triangle’ by Onur Senturk won the Vimeo Motion Graphics Award. The specular black material looks good. Wonder if I could create something similar in Structure Synth’s internal raytracer?

Written Images

Written Images is a generative book with artworks from different artists. All images are unique, created on-demand, when a new copy of the book is printed. 70 applications were submitted (see the overview video), and a jury selected 42 of these for the final collection.

Some of the submissions that caught my eye:


First, I think Marcin Ignac’s Cindermedusae is an amazing work. He generates imaginary sea creatures in the style of Ernst Haeckel.

Cindermedusae was created in Cinder and OpenGL, and generates images in near-realtime (one of the requirements for Written Images was a maximum calculation time of 15 seconds).


W:Blut created Division for Written Images. It seems to be created using his interesting Hemesh library for Processing.

Origami Butterfly

Jonathan McCabe contributed with the Origami Butterfly. His images are created using an iterated folding process in 2D – which is interesting, because the Kaleidoscopic Iterated Function Systems and the Mandelbox use a similar approach, but in 3D.

The Origami Butterfly process is described in a bit more detail at this post at Generator.x.

Jacob’s Cave

Jacob’s Cave was made by Sansumbrella, using Cinder. Intriguing, complex shapes, yet very simple and elegant:

MeshLab with Structure Synth integration

MeshLab is a powerful open-source system for manipulation of 3D meshes. It supports a gazillion 3D formats, and many types of filters and operators for remeshing, cleaning, and modifying 3D structures.

The latest version of MeshLab (the 1.3.0 beta) now also supports EisenScript directly.

You can directly import EisenScript (.es) files using the standard ‘File | Open’ dialog, or you can choose ‘Filters | Create New Mesh Layer | Structure Synth Mesh Creation’ and paste your EisenScript into the text edit field.

Since MeshLab compiles the Structure Synth code directly into the program, the resulting structures should be 100% compatible.

This opens up a lot of possibilities for preparing Structure Synth objects for use in other software, such as third party raytracers. I also think it may be possible to prepare a 3D structure in a format suitable for 3D printing (where there are special restrictions on the geometry), but I’ll have to look into this.

There is a vast amount of commands and operators to explore in MeshLab, and I think this integration is a great step forward for using Structure Synth in a wider context.

Creating a Raytracer for Structure Synth

Updated November 17th, 2011

Structure Synth has quite flexible support for exporting geometry to third-party raytracers, but even though I’ve tried to make it as simple as possible, the Template Export system can be difficult to use. It requires knowledge of the scene description format used by the target raytracer, and of the XML format used by the templates in Structure Synth. Besides that, exporting and importing can be slow when dealing with complicated geometry.

So I decided to implement a simple raytracer inside Structure Synth. I probably could have integrated some existing open-source renderer, but I wanted to have a go at this myself. The design goal was to create something reasonably fast, aiming for interesting, rather than natural, results.

The first version of the raytracer is available now in SVN, and will be part of the next Structure Synth release.

How to use the raytracer

The raytracer has a few settings which can be controlled by issuing ‘set’ commands in the EisenScript.

The following is a list of commands with their default values given as argument:

set raytracer::light [0.2,0.4,0.5]

Sets the position of a light in the scene. If a light source position is not specified, the default is a light placed at the lower left corner of the viewport. Notice that only a single light source is possible as of now. This light source controls the specular and diffuse lighting, and the hard shadow positions. The point light source model will very likely disappear in future versions of Structure Synth – I’d prefer environment lighting or something else that is easier to set up and use.

set raytracer::shadows true

This allows you to toggle hard shadows on or off. The shadow positions are determined by the light source position above.

Rendering without and with anti-aliasing enabled

set raytracer::samples 6

This sets the number of samples per pixel. Notice that the actual number of samples is the square of this argument, i.e. a value of 2 means 2×2 camera rays will be traced for each pixel. The default value is 6×6 samples for ‘Raytrace (in Window)’ and 8×8 samples for ‘Raytrace (Final)’. This may sound like a lot of samples per pixel, but the number of samples also controls the quality of the depth-of-field and ambient occlusion rendering. If the image appears noisy, increase the sample count.

To the left, a render with a single Phong light source. To the right, the same picture using ambient occlusion

set raytracer::ambient-occlusion-samples 1

Ambient occlusion is a global illumination technique for calculating soft shadows based on geometry alone. By default the number of samples is set to 1. This may not sound like a lot, but each pixel will be sampled multiple times courtesy of the ‘raytracer::samples’ count – this makes sense because ‘raytracer::samples’ is used to sample both the lens model (for anti-aliasing and depth-of-field) and the ambient occlusion. And when I get a chance to implement some better shader materials, the samples can be used there as well. Notice that, as above, the number refers to samples per dimension. Example: if ‘raytracer::ambient-occlusion-samples = 3’ and ‘raytracer::samples = 2’, a total of 3×3×2×2 = 36 samples will be used per pixel.
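The sample accounting above can be sketched in a few lines (a sketch of the multiplication rule only, not the actual Structure Synth code – the function name is mine):

```python
def samples_per_pixel(samples, ao_samples):
    """Total samples per pixel.

    Both settings count samples per dimension, so each is squared
    before they are multiplied together."""
    return (samples ** 2) * (ao_samples ** 2)

# raytracer::samples = 2, raytracer::ambient-occlusion-samples = 3
print(samples_per_pixel(2, 3))  # 36 samples per pixel
```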

Depth-of-Field example

set raytracer::dof [0.23,0.2]

Enables depth-of-field calculations. The first parameter determines the distance to the focal plane, in terms of the viewport coordinates; it is always a number between 0 and 1. The second parameter determines how blurred objects away from the focal plane appear – higher values correspond to more blurred foregrounds and backgrounds.

Hint: in order to get the viewport plane distance to a given object, right-click the OpenGL view and choose ‘Toggle 3D Object Information’. This makes it possible to fit the focal plane exactly.

set raytracer::size [0x0]

Sets the size of the output window. If the size is 0x0 (the default), the output will match the OpenGL window size. If only one dimension is specified, e.g. ‘set raytracer::size [0x600]’, the missing dimension will be calculated to match the aspect ratio of the OpenGL window. Be careful when specifying both dimensions: the aspect ratio may differ from the OpenGL window’s, distorting the image.
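The size rule can be illustrated as follows (a sketch; `output_size` and its parameter names are my own, not Structure Synth internals):

```python
def output_size(spec_w, spec_h, gl_w, gl_h):
    """Resolve the raytracer output size; 0 means 'not specified'.

    A missing dimension is derived from the OpenGL window's
    aspect ratio; if both are given they are used as-is."""
    if spec_w == 0 and spec_h == 0:
        return gl_w, gl_h              # default: match the OpenGL window
    if spec_w == 0:
        return round(spec_h * gl_w / gl_h), spec_h
    if spec_h == 0:
        return spec_w, round(spec_w * gl_h / gl_w)
    return spec_w, spec_h              # both given: aspect ratio may differ

# 'set raytracer::size [0x600]' with an 800x400 OpenGL window:
print(output_size(0, 600, 800, 400))  # (1200, 600)
```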

set raytracer::max-threads 0

This determines how many threads Structure Synth will use during the rendering. The default value of 0 means the system will suggest an ideal number of threads. For a dual-core processor with hyper-threading, this means four threads will be used. Lower the number of threads if the computer becomes unresponsive while you use it for other tasks.

set raytracer::voxel-steps 30

This determines the resolution of the uniform grid used to accelerate ray intersections. By default, a simple heuristic controls the resolution based on the number of objects. A value of 30 means that a grid with 30x30x30 cells will be used. The uniform grid is not very efficient for non-uniform structures, and will likely be replaced in a future version of Structure Synth.

set raytracer::max-depth 5

This is the maximum recursion depth of the raytracer (for transparent and reflective materials).

Finally two material settings are available:

set raytracer::reflection 0.0

Simple reflection. A value between 0 and 1.

set raytracer::phong [0.6,0.6,0.3]

The first number determines the ambient lighting, the second the diffuse, and the third the specular lighting. Diffuse and specular lighting depend on the location of the light source.

It is also possible to apply the materials to individual primitives in Structure Synth directly. This is done by tagging the objects.

Consider the following EisenScript fragment:

Rule R1 {
   { x 1 } sphere::mymaterial
}

The sphere above now belongs to the ‘mymaterial’ class, and its material settings may be set using the following syntax:

set raytracer::mymaterial::reflection 0.0
set raytracer::mymaterial::phong [0.6,0.6,0.0]

An important tip: writing these long parameter names is tedious and error-prone, so I’ve added a list of the most used EisenScript and raytracer commands to the context menu in the Structure Synth editor window. Just right-click and select a command.

Folding Space II: Kaleidoscopic Fractals

Another interesting type of 3D fractal has appeared: the Kaleidoscopic 3D fractals, introduced by Knighty in this thread.

Once again these fractals are defined by investigating the convergence properties of a simple function. And similar to the Mandelbox, the function is built around the concept of folds. Geometrically, a fold is simply a conditional reflection: you reflect a point in a plane if it is located on the wrong side of the plane.

It turns out that just by using plane-folds and scaling, it is possible to create classic 3D fractals, such as the Menger cube and the Sierpinski tetrahedron, and even recursive versions of the rest of the Platonic solids: the octahedron, the dodecahedron, and the icosahedron.
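In code, a plane-fold is only a few lines. A minimal sketch (my own illustration, using plain tuples; the plane passes through the origin and is given by its unit normal n):

```python
def fold(p, n):
    """Conditional reflection: reflect the point p in the plane with
    unit normal n, but only if p lies on the negative side of it."""
    d = sum(pi * ni for pi, ni in zip(p, n))  # signed distance to the plane
    if d < 0:
        p = tuple(pi - 2 * d * ni for pi, ni in zip(p, n))
    return p

# A point on the wrong side of the x = 0 plane gets mirrored across it:
print(fold((-1, 2, 0), (1, 0, 0)))  # (1, 2, 0)
```

Iterating a handful of such folds, together with a scaling step, is all the fractal formulas below need.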

Example of a recursive dodecahedron

The kaleidoscopic fractals introduce an additional 3D rotation before and after the folds. It turns out that these perturbations introduce a rich variety of interesting and complex structures.

I’ve followed the thread and implemented most of the proposed systems by modifying Subblue’s Pixel Bender scripts.

Below are some of my images:

The Menger Sponge

My first attempts. Pixel Bender kept crashing on me, until I realized that there is a GPU timeout in Windows Vista (read this for a solution).

The Sierpinski

Then I moved on to the Sierpinski. The sequence below shows something characteristic of these fractals: the first, slightly perturbed variations look artificial and synthetic, but as the system is distorted further, it becomes organic and alive.

The Icosahedron

I also tried the octahedron and dodecahedron, but my favorite is the icosahedron – especially Knighty’s hollow variant.

Arbitrary Planes

One nice thing about these systems is that you do not necessarily need to derive a complete distance estimator from scratch – you can also just modify existing distance estimator code and see what happens. These last two images were constructed by modifying existing distance estimators.

It will be interesting to see where this is going.

Many fascinating 3D fractals have appeared over the last few weeks. And GPU processing now makes it possible to explore these systems in real-time.

Cinder – Creative Coding in C++

Cinder is a new C++ library for creative applications. It is free, open-source, and cross-platform (Windows, Mac, iPhone/iPad, but no Linux). Think of it as Processing, but in C++.

Cinder offers classes for image processing, matrix, quaternion, spline and vector math, but also more general stuff like XML, HTTP, IO, and 2D Graphics.

The more generic stuff is implemented via third-party libraries, such as TinyXML, Cairo, AntTweakBar (a simple GUI), Boost (smart pointers and threads) and system libraries (QuickTime, Cocoa, DirectAudio, OpenGL) – certainly an ambitious range of technologies and uses.

Their examples are impressive, especially some of the demos by Robert Hodgin (flight404):

Cymatic Ferrofluid by flight404 (be sure to watch the videos).

Robert Hodgin has also created a very nice Cinder tutorial, which guides you through the creation of a quite spectacular particle effect.

Finally, it should be noted that openFrameworks offers related functionality, also based on C++.

A Few Links

…some old, some new.

The Demoscene

It was only a matter of time before a Mandelbox would show up on the demoscene:

Hochenergiephysik by Still is a 4K demo, featuring the Mandelbox. If 4KB sounds bloated, Still has also created a 1K demo: Futurism by Still.

And while we are at it, may I suggest these as well: The Cube by Farbrausch, Rove by Farbrausch, and Agenda Circling Forth by Fairlight & Cncd.

New software

NodeBox 2.0 is out in a beta version. The big news is that it is now available for Windows. It also sports a graph-based GUI for patching nodes together.

Tacitus is a GUI for creating per-pixel GPU effects, similar in concept to Pixel Bender. It has a very nice look and feel, but a big shortcoming is that it is not possible to edit the GPU scripts directly in the GUI – you have to compile your script to a plugin via an included compiler. Another feature I miss is the ability to navigate the camera directly with the mouse in the viewport, instead of using sliders (something Pixel Bender doesn’t support either). But Tacitus is still in beta, and it will be interesting to see where it is going. It comes with a single plugin, a port of Subblue’s Mandelbulb Pixel Bender plugin. Tacitus is Windows only.

NeuroSystems Substance is an ‘Evolutionary and Organic Art Creator’. Some interesting concepts here, including a real-time global illumination raytracer (video here). Unfortunately, the raytracer is not part of the free viewer. Surprisingly, NeuroSystems’ impressive visualization technology seems to originate from SIMPLANT, a real-time 3D breast implant simulator. Substance is Windows only, and the full (non-free) versions should be released very soon.

Gifts for Geeks

A Calabi-Yau Manifold Crystal sculpture.

A Gömböc. “The ‘Gömböc’ is the first known homogeneous object with one stable and one unstable equilibrium point, thus two equilibria altogether on a horizontal surface. It can be proven that no object with less than two equilibria exists.”

The Reality of Fractals

“… no one, not even Benoit Mandelbrot himself […] had any real preconception of the set’s extraordinary richness. The Mandelbrot set was certainly no invention of any human mind. The set is just objectively there in the mathematics itself. If it has meaning to assign an actual existence to the Mandelbrot set, then that existence is not within our mind, for no one can fully comprehend the set’s endless variety and unlimited complication.”

Roger Penrose (from The Road to Reality)

The recent proliferation of 3D fractals, in particular the Mandelbox and Mandelbulb, got me thinking about the reality of these systems. The million dollar question is whether we discover or construct these entities. Surely these systems give the impression of exploring uncharted territory, perhaps even looking into another world. But the same can be said for many traditional man-made works of art.

I started out by citing Roger Penrose. He is a mathematical Platonist, and believes that both the fractal worlds (such as the Mandelbrot set) and mathematical truths (such as Fermat’s last theorem) are discovered. In his view, the mathematical truths have an eternal, unchanging, objective existence in some kind of Platonic ideal world, independent of human observers.

In Penrose’s model, there are three distinct worlds: the physical world, the mental world (our perception of the physical world), and the cryptic Platonic world. Even Penrose himself admits that the connections and interactions between these worlds are mysterious. And personally I cannot see any kind of evidence pointing in favor of this third, metaphysical world.

Designer World by David Makin

Roger Penrose is a highly renowned mathematician and physicist, and I value his opinions and works highly. In fact, it was one of his earlier books, The Emperor’s New Mind, that in part motivated me to become a physicist myself. But even though he is probably one of the most talented mathematicians living today, I am not convinced by his Platonist belief.

Personally, I subscribe to the less exotic formalist view: that mathematical truths are the theorems we can derive by applying a set of deduction rules to a set of mathematical axioms. The axioms are not completely arbitrary, though. For instance, a classic mathematical discipline such as Euclidean geometry was clearly motivated by empirical observations of the physical world. The same does not necessarily apply to modern mathematical areas. For instance, Lobachevsky’s non-Euclidean geometry was conceived by exploring the consequences of modifying one of Euclid’s fundamental postulates (interestingly, non-Euclidean geometry later turned out to be useful in describing the physical world, through Einstein’s general theory of relativity).

But if modern mathematics has become detached from its empirical roots, what governs the evolution of modern mathematics? Are all formal systems thus equally interesting to study? My guess is that most mathematicians gain some kind of intuition about what directions to pursue, based on a mixture of trends, historical research, and feedback from applied mathematics.

Mandelballs by Krzysztof Marczak [Mandelbox / Juliabulb mix]

Does my formalist position mean that I consider the Mandelbrot set to be a man-made creation, in the same category as a Picasso painting or a Bach concerto? Not exactly. Because I do believe in physical realism (in the sense that I believe in an objective, physical world independent of human existence), and since I do believe some parts of mathematics are inspired by this physical world and try to model it, I believe some parts of mathematics can be attributed an objective status as well. But it is a weaker kind of objective existence: the mathematical models and structures used to describe reality are not persistent and everlasting; instead they may be refined and altered, as we progressively create models with greater predictive power. And I think this is the reason fractals often resemble natural structures and phenomena: because the mathematics used to produce the fractals was inspired by nature in the first place. Let me give another example:

Teeth by Jesse

Would a distant alien civilization come up with the same Mandelbrot images as we see? I think it is very likely. Any advanced civilization studying nature would most likely have created models such as the natural numbers, the real numbers, and eventually the complex numbers. The complex numbers are extremely useful when modeling many physical phenomena, such as waves or electrodynamics, and complex numbers are essential in the description of quantum mechanics. And if this hypothetical civilization had computational power available, eventually someone would investigate the convergence of a simple, iterated system like z = z² + c. So there would probably be a lot of overlapping mathematical structures. But there would also be differences: for instance, the construction of the slightly more complex Mandelbox set contains several human-made design decisions, making it less likely to be invented by our distant civilization.
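The convergence test behind those images really is that simple. A minimal sketch (the iteration limit of 100 is a conventional choice; points beyond radius 2 are guaranteed to diverge):

```python
def in_mandelbrot(c, max_iter=100):
    """Iterate z = z^2 + c from z = 0 and report whether the
    orbit stays bounded; c is a Python complex number."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:       # escaped: c is not in the set
            return False
    return True              # still bounded after max_iter steps

print(in_mandelbrot(0j))      # True: the orbit stays at 0
print(in_mandelbrot(2 + 0j))  # False: the orbit escapes immediately
```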

I think there is a connection to other areas of generative art as well. In the opening quote, Penrose claims that no one could have any real preconception of the Mandelbrot set’s extraordinary richness. And the same applies to many generative systems: they are impossible to predict and often surprisingly complex and detailed. But this does not imply that they have a metaphysical, Platonic origin. Many biological and physical systems share the same properties. And many of the most interesting generative systems are inspired by these physical or biological systems (for instance using models such as genetic algorithms, flocking behavior, cellular automata, reaction-diffusion systems, and L-systems).

Another point to consider is that creating beautiful and interesting fractal images like the ones above requires much more than a simple formula. It requires aesthetic intuition and skill to choose a proper palette and find an interesting camera location, and it takes many hours of tweaking formula parameters. I know this from my experiments with 3D fractals – I’m very rarely satisfied with my own results.

But to sum it all up: even though fractals (and generative systems) may possess endless variety and unlimited complication, there is no need to call upon metaphysical worlds in order to explain them.

Folding Space: The Mandelbox Fractal

Another interesting 3D fractal has emerged: the Mandelbox.

It originates from this thread, where it was introduced by Tglad (Tom Lowe). Similar to the original Mandelbrot set, an iterative function is applied to points in 3D space, and points which do not diverge are part of the set. The iterated function used for the Mandelbox set has a nice geometric interpretation: it corresponds to repeated folding operations.

In contrast to the organic presence of the Mandelbulbs, the Mandelbox has a very architectural and structural feel to it:

The Mandelbox probably owes its name to the cubic and square patterns that emerge at many levels:

It is also possible to create Julia Set variations:

Juliabox by ‘Jesse’ (click to see the large version of this fantastic image!)

Be sure to check out Tom Lowe’s Mandelbox site for more pictures and some technical background information, including links to a few (Windows only) software implementations.

I tried out the Ultra Fractal extension myself. This was my first encounter with Ultra Fractal, and it took me quite some time to figure out how to set up a Mandelbox render. For others, the following steps may help:

  1. Install Ultra Fractal (there is a free trial version).
  2. Choose ‘Options | Update Public Formulas…’ to get some needed dependencies.
  3. Download David Makin’s MMFwip3D package and install it into the Ultra Fractal formula folder – most likely located at “%userprofile%\Documents\Ultra Fractal 5\Formulas”.
  4. In principle, this is all you need. But the MMFwip3D formulas contain a vast number of parameters and settings. To get started, try using an existing parameter set: this is a good starting point. To use these settings, simply select the text, copy it to the clipboard, and paste it into an Ultra Fractal fractal window.

The CPU-based implementations are somewhat slow, taking minutes to render even small images – but it probably won’t be long before a GPU-accelerated version appears: Subblue has already posted images of a Pixel Bender implementation in progress.

Liquid Pixels

A few days ago, I found this little gem on the always inspiring WOWGREAT tumbleblog:

Hiroshi Sugimoto, Tyrrhenian Sea (1994); Pixels sorted by Blue.

It was created by Jordan Tate and Adam Tindale by sorting the pixels of this picture. See their site, Lossless processing, for more pixel shuffling goodness.

Adding some randomness

I liked the concept, so I decided to try something similar. But instead of sorting the pixels, I had this vague idea of somehow stirring the pixels in an image, and letting the pixels settle into layers. Or something.

After trying out a few different schemes, I came up with the following procedure:

  1. Pick two pixels from a random column. Swap them if the upper pixel has a higher hue than the lower pixel.
  2. Pick two pixels from a random row. Swap them if the left pixel has a higher saturation (or brightness) than the right pixel.
  3. Repeat the above steps until the image converges.

The first step takes care of the layering of the colors. The second step adds some structure and makes sure the process converges. (If we just swapped two arbitrary pixels based on hue, the process would not converge. By swapping pixels column-wise and adding the second step, we impose a global ordering on the image.)
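One iteration of the procedure can be sketched in Python (my own sketch of the algorithm described above, not the original Processing code; pixels are (r, g, b) tuples with components in 0..1):

```python
import colorsys
import random

def hue(px):
    return colorsys.rgb_to_hsv(*px)[0]

def saturation(px):
    return colorsys.rgb_to_hsv(*px)[1]

def shuffle_step(img, rng):
    """One iteration: a column-wise hue swap and a row-wise
    saturation swap. img is a list of rows of (r, g, b) tuples."""
    h, w = len(img), len(img[0])
    # Step 1: in a random column, the upper pixel must not have a
    # higher hue than the lower one.
    x = rng.randrange(w)
    y1, y2 = sorted(rng.sample(range(h), 2))
    if hue(img[y1][x]) > hue(img[y2][x]):
        img[y1][x], img[y2][x] = img[y2][x], img[y1][x]
    # Step 2: in a random row, the left pixel must not have a
    # higher saturation than the right one.
    y = rng.randrange(h)
    x1, x2 = sorted(rng.sample(range(w), 2))
    if saturation(img[y][x1]) > saturation(img[y][x2]):
        img[y][x1], img[y][x2] = img[y][x2], img[y][x1]
```

Repeating `shuffle_step` until no more swaps occur gives the layering effect; every step is a swap, so the image always keeps exactly its original pixels.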


The following are some examples of the method (applied to some photos I took while visiting California recently).

San Diego.


Del Mar I.


Californian Desert.


Del Mar II.


And finally, a classic:

Mona Lisa



The Image Reshuffler was implemented in Processing. It was my first try with Processing, and as I expected it was quite easy to use. Personally, I prefer C++ and Qt, but for someone new to programming, Processing would be an obvious choice.

The script is available here: reshuffler.pde.