Folding Space: The Mandelbox Fractal

Over at fractalforums.com another interesting 3D fractal has emerged: the Mandelbox.

It originates from this thread, where it was introduced by Tglad (Tom Lowe). As with the original Mandelbrot set, an iterated function is applied to points in 3D space, and the points that do not diverge are part of the set. The iterated function used for the Mandelbox set has a nice geometric interpretation: it corresponds to repeated folding operations.
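
In code, the folding interpretation is quite compact. Below is a minimal Python sketch of the commonly used form of the iteration (a box fold, a sphere fold, then scale-and-add). The parameter values (scale 2, fold limit 1, sphere radii 0.5 and 1) are the usual defaults circulating on the forums, and the function names are my own, not from the original thread:

```python
def box_fold(v, limit=1.0):
    # Fold each component back towards the box: reflect at the walls.
    return [2.0 * limit - x if x > limit else
            -2.0 * limit - x if x < -limit else x
            for x in v]

def sphere_fold(v, min_r2=0.25, fixed_r2=1.0):
    # Sphere inversion: points inside the inner sphere are scaled out,
    # points between the two radii are inverted.
    r2 = sum(x * x for x in v)
    if r2 < min_r2:
        f = fixed_r2 / min_r2
    elif r2 < fixed_r2:
        f = fixed_r2 / r2
    else:
        f = 1.0
    return [f * x for x in v]

def in_mandelbox(c, scale=2.0, max_iter=100, bailout=1e4):
    # c is part of the set if the iterated point never escapes.
    v = list(c)
    for _ in range(max_iter):
        v = sphere_fold(box_fold(v))
        v = [scale * x + ci for x, ci in zip(v, c)]
        if sum(x * x for x in v) > bailout:
            return False
    return True
```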

In contrast to the organic presence of the Mandelbulbs, the Mandelbox has a very architectural and structural feel to it:


The Mandelbox probably owes its name to the cubic and square patterns that emerge at many levels:


It is also possible to create Julia Set variations:


Juliabox by ‘Jesse’ (click to see the large version of this fantastic image!)

Be sure to check out Tom Lowe’s Mandelbox site for more pictures and some technical background information, including links to a few (Windows only) software implementations.

I tried out the Ultra Fractal extension myself. This was my first encounter with Ultra Fractal, and it took me quite some time to figure out how to set up a Mandelbox render. For others, the following steps may help:

  1. Install Ultra Fractal (there is a free trial version).
  2. Choose ‘Options | Update Public Formulas…’ to get some needed dependencies.
  3. Download David Makin’s MMFwip3D package and install it into the Ultra Fractal formula folder – it is most likely located at “%userprofile%\Documents\Ultra Fractal 5\Formulas”.
  4. In principle, this is all you need. But the MMFwip3D formulas contain a vast number of parameters and settings. To get started, try an existing parameter set: this is a good starting point. To use these settings, simply select the text, copy it to the clipboard, and paste it into an Ultra Fractal fractal window.

The CPU-based implementations are somewhat slow, taking minutes to render even small images – but it probably won’t be long before a GPU-accelerated version appears: Subblue has already posted images of a PixelBender implementation in progress.

Liquid Pixels

A few days ago, I found this little gem on the always inspiring WOWGREAT tumbleblog:


Hiroshi Sugimoto, Tyrrhenian Sea (1994); Pixels sorted by Blue.

It was created by Jordan Tate and Adam Tindale by sorting the pixels of this picture. See their site, Lossless processing, for more pixel shuffling goodness.

Adding some randomness

I liked the concept, so I decided to try something similar. But instead of sorting the pixels, I had this vague idea of somehow stirring the pixels in an image, and letting the pixels settle into layers. Or something.

After trying out a few different schemes, I came up with the following procedure:

  1. Pick two pixels from a random column. Swap them if the upper pixel has a higher hue than the lower pixel.
  2. Pick two pixels from a random row. Swap them if the left pixel has a higher saturation (or brightness) than the right pixel.
  3. Repeat the above steps until the image converges.

The first step takes care of the layering of the colors. The second step adds some structure and makes sure the process converges. (If we just swapped two arbitrary pixels based on hue, the process would not converge. By swapping pixels column-wise and adding the second step, we impose a global ordering on the image.)
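
The actual implementation (linked under Implementation below) is a Processing script; the following is a rough Python equivalent of the procedure, assuming Pillow for image I/O and using a fixed iteration budget in place of a real convergence test:

```python
import colorsys
import random
from PIL import Image

def reshuffle(path, steps=2_000_000):
    img = Image.open(path).convert("RGB")
    px = img.load()
    w, h = img.size

    def hsv(p):
        return colorsys.rgb_to_hsv(p[0] / 255, p[1] / 255, p[2] / 255)

    for _ in range(steps):
        # Step 1: in a random column, swap if the upper pixel
        # has a higher hue than the lower one.
        x = random.randrange(w)
        y1, y2 = sorted(random.sample(range(h), 2))
        if hsv(px[x, y1])[0] > hsv(px[x, y2])[0]:
            px[x, y1], px[x, y2] = px[x, y2], px[x, y1]
        # Step 2: in a random row, swap if the left pixel has a
        # higher saturation than the right one (brightness, index 2,
        # works as well).
        y = random.randrange(h)
        x1, x2 = sorted(random.sample(range(w), 2))
        if hsv(px[x1, y])[1] > hsv(px[x2, y])[1]:
            px[x1, y], px[x2, y] = px[x2, y], px[x1, y]

    img.save("reshuffled.png")
```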

Examples

The following are some examples of the method (applied to some photos I took while visiting California recently).

San Diego.

…

Del Mar I.

…

Californian Desert.

…

Del Mar II.

…


And finally, a classic:

Mona Lisa

…

Implementation

The Image Reshuffler was implemented in Processing. It was my first try with Processing, and as I expected it was quite easy to use. Personally, I prefer C++ and Qt, but for someone new to programming, Processing would be an obvious choice.

The script is available here: reshuffler.pde.

Shader Toy

For some time I’ve been wanting to play around with pixel (fragment) shaders, but I couldn’t find a proper playground.

Then I stumbled upon Shader Toy, by Inigo Quilez (whom I’ve mentioned several times on this blog). A couple of things make Shader Toy stand out:

It runs inside your browser. It uses the emerging WebGL standard, which provides JavaScript bindings for OpenGL (ES) 2.0. WebGL can be used directly inside an HTML canvas element, including support for custom shaders. As Shader Toy demonstrates, this makes it possible to do some very impressive stuff, such as real-time GPU-accelerated raytracing inside an element on a web page.

The examples are great. Shader Toy itself is mostly a thin wrapper around the WebGL functionality; the great thing about it is the example shaders: 2D fractals and demo scene effects, but also complex examples such as the Slisesix 4K demo, raytracing, and 3D fractals like the Quaternion Julia set and the Mandelbulb.

The only problem with WebGL is that it is not supported by the current generation of browsers.

The good news is that the nightly builds of Firefox, Safari (WebKit), and Chromium (Google Chrome) all support it, and are quite easy to install: this is a good place for more information. If you use the Chromium builds, you don’t have to worry about messing up your existing browser configuration – the nightly builds are standalone versions and can be run without installation.

There are lots of complex shader tools out there – for instance, NVIDIA’s FX Composer, AMD’s RenderMonkey, TyphoonLabs’ OpenGL Shader Designer, and Lumina – but Shader Toy makes it very easy to get started with shaders. And it provides a rare insight into how those amazing 4K demos were made.

Mandelbulb Implementations

Several implementations have appeared since the Mandelbulb surfaced a couple of months ago.

The first public GPU implementation I know of was created by ‘cbuchner1’. It is based on a sample from NVIDIA’s OptiX SDK, and features anaglyphic 3D, ambient occlusion, Phong shading, reflections, and environment maps. It can be downloaded here (Windows only; requires a forum signup).


Example made with cbuchner1’s implementation

Very interestingly, this binary runs on my laptop’s modest GeForce 8400M. I am a bit puzzled about this – NVIDIA states that the OptiX SDK requires a Quadro or a Tesla card, and I am not able to run the Julia OptiX demo that cbuchner1’s app is derived from.

Subblue has also created a Mandelbulb implementation, released as a Pixel Bender script and a Quartz Composer plugin. A number of interesting customizations make this my favorite choice: it is possible to explore negative and fractional powers, switch to Julia sets, and fine-tune the lighting options. The only drawback is that Pixel Bender does not make it possible to directly rotate, zoom, and translate the camera – you have to rely on sliders for that.


Example created by Subblue.

Iñigo Quílez has also created a GPU implementation, but unfortunately he has not released any code yet. A couple of videos are available on YouTube, though: Part 1, Part 2, Part 3.


Quilez also discovered this intimate connection between the Shroud of Turin and the Mandelbulb.

The MathFuncRenderer also has a Mandelbulb implementation. I ran into a few quirks with this one – I had to install OpenAL, and the UI was quite unresponsive, but this may be due to my graphics card.

Another very interesting implementation is the GigaVoxels Mandelbulb: whereas most implementations cast rays and use a distance estimator to speed up the ray marching, GigaVoxels uses voxels stored in an octree, which is populated on the fly.
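
The distance-estimator approach deserves a quick illustration. The idea (often called sphere tracing) is that if DE(p) is a lower bound on the distance from p to the surface, the ray can safely step that far without overshooting. A minimal Python sketch of the marching loop, with DE standing in for any distance estimator:

```python
def march(origin, direction, DE, max_steps=200, eps=1e-4, max_dist=20.0):
    # Sphere tracing: step along the ray by the estimated distance,
    # which by construction cannot overshoot the surface.
    t = 0.0
    for _ in range(max_steps):
        p = [o + t * d for o, d in zip(origin, direction)]
        dist = DE(p)
        if dist < eps:
            return t          # close enough: report a hit at depth t
        t += dist             # safe step: no surface within dist
        if t > max_dist:
            break
    return None               # ray escaped: background
```

The big steps far from the surface are what make this so much faster than marching with a fixed step size.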

For other implementations, keep an eye on the Fractal Forums Mandelbulb Implementation category.

Generative Art 2009 Conference (Milano)

This week (15-17 December) I attended the Generative Art 2009 conference in Milano, Italy. It is a conference with quite a broad and diverse focus, attended by both artists and academics from many different fields. And, as far as I know, it is the only conference on Generative Art.

I do not think of myself as an artist, and neither do I work in academia. So it was not at all obvious for me to attend the conference. But when I got an email from Celestino Soddu (the chairman of the conference) asking me to consider participating, I became curious, since the conference revolves around many of the concepts that interest me: genetic algorithms, swarms and flocking, multi-agent systems, sound synthesis, architecture, digital photography, etc…

So I went, and gave a short introduction to Structure Synth and its history (Chomsky’s formal grammars, Chris Coyne’s context-free design grammars, and the relation to Lindenmayer systems).

The paper is available here (PDF):
Structural Synthesis using a Context-Free Grammar Approach.


Structure Synth image.

I will start out by saying that I enjoyed the conference a lot. People were very friendly and interesting, and I had a lot of good discussions. And I think the diverse mixture of different cultures, nationalities, fields and practices is exciting – even though it also meant that some of the presentations became too tangential to my interests – and some were even nearly incomprehensible to me.

Some of my personal highlights of the conference were Arne Eigenfeldt’s “In Equilibrio”, a multi-agent music system, Daniel Bisig and Tatsuo Unemi’s “Swarms on Stage – Swarm Simulations for Dance Performance”, and Philip Galanter’s theoretical essay on “Fitness and Complexification in Evolutionary Art” – even though I do not agree with Philip here: I think the idea of establishing an aesthetic fitness function, which could be used by genetic algorithms, is a futile endeavor. The AI community seems to have made little progress with mimicking human behavior over the last forty years (e.g. see my conversation with last year’s Loebner Prize contest winners), and surely aesthetic judgments require a lot beyond what is needed to pass a simple Turing test.


Sculpture (found somewhere in Milano).

Another highlight was Celestino Soddu’s own introduction – it contained a slideshow with an enormous number of his own generated architectural works, and I think it demonstrated an impressive and consistent approach to generative architecture. But it also made me wonder if we will ever see a skyscraper created by a generative system.

As a final note, I also think the academic community should try to establish some sort of communication with the vibrant generative art internet community and demo scene practitioners. I am not sure exactly how this could be accomplished, but many interesting projects seem to emerge from these settings.

Assorted Links

Generative Music Software

Adam M. Smith has begun working on cfml – a context-free music language. It is a Context-Free Design Grammar – for music. I’m very interested in how this develops.


A graphical representation of cfml output (original here)

Cfml is implemented as an Impromptu library. Impromptu is a live coding environment based on the Scheme language, and it has existed since 2005. Andrew Sorensen, the developer of Impromptu, has created some of the most impressive examples of live coding I have seen. In particular, the last example, inspired by Keith Jarrett’s Sun Bear Concerts, is really impressive. (I might be slightly biased here, since I believe that Jarrett’s solo piano concerts – especially the Köln Concert and the Sun Bear Concerts – rank among the best music ever made).

Finally, Supercollider 140 is a selection of audio pieces all created in Supercollider in 140 characters or less. An interesting example of using restrictions to spur creativity. Another example is the 200 char Processing sketch contest.

Free Indie Game Development

This month also saw the release of the Unreal Development Kit, basically a version of Unreal Engine 3 that is free for non-commercial use. This is great news for amateur game developers, but for me, the big question was whether this could be used as a powerful platform for generative art or live demos. I downloaded the kit and played around with it for a while; the 3D engine is stunning, but UDK seems very geared towards graphical development (I certainly do not want to draw my programs, and the built-in UnrealScript does not impress me either).

In related news, the basic version of Unity 2.6 is now also free. The main focus of Unity is also game development, but from a generative art / live demo perspective it holds greater promise. Unity offers an advanced graphics engine with user-scriptable shaders, an integrated PhysX physics engine, and 3D audio.

Unity’s development architecture is also very solid: scripts are written in (JIT-compiled) JavaScript, and components can be written in C# (using Mono, the open-source .NET implementation). Using a dynamic scripting language such as JavaScript to control a more rigid body of classes written in a stricter, statically typed environment, such as C#, is a good way to manage complex software. All Mozilla software – including Firefox – is built using this model (JavaScript + XPCOM C++ components), and newer platforms, such as Microsoft’s Silverlight, also use it (JavaScript + C# components).

I made a few tests with Unity, and it is simple to control and instance even pretty complex structures. I considered writing a simple Structure Synth viewer using Unity, but was unfortunately put off a bit when I discovered that Screen Space Ambient Occlusion and full-screen post-processing effects are not part of the free basic edition. The iPhone version of the Unity engine is not free either, but that is probably to be expected.

It will be interesting to see if Unity will be picked up by the Generative Art community.

SIGGRAPH Asia

Finally, two papers presented at SIGGRAPH Asia 2009 should be noted:

Shadow Art creates objects which cast three different shadows.

Sketch2Photo creates realistic photo-montages from freehand sketches annotated with text labels.

Mandelbulb

A lot of sites have reported that a new, interesting 3D version of the Mandelbrot set has been discovered. The Mandelbulb has aesthetic qualities similar to Quaternion Julia sets, but seems more diverse and better suited for exploration.


“Cave of Lost Secrets” from Skytopia.

Skytopia has a great overview complete with many stunning images.

A good way to view the basic structure is this 56 Megapixel render from Skytopia (using the Seadragon viewer – requires Silverlight):


As of now, I do not know of any released software capable of generating Mandelbulbs, but it probably won’t be long:

Recent posts by Iñigo Quílez (who produced the Kindernoiser Quaternion Julia set GPU renderer) indicate that he is very close to completing a fast GPU implementation. These posts also include the basic source code, which I believe should make it possible to port it to other targets, for instance Pixel Bender. Apparently Quílez has cooked up a distance estimator, and a fake ambient occlusion scheme (based on orbit traps) for these Mandelbulbs, which sounds very promising.
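
To give a flavor of what such a distance estimator looks like, here is a hedged Python sketch of the power-8 Mandelbulb iteration in spherical coordinates, carrying along a running scalar derivative and returning the commonly used estimate 0.5 · r · ln(r) / dr. This follows the formulas posted on the forums, not Quílez’s actual code:

```python
import math

def mandelbulb_DE(c, power=8, max_iter=64, bailout=2.0):
    # "Triplex" power: raise the radius to the 8th power and
    # multiply the two spherical angles by 8, then add c.
    x, y, z = c
    dr, r = 1.0, 0.0
    for _ in range(max_iter):
        r = math.sqrt(x * x + y * y + z * z)
        if r > bailout:
            break
        if r == 0.0:
            return 0.0
        theta = math.acos(max(-1.0, min(1.0, z / r)))
        phi = math.atan2(y, x)
        dr = power * r ** (power - 1) * dr + 1.0  # running derivative
        rp = r ** power
        x = rp * math.sin(power * theta) * math.cos(power * phi) + c[0]
        y = rp * math.sin(power * theta) * math.sin(power * phi) + c[1]
        z = rp * math.cos(power * theta) + c[2]
    # Points that never escape get a non-positive value (treated as inside).
    return 0.5 * math.log(r) * r / dr
```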

Quaternion Julia sets and GPU computation

Subblue has released another impressive Pixel Bender plugin, this time a Quaternion Julia set renderer.

The plugin can be downloaded here.

Quaternions are extensions of the complex numbers with four independent components. Quaternion Julia sets still explore the convergence of the system z ← z² + c, but this time z and c are allowed to be quaternion-valued numbers. Since quaternions are essentially four-dimensional objects, only a slice (the intersection of the set with a plane) of the quaternion Julia sets is shown.
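
As a sketch of the iteration, here is a minimal Python version of my own, representing a quaternion as a plain 4-tuple (w, x, y, z); the constant c below is just an illustrative choice, not a specific set from the plugin:

```python
def quat_mul(a, b):
    # Hamilton product of two quaternions (w, x, y, z).
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw * bw - ax * bx - ay * by - az * bz,
            aw * bx + ax * bw + ay * bz - az * by,
            aw * by - ax * bz + ay * bw + az * bx,
            aw * bz + ax * by - ay * bx + az * bw)

def in_julia(z, c, max_iter=100, bailout=4.0):
    # z is in the set if z <- z^2 + c stays bounded.
    for _ in range(max_iter):
        z = tuple(zi + ci for zi, ci in zip(quat_mul(z, z), c))
        if sum(v * v for v in z) > bailout:
            return False
    return True

# A 3D slice is obtained by fixing one component, e.g. the last one:
c = (-0.2, 0.6, 0.2, 0.2)   # illustrative constant
print(in_julia((0.3, 0.1, 0.0, 0.0), c))
```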

Quaternion Julia sets would be very time consuming to render if it weren’t for a very elegant (and surprising) formula, the distance estimator, which for any given point gives you the distance to the closest point on the Julia set. The distance estimator method was first described in Ray Tracing Deterministic 3-D Fractals (1989).
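
The commonly quoted form of the estimate is d ≈ |zₙ| ln|zₙ| / (2 |z′ₙ|), where the derivative is iterated alongside z as z′ ← 2·z·z′ (starting from z′ = 1). Up to a constant factor this is a lower bound on the true distance to the set, which is exactly what makes the safe stepping of sphere tracing possible.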

My first encounter with Quaternion Julia sets was Inigo Quilez’ amazing Kindernoiser demo, which packed a complete renderer with ambient occlusion into a 4K executable. It also used the distance estimator method and GPU-based acceleration. If you haven’t visited Quilez’ site, be sure to do so. It is filled with impressive demos and well-written tech articles.

Transfigurations (another Quaternion Julia set demo) from Inigo Quilez on Vimeo.

In the 1989 Quaternion Julia set paper, the authors produced their images on an AT&T Pixel Machine, with 64 CPUs each running at 10 megaFLOPS (640 megaFLOPS in total). I suspect that this was an insanely expensive machine at the time. For comparison, the relatively modest NVIDIA GeForce 8400M GS in my laptop has a theoretical maximum processing rate of 38 gigaFLOPS, or approximately 60 times that of the Pixel Machine (38 / 0.64 ≈ 60). A one-megapixel image took the authors of the 1989 paper an hour to generate, whereas Subblue’s GPU implementation takes roughly one second on my laptop (making it much more efficient than what would have been expected from the FLOPS ratio).

GPU Acceleration and the future

These days there is a lot of talk about using GPUs for general purpose programming. The first attempts to use GPUs to speed up general calculations relied on tricks such as using pixel shaders to perform calculations on data stored in texture memory, but since then several APIs have been introduced to make it easier to program the GPUs.

NVIDIA’s CUDA is currently by far the most popular and best documented API, but it is NVIDIA-only. Their gallery of applications demonstrates the diversity of how GPU calculations can be used. AMD/ATI has their competing Stream API (formerly called Close To Metal), but don’t bet on this one – I’m pretty sure it is almost abandoned already. Update: as pointed out in the comments, the new ATI Stream 2.0 SDK will include ATI’s OpenCL implementation, which as far as I can tell is here to stay. What I meant to say was that I don’t think ATI’s earlier attempts at creating a GPU programming interface (including the Brook+ language) are likely to catch on.

Far more important is the emerging OpenCL standard (which is being promoted in Apple’s Snow Leopard, and is likely to become a de facto standard). Like OpenGL, it is managed by the Khronos Group. OpenCL was originally developed by Apple, and they still own the trademark, which is probably why Microsoft has chosen to promote their own API, DirectCompute. My guess is that CUDA and Brook+ will slowly fade away, and that OpenCL and DirectCompute will come to co-exist in much the same way as OpenGL and Direct3D do.

For cross-platform development, OpenCL is therefore the most interesting choice, and I’m hoping to see NVIDIA and AMD/ATI release public drivers for Windows as soon as possible (as of now, they are in closed beta).

GPU acceleration could be very interesting from a generative art perspective, since it suddenly becomes possible to perform advanced visualization, such as ray-tracing, in real-time.

A final comment: a few days ago I found this quaternion Julia set GPU implementation for the iPhone 3GS, using OpenGL ES 2.0 programmable shaders. I think this demonstrates the sophistication of the iPhone hardware and software platform – both that a hand-held device even has a programmable GPU, and that the SDK is flexible enough to make it accessible.