# Syntopia Blog Update

It has not been possible to post comments on my blog for some months. Apparently, my reCAPTCHA plugin was broken (amazingly, spam comments still made their way into the moderation queue).

This should be fixed now.

I’m also on Twitter now: @SyntopiaDK, where I’ll post links and news related to generative systems, 3D fractals, or whatever pops up.

Finally, if you are near Stockholm, some of my images are on display at a small gallery (from July 9th to September 11th): Kungstensgatan 27.

### Pixel Bender 3D

Adobe has announced Pixel Bender 3D:

If I understand it correctly, it is a new API for Flash, and not a direct extension of the Pixel Bender Toolkit as such. So what does it do?

As far as I can tell, it is simply a way to write vertex and fragment shaders for Flash. While this is certainly nice, I think Adobe is playing catch-up with HTML5 here – many browsers already support custom shaders through WebGL (in their development builds, at least). Or compare it to a modern 3D browser plugin such as Unity, with deferred lighting, depth-of-field, and occlusion culling…

And do we really need another shader language dialect?

### Flash raytracer

Kris Temmerman (Neuro Productions) has created a raytracer in Flash, complete with ambient occlusion and depth-of-field:

Kris has also produced several other impressive works in Flash:

### Chaos Constructions

Quite and Orange won the 4K demo competition at Chaos Constructions 2010 with the very impressive ‘CDAK’ demo:

### Ex-Silico Fractals

This YouTube video shows how to produce fractals without a computer. I’ve seen video feedback before, but this is a clever setup using multiple projectors to create iterated function systems.

### Vimeo Motion Graphics Award

‘Triangle’ by Onur Senturk won the Vimeo Motion Graphics Award. The specular black material looks good. I wonder if I could create something similar in Structure Synth’s internal raytracer?

# Liquid Pixels

A few days ago, I found this little gem on the always inspiring WOWGREAT tumbleblog:

Hiroshi Sugimoto, Tyrrhenian Sea (1994); Pixels sorted by Blue.

It was created by Jordan Tate and Adam Tindale by sorting the pixels of this picture. See their site, Lossless processing, for more pixel shuffling goodness.

I liked the concept, so I decided to try something similar. But instead of sorting the pixels, I had this vague idea of somehow stirring the pixels in an image, and letting the pixels settle into layers. Or something.

After trying out a few different schemes, I came up with the following procedure:

1. Pick two pixels from a random column. Swap them if the upper pixel has a higher hue than the lower pixel.
2. Pick two pixels from a random row. Swap them if the left pixel has a higher saturation (or brightness) than the right pixel.
3. Repeat the above steps until the image converges.

The first step takes care of the layering of the colors. The second step adds some structure and makes sure the process converges. (If we just swapped two arbitrary pixels based on hue, the process would not converge. By swapping pixels column-wise and adding the second step, we impose a global ordering on the image.)
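The procedure above can be sketched in a few lines of Python (the original was written in Processing; the function names are my own, and the "until the image converges" step is simplified to a fixed iteration count):

```python
import colorsys
import random

def hue(rgb):
    # hue in [0, 1) for an (r, g, b) tuple with 0-255 components
    r, g, b = (c / 255.0 for c in rgb)
    return colorsys.rgb_to_hsv(r, g, b)[0]

def saturation(rgb):
    r, g, b = (c / 255.0 for c in rgb)
    return colorsys.rgb_to_hsv(r, g, b)[1]

def reshuffle(pixels, iterations=100000):
    """Stir an image by pairwise swaps: column-wise by hue,
    row-wise by saturation. `pixels` is a list of rows, each a
    list of (r, g, b) tuples; it is modified in place."""
    height = len(pixels)
    width = len(pixels[0])
    for _ in range(iterations):
        # Step 1: random column; swap if the upper pixel has higher hue.
        x = random.randrange(width)
        y1, y2 = sorted(random.sample(range(height), 2))
        if hue(pixels[y1][x]) > hue(pixels[y2][x]):
            pixels[y1][x], pixels[y2][x] = pixels[y2][x], pixels[y1][x]
        # Step 2: random row; swap if the left pixel has higher saturation.
        y = random.randrange(height)
        x1, x2 = sorted(random.sample(range(width), 2))
        if saturation(pixels[y][x1]) > saturation(pixels[y][x2]):
            pixels[y][x1], pixels[y][x2] = pixels[y][x2], pixels[y][x1]
    return pixels
```

Since the procedure only ever swaps pixels, the multiset of colors in the image is preserved exactly – only their positions change.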

### Examples

The following are some examples of the method (applied to some photos I took while visiting California recently).

San Diego.

Del Mar I.

Californian Desert.

Del Mar II.

And finally, a classic:

Mona Lisa

### Implementation

The Image Reshuffler was implemented in Processing. It was my first try with Processing, and as I expected it was quite easy to use. Personally, I prefer C++ and Qt, but for someone new to programming, Processing would be an obvious choice.

The script is available here: reshuffler.pde.

# Generative Art Links & Resources

I’ve started collecting links for Generative Art software, blogs, papers, websites and related stuff here:

# Quaternion Julia sets and GPU computation.

Subblue has released another impressive Pixel Bender plugin, this time a Quaternion Julia set renderer.

Quaternions are extensions of the complex numbers with four independent components. Quaternion Julia sets still explore the convergence of the system z ← z² + c, but this time z and c are allowed to be quaternion-valued numbers. Since quaternions are essentially four-dimensional objects, only a slice (the intersection of the set with a plane) of the quaternion Julia sets is shown.
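For a quaternion q = a + v, where v = (b, c, d) is the vector part, squaring works out to q² = a² − |v|² + 2av, so the basic escape-time iteration takes only a few lines. A minimal Python sketch (the function names and the bailout radius are my own choices, not anything from Subblue's plugin):

```python
def quat_square(q):
    # q = (a, b, c, d) represents a + b*i + c*j + d*k;
    # q^2 = a^2 - |v|^2 + 2*a*v, where v = (b, c, d)
    a, b, c, d = q
    return (a*a - b*b - c*c - d*d, 2*a*b, 2*a*c, 2*a*d)

def quat_add(p, q):
    return tuple(x + y for x, y in zip(p, q))

def quat_norm(q):
    return sum(x*x for x in q) ** 0.5

def escapes(z, c, max_iter=100, bailout=4.0):
    """Iterate z <- z^2 + c over the quaternions; return the
    iteration count at which |z| exceeds the bailout radius,
    or max_iter if the orbit stays bounded."""
    for n in range(max_iter):
        z = quat_add(quat_square(z), c)
        if quat_norm(z) > bailout:
            return n
    return max_iter
```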

Quaternion Julia sets would be very time-consuming to render were it not for a very elegant (and surprising) formula, the distance estimator, which for any given point estimates the distance to the closest point on the Julia set. The distance estimator method was first described in Ray tracing deterministic 3-D fractals (1989).
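A commonly used form of the estimate is d ≈ |z|·log|z| / (2·|z′|), where z′ is the running derivative, updated alongside z as z′ ← 2·z·z′. A sketch in Python (my own simplified reading of the method from the 1989 paper, not production renderer code – the bailout and iteration counts are arbitrary):

```python
import math

def qmul(p, q):
    # full (non-commutative) quaternion product
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def qnorm(q):
    return math.sqrt(sum(x*x for x in q))

def distance_estimate(p, c, max_iter=100, bailout=1e4):
    """Estimate the distance from point p to the Julia set of
    z <- z^2 + c, via d = |z| * log|z| / (2 * |z'|), with the
    running derivative z' <- 2 * z * z' (starting from z' = 1)."""
    z = p
    dz = (1.0, 0.0, 0.0, 0.0)
    for _ in range(max_iter):
        dz = tuple(2 * x for x in qmul(z, dz))
        z = tuple(zq + cq for zq, cq in zip(qmul(z, z), c))
        if qnorm(z) > bailout:
            break
    r = qnorm(z)
    if r <= 1.0:
        return 0.0  # orbit stayed bounded: (numerically) inside the set
    return r * math.log(r) / (2.0 * qnorm(dz))
```

The payoff is that a ray marcher can safely step forward by the estimated distance at each point, instead of creeping along in tiny fixed increments – which is what makes these sets renderable at interactive rates.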

My first encounter with Quaternion Julia sets was Inigo Quilez’ amazing Kindernoiser demo which packed a complete renderer with ambient occlusion into a 4K executable. It also used the distance estimator method and GPU based acceleration. If you haven’t visited Quilez’ site be sure to do so. It is filled with impressive demos, and well-written tech articles.

Transfigurations (another Quaternion Julia set demo) from Inigo Quilez on Vimeo.

In the 1989 Quaternion Julia set paper, the authors produced their images on an AT&T Pixel Machine, with 64 CPUs each running at 10 megaFLOPS. I suspect this was an insanely expensive machine at the time. For comparison, the relatively modest NVIDIA GeForce 8400M GS in my laptop has a theoretical maximum processing rate of 38 gigaFLOPS, or approximately 60 times that of the Pixel Machine. A one-megapixel image took the authors of the 1989 paper an hour to generate, whereas Subblue’s GPU implementation takes about a second on my laptop (making it much more efficient than the FLOPS ratio alone would suggest).

## GPU Acceleration and the future.

These days there is a lot of talk about using GPUs for general-purpose programming. The first attempts to use GPUs to speed up general calculations relied on tricks such as using pixel shaders to perform calculations on data stored in texture memory, but since then several APIs have been introduced to make it easier to program the GPUs.

NVIDIA’s CUDA is currently by far the most popular and best-documented API, but it is NVIDIA-only. Their gallery of applications demonstrates the diversity of uses for GPU calculation. AMD/ATI has its competing Stream API (formerly called Close To Metal), but don’t bet on this one – I’m pretty sure it is almost abandoned already. Update: as pointed out in the comments, the new ATI Stream 2.0 SDK will include ATI’s OpenCL implementation, which as far as I can tell is here to stay. What I meant to say was that I don’t think ATI’s earlier attempts at creating a GPU programming interface (including the Brook+ language) are likely to catch on.

Far more important is the emerging OpenCL standard (which is being promoted in Apple’s Snow Leopard, and is likely to become a de facto standard). Like OpenGL, it is managed by the Khronos Group. OpenCL was originally developed by Apple, and they still own the trademark, which is probably why Microsoft has chosen to promote its own API, DirectCompute. My guess is that CUDA and Brook+ will slowly fade away, and that OpenCL and DirectCompute will come to co-exist in much the same way as OpenGL and Direct3D do.

For cross-platform development, OpenCL is therefore the most interesting choice, and I’m hoping to see NVIDIA and AMD/ATI release public Windows drivers as soon as possible (as of now, they are in closed beta).

GPU acceleration could be very interesting from a generative art perspective, since it suddenly becomes possible to perform advanced visualization, such as ray-tracing, in real-time.

A final comment: a few days ago I found this quaternion Julia set GPU implementation for the iPhone 3GS using OpenGL ES 2.0 programmable shaders. I think this demonstrates the sophistication of the iPhone hardware and software platform – both that a hand-held device even has a programmable GPU, but also that the SDK is flexible enough to make it possible to access it.

# Modul (2009)

Maxim Zhestkov is a 23-year-old Russian designer. Modul (2009) is his diploma work for his Master of Fine Arts degree.

Also check out 005 by Zhestkov.

# Fractal Explorer Plugin

In July Subblue released another Pixel Bender plugin, called the Fractal Explorer Plugin – for exploring Julia sets and fractal orbit maps. I didn’t get around to trying it out until recently, but it is really a great tool for exploring fractals.

Most people have probably seen examples of Julia and Mandelbrot sets – where the convergence properties of the series generated by repeated application of a complex-valued function are investigated.

The most well-known example is the iteration of the function z ← z² + c. The Mandelbrot set is created by plotting the convergence rate of this function while c varies over the complex plane. Likewise, a Julia set is created for a fixed c while varying the initial z-value over the complex plane.
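The distinction is easy to make concrete in code. A minimal escape-time sketch in Python (the function names and bailout value are my own; real renderers do this per pixel on the GPU):

```python
def escape_time(z, c, max_iter=100):
    """Count iterations of z <- z^2 + c before |z| exceeds 2.
    Points that never escape are assigned max_iter."""
    for n in range(max_iter):
        if abs(z) > 2.0:
            return n
        z = z * z + c
    return max_iter

def mandelbrot(c, max_iter=100):
    # Mandelbrot set: fix z0 = 0 and vary c over the plane.
    return escape_time(0j, c, max_iter)

def julia(z0, c, max_iter=100):
    # Julia set: fix c and vary the initial z over the plane.
    return escape_time(z0, c, max_iter)
```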

Glynn1 by Subblue (a Julia set where an exponent of 1.5 is used).

Where ordinary Julia and Mandelbrot sets only take into account whether the series created by the iterated function tends towards infinity (diverges) or not, fractal orbits instead use another image as input and check whether the complex number series generated by the function hits a (non-transparent) pixel in the source image. This allows for some very fascinating ‘fractalization’ of existing images.
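The idea could be sketched like this (my own simplified reading of the technique, not Subblue's actual implementation – `image_alpha` and the coordinate mapping are assumptions):

```python
def orbit_maps_to_image(z0, c, image_alpha, width, height,
                        scale=2.0, max_iter=50):
    """Iterate z <- z^2 + c and return True as soon as the orbit
    lands on a non-transparent pixel of the source image.
    `image_alpha` is a 2D list of booleans (True = opaque); the
    orbit point is mapped from [-scale, scale]^2 into pixel
    coordinates."""
    z = z0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2 * scale:
            return False  # orbit escaped without hitting the image
        # map the complex plane onto image pixel coordinates
        px = int((z.real + scale) / (2 * scale) * (width - 1))
        py = int((z.imag + scale) / (2 * scale) * (height - 1))
        if 0 <= px < width and 0 <= py < height and image_alpha[py][px]:
            return True
    return False
```

Evaluating this per pixel (varying z0, or c, over the plane) and coloring the hits produces the ‘fractalized’ version of the source image.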

A fractal orbit showing a highly non-linear transformation of a Mondrian picture.

Subblue suggests starting out with Ernst Haeckel’s beautiful illustrations from the book Art Forms in Nature, and he has put up a small gallery with some great examples:

An example of an orbit mapped Ernst Haeckel image.

To try out Subblue’s filter, download the Pixel Bender Toolkit and load his kernel filter and an input image of your choice. It is necessary to uncheck the “Build | Turn On Flash Player Warnings and Errors” menu item in order to start the plugin. On my computer, I also often find that the Pixel Bender Toolkit is unable to detect and initialize my GPU – it sometimes helps to close other programs and restart the application. The filter executes extremely fast on the GPU – often at more than 100 frames per second – making it easy to interactively explore and discover the fractals.

As a final note, I implemented a fractal drawing routine myself in Structure Synth (version 1.0) just for fun. It is implemented as a hidden easter egg, and not documented at all, but the code below shows an example of how to invoke it:

```
#easter Size: 800x800 MaxIter: 150 Term: (-0.2,0) Term: 1*Z^1.5 BreakOut: 2 View: (-0.0,0.2) -> (0.7,0.9)
```

Arguably, this code is not very optimized (it is possible to add an unlimited number of terms, making the function evaluation somewhat slow), but it still takes seconds to calculate an image, making it more than a hundred times slower than the Pixel Bender GPU solution.

# Flickr Findings: Human Chaos Project

Markus Mooslechner has created some fascinating visualizations of electrocardiograms in a project called ‘Human Chaos’.

The pictures depict the electrical activity of the heart at different parts of the lifecycle, spanning from pre-birth to death.

See more examples at his Flickr stream.

# Metamorphosis (Processing)

Impressive Processing animations made by Glenn Marshall:

In order to see these at the best quality, go to Full Screen in the Vimeo player, make sure HD is on, and that Scaling is off.

# Generative Art in 4KB

Once again I’m extremely impressed by what shows up on the demoscene.

Slisesix – Generative Art in 4K

Inigo Quilez managed to create the following picture within a 4KB executable. In fact he managed to fit a renderer and the generative code for both the scene and the textures in only 3900 bytes.