# Rendering 3D fractals without a distance estimator

I have written a lot about distance estimated 3D fractals, and while Distance Estimation is a fast and elegant technique, it is not always possible to derive a distance estimate for a particular system.

So, how do you render a fractal if the only knowledge you have is whether a given point belongs to the set or not? Or, in other words, how much can you extract if all you have is a black-box function of the form:
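Such a function is just a boolean membership test. For illustration (in Python rather than the GLSL used by Fragmentarium), a black-box test for a Mandelbulb-style escape-time system might look like this; the power, iteration count, and bailout are arbitrary choices, not taken from the original script:

```python
import math

def inside(p, power=8, max_iter=24, bailout=2.0):
    """Escape-time membership test for a Mandelbulb-style system (illustrative).
    Returns True if the orbit of p never escapes within max_iter iterations."""
    cx, cy, cz = p
    x, y, z = p
    for _ in range(max_iter):
        r = math.sqrt(x * x + y * y + z * z)
        if r > bailout:
            return False          # orbit escaped: the point is outside the set
        if r == 0.0:
            x, y, z = cx, cy, cz  # z^power is zero, so the next iterate is just c
            continue
        theta = math.acos(z / r) * power
        phi = math.atan2(y, x) * power
        rp = r ** power
        x = rp * math.sin(theta) * math.cos(phi) + cx
        y = rp * math.sin(theta) * math.sin(phi) + cy
        z = rp * math.cos(theta) + cz
    return True                   # never escaped: assume inside
```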

I decided to try out some simple brute-force methods to see how they would compare to the DE methods. Contrary to my expectations, it turned out that you can actually get reasonable results without a DE.

First, a couple of disclaimers: brute-force methods cannot compete with distance estimators in terms of speed. They will typically be an order of magnitude slower. And if you do have more information available, you should always use it: for instance, even if you can’t find a distance estimator for a given escape-time fractal, the escape length contains information that can be used to speed up the rendering or create a surface normal.

The method I used is neither novel nor profound: I simply sample random points along the camera ray for each pixel. Whenever a hit is found, the sampling proceeds only on the interval between the camera and the hit point (since we are only interested in the closest intersection), e.g. something like this:
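A minimal sketch of this sampler (Python, illustrative; the sample count and the helper name `trace` are arbitrary):

```python
import random

def trace(origin, direction, inside, near, far, samples=256):
    """Sample random points on the ray segment [near, far]; on each hit,
    shrink the far bound, since only the closest intersection matters."""
    hit = None
    for _ in range(samples):
        t = random.uniform(near, far)
        p = tuple(o + t * d for o, d in zip(origin, direction))
        if inside(p):
            far = t   # restrict further sampling to [near, t]
            hit = t
    return hit        # distance to the closest hit found, or None
```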

(The Near and Far distances are used to restrict the sample space and speed up rendering.)

There are different ways to choose the samples. The simplest is to sample uniformly (as in the example above), but I found that a stratified approach, where the camera ray segment is divided into equal pieces and a sample is chosen from each piece, works better. I think the sampling scheme could be improved further: in particular, once you have found a hit, you should probably bias the sampling towards the hit point to make convergence faster. Since I use a progressive (double buffered) approach in Fragmentarium, it is also possible to read the pixel depths of adjacent pixels, which could probably be used as well.
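The stratified variant can be sketched like this (Python, illustrative; the number of strata is arbitrary). Because the strata are visited front to back, the first hit is already within one stratum length of the nearest intersection:

```python
import random

def trace_stratified(origin, direction, inside, near, far, strata=64):
    """One jittered sample per equal-length piece of [near, far], front to back."""
    step = (far - near) / strata
    for i in range(strata):
        t = near + (i + random.random()) * step
        p = tuple(o + t * d for o, d in zip(origin, direction))
        if inside(p):
            return t   # first hit found; later strata are farther away
    return None
```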

Now, after sampling the camera rays you end up with a depth map, like this:

(Be sure to render to a texture with 32-bit floats – an 8-bit buffer will cause quantization artifacts.)

For distance estimated rendering, you can use the gradient of the distance estimator to obtain the surface normal. Unfortunately, this is not an option here. We can, however, calculate a screen space surface normal based on the depths of adjacent pixels, and transform this normal back into world space:
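As a sketch of the idea (Python, operating on a plain depth map rather than in a shader): take finite differences of neighboring depths, treat them as the slope of the surface, and normalize. The pixel spacing is assumed to be 1, and a real implementation would also account for the camera projection:

```python
import math

def screen_normal(depth, x, y):
    """Screen space normal from central differences of a depth map.
    The surface z = depth(x, y) has normal proportional to (-dz/dx, -dz/dy, 1)."""
    dzdx = (depth[y][x + 1] - depth[y][x - 1]) * 0.5
    dzdy = (depth[y + 1][x] - depth[y - 1][x]) * 0.5
    nx, ny, nz = -dzdx, -dzdy, 1.0
    length = math.sqrt(nx * nx + ny * ny + nz * nz)
    return (nx / length, ny / length, nz / length)
```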

(Update: I found out that GLSL supports finite difference derivatives through the dFdx function, which made the code above much simpler.)

Now we can use a standard lighting scheme, like Phong shading. This really brings a lot of detail to the image:
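For reference, a minimal Phong evaluation in Python (the coefficient values here are arbitrary; all direction vectors are assumed to be unit length and pointing away from the surface):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def phong_intensity(n, l, v, ka=0.1, kd=0.7, ks=0.2, shininess=32.0):
    """Classic Phong shading: ambient + diffuse + specular highlight."""
    ndotl = dot(n, l)
    if ndotl <= 0.0:
        return ka                 # light is behind the surface: ambient only
    # reflect the light direction about the normal
    r = tuple(2.0 * ndotl * ni - li for ni, li in zip(n, l))
    rdotv = max(0.0, dot(r, v))
    return ka + kd * ndotl + ks * rdotv ** shininess
```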

In order to improve the depth perception, it is possible to apply a screen space ambient occlusion scheme. Recently there was a very nice tutorial on SSAO on devmaster, but I was too lazy to try it out. Instead I opted for the simplest method I could think of: simply sample some pixels in a neighborhood, and count how many of them are closer to the camera than the center pixel.

This is how this naive ambient occlusion scheme works:
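In Python, on a plain 2D depth map, the scheme can be sketched as follows (the neighborhood radius is an arbitrary choice):

```python
def naive_ao(depth, x, y, radius=3):
    """Fraction of neighborhood pixels NOT closer to the camera than (x, y).
    Returns 1.0 for fully lit, 0.0 for fully occluded."""
    center = depth[y][x]
    occluders, total = 0, 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dx == 0 and dy == 0:
                continue
            yy, xx = y + dy, x + dx
            if 0 <= yy < len(depth) and 0 <= xx < len(depth[0]):
                total += 1
                if depth[yy][xx] < center:
                    occluders += 1
    return 1.0 - occluders / total
```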

(Notice that for pixels with no hits, I’ve chosen to lighten, rather than darken them. This creates an outer glow effect.)

Now combined with the Phong shading we get:

I think it is quite striking how much detail you can infer simply from a depth map! In this case I didn’t color the fractal, but nothing prevents you from assigning a calculated color: the depth information only occupies the alpha channel, leaving the RGB channels free.

Here is another example (Aexion’s MandelDodecahedron):

While brute-force rendering is much slower than distance estimation, it is possible to render these systems at interactive frame rates in Fragmentarium, especially since responsiveness can be improved by progressive rendering: do a number of samples, store the best solution found (the closest hit) in a depth buffer (I use the alpha channel), render the frame, and repeat.
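The accumulation step of this progressive loop amounts to a per-pixel minimum. A sketch in Python, with depth buffers represented as nested lists and `float('inf')` marking pixels with no hit yet:

```python
def merge_pass(best, new_pass):
    """Keep the closest hit per pixel across progressive rendering passes."""
    return [[min(b, n) for b, n in zip(best_row, new_row)]
            for best_row, new_row in zip(best, new_pass)]
```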

There are a couple of downsides to brute force rendering:

• It is slower than distance estimation
• You have to rely on screen space methods for ambient occlusion, surface normals, and depth-of-field
• Anti-aliasing is trickier, since you cannot simply accumulate and average samples. You may render at a higher resolution and downsample, or use tiled rendering, but beware that screen space ambient occlusion introduces artifacts which may be visible at tile edges.

On the other hand, there are also advantages:

• Much simpler to construct
• Interior renderings are trivial – just reverse the ‘inside’ function
• Progressive quality rendering: just keep adding samples, and the image will converge.

To use the Fragmentarium script, just implement an ‘inside’ function:

It is also possible to use the raytracer on existing DEs – here a point is considered inside the fractal if the DE returns a negative number, and outside if it returns a positive one.
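In Python terms, this wrapper is a one-liner (the sign convention follows the text: negative means inside). The sphere DE below is a hypothetical stand-in used only for demonstration:

```python
import math

def inside_from_de(de):
    """Turn a signed distance estimator into a boolean 'inside' test."""
    return lambda p: de(p) <= 0.0

# Example: exact signed distance to a unit sphere (illustrative stand-in)
sphere_de = lambda p: math.sqrt(sum(c * c for c in p)) - 1.0
inside = inside_from_de(sphere_de)
```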

The script can be downloaded as part of the Fragmentarium source distribution (it is not yet in the binary distributions). The following files are needed:

## 6 thoughts on “Rendering 3D fractals without a distance estimator”

1. Thank you so much for your “naive ambient occlusion” algorithm. Since I already had the Z-buffer for a depth of field effect, it took me no more than 10 minutes to implement your solution, and the result is stunning. I am on my way to posting a few screenshots on my blog.

2. Glad you like it! It actually worked much better than expected, though I think there is room for improvement.

3. Well, I like simple things because I am a lazy man too, and since your code does the job, you can be proud of it. And congratulations on your work; your applications are stunning. Keep it up! Thanks again.

4. Looks nice, Nikolay – but it is difficult to read the Russian text. Do you use any special tricks for calculating normals or sampling the volume?

5. Yep, some tricks are present. Sorry for my poor English…

1. I use classical raytracing: rays are shot from the screen pixels towards the fractal surface. If a ray (from a pixel on screen) intersects the fractal surface, we stop the calculation for that ray (because we will not see anything interesting beyond it). Then we store the 3D coordinate of the intersection point in the z-buffer. If the ray leaves the bounding volume, we store NULL.
2. Cutting slices, step by step from the camera through the fractal volume.
3. Each pixel on screen (in the rendered image) represents a point on the fractal surface (we store them in the z-buffer). We can unite contiguous non-null points into triangles, then calculate normals…