Here is a small collection of questions I have been asked from time to time about Fragmentarium.
Why does Fragmentarium crash?
The most common cause for this is that a drawing operation takes too long to complete. On Windows, there is a two-second time limit on jobs executing on the GPU. After that, a GPU watchdog timer will kill the graphics driver and restore it (resulting in an unresponsive, possibly black, display for 5-10 seconds). The host process (here Fragmentarium) will also be shut down by Windows.
If this happens, you will get errors like this:
"The NVIDIA OpenGL driver lost connection with the display driver and is unable to continue. The application must close.... Error code: 8"
"Display driver stopped responding and has recovered"
Try lowering the number of ray steps or iterations, or use the preview feature. Notice that for high-resolution renders the time limit applies per tile, so it is still possible to produce high-resolution images.
Another solution is to change the watchdog timer behaviour via the TDR Registry Keys:
The TDR registry keys were not defined in my registry, so I added two DWORD values,
TdrDelay = 30
TdrDdiDelay = 30
which set a 30-second window for GPU calculations. You have to restart Windows for these settings to take effect. Be advised that changing them may render your system completely unresponsive if the GPU crashes.
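For reference, the TDR keys live under the GraphicsDrivers node of the registry. A minimal .reg sketch might look like this (values are hexadecimal, so 0x1e = 30 seconds; as above, use at your own risk):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\GraphicsDrivers]
"TdrDelay"=dword:0000001e
"TdrDdiDelay"=dword:0000001e
```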
Why does Fragmentarium not work on my GPU?
Even though GLSL is a well-defined standard, there are differences between vendor implementations and drivers. The computers I use have Nvidia cards, so Fragmentarium is mostly tested on Nvidia's platform.
The ATI compiler is stricter about casting. For instance, adding an integer literal to a float results in an error, not a warning:
float a = b + 3; // error on ATI; write 3.0 instead
The ATI compiler also seems to suffer from some loop-optimization problems: some of the fractals in Fragmentarium do not work on my ATI card if the number of iterations is not locked (or hard-coded in the fragment code). Inserting a condition into the loop also solves the problem.
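A sketch of the workaround (hypothetical fragment code; the `Iterations` uniform and `DE` function are placeholders for whatever your fragment uses): adding an early-exit condition inside the loop is enough to avoid the mis-compilation:

```glsl
uniform int Iterations; // slider-driven, i.e. not a compile-time constant

float DE(vec3 p) {
    float r = length(p);
    for (int i = 0; i < Iterations; i++) {
        // ... fractal iteration step goes here ...
        r = length(p);
        // This bail-out condition also works around the ATI
        // loop-optimization problem described above.
        if (r > 1000.0) break;
    }
    return r;
}
```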
The Intel GPU compiler also has some issues: for instance, some operations on literal constants result in errors:
vec3 up = normalize(vec3(1.0,0.0,0.0)); // fails on Intel
Why do I get warnings about disabled widgets?
If you see warnings like:
Unable to find 'FloorNormal' in shader program. Disabling widget.
it does not indicate a problem. For instance, the warning above appears when the EnableFloor checkbox is locked and disabled. In this case, the GLSL compiler will optimize the floor code away, and the FloorNormal uniform variable will no longer be part of the compiled program, hence the warning. These warnings can be safely ignored.
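To illustrate with a simplified, hypothetical fragment (the names mirror the warning above, but the shading code is made up): when the checkbox is locked, its value is baked in as a constant, the branch becomes dead code, and any uniform used only inside it disappears from the compiled program:

```glsl
const bool EnableFloor = false; // the locked checkbox is baked in as a constant
uniform vec3 FloorNormal;       // only referenced inside the dead branch

vec3 shade(vec3 color, vec3 p) {
    if (EnableFloor) {
        // After constant folding this branch is removed entirely,
        // so FloorNormal is no longer an active uniform - and
        // Fragmentarium cannot find it when binding the widget.
        color *= dot(FloorNormal, normalize(p));
    }
    return color;
}
```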
Why do your Fragmentarium images look much nicer than the ones I get in Fragmentarium?
The images I post are always rendered at a higher resolution using the High Resolution Render option and then downscaled. This reduces aliasing and rendering artifacts. Use a painting program with a proper downscaling filter; I use Paint.NET, which seems to work okay.
Before rendering in hi-res, use the Tile Preview to zoom in and adjust the detail level and number of ray steps, so the image looks okay at the Tile Preview resolution.
Why is Fragmentarium slower than BoxPlorer/Fractals.io/…?
The default raytracer in Fragmentarium has grown somewhat complex. It is possible to gain speed by locking variables, but this is somewhat tedious. Another solution is to switch to a different raytracer by changing the #include at the top of the fragment, for example to:
– a faster version of the default raytracer, which will remember all settings, or
– Tom Beddard's raytracer, which uses a different set of parameters.
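The switch itself is just a one-line change at the top of the fragment (the filenames below are placeholders; use the raytracer fragments that ship with your Fragmentarium version):

```glsl
// Before: the default, full-featured raytracer.
//#include "DE-Raytracer.frag"

// After: a lighter raytracer (placeholder name).
#include "My-Fast-Raytracer.frag"
```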
Can I use double precision in my shaders?
Most modern graphics cards support double precision numbers in hardware, so in principle, yes, if your card supports it. In practice, it is much more difficult:
First, the Fragmentarium presets and sliders (including camera settings) will only transfer data (uniforms) to the GPU as single-precision floats. This is not the biggest problem, since you might only need double precision for the numbers that accumulate errors. The Qt OpenGL wrappers I use don't support double types, but it would be possible to work around this if needed.
Second, while newer GLSL versions do support double-precision numbers (through types such as double, dvec3, and dmat3), not all of the built-in functions accept them. In particular, there are no trigonometric or exponential functions: no cos(double), exp(double), etc. The available functions are described here.
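A sketch of what this means in practice (a made-up iteration function, assuming a GPU and driver that accept GL_ARB_gpu_shader_fp64): arithmetic works directly on the double types, but transcendental functions require casting down to float and back:

```glsl
#extension GL_ARB_gpu_shader_fp64 : enable

dvec3 iterate(dvec3 z, dvec3 c) {
    // Additions and multiplications exist in double precision...
    dvec3 z2 = z * z + c;
    // ...but there is no cos(double): cast to float, take the cosine
    // (in single precision), and widen the result again.
    double a = double(cos(float(z2.x)));
    return dvec3(a, z2.y, z2.z);
}
```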
Third, it might be slow: when Nvidia designed the Fermi architecture used in their recent graphics cards, they built it so that it should be able to process double-precision operations at half the speed of single-precision operations (which is optimal, given the doubled size of the numbers). However, they decided that their consumer branch (the GeForce cards) should be artificially limited to run double-precision calculations at 1/8 the speed of single precision. Their Tesla line of cards (which shares its architecture with the GeForce branch) is not artificially throttled and will run double precision at half the speed of single precision. As for the AMD/ATI cards, I do not think they have similar limitations, but I'm not sure about this.
If you really still want to try, you must insert this command at the top of the script:
#extension GL_ARB_gpu_shader_fp64 : enable
(Or use a #version directive, but this will likely cause problems with most of the existing examples.)
Finally, what about emulating double-precision numbers instead of using the hardware versions? While this sounds very slow, it is probably not much slower than the throttled hardware implementation. The downside is that GLSL does not support operator overloading, so there is no syntactically nice way to implement such functionality: instead of just changing your data types, you must convert all code from e.g.
A = B*C+D;
to something like
A = dplus(dmul(B,C),D);
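To give an idea of what such an emulation looks like, here is a sketch of double-float addition (the classical Knuth/Dekker two-sum; a value is stored as a high/low pair of floats in a vec2, and the function name is my own, analogous to the dplus above):

```glsl
// An emulated double is vec2(hi, lo), with hi + lo representing the value.
vec2 ds_add(vec2 a, vec2 b) {
    float s = a.x + b.x;                    // naive sum of the high parts
    float v = s - a.x;
    float e = (a.x - (s - v)) + (b.x - v);  // exact rounding error of the sum
    e += a.y + b.y;                         // fold in the low parts
    // Renormalize into a (hi, lo) pair again. Note that a real
    // implementation must keep the compiler from algebraically
    // simplifying these expressions away.
    float hi = s + e;
    return vec2(hi, e - (hi - s));
}
```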
If you are still interested, here is a great introduction to emulating doubles in GLSL.
How do I report errors so that it is easiest for you to correct them?
(Well, actually nobody asks this question.)
If you find an error, please report the following:
– Operating system, and the graphics card you are using
– The version of Fragmentarium, and whether you built it yourself
– A reproducible description of the steps that caused the error (if possible).
You may mail errors to me at mikael (at) hvidtfeldts.net.