A slight digression from the world of fractals and generative art: This post is about drawing high-quality graphs of high-frequency functions.

Yesterday, I needed to draw a few graphs of some simple functions. I started out by using Iñigo Quilez’s nice little GraphToy, but my functions could not be expressed in a single line. So I decided to implement a graph plotter example in Fragmentarium instead.

Plotting a graph using a GLSL shader is not an obvious task – you have to frame the problem so that each pixel can be processed independently. This is in contrast to the standard way of drawing graphs, where you choose a uniform set of values for the x-axis and draw the lines connecting the points in the (x, f(x)) set.
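For contrast, the conventional connecting-line approach reduces to a few lines (a Python sketch; the function name is mine):

```python
import math

def polyline_points(f, x_min, x_max, n):
    # Uniform samples along x; a conventional plotter then draws straight
    # line segments between consecutive (x, f(x)) points.
    xs = [x_min + (x_max - x_min) * i / (n - 1) for i in range(n)]
    return [(x, f(x)) for x in xs]

pts = polyline_points(math.sin, 0.0, 2.0 * math.pi, 100)
```

As we will see below, this works fine for slowly varying functions, but the fixed set of x values is exactly what causes trouble later.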

So how do you do it for each pixel individually?

The first thing to realize is that it is easy to determine whether a pixel is above or below the graph – simply check whether y<f(x) or y>f(x). The tricky part is that we only want to draw the *boundary* – the curve that separates the regions above and below the graph.

So how do we determine the boundary? After trying a few different approaches, I came up with the following simple edge detection procedure: for each pixel, choose a number of samples in a region around the pixel center. Then count how many samples are above the curve, and how many are below.

If all samples are above, or all samples are below, the pixel is not on the boundary. However, if there are samples both above and below, the boundary must pass through the pixel.

The whole idea can be expressed in a few lines of code:

```glsl
int count = 0;
for (float i = 0.0; i < samples; i++) {
    for (float j = 0.0; j < samples; j++) {
        float f = function(pos.x + i * step.x) - (pos.y + j * step.y);
        count += (f > 0.) ? 1 : -1;
    }
}
// base the pixel color on abs(count)/(samples*samples)
```

It should be noted that the sampling can be improved by adding a small amount of jittering (random offsets) to the sample positions – this reduces the aliasing at the cost of adding a small amount of noise.
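The same per-pixel counting idea, jitter included, can be simulated on the CPU. This is a Python sketch of the GLSL logic above, not the actual Fragmentarium code; the function name and parameters are mine:

```python
import math
import random

def boundary_coverage(f, cx, cy, pixel_size, samples=8, jitter=0.0, rng=None):
    """Count samples above vs. below y = f(x) in a pixel-sized region
    around (cx, cy); a value near 1.0 means the curve passes through
    the pixel. samples must be >= 2."""
    rng = rng or random.Random(0)
    count = 0
    for i in range(samples):
        for j in range(samples):
            # regular grid offsets across the pixel, optionally jittered
            dx = (i / (samples - 1) - 0.5 + jitter * (rng.random() - 0.5)) * pixel_size
            dy = (j / (samples - 1) - 0.5 + jitter * (rng.random() - 0.5)) * pixel_size
            count += 1 if f(cx + dx) - (cy + dy) > 0.0 else -1
    return 1.0 - abs(count) / (samples * samples)

print(boundary_coverage(math.sin, 0.0, 0.0, 0.1))   # pixel on the curve → 1.0
print(boundary_coverage(math.sin, 0.0, -1.0, 0.1))  # pixel far below it → 0.0
```

Setting jitter to something like 0.5 trades the remaining grid aliasing for a little noise, as described above.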

## High-frequency functions and aliasing

So why is this better than the common ‘connecting line’ approach?

Because this approach deals with the high-frequency information much better.

Consider the function f(x)=sin(x*x*x)*sin(x).

Here is a plot from GraphToy:

Notice how the graph near the red arrows seems to be slowly varying. This is not the true behavior of the function, but an artifact of the way we sample it. Our limited resolution cannot capture the high-frequency components, which results in aliasing.
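To see how much detail a coarse uniform sampling misses here, we can count in Python how often f changes sign between two adjacent plot points near x = 10 – a connecting-line plotter would draw a single straight segment across all of these crossings (the coarse step of 0.05 is my stand-in for the plotter's sample spacing):

```python
import math

def f(x):
    return math.sin(x * x * x) * math.sin(x)

# Two adjacent plot points, 0.05 apart, near x = 10, where sin(x^3)
# oscillates at roughly 300 radians per unit of x.
a, b, fine = 10.0, 10.05, 1000
crossings = 0
prev = f(a)
for k in range(1, fine + 1):
    cur = f(a + (b - a) * k / fine)
    if prev * cur < 0.0:
        crossings += 1  # the curve crossed the x axis between fine samples
    prev = cur
print(crossings)  # → 5 zero crossings hidden inside one coarse step
```

Five full excursions across the axis collapse into one meaningless line segment – exactly the slowly-varying artifact visible near the red arrows.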

Whenever you do anything media-related on a computer, you will at some point run into problems with aliasing: whether you are doing sound synthesis, image manipulation, 3D rendering or even drawing a straight line.

However, using the pixel shader approach, aliasing is much easier to avoid. Here is a Fragmentarium plot of the same function:

Even though it may seem backwards to evaluate the function for all pixels on the screen, this makes it possible to tame the aliasing, and even on a modest GPU the procedure is fast enough for real-time interaction.

The example is included on GitHub under Examples/2D Systems/GraphPlotter.frag.

Thought I’d add the link here as well; I did an implementation that plugs directly into shadertoy ( http://www.iquilezles.org/apps/shadertoy/ ).

Code: http://pastebin.com/je4qww6M

(image here http://twitpic.com/5m8d2s )

It is possible to skip all but one of the “vertical” samples, as you can calculate the distance to the function directly (distAvgEval in code).
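That trick can be sketched like this in Python (distAvgEval is Pixelmager's name from the pastebin code; this reconstruction of the idea is mine):

```python
import math

def coverage_1d(f, cx, cy, pixel_size, samples=16):
    """One row of samples instead of a grid: for each x offset, use the
    vertical distance from the pixel center to the curve directly
    (my reconstruction of the distAvgEval idea; samples must be >= 2)."""
    total = 0.0
    for i in range(samples):
        x = cx + (i / (samples - 1) - 0.5) * pixel_size
        d = abs(f(x) - cy)                       # vertical distance to the curve
        total += max(0.0, 1.0 - d / pixel_size)  # 1 at the curve, fading out
    return total / samples

print(coverage_1d(math.sin, 0.0, 0.0, 0.1))   # pixel on the curve: high
print(coverage_1d(math.sin, 0.0, -1.0, 0.1))  # pixel far away → 0.0
```

This cuts the sample count from samples² to samples, at the cost of only measuring distance vertically.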

Also, while the method produces a smooth result, it doesn’t give information about how much “time” the function spends in each pixel (think oscilloscope). The worst case is when all samples are either above or below a pixel, resulting in too high occlusion. Doing a direct multi-sampling gives the correct effect (proxyEval in code), but produces a less smooth result.
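A Python sketch of that density idea – my reconstruction of what proxyEval presumably does, not the pastebin code itself:

```python
import math

def density(f, cx, cy, pixel_size, samples=256):
    """Oscilloscope-style shading: the fraction of x positions inside the
    pixel for which f(x) also lands inside the pixel vertically - a proxy
    for the "time" the curve spends in the pixel."""
    hits = 0
    for i in range(samples):
        x = cx + ((i + 0.5) / samples - 0.5) * pixel_size
        if abs(f(x) - cy) <= pixel_size / 2.0:
            hits += 1
    return hits / samples

fast = lambda x: math.sin(5.0 * x)
print(density(fast, 0.0, 0.0, 0.1))             # steep zero crossing: low density
print(density(fast, math.pi / 10.0, 1.0, 0.1))  # flat peak → 1.0
```

Note how the curve "rushes through" the pixel at the steep zero crossing but lingers at the peak – the velocity-dependent shading a real oscilloscope gives.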

Good points!

The oscilloscope-type plot is nice and does reveal some information about the velocity of the curve, but for some applications I’d still prefer something closer to what you would get if the curve was drawn by pencil.

Regards, Mikael.

Hi.

First, this is great. I ran into this sampling problem when I was almost a kid 🙂

I believe this approach has the potential of becoming the standard way that every graph on every screen gets displayed.

But, I found something in this approach that is not perfect:

The part of the curve that is sufficiently zoomed is black, with a given thickness.

The part that is fuzzy is grayscale, and at its darkest, it’s black.

Still, it’s a fact that the curve goes everywhere in this area, at least once. It should then not become lighter than the rest just because we switched to something like multisampling. As it is now, it suggests that the curve is less present in those gray areas than in the normal part, while in reality the opposite is true.

When you understand what I mean, this render just looks terribly wrong 🙂 : it should be at least black everywhere, and even blacker where the density is higher.

Like an oscilloscope (well, an oscilloscope where the spot drives at constant speed over the 2D surface). A solution could be to draw the normal curve with 50% grey, saving some darkness for later :), or use another color for “extreme screen burning” :).

As for the implementation…I’ll leave that to you if you don’t mind 🙂

Regards.

Yep, the color is lighter near the top and the bottom of the sine blobs. This is because the density is lower there – you cannot really set an arbitrary line width in this approach, so there is room for improvement. By the way, a scope would have darker colors near the top and bottom, since the curve velocity is slower there (see Pixelmager’s pictures above).

The oscilloscope-style render, on the other hand, is coherent and means something, but its meaning is tied to the nature of the curve: a y=f(x) curve, where x moves at constant speed. We can say the color gradient shows the probability of presence of f(x), at a given height, for x randomly picked between the left of the pixel and its right 🙂

The eye understands this, and this has a physical reality – otherwise the phosphor wouldn’t act like this in the first place.

This is highly non-isotropic: this oscilloscope-style render is not applicable to a generic 2D curve, say, (x,y) = f(t).

I think I sense that, in order to make the generic render I talked about, you’re going to have to simulate the plot driving a car on the 2D plane, a car whose engine is stuck at a given speed, and you can only steer left and right. 🙂

To do this with only the y=f(x) formula, I guess you’ll need to differentiate some things, but I’m too lazy tonight 🙂

Also, I guess you have the skills to do this with your hands tied in your back!

Wow, you’re online, and fast.

What I mean, and what you probably wanted to achieve, is simple:

It is the way it would look if it was rendered by connecting the dots, but at a ridiculously high (and sufficient) resolution, then downsampled.

(If you’re extremely picky, you should even do the high-res render properly antialiased, because of the 45-degree-line smaller-density problem.)

I said it already: the answer is to draw normal parts in 50% grey, and when denser, draw darker… or use black that switches to red, for example, or a full color cycle.

Ah, I just realize that your approach, that checks for above or below, is very likely to be non-isotropic as well, and is also designed for y=f(x) curves and not generic plane curves.

Then I’m not that sure it is like you drew it with a pen…

Thinking about it, maybe (maybe) sin(x) should not be darker near the x axis…

Maybe equal, maybe the opposite…

A quick test I did in Paint seems to confirm this.

I suspect the 45-degree-line problem is biting you…

You see, near the x axis, the slope of sin(x) is precisely this value (45 degrees)…

All this is because pixels form a square grid, which is not perfectly isotropic.

The method will also work for f(x,y)=0 curves, but not for parametric curves, f(t)=(x,y).

When I choose samples to test whether they are above or below the curve, I sample uniformly on a disc, not a square. (This is not part of the code above, but I use a rejection test: “if (i*i+j*j>samples*samples) continue;”)
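In Python, the disc rejection looks like this (assuming the loop indices span -samples..samples, which the snippet implies):

```python
def disc_samples(radius_steps):
    """Grid offsets inside a disc, mirroring the GLSL rejection test
    "if (i*i+j*j>samples*samples) continue;" (loop bounds assumed)."""
    pts = []
    for i in range(-radius_steps, radius_steps + 1):
        for j in range(-radius_steps, radius_steps + 1):
            if i * i + j * j > radius_steps * radius_steps:
                continue  # reject the corners of the square grid
            pts.append((i, j))
    return pts

print(len(disc_samples(4)))  # → 49 grid points inside a radius-4 disc
```

Rejecting the corners keeps the sampling region rotationally symmetric, which addresses the isotropy concern raised above.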

The method has flaws – it is not possible to specify an arbitrary thick line – but it is a fast alternative to other methods I’ve tried.

So what program would you say is best for graphing?

Related post on distance-estimates, from Quilez

http://iquilezles.org/www/articles/distance/distance.htm

Hi Pixelmager, very interesting to use distance estimation to plot graphs, but I don’t get very good results. Have you tried it yourself for high-frequency functions (e.g. sin(1.0/tan(x)))? I get some terrible artifacts. Perhaps I made a stupid coding error some place…

Someone made a distance-estimated graph-plotter on shadertoy:

https://www.shadertoy.com/view/Msl3Ws

(my previous example now at https://www.shadertoy.com/view/4sB3zz )

– inserting the highly oscillating function from this article does indeed produce very noticeable problems with distance estimation.
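This failure mode can be reproduced outside the shader. Using the distance estimate from Quilez's article – vertical distance divided by the local gradient magnitude – a rapidly oscillating function gives gross overestimates wherever the local derivative vanishes close to another branch of the curve. A Python sketch (the specific test function and point are mine):

```python
import math

def de_graph(f, df, x, y):
    # Distance estimate for the graph y = f(x): vertical distance divided
    # by the local gradient magnitude (the formula from Quilez's article).
    return abs(f(x) - y) / math.sqrt(1.0 + df(x) ** 2)

f  = lambda x: math.sin(100.0 * x)
df = lambda x: 100.0 * math.cos(100.0 * x)

# At (pi/200, 0) the local derivative is ~0, so the estimate is |f(x0)| = 1,
# yet the curve passes through the origin, only ~0.016 away - the estimate
# is only trustworthy where the derivative varies slowly over that distance.
x0 = math.pi / 200.0
print(de_graph(f, df, x0, 0.0))  # ≈ 1.0, a gross overestimate
print(x0)                        # the true distance is at most this (≈ 0.0157)
```

A thresholding or ray-marching pass driven by such an estimate will wrongly classify those pixels as far from the curve, which would explain the missing pieces of the graph.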