The idea behind this post goes back to a 2012 Fractal Forums discussion, where Knighty came up with a clever scheme for ray marching polytopes. The post also discusses the mathematics behind Jenn 3D (a tool for creating explicit geometry).

It is probably my most ambitious post yet – lots of interactive WebGL components – which also means it is not very mobile-friendly, I’m afraid.

Due to the complexity of the post, I did not use WordPress for this. Instead, the blog post is hosted on GitHub and can be found here:

https://syntopia.github.io/Polytopia/polytopes.html

Please let me know if you have ideas for improvements or corrections!

Hello! This has been __extremely__ helpful in learning about polytopes and symmetry. I’m at a point where I feel comfortable with Coxeter groups and with the linear algebra involved, but I’m curious about the section where you determine the normals for the mirrors; you didn’t go into much detail on finding them. I can show that the vectors you’ve provided do indeed have the required angles, but I do not understand what algorithm you used. In particular, what would be the normal for a fourth mirror?

Thanks.
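For anyone with the same question: one standard way to build the normals (not necessarily the exact method the post uses, but it reproduces vectors with the required angles) is a Cholesky-style construction. Each pair of mirrors i, j must meet at the dihedral angle π/m_ij, i.e. the unit normals must satisfy n_i · n_j = −cos(π/m_ij). Choosing n_i to use only the first i+1 coordinates makes the constraints a triangular system you can solve one coordinate at a time – and it extends directly to a fourth mirror in 4D:

```python
import numpy as np

def mirror_normals(coxeter):
    """Unit normals n_i with n_i . n_j = -cos(pi / m_ij) for i != j,
    the dihedral-angle condition on a Coxeter group's mirrors.
    `coxeter` is the symmetric Coxeter matrix of orders m_ij."""
    n = len(coxeter)
    normals = np.zeros((n, n))
    for i in range(n):
        # Normal i is lower-triangular: solve each dot-product
        # constraint against the earlier normals in turn...
        for j in range(i):
            target = -np.cos(np.pi / coxeter[i][j])
            normals[i, j] = (target - normals[i, :j] @ normals[j, :j]) / normals[j, j]
        # ...then use the remaining coordinate to make it unit length.
        normals[i, i] = np.sqrt(1.0 - normals[i, :i] @ normals[i, :i])
    return normals
```

For the cube’s symmetry group {4,3} (orders 4 and 3 between adjacent mirrors, 2 between the non-adjacent pair), `mirror_normals([[1, 4, 2], [4, 1, 3], [2, 3, 1]])` returns three unit vectors whose pairwise dot products are −cos(π/4), −cos(π/2) and −cos(π/3); adding a fourth row/column to the matrix gives the fourth mirror.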

I have a question regarding the overall process. I come from GLSL using OpenGL in C++. There, you define the vertices in C++ and hand them to the shader via a vertex buffer, which then colors them with a texture and outputs the result to the pixel as gl_FragColor.

But how is it done in those polytope animations from Knighty, Conner Bell, etc.? I can’t find any output or gl_FragColor. Obviously you just generate the geometry inside the shader (as opposed to my C++ experience), and fold it into the fundamental domain. But then what? How do you get to the pixel from there?

I know this is a really basic question that may seem silly to a shader pro. But I could really use a hint. I understand the basic concepts of generators, folding, etc., but the practical part somehow evades me.

Any help is appreciated.

Hi, I think the images you are referring to are rendered using ray marching on implicit geometries – that means there are no polygons; instead, rays are traced for intersections against some mathematical function. For an introduction see e.g.: http://blog.hvidtfeldts.net/index.php/2011/06/distance-estimated-3d-fractals-part-i/
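The core loop of that technique (sphere tracing) is small. A minimal sketch in Python for readability – in the actual shaders this runs per pixel in the fragment shader, with the ray direction derived from the pixel coordinate and the final color written to gl_FragColor; the sphere distance estimator here is just a stand-in for the folded polytope distance function:

```python
import math

def sphere_de(p, radius=1.0):
    # Distance estimator: signed distance from point p to a sphere at the origin.
    return math.sqrt(p[0]**2 + p[1]**2 + p[2]**2) - radius

def ray_march(origin, direction, de, max_steps=128, eps=1e-4, max_dist=100.0):
    """Sphere tracing: repeatedly step along the ray by the estimated
    distance to the nearest surface, until we are within eps (a hit)
    or the ray has escaped (a miss)."""
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o + t * d for o, d in zip(origin, direction))
        dist = de(p)
        if dist < eps:
            return t          # hit: distance along the ray
        t += dist             # safe step: no surface is closer than dist
        if t > max_dist:
            break
    return None               # miss: shade as background
```

A ray from (0, 0, −3) pointing along +z hits the unit sphere at t = 2; a ray pointing sideways never gets closer and returns `None`. No vertices are involved anywhere – the “geometry” exists only as the distance function.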

Hi Mikael. Thanks a lot for your reply.

As I said before, for someone who had so far only loaded vertices and textures into shaders, this method seemed daunting at first. Your assurance motivated me to keep digging. Now, after immersing myself in your tutorial and after reverse-engineering several cut&fold demo shaders, I am starting to get it.

It took me quite a while to realize that pab/pbc/pca/… are just the intersections of the reflection planes, and can easily be found by taking the cross product of their normal vectors. (I think you have an even easier way, knowing that they are predefined by the angles of the reflection planes, but I’ll look into that later…)
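That observation in code form – a sketch, assuming (as in the demo shaders the comment refers to) that all mirror planes pass through the origin, so their intersection is a line through the origin whose direction is perpendicular to both normals:

```python
import numpy as np

def plane_intersection(na, nb):
    """Direction of the line where two planes through the origin meet.
    It must be perpendicular to both normals, i.e. their cross product,
    normalized to unit length."""
    d = np.cross(na, nb)
    return d / np.linalg.norm(d)
```

For example, the xz-plane (normal (0, 1, 0)) and the yz-plane (normal (1, 0, 0)) intersect along the z-axis. The names pab, pbc, pca simply label the pairwise intersections of mirrors a, b, c.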

Long story short: thanks a lot for the excellent work you’ve put up here!

I’m looking forward to digging into other topics as well.