Enabling floating point z-buffers in SDL3/OpenGL

I asked this on the Odin forum (Enabling floating point z-buffers in SDL3/OpenGL - Help - Odin Forum), but still haven't found a solution. Thank you to Barinzaya for trying, though.

I'm currently trying to expand the OpenGL example code into something a bit more like a real game, learning OpenGL and Odin along the way. I thought I'd switch to a floating point z-buffer, since that seems much easier to work with than an integer one, but I have gotten lost in all the incantations that don't work. So far I get a 16-bit integer z-buffer when rendering to the back buffer, and a 24-bit integer z-buffer attached to a framebuffer object, and it stays that way no matter what flags I pass around.

I have tried setting the global SDL attributes GL_FLOATBUFFERS and GL_DEPTH_SIZE, as those seem like they should be relevant, but the resulting context doesn't seem to change from the defaults no matter when I set them.
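For reference, this is roughly the shape of what I'm doing. The one detail that matters (per the SDL docs) is that GL attributes must be set after SDL is initialised but before the window and context are created; setting them later has no effect. The enum names (`.FLOATBUFFERS`, `.DEPTH_SIZE`) are how I believe they appear in the `vendor:sdl3` bindings, so treat this as a sketch:

```odin
package main

import sdl "vendor:sdl3"

main :: proc() {
	_ = sdl.Init({.VIDEO})
	defer sdl.Quit()

	// Must happen before CreateWindow / GL_CreateContext; afterwards it is too late.
	sdl.GL_SetAttribute(.FLOATBUFFERS, 1) // request a floating point pixel format
	sdl.GL_SetAttribute(.DEPTH_SIZE, 32)  // request a 32-bit depth buffer

	window := sdl.CreateWindow("depth test", 800, 600, {.OPENGL})
	ctx := sdl.GL_CreateContext(window)
	_ = ctx
}
```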

I'm on Windows 10, which might matter according to 15-year-old SO answers.

Code is here: Some buggy test code · GitHub

I have set it up to produce a clearly visible z error that a floating point z-buffer should fix: the two triangles ought to intersect exactly at the point where they change colour.

If you are not familiar with Odin, I don't think the code should be too hard to follow. Running it should also be fairly simple: install the Odin compiler, add it to your path, put the source file in its own folder along with the SDL3 DLL, then compile and run it with `odin run .`

Okay, I found out why, thanks to this article: Outerra: Maximizing Depth Buffer Range and Precision.

Turns out OpenGL is just broken, sabotaged by either extreme incompetence or malice. After the shader returns a z-value in the range -1 to 1, OpenGL remaps it to 0 to 1, and this transformation destroys precision: the remap 0.5*z + 0.5 is computed in floating point, and adding 0.5 throws away the extra resolution a float has near zero, which is exactly the resolution a float depth buffer relies on. nVidia introduced the glDepthRangedNV function, whose sole useful feature is that you can call it with the values -1 and 1 to disable the remap; it was then standardised in a version that clamps both input values to between 0 and 1, defeating the point.
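To make the precision loss concrete, here is a tiny sketch (plain Odin, nothing graphics-related) showing two distinct depth values near zero collapsing into the same number once the fixed-function 0.5*z + 0.5 remap is applied in 32-bit float:

```odin
package main

import "core:fmt"

main :: proc() {
	// Two distinct z-values close to zero, where a float32 has plenty of resolution.
	a: f32 = 1e-8
	b: f32 = 4e-8

	// The 0.5*z + 0.5 remap lands both near 0.5, where the spacing between
	// adjacent float32 values is about 6e-8, so the difference is rounded away.
	fmt.println(0.5*a + 0.5 == 0.5*b + 0.5) // true: both are exactly 0.5
}
```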

It now makes sense why the z-buffer type doesn't make a difference: after the remap, the range of possible depth values is coarse enough that every format except a 16-bit buffer can represent all of them exactly.

Aha, it was fixed in OpenGL 4.5. Well, not the DepthRange function, but they added a new function for enabling sanity: gl.ClipControl(gl.LOWER_LEFT, gl.ZERO_TO_ONE). The first parameter can be used to flip the picture upside down; the second completely does away with the negative part of depth clip space. Shove in that call and my code just works (at least on my machine).
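For anyone landing here later, this is the shape of the fix as I understand it. The ClipControl call is the one that made my code work; the ClearDepth/DepthFunc lines are the standard reversed-Z companions described in the Outerra article, and I'm assuming the `vendor:OpenGL` binding names here:

```odin
import gl "vendor:OpenGL"

setup_reversed_z :: proc() {
	// Requires OpenGL 4.5 (or ARB_clip_control). Call once after context creation.
	// ZERO_TO_ONE removes the -1..1 -> 0..1 remap entirely.
	gl.ClipControl(gl.LOWER_LEFT, gl.ZERO_TO_ONE)

	// With a reversed-Z projection the near plane sits at depth 1 and the far
	// plane at depth 0, so the clear value and depth test flip accordingly.
	gl.ClearDepth(0.0)
	gl.DepthFunc(gl.GREATER)
}
```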

I also noticed that my GPU and drivers (RDNA4) apparently don't offer any z-buffer modes other than 16-bit integer and 32-bit float: if I request a 24- or 32-bit integer buffer, I get a float buffer anyway.
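One way to check what the driver actually handed back (rather than what was requested) is to read the attributes back after context creation; again assuming `vendor:sdl3` naming, a sketch:

```odin
import "core:fmt"
import sdl "vendor:sdl3"

report_depth_format :: proc() {
	// GL_GetAttribute reports the values of the context that was actually
	// created, which can differ from what GL_SetAttribute requested.
	depth_bits: i32
	sdl.GL_GetAttribute(.DEPTH_SIZE, &depth_bits)
	float_buf: i32
	sdl.GL_GetAttribute(.FLOATBUFFERS, &float_buf)
	fmt.printfln("depth buffer: %v bits, float: %v", depth_bits, float_buf != 0)
}
```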