Hello, I’m having trouble with my GLSL shaders when using the GPU API
This document covers some of this, but not all of it. Apparently, SPIR-V doesn’t allow plain old uniforms, so I have to put them into a uniform block. I’m using the textured quad app from the examples repo. When I run my app I don’t see any graphics, and RenderDoc reports:
18 Incorrect API Use High Execution 0: Shader referenced a descriptor set 1 that was not bound
My vertex shader uniform block:
// note: Uniform Blocks are required in Vulkan, because F... REASONS?
layout (set = 1, binding = 0) uniform Projection
{
mat4 proj_matrix;
} u_projection;
Even in OpenGL, uniform blocks/buffers have been the preferred way to go for a while. You can specify directly in the shader where to bind them, update multiple values at the same time, and (in OpenGL) you don’t have to call glUniform*() every time you change shaders.
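For example, in newer GLSL (4.2+, or with ARB_shading_language_420pack) you can pick the binding point right in the shader and then attach a buffer to it once on the CPU side. A rough sketch, reusing your Projection block:

layout (std140, binding = 0) uniform Projection
{
    mat4 proj_matrix;
};

// CPU side, once: glBindBufferBase(GL_UNIFORM_BUFFER, 0, ubo); no glUniform*() calls after that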
Anyway, the error message RenderDoc gave you says your shader is trying to use a resource that hasn’t been bound. Are you binding a buffer to the appropriate slot, i.e. calling SDL_PushGPUVertexUniformData() / SDL_PushGPUFragmentUniformData() for the appropriate slot and shader stage?
But if I do that, how do I access the data from the main function? For example: gl_Position = u_projection.proj_matrix * vec4(position, 0.0, 1.0);
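To be concrete, this is roughly the full vertex shader I have in mind (a sketch: the #version, the in/out locations, and the tex_coords names are my guesses; everything else is from above):

#version 450

layout (location = 0) in vec2 position;
layout (location = 1) in vec2 tex_coords;

layout (location = 0) out vec2 vs_tex_coords;

layout (set = 1, binding = 0) uniform Projection
{
    mat4 proj_matrix;
} u_projection;

void main()
{
    // pass the texture coordinates through and transform the position with the block member
    vs_tex_coords = tex_coords;
    gl_Position = u_projection.proj_matrix * vec4(position, 0.0, 1.0);
}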
I didn’t know that. Maybe it’s because I was using OpenGL 3.x for little 2D games that needed very little data.
Let’s see:
I acquire a command buffer (draw_command below), begin a render pass, and bind the graphics pipeline, vertex buffer, index buffer, and fragment sampler (all of this almost identical to the example code from the repo), then I do:
// I'm sending a matrix4x4 to the vertex shader
SDL_PushGPUVertexUniformData(draw_command, /* slot_index / binding = 0 */ 0U, glm::value_ptr(m_proj), sizeof(glm::mat4));
// A single float to the fragment shader
SDL_PushGPUFragmentUniformData(draw_command, /* slot_index / binding = 1 */ 1U, &this->m_px_range, sizeof(float));
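Then I draw and submit, roughly like this (render_pass and index_count are just my names for the render pass handle and the number of indices):

// draw the quad and finish the frame
SDL_DrawGPUIndexedPrimitives(render_pass, index_count, 1U, 0U, 0, 0U);
SDL_EndGPURenderPass(render_pass);
SDL_SubmitGPUCommandBuffer(draw_command);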
In those push calls I’m using 0 and 1 for slot_index; I got those numbers from the SDL_CreateGPUShader documentation, but I’m not really sure what should be passed as that parameter.
I took a look at the TexturedAnimatedQuad example. The author uses 1 uniform buffer when creating each of the shaders, so I have added that.
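In my code that boils down to something like this when creating, say, the vertex shader (a sketch: device, the SPIR-V blob and its size are placeholders, and the zero-initialization leaves the other resource counts at 0):

SDL_GPUShaderCreateInfo shader_info = {};
shader_info.code                = vertex_spirv;        // compiled SPIR-V bytes (placeholder)
shader_info.code_size           = vertex_spirv_size;
shader_info.entrypoint          = "main";
shader_info.format              = SDL_GPU_SHADERFORMAT_SPIRV;
shader_info.stage               = SDL_GPU_SHADERSTAGE_VERTEX;
shader_info.num_uniform_buffers = 1;                   // the part I was missing
SDL_GPUShader *vertex_shader = SDL_CreateGPUShader(device, &shader_info);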
For the binding “slots”, I think I’ve got it now: you use the set number described in the documentation, and then the bindings for each uniform within that set start from 0. Example for the fragment shader: layout (set = 3, binding = 0) uniform .... Right?
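So, if I’ve read the docs right, the fragment side pairs up like this (FontParams / px_range are placeholder names here):

// fragment shader: SPIR-V uniform buffers go in set 3; binding 0 pairs with slot_index 0
layout (set = 3, binding = 0) uniform FontParams
{
    float px_range;
} u_font;

and on the C++ side I would then pass 0 (not 1) as the slot_index to SDL_PushGPUFragmentUniformData() to match that binding.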
Now I get no errors from RenderDoc, and I can see my font being rendered. The only problem is the texture is all messed up. It is a 24-bit .BMP file. Is SDL_PIXELFORMAT_ABGR8888 the right texture format for it?
Is that just a raw texture, or are you doing something in the fragment shader? If it’s a raw texture, make sure it isn’t just on/off alpha transparency, and check your blend state info when creating your pipeline.
By “raw texture” I meant that you’re just putting the texture on screen unmodified, as opposed to doing something with it in the fragment shader. But since you’re doing SDF font rendering, you’re definitely “doing something” with the texture.
As to the sharp edges, it looks (to me) like what happens when you use a blend state set up for premultiplied textures (textures whose color values have been multiplied by their alpha channel) with non-premultiplied ones. And, in fact, the blend state you have set up is for premultiplied textures. Since you aren’t just sampling a premultiplied texture and then passing it through, you need to multiply the color by the alpha channel in the fragment shader. Something like:
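(a rough sketch; text_color and opacity stand for whatever color and SDF coverage values your fragment shader already computes, and frag_color for its output)

// multiply the same coverage/alpha you write out into the RGB as well,
// so the output matches the premultiplied-alpha blend state
frag_color = vec4(text_color.rgb * opacity, opacity);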
Good information, I didn’t know that. Thanks
I can still see some aliasing around the characters, and I think the text is too bright now. Maybe I should try some other blending modes to find the right one.
If the text becomes too bright after the alpha channel multiplication, that’s a sign your alpha channel is somehow brighter than 1.0, because multiplying your colors by 1.0 (maximum opacity) should give you the exact same colors.
edit: how are you calculating the screen pixel distance? Apparently some of the methods used for 3D SDF rendering give bad antialiasing if the distance field doesn’t have a wide enough range.
Well, I was prioritizing making it work over understanding every step of the process. My bad. I pretty much pasted the code straight from the repo:
float screen_px_range()
{
    // vs_tex_coords comes directly from the vertex shader
    vec2 screen_tex_size = vec2(1.0) / fwidth(vs_tex_coords);
    // unit_range is a float uniform, I set its value to 2.0
    return max(0.5 * dot(vec2(u_font.unit_range), screen_tex_size), 1.0);
}
If you are more familiar with SDF rendering than me, can you please explain what this unit range variable and the screen_px_range() function are doing?
From the msdfgen documentation: screenPxRange() represents the distance field range in output screen pixels. For example, if the pixel range was set to 2 when generating a 32x32 distance field, and it is used to draw a quad that is 72x72 pixels on the screen, it should return 4.5 (because 72/32 * 2 = 4.5). For 2D rendering, this can generally be replaced by a precomputed uniform value.
It then goes on to say:
For rendering in a 3D perspective only, where the texture scale varies across the screen, you may want to implement this function with fragment derivatives in the following way. I would suggest precomputing unitRange as a uniform variable instead of pxRange for better performance.
// screenPxRange() code here (essentially the screen_px_range() you pasted above, except it computes unitRange inside the function as pxRange divided by the texture size)
screenPxRange() must never be lower than 1. If it is lower than 2, there is a high probability that the anti-aliasing will fail and you may want to re-generate your distance field with a wider range.
So, if I understand it correctly, this value gives more or less “resolution”, or quality, to the rendered text as it is scaled, right?
And for 2D graphics (I use an orthographic projection) I just need a constant value, right?
From what I understand it’s the distance field’s range expressed in on-screen pixels, i.e. how many screen pixels the SDF’s transition band covers.
And yeah, since it’s 2D you know how big the quad will be on screen, and you know the size and range the distance field was generated at, so you can precompute it and pass it in as a shader argument in a UBO.
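As a sketch with the numbers from the docs quoted above (the variable names are mine, draw_command is your command buffer, and this assumes the fragment shader just reads the pushed value instead of computing it with fwidth()):

// precompute screenPxRange on the CPU for 2D text:
// (on-screen size of the quad / texel size of the glyph region) * the pxRange used at generation time
float px_range        = 2.0f;   // distance field range used when generating the atlas
float atlas_size      = 32.0f;  // glyph/atlas region size in texels
float screen_size     = 72.0f;  // size that region covers on screen, in pixels
float screen_px_range = (screen_size / atlas_size) * px_range;   // 72/32 * 2 = 4.5
SDL_PushGPUFragmentUniformData(draw_command, 0U, &screen_px_range, sizeof(float));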