SDL3 GPU with GLSL

Hello, I’m having trouble with my GLSL shaders when using the GPU API.
This document shows some things, but not all. Apparently, SPIR-V does not allow good old plain uniforms, so I must put them into a uniform block. I’m using the textured quad app from the examples repo. When I run my app I don’t see any graphics. RenderDoc says:

Incorrect API Use (High): Shader referenced a descriptor set 1 that was not bound

My vertex shader uniform block:

// note: Uniform Blocks are required in Vulkan, because F... REASONS?
layout (set = 1, binding = 0) uniform Projection
{
    mat4 proj_matrix;
} u_projection;

In my fragment shader:

layout (set = 3, binding = 1) uniform Font
{
    vec2 unit_range;
} u_font;

So, what is the correct binding for uniform blocks? Or do I need to put my variables into a uniform buffer object?

Even in OpenGL, uniform blocks/buffers have been the preferred way to go for a while. You can specify directly in the shader where to bind them, update multiple values at the same time, and (in OpenGL) you didn’t have to call glUniform*() every time you changed shaders.

Anyway, the error message RenderDoc gave you says your shader is trying to use a resource that hasn’t been bound. Are you binding a buffer to the appropriate slot / calling SDL_PushGPUVertex/FragmentUniformData() for the appropriate slot and shader stage?

edit: you can drop the u_projection and u_font instance names.

Hello sjr

But if I do, how do I access the data from the main function? For example:
gl_Position = u_projection.proj_matrix * vec4(position, 0.0, 1.0);

I didn’t know that. Maybe it’s because I was using OpenGL 3.x for little 2D games that required very little data.

Let’s see:
I acquire a draw command, begin a render pass, bind the graphics pipeline, vertex buffer, index buffer, fragment sampler (all of this almost identical to the example code from the repo), then I do:

// I'm sending a matrix4x4 to the vertex shader
SDL_PushGPUVertexUniformData(draw_command, /* slot_index / binding = 0 */ 0U, glm::value_ptr(m_proj), sizeof(glm::mat4));
// A single float to the fragment shader
SDL_PushGPUFragmentUniformData(draw_command, /* slot_index / binding = 1 */ 1U, &this->m_px_range, sizeof(float));

And here I’m using 0 and 1 for the slot_index; I got those numbers from the SDL_CreateGPUShader documentation, but I’m not really sure what should be passed as that parameter.

I was taking a look at the TexturedAnimatedQuad example. The author declares 1 uniform buffer when loading each of the shaders, so I have added that.
For the binding “slots”, I think I get it now: you use the set number described in the documentation, then number the bindings within that set starting from 0. Example for the fragment shader: layout (set = 3, binding = 0) uniform .... Right?
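If so, the declarations would look something like this (a sketch based on the SDL_CreateGPUShader docs: vertex-stage uniform buffers in set 1, fragment-stage uniform buffers in set 3, bindings counted from 0 within each set; the block contents are the ones from my shaders above):

```glsl
// Vertex shader: uniform buffers go in set 1
layout (set = 1, binding = 0) uniform Projection
{
    mat4 proj_matrix;
} u_projection;

// Fragment shader: uniform buffers go in set 3
layout (set = 3, binding = 0) uniform Font
{
    vec2 unit_range;
} u_font;
```

The binding index within the set is what you pass as slot_index to SDL_PushGPUVertexUniformData() / SDL_PushGPUFragmentUniformData().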

Now I get no errors from RenderDoc, and I can see my font being rendered. The only problem is that the texture is all messed up. It is a 24-bit .BMP file. Is SDL_PIXELFORMAT_ABGR8888 the right texture format for it?

Nevermind, I got it working

But the font looks pixelated, it is supposed to look crisp. My sampler:

SDL_GPUSamplerCreateInfo sampler_create_info{};
sampler_create_info.min_filter = SDL_GPU_FILTER_NEAREST;
sampler_create_info.mag_filter = SDL_GPU_FILTER_LINEAR; // Magnification filter MUST be linear for SDF rendering to work as intended!
sampler_create_info.mipmap_mode = SDL_GPU_SAMPLERMIPMAPMODE_NEAREST;
sampler_create_info.address_mode_u = SDL_GPU_SAMPLERADDRESSMODE_CLAMP_TO_EDGE;
sampler_create_info.address_mode_v = SDL_GPU_SAMPLERADDRESSMODE_CLAMP_TO_EDGE;
sampler_create_info.address_mode_w = SDL_GPU_SAMPLERADDRESSMODE_CLAMP_TO_EDGE;

m_sampler = SDL_CreateGPUSampler(m_gpu_device, &sampler_create_info);

No errors are returned from any creation function

Good, glad you got it working!

Is that just a raw texture, or are you doing something in the fragment shader? If it’s a raw texture, make sure it isn’t just on/off alpha transparency, and check your blend state info when creating your pipeline.


I don’t know what you mean by raw texture, but I was thinking about the blending state. It is almost copy/pasted from the example code:

SDL_GPUColorTargetDescription color_target_desc{};
// For the output color target, pick the same texture format as the swapchain
color_target_desc.format = SDL_GetGPUSwapchainTextureFormat(m_gpu_device, m_window);
color_target_desc.blend_state = {};
// Enable blending
color_target_desc.blend_state.enable_blend = true;
color_target_desc.blend_state.alpha_blend_op = SDL_GPU_BLENDOP_ADD;
color_target_desc.blend_state.color_blend_op = SDL_GPU_BLENDOP_ADD;
color_target_desc.blend_state.src_color_blendfactor = SDL_GPU_BLENDFACTOR_SRC_ALPHA;
color_target_desc.blend_state.dst_color_blendfactor = SDL_GPU_BLENDFACTOR_ONE_MINUS_SRC_ALPHA;
color_target_desc.blend_state.src_alpha_blendfactor = SDL_GPU_BLENDFACTOR_SRC_ALPHA;
color_target_desc.blend_state.dst_alpha_blendfactor = SDL_GPU_BLENDFACTOR_ONE_MINUS_SRC_ALPHA;

SDL_GPUGraphicsPipelineTargetInfo pipeline_target_info{};
pipeline_target_info.color_target_descriptions = &color_target_desc;
pipeline_target_info.num_color_targets = 1U;

My fragment shader is doing this (almost copy/pasted from the msdf font atlas generator project):

void main()
{
    // Sample the MSDF 2D texture
    vec3 msdf = texture(u_tex_sampler, vs_tex_coords).rgb;
    // Calculate the signed distance
    float sd = median(msdf.r, msdf.g, msdf.b);

    // Distance in pixels to the edge
    float pixel_dist = screen_px_range() * (sd - 0.5);
    float opacity = clamp(pixel_dist + 0.5, 0.0, 1.0);

    // Multiply by text color (from the vertex shader)
    final_color = vec4(vs_color.rgb, opacity);
}

By “raw texture” I meant that you’re just putting the texture on screen unmodified, as opposed to doing something with it in the fragment shader. But since you’re doing SDF font rendering, you’re definitely “doing something” with the texture.

As to the sharp edges, it looks (to me) like what happens when you use a blend state set up for premultiplied textures (textures whose color values have been multiplied by their alpha channel) with non-premultiplied ones. And, in fact, the blend state you have set up is for premultiplied textures. Since you aren’t just sampling a premultiplied texture and then passing it through, you need to multiply the color by the alpha channel in the fragment shader. Something like:

final_color = vec4(vs_color.rgb * opacity, opacity);

Good information, I didn’t know that. Thanks
I can still see some aliasing around the characters, and I think the text is too bright now. Maybe I should try some other blending modes to find the right one.

If the text becomes too bright after the alpha channel multiplication, that’s a sign your alpha channel is somehow greater than 1.0, because multiplying your colors by 1.0 (maximum opacity) should give you the exact same colors.

edit: how are you calculating the screen pixel distance? Apparently some of the methods used for 3D SDF rendering give bad antialiasing if the distance field doesn’t have a wide enough range.

Well, I was prioritizing making it work over understanding every step of the process. My bad. I just pasted almost the same code from the repo:

float screen_px_range()
{
    // vs_tex_coords comes directly from the vertex shader
    vec2 screen_tex_size = vec2(1.0) / fwidth(vs_tex_coords);
    // unit_range is the vec2 uniform from the Font block; I set its value to 2.0
    return max(0.5 * dot(u_font.unit_range, screen_tex_size), 1.0);
}

If you are more familiar with SDF rendering than I am, can you please explain what this unit range variable and the screen_px_range() function are doing?

So the readme (😜) for that says:

Here, screenPxRange() represents the distance field range in output screen pixels. For example, if the pixel range was set to 2 when generating a 32x32 distance field, and it is used to draw a quad that is 72x72 pixels on the screen, it should return 4.5 (because 72/32 * 2 = 4.5). For 2D rendering, this can generally be replaced by a precomputed uniform value.

It then goes on to say:

For rendering in a 3D perspective only, where the texture scale varies across the screen, you may want to implement this function with fragment derivatives in the following way. I would suggest precomputing unitRange as a uniform variable instead of pxRange for better performance.

// screenPxRange() code here

screenPxRange() must never be lower than 1. If it is lower than 2, there is a high probability that the anti-aliasing will fail and you may want to re-generate your distance field with a wider range.


So, if I understand correctly, this value gives more or less “resolution”, or quality, to the rendered text as it is scaled, right?
And for 2D graphics (I use an orthographic projection) I just need a constant value, right?

From what I understand it’s the distance field’s range expressed in screen pixels.

And yeah, since it’s 2D you know how big the quad will be on screen, and you know the size and range the distance field was generated at, so you can precompute it and pass it in as a shader argument in a UBO.

I see. Much clearer now. Thanks!