[SDL 3.2.4] Corrupt Sprite Rendering After Windows 11 Upgrade (SDL_GPU)

Hi everyone,

I’m struggling to understand what’s causing this issue.

In my game, rendering works like this:

  1. Draw the game into a low-resolution texture.
  2. Get the actual window size/resolution, calculate a scaling factor, and use an SDL_GPUViewport to upscale the render target while keeping it centered.
  3. Draw the render target to the window’s swapchain texture.
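For reference, the scale-and-center math in step 2 boils down to plain arithmetic. This is just a sketch of that idea, not the poster's actual code; the `Viewport` struct below is a stand-in mirroring the `x`/`y`/`w`/`h` fields of `SDL_GPUViewport`, and it assumes integer (pixel-perfect) scaling of the low-resolution target:

```c
#include <assert.h>

typedef struct { float x, y, w, h; } Viewport;

/* Largest whole-number scale of the low-res target that fits the window,
   centered. Integer scaling keeps the pixels square and unstretched. */
static Viewport integer_scale_viewport(int win_w, int win_h,
                                       int tex_w, int tex_h)
{
    int scale_x = win_w / tex_w;
    int scale_y = win_h / tex_h;
    int scale = scale_x < scale_y ? scale_x : scale_y;
    if (scale < 1) scale = 1; /* window smaller than the render target */

    Viewport vp;
    vp.w = (float)(tex_w * scale);
    vp.h = (float)(tex_h * scale);
    vp.x = (win_w - vp.w) / 2.0f;  /* center horizontally */
    vp.y = (win_h - vp.h) / 2.0f;  /* center vertically */
    return vp;
}
```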

This setup was working perfectly until I installed Windows 11 on my machine a few days ago. Now, all my sprites look like the images attached to this topic.

Interestingly, I tested the same executable on another machine with a different Nvidia GPU but the same drivers—running Windows 10—and everything renders perfectly.

I’ve read about potential DPI-related issues that might be causing this, but none of the solutions I’ve tried have worked so far. I couldn’t find any related topics on this exact issue, so apologies if I missed something.

Another thing worth mentioning (I think): when I take frame captures with RenderDoc, the frame appears perfect, without the corrupted-looking textures.

Any insights would be greatly appreciated.

Thanks in advance!


For anyone experiencing similar issues, the culprit appears to be Nvidia Driver version 572.16+. While OpenGL and DirectX 12 function normally, Vulkan exhibits graphical glitches. The current workaround is to navigate to the Nvidia Control Panel > Manage 3D Settings > Vulkan/OpenGL Present Mode and set it to ‘Prefer Native.’ It’s recommended to apply this setting on a per-program basis for better control.
Cheers!


I don’t think plain stretching is what you want, as the graphics will distort whenever the window’s aspect ratio doesn’t match that of the render texture.

I would recommend that instead you write to the screen using a simple fullscreen renderer that uses the following shaders:

struct FullScreenVertIn
{
    uint vid : SV_VertexID;
};

struct FullScreenFragIn
{
    float4 position : SV_Position;
    float2 tex_coord : TEXCOORD0;
};

// Vertex shader
FullScreenFragIn FullScreen_vert(FullScreenVertIn input)
{
    /*
        Fullscreen triangle generated from the vertex ID:
        ID=0 -> Pos=[-1, 1], Tex=[0,0]
        ID=1 -> Pos=[ 3, 1], Tex=[2,0]
        ID=2 -> Pos=[-1,-3], Tex=[0,2]
    */
    FullScreenFragIn output;
    output.tex_coord = float2((input.vid * 2) & 2, input.vid & 2);
    output.position = float4(output.tex_coord * float2(2.0, -2.0) + float2(-1.0, 1.0), 0.0, 1.0);
    return output;
}

// Fragment shader
Texture2D<float4> t : register(t0, space2);
SamplerState s : register(s0, space2);

float4 FullScreen_frag(FullScreenFragIn input) : SV_Target0
{
    return t.Sample(s, input.tex_coord);
}
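For completeness, a pipeline for this pass might be created roughly like the fragment below. This is a hedged sketch, not code from the thread: `device`, `window`, `vert_shader`, and `frag_shader` are assumed to already exist, and the struct fields follow the SDL 3.2 headers.

```c
/* Sketch only: device, window, vert_shader, and frag_shader are
   assumed to have been created elsewhere. */
SDL_GPUColorTargetDescription color_target_desc = {
    /* Render straight into the swapchain, so match its format. */
    .format = SDL_GetGPUSwapchainTextureFormat(device, window),
};

SDL_GPUGraphicsPipelineCreateInfo pipeline_info = {
    .vertex_shader = vert_shader,
    .fragment_shader = frag_shader,
    /* vertex_input_state is left zeroed: the triangle is generated
       entirely from SV_VertexID, so no attributes are declared. */
    .primitive_type = SDL_GPU_PRIMITIVETYPE_TRIANGLELIST,
    .target_info = {
        .color_target_descriptions = &color_target_desc,
        .num_color_targets = 1,
    },
};

SDL_GPUGraphicsPipeline *pipeline =
    SDL_CreateGPUGraphicsPipeline(device, &pipeline_info);
```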

Since this shader doesn’t use any vertex or index buffers, you render like this:

SDL_GPUColorTargetInfo const color_target_info = {
    .texture = target_texture,
    .clear_color = sdl_color(colour::BLACK),
    .load_op = SDL_GPU_LOADOP_CLEAR,
    .store_op = SDL_GPU_STOREOP_STORE,
};

_render_pass = SDL_BeginGPURenderPass(command_buffer, &color_target_info, 1, nullptr);

SDL_BindGPUGraphicsPipeline(_render_pass, _pipeline.pipeline());

SDL_SetGPUViewport(_render_pass, viewport);

// No vertex or index buffers to bind -- the vertex shader generates the
// triangle from SV_VertexID alone. (Note: calling SDL_BindGPUIndexBuffer
// with a null binding is invalid, so simply skip these binds entirely.)

SDL_GPUTextureSamplerBinding const fragment_sampler_binding = {
    .texture = render_texture,
    .sampler = g_app_state.game.samplers().linear_clamp(),
};
SDL_BindGPUFragmentSamplers(_render_pass, 0, &fragment_sampler_binding, 1);

SDL_DrawGPUPrimitives(_render_pass, 3, 1, 0, 0);

To calculate the viewport to use, follow this code from stackoverflow; it only needs to be recomputed when the window size changes.

This will give the following results:


