SDL2 and SDL3: It takes several times longer to render a frame on Linux than on Windows

The time required to render a video frame using the ‘persist’ function is several times longer on Linux than on Windows


Windows: R9 7945HX + 4060

Linux: i7-11800H + 3060

On my other linux-arm64 computer it can even take an astonishing 20 ms per frame.
Is this a normal phenomenon?

Is vsync turned on or off?

It's off on both Linux and Windows…

What FPS do you get? (assuming you render frames as fast as possible)

If both Windows and Linux have similar FPS maybe it’s just a question about how much work is being done right away and how much of it is saved to be done when RenderPresent is called. I’ve heard about “batching”, don’t know much about it, but maybe that could play a role?

I haven’t measured the fps, but my program plays multiple videos simultaneously in a 4 × 4 grid (16 streams). On Linux I can clearly see that the video is lagging.

I still suspect vsync. Your system might override whatever you specify in SDL. I believe vsync is always on when using Wayland.

If you haven’t done it already, write a small program that calls RenderPresent as fast as possible and measures how many times it runs per second just to see that the frame rate is not being restricted to 60 or whatever your monitor’s refresh rate is. If vsync is turned off the fps should normally be several hundreds or thousands.
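Something like this is enough (a minimal sketch assuming SDL3, with most error handling omitted; the SDL2 calls for creating the renderer and toggling vsync are slightly different):

```c
#include <SDL3/SDL.h>
#include <stdio.h>

int main(void)
{
    if (!SDL_Init(SDL_INIT_VIDEO)) {
        fprintf(stderr, "SDL_Init failed: %s\n", SDL_GetError());
        return 1;
    }

    SDL_Window *window = SDL_CreateWindow("present rate test", 640, 480, 0);
    SDL_Renderer *renderer = SDL_CreateRenderer(window, NULL);
    SDL_SetRenderVSync(renderer, 0);            /* explicitly request vsync off */

    Uint64 start = SDL_GetTicks();
    Uint64 frames = 0;
    int running = 1;

    while (running && SDL_GetTicks() - start < 5000) {   /* run for ~5 seconds */
        SDL_Event event;
        while (SDL_PollEvent(&event)) {
            if (event.type == SDL_EVENT_QUIT) running = 0;
        }
        SDL_RenderClear(renderer);
        SDL_RenderPresent(renderer);
        frames++;
    }

    double seconds = (SDL_GetTicks() - start) / 1000.0;
    printf("%llu presents in %.2f s = %.1f fps\n",
           (unsigned long long)frames, seconds, frames / seconds);

    SDL_DestroyRenderer(renderer);
    SDL_DestroyWindow(window);
    SDL_Quit();
    return 0;
}
```

If vsync is really off this should report hundreds or thousands of fps; if it reports roughly 60 (or your monitor's refresh rate), something is forcing vsync on.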

OK, I will try to test it tonight, but I am not using Wayland, I am using X11.

Try timing the entire frame, or measure the fps.

I don’t think double buffering plays a role here, since that hint isn’t supported on X11 or Windows: SDL3/SDL_HINT_VIDEO_DOUBLE_BUFFER - SDL Wiki

I ran a dual-boot test on my desktop computer at home, one system running Windows and one running Ubuntu, and found that Linux renders a frame faster than Windows…
Why is this?

Windows: [screenshot of Present call timings]

Linux: [screenshot of Present call timings]

Both systems are running the same code.

@Peter87 @Levo

What do you mean by this and how did you come to that conclusion?

Wow, you’re still online~

The two pictures above show the time required to call ‘persent’.
Same code, same hardware configuration.

It’s “Present”, not “persent” or “persist” like you wrote above.

Are you running both Windows and Linux natively (not inside a VM)?

There could be performance differences due to differences in drivers, or the way the system works or has been configured. I prefer Linux as my main OS but I have to admit that video performance has not always been up to speed with Windows for me over the years (not talking about SDL specifically).

I still think it would be more useful if you measured the fps (i.e. the number of times your rendering loop runs each second), not the time to call RenderPresent. If you do that, don’t print anything each frame because printing can be slow and unpredictable (doing it once per second to print the fps is probably fine though). It’s possible that printing is slower on Linux or Windows but that is not what you’re trying to measure.
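For example, something like this dropped into your existing loop (just a sketch; running and the rest of the loop body are assumed to already exist in your code):

```c
/* Count frames and print the fps once per second instead of
 * timing every SDL_RenderPresent call. */
Uint64 frames = 0;
Uint64 last_report = SDL_GetTicks();

while (running) {
    /* ... update + render + SDL_RenderPresent ... */

    frames++;
    Uint64 now = SDL_GetTicks();
    if (now - last_report >= 1000) {              /* report once per second */
        SDL_Log("fps: %.1f", frames * 1000.0 / (now - last_report));
        frames = 0;
        last_report = now;
    }
}
```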

I don’t think there should be any confusion: the original test was not just Linux vs Windows, it was Linux on older hardware vs Windows on hardware that was two years newer.

  • The AMD CPU is 24% faster per thread and has double the cores and threads, so background programs are less likely to interfere with your program.
  • The AMD has significantly more L1, L2, and L3 cache, so it will have less of a bottleneck.
  • I’m guessing the RAM in the Windows machine is newer, possibly with a faster clock rate.
  • The motherboard is probably newer as well, possibly with more bus bandwidth.
  • The GPU is a 4060 vs a 3060. Even if the 4060 has a bad reputation (due to company practices?), it is actually quite a bit faster than the standard 3060.

Two years is still a very long time for computers and hardware, especially crossing between generations and crossing brands.

However, the dual boot provides a fairer comparison of the two operating systems: they have the same hardware and face the same bottlenecks. Linux is slightly faster in that situation, but not by as big a gap.

Depending on the version, Linux can often run happily on old machines that Windows is too big/new for. Linux has a major speed advantage in that case, but on a good machine the two operating systems are on pretty equal footing.

Linux renders a frame faster than Windows…
Why is this?

It could be any number of reasons. One you can’t easily change is the driver on Windows vs the one on Linux. There’s a chance Windows is using D3D and Linux is… well, not using D3D. I don’t remember if SDL3 defaults to Vulkan or GL on X11.

This is probably due to the driver and hardware performance, but I compared d3d, d3d11, d3d12, and Vulkan on Windows, and d3d seems to have the best performance.
On Linux, OpenGL performs better than Vulkan.
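In case it helps anyone reproduce the comparison, a specific backend can be forced like this (a sketch; SDL_HINT_RENDER_DRIVER works in both SDL2 and SDL3, the last two parts are SDL3-only, and window is assumed to already exist):

```c
/* Force a specific render driver before creating the renderer.
 * Typical names: "direct3d", "direct3d11", "direct3d12", "vulkan",
 * "opengl", "opengles2", "software" (availability depends on platform/build). */
SDL_SetHint(SDL_HINT_RENDER_DRIVER, "opengl");

/* SDL3 also lets you pass the driver name directly: */
SDL_Renderer *renderer = SDL_CreateRenderer(window, "opengl");

/* ...and list what the current build offers: */
for (int i = 0; i < SDL_GetNumRenderDrivers(); i++) {
    SDL_Log("render driver %d: %s", i, SDL_GetRenderDriver(i));
}
```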

I am currently using a fixed-frame method, with a fixed refresh rate of 60 frames per second.
All video frames are first drawn to a target texture and then wait to be presented on the next refresh.
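Roughly this pattern (a simplified sketch assuming SDL3, not my exact code; the tile textures, pixel buffers and rectangles are placeholders):

```c
/* Draw all decoded tiles into one render-target texture, then copy it to
 * the window and present on a fixed ~60 Hz schedule. */
SDL_Texture *target = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_RGBA8888,
                                        SDL_TEXTUREACCESS_TARGET, 1920, 1080);

const Uint64 frame_ms = 1000 / 60;          /* fixed 60 fps budget (~16 ms) */
Uint64 next = SDL_GetTicks();

while (running) {
    /* 1. Render all 16 video tiles into the target texture. */
    SDL_SetRenderTarget(renderer, target);
    SDL_RenderClear(renderer);
    for (int i = 0; i < 16; i++) {
        /* SDL_UpdateTexture(video_tex[i], NULL, pixels[i], pitch[i]); */
        /* SDL_RenderTexture(renderer, video_tex[i], NULL, &tile_rect[i]); */
    }
    SDL_SetRenderTarget(renderer, NULL);

    /* 2. Copy the target texture to the window and present. */
    SDL_RenderTexture(renderer, target, NULL, NULL);
    SDL_RenderPresent(renderer);

    /* 3. Wait for the next 60 Hz tick. */
    next += frame_ms;
    Uint64 now = SDL_GetTicks();
    if (next > now) SDL_Delay((Uint32)(next - now));
}
```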

I found that on Windows the d3d renderer performs better than d3d11, d3d12, and Vulkan when calling UpdateTexture and rendering the texture.

OpenGL performs better than Vulkan on Linux

If you’re trying to measure rendering performance (and rule out that vsync is on) I think it makes sense to remove that limit to see how fast it goes.

Are you still calling SDL_RenderPresent() multiple times per frame?

No, I now draw everything to the target texture and call SDL_RenderPresent() at a regular interval.