My project involves a fairly large number of SDL_RenderCopy calls, along with a small number of SDL_RenderGeometry calls. On my two Windows 10 machines, I observe that performance is radically different when I create my renderer/window with the OpenGL settings: overall rendering time is on the order of 5x slower than when not using these options.
Is it expected that performance with OpenGL is drastically worse than with DirectX? Or could it be that SDL is just falling back to software rendering when I ask for OpenGL? Is there a way I can check this?
SDL_GetRendererInfo() should return this information. I would not have expected such a large difference in performance between DirectX and OpenGL; are you using a texture format that is natively supported by both?
For your reference, here are my FPS observations with different renderers on a GeForce GTX 1650, in a test that draws a lot of lines using SDL_HINT_RENDER_LINE_METHOD "3" and SDL_HINT_RENDER_BATCHING "1".
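In case it helps anyone reproduce this, the hint setup for that test was roughly the following; a minimal sketch, assuming SDL 2.0.20 or later (where SDL_HINT_RENDER_LINE_METHOD was introduced), with the hints set before the renderer is created:

```c
/* Sketch of the hint setup for the line-drawing test (assumes SDL >= 2.0.20,
 * which added SDL_HINT_RENDER_LINE_METHOD). Set hints before SDL_CreateRenderer(). */
SDL_SetHint(SDL_HINT_RENDER_LINE_METHOD, "3"); /* "3" = draw lines via geometry */
SDL_SetHint(SDL_HINT_RENDER_BATCHING, "1");    /* "1" = force command batching on */
```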
If you’re seeing a 5x difference then there’s something going on. It’s not something silly like vsync being enabled in one test and not the other, is it? If software rendering is happening, SDL_GetRendererInfo() will tell you, as rtrussell said; use that to check which renderer is actually in use.
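For what it’s worth, here is a minimal standalone sketch of that check (not your code, and forcing the "opengl" driver via the hint is just an example): it creates a renderer and prints what SDL_GetRendererInfo() reports, including whether the software renderer was selected.

```c
/* Minimal sketch (SDL2): query which backend the renderer actually uses. */
#include <SDL.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    if (SDL_Init(SDL_INIT_VIDEO) != 0) {
        fprintf(stderr, "SDL_Init failed: %s\n", SDL_GetError());
        return 1;
    }

    /* Optionally request a specific backend before creating the renderer. */
    SDL_SetHint(SDL_HINT_RENDER_DRIVER, "opengl");

    SDL_Window *window = SDL_CreateWindow("renderer check", SDL_WINDOWPOS_CENTERED,
                                          SDL_WINDOWPOS_CENTERED, 640, 480, 0);
    SDL_Renderer *renderer = SDL_CreateRenderer(window, -1, SDL_RENDERER_ACCELERATED);
    if (!window || !renderer) {
        fprintf(stderr, "window/renderer creation failed: %s\n", SDL_GetError());
        SDL_Quit();
        return 1;
    }

    SDL_RendererInfo info;
    if (SDL_GetRendererInfo(renderer, &info) == 0) {
        printf("Renderer name: %s\n", info.name);
        printf("Software fallback: %s\n",
               (info.flags & SDL_RENDERER_SOFTWARE) ? "yes" : "no");
        printf("Max texture: %dx%d\n",
               info.max_texture_width, info.max_texture_height);
        for (Uint32 i = 0; i < info.num_texture_formats; ++i) {
            printf("  format: %s\n",
                   SDL_GetPixelFormatName(info.texture_formats[i]));
        }
    }

    SDL_DestroyRenderer(renderer);
    SDL_DestroyWindow(window);
    SDL_Quit();
    return 0;
}
```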
Thanks for the pointer regarding the RenderInfo. This confirms that SDL is indeed using OpenGL. The two reports are:
OpenGL:
Renderer name: opengl, max texture 16384x16384, texture formats: 0x16161804 (RGB888), 0x16362004 (ARGB8888), 0x16561804 (BGR888), 0x16762004 (ABGR8888), 0x3132564e (NV21), 0x3231564e (NV12), 0x32315659 (YV12), 0x56555949 (IYUV)
D3D:
Renderer name: direct3d, max texture 8192x8192, texture formats: 0x16362004 (ARGB8888), 0x32315659 (YV12), 0x56555949 (IYUV)
So SDL2 is indeed using OpenGL, and the OpenGL backend supports everything the D3D one does and more. Digging deeper into the performance, it seems that the D3D backend uses batching by default, whereas OpenGL doesn’t, so I turned the batching hint on. This improves things, but performance is still 2x better with direct3d.
I should mention that I’m using a low-power fanless PC with an integrated GPU (Intel UHD Graphics 600). Maybe the OpenGL drivers for this unit are just poor compared to the D3D ones?
I believe batching is automatically disabled if you choose anything other than the default renderer (which is D3D in Windows) and must be explicitly selected in that case.
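So when forcing a non-default driver, batching has to be re-enabled by hand. Roughly like this; a sketch, assuming the hints are set before the window and renderer are created:

```c
/* Sketch: force the OpenGL backend and explicitly re-enable batching,
 * since batching is only automatic when SDL picks the default driver. */
SDL_SetHint(SDL_HINT_RENDER_DRIVER, "opengl");
SDL_SetHint(SDL_HINT_RENDER_BATCHING, "1");

SDL_Window *window = SDL_CreateWindow("app", SDL_WINDOWPOS_CENTERED,
                                      SDL_WINDOWPOS_CENTERED, 1280, 720, 0);
SDL_Renderer *renderer = SDL_CreateRenderer(window, -1, SDL_RENDERER_ACCELERATED);
```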
Which SDL version are you using? Some very recent versions - and perhaps the latest git code - might have different OpenGL performance than older versions.
I don’t have a problem as such, I know I’m using a weak GPU. I’m just interested in the apparent difference in performance between the DirectX and OpenGL backends.
Gotcha. Makes sense, as my other PC (10 year old cheap laptop) has integrated Intel GFX as well, and also shows a similar performance difference between D3D and OpenGL.
I noticed similar behavior a while back with older versions of SDL too. It seems like Intel GPUs perform much better on DX than they do on OpenGL, sometimes twice or even three times the FPS. When I ran the same test on a “proper” GPU, the difference was more or less gone.