Multiple windows, multiple renderers — why?

So far, SDL is structured so that each window must have its own renderer. This is extremely irritating, because the renderer is not only used to render the contents of the window, but is also mandatory when creating and loading textures (it acts as a kind of door to the GPU and VRAM).
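A minimal SDL2 sketch of that coupling (window title and texture size are arbitrary): the renderer is created from a window, and the texture is created from the renderer, so a second window means a second renderer and a duplicate set of textures.

```c
#include <SDL.h>

int main(void) {
    SDL_Init(SDL_INIT_VIDEO);

    /* Every window gets its own renderer... */
    SDL_Window *win = SDL_CreateWindow("Game", SDL_WINDOWPOS_CENTERED,
                                       SDL_WINDOWPOS_CENTERED, 800, 600, 0);
    SDL_Renderer *ren = SDL_CreateRenderer(win, -1, SDL_RENDERER_ACCELERATED);

    /* ...and every texture is bound to the renderer that created it,
     * so it cannot be shared with another window's renderer. */
    SDL_Texture *tex = SDL_CreateTexture(ren, SDL_PIXELFORMAT_RGBA8888,
                                         SDL_TEXTUREACCESS_STATIC, 256, 256);

    SDL_DestroyTexture(tex);
    SDL_DestroyRenderer(ren);
    SDL_DestroyWindow(win);
    SDL_Quit();
    return 0;
}
```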

The renderer's job is to provide access to the GPU, i.e. to create textures and render content to the selected target. It has nothing to do with windows, yet SDL forces the renderer to be bound to one. Why does SDL require a window handle in the function that creates the renderer?

The third problem is that the renderer functions must be called from the application's main thread, so the renderers have to be used sequentially anyway. Why create many of them when one would be enough?

The fourth problem: renderer configuration (e.g. the sampling algorithm) is done through the SDL_SetHint function, without specifying which renderer the settings apply to. Not only is it unclear which renderer the hints affect, but for them to take effect the renderers have to be destroyed and recreated, which is bizarre. Why is renderer configuration done with a generic, nondescript SDL_SetHint call instead of a dedicated function such as SDL_SetRenderScaleQuality?
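For reference, this is how the configuration looks in SDL2 today. Note that there is no renderer or texture handle anywhere in the call, and as far as I can tell the hint is only consulted when new textures are created, so already-created objects keep their old filtering:

```c
#include <SDL.h>

int main(void) {
    /* Process-wide hint, no renderer handle anywhere: "1" requests linear
     * filtering, and it only applies to textures created after this call. */
    SDL_SetHint(SDL_HINT_RENDER_SCALE_QUALITY, "1");
    return 0;
}
```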

I don’t understand why it works this way; the behavior of renderers is completely unclear and illogical to me, it misleads me and makes writing game code harder. While work on SDL3 is still ongoing, there is time to introduce significant changes in this area.


If it is possible, I propose a complete overhaul of how renderers work.

  • Creating a renderer shouldn’t be tied to windows in any way: the renderer should simply be a door to the GPU and VRAM, nothing more. It should be possible to create one renderer and use it to render the contents of any number of game windows, as well as to render to back buffers (any texture as the target).

  • It should be possible to create multiple renderers in case they are to be configured differently or used in multiple threads (as long as multi-threaded rendering is supported by SDL).

  • Renderers can target textures or windows, as they do now, but setting the target should accept either a window or a texture handle (one function with a flag specifying the target type, or two separate functions, e.g. SDL_SetRenderTargetWindow and SDL_SetRenderTargetTexture).

  • Changing a renderer’s configuration should take the renderer handle and be implemented with dedicated functions, e.g. SDL_SetRenderScaleQuality, not through SDL_SetHint.

  • Changing a renderer’s configuration (e.g. scale quality) should not force the user to recreate either renderers or textures.
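Put together, the proposed usage might look something like this. Every function and constant below (SDL_CreateRendererEx, SDL_SetRenderScaleQuality, SDL_SetRenderTargetWindow, SDL_SetRenderTargetTexture, SDL_SCALE_LINEAR) is hypothetical, sketched only to illustrate the proposal; none of them exist in SDL:

```c
/* Hypothetical API sketch -- none of these functions exist in SDL today. */
SDL_Renderer *ren = SDL_CreateRendererEx(SDL_RENDERER_ACCELERATED); /* no window */

SDL_SetRenderScaleQuality(ren, SDL_SCALE_LINEAR); /* per-renderer, no hints */

SDL_SetRenderTargetWindow(ren, window1);   /* draw this frame into window1... */
SDL_SetRenderTargetWindow(ren, window2);   /* ...then into window2... */
SDL_SetRenderTargetTexture(ren, backbuf);  /* ...or into any texture */
```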

This way, using renderers would be simple, intuitive, and logical, as well as convenient, and it would avoid unnecessary dependencies (window↔renderer) in the game code.

What do you think about it?

Some graphics APIs which SDL_Render can use internally, e.g. OpenGL and OpenGL ES, have a graphics context which is completely tied to a window. If SDL_Render dropped those backends and only used modern APIs like Metal, Vulkan, and DirectX 12 it could have a more flexible design. But SDL_Render is designed to work on pretty much all systems people might use, which includes those that don’t support those new APIs.
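For instance, with the GL backend the window is baked into the context API itself; these are real SDL2 calls:

```c
#include <SDL.h>

int main(void) {
    SDL_Init(SDL_INIT_VIDEO);

    /* An OpenGL context is created *from* a window and made current
     * *against* a window -- it cannot exist independently of one. */
    SDL_Window *win = SDL_CreateWindow("GL", SDL_WINDOWPOS_CENTERED,
                                       SDL_WINDOWPOS_CENTERED, 800, 600,
                                       SDL_WINDOW_OPENGL);
    SDL_GLContext ctx = SDL_GL_CreateContext(win);
    SDL_GL_MakeCurrent(win, ctx);

    SDL_GL_DeleteContext(ctx);
    SDL_DestroyWindow(win);
    SDL_Quit();
    return 0;
}
```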

Even if it theoretically only supported Vulkan, it might still be a good idea to design it to specify a window when creating a renderer so the backend can select the most appropriate GPU for that window.

Similarly, window APIs on some platforms can only be used on the main thread, and SDL_Render hooks into some window events in order to support resizing and some other things - and graphics APIs like OpenGL can only have a single active context on a given thread. Backend limitations like that heavily inform what SDL_Render can do with threading, since it’s meant to work consistently across all platforms.

Aside from the above issue with GL / GLES, all graphics backends have some specific interactions with a window in order to have a backbuffer surface in that window - it’s not as simple as just creating a regular texture.