Then the true, correct fix would be to identify the glitch in SDL and fix it there, not replace the entire rendering infrastructure of SDL apps just to work around it.
Introducing a GPU dependency is likely to cause all sorts of compatibility issues, especially on legacy/embedded computers, whereas software rendering is just native code on any given platform, including machines that would otherwise go to e-waste. If the minimum requirements are too high, then what’s even the point?
Yes, it’s a bug that needs to be fixed. But how many apps that use SDL for 2D stuff use the software blitter instead of SDL_Renderer? Not very many.
If your program is for embedded systems with no GPU then why are you worried about high DPI scaling on Windows? Also, there’s a reason SDL_Renderer has a software fallback.
SDL_Renderer works and gets decent performance even on the Raspberry Pi’s little GPU. It’s not like you need a fast or modern one for it to beat the pants off the CPU at 2D drawing. And, again, it has a fallback software rendering backend, so on systems with GPUs you can use 'em, and on systems without it’ll do software rendering, but with the same API.
Why not? Even Windows 95 supports pixels as small as 1/480 of an inch, and Windows XP is the oldest Windows version that SDL2 supports.
There’s a reason I’m rendering with my own software. I have to make sure users get the experience that I intend, so by making my own renderers I am fully in control of what is being rendered. I make C++ functions that write to an array of pixels to render, with SDL_Surface being the interface through which the renderer’s output is displayed on the window. If the video image isn’t being rendered with my code then what’s even the point?
Probably not, but one of the nice things about using the GPU is that you don’t need to bother with dirty rects.
And PCs running Windows XP had GPUs. Windows XP was released in 2001. “3D accelerators” had been available for PCs since the mid 90s, and Nvidia’s first Geforce-series GPU came out in 1999.
Something like 15 years ago, back in the SDL 1.x days, when there was no SDL_Renderer and all SDL had as far as built-in drawing was software blitting, somebody wrote glSDL. It was a drop-in replacement for SDL_BlitSurface() and related functions, but it used OpenGL behind the scenes (uploading surfaces as textures, used the GPU for drawing, etc.) and it was so much faster than SDL’s well optimized software blitter. You could scale and rotate sprites with no performance hit, alpha transparency suddenly had a negligible perf cost, and there was no more need to manage dirty rects or any of that.
edit: Even 2D games these days use the GPU. Even retro emulators draw to a memory buffer in software, then upload that as a texture to the GPU and use the GPU to draw it on the screen.
SDL_UpdateWindowSurface doesn’t have rectangles either though.
This legacy SDL software blitter seems like bloatware either on SDL 1.x’s side or on the platform side. In Win32, GDI can present software-rendered output directly to the window, so it’s about as fast as it gets for Win32. The point is that I don’t want OpenGL or a GPU or whatever deciding what gets rendered; I write my own C++ rendering software so that I get the freedom (actual freedom, not FSF/GNU’s ideology) to decide what gets rendered. I can achieve fast rendering in my software by avoiding bloatware like anti-aliasing and other unnecessary uses of color blending.
I don’t mean using GPU/library primitives to draw content. I mean that every single pixel rendered is a result of an integer assignment operator in my own render code. For instance in Source for scratch emulator test (coosucks.repl.co) I wrote my own primitives to build a software renderer, including text rendering, and I’m currently working on introducing scalability to the infrastructure. Using GPU text rendering or whatever is going to lead to suboptimal results due to me not having control over how it is rendered. The reason 2D games use GPU is to optimize the gaming experience, and the reason retro emulators use GPU is to emulate other graphical hardware. I mainly make productivity software in which render quality is essential, and the only way I can ensure the necessary render quality for the use case is with my own renderer.
Quality of which UI element? There are various elements that could be rendered, ranging from solid colors (which are widely available) to fully scalable pixel-perfect text, which is currently impossible cross-platform: it exists only in Microsoft’s GDI TrueType renderer, and no other TrueType renderer even comes close to that render quality or performance, let alone both.
If someone as particular about pixel perfection as @rtrussell is (no offense intended) is happy with the quality he gets from SDL_Renderer then everyone should be.
Anyway, this is a dumb thread. SDL has a bug, discovered by someone who doesn’t like using the GPU because reasons. I even told you how you could get around this bug using SDL_Renderer while still doing all the drawing yourself: plot your own pixels into a buffer, upload that buffer as a texture, and have SDL_Renderer put the texture on the screen, but