How do I fix the surface size under Windows high DPI?

I didn’t say your code was buggy. The bug seems to be in SDL.

“hardware accelerated” means the GPU does it. Turning on compiler optimizations isn’t “hardware accelerated”; you’re still doing the work in software.

When it comes to transparency, the GPU does it so quickly that it’s free, at least as far as a 2D application is concerned. (none of the software blitting functions in SDL do colorspace conversion)

So when I say “slow”, I mean “slower than the GPU can do it.”

Then the true, correct fix would be to identify the bug in SDL and fix it there, not replace the entire rendering infrastructure of SDL apps just to work around it.

Introducing a GPU dependency is likely to cause all sorts of compatibility issues, especially on legacy/embedded computers, whereas software rendering is just native code for the given platform, even on a machine that would otherwise go to e-waste. If the minimum requirements are too high, then what’s even the point?

Yes, it’s a bug that needs to be fixed. But how many apps that use SDL for 2D stuff use the software blitter instead of SDL_Renderer? Not very many.

If your program is for embedded systems with no GPU then why are you worried about high DPI scaling on Windows? Also, there’s a reason SDL_Renderer has a software fallback.

SDL_Renderer works and gets decent performance even on the Raspberry Pi’s little GPU. It’s not like you need a fast or modern one for it to beat the pants off the CPU at 2D drawing. And, again, it has a fallback software rendering backend, so on systems with GPUs you can use ’em, and on systems without it’ll do software rendering, but with the same API.
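
Roughly, a minimal sketch (names assumed, not from this thread) of what that fallback looks like: the drawing code is identical either way, only the renderer-creation flags differ.

```cpp
#include <SDL.h>

// With no flags SDL prefers an accelerated backend and can fall back to the
// software one; passing SDL_RENDERER_SOFTWARE forces software explicitly.
SDL_Renderer* create_renderer(SDL_Window* window, bool force_software)
{
    Uint32 flags = force_software ? SDL_RENDERER_SOFTWARE : 0;
    return SDL_CreateRenderer(window, -1, flags); // -1 = first matching driver
}
```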

In my experience the use of SDL_UpdateRects/SDL_UpdateWindowSurfaceRects is very important to achieve good performance with “software rendering”. Is there an equivalent for SDL_Renderer?
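
For reference, a minimal sketch (assumed, not the poster’s code) of the dirty-rect call being described, where only the changed regions of the window surface are pushed to the screen:

```cpp
#include <SDL.h>
#include <vector>

// Push only the rectangles that changed this frame, instead of the whole
// surface as SDL_UpdateWindowSurface would.
void present_dirty(SDL_Window* window, const std::vector<SDL_Rect>& dirty)
{
    if (!dirty.empty())
        SDL_UpdateWindowSurfaceRects(window, dirty.data(),
                                     static_cast<int>(dirty.size()));
}
```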

Why not? Even Windows 95 supports pixels as small as 1/480 of an inch, and Windows XP was the first Windows version to support SDL2.

There’s a reason I’m rendering with my own software. I have to make sure users get the experience that I intend, so by making my own renderers I am fully in control of what is being rendered. I make C++ functions that write to an array of pixels to render, with SDL_Surface being the interface through which the renderer’s output is displayed on the window. If the video image isn’t being rendered with my code then what’s even the point?
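
A minimal sketch of that arrangement, assuming a 32-bit window surface (the function name and fill value are placeholders, not the poster’s actual code):

```cpp
#include <SDL.h>
#include <cstdint>

// Render by plain integer writes into the surface's pixel array, then hand
// the result to the window.
void present_frame(SDL_Window* window)
{
    SDL_Surface* surface = SDL_GetWindowSurface(window);
    if (!surface) return;

    auto* pixels = static_cast<std::uint32_t*>(surface->pixels);
    const int pitch = surface->pitch / 4; // pixels per row
    for (int y = 0; y < surface->h; ++y)
        for (int x = 0; x < surface->w; ++x)
            pixels[y * pitch + x] = 0xFF202020; // placeholder: dark grey

    SDL_UpdateWindowSurface(window);
}
```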

The SDL error could be in https://www.libsdl.org/tmp/SDL/src/video/windows/SDL_windowsframebuffer.c, where info->bmiHeader.biWidth and info->bmiHeader.biHeight are set to sizes in screen coordinates (window->w and -window->h) instead of the size in pixels, as WIN_GetWindowSizeInPixels returns.
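
One way to observe the mismatch being described (an assumed diagnostic, not from the post) is to compare the window’s size in screen coordinates, its size in pixels (SDL 2.26+), and the surface SDL hands back:

```cpp
#include <SDL.h>
#include <cstdio>

void report_sizes(SDL_Window* window)
{
    int w = 0, h = 0, pw = 0, ph = 0;
    SDL_GetWindowSize(window, &w, &h);           // screen coordinates
    SDL_GetWindowSizeInPixels(window, &pw, &ph); // actual pixels
    SDL_Surface* surface = SDL_GetWindowSurface(window);
    std::printf("coords %dx%d, pixels %dx%d, surface %dx%d\n",
                w, h, pw, ph,
                surface ? surface->w : 0, surface ? surface->h : 0);
}
```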

For what it’s worth, after looking at SDL’s code I think the same or a very similar problem exists on more platforms than just Windows.

But I haven’t tested it with something that’d reproduce the issue, I’ve only looked at the code.

If you use SDL_Renderer you can avoid the issue, as mentioned above, even if you just copy your own pixel data into a (potentially streaming) texture and draw that texture to the screen.
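
Roughly, that workaround looks like this (a sketch with assumed names and an ARGB8888 format, not a definitive implementation): keep plotting pixels into your own buffer, upload it to a streaming texture once per frame, and let SDL_Renderer present it.

```cpp
#include <SDL.h>
#include <cstdint>
#include <vector>

// The texture would be created once, e.g.:
//   SDL_CreateTexture(renderer, SDL_PIXELFORMAT_ARGB8888,
//                     SDL_TEXTUREACCESS_STREAMING, width, height);
void present_with_renderer(SDL_Renderer* renderer, SDL_Texture* texture,
                           const std::vector<std::uint32_t>& pixels, int width)
{
    // Upload the CPU-side framebuffer; the last argument is bytes per row.
    SDL_UpdateTexture(texture, nullptr, pixels.data(),
                      width * static_cast<int>(sizeof(std::uint32_t)));
    SDL_RenderClear(renderer);
    SDL_RenderCopy(renderer, texture, nullptr, nullptr);
    SDL_RenderPresent(renderer);
}
```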

Probably not, but one of the nice things about using the GPU is that you don’t need to bother with dirty rects.

And PCs running Windows XP had GPUs. Windows XP was released in 2001. “3D accelerators” had been available for PCs since the mid 90s, and Nvidia’s first Geforce-series GPU came out in 1999.

Something like 15 years ago, back in the SDL 1.x days, when there was no SDL_Renderer and all SDL had as far as built-in drawing was software blitting, somebody wrote glSDL. It was a drop-in replacement for SDL_BlitSurface() and related functions, but it used OpenGL behind the scenes (uploading surfaces as textures, using the GPU for drawing, etc.) and it was so much faster than SDL’s well-optimized software blitter. You could scale and rotate sprites with no performance hit, alpha transparency suddenly had a negligible perf cost, and there was no more need to manage dirty rects or any of that.

edit: Even 2D games these days use the GPU. Even retro emulators draw to a memory buffer in software, then upload that as a texture to the GPU and use the GPU to draw it on the screen.

SDL_UpdateWindowSurface doesn’t have rectangles either though.

This legacy SDL software blitter seems like bloatware, either on SDL 1.x’s side or on the platform side. In Win32, GDI can get software-rendered output directly to the window, so it’s about as fast as it gets for Win32. The point is that I don’t want OpenGL or a GPU or whatever to decide what gets rendered; I make C++ rendering software so that I get the freedom (actual freedom, not FSF/GNU’s ideology) to decide what gets rendered. I can achieve fast rendering in my software by avoiding bloatware like anti-aliasing and other unnecessary uses of color blending.

The GPU draws what you tell it to, but whatever dude :+1:

I don’t mean using GPU/library primitives to draw content. I mean that every single pixel rendered is the result of an integer assignment operator in my own render code. For instance, in Source for scratch emulator test (coosucks.repl.co), I wrote my own primitives to build a software renderer, including text rendering, and I’m currently working on introducing scalability to the infrastructure. Using GPU text rendering or whatever is going to lead to suboptimal results because I don’t have control over how it is rendered. The reason 2D games use the GPU is to optimize the gaming experience, and the reason retro emulators use the GPU is to emulate other graphical hardware. I mainly make productivity software in which render quality is essential, and the only way I can ensure the necessary render quality for the use case is with my own renderer.

In my own experience, I can achieve whatever quality I want (pixel-perfect if it needs to be) using SDL2’s GPU-accelerated renderer. Have you actually tried it?
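
As an example of what that can look like (a sketch, not necessarily the poster’s actual setup): nearest-neighbour sampling plus a fixed logical size keeps the renderer from filtering or resampling anything behind your back.

```cpp
#include <SDL.h>

void configure_pixel_perfect(SDL_Renderer* renderer, SDL_Texture* texture,
                             int logical_w, int logical_h)
{
    SDL_SetTextureScaleMode(texture, SDL_ScaleModeNearest); // no smoothing
    SDL_RenderSetLogicalSize(renderer, logical_w, logical_h);
    SDL_RenderSetIntegerScale(renderer, SDL_TRUE);           // whole multiples only
}
```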

Quality of what UI element? There are various elements that could be rendered, ranging from solid colors (which are widely available) to fully scalable pixel-perfect text (which is currently impossible cross-platform, as it is only available in Microsoft’s GDI TrueType renderer; no other TrueType renderer even comes close to that render quality or performance, let alone both).

If someone as particular about pixel perfection as @rtrussell is (no offense intended) is happy with the quality he gets from SDL_Renderer then everyone should be.

Anyway, this is a dumb thread. SDL has a bug, discovered by someone who doesn’t like using the GPU because reasons. I even told you how you could get around this bug with SDL_Renderer while still doing the drawing yourself: plot your own pixels into a buffer, upload that as a texture, and have SDL_Renderer put your texture on the screen. But :man_shrugging:

No offence taken! I agree. :wink:

You are the one who brought up GPU rendering. And your sarcastic remarks about pixel perfect rendering aren’t doing any favors.

SDL_UpdateWindowSurface doesn’t but SDL_UpdateWindowSurfaceRects does.

If this is indeed an SDL glitch, it should be referred to as a Surface glitch.

The fix for the Surface glitch has been committed to the SDL library source: SDL: Handle DPI scaling in SDL_GetWindowSurface - SDL Commits - Simple Directmedia Layer (libsdl.org)

In SDL 2.28.0, the Surface glitch has finally been fixed.

[Screenshots: the test window at 96, 108, 120, 144, 168, 192, 216, 240, 288, and 336 dpi, first with SDL 2.27.1 and then with SDL 2.28.0.]
