How do I fix the window surface size on Windows with high DPI?

I use Windows and the following code (note: the solid color and 100 ms delay are only for testing purposes):

#define SDL_MAIN_HANDLED
#include <SDL.h>
#include <stdint.h>   /* uint32_t */
#include <stdlib.h>   /* exit() */

int width = 320;
int height = 240;

int main(int argc, char *argv[]) {
	SDL_SetHint("SDL_WINDOWS_DPI_SCALING", "1");
	SDL_Window* window = SDL_CreateWindow("scale test", SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED,
	                                      width, height, SDL_WINDOW_ALLOW_HIGHDPI);
	SDL_Surface* screenSurface = SDL_GetWindowSurface(window);
	uint32_t* bitmapdata = (uint32_t*)screenSurface->pixels;
	//SDL_GetWindowSizeInPixels(window, &width, &height);
	for (int i = 0; i < width * height; i++) bitmapdata[i] = 0x996633;
	SDL_UpdateWindowSurface(window);
	while (1) {
		SDL_Event event_;
		while (SDL_PollEvent(&event_)) {
			if (event_.type == SDL_QUIT) { SDL_Quit(); exit(0); }
		}
		for (int i = 0; i < width * height; i++) bitmapdata[i] = 0x996633;
		SDL_UpdateWindowSurface(window);
		SDL_Delay(100);
	}
	return 0;
}

At 96 dpi, the test renderer fills the 320×240 window as expected:
[screenshot]
However, with 20% smaller pixels (120 dpi), there is a 320×240 surface in a 400×300 window:
[screenshot]
Uncommenting SDL_GetWindowSizeInPixels(window,&width,&height); at this point makes it crash, presumably because it then tries to render a 400×300 image into a 320×240 surface. So how do I get the surface to fill the window no matter how small the pixels are?
(Note: these screenshots were taken on Windows Server 2003 R2 Datacenter x64 Edition, but the same results occur on Windows 10 Pro x64 21H2.)

Perhaps you could use screenSurface->w and screenSurface->h instead of width and height in the loops?
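
For example, a minimal sketch of that suggestion, reusing the names from the code above:

for (int i = 0; i < screenSurface->w * screenSurface->h; i++)
	bitmapdata[i] = 0x996633;  /* fill every pixel the surface actually has */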

The thing is, screenSurface->w is 320 and screenSurface->h is 240, even though the window is 400×300 in pixels. I use SDL 2.26.1.
And the same glitch happens in SDL 2.26.2.

It looks like the window-surface Windows backend doesn’t support high DPI properly right now; I believe this is a bug in SDL.

As a workaround you could use WinAPI to disable DPI scaling; see for example dhewm3/win_main.cpp at 477252308d1ea8660e2d040902d58a599122047f · dhewm/dhewm3 · GitHub
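
A minimal sketch of that workaround, assuming Windows Vista or later (dhewm3 loads the function from user32.dll at runtime so it still starts on older Windows):

#ifdef _WIN32
#include <windows.h>
#endif

/* Call this before SDL_CreateWindow() so Windows treats the process as
   DPI-aware and stops bitmap-stretching the window contents. */
static void disable_dpi_scaling(void) {
#ifdef _WIN32
	SetProcessDPIAware();  /* user32.dll, Windows Vista and later */
#endif
}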

(IMO it should be the default behavior that SDL does not do any DPI scaling magic, but just gives you a window at the exact resolution you requested, with mouse coordinates, pixel coordinates, and pixel size in window surfaces or the OpenGL framebuffer etc. all using physical pixels. Everything else just causes bugs and confusion, apparently even in SDL itself.)

SDL_SetHint("SDL_WINDOWS_DPI_SCALING","1"); isn’t the default though, and without this hint the window does come out at the requested resolution (using physical pixels on Windows Server 2003, and automatic scaling on Windows 10). However, I want to handle pixels of varying sizes, not only 1÷96 inch pixels, and adapt the rendering to the virtual pixel density.
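
For example, something like this (just a sketch) to find out how big the window’s pixels actually are relative to the requested size:

int logicalW, logicalH, pixelW, pixelH;
SDL_GetWindowSize(window, &logicalW, &logicalH);       /* size in screen coordinates, e.g. 320x240 */
SDL_GetWindowSizeInPixels(window, &pixelW, &pixelH);   /* size in physical pixels, e.g. 400x300 at 120 dpi */
float scale = (float)pixelW / (float)logicalW;         /* virtual pixel density, e.g. 1.25 */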

This doesn’t work across all platforms. For instance, on macOS you can’t request a window size in pixels, and you’ve never been able to. UI widget size, position, etc., has been in floating-point “point” values since literally the first macOS release 20 years ago (which, admittedly, used to map 1:1 to pixel values). Which is why the transition to high DPI displays has been so much easier there.

To the OP: Don’t try to mix high-DPI scaling with SDL_Surface-based drawing (at least not directly to the window surface). If you want to do your own pixel plotting or whatever, use SDL_Renderer. Create a streaming texture, do your drawing to your own buffer in memory, copy that to the texture, then draw the texture.
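
A minimal sketch of that approach, assuming the same 320×240 logical size and fill color as the code above (inside main, error checking omitted):

SDL_Window* window = SDL_CreateWindow("scale test", SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED,
                                      320, 240, SDL_WINDOW_ALLOW_HIGHDPI);
SDL_Renderer* renderer = SDL_CreateRenderer(window, -1, 0);

/* Streaming texture at the logical size; the renderer scales it to the
   actual (possibly high-DPI) output size. */
SDL_Texture* texture = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_ARGB8888,
                                         SDL_TEXTUREACCESS_STREAMING, 320, 240);

/* Your own pixel buffer, drawn entirely by your own code. */
static uint32_t pixels[320 * 240];
for (int i = 0; i < 320 * 240; i++) pixels[i] = 0xFF996633;  /* opaque ARGB */

for (;;) {
	SDL_Event ev;
	while (SDL_PollEvent(&ev)) {
		if (ev.type == SDL_QUIT) { SDL_Quit(); return 0; }
	}
	SDL_UpdateTexture(texture, NULL, pixels, 320 * 4);  /* copy buffer into texture; pitch in bytes */
	SDL_RenderClear(renderer);
	SDL_RenderCopy(renderer, texture, NULL, NULL);      /* stretch to fill the whole window */
	SDL_RenderPresent(renderer);
}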

Why not a high-DPI surface? What’s the problem with the window surface having smaller pixels? I use my own renderer to render the video image in the window. When I use SDL_GetWindowSurface and SDL_UpdateWindowSurface, I intend for them to be equivalent to Win32’s GetDC/CreateCompatibleDC/CreateCompatibleBitmap/SelectObject to create the image, and SetDIBits/BitBlt to draw the image to the window, respectively. Is the method you’re suggesting any closer to being equivalent to these Win32 functions?
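
Roughly this Win32 pattern, I mean (a rough sketch for illustration; hwnd and pixelbuffer are placeholders, not my actual code):

HDC windowDC = GetDC(hwnd);
HDC memDC = CreateCompatibleDC(windowDC);
HBITMAP bitmap = CreateCompatibleBitmap(windowDC, width, height);
SelectObject(memDC, bitmap);

/* Copy my renderer's output into the bitmap, then blit it to the window. */
BITMAPINFO info = {0};
info.bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
info.bmiHeader.biWidth = width;
info.bmiHeader.biHeight = -height;   /* negative = top-down rows */
info.bmiHeader.biPlanes = 1;
info.bmiHeader.biBitCount = 32;
info.bmiHeader.biCompression = BI_RGB;
SetDIBits(memDC, bitmap, 0, height, pixelbuffer, &info, DIB_RGB_COLORS);
BitBlt(windowDC, 0, 0, width, height, memDC, 0, 0, SRCCOPY);
ReleaseDC(hwnd, windowDC);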

Because, as you can see, it’s buggy.

Software blitting is slow, and IIRC with SDL you only get vsync if you use SDL_Renderer. Furthermore, SDL_Renderer handles high-DPI just fine. And since it’s hardware accelerated, you get scaling, transparency, rotation, etc. for no additional performance cost.

But how is it my code that causes the glitch, and not a bug in SDL, as was suggested?

What are you talking about? The speed of software rendering is determined entirely by the software being used to render. I can already get a full 60 fps with my renderers (I test on legacy computers as well), and I use compiler optimizations, so it’s just about as “hardware accelerated” as it gets. Furthermore, there is absolutely no way to get transparency for “no additional performance cost”, since it always involves conversions between sRGB and linear to do proper color blending.

I didn’t say your code was buggy. The bug seems to be in SDL.

“hardware accelerated” means the GPU does it. Turning on compiler optimizations isn’t “hardware accelerated”; you’re still doing the work in software.

When it comes to transparency, the GPU does it so quickly that it’s essentially free, at least as far as a 2D application is concerned. (None of the software blitting functions in SDL do colorspace conversion.)

So when I say “slow”, I mean “slower than the GPU can do it.”

Then the true, correct fix would be to identify the glitch in SDL and fix it there, not to replace the entire rendering infrastructure of an SDL app just to work around it.

Introducing a GPU dependency is likely to cause all sorts of compatibility issues, especially with legacy/embedded computers, as opposed to software rendering, which is native code for the given platform, especially on hardware that would otherwise go to e-waste. If the minimum requirements are too high, then what’s even the point?

Yes, it’s a bug that needs to be fixed. But how many apps that use SDL for 2D stuff use the software blitter instead of SDL_Renderer? Not very many.

If your program is for embedded systems with no GPU then why are you worried about high DPI scaling on Windows? Also, there’s a reason SDL_Renderer has a software fallback.

SDL_Renderer works and gets decent performance even on the Raspberry Pi’s little GPU. It’s not like you need a fast or modern one for it to beat the pants off the CPU at 2D drawing. And, again, it has a fallback software rendering backend, so on systems with GPUs you can use ’em, and on systems without, it’ll do software rendering, but with the same API.
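
For example, you can even ask for the software backend explicitly and keep the exact same drawing code (a sketch):

/* Request the software renderer; the rest of the SDL_Renderer code is unchanged. */
SDL_Renderer* renderer = SDL_CreateRenderer(window, -1, SDL_RENDERER_SOFTWARE);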

In my experience the use of SDL_UpdateRects/SDL_UpdateWindowSurfaceRects is very important to achieve good performance with “software rendering”. Is there an equivalent for SDL_Renderer?
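
What I mean is something like this, only pushing the regions that actually changed (a sketch; the rectangles are made up):

/* Only flush the damaged regions to the window instead of the whole surface. */
SDL_Rect dirty[2] = {
	{   0,  0, 64, 64 },
	{ 128, 96, 32, 32 }
};
SDL_UpdateWindowSurfaceRects(window, dirty, 2);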

Why not? Even Windows 95 supports pixels as small as 1÷480 inches, with Windows XP being the first to support SDL2.

There’s a reason I’m rendering with my own software. I have to make sure users get the experience that I intend, so by making my own renderers I am fully in control of what is being rendered. I make C++ functions that write to an array of pixels to render, with SDL_Surface being the interface through which the renderer’s output is displayed on the window. If the video image isn’t being rendered with my code then what’s even the point?

The SDL error could be in https://www.libsdl.org/tmp/SDL/src/video/windows/SDL_windowsframebuffer.c, where info->bmiHeader.biWidth and info->bmiHeader.biHeight are set to sizes in screen coordinates, window->w and -window->h, instead of the size in pixels as in WIN_GetWindowSizeInPixels.
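
Something along these lines, if I’m reading the code right (a hypothetical, untested sketch of the fix, not an actual patch):

/* In SDL_windowsframebuffer.c: size the DIB in physical pixels rather than
   screen coordinates (hypothetical sketch, untested). */
int w, h;
WIN_GetWindowSizeInPixels(_this, window, &w, &h);
info->bmiHeader.biWidth = w;
info->bmiHeader.biHeight = -h;   /* negative height = top-down bitmap */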

For what it’s worth, after looking at SDL’s code I think the same or a very similar problem exists on more platforms than just Windows.

But I haven’t tested it with something that’d reproduce the issue, I’ve only looked at the code.

If you use SDL_Renderer it avoids the issue, as mentioned above, even if you just copy your own pixel data into a (potentially streaming) texture and draw that texture to the screen.

Probably not, but one of the nice things about using the GPU is that you don’t need to bother with dirty rects.

And PCs running Windows XP had GPUs. Windows XP was released in 2001. “3D accelerators” had been available for PCs since the mid-90s, and Nvidia’s first GeForce-series GPU came out in 1999.

Something like 15 years ago, back in the SDL 1.x days, when there was no SDL_Renderer and all SDL had for built-in drawing was software blitting, somebody wrote glSDL. It was a drop-in replacement for SDL_BlitSurface() and related functions, but it used OpenGL behind the scenes (uploading surfaces as textures, using the GPU for drawing, etc.), and it was so much faster than SDL’s well-optimized software blitter. You could scale and rotate sprites with no performance hit, alpha transparency suddenly had a negligible perf cost, and there was no more need to manage dirty rects or any of that.

edit: Even 2D games these days use the GPU. Even retro emulators draw to a memory buffer in software, then upload that as a texture to the GPU and use the GPU to draw it on the screen.

SDL_UpdateWindowSurface doesn’t have rectangles either though.

This legacy SDL software blitter seems like bloatware, either on SDL 1.x’s side or on the platform side. In Win32, GDI can get software-rendered output directly to the window, so it’s about as fast as it gets for Win32. The point is that I don’t want OpenGL or the GPU or whatever to decide what gets rendered; I write C++ rendering software so that I get the freedom (actual freedom, not FSF/GNU’s ideology) to decide what gets rendered. I can achieve fast rendering in my software by avoiding bloatware like anti-aliasing and other unnecessary uses of color blending.

The GPU draws what you tell it to, but whatever dude :+1:
