SDL_GetWindowSizeInPixels

Since SDL_GetWindowSizeInPixels() is available only in SDL 2.26.0 and later, can I assume that, despite its name, it’s OK to call SDL_GL_GetDrawableSize() even when not using the OpenGL backend (e.g. in Windows when using the Direct3D backend)?

I don’t know the answer to your question, but does SDL_GetRendererOutputSize() work as a safe and portable alternative to SDL_GL_GetDrawableSize()?

Not in general, because it will return the size of the texture selected as the current render target, and that’s not necessarily the same size as the window (it typically isn’t in my app).

Possibly it will in the special case that the render target is set to NULL, but I haven’t checked (and it may not be convenient to change the target). So definitely not a drop-in replacement anyway.

After the renderer is first created with SDL_CreateRenderer(), and before a different render target is set, SDL_GetRendererOutputSize() will provide the window dimensions in pixels. You can compare those dimensions to the ones provided by SDL_GetWindowSize() to calculate and store a scale factor that from then on can be multiplied by the dimensions returned by SDL_GetWindowSize() to get the current window dimensions in pixels.

Alternatively, you could compare the dimensions provided by SDL_QueryTexture() on your render target to those provided by SDL_GetRendererOutputSize() to get the same scale factor, but doing this calculation more than once seems unnecessary unless the system DPI is changed at runtime.

Agree it’s not quite a drop-in replacement though, and there might be use cases I haven’t considered.

As an optimization, I would almost always recommend storing the width/height of the window as global variables instead of calling a function.

Of course width and height should be set when the window and renderer are first created, and SDL_WINDOWEVENT should be handled to reset them. See the SDL_WINDOWEVENT_RESIZED and SDL_WINDOWEVENT_SIZE_CHANGED cases in the example on the SDL wiki (SDL2/SDL_WindowEvent). The new width and height arrive in the data1 and data2 fields, as shown in that example.

My reasoning is that variable assignment/access is much faster than calling a function, and event handling means you can access that data specifically when it is changed.

In reality this is only an optimization if the access happens often or in a loop (it only saves 2 to 20 cycles per access).

A concern with that approach is the asynchronous nature of events, meaning that it’s telling you not the window size now but at some indeterminate previous time.

This is further complicated in my application because I use the event queue for inter-thread communication, so there could conceivably be a deadlock between sending the window size in one message and receiving it in another!

I can’t help thinking that the new SDL_GetWindowSizeInPixels() function wouldn’t have been added if there was an entirely satisfactory method already available.

SDL_GetWindowSizeInPixels was added mostly for reasons that don’t involve SDL_Render: SDL_GetRendererOutputSize already works fine there. But if SDL_Render is not being used at all, there was a separate API with the same functionality for each graphics API (SDL_GL_GetDrawableSize, SDL_Metal_GetDrawableSize and SDL_Vulkan_GetDrawableSize), so it made a lot of sense to combine them into something you don’t need to branch on.

tl;dr: it’s good to use SDL_GetRendererOutputSize if you’re using SDL_Render.

@rtrussell: That’s an interesting argument, let me go poke around in the source code. Not sure if there’s thread safety built in there. I don’t think it gets the size update much faster than the event poll, but I can tinker around to figure out when it gets the update.
I’ll be back in an hour or two with any findings.

About SDL_GetWindowSizeInPixels():
I’m not seeing any atomics, mutex locks or other thread-safety mechanisms directly in the source. On further thought, it wouldn’t have any, since it’s written in plain C with no synchronization layered on top.
Slime is correct that there are different versions that the library will link to per platform:

On Windows it links SDL_GetWindowSizeInPixels to WIN_GetWindowSizeInPixels, which in turn calls the WinAPI function GetClientRect(hndl, &rect), which means it gets the data directly from the OS. Technically this means it will be fresher than the data from the event loop, and maybe that also makes it thread-safe[?].

For platforms that don’t have these “link-to macros”, it basically boils down to w = window->w * mode->pixelDensity, where window is provided as an argument and mode is the SDL_DisplayMode, which changes depending on whether we’re in desktop mode or fullscreen mode (which may have changed the resolution). As for this version: to me this means that no new information is being fetched here; the window structure is not updated when you call the function. I don’t know exactly when that window data gets updated.

About SDL_GL_GetDrawableSize():
The only reason I would avoid SDL_GL_GetDrawableSize() is that it has been removed in SDL3. I understand that you don’t intend to go past 2.26.0, so that’s not going to be an issue.

I did find this direct quote in the SDL_video.h header comments:

If the window is created with the `SDL_WINDOW_ALLOW_HIGHDPI` flag, its size
in pixels may differ from its size in screen coordinates on platforms with
high-DPI support (e.g. iOS and macOS). Use SDL_GetWindowSize() to query the
client area's size in screen coordinates, and SDL_GL_GetDrawableSize() or
SDL_GetRendererOutputSize() to query the drawable size in pixels. Note that
when this flag is set, the drawable size can vary after the window is
created and should be queried after major window events such as when the
window is resized or moved between displays.

So I would say that it should be perfectly safe as long as the platform supports OpenGL/OpenGLES 2.0 or later (it’s difficult to find a machine that doesn’t meet this requirement).

Whoops, the other part of the question was whether you need to initialize the OpenGL backend.

I can at least confirm that SDL_GL_GetDrawableSize() returns the expected size without any other initialization of OpenGL in the program on Ubuntu Linux.

(I want to assume that is also true on most platforms, but I can only test on what I have.)

I’m using SDL_Render, but the render target is almost never NULL. So to replace my existing calls to SDL_GL_GetDrawableSize() I would need code like this:

SDL_Texture *target = SDL_GetRenderTarget(renderer);
SDL_SetRenderTarget(renderer, NULL);
SDL_GetRendererOutputSize(renderer, &w, &h);
SDL_SetRenderTarget(renderer, target);

I’m certainly not inclined to make that change!

Does this work?

#include <stdio.h>
#include <stdbool.h>
#include "SDL.h"

#define DEFAULT_WINDOW_FLAGS (SDL_WINDOW_RESIZABLE | SDL_WINDOW_ALLOW_HIGHDPI)
#define DEFAULT_RENDER_FLAGS (SDL_RENDERER_ACCELERATED | SDL_RENDERER_PRESENTVSYNC)


void init_SDL()
{
    if (SDL_Init(SDL_INIT_VIDEO) != 0) {
        fprintf(stderr, "\nError initializing SDL: %s\n", SDL_GetError());
        exit(1);
    }
}

/* Store quotient of renderer output (pixel) dims / window dims as global variable */
uint8_t dpi_scale_factor = 0;

/* Run this only once, immediately after renderer is created.
    (Or anytime DPI changes, taking care to set the rend target to null first) */
void set_dpi_scale_factor(SDL_Renderer *rend, SDL_Window *win)
{
    int rw, rh;
    int ww, wh;
    SDL_GetRendererOutputSize(rend, &rw, &rh);
    SDL_GetWindowSize(win, &ww, &wh);

    dpi_scale_factor = rw / ww;
    if (rh / wh != dpi_scale_factor) {
        fprintf(stderr, "Error: width scale != height scale\n");
    }
}

/* Replaces SDL_GL_GetDrawableSize() in your app */
void get_window_size_pixels(SDL_Window *win, int *w, int *h)
{
    SDL_GetWindowSize(win, w, h);
    (*w) *= dpi_scale_factor;
    (*h) *= dpi_scale_factor;
}

int main()
{
    init_SDL();

    SDL_Window *win = SDL_CreateWindow("Test Window", SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, 500, 500, (Uint32)DEFAULT_WINDOW_FLAGS);
    SDL_Renderer *rend = SDL_CreateRenderer(win, -1, (Uint32)DEFAULT_RENDER_FLAGS);
    
    /* Run this once */
    set_dpi_scale_factor(rend, win);

    /* Window dimensions in pixels */
    int win_w_pixels, win_h_pixels;


    get_window_size_pixels(win, &win_w_pixels, &win_h_pixels);
    printf("Window size in pixels: %d X %d\n", win_w_pixels, win_h_pixels);
    
    bool quit = false;
    while (!quit) {
        SDL_Event e;
        while (SDL_PollEvent(&e)) {
            if (e.type == SDL_QUIT) {
                quit = true;
            } else if (e.type == SDL_WINDOWEVENT && e.window.event == SDL_WINDOWEVENT_SIZE_CHANGED) {
                get_window_size_pixels(win, &win_w_pixels, &win_h_pixels);
                printf("Window size in pixels: %d X %d\n", win_w_pixels, win_h_pixels);

            }
        }
        SDL_SetRenderDrawColor(rend, 0, 0, 0, 0);
        SDL_RenderClear(rend);
        SDL_RenderPresent(rend);
        SDL_Delay(1);
    }
}

This is what I was trying to articulate in my last comment, but may not have been clear in text.

The dpi_scale_factor would have to be a float or a double, surely, since there’s no guarantee that it will be an integer (and it typically won’t be).

Isn’t it also the case that the scale factor can change if the window is moved from one monitor to another? If so you can’t simply measure it once, during initialisation.

dpi_scale_factor has always been either 1 or 2 for me (even if declared as a float), but yes, you’re probably right – it can be a float or double.

I think you can do the initialization once per monitor, have one dpi_scale_factor per monitor, and include a call to SDL_GetWindowDisplayIndex() in the get_window_size_pixels() function to decide which dpi_scale_factor to use.

(It’d make sense to put monitor information, including the dpi_scale_factor, into a struct.)

@rtrussell I am curious why you don’t join your threads when a resize event is triggered (considering how rarely it happens over the lifetime of the program). Then you could guarantee that the variables are updated.

Also: I’m sorry for mentioning optimization of any type. There’s no point in worrying over a couple dozen cycles per frame unless you are maxing out the CPU. It really only matters if you end up calling the function more than a thousand times per frame or if your machine is running in the MHz range.

Note: My previous posts were quite messy, so I want to restate that SDL_GL_GetDrawableSize() should work in almost every circumstance without any other initialization of the OpenGL library on your part.

Honestly, I think it will be a lot easier and less hassle to carry on doing what I do now: call SDL_GL_GetDrawableSize()! My app is highly dependent on the backend being OpenGL or OpenGLES; several of the facilities I support (e.g. AND/OR/XOR plotting, 3D graphics, shader graphics) simply won’t work without it.

My only concern was that selecting an OpenGL backend (e.g. on Windows) is just a hint; there’s (in principle at least) no guarantee that you will actually get one. So ideally I would prefer a more controlled fallback, whereby only those facilities dependent on OpenGL stop working rather than the app aborting immediately.

But it seems that would be a lot of work, with considerable risk and little benefit. As has been said, at the moment all the platforms I support still provide an OpenGL backend, at least as an option, so in practice no fallback is required. I can revisit this if and when that changes.


The DPI scaling behavior is highly platform-dependent. On Windows NT 3.5 and up, the DPI is an arbitrary integer, with 96 DPI corresponding to a scaling factor of 1, so the Windows scaling factor is a multiple of 1/96. The multiple-monitor behavior is particularly platform-specific: from Windows NT 3.5 to XP, the DPI was a system-wide setting that only changed on reboot, not per monitor. It wasn’t until later Windows versions that the concept of multiple DPI-awareness levels was introduced, making DPI scaling considerably more complicated there; per-monitor scaling (hot-swapping DPI values) arrived in Windows 8.1 with a new DPI-awareness level. Note that initializing once per monitor might not be enough, since the user could change the DPI without changing monitors. On platforms that do not support hot-swapping DPI values the scaling factor is constant, whereas on platforms that do support it, the only way to handle it reliably is to specifically handle DPI-change events.
