SDL+OpenGL context, glitchy window with integrated graphics

Hi all!

I am creating an OpenGL sprite rendering engine using Visual C++, SDL2 and glad, and I have run into the following issue:

If I run the program on my laptop’s NVIDIA GPU (RTX 2060), everything looks fine (screenshot 1).

If I use the laptop’s integrated graphics (Intel UHD 630), the window comes out glitchy (screenshot 2).

A few notes to help pinpoint the issue:

  • Initially, my window was bigger in both cases due to DPI scaling. In Visual Studio, I set DPI awareness to per-monitor DPI aware. This fixed the window size, but not the glitches.
  • My laptop’s refresh rate is 144 Hz. I changed it to 60 Hz, but this didn’t solve the issue.
  • Connecting my laptop to another monitor with both displays active: same thing.
  • Connecting my laptop to another monitor with only the second display active: the issue was gone.
  • Running the application fullscreen: the issue was gone.
  • Same setup, but with GLFW instead of SDL2: runs fine, no issue.

Any help would be appreciated!

Relevant code:

#define SDL_MAIN_HANDLED
#include <SDL.h>
#include <glad/glad.h>
#include <iostream>

int main()
{
    SDL_SetMainReady();
    
    SDL_Init(SDL_INIT_VIDEO);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 3);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 3);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_CORE);

    SDL_Window* gWindow = SDL_CreateWindow("SDL app", SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED, SCREEN_WIDTH, SCREEN_HEIGHT, SDL_WINDOW_OPENGL);
    SDL_GLContext gContext = SDL_GL_CreateContext(gWindow);
    if (!gladLoadGLLoader((GLADloadproc)SDL_GL_GetProcAddress))
    {
        std::cout << "Failed to initialize GLAD" << std::endl;
        return -1;
    }

    SDL_GL_SetSwapInterval(1);

    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    glViewport(0, 0, SCREEN_WIDTH, SCREEN_HEIGHT);

    SHADER::LoadShaders();
    TEXTURE::LoadTextures();
    SPRITE::LoadSprites();

    DRAW::Init();
    srand(1514);
    for (auto i = 0; i < 50; ++i) {
        int32_t depth = (rand() % 500) - 250;
        GLfloat x = (GLfloat)(rand() % 950);
        GLfloat y = (GLfloat)(rand() % 650);
        INSTANCE::Add({ x,y }, depth, SprGetByName("sprite1"), 0);
    }

    bool quit = false;

    SDL_Event e;

    SDL_StartTextInput();

    while (!quit) {
        while (SDL_PollEvent(&e) != 0) {
            if (e.type == SDL_QUIT) {
                quit = true;
            }
            //Handle keypress with current mouse position
            else if (e.type == SDL_TEXTINPUT) {
                int x = 0, y = 0;
                SDL_GetMouseState(&x, &y);
            }
        }

        INSTANCE::DrawEvent();

        //Update screen
        SDL_GL_SwapWindow(gWindow);
    }

    //Disable text input
    SDL_StopTextInput();
    return 0;
}

I don’t see any huge red flags, but here are a few things to be aware of:

First, Nvidia’s OpenGL driver is really permissive and lets you get away with things that other OpenGL implementations won’t. Meanwhile Intel’s OpenGL driver for Windows was flat out broken for a long time, with certain functions just not working at all.

Second, your two calls to glTexParameteri() after glBlendFunc() aren’t doing anything. They operate on whatever texture is currently bound, which at that point in your program is nothing. Move them into whatever code is responsible for creating your GL textures (or use OpenGL 3.3’s sampler objects).
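For instance, a minimal sketch of what that could look like in your texture-loading code (width, height and pixels are placeholders for whatever your loader provides):

GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
// These now apply to the texture bound above:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

// Or, with a sampler object (core in 3.3) bound to texture unit 0, the
// parameters apply to whatever texture is sampled through that unit:
GLuint sampler;
glGenSamplers(1, &sampler);
glSamplerParameteri(sampler, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glSamplerParameteri(sampler, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glBindSampler(0, sampler);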

Lastly, what’s going on with the SDL_StartTextInput()/SDL_StopTextInput() calls? Maybe I’m wrong, but AFAIK that’s only for when you need the user to be able to enter text, like if you need to bring up the on-screen keyboard on a mobile device. You don’t need it if you just want to know if the player pressed the jump button or something. What happens if you remove those calls?

First, Nvidia’s OpenGL driver is really permissive and lets you get away with things that other OpenGL implementations won’t. Meanwhile Intel’s OpenGL driver for Windows was flat out broken for a long time, with certain functions just not working at all.

My integrated GPU is fairly recent, all things considered (Coffee Lake generation). I have a hard time believing the system can’t properly run a basic OpenGL application. I have run various games that explicitly state they use OpenGL as their renderer, without issues.

Second, your two calls to glTexParameteri() after glBlendFunc() aren’t doing anything. They operate on whatever texture is currently bound, which at that point in your program is nothing. Move them into whatever code is responsible for creating your GL textures (or use OpenGL 3.3’s sampler objects).

That’s leftover code that survived the cleanup I did before posting.

Lastly, what’s going on with the SDL_StartTextInput()/SDL_StopTextInput() calls? Maybe I’m wrong, but AFAIK that’s only for when you need the user to be able to enter text, like if you need to bring up the on-screen keyboard on a mobile device. You don’t need it if you just want to know if the player pressed the jump button or something. What happens if you remove those calls?

I removed everything having to do with text input and the problem persists…

The driver, not the GPU itself. Intel’s OpenGL driver has historically been buggy, and sometimes outright broken (like glBlitFramebuffer() calls not actually doing anything, though apparently that’s fixed now). The other games you’ve tested might include workarounds for Intel GPUs, or even for specific Intel driver versions.

It’s possible that there’s some default value that SDL assumes but GLFW explicitly sets when creating the OpenGL context. What happens if you explicitly set everything for creating the OpenGL context (bit depths, double buffering, etc.)? Also try reading back what you’re actually getting with SDL_GL_GetAttribute() after GL context creation and see if there’s a difference between running on the Nvidia GPU and the Intel one.
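Something like this rough sketch (the exact values you request are up to you; the point is to pin everything down and then compare the read-back values on both GPUs):

SDL_GL_SetAttribute(SDL_GL_RED_SIZE, 8);
SDL_GL_SetAttribute(SDL_GL_GREEN_SIZE, 8);
SDL_GL_SetAttribute(SDL_GL_BLUE_SIZE, 8);
SDL_GL_SetAttribute(SDL_GL_ALPHA_SIZE, 0);
SDL_GL_SetAttribute(SDL_GL_DEPTH_SIZE, 24);
SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);

// ... SDL_CreateWindow() and SDL_GL_CreateContext() as before ...

int r = 0, g = 0, b = 0, a = 0;
SDL_GL_GetAttribute(SDL_GL_RED_SIZE, &r);
SDL_GL_GetAttribute(SDL_GL_GREEN_SIZE, &g);
SDL_GL_GetAttribute(SDL_GL_BLUE_SIZE, &b);
SDL_GL_GetAttribute(SDL_GL_ALPHA_SIZE, &a);
printf("Got RGBA %d:%d:%d:%d\n", r, g, b, a);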

Okay! I tried reading some values back after the context was created. I started with the color buffer RGBA sizes.
Nvidia driver: 8:8:8:0
Intel driver: 10:10:10:2

So I said OK, I’ll request the Nvidia values with SDL_GL_SetAttribute before creating the context. And I did! But when I read the values back after context creation, they hadn’t changed: still 10:10:10:2. That looks like an HDR format, if I remember correctly. I tried disabling HDR video in the Windows settings, but no luck.

The wiki says: “You should use SDL_GL_GetAttribute() to check the values after creating the OpenGL context, since the values obtained can differ from the requested ones.”

I’m not sure how to proceed at this point.

That shouldn’t be causing the problem, but :man_shrugging:

The reason you’re getting RGBA1010102 instead of RGBA8888 is that SDL only guarantees you’ll get at least what you request; like so many things with OpenGL, what you actually get is up to the underlying driver.

What happens if you do:

SDL_DisplayMode mode;
SDL_GetCurrentDisplayMode(0, &mode);
printf("Screen bits per pixel: %d\n", SDL_BITSPERPIXEL(mode.format));

right before the main while loop?

The only computer I have with an Intel GPU is my Mac mini, which doesn’t have Windows on it, or I’d try to whip up a little OpenGL program to test things out myself.

edit: make sure you’re making the GL context the current one by calling SDL_GL_MakeCurrent() before making any OpenGL calls.
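With the variables from the code above, that would be a one-liner (SDL_GL_CreateContext() should already make the new context current, so this is mostly to rule it out):

SDL_GL_MakeCurrent(gWindow, gContext);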

You could try explicitly requesting 8 bits for R, G, B and A with SDL_GL_SetAttribute(SDL_GL_RED_SIZE, 8); etc., before creating the window.

SDL_GetCurrentDisplayMode returned 24 bits per pixel.
SDL_GL_MakeCurrent() had no effect.

Sorry for bumping; any help would be appreciated.

Have you tried what I suggested? What happened?

I wrote a few posts above that explicitly setting the bit widths didn’t help.

Sorry, but the only Intel GPU I have is in my Mac, which doesn’t have Windows installed.

All I can do is guess.

I have an Intel UHD 615 and everything I’ve tried, using SDL2 + OpenGL, works perfectly.

What are your initial settings?

I’m leaving all the rendering settings at their defaults except for forcing the use of OpenGL (which in turn means re-enabling batching):

	SDL_SetHint(SDL_HINT_RENDER_DRIVER, "opengl");
	SDL_SetHint(SDL_HINT_RENDER_BATCHING, "1");

@vdweller Hi
Do you clear the buffer before drawing?

The OP is using OpenGL directly, not SDL_Renderer. Also, they’re using OpenGL 3.3, while the SDL_Renderer OpenGL backend is only OpenGL 2.1, IIRC.

edit: @vdweller, what’s the smallest example you can make that still exhibits the weird behavior you’re seeing? Does it happen if all you do is clear the screen? Does it still happen if you draw a single untextured triangle?

edit 2: you should clear the screen unless you have a good reason not to. Some OpenGL drivers take clearing the screen as a hint that you don’t care about preserving the previous contents of the framebuffer, which can save some memory copies depending on the GPU.
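As a rough sketch, using the variables from the main() posted above, the top of the frame loop would look something like this:

// Clear the back buffer at the start of every frame, before any drawing
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT); // add GL_DEPTH_BUFFER_BIT if you ever enable depth testing

INSTANCE::DrawEvent();

//Update screen
SDL_GL_SwapWindow(gWindow);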

I also use OpenGL directly. As I’ve explained before, my app uses SDL’s renderer for 2D graphics but direct calls to OpenGL for 3D graphics and shader programming. None of these exhibit any anomalies on my Intel UHD 615.

@rtrussell If you’re using SDL_Renderer then you’re only getting OpenGL 2.1, though. The OP is using OpenGL 3.3, which isn’t backward compatible with 2.1. Intel’s OpenGL 3.x drivers were broken for a long time, and it’s possible the OP is hitting a driver bug.

I don’t think that follows. My understanding is that if you’re using both SDL’s renderer and direct access to OpenGL, as I am, the versions need not be the same.

For example, when my app is running on Android or iOS I use an OpenGL ES 1.0 context for SDL’s renderer (because I need glLogicOp()), but OpenGL ES 2.0 (or later) when I’m doing the direct access (because I want shader programming capability).

There doesn’t seem to be any conflict, because they are separate contexts. If I add this before I create my OpenGL context, everything still works correctly:

SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 3);