SDL_GL_SwapBuffers() + vsync randomly causes high CPU usage--or not

Borislav Trifonov <bdt shaw.ca> writes:

A single-threaded program I’m testing, whose main loop calls SDL_GL_SwapBuffers(), causes around 25-35% CPU usage according to the Windows Task Manager’s Performance tab.

If I force vsync in the NVIDIA driver, then somewhat more than half the times I run the program, the CPU usage is near 1% for the whole run, as I expect and want it to be. The other times, though, it’s around 27% (one core near maxed out, again for the whole run), and it’s this latter case that I can’t make any sense of. It is a problem, since I will be adding other threads and that much waste is going to be an issue. There’s no difference in input between runs. Removing all other SDL_…() calls in the loop makes no difference. Running other threads created with SDL makes no difference. The fact that I don’t get a consistent result has me baffled…

The only idea I had is that it could be a sampling artifact of the Task Manager’s measurement, but I think that’s not likely. I don’t even know how to begin troubleshooting here :(

I think you’re experiencing a well-known ‘problem’ in SDL. I say ‘problem’ since it may or may not be considered one. To check for events in SDL you call, for example, SDL_PollEvent. SDL_GL_SwapBuffers sends a command to OpenGL saying ‘hey! You can flip the buffers and render now, I’ve finished issuing commands’. In the meantime your program continues, and SDL_PollEvent hogs every available millisecond checking for events; the more work you put in your main loop, the less often it gets to poll, but since nothing in the loop ever blocks, one core stays fully busy.
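
For reference, a minimal sketch of that kind of loop using the SDL 1.2 API might look like the following; the window size and the empty render step are my own placeholder assumptions, not details from the original post:

```c
/* Minimal sketch (SDL 1.2 API) of the loop shape described above. */
#include <SDL.h>
#include <SDL_opengl.h>

int main(int argc, char *argv[])
{
    SDL_Init(SDL_INIT_VIDEO);
    SDL_SetVideoMode(640, 480, 0, SDL_OPENGL);

    int running = 1;
    while (running) {
        SDL_Event event;

        /* SDL_PollEvent returns immediately whether or not an event is
           pending, so nothing in this loop ever blocks or sleeps. */
        while (SDL_PollEvent(&event)) {
            if (event.type == SDL_QUIT)
                running = 0;
        }

        glClear(GL_COLOR_BUFFER_BIT); /* real rendering would go here */
        SDL_GL_SwapBuffers();         /* queues the flip; may or may not block,
                                         depending on the driver's vsync handling */
    }

    SDL_Quit();
    return 0;
}
```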

Some consider it a problem, while others say “You’re most likely running a game anyway, and the user isn’t doing anything else in the meantime, so why not give the processor to the game?”. Sometimes putting a small SDL_Delay at the end of the game loop (2-5 milliseconds or so) keeps the processor usage stable, but this might create undesired effects, such as stuttering if the delay makes you miss a vsync interval.
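
To illustrate the suggestion, here is the main loop from the sketch above with the delay added; the 3 ms value is just one point in the 2-5 ms range mentioned:

```c
    /* Setup as in the previous sketch; only the main loop changes. */
    while (running) {
        SDL_Event event;
        while (SDL_PollEvent(&event)) {
            if (event.type == SDL_QUIT)
                running = 0;
        }

        glClear(GL_COLOR_BUFFER_BIT);
        SDL_GL_SwapBuffers();

        /* Hand a little time back to the scheduler each frame. This caps
           the polling spin but adds up to ~3 ms of extra frame latency. */
        SDL_Delay(3);
    }
```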