SDL_PollEvent()

Within my main loop I’m using SDL_GetKeyState() to check user input. For
SDL_GetKeyState() to work, it seems that SDL_PollEvent() must also be
called within the main loop. However, I’m using OpenGL in a window (not
fullscreen), and whenever I move the window around (on Linux, using
WindowMaker as my window manager) the loop freezes for a few seconds at
the SDL_PollEvent() call, even after I stop moving the window. The amount
of time that SDL_PollEvent() takes appears to be proportional to how long
I spent moving the window. If that made sense: what’s causing
SDL_PollEvent() to take so long, and is there a way around it? I’ve tried
filtering events so only keyboard, mouse, and quit events get through, but
that didn’t seem to help. Thanks!

  • daerhu
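
For reference, here is a minimal sketch of the kind of loop described above,
assuming SDL 1.2 (SDL_GetKeyState() only reflects what the event pump has
processed, so SDL_PollEvent() or SDL_PumpEvents() must run every iteration;
the drawing is left as a placeholder):

#include <SDL.h>

int main(int argc, char *argv[])
{
    SDL_Event event;
    int running = 1;

    SDL_Init(SDL_INIT_VIDEO);
    SDL_SetVideoMode(640, 480, 0, SDL_OPENGL);   /* windowed OpenGL */

    while (running) {
        /* Drain the event queue; this also updates the key state array. */
        while (SDL_PollEvent(&event)) {
            if (event.type == SDL_QUIT)
                running = 0;
        }

        /* Keyboard snapshot as of the last event pump. */
        Uint8 *keys = SDL_GetKeyState(NULL);
        if (keys[SDLK_ESCAPE])
            running = 0;

        /* ... draw the scene here ... */
        SDL_GL_SwapBuffers();
    }

    SDL_Quit();
    return 0;
}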

Most of us are using SDL_PollEvent() in our “main” loop, and I’ve seen some
workarounds for things like checking network traffic etc. You might as well
just check those buffers serially, since that’s what SDL_PollEvent() does
anyway.

I’m telling you all this because I thought it was something like a select()
or poll() loop and wanted to make a function to add another file handle to
it. Adding to SDL_PumpEvents() might make the code cleaner to look at, but
wouldn’t really buy anything. (I thought about having SDL_PumpEvents() go
through a linked list of function pointers so you could just keep adding
your own, but that’s silly when you consider that the final result gets
checked serially anyway.)

Would making it a poll/select call be bad? Or just too much work for the
gain? I like to make light apps that only hit the CPU when they need to,
by waiting on input until it’s time to swap buffers.

Hello pixel,

Thursday, November 2, 2006, 11:55:54 PM, you wrote:

[…]

Would making it a poll/select call be bad? Or just too much work for the
gain? I like to make light apps that only hit the CPU when they need to,
by waiting on input until it’s time to swap buffers.

What about SDL_WaitEvent()? Maybe what this function needs is a
time-out period like select() does?

--
Best regards,
Peter mailto:@Peter_Mulholland

Have you considered SDL_WaitEvent()?

Chris

On 11/2/06, pixel fairy wrote:

Would making it a poll/select call be bad? Or just too much work for the
gain? I like to make light apps that only hit the CPU when they need to,
by waiting on input until it’s time to swap buffers.


E-Mail: Chris Nystrom <@Chris_Nystrom>
http://www.newio.org/
AIM: nystromchris

[…]

Would making it a poll/select call be bad? Or just too much work for
the gain? I like to make light apps that only hit the CPU when they
need to, by waiting on input until it’s time to swap buffers.

I think the problem is that many platforms have several different APIs
that don’t mix when it comes to select() or similar.

What you can do is create separate threads that do the blocking I/O,
and have these threads communicate with the main thread through a
single IPC mechanism (like the SDL event system), allowing all
threads to block properly on their respective APIs.
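
As a rough illustration of that approach (a sketch only, using SDL 1.2’s
SDL_CreateThread() and SDL_PushEvent(); wait_for_packet() and the event
code are made up for the example):

#include <SDL.h>
#include <SDL_thread.h>

#define NET_EVENT 1   /* arbitrary code carried in the user event */

extern void *wait_for_packet(void *conn);   /* hypothetical blocking call */

/* Runs in its own thread: blocks on the network (or any other handle)
   and forwards each result to the main loop as an SDL user event. */
static int net_thread(void *arg)
{
    for (;;) {
        void *packet = wait_for_packet(arg);
        SDL_Event ev;

        ev.type = SDL_USEREVENT;
        ev.user.code = NET_EVENT;
        ev.user.data1 = packet;
        ev.user.data2 = NULL;
        SDL_PushEvent(&ev);
    }
    return 0;
}

/* In the main thread, after SDL_Init():
 *     SDL_CreateThread(net_thread, connection_state);
 * and handle SDL_USEREVENT in the ordinary event loop. */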

//David Olofson - Programmer, Composer, Open Source Advocate

.------- http://olofson.net - Games, SDL examples -------.
| http://zeespace.net - 2.5D rendering engine |
| http://audiality.org - Music/audio engine |
| http://eel.olofson.net - Real time scripting |
’-- http://www.reologica.se - Rheology instrumentation --'

On Friday 03 November 2006 00:55, pixel fairy wrote:

Would making it a poll/select call be bad? Or just too much work for the
gain? I like to make light apps that only hit the CPU when they need to,
by waiting on input until it’s time to swap buffers.

The problem is twofold:

  1. SDL needs to poll a few things that are unrelated, like timers,
    joysticks, window events, etc, so you probably couldn’t push them all
    into one select() without really messing up the internals of SDL.

  2. Most systems SDL targets don’t have a select() that can target all
    the things that we poll for…some don’t even have a select() call.
    Even on Unix, we can’t promise that everything we’d wait on would be a
    real file handle on any given run of the program.

  3. (“Twofold”?) … if you want to select() with a timeout, then draw
    some stuff and swap buffers, you could just poll, swap buffers, and
    sleep for some amount of time…and you still wouldn’t notice the event
    loop in a CPU profile.

That being said, if you don’t have to redraw, SDL_WaitEvent() could
possibly do what you want…there’s a patch in Bugzilla to make it do
the right thing in most cases; largely this was for embedded devices
that want to put the CPU to sleep until it’s definitely needed. For a
desktop system, I really wouldn’t bother.

 http://bugzilla.libsdl.org/show_bug.cgi?id=323

SDL_WaitEvent() does not take a timeout (hmm…maybe it should, for
SDL 1.3, though…).

–ryan.
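
As an aside, for an application that only needs to redraw in response to
input, the SDL_WaitEvent() approach might look roughly like this (SDL 1.2;
how efficiently it actually sleeps depends on the implementation, as the
bug report above discusses; redraw() is a placeholder):

SDL_Event event;
int running = 1;

while (running && SDL_WaitEvent(&event)) {   /* blocks; returns 0 on error */
    switch (event.type) {
    case SDL_QUIT:
        running = 0;
        break;
    case SDL_KEYDOWN:
    case SDL_MOUSEBUTTONDOWN:
    case SDL_VIDEOEXPOSE:
        redraw();   /* placeholder: repaint and SDL_GL_SwapBuffers() here */
        break;
    default:
        break;
    }
}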

The whole point was making the app light on the CPU. I ended up using
SDL_Delay() like this:

while (1) {
    check_events();
    update_state();
    draw(backbuffer);
    delay();
    SDL_GL_SwapBuffers();
}

I think that swapping buffers at frame update time would be more
consistent, since the time to draw is more variable, but it probably
doesn’t make much difference. Haven’t tested this yet.

On 11/8/06, David Olofson wrote:

[…]

I think the problem is that many platforms have several different APIs
that don’t mix when it comes to select() or similar.

What you can do is create separate threads that do the blocking I/O,
and have these threads communicate with the main thread through a
single IPC mechanism (like the SDL event system), allowing all
threads to block properly on their respective APIs.



Then we’re talking about a more or less completely different problem.
SDL_GL_SwapBuffers() is supposed to block as needed to keep the loop
in sync with the display refresh, but unfortunately, you cannot rely
on this to work everywhere.

If you get 100% CPU load with the above loop, it’s because something
prevents SDL_GL_SwapBuffers() from synchronizing with the display
refresh. It could be a broken driver, or an incorrectly configured
driver. Some old drivers don’t implement retrace sync at all, whereas
the latter (misconfiguration) is commonly done deliberately by players
of certain games, due to game logic bugs that favor players with
insane frame rates.
Drivers can usually be configured to override anything applications
say, so there’s no reliable way that an application can enforce
retrace sync, even if the driver supports it.

The really bad news is that many drivers install with retrace sync
disabled by default, and the vast majority of users have no clue as to
what this is all about. :-(

What you’re doing above is the last resort, and it’s about all you can
do, short of just having users of broken systems pay with 100% CPU
load.

The problem with a delay in the render loop is that it often results
in less smooth animation on properly configured systems (it just
interferes with the retrace sync), so you should probably make it
optional, or possibly switch it on automatically when needed.

//David Olofson - Programmer, Composer, Open Source Advocate

.------- http://olofson.net - Games, SDL examples -------.
| http://zeespace.net - 2.5D rendering engine |
| http://audiality.org - Music/audio engine |
| http://eel.olofson.net - Real time scripting |
’-- http://www.reologica.se - Rheology instrumentation --'

On Thursday 09 November 2006 15:00, pixel fairy wrote:

[…]

Then we’re talking about a more or less completely different problem.
SDL_GL_SwapBuffers() is supposed to block as needed to keep the loop

woohoo!

in sync with the display refresh, but unfortunately, you cannot rely

on this to work everywhere.

doh!

If you get 100% CPU load with the above loop, it’s because something

prevents SDL_GL_SwapBuffers() from synchronizing with the display
refresh. It could be a broken driver, or an incorrectly configured
driver. Some old drivers don’t implement retrace sync at all, whereas
the latter (misconfiguration) is commonly done deliberately by players
of certain games, due to game logic bugs that favor players with
insane frame rates.
Drivers can usually be configured to override anything applications
say, so there’s no reliable way that an application can enforce
retrace sync, even if the driver supports it.

The really bad news is that many drivers install with retrace sync
disabled by default, and the vast majority of users have no clue as to
what this is all about. :-(

What you’re doing above is the last resort, and it’s about all you can
do, short of just having users of broken systems pay with 100% CPU
load.

The problem with a delay in the render loop is that it often results
in less smooth animation on properly configured systems (it just
interferes with the retrace sync), so you should probably make it
optional, or possibly switch it on automatically when needed.

I like this idea. What’s a good way to detect if retrace sync is disabled?




[…]

The problem with a delay in the render loop is that it often results
in less smooth animation on properly configured systems (it just
interferes with the retrace sync), so you should probably make it
optional, or possibly switch it on automatically when needed.

I like this idea. What’s a good way to detect if retrace sync is
disabled?

Hairy business… Since there is no reliable or portable way of just
asking the driver, all you can do is analyze the timing and try to
make some sense of it. Unless you’re on a hard real time OS, there
will likely be loads of timing jitter everywhere. Also, you’d
probably have to do the testing with very minimal rendering, just in
case hardware/resolution/scene complexity of the game is too much
for “full frame rate” as in “one rendered frame for each display
refresh”.

Basically, if you collect timestamps taken right after the
SDL_GL_SwapBuffers() call, modulo some jitter, you should see a
granularity corresponding to the display refresh rate. (Of course,
you’ll have to vary the rendering load a bit during the test, to
avoid confusing your “unsynced” frame rate with the refresh rate
you’re looking for.)

If you’re rendering one frame per refresh, then gradually increasing
the rendering load should have no effect, until rendering a frame
takes more than one refresh period, at which point the frame rate is
halved.

Another thing you might want to analyze (simpler, perhaps) is the
execution times of the SDL_GL_SwapBuffers() call. (That is, compare
before/after timestamps.) If these are about the same all the time,
regardless of scene complexity (rendering load), you’re not retrace
sync’ed.

Note that the execution times may be anything from virtually zero for
a true hardware page flip (the proper way) without retrace sync,
through several milliseconds for a brute force back-to-front blit
copy (the way it’s usually done in windowed mode on most platforms).
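
As a concrete (if crude) version of the timing test sketched above, one
can time a burst of swaps with a trivial rendering load and look at the
average frame period; the frame count and threshold below are arbitrary,
and real code would have to allow for the jitter mentioned earlier:

#include <SDL.h>
#include <SDL_opengl.h>

#define TEST_FRAMES 100

/* Returns nonzero if buffer swaps appear to be limited by the display
   refresh (i.e. retrace sync seems to be enabled). Assumes an OpenGL
   window has already been set up. */
int retrace_sync_detected(void)
{
    Uint32 start, elapsed;
    int i;

    /* One warm-up frame so timing starts right after a swap. */
    glClear(GL_COLOR_BUFFER_BIT);
    SDL_GL_SwapBuffers();

    start = SDL_GetTicks();
    for (i = 0; i < TEST_FRAMES; ++i) {
        glClear(GL_COLOR_BUFFER_BIT);   /* near-zero rendering load */
        SDL_GL_SwapBuffers();
    }
    elapsed = SDL_GetTicks() - start;

    /* With almost no rendering, an average frame period well under any
       plausible refresh period (here: < 5 ms, i.e. > 200 fps) strongly
       suggests the swaps are not retrace synced. */
    return (elapsed >= TEST_FRAMES * 5);
}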

Alternatively, if there’s a reliable way of finding out how much CPU
time your application consumes, you can just check that. (Some
operating systems may have “funny” ways of measuring CPU load…) If
it’s practically 100% at all times, you’re most likely on a display
setup without retrace sync.

Or, just throw in a manual “Frame Rate Throttling” option. :-) I did
this in Kobo Deluxe - although that was really intended for saving
battery on laptops. (Kinda’ pointless running the CPU and GPU full on
to push 300+ fps when the TFT can’t handle more than 60 fps or so, at
best.) It just has the side effect of also being useful when an
annoyed OS scheduler decides to block the game for extended periods of
time, to allow background processes to run.

//David Olofson - Programmer, Composer, Open Source Advocate

.------- http://olofson.net - Games, SDL examples -------.
| http://zeespace.net - 2.5D rendering engine |
| http://audiality.org - Music/audio engine |
| http://eel.olofson.net - Real time scripting |
’-- http://www.reologica.se - Rheology instrumentation --'

Hi,

my loop looks like this:

while (!quit)
{
    Uint32 t = SDL_GetTicks() + 20;

    handleAllTheThings();
    drawAllTheThings();
    SDL_GL_SwapBuffers();

    Sint32 timetowait = (Sint32)(t - SDL_GetTicks());
    if (timetowait > 0)
        SDL_Delay(timetowait);
}

Didn’t look for my exact code, but this is roughly what it should be. This
loop makes each iteration last at least 20 ms, capping the rate at 50 fps.
The downside is that this only works properly if a performance timer is
present.

Matthias


David Olofson wrote:

SDL_GL_SwapBuffers() is supposed to block as needed to keep the loop
in sync with the display refresh, …

It’s only supposed to do this if you explicitly enable it using
’SDL_GL_SetAttribute(SDL_GL_SWAP_CONTROL, 1);’.

-Christian
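
For completeness, a minimal sketch of that (SDL 1.2.10 or later; the
attribute has to be set before SDL_SetVideoMode(), and as discussed above
the driver’s own settings may still override the request):

SDL_Init(SDL_INIT_VIDEO);

/* Ask for retrace-synced buffer swaps. */
SDL_GL_SetAttribute(SDL_GL_SWAP_CONTROL, 1);
SDL_SetVideoMode(640, 480, 0, SDL_OPENGL);

/* ... render ...  SDL_GL_SwapBuffers() should now block until the
   next display refresh, if the driver honors the request. */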