Proper SDL 1.3 Event Loop

Hi,

I have a hard time creating an event loop that runs at some roughly
constant speed and that behaves the same on OS X and windows. Currently
my loop looks something like this:

void pd::game::run()
{
    SDL_Event evt;
    uint32_t old_ticks = 0;

    while (m_running) {
        float dt = (SDL_GetTicks() - old_ticks) / 1000.0f;
        old_ticks = SDL_GetTicks();
        if (dt > 0.1f)
            dt = 1.0f / fps_limit;

        while (SDL_PollEvent(&evt))
            handle_event(evt, dt);
        update(dt);
        render(dt);
        SDL_GL_SwapWindow(m_win);

        float diff = (1.0f / fps_limit) - dt;
        uint32_t delay = (uint32_t)std::max(0.0f, diff * 1000.0f) +
            m_last_delay;
        m_last_delay = delay;
        if (delay > 0)
            SDL_Delay(delay);
    }
}

What is a proper SDL mainloop supposed to look like? Because this is
obviously broken. First of all vsync seems to hold the loop on OS X
after swap window, but it does not do anything on windows where I will
run so fast that dt is zero. My artificial limiting after swap window
then causes stuttering and different dts on windows and OS X, causing my
simulations to run completely differently on these two systems.

Regards,
Armin

Aren’t you essentially doing:
delay = m_last_delay;
m_last_delay += delay;
SDL_Delay(delay);

I can see how that would cause timing issues.

Jonny D

I would never recommend fps limiting. You should look at frame-rate
independent movement:
http://www.gamedev.net/page/resources/_/reference/programming/sweet-snippets/frame-rate-independent-movement-r1382
-Alex


Your code makes a lot of really hefty assumptions:

  1. SDL_GetTicks() returns the "right time"
    Why this is wrong: SDL_GetTicks() returns an OS-specific time counter that
    is updated “at some rate”; that rate need not be every millisecond. For
    example, let’s say the “real time” is 1000. You call SDL_GetTicks() and it
    returns 1000. Everything is peachy. 2ms pass, you call SDL_GetTicks(), and
    it says the time is still 1000. Oh look, 0 time has passed. Has it really?
    Later, at time 1012, SDL_GetTicks() returns a time of 1010.

  2. SDL_Delay() is precise.
    Why this is wrong: This is generally built on OS functions that guarantee
    only that “at least N milliseconds have passed”. Suppose you ran at 100 FPS
    (10ms/frame) and you spent 7ms rendering. You have 3ms to wait, so you call
    SDL_Delay(3). It delays 7ms, and all of a sudden you have lost 7 - 3 = 4 ms
    of your 10ms/frame budget. Now you have 10 - 4 = 6ms to render a frame, but
    you can’t do it that quickly, so your FPS drops below 100 even though your
    machine is clearly capable of rendering at more than 100 FPS.

My suggestions are:

  1. Don’t update anything if you compute dt == 0, maybe even dt < 1/framerate
  2. Don’t use SDL for extremely precise timing. Sorry. :\
  3. Don’t arbitrarily limit the FPS.

Really, you can render at any speed if you use ‘dt’ as the updating factor
for your main game loop. And of course, dt == 0 implies no updating.

Does not work if you’re running faster than SDL_GetTicks with that as
your time function.

Regards,
Armin

On 4/3/11 2:09 AM, Alex Barry wrote:

I would never recommend fps limiting. You should look at frame-rate
independent movement

Toss an SDL_Delay(1) at the bottom, and you shouldn’t have any issues.
SDL_GetTicks() shouldn’t be necessary, but if you want a fast fps estimate
(instead of one that updates every second and displays the number of frames
that have passed), I suppose you don’t have much choice, but there are ways
around that, too, if you look for OS-specific timer functions.

I would never recommend fps limiting. You should look at frame-rate
independent movement

Does not work if you’re running faster than SDL_GetTicks with that as your
time function.

It does if you don’t perform an update until the time delta is non-zero.

On Sat, Apr 2, 2011 at 7:21 PM, Armin Ronacher <armin.ronacher at active-4.com> wrote:

On 4/3/11 2:09 AM, Alex Barry wrote:

The reason why I went with sortof fixing the timestep in the first place
was this article:

http://gafferongames.com/game-physics/fix-your-timestep/

Regards,
Armin

On 2011-04-03 2:34 AM, Patrick Baggett wrote:

It does if you don’t perform an update until the time delta is non-zero.

They use a function called “hires_time_in_seconds()”, presumably one that is
more accurate than SDL_GetTicks(). The author also doesn’t use any kind of
delay to fix up/limit the timestep between update/render.

Armin, I’ve done some chipmunk-physics development, and you don’t need to
fix your framerate for this.
I have some FreeBASIC code (that would easily translate to C) of how to do
this, wrapped in a Class (again, easy to translate into something
C-friendly).
You can peek at that here:
http://code.google.com/p/chipmunk-freebasic/source/browse/trunk/examples/easyChipmunk.bi
I’ve used both a threaded and non-threaded approach, and most of that is
transferable to other physics libraries, too.

Take care,
-Alex

I should mention you’ll be interested specifically in lines 118-130 in
easyChipmunk.bi (so you don’t have to try and figure out FreeBASIC just to
read that section).

Hi,

On 2011-04-03 2:26 AM, Alex Barry wrote:

Toss an SDL_Delay(1) at the bottom, and you shouldn’t have any issues.
SDL_GetTicks() shouldn’t be necessary, but if you want a fast fps
estimate (instead of one that updates every second and displays the
number of frames that have passed), I suppose you don’t have much
choice, but there are ways around that, too, if you look for OS-specific
timer functions

Indeed. A delay of 1 fixes my problems. Not sure why, but I am
sticking with that for the moment. On windows I now also use the high
performance counter and pin my thread to one core.

Do you happen to know what’s the best timing function on OS X?
Read the timestamp register myself and pin to one processor? Sounds
unreliable when speed changes during execution.

Regards,
Armin

I think your best plan for cross platform timing (with millisecond
accuracy) is using the <time.h> functions in C (or in C++), but
using SDL_GetTicks() should work.
SDL_Delay(1) at the bottom will keep your program from hogging 100% CPU,
plus it gives you the minimum delay possible, meaning that SDL_GetTicks()
has enough time to update to the next timeslice/interval, so you’ll have a
usable number.
What physics library are you using? I can probably help you figure out how
to integrate Gaffer’s fixed-timestep methods into it.

Take care,
-Alex

If you’re using SDL 1.3, you can use the new hires timing functions:
SDL_GetPerformanceCounter()
SDL_GetPerformanceFrequency()


-Sam Lantinga, Founder and CEO, Galaxy Gameworks