Using timers for simulation

I have an interactive virtual world simulation
in progress. Currently I have two threads:

  • The main thread sits in a loop, rendering the scene
    and polling for input.

  • Before entering that loop I start a timer which runs
    at regular intervals. It updates the world simulation
    (objects' positions and rotations). It also processes
    input events, so that if a control key is being held
    down, that control is applied.

I used to run my simulation timer at 10 ms intervals.
Now the physics simulation code is more complex, and
sometimes a single simulation step takes longer than
10 ms to execute.

This leads to a situation where the simulation thread takes
all of the CPU time and the rendering thread is almost
starved. So while the simulation runs fine, the screen gets
updated once per 10 seconds, or once per minute, or never.
The machine also becomes very, very slow, and sometimes I
have had to reboot it because it could not even respond to
Ctrl-Alt-Del.

The obvious solution would be to run the simulation less
often and simulate larger steps.

The question is: how do I find a suitable timer interval?
I need something that adapts itself to whatever platform it
is running on, so that if the machine is only capable of
doing 10 simulation updates per second, that is OK, just set
the timer to run at 100 ms intervals. OTOH, if the machine is
really fast, I wouldn't mind having 100 simulation updates per
second, running at 10 ms intervals.

What would be the recommended way to do this?

How should I measure how long my timer callback function
(simulation update step) takes?

I am not exactly sure how I should use timers. At the moment
I have a single SDL_AddTimer() call, and that's it. How should
I make changes to the timer update interval?

Also, is the timer resolution still 'only 10 ms', or can I
actually get the 'best resolution available', and maybe ask
what the resolution is?

Thanks!

– Timo Suoranta – @Timo_K_Suoranta

I have an interactive virtual world simulation
in progress. Currently I have two threads:

  • The main thread sits in a loop, rendering the scene
    and polling for input.

The main thread should have a delay in it somewhere to allow the OS
to task switch to your simulation thread.

  • Before entering that loop I start a timer which runs
    at regular intervals. It updates the world simulation
    (objects' positions and rotations). It also processes
    input events, so that if a control key is being held
    down, that control is applied.

You should probably run the simulation in a separate thread
directly, synchronizing with the main thread. The timer code
works, but it is better suited to small tasks than to running
a full simulation or doing graphics work.
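One common way to structure such a simulation thread is a fixed-timestep accumulator: measure elapsed wall-clock time, consume it in fixed-size steps, and sleep briefly so the renderer gets scheduled. A minimal sketch with the timing logic isolated (the names `advance`, `accum`, and `sim_steps` are my own, not SDL's):

```c
/* Sketch: decouple the simulation rate from rendering with a
 * fixed-step accumulator. Real code would feed advance() the
 * delta from SDL_GetTicks() and sleep with SDL_Delay(). */

#define STEP_MS 10              /* fixed simulation step */

static unsigned long sim_steps; /* fixed steps run so far */

/* Consume elapsed wall-clock time in fixed STEP_MS chunks; the
 * leftover is carried in *accum until the next call. */
void advance(unsigned *accum, unsigned elapsed_ms)
{
    *accum += elapsed_ms;
    while (*accum >= STEP_MS) {
        *accum -= STEP_MS;
        sim_steps++;            /* one fixed simulation step here */
    }
}
```

In the real thread the loop would be roughly: take `now = SDL_GetTicks()`, call `advance(&accum, now - last)`, set `last = now`, then `SDL_Delay(1)` — and that small delay is exactly what gives the main (rendering) thread a chance to run.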

I am not exactly sure how I should use timers. At the moment
I have a single SDL_AddTimer() call, and that's it. How should
I make changes to the timer update interval?

The return value of the timer callback is the duration of the
next interval, in milliseconds.
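So the callback can time its own body with SDL_GetTicks() and return a longer interval when the step ran over budget. A sketch of that idea — the helper `next_interval` and the budget numbers are my own, and the SDL calls appear only in the comment:

```c
/* Sketch: adapt the timer interval to how long the last step took.
 * In the real SDL_AddTimer callback you would wrap the simulation
 * step like this (simulate() is a hypothetical step function):
 *
 *   Uint32 callback(Uint32 interval, void *param) {
 *       Uint32 t0 = SDL_GetTicks();
 *       simulate(interval);              // step by elapsed time
 *       Uint32 cost = SDL_GetTicks() - t0;
 *       return next_interval(cost);      // next interval, in ms
 *   }
 */
typedef unsigned int Uint32;

/* Leave headroom: schedule the next tick at twice the measured
 * cost, clamped between 10 ms (fast machine, 100 updates/s) and
 * 100 ms (slow machine, 10 updates/s). */
Uint32 next_interval(Uint32 cost_ms)
{
    Uint32 next = cost_ms * 2;
    if (next < 10)  next = 10;
    if (next > 100) next = 100;
    return next;
}
```

This also answers the "how do I measure my callback" question: bracket the step with SDL_GetTicks() and subtract.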

Also, is the timer resolution still 'only 10 ms', or can I
actually get the 'best resolution available', and maybe ask
what the resolution is?

Nope, the 10 ms number is a limitation of the multitasking
time slice of most current non-realtime operating systems.

See ya,
-Sam Lantinga, Software Engineer, Blizzard Entertainment

Hi Timo,

(hope this is not coming too late…)

Among the solutions, you might consider changing the
integration method, i.e. from single-step multi-pass
algorithms like RK2 or RK3 to multi-step single-pass
methods like the Adams–Bashforth (AB) family.

You might also consider extracting the fast dynamics and
integrating them at high frequency (small time step), while
the rest, the slow dynamics, is integrated with a bigger
time step.
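A minimal sketch of that split, using forward Euler on two independent decays dx/dt = -k·x purely for illustration (the function and constant names are my own): the fast state is substepped several times inside each slow step.

```c
/* Sketch of multi-rate (split) integration: the fast state is
 * substepped SUBSTEPS times inside each big step of the slow
 * state. Forward Euler on dx/dt = -k*x, for illustration only. */

#define SUBSTEPS 10

void step_split(double *fast, double *slow,
                double k_fast, double k_slow, double h)
{
    /* slow dynamics: one big Euler step of size h */
    *slow += h * (-k_slow * *slow);

    /* fast dynamics: SUBSTEPS small Euler steps of size h/SUBSTEPS,
     * keeping the fast pole inside the method's stability region */
    double hs = h / SUBSTEPS;
    for (int i = 0; i < SUBSTEPS; i++)
        *fast += hs * (-k_fast * *fast);
}
```

With k_fast = 100 and h = 0.05, a single Euler step of size h would be unstable (1 - k·h = -4), while the ten substeps of size 0.005 decay smoothly — which is the whole point of the split.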

You should also check whether you are (unnecessarily)
calculating lots of parameters of the system in every step,
or even in every pass. If the dynamics of those parameters
is slow, you might calculate them only in every 2nd, or 5th,
or so step…

Concerning the maximum time step, the rule of thumb is that
you must identify the fastest pole and take a step which is
10 times faster. Another is to plot the step response of the
fastest pole (isolated, linear representation) and choose the
step so as to have (at least) 4 points in the rising part of
the curve.
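Assuming a first-order pole p (in rad/s), whose time constant is 1/|p| and whose 10–90% rise time is about 2.2/|p|, the two rules of thumb can be written as small helpers (hypothetical names, a sketch only):

```c
#include <math.h>

/* Rule 1: step 10 times faster than the fastest pole's
 * time constant 1/|p|  ->  h = 0.1 / |p|. */
double max_step_tenth(double pole)
{
    return 0.1 / fabs(pole);
}

/* Rule 2: at least 4 points in the rising part of the step
 * response; for a first-order pole the 10-90% rise time is
 * about 2.2/|p|, so  h = (2.2 / |p|) / 4. */
double max_step_rise(double pole)
{
    return (2.2 / fabs(pole)) / 4.0;
}
```

For a pole at 10 rad/s these give 10 ms and 55 ms respectively; as the text says, take them as guidelines, not guarantees.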

You have to be careful: these are guidelines only, and you
have to further analyze the dynamics in connection with the
integration method, because different methods behave
differently when the integration step is increased.

The other thing to check when increasing the time step is
whether the method itself becomes unstable. For example (just
below the instability region), if the integration step is too
long, you might see oscillations in the dynamic system which
in reality does not behave that way…

Although this is theory only, without any particular
suggestions, I do hope this will help a bit :)

May I ask what you are simulating, so that it takes more than
10 ms to calculate the next step? I made a flight vis-sim
where I had a state vector of more than 35 variables and
calculated more than 30 functions of 2 or 3 variables
(interpolation using 4th-order splines) in every pass
(3 passes per step, RK3); the step was 5 ms, and it took less
than 7% of CPU time in the 5 ms window. The machine was a
PII 300.

ciao,

Gordan

gordan.sikic at inet.hr

(hope this is not coming too late…)

Never too late, my project is probably taking
another aleph-null years to finish :)

May I ask what you are simulating, so that it takes more than
10 ms to calculate the next step? I made a flight vis-sim where I

I was testing the Open Dynamics Engine with 20 or more cubes.
No joints; I just stacked some cubes to make up a tower and then
dropped a few cubes nearby so they would push the tower a bit.
The user was also able to drag the cubes manually and see what
happens if you remove the second cube from the bottom. And the
user could 'shoot' little cubes, too.

Something like 16 cubes ran well with a 10 ms step, but when I
had 20 or more cubes, the thing first slowed down a lot and
finally stopped. It was the rendering which stopped; the
update was still running as fast as it could, eating all
my CPU.

Links: http://opende.sf.net and http://glelite.sf.net
(Teddy156c.zip demonstrates ode with Teddy)

– Timo Suoranta – @Timo_K_Suoranta