Misuse of SDL_Delay?

Hello,

I have a very basic question (sorry for the inconvenience).
According to the documentation, time intervals managed by SDL_Delay are
multiples of some time unit, which on my Linux distribution (SuSE 8.1) is
10 ms. With a small test program I have checked (in agreement with
http://www.ifm.liu.se/~ulfek/projects/timers/ mentioned in a recent post)
that the argument of SDL_Delay is in fact rounded up to the next multiple
of 10 ms.

But in my short life as an SDL fan (I’m a newbie) I have seen lines like
SDL_Delay(1), SDL_Delay(5), etc. several times. What for?
I’ve also seen SDL_Delay(20 - elapsed_time) used to synchronize blitting
with the 50 Hz display scan (e.g. in the archives,
http://www.libsdl.org/pipermail/sdl/2003-September/056520.html), but it
seems to me that this could badly worsen the flickering, because with
elapsed_time = 19 it is equivalent to SDL_Delay(10).
What point am I missing?

Thank you,

Nieto Abril.


A) Some people don’t realize that SDL_Delay isn’t precise and just never
check. That is, they’re making false assumptions.

B) Windows has a smaller timer granularity, so SDL_Delay(5) gives you a
5 ms delay on that platform. Lots and lots and lots of people don’t
care about Linux.

C) The Linux 2.6+ kernel has a smaller timer granularity too, which
means that they might be writing for the future.

C pt2) Games often want to give up control, but as little as possible.
Since the whole SDL_Delay(0) trick will (I’ve heard) piss off the
scheduler in Linux 2.6, the next best thing is to delay 1, take the 10 ms
hit for now, and recompile when 2.6 is more common.
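
For illustration, here is a minimal sketch of the SDL_Delay(1) idiom being
discussed: a bare SDL 1.2 main loop that yields once per frame. The window
size and flags are arbitrary placeholders, and this is only a sketch, not
code taken from any of the posts.

/* Sketch: an SDL 1.2 main loop that yields with SDL_Delay(1). On a kernel
 * with 10 ms timer granularity the call may actually sleep close to 10 ms;
 * on Windows or a Linux 2.6 kernel it stays near the requested 1 ms. */
#include "SDL.h"

int main(int argc, char *argv[])
{
    if (SDL_Init(SDL_INIT_VIDEO) < 0)
        return 1;

    SDL_Surface *screen = SDL_SetVideoMode(640, 480, 0, SDL_SWSURFACE);
    if (screen == NULL) {
        SDL_Quit();
        return 1;
    }

    int running = 1;
    while (running) {
        SDL_Event event;
        while (SDL_PollEvent(&event)) {
            if (event.type == SDL_QUIT)
                running = 0;
        }

        /* ... update game state and blit here ... */
        SDL_Flip(screen);

        SDL_Delay(1);   /* give up the rest of the timeslice */
    }

    SDL_Quit();
    return 0;
}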



In many cases people simply don’t understand what you are coming to
understand. In some cases they actually know what they are doing.

Systems seem to come in two classes: those with a clock tick of 10
milliseconds and those with a clock tick of 1 millisecond or less. On
the second class of computers it makes perfect sense to use delays of a
small number of milliseconds.

On the first class of computers, those with 10 millisecond clocks, a call
to SDL_Delay() with an argument in the range of 0-10 says that you
should just wait until the next clock tick. You have an equal chance of
waiting a tiny amount more than zero and of waiting a tiny amount less
than 10 milliseconds, so the average wait is 5 milliseconds. The
interesting thing is that a call to SDL_Delay(5) gives you an average
delay of exactly what you asked for.

On a computer with a 10 millisecond clock, if you ask for a delay of 11
milliseconds you will wait for 2 clock ticks to pass. That means you will
wait an average of 5 milliseconds for the first clock tick and 10
milliseconds for the second tick, for an average delay of 15
milliseconds.
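
To see the rounding directly, here is a small test sketch in the spirit of
the “baby-program” mentioned in the original question (again a sketch of my
own, not code from the thread): it times each requested delay with
SDL_GetTicks and prints requested versus measured milliseconds.

/* Sketch: print requested vs. measured SDL_Delay times. On a system with a
 * 10 ms clock tick the measured values cluster around multiples of 10 ms
 * rather than matching the request exactly. */
#include <stdio.h>
#include "SDL.h"

int main(int argc, char *argv[])
{
    Uint32 req;

    if (SDL_Init(SDL_INIT_TIMER) < 0)
        return 1;

    for (req = 1; req <= 20; req++) {
        Uint32 before = SDL_GetTicks();
        SDL_Delay(req);
        Uint32 after = SDL_GetTicks();
        printf("requested %2u ms, measured %2u ms\n",
               (unsigned)req, (unsigned)(after - before));
    }

    SDL_Quit();
    return 0;
}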

Using code like SDL_Delay(50 - (how long it took to get here)) actually
works pretty well. It doesn’t add to the flicker. I prefer to add a
delay of 5 milliseconds if the program is trying to run faster than 100
frames/second. Doing so can dramatically reduce the load on the computer
and get you smoother animation.

	Bob Pendleton
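
A sketch of the frame-limiting pattern Bob describes, with an arbitrary
50 ms (20 frames/second) target. This is my own illustration rather than
code from the post; the guard on the subtraction is there so a slow frame
never turns the unsigned argument to SDL_Delay into a huge value.

/* Sketch: cap the loop at one frame every 50 ms. A frame that takes longer
 * than the budget still yields briefly instead of underflowing the delay. */
#include "SDL.h"

void main_loop(SDL_Surface *screen)
{
    const Uint32 frame_ms = 50;   /* 50 ms per frame -> 20 frames/second */
    int running = 1;

    while (running) {
        Uint32 start = SDL_GetTicks();

        SDL_Event event;
        while (SDL_PollEvent(&event)) {
            if (event.type == SDL_QUIT)
                running = 0;
        }

        /* ... update and blit ... */
        SDL_Flip(screen);

        Uint32 elapsed = SDL_GetTicks() - start;
        if (elapsed < frame_ms)
            SDL_Delay(frame_ms - elapsed);   /* sleep away the rest of the frame */
        else
            SDL_Delay(1);                    /* running behind: still yield */
    }
}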


Does this mean that SDL_GetTicks also gives a number that should be
multiplied by 5 or 10 ms on some systems?

Daniel Uppström



We’re not talking about multiplication of time units, we’re talking about
rounding them.

Chris Seaton


Alan Wolfe wrote:

I think what he means is: is SDL_GetTicks only accurate to approximately 10 ms as well?

It is platform-dependent too, but you get better precision than 10 ms.
Most systems will give 1 ms precision. For example, on x86 (running
Windows, Linux, BSD…) this is very accurate (because it uses the x86
CPU clock count). Most Unices also have good timer precision, so I think
you can pretty safely assume good precision.

The issue with SDL_Delay is really that it puts your program to sleep
and it must be brought back by the scheduler, which usually runs another
program in between for some milliseconds. With SDL_GetTicks the scheduler
isn’t involved, so there’s no such issue.

Stephane
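
If you want to check the resolution of SDL_GetTicks on a particular system,
a quick probe (a sketch only, not code from the thread) is to spin until the
returned value changes and look at the size of the step:

/* Sketch: busy-wait until SDL_GetTicks advances and report the step size.
 * On most systems the counter moves in 1 ms steps, independently of the
 * coarser granularity that SDL_Delay may have. */
#include <stdio.h>
#include "SDL.h"

int main(int argc, char *argv[])
{
    int i;

    if (SDL_Init(SDL_INIT_TIMER) < 0)
        return 1;

    for (i = 0; i < 10; i++) {
        Uint32 t0 = SDL_GetTicks();
        Uint32 t1;
        do {
            t1 = SDL_GetTicks();
        } while (t1 == t0);
        printf("counter stepped by %u ms\n", (unsigned)(t1 - t0));
    }

    SDL_Quit();
    return 0;
}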

With the latest 2.6.1 kernel, I usually get an average of 1-2 ms wait
times for SDL_Delay(1). This is a huge improvement over 2.4.x, where
an SDL_Delay(1) could result in (up to) a 15ms wait.

Steve


Does this mean that SDL_GetTicks also gives a number that should be
multiplied by 5 or 10 ms on some systems?

No. It returns a value in milliseconds. If a conversion is needed to go
from system clock ticks to milliseconds, SDL does it for you.

	Bob Pendleton


The time granularity of Linux is controlled by the HZ constant in the
kernel.

The previous series of Linux kernels used HZ=100 on i386, implying a
granularity of 10 msec.

The new 2.6 series sets HZ to 1000, yielding a 1 msec granularity.

Christophe Pallier
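
The HZ value translates directly into the smallest sleep the kernel can
schedule: roughly 1000/HZ milliseconds. You can estimate it empirically
without SDL at all; the following is a rough sketch of my own (the iteration
count is arbitrary) that asks nanosleep for 1 ms repeatedly and averages the
real elapsed time.

/* Sketch: estimate the kernel timer granularity by timing short sleeps.
 * With HZ=100 a 1 ms request typically comes back after 10-20 ms; with
 * HZ=1000 it comes back after roughly 1-2 ms. */
#include <stdio.h>
#include <time.h>
#include <sys/time.h>

int main(void)
{
    struct timespec req = { 0, 1000000 };   /* ask for 1 ms */
    struct timeval start, end;
    double total_ms = 0.0;
    int i, n = 100;

    for (i = 0; i < n; i++) {
        gettimeofday(&start, NULL);
        nanosleep(&req, NULL);
        gettimeofday(&end, NULL);
        total_ms += (end.tv_sec - start.tv_sec) * 1000.0
                  + (end.tv_usec - start.tv_usec) / 1000.0;
    }

    printf("average sleep for a 1 ms request: %.2f ms\n", total_ms / n);
    return 0;
}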