Strange SDL_Delay behavior

Hi there,
I'm new to SDL and tried a few test programs,
and I can't explain this odd behavior to myself:
when I launch this simple program

#include <iostream>
#include "SDL.h"

int main(int argc, char *argv[])
{
    if (SDL_Init(SDL_INIT_TIMER) == -1) {
        std::cout << "Can't initialize timer" << std::endl;
        return -1;
    }
    Uint32 max = 1000;
    Uint32 min = 1000;
    for (Uint32 i = 0; i < 10; ++i) {
        Uint32 before, after, delta;
        before = SDL_GetTicks();
        SDL_Delay(1000);
        after = SDL_GetTicks();
        delta = after - before;
        if (max < delta) max = delta;
        if (min > delta) min = delta;
        SDL_Quit();
    }
    std::cout << "min=" << min << " max=" << max << std::endl;
}

its output looks like this:
min=991 max=1022
min=995 max=1000

I can see why max=1022 here, but I can't see why SDL_Delay returns
earlier than before + 1000.
It is acceptable for SDL_Delay to take more time than requested, but
I can't see any reason for it to return BEFORE the interval expires. Inside
SDL_systimer.c the select and nanosleep syscalls are wrapped in a do-while
loop catching all EINTRs.
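
For reference, the select-based path follows roughly this pattern (a
simplified sketch of what I mean, not the exact SDL_systimer.c source):

#include <sys/select.h>
#include <sys/time.h>
#include <errno.h>

// Simplified sketch: wait `ms` milliseconds with select(), restarting the
// call when it is interrupted by a signal (EINTR). On Linux, select()
// updates the timeval with the time not slept, so the remaining time is
// carried across restarts.
static void delay_with_select(unsigned int ms)
{
    struct timeval tv;
    int was_error;

    tv.tv_sec  = ms / 1000;
    tv.tv_usec = (ms % 1000) * 1000;

    do {
        errno = 0;
        was_error = select(0, NULL, NULL, NULL, &tv);
    } while (was_error && (errno == EINTR));
}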

I’ve tested this on two machines:
Cel355 with 128M running RedHat 7.3
and
AMD Duron 1200 with 512M running RedHat 9

Could someone explain that?

--
fuxx

Hello Serge,

Wednesday, May 7, 2003, 6:35:59 PM, you wrote:

SSF> I'm new to SDL and tried a few test programs,
SSF> and I can't explain this odd behavior to myself:
SSF> when I launch this simple program

SSF> I can see why max=1022 here, but I can't see why SDL_Delay returns
SSF> earlier than before + 1000.

SDL_Delay precision is platform specific; please read the manual.

--
Lynx,
http://foo.lynx.lv

> Hi there,
> I'm new to SDL and tried a few test programs,
> and I can't explain this odd behavior to myself:
> when I launch this simple program
>
> #include <iostream>
> #include "SDL.h"
>
> int main(int argc, char *argv[])
> {
>     if (SDL_Init(SDL_INIT_TIMER) == -1) {
>         std::cout << "Can't initialize timer" << std::endl;
>         return -1;
>     }
>     Uint32 max = 1000;
>     Uint32 min = 1000;
>     for (Uint32 i = 0; i < 10; ++i) {
>         Uint32 before, after, delta;
>         before = SDL_GetTicks();
>         SDL_Delay(1000);
>         after = SDL_GetTicks();
>         delta = after - before;
>         if (max < delta) max = delta;
>         if (min > delta) min = delta;
>         SDL_Quit();

First, move this statement outside of the loop. You are using SDL
functions after you have shut down SDL. Second, get rid of the extra
call to SDL_GetTicks(). Each time you call a system function you take a
risk of having the process rescheduled, not to mention adding an unknown
delay to the program.

If we assume that the system clock ticks every 10 milliseconds, and that
your request for a 1000 millisecond delay gets converted internally to a
request for a 100 tick delay, then the time until the first tick can be any
value t in the range 0 < t < 10. After the first tick, each further delay is
exactly 10 milliseconds. So the total time can come up short by as much as
9.99… milliseconds, but on average will be short by 5 milliseconds. Which,
BTW, is right in line with what your sample shows.

>     }
>     std::cout << "min=" << min << " max=" << max << std::endl;
> }
>
> its output looks like this:
> min=991 max=1022
> min=995 max=1000
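
Concretely, those two fixes amount to something like the following (a sketch
of a corrected test program, not code from this thread):

#include <iostream>
#include "SDL.h"

int main(int argc, char *argv[])
{
    if (SDL_Init(SDL_INIT_TIMER) == -1) {
        std::cout << "Can't initialize timer" << std::endl;
        return -1;
    }

    Uint32 max = 1000;
    Uint32 min = 1000;
    Uint32 prev = SDL_GetTicks();
    for (Uint32 i = 0; i < 10; ++i) {
        SDL_Delay(1000);
        Uint32 now = SDL_GetTicks();   // one SDL_GetTicks() per iteration
        Uint32 delta = now - prev;
        prev = now;
        if (max < delta) max = delta;
        if (min > delta) min = delta;
    }
    SDL_Quit();                        // only after the last SDL call

    std::cout << "min=" << min << " max=" << max << std::endl;
    return 0;
}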

BTW, when doing statistical sampling like this you need a larger sample
size than you generated.

All in all, it looks to me like you are seeing exactly what you should
be seeing.
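
To make that arithmetic concrete, here is a small stand-alone simulation of
the model above (the 10 ms tick and the uniformly random call phase are
assumptions, purely for illustration):

#include <iostream>
#include <cstdlib>

int main()
{
    const double tick = 10.0;           // assumed scheduler tick, in ms
    double min = 1000.0, max = 1000.0;

    for (int i = 0; i < 100000; ++i) {
        // Phase of the call within the current tick: 0 <= t < 10 ms.
        double t = tick * std::rand() / (RAND_MAX + 1.0);
        // A select()-style delay wakes after 100 whole ticks, so the
        // measured interval is 100*10 - t, i.e. short by up to one tick.
        double delta = 100.0 * tick - t;
        if (delta < min) min = delta;
        if (delta > max) max = delta;
    }
    std::cout << "min=" << min << " max=" << max << std::endl;  // min ~990, max 1000
    return 0;
}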

  Bob Pendleton

On Wed, 2003-05-07 at 10:35, Serge S. Fukanchik wrote:

> I can see why max=1022 here, but I can't see why SDL_Delay returns
> earlier than before + 1000.
> It is acceptable for SDL_Delay to take more time than requested, but
> I can't see any reason for it to return BEFORE the interval expires. Inside
> SDL_systimer.c the select and nanosleep syscalls are wrapped in a do-while
> loop catching all EINTRs.
>
> I've tested this on two machines:
> Cel355 with 128M running RedHat 7.3
> and
> AMD Duron 1200 with 512M running RedHat 9
>
> Could someone explain that?
>
> fuxx



+-----------------------------------+
+ Bob Pendleton: independent writer +
+ and programmer.                   +
+ email: Bob at Pendleton.com       +
+-----------------------------------+

Bob Pendleton wrote:

> First, move this statement outside of the loop. You are using SDL
> functions after you have shut down SDL.

You are right, Bob. That was my mistake, but not a fatal one, since SDL_Quit()
does not affect anything related to SDL_GetTicks and SDL_Delay.

> Second, get rid of the extra
> call to SDL_GetTicks(). Each time you call a system function you take a
> risk of having the process rescheduled, not to mention adding an unknown
> delay to the program.

I do not care about additional time within SDL_Delay; it satisfies me
completely for now. And though I removed the additional GetTicks from my code,
SDL_Delay uses it internally.

> If we assume that the system clock ticks every 10 milliseconds, and that
> your request for a 1000 millisecond delay gets converted internally to a
> request for a 100 tick delay, then the time until the first tick can be any
> value t in the range 0 < t < 10. After the first tick, each further delay is
> exactly 10 milliseconds. So the total time can come up short by as much as
> 9.99… milliseconds, but on average will be short by 5 milliseconds. Which,
> BTW, is right in line with what your sample shows.

Excellent idea, and it explains everything to me, particularly why the test
comes up short by the same amounts on two very different machines.

> BTW, when doing statistical sampling like this you need a larger sample
> size than you generated.

After reading the docs I expected SDL_Delay to delay for at least the
specified interval. Even a single test run with a smaller delay proves I was
wrong, so why bother with larger samples? Imagine doing a select syscall with
the timeout set to NULL, expecting it to wait indefinitely, and it returns much
earlier than eternity, without an interrupted syscall or anything happening on
one of the descriptors?

> All in all, it looks to me like you are seeing exactly what you should
> be seeing.
>
>   Bob Pendleton

Not agreed. I expected SDL_Delay to wait for at least the specified interval,
without having to wrap it in something like

for (delay = time_left - SDL_GetTicks(); delay > 0; delay = time_left - SDL_GetTicks())
    SDL_Delay(delay);

but the test case proved I was wrong. So should every call to SDL_Delay be wrapped?
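
For completeness, such a wrapper could look roughly like this (just a sketch
built on SDL_GetTicks() and SDL_Delay(); delay_at_least is a hypothetical
helper name, not an SDL function):

#include "SDL.h"

// Sketch only: keep delaying until the requested interval has really
// elapsed according to SDL_GetTicks(), in case SDL_Delay() wakes up early.
static void delay_at_least(Uint32 ms)
{
    Uint32 deadline = SDL_GetTicks() + ms;
    Sint32 remaining = (Sint32)ms;
    while (remaining > 0) {
        SDL_Delay((Uint32)remaining);
        remaining = (Sint32)(deadline - SDL_GetTicks());
    }
}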

A short investigation helped to find the reason: on Linux SDL_Delay uses
'select', and on many other unices it uses 'nanosleep'. man select explains
little about the interval, but man nanosleep says exactly that 'nanosleep
delays for at least the specified interval'. And so it is: when I recompiled
libSDL with the USE_NANOSLEEP option it works fine.
Obviously such syscalls can only wake a process on the boundary of a time
slice, and nanosleep adds one more slice to its interval while select uses
it as is.
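
The nanosleep-based path, by contrast, cannot return early; roughly (again a
simplified sketch, not the exact SDL source):

#include <time.h>
#include <errno.h>

// Simplified sketch: sleep for `ms` milliseconds with nanosleep(), which
// waits for *at least* the requested interval and, when interrupted by a
// signal (EINTR), reports the time left so the sleep can be resumed.
static void delay_with_nanosleep(unsigned int ms)
{
    struct timespec req, rem;
    req.tv_sec  = ms / 1000;
    req.tv_nsec = (long)(ms % 1000) * 1000000;

    while (nanosleep(&req, &rem) == -1 && errno == EINTR)
        req = rem;   // resume with the remaining time
}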

I tested this on Windows 98 and it works fine there too. So don't you think
this is a bug?

My new testing code is attached; it passes on my machine which uses nanosleep
and still fails on the machine which uses select.

--
fuxx

-------------- next part --------------
An embedded and charset-unspecified text was scrubbed…
Name: test.cc
URL: http://lists.libsdl.org/pipermail/sdl-libsdl.org/attachments/20040511/8ed09ee0/attachment.txt