Bob Pendleton wrote:
First, move this statement outside of the loop. You are using SDL
functions after you have shut down SDL.
You are right, Bob. That was my mistake, but not a fatal one, since SDL_Quit()
does not affect anything related to SDL_GetTicks() or SDL_Delay().
Second, get rid of the extra
call to SDL_GetTicks(). Each time you call a system function you take a
risk of having the process rescheduled, not to mention adding an unknown
delay to the program.
I do not care about the additional time spent inside SDL_Delay; for now it
satisfies me completely. Though I removed the extra SDL_GetTicks() call from
my code, SDL_Delay uses it internally anyway.
If we assume that the system clock ticks every 10 milliseconds, and we
assume that your request for a 1000 millisecond delay gets converted
internally into a request for a 100 tick delay, then the time until the
first tick can be any value t in the range 0 < t < 10. After the first
tick the delay is always 10 milliseconds. So the time can come up short
by as much as 9.99… milliseconds, but on average will be short by 5
milliseconds. Which, BTW, is right on what your sample shows.
Excellent idea, and it explains everything to me, particularly why the test
comes up short by the same values on two very different machines.
BTW, when doing statistical sampling like this you need a larger sample
size than you generated.
After reading the docs I expected SDL_Delay to delay for at least the
specified interval. Even a single test run with a smaller delay proves I was
wrong, so why bother with larger samples? Consider calling select with the
interval set to NULL, expecting it to wait indefinitely, and having it return
much earlier than eternity without the syscall being interrupted or anything
happening on one of the descriptors.
All in all, it looks to me like you are seeing exactly what you should
be seeing.
Bob Pendleton
Not agreed. I expected SDL_Delay to wait for at least the specified interval
without having to wrap it like this:

    for (delay = time_left - SDL_GetTicks(); delay > 0; delay = time_left - SDL_GetTicks())
        SDL_Delay(delay);

but the test case proved I was wrong. So must every call to SDL_Delay be wrapped?
A short investigation revealed the reason: on Linux SDL_Delay uses select, while
on many other Unices it uses nanosleep. man select says little about the
interval, but man nanosleep states exactly that nanosleep delays for at least
the specified interval. And so it does: when I recompiled libSDL with the
USE_NANOSLEEP option, it works fine.
Obviously such syscalls can only wake a process on a time-slice boundary, and
nanosleep adds one more slice to its interval, while select uses the interval as is.
I tested this on Windows 98 and it works fine there too. So don't you think
this is a bug?
My new testing code is attached; it passes on my machine, which uses nanosleep,
and still fails on the machine which uses select.
--
fuxx
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed…
Name: test.cc
URL: http://lists.libsdl.org/pipermail/sdl-libsdl.org/attachments/20040511/8ed09ee0/attachment.txt