here I have another patch against SDL_systimer.c regarding
SDL_GetTicks() – to be applied to SDL_systimer.c after having
applied the first patch from my mail from yesterday.
It is only one additional line in SDL_StartTicks() that does the
following:
If QueryPerformanceFrequency() reports that no high-resolution
counter is available (which would limit the precision of
SDL_GetTicks() to only 10-15 ms under Windows NT/2000), it calls
"timeBeginPeriod(1)", which sets the counter precision of "timeGetTime()"
used by SDL_GetTicks() to 1 ms, which indeed works that way under
Windows NT/2000 (all tests done on Windows 2000, BTW).
This function is therefore only used if QueryPerformanceCounter() states
that there is no high-precision counter available on that system.
Here are my investigations and conclusions that let me think that this
is the right precedence. (Again, all tests done with Windows 2000.)
When using standard SDL_GetTicks(), you get a precision of 10-15 ms.
When using QueryPerformanceCounter(), you get a precision of 1 ms.
When using timeBeginPeriod(1) once at startup, you get a precision of 1 ms.
When using timeGetTime() (like in plain SDL 1.2.1), but with each call
directly surrounded by timeBeginPeriod(1) and timeEndPeriod(1) like this:

    timeBeginPeriod(1);
    now = timeGetTime();
    timeEndPeriod(1);

…you get a totally messed up “precision” of randomly 1-15 ms – unusable.
When starting an SDL program with the “use timeBeginPeriod(1) once at
startup” patch, all other SDL programs running at the same time (that use
standard SDL with normally 10-15 ms precision) suddenly also get an
SDL_GetTicks() precision of 1 ms. Conclusion: timeBeginPeriod(1) has a
system-wide effect. But timeEndPeriod(1) has this effect, too,
lowering the precision of all SDL programs (with precision raised once at
startup) back to 10-15 ms.
Interesting performance observations:
The tests were done on a PentiumIII/800MHz/nvidiaTNT2 system and the game
Rocks’n’Diamonds, doing some soft-scrolling at fixed 50 FPS.
Additionally, the Windows 2000 task manager was used to monitor CPU usage.
When using QueryPerformanceCounter() for 1 ms accurate timing,
the CPU usage when scrolling was around 10%.
When using timeBeginPeriod(1) (once at startup) for 1 ms timing,
the CPU usage when scrolling was around 66%!
Conclusions for the best approach:
- Use 1 ms precision counter by QueryPerformanceCounter(), where the
high-precision counter is available.
- Use 1 ms precision counter by calling timeBeginPeriod(1) once at
startup (with apparently higher CPU load and side-effects on other
timeGetTime-using applications), where the high-precision counter
is NOT available, which is far better than plain timeGetTime() on
Windows NT/2000 systems.
Some (slightly confusing) remarks on timeBeginPeriod from the MSDN:
“Call this function immediately before using timer services, and call the timeEndPeriod function immediately after you are finished using the timer services.”
Unfortunately, they don’t specify what “using timer services” means –
“immediately before” and “immediately after” in the sense of “surrounding”
each call (see above) obviously does not work.
The only thing that would make sense this way is a special "busy-wait"
function like this (just a prototype):
    timeBeginPeriod(1);
    start = timeGetTime();
    while (timeGetTime() < start + milliseconds)
        do_nothing();
    timeEndPeriod(1);
But then, you might need a 1 ms precise counter not only for busy-waiting.
In my opinion, the combination of the two patches (from my last mail and
this mail) seems to offer the best (== practical) solution for all WIN32
systems.
Any comments welcome!
holger.schemel at mediaways.net
-------------- next part --------------
--- SDL_systimer.c.patched Mon Jul 2 22:14:29 2001
+++ SDL_systimer.c Tue Jul 3 23:03:04 2001
@@ -65,6 +65,7 @@
 	hires_timer_available = FALSE;
+	timeBeginPeriod(1);	/* use 1 ms timer precision */
 	start = timeGetTime();