W2k SDL_GetTicks() problem solved (patch included)!

My patch for SDL_GetTicks() does exactly what Mattias suggested:
check at runtime whether the high-resolution counter is available, and if so,
use it! It is available on Win95/98/ME/NT/2000/CE, that is, on virtually
any Win32 platform, so the patch could in fact replace the current
implementation. But I chose the conservative method of using it only
if the system says that it is really available.
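
For illustration only (this is not the actual patch), a minimal sketch of that runtime selection, assuming Win32 with the winmm multimedia timer as the fallback; the function names are hypothetical:

#include <windows.h>
#include <mmsystem.h>   /* timeGetTime(); link with winmm.lib */

static LARGE_INTEGER hires_freq;    /* counts per second */
static LARGE_INTEGER hires_start;
static DWORD         lowres_start;
static int           use_hires;

void my_start_ticks(void)           /* hypothetical, like SDL_StartTicks() */
{
    /* QueryPerformanceFrequency() returns zero when no high-resolution
       counter is available; fall back to timeGetTime() in that case. */
    use_hires = (QueryPerformanceFrequency(&hires_freq) != 0);
    if (use_hires)
        QueryPerformanceCounter(&hires_start);
    else
        lowres_start = timeGetTime();
}

DWORD my_get_ticks(void)            /* hypothetical, mirrors SDL_GetTicks() */
{
    if (use_hires) {
        LARGE_INTEGER now;
        QueryPerformanceCounter(&now);
        /* Convert counter ticks to milliseconds. */
        return (DWORD)(((now.QuadPart - hires_start.QuadPart) * 1000)
                       / hires_freq.QuadPart);
    }
    return timeGetTime() - lowres_start;
}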

It’s available on virtually every Win32 platform, but not on every
processor (486?), so the fallback is needed.

Right; only a few (non-intel) 486 class CPUs have a TSC equivalent, so don’t
count on it unless the minimum spec is Pentium.

The Cyrix 6x86, which IS Pentium class, doesn’t have a TSC either,
AFAIK.
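
For illustration, a minimal sketch of checking the TSC feature bit at runtime rather than guessing from the CPU family (assumptions: 32-bit x86, GCC-style inline asm, and that CPUID itself is supported, which very old 486s may not guarantee):

#include <stdio.h>

static int has_tsc(void)
{
    unsigned int eax, ebx, ecx, edx;
    /* CPUID function 1 returns the feature flags in EDX; bit 4 is TSC. */
    __asm__ __volatile__("cpuid"
                         : "=a"(eax), "=b"(ebx), "=c"(ecx), "=d"(edx)
                         : "a"(1));
    return (edx >> 4) & 1;
}

int main(void)
{
    printf("TSC %savailable\n", has_tsc() ? "" : "not ");
    return 0;
}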

Bye,
Martin

On Tue, 3 Jul 2001 15:37:44 +0200, David Olofson wrote:

On Tuesday 03 July 2001 11:19, Florian ‘Proff’ Schulze wrote:

There are a lot of security holes in Win32, but I can’t believe that
Microsoft would have made such an error with Windows NT.

Anyway, your remark is good.
Needs testing…

Does anyone know whether timeBeginPeriod() and timeEndPeriod() affect
timeGetTime() on an application-wide scale or on a system-wide scale? In
other words, if your app calls timeBeginPeriod(), will it change the timing
resolution of other applications? If it does, then any other application
could change your application’s timing resolution, which is probably not
what you want…

Matthijs

Visit my page @ www.shakeyourass.org
Listen to my music @ www.mp3.com/mothergoose
Buzz with the Bees @ www.virtualunlimited.com

It is global AFAIK, but according to the docs, timeBeginPeriod() doesn’t
force a new resolution; it only ensures that (if possible) you get at least
the resolution you request. That is, if some other application asks for 5 ms
resolution, nothing will change - your 1 ms resolution is what determines the
actual resolution (ie the rate of the IRQ driving the multimedia timers),
until you call timeEndPeriod().
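
A minimal sketch of that usage pattern, assuming Win32/winmm (the hook names are hypothetical): request 1 ms once at startup and release it at shutdown, rather than around every timeGetTime() call.

#include <windows.h>
#include <mmsystem.h>   /* link with winmm.lib */

void timer_init(void)   /* hypothetical startup hook */
{
    /* Ask for 1 ms resolution; the system runs at the finest period
       any application has requested, so this never makes another
       application's resolution coarser. */
    timeBeginPeriod(1);
}

void timer_quit(void)   /* hypothetical cleanup hook */
{
    /* Every timeBeginPeriod() call must be matched by a
       timeEndPeriod() call with the same value. */
    timeEndPeriod(1);
}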

//David Olofson — Programmer, Reologica Instruments AB

.- M A I A -------------------------------------------------.
| Multimedia Application Integration Architecture |
| A Free/Open Source Plugin API for Professional Multimedia |
----------------------> http://www.linuxaudiodev.com/maia -'
.- David Olofson -------------------------------------------.
| Audio Hacker - Open Source Advocate - Singer - Songwriter |
--------------------------------------> david at linuxdj.com -'

On Tuesday 03 July 2001 16:24, Matthijs Hollemans wrote:

“timeBeginPeriod”/“timeEndPeriod”, however, are just functions that configure
the behaviour of the “timeGetTime” function (if I have understood
the MSDN docs correctly).

Does anyone know whether timeBeginPeriod() and timeEndPeriod() affect
timeGetTime() on an application-wide scale or on a system-wide scale? In
other words, if your app calls timeBeginPeriod(), will it change the timing
resolution of other applications? If it does, then any other application
could change your application’s timing resolution, which is probably not
what you want…

Well, yes, unless the OS checks the CPU clock against the RTC at startup, the
TSC will most likely drift a great deal more than the RTC. However, you
should expect these “multimedia timer” things to drift some. In fact, you
shouldn’t even rely on the RTCs of two computers to stay in sync for extended
periods of time.

Another thing to think about is that the counter comes from the CURRENT CPU,
which means that on multiprocessor systems with slightly varying core
frequencies, the values you get from the TSC will jitter slightly depending on
which CPU you’re running on. In fact, it’s not even guaranteed that the CPUs
will have the same count, depending on how the CPU interlock startup works.

In practice it seems that even with these pitfalls it’s still more accurate
than timeGetTime() (and on Linux avoids a kernel context switch), but it’s
something to be aware of if you require real-time precise timing.

Oh, yes, I have Linux code for rdtsc as well. :)
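
Not that code, but for illustration, a minimal sketch of reading the counter directly (assumptions: x86, GCC inline asm). Note that the value is per-CPU, so on SMP machines it can jump when the process migrates to another processor.

#include <stdio.h>

typedef unsigned long long u64;

static u64 read_tsc(void)
{
    unsigned int lo, hi;
    /* RDTSC returns the 64-bit time-stamp counter in EDX:EAX. */
    __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
    return ((u64)hi << 32) | lo;
}

int main(void)
{
    u64 a = read_tsc();
    u64 b = read_tsc();
    printf("delta: %llu cycles\n", b - a);
    return 0;
}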

See ya,
-Sam Lantinga, Lead Programmer, Loki Software, Inc.

Hi,

Xavier Le Pasteur wrote:

But there is still a question:

  • should we provide the best timer granularity possible
    (more accurate),
    or
  • should we just provide a timer granularity of 1 ms always
    (more portable)?

I definitely prefer the “portable” way. A timer/counter precision
of 1 ms is just fine for the current implementation of SDL_GetTicks(),
I think.

I agree with you.
We should submit this idea to the SDL community…

Sorry – just redirected the thread back to the SDL mailing list. ;)

(For the list: It accidentally went on as a private talk…)

One thing that isn’t clear to me so far is the following:
Should I use “timeBeginPeriod”/“timeEndPeriod” around each call of
“timeGetTime”, or are these functions intended to be used in the
start and cleanup stuff (and therefore called once for each running
instance of the program)? I haven’t found many details about this
in the MSDN docs.

It isn’t clear to me either :(
I believe these functions are intended to be used in the start and
cleanup stuff; if not, I can’t understand how these functions could
work…

That sounds correct.
For “timeBeginPeriod”, we would then have “SDL_StartTicks()”, but
I’m not quite sure where to place “timeEndPeriod”. A reasonable
operating system should reset this after the program exits
(although I won’t rely on this…).
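
One possible way to handle that, sketched here under the assumption that the C runtime’s atexit() is acceptable for the cleanup (the init function name is hypothetical):

#include <stdlib.h>
#include <windows.h>
#include <mmsystem.h>   /* link with winmm.lib */

static void end_period(void)
{
    timeEndPeriod(1);
}

void my_start_ticks(void)   /* hypothetical, like SDL_StartTicks() */
{
    /* Register the matching timeEndPeriod() so it runs at normal
       program exit, even without an explicit cleanup entry point. */
    if (timeBeginPeriod(1) == TIMERR_NOERROR)
        atexit(end_period);
}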

This (again) brings up the question of whether “timeBeginPeriod” sets the
timer granularity for the whole system or only for the calling process
(with all its implications).

Or we could add a flag to SDL_Init.
If the user selects timer initialization, then “timeBeginPeriod” is called…

Hmmm, I think a resolution of 1 ms for SDL_GetTicks() should be the
standard behaviour, without the need to explicitly ask for it.

Best regards,
Holger–
holger.schemel at mediaways.net … ++49 +5246 80 1438

Oh, yes, I have Linux code for rdtsc as well. :)

which we hardly need in SDL. gettimeofday() is portable and gives us the
resolution we need (1 ms)
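
For illustration, a minimal sketch of a gettimeofday()-based millisecond tick counter (assuming POSIX; the function names are hypothetical, not SDL’s actual implementation):

#include <sys/time.h>

static struct timeval start;

void my_start_ticks(void)       /* hypothetical init */
{
    gettimeofday(&start, NULL);
}

unsigned int my_get_ticks(void) /* hypothetical, mirrors SDL_GetTicks() */
{
    struct timeval now;
    gettimeofday(&now, NULL);
    /* Milliseconds elapsed since my_start_ticks() was called. */
    return (unsigned int)((now.tv_sec - start.tv_sec) * 1000
                          + (now.tv_usec - start.tv_usec) / 1000);
}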

Does it deal with multiprocessors as well? (My current version - which I’m
using for that retrace sync + “half buffering” hack - doesn’t…)

//David Olofson — Programmer, Reologica Instruments AB

.- M A I A -------------------------------------------------.
| Multimedia Application Integration Architecture |
| A Free/Open Source Plugin API for Professional Multimedia |
----------------------> http://www.linuxaudiodev.com/maia -'
.- David Olofson -------------------------------------------.
| Audio Hacker - Open Source Advocate - Singer - Songwriter |
--------------------------------------> david at linuxdj.com -'

On Tuesday 03 July 2001 18:21, Sam Lantinga wrote:

Well, yes, unless the OS checks the CPU clock against the RTC at startup,
the TSC will most likely drift a great deal more than the RTC. However,
you should expect these “multimedia timer” things to drift some. In
fact, you shouldn’t even rely on the RTCs of two computers to stay in
sync for extended periods of time.

Another thing to think about is that the counter comes from the CURRENT
CPU, which means that on multiprocessor systems with slightly varying core
frequencies, the values you get from the TSC will jitter slightly depending on
which CPU you’re running on. In fact, it’s not even guaranteed that the
CPUs will have the same count, depending on how the CPU interlock startup
works.

In practice it seems that even with these pitfalls it’s still more accurate
than timeGetTime() (and on Linux avoids a kernel context switch), but it’s
something to be aware of if you require real-time precise timing.

Oh, yes, I have Linux code for rdtsc as well. :)

Well, my retrace sync hack can’t do very well with ms resolution, but that’s
another story… (It’s non-portable and even hardware dependent for other
reasons anyway, so it might as well use its own timing code.)

(Yeah, I will post a demo of the retrace sync + “half-buffering” algo any
year now…! ;)

//David Olofson — Programmer, Reologica Instruments AB

.- M A I A -------------------------------------------------.
| Multimedia Application Integration Architecture |
| A Free/Open Source Plugin API for Professional Multimedia |
----------------------> http://www.linuxaudiodev.com/maia -'
.- David Olofson -------------------------------------------.
| Audio Hacker - Open Source Advocate - Singer - Songwriter |
--------------------------------------> david at linuxdj.com -'

On Tuesday 03 July 2001 18:57, Mattias Engdegård wrote:

Oh, yes, I have Linux code for rdtsc as well. :)

which we hardly need in SDL. gettimeofday() is portable and gives us the
resolution we need (1 ms)

Mattias Engdegård wrote:

Oh, yes, I have Linux code for rdtsc as well. :)

which we hardly need in SDL. gettimeofday() is portable and gives us the
resolution we need (1 ms)

Which is also based on rdtsc and sometimes fails on my computer
if the CPUs are out of sync.

Bye,
Johns–
Become famous, earn no money, create music or graphics for FreeCraft

http://FreeCraft.Org - A free fantasy real-time strategy game engine
http://fgp.cjb.net - The FreeCraft Graphics Project

Which is also based on rdtsc and sometimes fails on my computer
if the CPUs are out of sync.

which does not change the fact that gettimeofday() remains the best
thing to use under any unix right now

Under what circumstances do your CPUs get out of sync? A kernel bug?
Power-saving clockdown?

(If you are running Linux, try building your kernel with CONFIG_X86_TSC
unset - this should use the less precise RTC but not get out of sync)

Mattias Engdegård wrote:

Which is also based on rdtsc and sometimes fails on my computer
if the CPUs are out of sync.

which does not change the fact that gettimeofday() remains the best
thing to use under any unix right now

Yes. I only wanted to mention it, because I spent many hours finding out
that SDL_GetTicks() returned different values depending on which CPU
it was called on.

Under what circumstances do your CPUs get out of sync? A kernel bug?
Power-saving clockdown?

I don’t know. After rebooting, everything was fine. Perhaps an overheat
clockdown (IIRC, a P3 does this quickly).

(If you are running Linux, try building your kernel with CONFIG_X86_TSC
unset - this should use the less precise RTC but not get out of sync)

I will try this in combination with the new 2.4.6.

Bye,
Johns–
Become famous, earn no money, create music or graphics for FreeCraft

http://FreeCraft.Org - A free fantasy real-time strategy game engine
http://fgp.cjb.net - The FreeCraft Graphics Project

How do I tell mingw32 that it should use D:\ as /
and not C:\, which is the default?

I need this because I do not have the rights to create
/tmp on C:, and sh complains about the missing /tmp.
Thanks!

– Timo Suoranta – @Timo_K_Suoranta

I don’t know the answer to your question, but I know the solution to
your problem.

Set the environment variable:
TMPDIR=D:/tmp

and gcc will put its temporary files in this directory…