Higher precision SDL_GetTicks()?

Hi folks,

I do research in cognitive science where phenomena of interest are on
the order of a few milliseconds, so I’m wondering if there is any way
for me to modify SDL 1.3 so that SDL_GetTicks() is more precise than
its current millisecond resolution.

Cheers,

Mike
Mike Lawrence
Graduate Student
Department of Psychology
Dalhousie University

Looking to arrange a meeting? Check my public calendar: http://goo.gl/BYH99

~ Certainty is folly… I think. ~

Check this out

http://forums.libsdl.org/viewtopic.php?t=7110

On Tue, Feb 7, 2012 at 3:53 PM, Mike Lawrence <Mike.Lawrence at dal.ca> wrote:



SDL mailing list
SDL at lists.libsdl.org
http://lists.libsdl.org/listinfo.cgi/sdl-libsdl.org

Forest Hale wrote:

  1. QueryPerformanceFrequency might change (rapidly) due to dynamic CPU
     clocking (laptops and desktops both do this to save power).
  2. QueryPerformanceCounter and QueryPerformanceFrequency might differ due
     to independent per-core timers (this issue appeared with the earliest
     AMD Athlon 64 X2 processors, and affected at least Mozilla Thunderbird
     network timeouts when it used these functions).

But then MSDN http://msdn.microsoft.com/en-us/library/ms644905.aspx has:

Retrieves the frequency of the high-resolution performance counter, if one
exists. The frequency cannot change while the system is running.

On a multiprocessor computer, it should not matter which processor is
called. However, you can get different results on different processors due
to bugs in the basic input/output system (BIOS) or the hardware abstraction
layer (HAL).

I’ve heard about QPC being odd in MP systems, but I think at least for #1,
Forest is thinking of RDTSC, which does change resolution depending on CPU
model / clockspeed. Modern ones have a constant counter independent of the
CPU core frequency, usually based on something like bus frequency.

My advice – if you have control over the target platform, simply use what
works. If you’re worried about power management – disable it for your
tests. The big caveat with SDL_GetPerformanceCounter/Frequency() (on
Windows) is that multiprocessor systems can do it wrong, so if you set the
affinity to CPU0, then you shouldn’t run into any problems.

Patrick

On Tue, Feb 7, 2012 at 8:07 AM, Dimitris Zenios <dimitris.zenios at gmail.com> wrote:

I definitely have control over the target platform. I run all my
experiments in either Linux (Ubuntu 11.10) or Mac OS X 10.7. So, given
that the previously discussed issues seem to be specific to Windows,
I should be OK, yes? Or should I also look into disabling any power
management set up in those OSes?

Thanks so much for all your help.

Mike

On Tue, Feb 7, 2012 at 07:29:56 AM, Patrick Baggett <baggett.patrick at gmail.com> wrote:

My advice – if you have control over the target platform, simply use what
works. If you’re worried about power management – disable it for your
tests. The big caveat with SDL_GetPerformanceCounter/Frequency() (on
Windows) is that multiprocessor systems can do it wrong, so if you set the
affinity to CPU0, then you shouldn’t run into any problems.



Well, Wiki probably isn’t the best source for this, but:

http://en.wikipedia.org/wiki/Time_Stamp_Counter

In particular (since it looks like Mac OS implies Intel CPUs about 99% of
the time):

For Pentium 4 processors, Intel Xeon processors (family [0FH], models [03H
and higher]); for Intel Core Solo and Intel Core Duo processors (family
[06H], model [0EH]); for the Intel Xeon processor 5100 series and Intel
Core 2 Duo processors (family [06H], model [0FH]); for Intel Core 2 and
Intel Xeon processors (family [06H], display_model [17H]); for Intel Atom
processors (family [06H], display_model [1CH]): the time-stamp counter
increments at a constant rate. That rate may be set by the maximum
core-clock to bus-clock ratio of the processor or may be set by the maximum
resolved frequency at which the processor is booted. The maximum resolved
frequency may differ from the maximum qualified frequency of the processor.

This is probably true of Intel i3/5/7 as well, so you should be reasonably
safe.

If you look at http://hg.libsdl.org/SDL/rev/6bd701987ba9 on line 5.25, you
can see that these are implemented in a reasonable way for UNIX-like OSes,
so you should be able to use the above-mentioned SDL functions portably.
Unless you’re running on older AMD CPUs (K8, not K10), I wouldn’t
worry too much. I don’t think you’ll need to mess with the power management
unless you’re running on an older machine.

I would also suggest turning off automatic graphics switching if
you’re running experiments on your laptop. I didn’t benchmark
extensively, but anecdotally things worked better with that off (i.e.,
forcing use of the discrete graphics card), even for the small amount of
texturing I was doing, especially when the laptop was driving a projector.

Hope that helps!

John

On Tue, Feb 7, 2012 at 4:14 PM, Patrick Baggett <baggett.patrick at gmail.com> wrote:


For Linux I just use:

    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (double) ts.tv_sec + ts.tv_nsec / 1000000000.0;

On 02/07/2012 01:14 PM, Patrick Baggett wrote:



LordHavoc
Author of DarkPlaces Quake1 engine - http://icculus.org/twilight/darkplaces
Co-designer of Nexuiz - http://alientrap.org/nexuiz
"War does not prove who is right, it proves who is left." - Unknown
"Any sufficiently advanced technology is indistinguishable from a rigged demo." - James Klass
"A game is a series of interesting choices." - Sid Meier

For Linux I just use:
struct timespec ts;
clock_gettime(CLOCK_MONOTONIC, &ts);
return (double) ts.tv_sec + ts.tv_nsec / 1000000000.0;

Yep – that’s what’s in SDL ;)

On Tue, Feb 7, 2012 at 11:10 PM, Forest Hale wrote:


It does, but the SDL_GetTicks implementation only returns millisecond
accuracy. There is, however, a more accurate timer API added last year:

4279 6bd701987ba9 2011-03-25 14:45 -0700 slouken
Added high resolution timing API: SDL_GetPerformanceCounter(),
SDL_GetPerformanceFrequency()

On 08/02/2012 15:27, Patrick Baggett wrote:

For Linux I just use:
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (double) ts.tv_sec + ts.tv_nsec / 1000000000.0;

Yep – that’s what’s in SDL :wink:


It does but the SDL_GetTicks implementation only returns millisecond
accuracy. There is however a more accurate timer API added last year:

4279 6bd701987ba9 2011-03-25 14:45 -0700 slouken
Added high resolution timing API: SDL_GetPerformanceCounter(),
SDL_GetPerformanceFrequency()

Yep, that’s what I was referring to when I wrote "If you look at
http://hg.libsdl.org/SDL/rev/6bd701987ba9 on line 5.25…"

On Wed, Feb 8, 2012 at 9:38 AM, Tim Angus wrote:
