High precision timers

Hi,
It’s my first message to the list, pleased to meet everyone. I’m Rewpparo,
hobbyist game developer.
I only recently took a look at 1.3, and I love the new video API (multi
screen, multi windows), and decided to use SDL again. I was until recently
working on my own portability layer, but now that SDL fits my needs, I can
focus on other things. Great work!

One thing still bugs me though: the time measurement interface. Millisecond
granularity isn’t quite enough, as a main loop running at a decent speed
will last about 15 of those, maybe less.
I saw in the roadmap that higher precision is in preparation, but nothing
seems to be in hg tip.
I have a nanosecond-granularity system (providing backend capability) with
Windows/Linux implementations that can be adapted to SDL standards easily.
I’d be willing to contribute that, if such a contribution is welcome.
I’ll probably need to redo the API, as I’m currently using 64-bit integers
that may not be supported on all of SDL’s target platforms. A
double-precision float and/or two 32-bit integers can be used if they seem
more appropriate.
Should I work on this, I’ll also need some pointers on SDL’s build system,
as I’ve only been using raw makefiles, VC projects, and CMake (which
rocks). I’m a bit lost when I have to tell it what to compile.

Rewpparo

This topic was discussed before:
http://forums.libsdl.org/viewtopic.php?t=5333&sid=675478da48259e9fcf202a8ea79e1506

You may be able to use the *NIX timing function gettimeofday, with a
/potential/ nanosecond accuracy. On Linux/Intel one can also use some
assembly to read the processor Time Stamp Counter. See the post at the end
of this thread:
http://www.gamedev.net/community/forums/topic.asp?topic_id=106276
On Win32 you can use the following two API calls from Kernel32.dll
(Winbase.h, i.e. include Windows.h) to get the high-resolution
performance counters of the CPU. See also
http://support.microsoft.com/kb/172338

// lpPerformanceCount: pointer to store the retrieved counter value in.
// Returns a flag indicating whether the counter was set in the parameter.

BOOL WINAPI QueryPerformanceCounter(
    __out LARGE_INTEGER *lpPerformanceCount
);

// lpFrequency: pointer to store the retrieved frequency value in.
// Returns a flag indicating whether the frequency was set in the parameter.

BOOL WINAPI QueryPerformanceFrequency(
    __out LARGE_INTEGER *lpFrequency
);

One would calculate the number of seconds since the system was started
as follows:

double Seconds()
{
    /* Count and Frequency hold the values retrieved by the two calls above. */
    return (double)Count / (double)Frequency;
}

A good summary of these approaches with sample code is here:
http://www.songho.ca/misc/timer/timer.html

Hope that helps to get you started.
–Andreas

Thanks for the link. I’ve already been working on portable high-resolution
timers, at least on Windows and Linux.
That’s why I’m offering my services. And also because I need it :)

But gettimeofday only provides microsecond granularity, as it returns
a number of microseconds and seconds.
gethrtime can provide nanosecond granularity on a Linux kernel
compiled with the -rt option, and it has true nanosecond
precision if the processor is over 1 GHz. Limited scope, but so
powerful it would be a waste not to use it.

My questions are more about what kind of API would be adequate,
especially the time encoding.
Several options are available and used on different platforms:

  • A 64-bit integer storing nanoseconds may be the best choice. It
    loops after about 584 years and is relatively easy to use, even though
    some precautions need to be taken if you don’t want to lose
    precision. However, it may not be available on some older platforms,
    making it unsuitable for SDL. It’s not defined in C89, only C99+, and
    some compilers may choose not to implement it.
    Some research would need to be done before using it.

  • A struct containing two 32-bit integers, one storing nanoseconds,
    one storing seconds. It loops after 136 years.
    Using this would complicate the interface, as it would require
    functions to compare time values, perform arithmetic
    operations on them, etc. But it would be widely supported and have
    consistent behavior.

  • A 32-bit integer for the number of ticks, and another for the
    number of ticks in a second. This approach has two problems:
    the first is the lack of visibility on the bounds. The time required
    for the counter to loop is implementation dependent, and cannot
    be advertised clearly in the manual. Moreover, that time would
    decrease with higher-performance timers: microsecond granularity
    would loop after about an hour, and nanosecond granularity after
    only about 4 seconds.

I’m all for the 64-bit integer, but it would require one of two
strategies for SDL:
the first would be to say that SDL 1.3 is only for platforms and
compilers that support a 64-bit integer implementation, excluding
all older platforms/compilers from future SDL upgrades.
The second strategy would be to hide the high-precision interface on
older platforms/compilers, resulting in compile errors when
compiling code that uses the high-precision API. It would need to be
advertised in the documentation that using this API will prevent
any future porting to a list of platforms/compilers.

If neither of these strategies is possible, then I’d suggest using the
struct with two integers. This will result in a more complicated API, but
will work nicely.

As a newcomer, I can’t make that kind of choice, so I’ll be waiting
for a decision before I start working on this. Who is supposed to make
that kind of decision? Sam?

Rewpparo


How about a new API that returns a “double”… in terms of resolution
you’d get the 52 bits of the fractional part, and the type should be
available on every platform. The original API can be maintained and its
value is easily converted from this double (i.e. simply returning a
32-bit integer with a lower resolution).

I’d also be for an API call that returns the estimated resolution of the
timer. It could be measured (and the value cached) during the Init phase.

–Andreas

Might be worth setting it up as a high-res timer library, which would
make it optional for developers and wouldn’t really cause any
backwards-compatibility issues with older hardware.


Actually you’re right.
In the API I have so far, I had a function to convert the time to a double
for easy handling when precision is not an issue. I had just assumed that
precision would decrease too much when time values get higher.
I’ve just done the math: we lose nanosecond granularity with a double when
the time measured is over 52 days. I believe that’s acceptable.

For timer resolution, I already have an algorithm that does just that.
It’s in my unit tests, as I think it’s useful for debugging, but not
really at runtime. I may add it as a sample to demonstrate.


Quoth Jean-François Sévère, on 2010-11-29 17:53:56 +0100:

But gettimeofday only provides microsecond granularity, as it returns
a number of microseconds and seconds.
gethrtime can provide nanosecond granularity on a Linux kernel
compiled with the -rt option, and it has true nanosecond
precision if the processor is over 1 GHz. Limited scope, but so
powerful it would be a waste not to use it.

What is gethrtime? The Web suggests that this is either an RTLinux
extension or a Solaris call. “objdump -T /lib/librt.so.1” on my
Debian GNU/Linux sid machine yields no matches.

AFAIK, the POSIX high-precision timer API provided by librt on modern
GNU/Linux is the clock_* and timer_* functions, particularly (in this
case) clock_gettime. These theoretically should be supported by other
Unixes that target modern POSIX as well, but the Web has it that (for
instance) Mac OS X doesn’t do that and instead prefers that you call
Mach directly, so they’re not universal.

As a datapoint, in an SDL-based program I have partly written that
uses high-precision timers on Linux, I use clock_gettime and then a
timerfd and an eventfd. (I haven’t ported to anything else; if I were
to do so, my vague plan would be to use clock_gettime + kqueue on most
BSDs, maybe Mach calls on Mac OS X, and I’m not sure what on Windows.)
Since I’m using C99, I transform between struct timespec and uint64_t
internally. (Technically that would ideally be uint_least64_t, but I
only target machines that have the exact 8/16/32/64-bit integer types
available.)

—> Drake Wilson

Quoth Andreas Schiffler, on 2010-11-29 09:05:19 -0800:

I’d also be for an API call that returns the estimated resolution of
the timer. It could be measured (and the value cached) during the
Init phase.

Measuring explicitly may be less likely to be full of lies depending
on the underlying system, but the platform call for this on modern
POSIX is theoretically clock_getres, FWIW. (On my desktop GNU/Linux
machine it declares 1 ns resolution for the monotonic and realtime
clocks.)

—> Drake Wilson

About gethrtime:
http://docs.sun.com/app/docs/doc/816-5168/gethrtime-3c?l=en&n=1&a=view
I’m referring to the RTLinux extension. As I said, limited scope, but
true nanosecond.

clock_gettime looks great; I hadn’t heard about it before, I wonder
why. I’ll test it soon:
http://linux.die.net/man/3/clock_gettime


2010/11/29 Drake Wilson:

Quoth Andreas Schiffler, on 2010-11-29 09:05:19 -0800:

I’d also be for an API call that returns the estimated resolution of
the timer. It could be measured (and the value cached) during the
Init phase.

Yeah, that’s what I’m doing too.

Measuring explicitly may be less likely to be full of lies depending
on the underlying system, but the platform call for this on modern
POSIX is theoretically clock_getres, FWIW. (On my desktop GNU/Linux
machine it declares 1 ns resolution for the monotonic and realtime
clocks.)

The resolution they declare is most of the time the granularity, meaning
the numerical precision theoretically allowed by the implementation, not
the precision, meaning the minimum practical interval measurable between
two calls.


SDL mailing list
SDL at lists.libsdl.org
http://lists.libsdl.org/listinfo.cgi/sdl-libsdl.org