Framerate counter

It might make sense to run your simulation n times per second, but you don't
need to display it at more than 60 FPS. So you can do:

I already don’t. I have two separate threads, like I have described in a
recent mail to this mailing list…

while(1) {
    thistime = SDL_GetTicks();
    /* catch up: run fixed-size steps until the simulation reaches
       real time; SDL_GetTicks() counts milliseconds, so each step
       advances lasttime by 1000 / SIMULATION_RATE ms */
    while(lasttime < thistime) {
        lasttime += 1000 / SIMULATION_RATE;
        runSimulation(lasttime);
        /* thistime = SDL_GetTicks(); */
    }
    displayResult(lasttime);
}

This way your simulation runs at a constant rate.

Ouch! That doesn't look good. I use threads; much simpler. Also,
sitting in a while loop runs as fast as possible, not at a constant rate.

Quick repeat: My main thread is like this:

simulation_timer = SDL_AddTimer(
	SIMULATION_INTERVAL_MS,
	SimulationTimer::callb,
	this
);
while( true ){
	redraw();
	poll_events();
}

and my simulation thread looks like this:

/*static*/ Uint32 SimulationTimer::callb( Uint32 interval, void *param ){
	UI *ui = (UI *)( param );

	sys_time += frame_age = sync.Passed();  //  Update timers
	ui->applyPlayerControl( frame_age );    //  Update player control
	ui->getSystem()->update();              //  Simulate rigid bodies
	sync.Update();                          //  Update timer

	return interval;
}

I could, of course, assume in the simulation timer that it is being
executed at the SIMULATION_INTERVAL_MS rate. But in reality it isn't,
so I actually measure the time between updates. This way I can be
absolutely sure that the simulation runs in correct time.
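
(A minimal C sketch of the same measure-don't-assume idea, using only
SDL_GetTicks(); sync.Passed()/sync.Update() above are my own class, so
the names below are purely illustrative:)

static Uint32 last_ticks = 0;        /* time of the previous update */

/* Seconds elapsed since the last timer_update(); unsigned subtraction
   stays correct even across the 32-bit millisecond wraparound. */
static double timer_passed(void)
{
    return (SDL_GetTicks() - last_ticks) / 1000.0;
}

/* Mark "now" as the reference point for the next timer_passed(). */
static void timer_update(void)
{
    last_ticks = SDL_GetTicks();
}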

– Timo Suoranta – @Timo_K_Suoranta

Sure (on x86 at least); it’s a CPU core feature of the x86. It’s a 64 bit
counter that counts core clock cycles.

inline unsigned long long int rdtsc(void)
{
    unsigned long long int x;
    asm volatile (".byte 0x0f, 0x31" : "=A" (x));
    return x;
}

For conversion into more useful units, you can do something like this:

FILE *f;
char s1[100];
double cpu_hz = 0.0;

f = fopen("/proc/cpuinfo", "r");
if(f)
{
    while(1)
    {
        if(!fgets(s1, 100, f))
            break;
        if(!memcmp(s1, "cpu MHz", 7))
        {
            cpu_hz = atof(&s1[10]) * 1000000.0;
            break;
        }
    }
    fclose(f);
}
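
(Once cpu_hz is known, converting an rdtsc() tick delta into seconds
is just a division; for example, using rdtsc() and cpu_hz from above:)

unsigned long long t0, t1;
double seconds;

t0 = rdtsc();
/* ... code to be measured ... */
t1 = rdtsc();
seconds = (double)(t1 - t0) / cpu_hz;   /* ticks -> seconds */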

//David

.- M A I A -------------------------------------------------.
| Multimedia Application Integration Architecture           |
| A Free/Open Source Plugin API for Professional Multimedia |
`---------------------> http://www.linuxaudiodev.com/maia -'
.- David Olofson -------------------------------------------.
| Audio Hacker - Open Source Advocate - Singer - Songwriter |
`------------------------------------> david at linuxdj.com -'

On Thursday 05 April 2001 15:27, Timo K Suoranta wrote:

timing information, which is provided by the performance counter (is it
available in Linux in any form?).

I actually used this in Tribes 2. In order for this to work properly you
need to precisely calculate the core clock cycle frequency. I worked out
a solution that gives very high accuracy, but it isn’t perfect. (Essentially
the same method used by the Linux kernel on startup.) This also has problems
on SMP machines, since the cycle counter can drift between CPUs, and you'll
get jitter.
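
(Roughly, the calibration idea is to count TSC ticks across a known
wall-clock interval - this is only a sketch of the general approach,
not the Tribes 2 or kernel code:)

#include <sys/time.h>
#include <unistd.h>

inline unsigned long long int rdtsc(void)
{
    unsigned long long int x;
    asm volatile (".byte 0x0f, 0x31" : "=A" (x));
    return x;
}

double calibrate_tsc_hz(void)
{
    struct timeval tv0, tv1;
    unsigned long long t0, t1;
    double elapsed;

    gettimeofday(&tv0, NULL);
    t0 = rdtsc();
    usleep(100000);                      /* ~100 ms calibration window */
    gettimeofday(&tv1, NULL);
    t1 = rdtsc();

    elapsed = (tv1.tv_sec - tv0.tv_sec)
            + (tv1.tv_usec - tv0.tv_usec) / 1000000.0;
    return (double)(t1 - t0) / elapsed;  /* estimated ticks per second */
}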

I have the code to add to SDL_GetTicks(), and I’ll add it experimentally in
SDL 1.3

See ya!
-Sam Lantinga, Lead Programmer, Loki Entertainment Software

On Thursday 05 April 2001 15:27, Timo K Suoranta wrote:

timing information, which is provided by the performance counter (is it
available in Linux in any form?).

Sure (on x86 at least); it’s a CPU core feature of the x86. It’s a 64 bit
counter that counts core clock cycles.

greater than 1000 Hz: DSP on audio; high-speed simulations; sampling from
some devices; networking transformations

Any realtime simulation IMHO. It is best to have the simulation work at
something like 35 - 400 simulation frames per second, and since current
machines can easily handle 400 or even more sfps, why not? The performance
counter in Win32 gives fairly precise time measurements AFAIK.

performance counters are CPU-specific if you want accuracy (Pentium II and
newer, and some clone chips) (RDTSC)

realtime simulation? You should decouple the front end (while some -monitors-
can go >200fps, most top out at 75-90 fps at the -most-). And sample from
the simulation into the display - that way you don't have to worry about
frames-per-second problems.
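
(For instance, "sampling" can be as simple as interpolating between the
two most recent simulation states at render time - a sketch, with
made-up names:)

typedef struct {
    double x, y;            /* whatever state the simulation produces */
} State;

/* prev/curr are written by the simulation at its own fixed rate;
   alpha in [0,1] is how far render time has advanced past prev. */
State sample_state(const State *prev, const State *curr, double alpha)
{
    State s;
    s.x = prev->x + (curr->x - prev->x) * alpha;
    s.y = prev->y + (curr->y - prev->y) * alpha;
    return s;
}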

I’m not sure if a high-response (emphasis on fast response) system is
needed beyond 5ms (==5000 microseconds) for SDL… requirements beyond

I am not looking for fast response. It just happens that I need to be
able to measure things that can take less time than 1 ms. Computers
keep getting faster and faster, and there is a chance that I can run
some simulation 10000 times per second. And the simulation might need
timing information, which is provided by the performance counter (is it
available in Linux in any form?). So I don't see why SDL couldn't let
me use as precise timing info as possible.

I use floating point data as well. Mostly doubles though.

Okay - if you were looking for fast response, SDL is not the way - not if
SDL cannot respond fast enough for you anyways.
As you're looking for high resolution - you're in the realms of either
tracking your own time (I could explain but it'll take some space) or of
being dependent on CPU/OS-specific measurements such as RDTSC or the Linux
high-precision clock. Either which way I'll venture to say you shouldn't
be depending on display framerate for your simulation…

Anyhoo, hope this helps…
G'day, eh? :)
- Teunis

On Thu, 5 Apr 2001, Timo K Suoranta wrote:

Gltron and TuxRacer appear to. Again, this is for adjusting speed/rotation
and/or increasing accuracy to accommodate the difference in rendering one
frame of animation. The former case is what I use it for…

Can you post a small segment of code that's more optimal?

Thanks,

Matt

I say the microsecond… the reason is, if you're sampling differences between
frames of animation to adjust the accuracy or quantity of
movement/rotation/etc., the accumulated results will more closely match the
real framerates of your engine.

Nobody would sample the difference between frames that way.

Are you sure about this bit of code? These are the results I got using the
following little experiment program:

#include <stdio.h>
#include <unistd.h>

inline unsigned long long int rdtsc()
{
    unsigned long long int x;
    asm volatile (".byte 0x0f, 0x31" : "=A" (x));
    return x;
}

int main()
{
    int i;
    unsigned long long int x;

    x = rdtsc();
    sleep(1);
    x = rdtsc() - x;
    i = x;
    printf("%d ticks\n");

    x = rdtsc();
    sleep(2);
    x = rdtsc() - x;
    i = x;
    printf("%d ticks\n");

    x = rdtsc();
    x = rdtsc() - x;
    i = x;
    printf("%d ticks\n");

    return 0;
}

~/src$ g++ testr.cc
~/src$ a.out
406658 ticks
406659 ticks
406659 ticks

On Thursday 05 April 2001 09:00, you wrote:

On Thursday 05 April 2001 15:27, Timo K Suoranta wrote:

timing information, which is provided by the performance counter (is it
available in Linux in any form?).

Sure (on x86 at least); it’s a CPU core feature of the x86. It’s a 64 bit
counter that counts core clock cycles.

inline unsigned long long int rdtsc(void)
{
    unsigned long long int x;
    asm volatile (".byte 0x0f, 0x31" : "=A" (x));
    return x;
}

Unless you posted code you weren't using, your printfs are incorrect
because you're not passing 'i' to be substituted where '%d' is…

--

Olivier A. Dagenais - Software Architect and Developer

"Jason Hoffoss" wrote in message
news:01040513183801.28494 at jhoffoss…

On Thursday 05 April 2001 09:00, you wrote:

[rdtsc() function, test program, and output quoted in full; snipped]

Hey, waitaminnit! I didn’t even read the code properly… heh

printf("%d ticks\n");
                    ^^^

THIS is where the problem is; you're not passing the int argument. :)

(And it's a 32-bit int alright. Forget about the "%lld" nonsense in this
context. :)
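
(For reference, the measurement lines with the missing argument supplied:)

x = rdtsc();
sleep(1);
x = rdtsc() - x;
i = x;                      /* truncating to 32 bits is fine here */
printf("%d ticks\n", i);    /* pass the argument this time */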

//David


timing information, which is provided by the performance counter (is it
available in Linux in any form?).

Sure (on x86 at least); it’s a CPU core feature of the x86. It’s a 64 bit
counter that counts core clock cycles.

inline unsigned long long int rdtsc(void)
{
    unsigned long long int x;
    asm volatile (".byte 0x0f, 0x31" : "=A" (x));
    return x;
}

Are you sure about this bit of code? These are the results I got using the
following little experiment program:

Yep; it has worked on all gcc versions I’ve tried so far…

[test program quoted in full; snipped down to the lines in question]

x = rdtsc();
sleep(1);
x = rdtsc() - x;
i = x;
printf("%d ticks\n");
        ^^

This is the problem, I think; printf expects to find a 32-bit int, rather
than the 64-bit int you're trying to print. You should use "%lld", IIRC.

//David

On Thursday 05 April 2001 22:18, Jason Hoffoss wrote:

On Thursday 05 April 2001 09:00, you wrote:

On Thursday 05 April 2001 15:27, Timo K Suoranta wrote:

Wow, you're right. Didn't even notice that. Even more surprisingly, neither
did g++. Guess you need -Wall in there to catch that. Thanks.

On Thursday 05 April 2001 13:59, you wrote:

Unless you posted code you weren't using, your printfs are incorrect
because you're not passing 'i' to be substituted where '%d' is…

performance counters are CPU-specific if you want accuracy (Pentium II and
newer, and some clone chips) (RDTSC)

Sure. Still, there is no reason why we could not have a portable API
which can give you the best resolution available. You could also query
the resolution. The fact that the implementation of something is platform
specific is irrelevant - we can have a standard API for precise timers
in SDL. More or less precise, but more is always better :)

realtime simulation? You should decouple the front end (while some -monitors-
can go >200fps, most top out at 75-90 fps at the -most-). And sample from
the simulation into the display - that way you don't have to worry about
frames-per-second problems.

But I have decoupled simulation and display.

And if I want to do something, say, 1000 times per second, just let me
do that, OK? There are reasons to run the simulation a number of times
faster than the actual display - really.

I am not looking for fast response. It just happens that I need to be
able to measure things that can take less time than 1 ms. Computers

Okay - if you were looking for fast response, SDL is not the way - not if
SDL cannot respond fast enough for you anyways.

Look, I already said, I am not after fast response.

As you're looking for high resolution - you're in the realms of either
tracking your own time (I could explain but it'll take some space) or of
being dependent on CPU/OS-specific measurements such as RDTSC or the Linux
high-precision clock. Either which way I'll venture to say you shouldn't
be depending on display framerate for your simulation…

I have two kinds of frames: display frames and simulation frames. The
simulation frame has nothing to do with the display frame. The simulation
frame contains information about the objects' state. That is what I want to
run at hundreds or thousands of updates per second.

And I would still like to have an API for 'as precise timers as possible'
in SDL. I have expressed my reasons for that. It is not even too hard to
implement.

Anyhoo, hope this helps…
G'day, eh? :)

I thought I had already expressed myself as clearly as possible,
but I sort of failed. Maybe this helps, but I’m not so confident
any more :I

– Timo Suoranta – @Timo_K_Suoranta

performance counters are CPU-specific if you want accuracy (Pentium II and
newer, and some clone chips) (RDTSC)

Sure. Still, there is no reason why we could not have a portable API
which can give you the best resolution available. You could also query
the resolution. The fact that the implementation of something is platform
specific is irrelevant - we can have a standard API for precise timers
in SDL. More or less precise, but more is always better :)

right - since the rest is covered, back to this…
a time kit for SDL dev… (I wonder if there’s a realtime scientific
simulation library? I poked around a wee bit but since it’s not on my
normal list of searches I didn’t find anything)

so requirements:
- Accurate monitoring of current time
-> The sampling of this need be no more often than needed to
maintain within a given tolerance.
(timed? from my own experiences, even gettimeofday() can
gain inaccuracy… though I have to admit my CPU is
overclocked!)
- Ability to deliver time-deltas between frames
(maybe there’s a better way to describe?)
- 1 microsecond (or better) base measurement (ints ARE handier :)
- the ability to detect when frames have been lost - and how many
- the ability to detect real latency in measurements (how?)
- the ability to detect the real coherency (samples/second)
-> perhaps using RDTSC under intel/pentium II+, how on
other architectures?

Any refinement?
The funny thing is that now this has been pulled back away from the
display (thanks for mentioning that btw - until the last spate of messages
I thought otherwise), the idea seemed doable. There’s no reliable way
to read the number of rendered frames/second on a graphics card - and they
can float depending on system/BUS load as well, so making sure of taking
advantage of system ways to keep jumpiness/artifacts to a minimum is
good… (e.g. double buffering)

The other thing of course is that, combined with some of the list code, I
have some of the code to finish this… Anyone want to do an API header?
:)
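
(As a conversation starter, a hypothetical header might look like the
following - none of these functions exist in SDL; the names and semantics
are made up:)

#ifndef SDL_PRECISETIMER_H
#define SDL_PRECISETIMER_H

#include "SDL_types.h"

/* Current time in platform "precise ticks" since an arbitrary epoch. */
extern Uint64 SDL_GetPreciseTicks(void);

/* Number of precise ticks per second, so callers can query the
   resolution instead of assuming a fixed unit. */
extern Uint64 SDL_GetPreciseTicksPerSecond(void);

#endif /* SDL_PRECISETIMER_H */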

Oh, and why SDL? Well, SDL has a handy threading architecture plus a nice
low-coherency timer that's always present - it's nice to have standards
and callbacks for those tricky bits in multiplatform coding :)

G'day, eh? :)
- Teunis

On Fri, 6 Apr 2001, Timo K Suoranta wrote:

  • Accurate monitoring of current time
    -> The sampling of this need be no more often than needed to
    maintain within a given tolerance.
    (timed? from my own experiences, even gettimeofday() can
    gain inaccuracy… though I have to admit my CPU is
    overclocked!)

All I need is querying the accurate time. I am not sure what you mean by
sampling. When the CPU cycle counter is used, you can simply ask what the
value of the counter is. The value is automatically updated, and you only
need to read it when the user asks.

  • Ability to deliver time-deltas between frames
    (maybe there’s a better way to describe?)

This can be done by the user. Calculating the delta between two times
is not that difficult.
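
(Indeed - with a 32-bit millisecond counter like SDL_GetTicks(), unsigned
arithmetic even keeps the delta correct across the ~49-day wraparound:)

Uint32 t0 = SDL_GetTicks();
/* ... */
Uint32 delta_ms = SDL_GetTicks() - t0;  /* valid even across wraparound */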

  • 1 microsecond (or better) base measurement (ints ARE handier :)

I don't like the idea of a fixed unit. I would like an interface where you
can ask how long one unit is, and the unit can be variable. Either that,
or use floating point with seconds as the unit. I use floats. I do not know
of any reason not to use floats.

  • the ability to detect when frames have been lost - and how many

All I need is the ability to get the current time. Frames are something the
library shouldn't care about. I do use an SDL timer so that it tries to
run at fixed intervals. The simulation will advance the simulation time by
a fixed amount even if the simulation timer fails to run at that speed.
Effectively this means that the simulation slows down, but for me that is
acceptable. I could, if I wanted to, notice when the time delta between
frames (or several frames) indicates that frames were lost, and do
something about that. It is easy to detect lost frames just by
using the accurate time.
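
(A sketch of that check - the names are illustrative, and interval_ms is
the rate the timer was asked for:)

/* Full extra intervals contained in the measured delta are missed frames. */
int frames_lost(Uint32 delta_ms, Uint32 interval_ms)
{
    if (delta_ms <= interval_ms)
        return 0;
    return (int)((delta_ms - interval_ms) / interval_ms);
}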

  • the ability to detect real latency in measurements (how?)

This is not possible; if we provide the most precise timer in the
system, we have no tools to measure its latency with any additional
precision. But there is very little latency when using CPU cycle
counters; the value is correct when you read it - the best possible
estimate of the time at that moment. If for some reason your process is
scheduled out right after you have read the value, it won't hurt much,
because the wait will count towards the age of the next simulation frame.

So I expect that the SDL timer does its best to run my timer function at
fixed intervals, but I accept it being slightly inaccurate. I can always
measure the real time between SDL timer callbacks, though I currently
use this only for the fps counter. The actual simulation updates the
simulated time at fixed intervals; in fact, discrete incremental simulation
does not work correctly unless you make it that way.

It is by design that the simulation timer thread is light enough to be
run as often as required, and that it runs more often than screen updates.
If you have problems running the simulation at some speed, you should
measure the overall system performance, and choose the correct simulation
interval, which is fixed once you have chosen it. I don’t skip frames if
I am behind schedule; the simulation just slows down.

  • the ability to detect the real coherency (samples/second)
    -> perhaps using RDTSC under intel/pentium II+, how on
    other architectures?

I guess you mean querying the timer resolution? Yes, that would be handy.

I thought otherwise), the idea seemed doable. There’s no reliable way
to read the number of rendered frames/second on a graphics card - and they
can float depending on system/BUS load as well, so making sure of taking

I can count the number of frames I have drawn during one second. I can
measure the exact time spent on this. And I would call my method
reliable - assuming I have the precise time available, of course.

– Timo Suoranta – @Timo_K_Suoranta