Framerate counter

My understanding, though, has been that SDL_Delay() is not
necessarily stable w.r.t. system load changes – that
the time requested is a minimum time.

necessarily so with a time-sharing kernel unless you use absolute
priorities (which you can do in Linux, Solaris and other systems by
means of POSIX.1b real-time scheduling; see sched_setscheduler())
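
For concreteness, here is a minimal sketch of what that looks like on a POSIX.1b system. go_realtime() is a name made up for illustration, you normally need root privileges, and a runaway SCHED_FIFO task can freeze the machine:

#include <sched.h>
#include <stdio.h>

/* Sketch: request SCHED_FIFO real-time scheduling for the calling
 * process (pid 0). Use with care; a busy-looping SCHED_FIFO task
 * will starve everything at lower priority. */
int go_realtime(void)
{
    struct sched_param sp;

    sp.sched_priority = sched_get_priority_min(SCHED_FIFO);
    if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0) {
        perror("sched_setscheduler");
        return -1;
    }
    return 0;
}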

busy-waiting for short intervals is often an adequate solution, but
SDL_Delay() doesn’t do it (since a user might want to use the
wait cycles for something more productive). As you noticed you can
synthesize a good hybrid delay from SDL_Delay() and SDL_GetTicks()
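
Something like this, for example (a rough sketch: the 10 ms slack figure is an assumption about the scheduler's granularity, and Uint32 wraparound is ignored for brevity):

#include "SDL.h"

/* Sleep away most of the interval with SDL_Delay() (coarse but
 * CPU-friendly), then busy-wait on SDL_GetTicks() for the rest. */
void hybrid_delay(Uint32 ms)
{
    const Uint32 slack = 10;    /* assumed scheduler granularity, in ms */
    Uint32 deadline = SDL_GetTicks() + ms;

    if (ms > slack)
        SDL_Delay(ms - slack);  /* coarse sleep, plays nice */

    while (SDL_GetTicks() < deadline)
        ;                       /* burn the last few ms precisely */
}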

Also, 25 fps is only 40ms and 50 is 20ms, so, with a 10ms
resolution timer, you could have a lot of aliasing in the
frame rate (especially, if, like me, you’re probably going
to be running close to using up the available time on the
low-end platforms anyway).

definitely, and although it would be very nice to have finer granularity
timers available we can’t count on it for quite some time. This isn’t
much of a problem if you are prepared to use all available CPU for your
game anyway but can be a hassle in some cases

I wanted to do exactly the same thing for a game I wrote under linux. Below
is the class (struct) I wrote to handle doing it. It works pretty well. It
generally gets pretty close to the target framerate, though never exactly on
it really. In linux, with only ~10ms granularity in timing, and shifting CPU
loads due to other processes, I figure it’s nearly impossible to get exactly
at the target framerate. Also, my frameload varies depending on what gets
drawn on the screen, but this compensates for that pretty well too. This
should also be very portable.

To use it, declare one instance of Delay, and then call the exec() method
once a frame. Enjoy.
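
For example (running and draw_frame() stand in for your own main loop and per-frame work):

Delay delay;        // one instance for the whole program

while (running) {
    draw_frame();   // hypothetical rendering routine
    delay.exec();   // burns off time to hold the target framerate
}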

int peak_update_areas = 0;
SDL_Rect update_areas[MAX_UPDATE_AREAS];

#define INITIAL_DELAY 0xdffff
#define MAX_DELAY_ENTRIES 15
#define TARGET_FRAMERATE 60.0f

Uint32 ticks = 0;   // most recent SDL_GetTicks() reading
int frames = 0;     // frames rendered so far

struct Delay {
int delay;                // current busy-loop iteration count
int last_frames;          // frame count at the last measurement
Uint32 last_ticks;        // tick count at the last measurement
int num_delay_entries;    // -1 until the first (partial) entry is skipped
int delay_entry_index;    // ring-buffer write position
int delay_entries[MAX_DELAY_ENTRIES];   // recent delays, averaged each frame

Delay()
{
    reset();
}

void reset()
{
    delay = INITIAL_DELAY;
    last_frames = frames = 0;
    last_ticks = 0;
    num_delay_entries = -1;
    delay_entry_index = 0;
}

void exec()
{
    // Burn off time in a busy loop. 'i' is volatile so the compiler
    // can't optimize the empty loop away.
    volatile int i = delay;

    while (i--)
        ;

    frames++;
    ticks = SDL_GetTicks();
    if (ticks != last_ticks) {
        int dt, df;
        float f, fps = 0.0f;

        dt = ticks - last_ticks;
        df = frames - last_frames;
        last_ticks = ticks;
        last_frames = frames;

        // skip first entry, as it will only be partial
        if (num_delay_entries < 0) {
            num_delay_entries = 0;
            return;
        }

        fps = (float) df * 1000.0f / (float) dt;

        // if the delay has collapsed to zero but we still run too
        // fast, re-seed it so it can adapt again
        if ((delay == 0) && (fps > TARGET_FRAMERATE))
            delay = INITIAL_DELAY;

        // scale the delay in proportion to how far off target we are
        f = (float) delay * fps / TARGET_FRAMERATE;

        // record a new delay entry
        delay_entries[delay_entry_index] = (int) ((f < 0) ? 0 : f);

        if (++delay_entry_index == MAX_DELAY_ENTRIES)
            delay_entry_index = 0;

        if (num_delay_entries < MAX_DELAY_ENTRIES)
            num_delay_entries++;

        // calculate delay as average of delay entries
        delay = 0;
        for (i=0; i<num_delay_entries; i++)
            delay += delay_entries[i];

        delay /= num_delay_entries;
    }
}

};

On Wednesday 04 April 2001 05:53, you wrote:

Hi there!

I’ve tried to make a framerate counter as well as a framerate limiter (so
the game doesn’t run faster than, say, 50fps), but I’ve failed in doing so.
I was wondering if anyone has already done this before, and would care to
share the source of it with me.

Yes, profiling would definitely be a good thing. However, from what I
understand, you can’t get more precise than ~10ms under Linux. If there is a
way, I’d sure be interested in knowing about it.

On Wednesday 04 April 2001 03:44, you wrote:

On Wed, Apr 04, 2001 at 12:28:55PM +0200, Mattias Engdegård wrote:

since SDL_Delay() typically has ~10 ms granularity, SDL_GetTicks()
should suffice for this. We could add an API for higher-resolution timing
but I haven’t seen a really compelling argument for it yet

Profiling?

since SDL_Delay() typically has ~10 ms granularity, SDL_GetTicks()
should suffice for this. We could add an API for higher-resolution timing
but I haven’t seen a really compelling argument for it yet

If anyone does need a higher resolution for their timing, they ought to check
out SGE (I forget the URL, but it’s on the SDL libraries list).

I haven’t looked in a while, but last I checked I could have sworn they had a
higher res timer that was still portable.

On Wed, 04 Apr 2001, you wrote:


Sam “Criswell” Hart <@Sam_Hart> AIM, Yahoo!:
Homepage: < http://www.geekcomix.com/snh/ >
PGP Info: < http://www.geekcomix.com/snh/contact/ >
Advogato: < http://advogato.org/person/criswell/ >

Also, 25 fps is only 40ms and 50 is 20ms, so, with a 10ms
resolution timer, you could have a lot of aliasing in the
frame rate (especially, if, like me, you’re probably going
to be running close to using up the available time on the
low-end platforms anyway).

Considering all that, I thought it might be simpler and
safer in my app to just burn off the time in a busy-wait
polling SDL_GetTicks(). This way I should come close to
catching the falling edge and can run at even multiples
of 10ms without aliasing (?).

Of course, those are wasted CPU cycles.

On many OS’s, if you call sleep(0), it will simply give up your program’s
timeslice, without taking it off the runqueue.

Instead of putting your program into a busy loop, you’re better off with
something like this:

while (it's not time yet)
    sleep(0);

Of course, depending on how long you need to wait, you might even be best
off with:

while (not done yet) {
    if (time left > X)
        sleep(time left / 2);
    else
        sleep(0);
}

Where X is some small constant, depending on what you want for a
margin of error.
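
In SDL terms the same idea might look like this (a sketch only; as noted below, whether SDL_Delay(0) really just yields the timeslice is platform-dependent):

#include "SDL.h"

/* Wait until SDL_GetTicks() reaches 'deadline', sleeping away half the
 * remaining time while more than the margin X is left. */
void wait_until(Uint32 deadline)
{
    const Uint32 X = 2;   /* margin of error, in ms */
    Uint32 now;

    while ((now = SDL_GetTicks()) < deadline) {
        Uint32 left = deadline - now;

        if (left > X)
            SDL_Delay(left / 2);  /* sleep away half the remaining time */
        else
            SDL_Delay(0);         /* just give up the timeslice */
    }
}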

Of course, this isn’t as portable (Windows calls this “Sleep”
instead of “sleep”), but I’m not sure if SDL will actually call sleep if
you specify a zero interval…

And you save your CPU cycles (which also means that the window manager / X
/ Windows has more time to collect events and pass them to your program,
making things smoother and your program itself more responsive than with a
tight loop?)

What do you think?

-Dan

I haven’t actually implemented this yet, so if there’s
any reason I shouldn’t, this would be a good time for
me to find out about it. :)

Thanks!

Terry Hancock
hancock at earthlink.net


“The two most abundant things in the universe are Hydrogen and
stupidity.”
- Harlan Ellison

[…]

David, you don’t need to reply. :)

Thank you. (Oops… Sorry! ;)

//David

.- M A I A -------------------------------------------------.
| Multimedia Application Integration Architecture |
| A Free/Open Source Plugin API for Professional Multimedia |
`----------------------> http://www.linuxaudiodev.com/maia -'

.- David Olofson -------------------------------------------.
| Audio Hacker - Open Source Advocate - Singer - Songwriter |
`--------------------------------> david at linuxdj.com -'

On Wednesday 04 April 2001 21:27, Sam Lantinga wrote:

David Olofson wrote:

On Wednesday 04 April 2001 21:50, Terry Hancock wrote:

[…]

Considering all that, I thought it might be simpler and
safer in my app to just burn off the time in a busy-wait
polling SDL_GetTicks(). This way I should come close to
catching the falling edge and can run at even multiples
of 10ms without aliasing (?).

The problem (on some platforms at least) is that the scheduler will see your
application as a hard working CPU hog, that needs to be preemptively
scheduled out occasionally not to freeze the system. This is where you can
lose control entirely; you’ll be robbed of the CPU for an undefined amount of
time, with no way of getting back in before the scheduler decides the other
tasks have had enough time to stay alive…

Ah. That would be annoying. So I should probably try to
use SDL_Delay() for at least some of the interval so my
app will “play nice” :).

Thanks. Like I said, this is the time to decide. :)


Terry Hancock
@Terry_Hancock

On many OS’s, if you call sleep(0), it will simply give up your program’s
timeslice, without taking it off the runqueue.

this is traditional undocumented unix behaviour which you cannot and
should not rely on (and indeed often doesn’t work anymore)

And good riddance. There are usually much better ways to do what
you want to accomplish, once you have understood what your problem
really is

please state exactly what precision you need (and why), and we can
make an attempt to provide that in a platform-independent way so people
don’t need to resort to unportable hacks like the one you mention

I don’t see a reason why SDL should limit the precision to something
every platform could support, when some platforms have more precise
timers than others; just let the user have the best precision available.

For the physics simulation - updating object positions and rotations -
I need to measure the exact time spent in that frame. On fast machines
it turned out that SDL timer precision was not enough; simulation could
run so fast that the time spent in one animation step was not quite
precise enough, and resulted in jerky animation.

– Timo Suoranta – @Timo_K_Suoranta

Are you building with profiling enabled?
Is Project->Settings->Link->Enable Profiling checked?

Yes. Also map file generation is enabled.

Just a wild guess: What version of visual C++ do you have?
I think profiling is only enabled in the professional/enterprise/whatever
edition.

Visual C++ 6 Professional. Profiling never worked before or after
installing visual studio service pack 5.

This is sort of weird; I have the very same problem on my work
machine. But for my friends at the same workplace, profiling works. I just
don’t know what prevents it on my setups…

– Timo Suoranta – @Timo_K_Suoranta

since SDL_Delay() typically has ~10 ms granularity, SDL_GetTicks()

Also, 25 fps is only 40ms and 50 is 20ms, so, with a 10ms
resolution timer, you could have a lot of aliasing in the

I got my program running at over 1000 fps. Guess what sdl timer
resolution thought about that…

(…actually it was my friend with P4 @ 1.5GHz and a nice GeForce…)

– Timo Suoranta – @Timo_K_Suoranta

For the physics simulation - updating object positions and rotations -
I need to measure the exact time spent in that frame. On fast machines
it turned out that SDL timer precision was not enough; simulation could
run so fast that the time spent in one animation step was not quite
precise enough, and resulted in jerky animation.

My question remains: For what application is timekeeping better than
1/1000 s needed, and how high resolution is needed? microsecond?
nanosecond?

I’m not saying it isn’t needed; I just want clear quantified answers
to be able to design the interface better

I got my program running at over 1000 fps. Guess what sdl timer
resolution thought about that…

I doubt you did, or you have a very interesting display (hardly a CRT)

For the physics simulation - updating object positions and rotations -
I need to measure the exact time spent in that frame. On fast machines
it turned out that SDL timer precision was not enough; simulation could
run so fast that the time spent in one animation step was not quite
precise enough, and resulted in jerky animation.

My question remains: For what application is timekeeping better than
1/1000 s needed, and how high resolution is needed? microsecond?
nanosecond?

greater than 1000Hz: DSP on audio; high-speed simulations; sampling from
some devices; networking transformations

less than 5ms resolution: (ms=millisecond; 1/1000 of a second)
interrupt handling (clue: this is RT/OS stuff - not for libraries)
I/O polling | monitoring (somewhat device specific)
sound

less than 16ms resolution:
video refresh if one is feeling silly (16ms == 60fps)
(although displays may be faster, by and large one can’t
see the changes… :)
(if one is working with PAL (25fps/40ms) or NTSC (30fps/33ms) one
should be fine)

I’m not saying it isn’t needed; I just want clear quantified answers
to be able to design the interface better

I’m not sure if a high-response (emphasis on fast response) system is
needed beyond 5ms (==5000 microseconds) for SDL… requirements beyond
that generally are beyond the capabilities of the OS anyways unless it’s
a RT/OS. (or close, such as linux). As a subtle hint, SDL is used in
graphics and gaming, yes - but it’s also fairly platform neutral. If one
wishes for resolution beyond this range, one should probably code it
specifically for an OS or system designed for the job and not for a
general-purpose system… although if one really wants that, I
suggest decoupling the display and specifically sampling the data results
into the display rather than depending on the display framerate.

If you really want comparison via OS, linux at the last point I checked
could respond to a timed event within 50 microseconds. Windows last I
checked (486/100; windows 95) could handle 10000 microseconds (1/100
second). I suspect windows (and hardware) has improved since this time,
but YMMV, eh? :slight_smile:
(a RT/OS under a 486 -can- respond to a timed event in 45 microseconds -
this is the processor’s own response time on that hardware. This was
when linux -first- clocked a -sustained- 50 microsecond response time)

I asked once before if anyone wanted an object for tracking the passing of
time against actions -> My solution is to find out how far an action
-should- be given a certain amount of time passing. No matter what the
FPS, unless it drops below the comfortable level (15fps) the system looks
pretty smooth. I haven’t tested the code since last December but at that
point it was using SDL_GetTicks() comparisons (and occasionally
SDL_Delay(…) if it was going too quickly) using -floating point- time
values rather than integer for my timed objects.
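
In code the idea might look something like this (a hedged sketch; the Object struct and its fields are invented for illustration, converting SDL_GetTicks() milliseconds to floating-point seconds):

#include "SDL.h"

typedef struct {
    float x, y;     /* position, in pixels */
    float vx, vy;   /* velocity, in pixels per second */
} Object;

/* Advance an object by how far it -should- have moved in the elapsed
 * wall-clock time, whatever the framerate happens to be. */
void update_object(Object *o, Uint32 last_ticks, Uint32 now_ticks)
{
    float dt = (now_ticks - last_ticks) / 1000.0f;   /* ms -> seconds */

    o->x += o->vx * dt;
    o->y += o->vy * dt;
}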

Maybe this description doesn’t make sense, and maybe it does. Hope it’s
of some use :)

G’day, eh? :)
- Teunis, who’s learned a little about time but has a hard time
with resolutions below half an hour…

On Thu, 5 Apr 2001, Mattias Engdegård wrote:


What is courage now? Is it just to go until we’re done?
Men may call us heroes when they say we’ve won but if we should fail, how
then… What is courage now?
- Fellowship Going South by Leslie Fish sung by Julia Ecklar

Member in purple standing of the Mad Poet’s Society.
Trying to bring truth from beauty is Winterlion.
find at this winterlions’ page

greater than 1000Hz : DSP on audio; high-speed simulations; sampling from
some devices; networking transformations

Any realtime simulation IMHO. It is best to have the simulation work at
something like 35 - 400 simulation frames per second, and since current
machines can easily handle 400 or even more sfps, why not? The performance
counter in win32 gives sort of precise time measurements AFAIK.

I’m not sure if a high-response (emphasis on fast response) system is
needed beyond 5ms (==5000 microseconds) for SDL… requirements beyond

I am not looking for fast response. It just happens that I need to be
able to measure things that can take less time than 1 ms. Computers
keep getting faster and faster, and there is a chance that I can run
some simulation 10000 times per second. And the simulation might need
timing information. Which is provided by performance counter (is it
available in linux in any form?). So I don’t see why SDL couldn’t let
me use as precise timing info as possible.

I use floating point data as well. Mostly doubles though.

– Timo Suoranta – @Timo_K_Suoranta

Might just be that it doesn’t sync to the retrace… :)

//David

.- M A I A -------------------------------------------------.
| Multimedia Application Integration Architecture |
| A Free/Open Source Plugin API for Professional Multimedia |
`----------------------> http://www.linuxaudiodev.com/maia -'

.- David Olofson -------------------------------------------.
| Audio Hacker - Open Source Advocate - Singer - Songwriter |
`--------------------------------> david at linuxdj.com -'

On Thursday 05 April 2001 11:21, Mattias Engdegård wrote:

I got my program running at over 1000 fps. Guess what sdl timer
resolution thought about that…

I doubt you did, or you have a very interesting display (hardly a CRT)

Timo K Suoranta wrote on 05 Apr 2001:

Any realtime simulation IMHO. It is best to have the simulation work at
something like 35 - 400 simulation frames per second, and since current
machines can easily handle 400 or even more sfps, why not? The performance
counter in win32 gives sort of precise time measurements AFAIK.

I’m not sure if a high-response (emphasis on fast response) system is
needed beyond 5ms (==5000 microseconds) for SDL… requirements beyond

I am not looking for fast response. It just happens that I need to be
able to measure things that can take less time than 1 ms. Computers
keep getting faster and faster, and there is a chance that I can run
some simulation 10000 times per second. And the simulation might need
timing information. Which is provided by performance counter (is it
available in linux in any form?). So I don’t see why SDL couldn’t let
me use as precise timing info as possible.

I use floating point data as well. Mostly doubles though.

It might make sense to run your simulation n times per second, but you don’t
need to display it at more than 60 FPS. So you can do:

while (1) {
    thistime = SDL_GetTicks();
    while (lasttime < thistime) {
        /* SDL_GetTicks() is in milliseconds, so each simulation
         * step is 1000 / SIMULATION_RATE ms long */
        lasttime += 1000 / SIMULATION_RATE;
        runSimulation(lasttime);
        /* thistime = SDL_GetTicks(); */
    }
    displayResult(lasttime);
}

This way your simulation runs at a constant rate.
For even more exact results, update thistime after every calculation.
Of course, if your computer can’t keep up, you’ll never see a frame.

- Andreas

--
Check out my 3D lightcycle game: http://www.gltron.org
More than 100’000 downloads of the last version (0.59)

I say the microsecond… the reason is, if you’re sampling differences between
frames of animation to adjust the accuracy or quantity of movement/rotation/etc.,
the accumulated results will more closely match the real frame times of your
engine.

TuxRacer uses gettimeofday(), and that has microsecond accuracy, correct?
I know windows has multimedia timers, but from a quick glance at the API I’m
not exactly sure if they can be used in the same manner as other timers…
Unsure though, I’d have to look into it more.
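
For reference, reading microsecond time with gettimeofday() is about this simple (Unix-specific; the resolution you actually get depends on the kernel and hardware):

#include <sys/time.h>

/* Return the current time as floating-point seconds. */
double seconds_now(void)
{
    struct timeval tv;

    gettimeofday(&tv, NULL);   /* NULL: the timezone argument is obsolete */
    return tv.tv_sec + tv.tv_usec * 1e-6;
}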

Matt

For the physics simulation - updating object positions and rotations -

I need to measure the exact time spent in that frame. On fast machines
it turned out that SDL timer precision was not enough; simulation could
run so fast that the time spent in one animation step was not quite
precise enough, and resulted in jerky animation.

My question remains: For what application is timekeeping better than
1/1000 s needed, and how high resolution is needed? microsecond?
nanosecond?

I’m not saying it isn’t needed; I just want clear quantified answers
to be able to design the interface better

I got my program running at over 1000 fps. Guess what sdl timer
resolution thought about that…

I doubt you did, or you have a very interesting display (hardly a CRT)

Okay, the program went over 1000 fps, the monitor didn’t. But my program
wasn’t clever enough to know anything about the monitor.

Is there a way with SDL to find out the monitor refresh frequency anyway?

– Timo Suoranta – @Timo_K_Suoranta

I say the microsecond… the reason is, if you’re sampling differences between
frames of animation to adjust the accuracy or quantity of movement/rotation/etc.,
the accumulated results will more closely match the real frame times of your
engine.

nobody would sample differences between frames that way