Framerate counter

Hi there!

I’ve tried to make a framerate counter as well as a framerate limiter (so the
game doesn’t run faster than, say, 50 fps), but I’ve failed in doing so. I was
wondering if anyone has already done this before, and would care to share the
source of it with me.

I usually find out what time it is at the beginning of my loop
(gettimeofday()), then what time it is at the end of my loop.

If it’s been less than 1000 / fps milliseconds, then I sleep
(SDL_Delay()) the difference.
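
A sketch of that kind of limiter, using SDL_GetTicks() as suggested further
down the thread (TARGET_FPS, running and update_and_draw are placeholder
names, not code from this thread):

#include "SDL.h"

#define TARGET_FPS 50                       /* cap the game at 50 fps */

extern int  running;                        /* your main-loop flag */
extern void update_and_draw(void);          /* your per-frame work */

void main_loop(void)
{
    const Uint32 frame_ms = 1000 / TARGET_FPS;

    while (running) {
        Uint32 start, elapsed;

        start = SDL_GetTicks();             /* time at the top of the loop */
        update_and_draw();
        elapsed = SDL_GetTicks() - start;   /* time the frame actually took */

        if (elapsed < frame_ms)
            SDL_Delay(frame_ms - elapsed);  /* sleep off the remainder */
    }
}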

-bill!

Hi there!

I’ve tried to make a framerate counter as well as a framerate limiter (so the
game doesn’t run faster than, say, 50 fps), but I’ve failed in doing so. I was
wondering if anyone has already done this before, and would care to share the
source of it with me.

Thnx in advance,

  • Remenic.

Look at my animation tutorials at http://sdldoc.sourceforge.net/tne/

  • Andreas

On Wed, Apr 04, 2001 at 02:53:07PM +0200, Richard ‘Remenic’ Stellingwerff wrote:

Hi there!

I’ve tried to make a framerate counter as well as a framerate limiter (so the
game doesn’t run faster than, say, 50 fps), but I’ve failed in doing so. I was
wondering if anyone has already done this before, and would care to share the
source of it with me.

I usually find out what time it is at the beginning of my loop
(gettimeofday()), then what time it is at the end of my loop.

although gettimeofday() can be more precise than SDL_GetTicks(), the
latter is more portable (and internally uses the former if needed)

If it’s been less than 1000 / fps milliseconds, then I sleep
(SDL_Delay()) the difference.

since SDL_Delay() typically has ~10 ms granularity, SDL_GetTicks()
should suffice for this. We could add an API for higher-resolution timing
but I haven’t seen a really compelling argument for it yet

Profiling?

  • Andreas

On Wed, Apr 04, 2001 at 12:28:55PM +0200, Mattias Engdegård wrote:

since SDL_Delay() typically has ~10 ms granularity, SDL_GetTicks()
should suffice for this. We could add an API for higher-resolution timing
but I haven’t seen a really compelling argument for it yet

I usually find out what time it is at the beginning of my loop
(gettimeofday()), then what time it is at the end of my loop.

although gettimeofday() can be more precise than SDL_GetTicks(), the
latter is more portable (and internally uses the former if needed)

Whoops! That’s what I meant. Sorry, been doing straight xlib lately
(SDL hasn’t been ported to the Agenda… YET :slight_smile: )

-bill!

I’ve tried to make a framerate counter as well as a framerate limiter (so the
game doesn’t run faster than, say, 50 fps), but I’ve failed in doing so. I was
wondering if anyone has already done this before, and would care to share the
source of it with me.

I use a simulation timer. The timer is an SDL timer, which maintains time for
my application and runs the simulation. The main thread redraws the display
and maintains keyboard state; the keyboard state is interpreted in the
simulation timer. The idea is that the simulation timer runs at a fixed and
faster rate than the display and event refresh thread.

By maintaining time for my application I mean querying the performance counter
on WIN32 systems and gettimeofday() on others - I decided not to rely on the
SDL timer being precise enough.

The main thread, which redraws the display and polls events in a loop,
also calculates fps. It does it in the following way:

frames++;
if ((sys_time - fps_time) > 1000.0) {   /* at least one second has passed */
	fps      = (float)frames * 1000.0f / (float)(sys_time - fps_time);
	frames   = 0;
	fps_time = sys_time;            /* start a new measurement window */
}

What I do is count how many frames are rendered in 1000 milliseconds.
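
The same idea as a self-contained sketch, with SDL_GetTicks() standing in for
the platform-specific timers mentioned above (an illustration only, not the
actual code from the application described):

#include "SDL.h"

static Uint32 fps_time = 0;     /* start of the current measurement window */
static Uint32 frames   = 0;     /* frames rendered in that window */
static float  fps      = 0.0f;  /* last computed framerate */

/* Call once per rendered frame. */
void count_frame(void)
{
    Uint32 sys_time = SDL_GetTicks();

    frames++;
    if (sys_time - fps_time > 1000) {   /* a full second has elapsed */
        fps      = (float)frames * 1000.0f / (float)(sys_time - fps_time);
        frames   = 0;
        fps_time = sys_time;
    }
}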

– Timo Suoranta – @Timo_K_Suoranta

since SDL_Delay() typically has ~10 ms granularity, SDL_GetTicks()
should suffice for this. We could add an API for higher-resolution timing
but I haven’t seen a really compelling argument for it yet

I found out that SDL time functions were not precise, so
I wrote my own.

http://glElite.sourceforge.net/doc/Timer_cpp-source.html

Profiling?

Does anyone know how profiling in Visual Studio should work?
No matter what I do, the menu entry for Profile… is not
enabled :I

– Timo Suoranta – @Timo_K_Suoranta

since SDL_Delay() typically has ~10 ms granularity, SDL_GetTicks()
should suffice for this. We could add an API for higher-resolution timing
but I haven’t seen a really compelling argument for it yet

Profiling?

isn’t a strong enough argument for a cross-platform solution since it’s
just used in development. platform-specific methods should do nicely

By maintaining time for my application I mean querying the performance counter
on WIN32 systems and gettimeofday() on others - I decided not to rely on the
SDL timer being precise enough.

please state exactly what precision you need (and why), and we can
make an attempt to provide that in a platform-independent way so people
don’t need to resort to unportable hacks like the one you mention

Profiling?

Does anyone know how profiling in Visual Studio should work?
No matter what I do, the menu entry for Profile… is not
enabled :I

Just a wild guess: What version of visual C++ do you have?
I think profiling is only enabled in the professional/enterprise/whatever
edition.

  • Andreas

Are you building with profiling enabled?
Is Project->Settings->Link->Enable Profiling checked?

Kovacs> ----- Original Message -----

From: marvin@dataway.ch (Andreas Umbach)
To:
Sent: Wednesday, April 4, 2001 15:07
Subject: Re: [SDL] framerate counter

Profiling?

Does anyone know how profiling in Visual Studio should work?
No matter what I do, the menu entry for Profile… is not
enabled :I

Just a wild guess: What version of visual C++ do you have?
I think profiling is only enabled in the professional/enterprise/whatever
edition.

  • Andreas

Mattias Engdegård wrote:

I usually find out what time it is at the beginning of my loop
(gettimeofday()), then what time it is at the end of my loop.

If it’s been less than 1000 / fps milliseconds, then I sleep
(SDL_Delay()) the difference.

since SDL_Delay() typically has ~10 ms granularity, SDL_GetTicks()
should suffice for this. We could add an API for higher-resolution timing
but I haven’t seen a really compelling argument for it yet

My understanding, though, has been that SDL_Delay() is not
necessarily stable w.r.t. system load changes – that
the time requested is a minimum time. This was the
rationale for creating the sge_Delay() function, according
to the documentation (sge_Delay() solves this by profiling
to estimate a good minimum time, uses SDL_Delay() to get
almost there and then burns off the extra in a busy-wait
loop). But even sge_Delay() can be foiled by rapid changes
in system load.
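
Something along the lines of that hybrid, as a sketch (this is not
sge_Delay() itself, and the 15 ms margin is just an assumed safety value):

#include "SDL.h"

/* Sleep in coarse SDL_Delay() steps while plenty of time remains, then
   burn off the last few milliseconds polling SDL_GetTicks(). */
void delay_hybrid(Uint32 ms)
{
    Uint32 end = SDL_GetTicks() + ms;

    /* Coarse part: keep a margin larger than the ~10 ms granularity. */
    while ((Sint32)(end - SDL_GetTicks()) > 15)
        SDL_Delay(10);

    /* Fine part: busy-wait the remainder. */
    while ((Sint32)(end - SDL_GetTicks()) > 0)
        ;
}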

Also, 25 fps is only 40ms and 50 is 20ms, so, with a 10ms
resolution timer, you could have a lot of aliasing in the
frame rate (especially, if, like me, you’re probably going
to be running close to using up the available time on the
low-end platforms anyway).

Considering all that, I thought it might be simpler and
safer in my app to just burn off the time in a busy-wait
polling SDL_GetTicks(). This way I should come close to
catching the falling edge and can run at even multiples
of 10ms without aliasing (?).

Of course, those are wasted CPU cycles.

I haven’t actually implemented this yet, so if there’s
any reason I shouldn’t, this would be a good time for
me to find out about it. :slight_smile:

Thanks!

--
Terry Hancock
@Terry_Hancock

Considering all that, I thought it might be simpler and
safer in my app to just burn off the time in a busy-wait
polling SDL_GetTicks(). This way I should come close to
catching the falling edge and can run at even multiples
of 10ms without aliasing (?).

Of course, those are wasted CPU cycles.

I haven’t actually implemented this yet, so if there’s
any reason I shouldn’t, this would be a good time for
me to find out about it. :slight_smile:

Well, CPU load goes through the roof. This is how I used to
do it until people started complaining about the program taking
up major CPU time.

(Nobody I asked knew about “usleep()” for some reason…
All I knew about was “sleep()” and “gettimeofday()” :^/ )

-bill!

I have had problems with this in my code. I have various things that
animate at different framerates. The only way I have been able to get
my game to run smoothly is to use a select() call to sleep for the amount
of time between frames. Since I have some threading, I don’t want to
use the busy loop, I would rather my threads do the work. I use a
select() delay in one thread that updates the states of the objects on
the screen at a constant rate, and another that handles the graphical
output. Since the state of the objects is changed independently of the
screen refresh it works well… except that the screen looks better
when updated at a constant rate. This is why I use select() in my screen
refresh code as well.
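
For reference, a select()-based delay of that sort looks roughly like this
(a POSIX-only sketch; as noted further down, this is also essentially what
SDL_Delay() does on unixy platforms):

#include <stddef.h>
#include <sys/select.h>
#include <sys/time.h>

/* Block for about ms milliseconds using select() with no file descriptors;
   only the timeout argument matters. */
static void select_delay(unsigned long ms)
{
    struct timeval tv;

    tv.tv_sec  = ms / 1000;
    tv.tv_usec = (ms % 1000) * 1000;
    select(0, NULL, NULL, NULL, &tv);
}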

This is one reason my code only works on Windows and Linux, but with
SDL_Delay so coarse I have no choice.

Jamie Best

Terry Hancock wrote:

Mattias Engdegård wrote:

I usually find out what time it is at the beginning of my loop
(gettimeofday()), then what time it is at the end of my loop.

If it’s been less than 1000 / fps milliseconds, then I sleep
(SDL_Delay()) the difference.

since SDL_Delay() typically has ~10 ms granularity, SDL_GetTicks()
should suffice for this. We could add an API for higher-resolution timing
but I haven’t seen a really compelling argument for it yet

My understanding, though, has been that SDL_Delay() is not
necessarily stable w.r.t. system load changes – that
the time requested is a minimum time. This was the
rationale for creating the sge_Delay() function, according
to the documentation (sge_Delay() solves this by profiling
to estimate a good minimum time, uses SDL_Delay() to get
almost there and then burns off the extra in a busy-wait
loop). But even sge_Delay() can be foiled by rapid changes
in system load.

Also, 25 fps is only 40ms and 50 is 20ms, so, with a 10ms
resolution timer, you could have a lot of aliasing in the
frame rate (especially, if, like me, you’re probably going
to be running close to using up the available time on the
low-end platforms anyway).

Considering all that, I thought it might be simpler and
safer in my app to just burn off the time in a busy-wait
polling SDL_GetTicks(). This way I should come close to
catching the falling edge and can run at even multiples
of 10ms without aliasing (?).

Of course, those are wasted CPU cycles.

I haven’t actually implemented this yet, so if there’s
any reason I shouldn’t, this would be a good time for
me to find out about it. :slight_smile:

Thanks!

Terry Hancock
hancock at earthlink.net

Do you mean sleeping for less than 10 ms, or higher resolution than 1 ms?

In the former case, it might be nice to have a way of knowing/setting whether
or not SDL will/should busy-wait if there’s no other way of dealing with less
than 10 ms.

Now, the “setting”+“should” alternative isn’t very nice anyway. Probably a
better idea to just be able to get info on the scheduling timing accuracy,
and then just do the busy-waiting explicitly. That way, you don’t hide nasty
hacks behind the API.

Anyway, my experience (with “normal” platforms) so far has been that higher
accuracy than 1 ms is only interesting from a statistical POV, as in the
retrace PLL hack I’ve been playing with. That code expects to miss most of
the retraces, and only cares about getting a fairly accurate timestamp on the
ones it does hit. Scheduling accuracy doesn’t have to be (and can’t be, on
most platforms) better than about half a video frame period.

//David

On Wednesday 04 April 2001 12:28, Mattias Engdegård wrote:

I usually find out what time it is at the beginning of my loop
(gettimeofday()), then what time it is at the end of my loop.

although gettimeofday() can be more precise than SDL_GetTicks(), the
latter is more portable (and internally uses the former if needed)

If it’s been less than 1000 / fps milliseconds, then I sleep
(SDL_Delay()) the difference.

since SDL_Delay() typically has ~10 ms granularity, SDL_GetTicks()
should suffice for this. We could add an API for higher-resolution timing
but I haven’t seen a really compelling argument for it yet

Yeah, but count on getting lots of “noise” in the data. (However, getting
timestamps is a lot more reliable than trying to schedule with accurate
timing on most platforms.)

//David

On Wednesday 04 April 2001 12:44, Andreas Umbach wrote:

On Wed, Apr 04, 2001 at 12:28:55PM +0200, Mattias Engdegård wrote:

since SDL_Delay() typically has ~10 ms granularity, SDL_GetTicks()
should suffice for this. We could add an API for higher-resolution timing
but I haven’t seen a really compelling argument for it yet

Profiling?

This is one reason my code only works on Windows and Linux, but with
SDL_Delay so coarse I have no choice.

Foo. SDL_Delay is implemented on UNIX using select().
The timer resolution is based on the kernel scheduling resolution, not
anything inherent in the calls being made. usleep() has the same resolution,
and is implemented with select() as well.

From the nanosleep man page:

   The current implementation of nanosleep is based on the normal
   kernel timer mechanism, which has a resolution of 1/HZ s (i.e.,
   10 ms on Linux/i386 and 1 ms on Linux/Alpha). Therefore, nanosleep
   pauses always for at least the specified time, however it can take
   up to 10 ms longer than specified until the process becomes
   runnable again. For the same reason, the value returned in case of
   a delivered signal in *rem is usually rounded to the next larger
   multiple of 1/HZ s.
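
A quick way to see that granularity in practice (a small test sketch, not
part of SDL):

#include <stdio.h>
#include "SDL.h"

int main(int argc, char *argv[])
{
    int i;

    if (SDL_Init(SDL_INIT_TIMER) < 0)
        return 1;

    for (i = 0; i < 5; i++) {
        Uint32 before = SDL_GetTicks();
        SDL_Delay(1);                   /* ask for 1 ms ... */
        printf("SDL_Delay(1) took %u ms\n",
               (unsigned)(SDL_GetTicks() - before));
    }

    SDL_Quit();
    return 0;
}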

David, you don’t need to reply. :slight_smile:

See ya!
-Sam Lantinga, Lead Programmer, Loki Entertainment Software

I have had problems with this in my code. I have various things that
animate at different framerates. The only way I have been able to get
my game to run smoothly is to use a select() call to sleep for the amount
of time between frames. […]

This is one reason my code only works on Windows and Linux, but with
SDL_Delay so coarse I have no choice.

SDL_Delay() should be no coarser than select(). We use select() for
delays on most unixy platforms (or nanosleep(), but that’s usually
identical or slightly better)

[…]

Considering all that, I thought it might be simpler and
safer in my app to just burn off the time in a busy-wait
polling SDL_GetTicks(). This way I should come close to
catching the falling edge and can run at even multiples
of 10ms without aliasing (?).

Of course, those are wasted CPU cycles.

I haven’t actually implemented this yet, so if there’s
any reason I shouldn’t, this would be a good time for
me to find out about it. :slight_smile:

The problem (on some platforms at least) is that the scheduler will see your
application as a hard-working CPU hog that needs to be preemptively
scheduled out occasionally so it doesn’t freeze the system. This is where you can
lose control entirely; you’ll be robbed of the CPU for an undefined amount of
time, with no way of getting back in before the scheduler decides the other
tasks have had enough time to stay alive…

I’ve tried similar hacks on Win32 and Linux, but it seems to either make no
difference, or make things worse. (And the X server being forced to
participate in the hogging doesn’t exactly make things easier or more solid.)

//David

On Wednesday 04 April 2001 21:50, Terry Hancock wrote: