Knowing the maximum effective framerate

Hello,

I am writing because I have a good idea for a feature in SDL 2.0.

It would be great if an application could access the system’s current
screen refresh rate, so games would know how frequently to render their
next frame. If a program chooses the wrong frame rate, rendered frames
will either be skipped at sometimes oscillating frequencies, or will
frequently miss the deadline for being drawn, depending on whether the
chosen fps is greater or less than the system's. This leads to uneven
pacing of frames and causes what appears to be stuttering movement.

I'm guessing that many displays are set to exactly 60 Hz, since mine
is, but that's really just a guess. It could be quite different on
embedded devices or high-quality monitors. I've heard of monitors that
run at 75 Hz, not 60.

So what do you think? Would it be good to know the monitor’s current
refresh rate setting?

Thanks for a great library.
-Andrew

Andrew Engelbrecht writes:

    I'm guessing that many displays are set to exactly 60 Hz, since mine
    is, but that's really just a guess. It could be quite different on
    embedded devices or high-quality monitors. I've heard of monitors
    that run at 75 Hz, not 60.

I think TFTs generally run at 60 Hz, but with CRTs the frequency was
quite variable.


What you need is more than knowing the frame rate. Say you know the
frame rate is exactly 60 Hz and you manage to refresh your screen at
exactly 60 Hz too, but your update always happens while the display is
in the middle of scanning out the image. You will end up with a
disjointed picture where the top half is from the old frame and the
bottom half is from the new frame.

So what you need is a way to wait until the graphics card is done
displaying the current frame and then flip to the new one. By repeatedly
waiting for that point in time you can also measure the frame rate.

Best regards,
Goswin
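
A minimal sketch of the measurement Goswin describes, assuming SDL 2.x
and a driver that really does block SDL_RenderPresent() on the vertical
retrace (which is not guaranteed on every platform): create a vsynced
renderer, flip a fixed number of frames, and divide.

/* Estimate the display refresh rate by timing vsynced buffer flips.
 * Assumes the driver honours SDL_RENDERER_PRESENTVSYNC. */
#include <SDL.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    (void)argc; (void)argv;
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Window *win = SDL_CreateWindow("vsync probe",
        SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, 640, 480, 0);
    SDL_Renderer *ren = SDL_CreateRenderer(win, -1,
        SDL_RENDERER_ACCELERATED | SDL_RENDERER_PRESENTVSYNC);

    const int frames = 120;
    Uint64 start = SDL_GetPerformanceCounter();
    for (int i = 0; i < frames; i++) {
        SDL_RenderClear(ren);
        SDL_RenderPresent(ren);   /* should block until the next retrace */
    }
    double seconds = (double)(SDL_GetPerformanceCounter() - start)
                     / SDL_GetPerformanceFrequency();
    printf("measured refresh: %.2f Hz\n", frames / seconds);

    SDL_DestroyRenderer(ren);
    SDL_DestroyWindow(win);
    SDL_Quit();
    return 0;
}

A few warm-up flips before starting the timer would make the estimate a
little more robust.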

On Sat, May 5, 2012 at 8:32 AM, Andrew Engelbrecht wrote:

    So what do you think? Would it be good to know the monitor's current
    refresh rate setting?

Doesn't seem useful. If your rendering takes 17 ms instead of 16.7 ms
due to system events (e.g. an antivirus scan), then what do you do? What
if it takes 100 ms? There aren't any actionable items to act on using
the maximum refresh rate alone; you're a lot more likely to degrade
graphics features when the frame time is above some threshold, e.g.
33.3 ms (two 60 Hz frames). But then you're measuring and acting on the
actual frame rate, not the maximum. It might be useful to allow users to
select some resolution at two different refresh rates, but beyond that,
the knowledge of 60 vs 72 vs XX isn't something I can see people making
useful optimizations with. Most likely, they'll try to be clever and
insert short waits and screw up everything. In theory, if you ran on a
single-tasking embedded system, you could use this information to ensure
perfect synchronization without any hardware mechanism like v-sync, but
most of us are using off-the-shelf graphics cards and multitasking OSes
where timing granularity is closer to 1-10 ms than "real-time".

Patrick
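
To illustrate the distinction Patrick is drawing, here is a rough sketch
(not an SDL API) of acting on the measured frame time rather than on the
display's nominal maximum; render_frame(), quit_requested() and the
detail knob are hypothetical application code:

/* Shed or restore a (hypothetical) detail level based on how long the
 * last frame actually took; 33.3 ms is two 60 Hz frame periods. */
static void run_adaptive_loop(SDL_Renderer *renderer)
{
    Uint64 freq = SDL_GetPerformanceFrequency();
    int detail = 2;                       /* hypothetical quality knob, 0..2 */

    while (!quit_requested()) {           /* placeholder exit condition */
        Uint64 t0 = SDL_GetPerformanceCounter();

        render_frame(renderer, detail);   /* placeholder drawing code */
        SDL_RenderPresent(renderer);

        double ms = 1000.0 * (SDL_GetPerformanceCounter() - t0) / freq;
        if (ms > 33.3 && detail > 0)
            detail--;                     /* too slow: drop effects */
        else if (ms < 15.0 && detail < 2)
            detail++;                     /* headroom again: restore them */
    }
}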



On 05/05/12 20:44, Patrick Baggett wrote:

    It might be useful to allow users to select some resolution at two
    different refresh rates, but beyond that, the knowledge of 60 vs 72 vs
    XX isn't something I can see people making useful optimizations with.

Many games use frame limiters. One effect they have is not maxing out
the GPU. With VSync off and no frame limiter, you might be doing 500 FPS
while you only need 60 or 75, and the GPU is working at full load when
it wouldn't have to. So knowing the current screen refresh rate can be
useful.
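
A rough sketch of the kind of limiter Nikos means, assuming the refresh
rate has been obtained somewhere (a hard-coded 60 stands in for it here)
and accepting SDL_Delay()'s roughly millisecond granularity;
render_frame() and renderer are placeholders for the application's own
code:

/* Cap rendering near the display refresh rate so the GPU is not
 * running flat out at 500 FPS. */
const int refresh_hz = 60;               /* stand-in for the queried rate */
const Uint32 target_ms = 1000 / refresh_hz;

for (;;) {
    Uint32 frame_start = SDL_GetTicks();

    render_frame();                      /* placeholder drawing code */
    SDL_RenderPresent(renderer);

    Uint32 elapsed = SDL_GetTicks() - frame_start;
    if (elapsed < target_ms)
        SDL_Delay(target_ms - elapsed);  /* give the rest of the slice back */
}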

On Sat, May 5, 2012 at 12:56 PM, Nikos Chantziaras wrote:

    Many games use frame limiters. One effect they have is not maxing out
    the GPU. With VSync off and no frame limiter, you might be doing 500
    FPS while you only need 60 or 75, and the GPU is working at full load
    when it wouldn't have to. So knowing the current screen refresh rate
    can be useful.

If you can render 500 FPS, then v-sync should be on. Turning off v-sync
is a performance optimization, not a visual quality one.

On 05/05/12 21:13, Patrick Baggett wrote:

    If you can render 500 FPS, then v-sync should be on. Turning off
    v-sync is a performance optimization, not a visual quality one.

Turning off vsync is also a latency optimization. It's interesting,
though, that in most games where the controls become "floaty" when you
enable vsync, they become extremely responsive again with vsync enabled
but with a frame limiter applied, but only if the FPS limit is not set
above the refresh rate.

So I still think it's useful :) For example, the recent new feature of
NVidia's Windows driver, where you can set a frame limiter in the driver
itself, has solved all input latency problems (I hate mouse lag) with
VSync enabled.

On Sat, May 5, 2012 at 1:20 PM, Nikos Chantziaras wrote:

    Turning off vsync is also a latency optimization. It's interesting,
    though, that in most games where the controls become "floaty" when
    you enable vsync, they become extremely responsive again with vsync
    enabled but with a frame limiter applied, but only if the FPS limit
    is not set above the refresh rate.

From my experience, these issues are generally due to polling vs.
buffered input and entangled render loop/event queues. Framerate
limiting is probably a good idea in general, sure, but aren't the limits
usually closer to something like 100 Hz (i.e. some fixed value much
greater than the monitor's framerate)? If so, then it doesn't seem to
require knowledge of the monitor's display rate, which is what we're
discussing.

I guess I wouldn't *oppose* the inclusion, but I think you'd be hard
pressed to find a true usage of it that is actually correct. My 2c.

Patrick
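
For what it's worth, a sketch of the decoupling Patrick alludes to,
assuming SDL 2.x: drain the whole event queue with SDL_PollEvent() every
frame instead of handling one event per render, so input handling is not
tied to the frame rate. handle_event(), render_frame() and renderer are
placeholders.

/* Empty the event queue completely each iteration, then draw once. */
SDL_Event ev;
int running = 1;
while (running) {
    while (SDL_PollEvent(&ev)) {         /* all pending input, every frame */
        if (ev.type == SDL_QUIT)
            running = 0;
        else
            handle_event(&ev);           /* placeholder input handling */
    }
    render_frame();                      /* placeholder drawing code */
    SDL_RenderPresent(renderer);
}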

On 05/05/2012 05:15 PM, Patrick Baggett wrote:

    Doesn't seem useful. If your rendering takes 17 ms instead of 16.7 ms
    due to system events (e.g. antivirus scan) then what do you do?

In that case, the programmer makes a decision about whether to cap the
fps at all. I address that situation further below.

On the other side of the effective (max) fps / refresh rate barrier, it
matters. The game I'm writing has a quick render speed, so I can time
how long it takes to render, sleep, and then hit 60 fps to within 0.1%
of that rate (yes, I mean percent). Let's say I decide to render faster
than the max fps, in order to be conservative in my assumptions about
the monitor's limit; that would be analogous to, say, 80 Hz on this
monitor. I just tested my game at that render speed, and you can really
tell the difference when the 'space ship' turns (right now it's just a
triangle). Basically, you see an uneven distribution of points over the
arc, which stands out since the eye blends them together into continuous
motion. The clumping and segregation of angles looks unnatural when
blended.

Of course, I could assume 60 Hz, but how could I know I’m not
overshooting on an embedded device?

To answer your question above: if the render time is barely over
16.6 ms, it might make sense for the developer to draw every other
frame, or to use no cap at all. But that would be his/her choice. I
tested my game at rates lower than 60 fps, and honestly my brain didn't
notice clumping issues like it did at 80 Hz, just the normal choppiness
of a low frame rate. So maybe knowing the max fps would be useful for
setting a limit for quick render jobs, but may not be needed for picking
maxfps/2, maxfps/3, maxfps/4..., which would give an even distribution
of drawn frames at lower rates. Still, it could be the developer's
option.
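
A sketch of the pacing Andrew describes, under the assumption that the
refresh rate is known (60 here is a stand-in) and that an integer
divisor of it is chosen for slower render jobs; update_and_render() and
renderer are placeholders:

/* Render, measure, then sleep out the rest of the period, so the frame
 * rate lands on refresh/divisor (maxfps, maxfps/2, maxfps/3, ...). */
const double refresh_hz = 60.0;          /* stand-in for the queried rate */
const int divisor = 1;                   /* 2 -> maxfps/2, 3 -> maxfps/3, ... */
const double period_ms = 1000.0 * divisor / refresh_hz;
Uint64 freq = SDL_GetPerformanceFrequency();

for (;;) {
    Uint64 t0 = SDL_GetPerformanceCounter();

    update_and_render();                 /* placeholder game work */
    SDL_RenderPresent(renderer);

    double spent_ms = 1000.0 * (SDL_GetPerformanceCounter() - t0) / freq;
    if (spent_ms < period_ms)
        SDL_Delay((Uint32)(period_ms - spent_ms));  /* ~1 ms granularity */
}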

    It might be useful to allow users to select some resolution at two
    different refresh rates, but beyond that, the knowledge of 60 vs 72 vs
    XX isn't something I can see people making useful optimizations with.

Setting the monitor refresh rate would be nice too.

    Most likely, they'll try to be clever and insert short waits and screw
    up everything. In theory, if you ran on a single-tasking embedded
    system, you could use this information to ensure perfect
    synchronization without any hardware mechanism like v-sync, but most
    of us are using off-the-shelf graphics cards and multitasking OSes
    where timing granularity is closer to 1-10 ms than "real-time".

Actually, vsync doesn't seem to be working with SDL/SWsurface/X11, if
tearing is any evidence of that. (I can only get a SWsurface or OpenGL,
not a HWsurface.) Thankfully, Wayland should fix the tearing issue, even
for SWsurfaces, but it's a ways off. So vsync isn't available on every
system without diving into OpenGL.


Thanks,
-Andrew


Your idea is a good one. I find it useful to know whether the refresh
rate is 59 Hz or 60 Hz, for example (there are screens like that). It is
useful for adjusting resource consumption and for showing moving images
as smoothly as possible.

The thing is, I (still) haven't tried it myself, but I think it's
already done :)

Have you checked this?
http://wiki.libsdl.org/moin.cgi/SDL_GetDesktopDisplayMode?highlight=(\bCategoryVideo\b)|(CategoryEnum)|(CategoryStruct)|(SGFunctions)

Is SDL_GetDesktopDisplayMode() what you want? It returns this structure:

http://wiki.libsdl.org/moin.cgi/SDL_DisplayMode

filled with useful data, the refresh rate among it.
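
For reference, a small sketch of how that call might be used with
SDL 2.x; as far as I can tell the refresh_rate field can come back as 0
when SDL cannot determine it, so a fallback is included:

/* Query the desktop display mode and derive a frame interval from its
 * refresh rate, falling back to 60 Hz when the rate is unknown. */
#include <SDL.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    (void)argc; (void)argv;
    if (SDL_Init(SDL_INIT_VIDEO) != 0) {
        fprintf(stderr, "SDL_Init: %s\n", SDL_GetError());
        return 1;
    }

    SDL_DisplayMode mode;
    if (SDL_GetDesktopDisplayMode(0, &mode) != 0) {   /* display index 0 */
        fprintf(stderr, "SDL_GetDesktopDisplayMode: %s\n", SDL_GetError());
        SDL_Quit();
        return 1;
    }

    int hz = (mode.refresh_rate > 0) ? mode.refresh_rate : 60;
    printf("desktop mode: %dx%d @ %d Hz -> frame interval %.2f ms\n",
           mode.w, mode.h, hz, 1000.0 / hz);

    SDL_Quit();
    return 0;
}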

On 06/05/12 18:29, Manuel Montoto wrote:

    I find it useful to know whether the refresh rate is 59 Hz or 60 Hz,
    for example (there are screens like that).

Note that this is not the case. Most monitors are 59.94 Hz; 60 is just a
rounded value. The same goes for 59, since Windows 7 doesn't round to
the nearest integer but actually takes into account the value in the
display's DDC information, which says 59.94. The graphics driver
additionally exposes a 60 Hz mode for backwards compatibility, and
perhaps to avoid user confusion, but both modes, 59 and 60, are exactly
the same.

    Your idea is a good one. I find it useful to know whether the refresh
    rate is 59 Hz or 60 Hz, for example (there are screens like that).

Very interesting indeed :)