Doesn't seem useful. If your rendering takes 17 msec instead of 16.6 msec
due to system events (e.g. an antivirus scan), then what do you do?
In that case, the programmer makes a decision about whether to cap the
fps at all. I address that situation further below.
On the other side of the effective (max) fps / refresh-rate barrier, it
matters. The game I'm writing has a quick render speed, so I can time
how long a frame takes to render, sleep for the remainder, and hit 60 fps
to within 0.1% of that rate. (Yes, I mean percent.) Let's say I decide to render faster
than the max fps, in order to be conservative with my assumptions about
the monitor’s limit. That would be analogous to say 80 Hz on this
monitor. I just tested my game at that render speed, and you can really
tell the difference when the ‘space ship’ turns. (right now it’s just a
triangle.) Basically, you see an uneven distribution of points over the
arc, which stands out since the eye blends them together into a
continuous motion. The clumping and segregation of angles looks
unnatural when blended.
Of course, I could assume 60 Hz, but how could I know I’m not
overshooting on an embedded device?
To answer your question above, if the render time is barely over 16.6
ms, it might make sense for the developer to draw every other frame, or
use no cap. But that would be his/her choice. I tested my game at fps
lower than 60, but honestly, my brain didn’t notice clumping issues like
it did at 80 Hz, just the normal choppiness of low fps. So maybe knowing
the max fps would be useful for setting a limit for quick render jobs,
but would not be needed for knowing what is maxfps/2, maxfps/3,
maxfps/4… which would help give an even distribution of drawn frames
at lower rates. Still, it could be the developer’s option.
It might be useful to allow users to select some resolution at two
different refresh rates, but beyond that, the knowledge of 60 Hz vs
XX Hz isn't something I can see people making useful optimizations with.
Setting the monitor refresh rate would be nice too.
Most likely, they'll try to be clever, insert short waits, and mess
everything up. In theory, if you ran on a single-tasking embedded
system, you could use this information to ensure perfect timing
without any hardware mechanism like v-sync, but most of us are on
off-the-shelf graphics cards and multitasking OSes where timing
granularity is closer to 1-10 ms than “real-time”.
Actually, vsync doesn’t seem to be working on SDL/SWsurface/X11, if
tearing is any evidence of that. (I can only get a SWsurface or
OpenGL, not a HWsurface.) Thankfully, Wayland should fix the
tearing issue, even for SWsurfaces, but it is a ways off. So vsync isn’t
available on every system without diving into OpenGL.
Many games use frame limiters. One effect they have is not maxing
out the GPU. With vsync off and no frame limiter, you might be
doing 500 fps while you only need 60 or 75, and the GPU is working at
full load while it wouldn’t have to. So knowing the current screen
refresh rate can be useful.
-Andrew

On 05/05/2012 05:15 PM, Patrick Baggett wrote:
On Sat, May 5, 2012 at 1:20 PM, Nikos Chantziaras wrote:
On 05/05/12 21:13, Patrick Baggett wrote:
On Sat, May 5, 2012 at 12:56 PM, Nikos Chantziaras <realnc at gmail.com> wrote:
On 05/05/12 20:44, Patrick Baggett wrote: