A few of my own subjective observations as I have access to a few somewhat
exotic displays and a history in the Quake competitive multiplayer scene…
Tearing is most apparent when your redraw rate is near the refresh rate -
if it is substantially higher (3x or higher) or substantially lower (less
than half), it’s not particularly noticeable.
Yes, exactly. It’s just like two waves producing a “beat” between them, where
the high point in the wave represents a synchronized point. If you have a
60Hz display and you update at 50Hz, then you have a bunch of frames that
don’t match the timing. The display updates every 16.67ms and you generate a
frame every 20ms, so if you started at the same time, you’d get:
1/60 2/60 3/60 4/60
o-----o-----o-----o-----o
o------o------o------o------o
1/50 2/50 3/50 4/50
You would get one frame without tearing when n*(1000/50) = m*(1000/60) for
integers n and m, i.e. at the least common multiple of the two periods. In
this case, you’d find it at time = 100ms: n*(1000/50) = m*(1000/60) = 100,
thus n = 5 and m = 6. So every 5th frame drawn by the graphics card is
actually synchronized with the monitor.
The closer the two rates are (e.g. 59fps vs 60Hz), the longer it takes for
them to line up again. In that example the timelines only coincide when
n = 59 and m = 60, i.e. at time = 1000ms, so only one rendered frame per
second actually lands in sync with the monitor.
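To make the arithmetic concrete, here is a tiny standalone C sketch (the
names are mine, not from any particular codebase) that computes that
coincidence interval for any two rates:

/* beat.c - when do a render clock and a refresh clock coincide?
   The first coincidence of n/render_hz == m/refresh_hz seconds is at
   t = 1/gcd(render_hz, refresh_hz) seconds, i.e. the least common
   multiple of the two frame periods. */
#include <stdio.h>

static long gcd(long a, long b)
{
    while (b) { long t = a % b; a = b; b = t; }
    return a;
}

static void report(long render_hz, long refresh_hz)
{
    long g = gcd(render_hz, refresh_hz);
    printf("%ldfps vs %ldHz: in sync every %.1fms (every %ldth rendered frame)\n",
           render_hz, refresh_hz, 1000.0 / g, render_hz / g);
}

int main(void)
{
    report(50, 60);  /* every 100.0ms, i.e. every 5th frame   */
    report(59, 60);  /* every 1000.0ms, i.e. every 59th frame */
    return 0;
}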
Film cameras rely on a series of “snapshots”, distinct complete frames,
which correlates rather well with LCD/Plasma/DLP display technologies; this
is effectively a “strobe image” characteristic (blasting the entire frame
out at once, or at least causing it to morph over time into the new frame
all at once in the case of LCD).
As a side note, film projectors use an intentional flicker of the shutter
to hide the times when it advances frames vs the times when it does not
(each frame is shown with two or more black periods inserted, one of which
is hiding the advancement of the film, while the others exist only for
aesthetic reasons).
Old camcorders and many cellphones are known for a “motion skew” (rolling
shutter) effect, because the discharge of the CCD or CMOS pixels is
performed during scanout (the shutter is open at the time), matching the
signal rate being produced. Such “motion skew” is not obvious when the
footage is directly displayed on a similar CRT/LED display at the same
refresh rate, where the timing of pixels lighting up (being energized) is
identical to the timing of the recording itself. This happens chiefly
because these cameras have no shutter (on a high-end DSLR you do not get
much motion skew, because a mechanical shutter is used when recording video
and the scanout is performed “in the dark”).
As a case example, Quake3 is considered best played on a CRT, which is
partly to do with its 125fps cap. On an LCD this is far too near the refresh
rate and can demonstrate obvious tearing; enabling vsync fixes this problem
on an LCD, but isn’t particularly necessary on a CRT (likely due to the
inherent weirdness of the scanout itself). Unfortunately, turning on vsync
causes the player physics to change and the input to become less accurately
timed…
rant /
This is because Quake3, like many games, ties the input update speed
directly to the graphics rendering. In such a case, the app usually polls
the state of the mouse/keyboard, and since multiple changes might have
already occurred between “now” and the last poll, input is lost. I’ve
been there, I’ve done that, and I got wiser since then. This is a solved
problem: buffered input and/or input collection on a different thread
[hack]. On the other hand, if a program is broken such that physics behaves
dramatically differently at higher/lower FPS, then the problem has less to
do with FPS and more to do with WTF. I don’t think anyone should use the
previous examples as a guide for new behavior. Physics with a fixed time
step, where timestep =/= FPS, is also a solved problem. Since this is an SDL
development mailing list, I’m assuming people here are generally developing
games and (hopefully!) won’t make those mistakes. Take note!
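For the record, the bare-bones shape of that pattern with SDL looks roughly
like this (an untested 1.2-style sketch; apply_input(), step_physics() and
render() are hypothetical placeholders): input comes from the buffered event
queue, and physics always advances in fixed DT_MS slices no matter what the
framerate does.

/* Fixed timestep + buffered input. Nothing is lost between frames
   because SDL queues events; jump heights don't change with FPS
   because the physics tick is constant. */
#include "SDL.h"

#define DT_MS 10  /* 100 Hz physics tick, independent of FPS */

int main(int argc, char *argv[])
{
    SDL_Init(SDL_INIT_VIDEO);
    SDL_SetVideoMode(640, 480, 0, SDL_OPENGL);

    Uint32 accumulator = 0, prev = SDL_GetTicks();
    int running = 1;
    while (running) {
        SDL_Event ev;
        /* Buffered input: every event since the last frame is still
           queued, even if we only render at 5 FPS. */
        while (SDL_PollEvent(&ev)) {
            if (ev.type == SDL_QUIT) running = 0;
            /* else apply_input(&ev); feed motion/keys to the sim */
        }

        Uint32 now = SDL_GetTicks();
        accumulator += now - prev;
        prev = now;

        /* Run zero or more fixed physics ticks to catch up. */
        while (accumulator >= DT_MS) {
            /* step_physics(DT_MS); always the same timestep */
            accumulator -= DT_MS;
        }

        /* render(); draw as often as vsync/FPS allows */
        SDL_GL_SwapBuffers();
    }
    SDL_Quit();
    return 0;
}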
The biggest argument I’ve seen among hardcore Quake players against vsync
is the input timing accuracy. It’s not that it makes the game look smoother
to render at higher redraw rates than the refresh, it’s that it feels more
responsive. Often this is to do with the player physics behaving differently
at higher framerates (literally, one can make some jumps and maneuvers at a
high framerate that one cannot make at a lower framerate; this falls into
the category of physics exploits, of course), or simply to do with the input
gathering being finer grained in such a situation.
I’ve heard this too. Someone on the IOQuake3 mailing list complained about
capped FPS making some jump impossible. It makes me want to facepalm.
As far as smoothness of motion goes (the observer situation), the higher the
refresh rate the better, and vsync should always be on. Motion blur can be
simulated by superimposing multiple frames (accumulation buffer / temporal
antialiasing), by using advanced screenspace effects based on object motion
vectors stored during rendering (Object Space Motion Blur), or simply by
keeping a ghost of the previous frame (this is actually an inherent
characteristic of an LCD display, so it is quite amusing that developers
often apply it as an intentional effect as well).
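For the superimposing-frames flavor, the old fixed-function route is the GL
accumulation buffer. A rough sketch (legacy OpenGL; the context must be
created with accumulation bits, e.g. via
SDL_GL_SetAttribute(SDL_GL_ACCUM_RED_SIZE, 16) and friends, and
draw_scene() is a placeholder for your normal frame):

/* Frame-superimposition blur with the legacy accumulation buffer:
   accum = accum * persistence + frame * (1 - persistence), then the
   blend is copied back to the color buffer. */
#include "SDL_opengl.h"

extern void draw_scene(void);  /* hypothetical: draws one normal frame */

void render_with_trails(float persistence)  /* e.g. 0.5f */
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    draw_scene();

    glAccum(GL_MULT, persistence);          /* fade the old accumulation */
    glAccum(GL_ACCUM, 1.0f - persistence);  /* mix in the fresh frame    */
    glAccum(GL_RETURN, 1.0f);               /* write the blend back      */
}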
As far as input latency goes (the hardcore player situation), the higher the
redraw rate the better, and vsync should be off, simply because the games
often behave better at extremely high framerates. This has nothing to do
with the actual visuals - the characteristic will not appear in screenshots
or ingame video capture, so it is hard to quantify objectively.
IMO, this is just bad advice. Redraw rate =/= input update rate. It can be
hard to separate the two if you use polling, which is why buffered input was
invented – so you don’t lose input data that is generated at a higher rate
than your FPS. It certainly isn’t that people are reacting faster due to
higher FPS. Humans don’t generally see > 60 FPS, as Forest mentioned.
It’s super easy to tell who does polling if there is an in-game cursor:
artificially limit the FPS to 5, then move your mouse. If the cursor moves
in a way that is dramatically different for the same amount of motion when
running at >> 5 FPS, then it probably polls.
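In SDL terms the contrast looks like this (a sketch for illustration only;
the calls are real SDL 1.2 API, the function around them is mine):

/* Buffered vs. polled mouse reads, once per frame. The event loop
   sees every intermediate motion sample even at 5 FPS; the single
   snapshot at the bottom keeps only the final position, so the path
   travelled between two polls is gone. */
#include "SDL.h"

void gather_mouse(void)
{
    SDL_Event ev;
    int x, y;

    /* Buffered: replay each queued motion event, in order. */
    while (SDL_PollEvent(&ev)) {
        if (ev.type == SDL_MOUSEMOTION) {
            /* ev.motion.x/y (and xrel/yrel) give the full path,
               one sample at a time */
        }
    }

    /* Polled: one snapshot; everything in between was coalesced. */
    SDL_GetMouseState(&x, &y);
}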
It is common knowledge that hardcore players refuse to use most 60Hz LCD
monitors, but many find a 120Hz LCD monitor to be quite acceptable. Such
120Hz LCD monitors (and a few 60Hz ones) lack “scaler” hardware and thus can
only run at a single native resolution (direct scanout); scaler hardware is
associated with higher input latency (2-3 refreshes of delay), which is
considered unacceptable by hardcore players.
3 cheers for hardware vendors capitalizing on crappy software. When the
software loop caps at 60 FPS due to vsync, make a monitor with a higher
vsync to combat that! Does anyone here think that is an appropriate
solution? People are tricked into thinking more Hz = better games = win.
Again, facepalm, for all parties. You know why I would buy a 120 Hz CRT/LCD?
Quad buffered stereoscopic rendering at 60 Hz per eye + shutter glasses. It
certainly isn’t because I can see so much more rich detail at 120Hz than at
60Hz.
/rant