Force blocking wait for vsync in OpenGL?

Hiya,

Is there a way to block until the next vsync when the hardware supports it? It
appears that it’s possible to force OpenGL to synchronise its refreshes with
vsyncs, but not the code itself… I tried calling glFlush(), but it just
returns instantaneously if there’s nothing queued.

I find it hard to believe that synchronising animation with the display in this
way is impossible.

Heya,

check this out:

http://olofson.net/examples.html

Go there and download the example “pig-1.0.tar.gz”. It shows how to do fixed-rate game logic (:>
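
In case you don't want to dig through the whole tarball: the core trick is a
fixed timestep. You accumulate elapsed time and run the game logic in
fixed-size steps, rendering once per pass, so the logic rate stays constant no
matter how fast the display refreshes or how long rendering takes. A rough
sketch (not the actual pig code; update_logic() and render_frame() are
placeholder names):

#include "SDL.h"

#define LOGIC_DT_MS 10                  /* fixed logic step: 100 Hz */

static void update_logic(void) { /* advance the game state by one step */ }
static void render_frame(void) { /* draw the current state */ }

void main_loop(void)
{
    Uint32 last = SDL_GetTicks();
    Uint32 accum = 0;

    for (;;) {
        Uint32 now = SDL_GetTicks();
        accum += now - last;
        last = now;

        /* Run as many fixed logic steps as the elapsed time calls for. */
        while (accum >= LOGIC_DT_MS) {
            update_logic();
            accum -= LOGIC_DT_MS;
        }

        /* Render once per pass; the frame rate floats free of the logic rate. */
        render_frame();
        SDL_GL_SwapBuffers();
    }
}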


The only typical way to sync the refresh with vsync without blocking your code
is triple-buffering. Unfortunately, almost no OpenGL implementations support it
(neither Win32 nor X do).

I used to be annoyed by that. However, I’ve found that double-buffering
is preferable for many applications. Triple-buffering means your rendering
thread will render as fast as it can, using all available CPU.

Some schedulers have internal concepts like “processes that continually use
all CPU they’re given are CPU hogs; give them their CPU but allow anything
else to preempt them, and schedule them with a coarse granularity”. This
improves system responsiveness when you have, say, several compilers
running: your shell or GUI will preempt them. It also improves performance:
if you have four copies of gcc running at once, it would be silly to preempt
them in a cycle every few milliseconds; that would just cause lots of
context switches and cache thrashing.

However, you don’t want this type of scheduling for a game. By waiting for
vsync, you’re helping the scheduler: you give up the CPU at an appropriate
time, and doing so often (once per vsync) means you give it up in small
pieces, so the scheduler won’t take away a lot of time at once (which would
cause you to miss frames).

At least, that’s the theory. My experience agrees: for example, I see more
game skips with vsync off in Windows (real frame drops, with gameplay skipping
for 100ms or more, not just the tearing you’d expect).
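
Concretely, the vsync-friendly path is just to request a swap interval of 1
and let the buffer swap itself be the blocking wait. A sketch with SDL and
OpenGL (assumptions: SDL_GL_SWAP_CONTROL needs a reasonably recent SDL 1.2,
older setups have to call the wglSwapIntervalEXT/glXSwapIntervalSGI extensions
directly, and the glFinish() is only there because some drivers return from
the swap immediately and block on a later GL call instead):

#include "SDL.h"
#include <GL/gl.h>

int init_video(void)
{
    if (SDL_Init(SDL_INIT_VIDEO) < 0)
        return -1;

    SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
    SDL_GL_SetAttribute(SDL_GL_SWAP_CONTROL, 1);  /* swap interval 1: wait for vsync */

    if (SDL_SetVideoMode(640, 480, 0, SDL_OPENGL) == NULL)
        return -1;
    return 0;
}

void draw_frame(void)
{
    /* ... render the scene ... */

    SDL_GL_SwapBuffers();
    glFinish();   /* force the wait to happen here, at a predictable point */
}

With that in place the main loop gives up the CPU once per refresh without any
explicit sleeping.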

It looks like what you’re looking for is something like this:

render();
while (!AtVsync())
    do_a_little_work();
flip();

You can’t do this in any OpenGL implementation I know of. It’d be a poor
approach, anyway: it means the rendering thread would be busy-looping
(checking and doing a little work), causing the same problems as above.

If you want to recover the CPU time you’re spending waiting for vsync,
run stuff in a thread. (I run music and video decoding in a thread,
for example.)
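
A minimal sketch of that split, using SDL's thread API (the worker body is
just a placeholder): the main thread spends most of its time asleep in the
vsynced swap, and the worker soaks up the CPU that frees up.

#include "SDL.h"
#include "SDL_thread.h"

static volatile int running = 1;

/* Background worker: decode audio/video, stream data, etc., while the
   main thread is blocked waiting for vsync. */
static int decoder_thread(void *unused)
{
    (void)unused;
    while (running) {
        /* do_some_decoding();  placeholder for the real work */
        SDL_Delay(1);           /* be polite if there's nothing to do */
    }
    return 0;
}

void run(void)
{
    SDL_Thread *worker = SDL_CreateThread(decoder_thread, NULL);

    while (running) {
        SDL_Event ev;
        while (SDL_PollEvent(&ev))
            if (ev.type == SDL_QUIT)
                running = 0;

        /* render(); */
        SDL_GL_SwapBuffers();   /* typically blocks here until vsync */
    }

    SDL_WaitThread(worker, NULL);
}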


Glenn Maynard