Page flipping, double and triple buffering

Hi guys, I’m on the usual Quest for Smooth Animation. I know this is a
common topic but I didn’t find any post that tells the whole story. So
I’ll try to do that, please let me know if I got it right.

  1. Create the video surface with SDL_DOUBLEBUF | SDL_HWSURFACE.

  2. If it succeeds ((flags & SDL_HWSURFACE) == SDL_HWSURFACE), you get a
    video memory chunk with enough space for TWO screens. The video card
    displays one of these at a time. So you draw into the offscreen one (the
    one represented by the returned SDL_Surface, right?), and call
    SDL_Flip(), which waits for VSYNC and swaps the visible surface with the
    invisible surface.

  3. Because the back buffer is in video memory, alpha blitting (which I do
    a lot) is slow because it requires reading it back. Therefore, you also
    allocate a third buffer: a surface with the same dimensions and format
    as the screen, but as a software surface. You draw on this third buffer,
    copy the dirty parts to the hardware back buffer, and call SDL_Flip().
    This is triple buffering.

  4. If 1) didn’t succeed, you instead got a software surface (and the
    display memory, which you can’t directly touch). You draw on this
    surface and copy the dirty parts with SDL_UpdateRect(). The copy is
    immediate, with no wait for VSYNC, so you get tearing. The best you can
    do is try to keep the frame rate close to the monitor’s refresh rate.
    (A sketch of this whole setup follows the list.)
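
Here is a minimal sketch of the setup described in the list, assuming SDL 1.2
(the window size and bit depth are arbitrary examples):

```c
#include <stdio.h>
#include "SDL.h"

int main(int argc, char *argv[])
{
    SDL_Surface *screen;

    if (SDL_Init(SDL_INIT_VIDEO) < 0) {
        fprintf(stderr, "SDL_Init failed: %s\n", SDL_GetError());
        return 1;
    }

    /* Ask for hardware page flipping. SDL falls back silently if it
       can't provide it, so check screen->flags for what we actually got. */
    screen = SDL_SetVideoMode(640, 480, 32, SDL_DOUBLEBUF | SDL_HWSURFACE);
    if (!screen) {
        fprintf(stderr, "SDL_SetVideoMode failed: %s\n", SDL_GetError());
        SDL_Quit();
        return 1;
    }

    if ((screen->flags & SDL_HWSURFACE) == SDL_HWSURFACE) {
        /* Cases 2/3: hardware pages. Draw into the offscreen page,
           then flip (which may or may not wait for VSYNC). */
        /* ... draw into 'screen' ... */
        SDL_Flip(screen);
    } else {
        /* Case 4: software shadow surface. Draw, then push dirty areas;
           the copy happens immediately, with no retrace sync. */
        SDL_Rect dirty = { 0, 0, 640, 480 };  /* whole screen, as an example */
        /* ... draw into 'screen' ... */
        SDL_UpdateRects(screen, 1, &dirty);
    }

    SDL_Quit();
    return 0;
}
```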

Am I right so far? I’ll try to sort out my understanding of dirty rect
management in triple buffering now :)

Thanks!
–Gabriel

Gabriel Gambetta wrote:

Hi guys, I’m on the usual Quest for Smooth Animation. I know this is a
common topic but I didn’t find any post that tells the whole story. So
I’ll try to do that, please let me know if I got it right.

  1. Create the video surface with SDL_DOUBLEBUF | SDL_HWSURFACE.

Yes. Note that SDL_DOUBLEBUF automatically and silently sets
SDL_HWSURFACE, so there’s no difference between SDL_DOUBLEBUF |
SDL_HWSURFACE and SDL_DOUBLEBUF alone.
No idea whether this is/should be documented.

  2. If it succeeds ((flags & SDL_HWSURFACE) == SDL_HWSURFACE), you get a
    video memory chunk with enough space for TWO screens. The video card
    displays one of these at a time. So you draw into the offscreen one (the
    one represented by the returned SDL_Surface, right?), and call
    SDL_Flip(),

Yes.

which waits for VSYNC and swaps the visible surface with the
invisible surface.

You don’t get vsync all the time, this is backend-dependent.

  3. Because the back buffer is in video memory, alpha blitting (which I do
    a lot) is slow because it requires reading it back. Therefore, you also
    allocate a third buffer: a surface with the same dimensions and format
    as the screen, but as a software surface. You draw on this third buffer,
    copy the dirty parts to the hardware back buffer, and call SDL_Flip().
    This is triple buffering.

This is not exactly triple buffering. Triple buffering in the “standard”
understanding is when you have 3 pages in video memory. This is what
Doom does.
In this scheme OTOH you have two pages in video memory and one page in
system memory.
David Olofson calls this semi triple buffering, and I think the name is
quite meaningful, actually.
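
For reference, here is a rough sketch of that semi triple buffering scheme in
SDL 1.2 (the helper names are mine, not SDL’s). One subtlety, relevant to the
dirty rect management mentioned above: with two hardware pages, a region
dirtied this frame is also stale on the other page, so real code has to
re-push the previous frame’s rects as well:

```c
#include "SDL.h"

/* Create the system-memory buffer: same size and pixel format as the
   screen, but SDL_SWSURFACE, so alpha blits (read-modify-write) stay
   in fast system RAM. */
SDL_Surface *make_backbuffer(SDL_Surface *screen)
{
    return SDL_CreateRGBSurface(SDL_SWSURFACE,
                                screen->w, screen->h,
                                screen->format->BitsPerPixel,
                                screen->format->Rmask,
                                screen->format->Gmask,
                                screen->format->Bmask,
                                screen->format->Amask);
}

/* Push the dirty parts of the software buffer to the hardware back
   page, then flip. With page flipping each hardware page lags two
   frames behind, so callers should also re-push last frame's rects. */
void present(SDL_Surface *screen, SDL_Surface *back,
             SDL_Rect *dirty, int ndirty)
{
    int i;
    for (i = 0; i < ndirty; ++i) {
        SDL_Rect dst = dirty[i];  /* SDL may clip/rewrite the dest rect */
        SDL_BlitSurface(back, &dirty[i], screen, &dst);
    }
    SDL_Flip(screen);  /* waits for VSYNC only if the driver honours it */
}
```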

As for alpha blitting acceleration, well, this is backend-dependent
(heh. We should release glSDL sometime…)

  4. If 1) didn’t succeed, you instead got a software surface (and the
    display memory, which you can’t directly touch). You draw on this
    surface and copy the dirty parts with SDL_UpdateRect(). The copy is
    immediate, with no wait for VSYNC, so you get tearing. The best you can
    do is try to keep the frame rate close to the monitor’s refresh rate.

Not sure this is a good idea, since you’ll never be at the exact
monitor’s refresh rate (not to mention you don’t always know the refresh
rate…). Then you’ll get the worst tearing there is. I think you’d
better draw as fast as possible to reduce tearing (considering that it’s
impossible to remove it altogether).

Stephane

Stephane Marchesin wrote:

which waits for VSYNC and swaps the visible surface with the
invisible surface.
You don’t get vsync all the time, this is backend-dependent.

Backend as in “dga and directx yes, x11 no”? Or does it also depend on
the video drivers, screen settings, etc?

So it’s possible to get a double-buffered hardware surface but no VSYNC.
In that case, is there any benefit in doing this as opposed to
destroying the surface and just using the other method (SDL_UpdateRects(),
draw as fast as possible)?

Is it even possible to know for sure if you’re getting vsync or not?

Thanks
–Gabriel

On Fri, 2004-10-29 at 08:33, Gabriel Gambetta wrote:

Stephane Marchesin wrote:

which waits for VSYNC and swaps the visible surface with the
invisible surface.
You don’t get vsync all the time, this is backend-dependent.

Backend as in “dga and directx yes, x11 no”? Or does it also depend on
the video drivers, screen settings, etc?

Depends on the video drivers and the screen settings. By default most
drivers turn off vsync. From the programmer’s point of view, you might
just as well forget that vsync exists. Don’t even think about it.

So it’s possible to get a double-buffered hardware surface but no VSYNC.

Yes. Not only possible, but probable.

In that case, is there any benefit in doing this as opposed to
destroying the surface and just using the other method (SDL_UpdateRects(),
draw as fast as possible)?

Double buffering still gives you cleaner updates. I like to throttle the
update rate at around 100 fps just to keep the game from using 100% of
the CPU.

Is it even possible to know for sure if you’re getting vsync or not?

No, not really. You can try looking at the frame rate, and if it is
suspiciously close to a common vsync rate you may be getting vsync. If
it is up over a couple of hundred FPS then you are probably not getting
vsync. But you can’t count on that. Like I said, I like to throttle
around 100 fps. If vsync is enabled, throttling will not kick in. If it
isn’t, then your game doesn’t try to draw at 100 FPS and use 100% of the
CPU.
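
A minimal sketch of that throttle, assuming SDL 1.2 (the 10 ms budget and the
helper name are just illustrations): if SDL_Flip() already blocks on the
retrace, the frame takes longer than the budget and the delay never fires.

```c
#include "SDL.h"

#define TARGET_MS 10  /* ~100 fps; an arbitrary example value */

/* Flip, then sleep off whatever is left of the frame budget.
   If vsync already throttled us, there is nothing left to sleep. */
void throttled_flip(SDL_Surface *screen)
{
    static Uint32 last = 0;
    Uint32 elapsed;

    SDL_Flip(screen);

    elapsed = SDL_GetTicks() - last;
    if (last != 0 && elapsed < TARGET_MS)
        SDL_Delay(TARGET_MS - elapsed);
    last = SDL_GetTicks();
}
```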

Bob Pendleton




On Friday 29 October 2004 16.09, Gabriel Gambetta wrote:

Bob Pendleton wrote:

Depends on the video drivers and the screen settings. By default most
drivers turn off vsync. From the programmer’s point of view, you might
just as well forget that vsync exists. Don’t even think about it.

Ooooooops. I thought VSYNC was the way to have smooth animation.

Double buffering still gives you cleaner updates.

Why? Because of the flip instead of partial SDL_UpdateRect() calls?

So, according to you, the best choice is doing semi-triple buffering if
the hardware supports it, video surface + back buffer if it doesn’t, and
always try to draw at a fixed FPS around 100?

It is the only way with current display technology, but
unfortunately, your average OS + driver setup does not make a high end
animation system. Unless it’s ok that your game looks like crap or
misbehaves horribly on the majority of systems, do not count on
retrace sync to work.

That said, I find it rather annoying when games cannot take advantage
of retrace sync when available. IMHO, the best option is to have the
game work as well as possible with or without retrace sync. Provide
some user options if necessary, but make sure the game works “ok” out
of the box on any supported system.

//David Olofson - Programmer, Composer, Open Source Advocate

.- Audiality -----------------------------------------------.
| Free/Open Source audio engine for games and multimedia. |
| MIDI, modular synthesis, real time effects, scripting,… |
`-----------------------------------> http://audiality.org -’
http://olofson.net
http://www.reologica.se


On Fri, 2004-10-29 at 09:09, Gabriel Gambetta wrote:

Bob Pendleton wrote:

Depends on the video drivers and the screen settings. By default most
drivers turn off vsync. From the programmer’s point of view, you might
just as well forget that vsync exists. Don’t even think about it.

Ooooooops. I thought VSYNC was the way to have smooth animation.

Double buffering still gives you cleaner updates.

Why? Because of the flip instead of partial SDL_UpdateRect() calls?

No, because it is integrated with the driver and the hardware. The
driver and hardware are really designed to do the best job they can.
After all, they make their money by making your game look good!

So, according to you, the best choice is doing semi-triple buffering if
the hardware supports it, video surface + back buffer if it doesn’t, and
always try to draw at a fixed FPS around 100?

I really don’t think I said that at all. I just answered your questions.
I didn’t say to use any kind of triple buffering. I like triple
buffering, but it has been at least 15 years since I saw a graphics
system where I could explicitly use triple buffering. Modern graphics
cards/drivers can use triple buffering without making you deal with it
and without letting you know that it is even being used.

I know I didn’t tell you to try to draw at a fixed FPS of 100 because I
know that it isn’t possible to do that. There is no technique that will
give you a fixed frame rate on all modern PCs. So, as a commercial game
programmer it is a waste of your time to try. Adapt to the chaos.

The simple fact is that you, as a programmer, can’t know enough about
the underlying graphics system to make these decisions. Every computer
is different. And, new different systems are coming out every day. By
worrying about this stuff you are just wasting your time.

In my opinion your best bet is to use OpenGL and throttle to around 100
FPS. Of course, when throttling you have to worry about problems with
the clock step size. SDL claims a step size of 10 milliseconds, so a
call to SDL_Delay(10) will actually, on average, wait 5 milliseconds and
a call to SDL_Delay(20) will, on average, wait 15 milliseconds. But, of
course, like I said, you can’t count on that.
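
One common workaround for that coarse timer granularity (my own illustration,
not something from this thread) is to sleep for most of the wait and
busy-wait the rest:

```c
#include "SDL.h"

/* Wait until SDL_GetTicks() reaches target_ms. SDL_Delay() may be
   rounded to the scheduler's ~10 ms tick, so leave slack and spin
   for the remainder; accurate, but burns CPU while spinning. */
void wait_until(Uint32 target_ms)
{
    Uint32 now = SDL_GetTicks();

    if (target_ms > now + 15)          /* enough room to sleep safely */
        SDL_Delay(target_ms - now - 15);

    while (SDL_GetTicks() < target_ms)
        ;                              /* busy-wait the last few ms */
}
```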

I cover all these problems in articles linked from:

A lot? Most? Many game programmers, including me, get caught up in this
mental mess at some point. You just have to accept the fact that you
really don’t have any control so just do the simplest thing possible and
count on the graphics hardware and driver to do the best they can. That
is what they are there for.

Of course, if you are not running on a modern PC then everything I said
could be wrong.

Bob Pendleton


On Fri, 2004-10-29 at 09:45, David Olofson wrote:

Bob Pendleton wrote:

Depends on the video drivers and the screen settings. By default
most drivers turn off vsync. From the programmer’s point of view,
you might just as well forget that vsync exists. Don’t even
think about it.

Ooooooops. I thought VSYNC was the way to have smooth animation.

It is the only way with current display technology, but
unfortunately, your average OS + driver setup does not make a high end
animation system. Unless it’s ok that your game looks like crap or
misbehaves horribly on the majority of systems, do not count on
retrace sync to work.

That said, I find it rather annoying when games cannot take advantage
of retrace sync when available.

I find it annoying that it isn’t available all the time. But, it isn’t.
All the techniques you have to use to make the game look good without
vsync work when vsync is there, so why even worry about vsync? Forget it
exists and get on with writing code.

IMHO, the best option is to have the
game work as well as possible with or without retrace sync. Provide
some user options if necessary, but make sure the game works “ok” out
of the box on any supported system.

Yep, no choice but to make it work without vsync. And since you can’t
count on vsync, why waste time even giving the customer the option of
worrying about it? Most likely the customer doesn’t even know what vsync
is (a boy band, maybe?) and doesn’t care. In fact, they would turn it
off if they knew about it, because with it on their favorite video game
only draws 80 fps and they can’t brag about that, but they can brag
about their hot PC drawing 357.8 fps…

Give it up. Vsync is dead.

Bob Pendleton


Frame rate throttling doesn’t mix with retrace sync, but it’s still a
useful feature for laptops and other systems, to avoid wasting all
available power for no significant return. The easy way out is to
just not use frame rate throttling, of course. Still works fine on
laptops, although some users will be unhappy with the excessive power
consumption.

Other than that, I have to agree with you; it’s a waste of time to
worry about retrace sync. Properly designed code will work fine
without it, and better with it.

//David Olofson - Programmer, Composer, Open Source Advocate

.- Audiality -----------------------------------------------.
| Free/Open Source audio engine for games and multimedia. |
| MIDI, modular synthesis, real time effects, scripting,… |
`-----------------------------------> http://audiality.org -’
http://olofson.net
http://www.reologica.se


On Fri, Oct 29, 2004 at 08:46:45AM -0500, Bob Pendleton wrote:

Depends on the video drivers and the screen settings. By default most
drivers turn off vsync. From the programmer’s point of view, you might
just as well forget that vsync exists. Don’t even think about it.

“Close your eyes and ignore the problem” is rarely a useful option. If
you want smooth animation, you need to think about vsync, to make sure
that you’re doing everything reasonable to enable it.

Maybe 2d APIs can’t handle vsync, but if that’s the case, it’s just
another reason to use OpenGL. In OpenGL, you have wglSwapIntervalEXT
in Windows and kCGLCPSwapInterval in OSX, at least. X gives no API,
unfortunately–a pretty sore embarrassment for OSS. (nVidia’s X11
drivers force you to set an environment variable, if you can believe
it, but it’s better than nothing.)
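
Concretely, those knobs look roughly like this. The names are real
(WGL_EXT_swap_control, CGL’s kCGLCPSwapInterval, nVidia’s __GL_SYNC_TO_VBLANK
variable), but availability varies and the X11 path is nVidia-specific, so
treat this as an illustrative sketch:

```c
#ifdef _WIN32
#include <windows.h>

typedef BOOL (WINAPI *SWAPINTERVALPROC)(int);

void enable_vsync(void)
{
    /* Extension entry point; present when WGL_EXT_swap_control is
       advertised, and a current GL context is required. */
    SWAPINTERVALPROC wglSwapIntervalEXT =
        (SWAPINTERVALPROC)wglGetProcAddress("wglSwapIntervalEXT");
    if (wglSwapIntervalEXT)
        wglSwapIntervalEXT(1);          /* at most one swap per retrace */
}

#elif defined(__APPLE__)
#include <OpenGL/OpenGL.h>

void enable_vsync(void)
{
    long interval = 1;                  /* older CGL headers use long */
    CGLSetParameter(CGLGetCurrentContext(), kCGLCPSwapInterval, &interval);
}

#else
#include <stdlib.h>

void enable_vsync(void)
{
    /* X11 has no API for this; nVidia's driver reads an environment
       variable, which must be set before the GL context is created. */
    setenv("__GL_SYNC_TO_VBLANK", "1", 1);
}
#endif
```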

No, not really. You can try looking at the frame rate, and if it is
suspiciously close to a common vsync rate you may be getting vsync. If
it is up over a couple of hundred FPS then you are probably not getting
vsync. But you can’t count on that. Like I said, I like to throttle
around 100 fps. If vsync is enabled, throttling will not kick in. If it
isn’t, then your game doesn’t try to draw at 100 FPS and use 100% of the
CPU.

This is risky. If your monitor is at 100Hz and you’re throttling to 100FPS,
it’s easily possible that you’ll run exactly in step with vsync but out of
phase, causing consistent tearing, or (more likely) run at nearly the same
rate but slightly out of step, causing intermittent tearing.

How are you throttling? Trying to limit it by sleeping would involve
lots of sub-10ms sleeps, which tend to be rounded up to 10ms on Linux
2.4.


Glenn Maynard

On October 29, 2004 19:29, Glenn Maynard wrote:

(snip)
Maybe 2d APIs can’t handle vsync, but if that’s the case, it’s just
another reason to use OpenGL. In OpenGL, you have wglSwapIntervalEXT
in Windows and kCGLCPSwapInterval in OSX, at least. X gives no API,
unfortunately–a pretty sore embarrassment for OSS.

I’m fairly new at OpenGL programming so I may be wrong about this, but doesn’t
the GLX_SGI_video_sync extension provide the corresponding vsync functions
under X?
http://oss.sgi.com/projects/ogl-sample/registry/SGI/video_sync.txt
nVidia’s GLX supports it, and some DRI drivers too I think.
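
If the extension is there, usage per the spec linked above is roughly this
(strictly, the entry points should be looked up with glXGetProcAddressARB
after checking the GLX extension string):

```c
#include <GL/glx.h>

/* Entry points per the GLX_SGI_video_sync spec; both return 0 on
   success. A current, direct GLX context is required. */
extern int glXGetVideoSyncSGI(unsigned int *count);
extern int glXWaitVideoSyncSGI(int divisor, int remainder,
                               unsigned int *count);

/* Block until the next vertical retrace. */
void wait_for_retrace(void)
{
    unsigned int count;

    if (glXGetVideoSyncSGI(&count) == 0)
        glXWaitVideoSyncSGI(2, (int)((count + 1) % 2), &count);
}
```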

(nVidia’s X11
drivers force you to set an environment variable, if you can believe
it, but it’s better than nothing.)

What exactly do you mean by “better than nothing”? I think it’s great to be
able to force vsync on every OpenGL app. It’s one of those things that make
nvidia’s drivers so great. And you don’t need to mess with environment
variables anymore nowadays, as the nvidia-settings utility can set that too
(as well as on video overlays).

On Sat, Oct 30, 2004 at 08:44:13PM -0400, Simon Roby wrote:

I’m fairly new at OpenGL programming so I may be wrong about this, but doesn’t
the GLX_SGI_video_sync extension provide the corresponding vsync functions
under X?
http://oss.sgi.com/projects/ogl-sample/registry/SGI/video_sync.txt
nVidia’s GLX supports it, and some DRI drivers too I think.

I’ll give it a try, though having a separate sleep, instead of having
the flip happen on the vsync interrupt, makes me a little nervous about
scheduler timing.

What exactly do you mean by “better than nothing”? I think it’s great to be
able to force vsync on every OpenGL app. It’s one of those things that make
nvidia’s drivers so great. And you don’t need to mess with environment
variables anymore nowadays, as the nvidia-settings utility can set that too
(as well as on video overlays).

I mean that an environment variable is not a useful way for applications
to configure things, and that’s what I’m talking about here.


Glenn Maynard