SDL Graphics Tutorials Run SLOW for Modern PC

[…]

I’m using it for a
custom GUI for a PIII 733 MHz machine where the application
requires smooth scrolling text over a background image.

Unless you’re using a very low resolution, that kind of stuff requires
quite some bandwidth. The way modern video cards and computers are
designed, the only way to do that properly is through h/w
acceleration, and the only remotely reliable, portable and well
performing solution at this point is OpenGL.

Also note that if you want it really smooth you need

  1. double buffering with hardware page flipping

  2. retrace sync

  3. sub-pixel accurate rendering

1) and 2) are totally driver and system dependent, so unless you’re
working on a turnkey solution (complete hardware + software
solution), you can’t rely on getting that. They are, however, an
absolute requirement for perfectly smooth, tearing free scrolling.

3) is needed unless you can accept scrolling speeds that are even
multiples of whatever refresh rate you may get. (Hardware and
configuration dependent.) There will always be some “wobbling” unless
you scroll the exact same number of pixels every frame at a constant
frame rate - so if the refresh rate doesn’t fit the desired speed,
you’ll have to deal with fractional pixels.
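
For illustration only (this is not from any of the examples mentioned
below; the numbers and names are made up), the bookkeeping behind 3)
can be as simple as accumulating a floating point scroll position:

    /* Sketch: if the desired speed doesn't divide evenly into the
     * refresh rate, each frame advances by a fraction of a pixel, and
     * something has to render that fraction. */
    static double scroll_pos = 0.0;       /* scroll position in pixels */
    static const double speed = 90.0;     /* desired speed, pixels/second */
    static const double refresh = 85.0;   /* actual refresh rate, frames/second */

    static void advance_one_frame(void)
    {
        int whole;
        double frac;

        scroll_pos += speed / refresh;    /* ~1.059 pixels per frame */
        whole = (int)scroll_pos;          /* integer part: plain blits */
        frac = scroll_pos - whole;        /* fractional part: needs sub-pixel
                                             rendering, e.g. blending two
                                             copies one pixel apart */
        (void)whole;
        (void)frac;
    }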

Have a look at “smoothscroll” here:
http://olofson.net/examples.html

//David Olofson - Programmer, Composer, Open Source Advocate

.- Audiality -----------------------------------------------.
| Free/Open Source audio engine for games and multimedia. |
| MIDI, modular synthesis, real time effects, scripting,… |
`-----------------------------------> http://audiality.org -’
--- http://olofson.net --- http://www.reologica.se ---

On Tuesday 21 December 2004 04.25, G B wrote:

[…]

I’m using it for a
custom GUI for a PIII 733 MHz machine where the application
requires smooth scrolling text over a background image.

Unless you’re using a very low resolution, that kind of stuff requires
quite some bandwidth. The way modern video cards and computers are
designed, the only way to do that properly is through h/w
acceleration, and the only remotely reliable, portable and well
performing solution at this point is OpenGL.

I tried to bring this up a few weeks ago but it didn’t seem to interest anyone…

Smooth scrolling text over a background image using OpenGL is just not practical on a PIII 733 at resolutions any higher than 640x480, assuming that the background image and text are not scaled up.

But software SDL will do the job just fine with next to no CPU usage, supposing further that the background image is not changing (or moving).

It’s just barely practical to do this with OpenGL on my Athlon 1600 w/ Radeon 7500 at 1280x1024 and under X11 I see significant CPU usage.

One problem with OpenGL that matters a great deal for 2D work is that if you use double buffering, you have to redraw the whole frame every time, because the back buffer is undefined after a swap.

Most of the time on Linux under X11 it’ll be exactly the same as the front buffer (a copy), but with some Windows drivers it’ll be the previous front buffer (a swap), and I have seen cases on new cards where it is garbage (literally undefined).

There are extensions that supposedly allow you to specify what behaviour you’d like, but obviously you can’t rely on them being available.

So to guarantee that you don’t induce epilepsy in your end users, you have to redraw each frame from scratch.

You can be smart and render to textures so that most of the redraw operation is just a handful of textured quads, but that requires a lot of bandwidth that only a decent new machine can manage (if the textures aren’t scaled up).

You also need a fair amount of texture memory, and you can’t rely on it all being available to you so you may yet pull that hi-rez texture across the bus each and every frame.
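
For reference, the “handful of textured quads” approach is roughly the
following (OpenGL 1.x immediate mode; it assumes an orthographic
projection in pixel coordinates and that the background and the
pre-rendered text already live in the two textures passed in; all
names are illustrative):

    #include <GL/gl.h>

    void draw_frame(GLuint bg_tex, GLuint text_tex,
                    float sw, float sh,                      /* screen size */
                    float tx, float ty, float tw, float th)  /* text quad */
    {
        glEnable(GL_TEXTURE_2D);

        /* The background quad covers the whole frame, so the undefined
         * back buffer contents never show through. */
        glBindTexture(GL_TEXTURE_2D, bg_tex);
        glBegin(GL_QUADS);
        glTexCoord2f(0, 0); glVertex2f(0, 0);
        glTexCoord2f(1, 0); glVertex2f(sw, 0);
        glTexCoord2f(1, 1); glVertex2f(sw, sh);
        glTexCoord2f(0, 1); glVertex2f(0, sh);
        glEnd();

        /* The text quad is alpha blended on top, at a possibly
         * fractional (sub-pixel) position. */
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        glBindTexture(GL_TEXTURE_2D, text_tex);
        glBegin(GL_QUADS);
        glTexCoord2f(0, 0); glVertex2f(tx, ty);
        glTexCoord2f(1, 0); glVertex2f(tx + tw, ty);
        glTexCoord2f(1, 1); glVertex2f(tx + tw, ty + th);
        glTexCoord2f(0, 1); glVertex2f(tx, ty + th);
        glEnd();
        glDisable(GL_BLEND);
    }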

The fastest method for this particular application on a slower machine is software SDL, because you can rely on the back buffer state; since your scrolling text is presumably only changing a small part of the screen, you save a massive amount of bandwidth by only sending the changes.
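
A minimal sketch of “only sending the changes” with the SDL 1.2 API
(the surfaces and the helper name are made up; it assumes the
background is screen sized and the text surface uses a colorkey or
per-surface alpha):

    #include "SDL.h"

    static void redraw_text_band(SDL_Surface *screen, SDL_Surface *background,
                                 SDL_Surface *text, Sint16 text_x, Sint16 text_y)
    {
        SDL_Rect band, src, dst, where;

        band.x = 0;
        band.y = text_y;
        band.w = screen->w;
        band.h = text->h;

        /* Restore the strip the text moves across from the background... */
        src = band;
        dst = band;
        SDL_BlitSurface(background, &src, screen, &dst);

        /* ...draw the text at its new position... */
        where.x = text_x;
        where.y = text_y;
        where.w = where.h = 0;
        SDL_BlitSurface(text, NULL, screen, &where);

        /* ...and push only that strip to the display. */
        SDL_UpdateRects(screen, 1, &band);
    }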

David also made a critical point: to avoid wobble, you must scroll your text at speeds that are whole multiples of the refresh rate (i.e. a whole number of pixels per frame).

But if your background is changing, then by all means everybody else is right: OpenGL is the way to go.

A number of people have said that SDL will get an OpenGL backend soon, so would one of the developers please explain how they get around this?

i.e. how are the current simple semantics of SDL double buffering maintained under OpenGL when that spec says “the backbuffer is undefined after a swap” ?

Scott

From: David Olofson <david at olofson.net>

Date: Tue, 21 Dec 2004 15:44:05 +0100
On Tuesday 21 December 2004 04.25, G B wrote:

Scott Cooper wrote:

[…]

A number of people have said that SDL will get an OpenGL backend soon, so would one of the developers please explain how they get around this?

i.e. how are the current simple semantics of SDL double buffering maintained under OpenGL when that spec says “the backbuffer is undefined after a swap” ?

Where (as in, in which official SDL documentation) is such a double
buffering semantics defined ?

Stephane

From: David Olofson

Date: Tue, 21 Dec 2004 15:44:05 +0100
On Tuesday 21 December 2004 04.25, G B wrote:

i.e. how are the current simple semantics of SDL double buffering maintained under OpenGL when that spec says “the backbuffer is undefined after a swap” ?

Where (as in, in which official SDL documentation) is such a double
buffering semantics defined ?

Stephane

The man page for SDL_Flip says:

"On hardware that supports double-buffering, this function sets up a flip and returns. The hardware will wait for vertical retrace, and then swap video buffers before the next video surface blit or lock will return. On hardware that doesn't support double-buffering, this is equivalent to calling SDL_UpdateRect(screen, 0, 0, 0, 0)

The SDL_DOUBLEBUF flag must have been passed to SDL_SetVideoMode, when setting the video mode for this function to perform hardware flipping."

Because of the words “swap” and “flip” and the lack of a caveat, I have always assumed this means “if screen->flags & SDL_DOUBLEBUF then the front and back buffers are exchanged on SDL_Flip”.
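
In other words, the usage that reading suggests is just this (a
minimal sketch; the mode, flags and frame count are arbitrary):

    #include "SDL.h"

    int main(int argc, char *argv[])
    {
        SDL_Surface *screen;
        int i;

        (void)argc;
        (void)argv;
        if (SDL_Init(SDL_INIT_VIDEO) < 0)
            return 1;

        /* Ask for a hardware, double buffered display surface. SDL may
         * not honour the request, so real code should check screen->flags. */
        screen = SDL_SetVideoMode(640, 480, 0, SDL_HWSURFACE | SDL_DOUBLEBUF);
        if (!screen)
            return 1;

        for (i = 0; i < 300; ++i) {
            /* ... draw the frame into 'screen' (the current back buffer) ... */
            SDL_Flip(screen);   /* flip or copy, depending on the backend */
        }

        SDL_Quit();
        return 0;
    }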

David Olofson’s pig is often used here as a good example of how to do things, and it makes clear use of this - e.g. he talks about maintaining dirty rectangles per frame in the double buffer case.

On the other hand, the man page of SDL_GL_SwapBuffers says:

“Swap the OpenGL buffers, if double-buffering is supported.”

Which is misleading since the swap (e.g. on X11) is done by glXSwapBuffers, whose man page says:

  "glXSwapBuffers promotes the contents of the back buffer of drawable  to  become  the  contents of the front buffer of drawable.  The contents of the  back  buffer  then  become undefined.   The  update  typically takes place during the vertical retrace of the monitor, rather  than  immediately after glXSwapBuffers is called."

If the semantics of SDL_Flip were that the back buffer is undefined after a swap, then nobody would bother with dirty rectangles on double buffered displays since the only option is to redraw the whole frame…

Scott

Date: Wed, 22 Dec 2004 13:24:22 +0100
From: Stephane Marchesin <stephane.marchesin at wanadoo.fr>

Scott Cooper wrote:

The man page for SDL_Flip says:

"On hardware that supports double-buffering, this function sets up a flip and returns. The hardware will wait for vertical retrace, and then swap video buffers before the next video surface blit or lock will return. On hardware that doesn’t support double-buffering, this is equivalent to calling SDL_UpdateRect(screen, 0, 0, 0, 0)

The SDL_DOUBLEBUF flag must have been passed to SDL_SetVideoMode, when setting the video mode for this function to perform hardware flipping."

Because of the words “swap” and “flip” and the lack of a caveat, I have always assumed this means “if screen->flags & SDL_DOUBLEBUF then the front and back buffers are exchanged on SDL_Flip”.

Funny you say that, because the classic dirty rects technique relies on
the exact opposite behaviour: have the back surface keep the
information from the previous frame, which means double buffering is
achieved using a copy and not a flip.

Anyway AFAIK, the way doublebuffering is achieved is not specified in
the documentation. Some backends will do a copy (directx, directfb…),
some others will do a real flip (fbcon, svga, dreamcast, dga…). If you
want to keep your code portable, you know what to do…

Now, maybe the semantics weren’t intended to be that way (I don’t know).
But looking at the code, that’s how it behaves.

David Olofson’s pig is often used here as a good example of how to do things, and it makes clear use of this - e.g. he talks about maintaining dirty rectangles per frame in the double buffer case.

I haven’t looked closely at the pig code, so I can’t say. But I’ve seen
other apps that assume a copy-style double buffering scheme which
usually results in some kind of blinking or "alternating frames"
situation. If you’ve ever used SDL with the framebuffer backend, you’ve
probably seen this (that’s what got me digging into this in the first
place).

On the other hand, the man page of SDL_GL_SwapBuffers says:

“Swap the OpenGL buffers, if double-buffering is supported.”

Which is misleading since the swap (e.g. on X11) is done by glXSwapBuffers, whose man page says:

 "glXSwapBuffers promotes the contents of the back buffer of drawable  to  become  the  contents of the front buffer of drawable.  The contents of the  back  buffer  then  become undefined.   The  update  typically takes place during the vertical retrace of the monitor, rather  than  immediately after glXSwapBuffers is called."

Well, I think that is less problematic with OpenGL, since the behaviour
was specified by the OpenGL standard.

If the semantics of SDL_Flip were that the back buffer is undefined after a swap, then nobody would bother with dirty rectangles on double buffered displays since the only option is to redraw the whole frame…

The behaviour is never undefined. It’s one of:

  • the back surface holds the n-2 frame (flip-style double buffering)
  • the back surface holds the n-1 frame (copy-style double buffering or
    shadow surface)

It’s just that you have no way to know in which case you are.
Dirty rects techniques can deal with both situations (just use dirty
rects that are merged from the two previous frames).
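
A minimal sketch of that merging idea (the fixed-size lists and the
names are illustrative only): when you can’t tell whether SDL_Flip()
copies or flips, treat everything dirtied in either of the two
previous frames as needing to be restored before drawing the next
one.

    #include "SDL.h"

    #define MAX_RECTS 64

    typedef struct {
        SDL_Rect rects[MAX_RECTS];
        int      count;
    } RectList;

    static RectList history[2];   /* rects dirtied in frames n-1 and n-2 */

    /* Collect the union of both lists; these are the areas to repaint. */
    static int rects_to_restore(SDL_Rect *out, int max)
    {
        int i, j, n = 0;
        for (i = 0; i < 2; ++i)
            for (j = 0; j < history[i].count && n < max; ++j)
                out[n++] = history[i].rects[j];
        return n;
    }

    /* After SDL_Flip(), frame n becomes n-1 and n-1 becomes n-2. */
    static void rotate_history(const RectList *current)
    {
        history[1] = history[0];
        history[0] = *current;
    }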

That said, all this is not documented; it’s just how things behave in
practice, from looking at the code. So I’m not sure whether you should
rely on this or not.

Stephane

[…]

David Olofson’s pig is often used here as a good example of how to
do things, and it makes clear use of this - e.g. he talks about
maintaining dirty rectangles per frame in the double buffer case.

I haven’t looked closely at the pig code, so I can’t say. But I’ve
seen other apps that assume a copy-style double buffering scheme
which usually results in some kind of blinking or “alternating
frames” situation. If you’ve ever used SDL with the framebuffer
backend, you’ve probably seen this (that’s what got me digging into
this in the first place).

Fixed Rate Pig assumes that there are two buffers and page flipping;
that’s why there are two sets of dirtyrects. That still works with a
copying implementation; it just restores a few pixels here and there
that wouldn’t have to be touched if we could tell that the backend
uses copying “flips”.

Now, it does not work with backends that somehow put actual garbage
in the buffer. (Haven’t seen one yet, but some say there are drivers
that do this… Maybe some copy-flip implementations that do in-place
conversions do this? Can’t think of any other reason to actually ruin
buffers when flipping.)

Nor does it work with triple buffering + page flipping. One would need
three sets of dirtyrects for that.
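
That bookkeeping might look something like this (a sketch only, not
code from Pig; the background surface and the restore helper are
assumptions for illustration):

    #include "SDL.h"

    #define MAX_PAGES 3   /* 2 for double buffering, 3 for triple */
    #define MAX_RECTS 64

    static SDL_Rect     dirty[MAX_PAGES][MAX_RECTS];
    static int          ndirty[MAX_PAGES];
    static int          page = 0;
    static int          npages = 2;
    static SDL_Surface *background;   /* assumed screen-sized backdrop */

    static void restore_rect(SDL_Surface *screen, SDL_Rect *r)
    {
        SDL_Rect src = *r, dst = *r;
        SDL_BlitSurface(background, &src, screen, &dst);
    }

    static void begin_frame(SDL_Surface *screen)
    {
        int i;
        /* Repair what this page still shows from the last time it was
         * the back buffer; rendering code then records new rects with
         * dirty[page][ndirty[page]++] = ...; */
        for (i = 0; i < ndirty[page]; ++i)
            restore_rect(screen, &dirty[page][i]);
        ndirty[page] = 0;
    }

    static void end_frame(SDL_Surface *screen)
    {
        SDL_Flip(screen);
        page = (page + 1) % npages;
    }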

[…]

If the semantics of SDL_Flip were that the back buffer is
undefined after a swap, then nobody would bother with dirty
rectangles on double buffered displays since the only option is
to redraw the whole frame…

The behaviour is never undefined. It’s one of:

  • the back surface holds the n-2 frame (flip-style double
    buffering)
  • the back surface holds the n-1 frame (copy-style
    double buffering or shadow surface)

…or, at least in theory:

  • the back surface contains garbage left from some conversions done
    before the actual flip. (Flip or copy style, dealing with a pixel
    format that is unsupported by the RAMDAC or whatever.)

It’s just that you have no way to know in which case you are.
Dirty rects techniques can deal with both situations (just use
dirty rects that are merged from the two previous frames).

Yeah, that’s what Pig does… Doesn’t make much of a difference with
still backgrounds and a few sprites moving at reasonable speeds.

That said, all this is not documented; it’s just how things behave
in practice, from looking at the code. So I’m not sure whether you
should rely on this or not.

Is there actually a guarantee that buffers are not (ab)used for
in-place conversions and stuff? I remember seeing “undefined” in most
graphics API docs - and it could really mean undefined in some cases;
not just that you’ll find an old page of undefined age.

//David Olofson - Programmer, Composer, Open Source Advocate

.- Audiality -----------------------------------------------.
| Free/Open Source audio engine for games and multimedia. |
| MIDI, modular synthesis, real time effects, scripting,… |
`-----------------------------------> http://audiality.org -’
--- http://olofson.net --- http://www.reologica.se ---

On Thursday 23 December 2004 01.39, Stephane Marchesin wrote:

Scott Cooper wrote:

On the other hand, the man page of SDL_GL_SwapBuffers says:

“Swap the OpenGL buffers, if double-buffering is supported.”

Which is misleading since the swap (e.g. on X11) is done by
glXSwapBuffers, whose man page says:

 "glXSwapBuffers promotes the contents of the back buffer of 

drawable to become the contents of the front buffer of drawable.
The contents of the back buffer then become undefined. The
update typically takes place during the vertical retrace of the
monitor, rather than immediately after glXSwapBuffers is called."

Well, I think that is less problematic with OpenGL, since the
behaviour was specified by the OpenGL standard.

Well, for OpenGL, you’re probably doing a glClear on every frame that
you update anyway… so it’s not as big of a deal.

-bob

On Dec 22, 2004, at 7:39 PM, Stephane Marchesin wrote:

On the contrary, that would just be a waste of bandwidth, since you’ll
normally redraw the whole screen anyway (that’s why map bugs in most
3D games generate HOM, i.e. hall-of-mirrors smearing, and not black
holes) - but that has the same effect, of course; previous contents
don’t matter.

//David Olofson - Programmer, Composer, Open Source Advocate

.- Audiality -----------------------------------------------.
| Free/Open Source audio engine for games and multimedia. |
| MIDI, modular synthesis, real time effects, scripting,… |
`-----------------------------------> http://audiality.org -’
--- http://olofson.net --- http://www.reologica.se ---

On Thursday 23 December 2004 02.17, Bob Ippolito wrote:

[…]

Well, for OpenGL, you’re probably doing a glClear on every frame that
you update anyway… so it’s not as big of a deal.

That’s true for 3D work, but not for a lot of 2D work and especially GUIs where much of the screen remains static between frames (so it seems sensible to avoid redraw).

The point I wanted to make is that you can’t just switch your 2D app to OpenGL and expect a vast speed increase; you might even get a major slowdown, simply because the back buffer is undefined after a swap.

I think that’s important with all this talk of an OpenGL backend for SDL…

But what Stephane says is a bit of a concern: I really had assumed that if screen->flags & SDL_DOUBLEBUF, then there were 2 buffers and they would be swapped, not copied, on SDL_Flip.

David/Stephane: I think dirty rects won’t work unless you know for sure whether you’ve got a copy or a swap if you are alpha blending to that surface.

i.e. if your surface was really copied and so you redraw more than required, you might blend it twice and achieve a different result.
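
The effect is easy to see with toy numbers for the usual “over”
operator, out = src*alpha + dst*(1 - alpha); the little test program
below is purely illustrative:

    #include <stdio.h>

    static double blend_over(double dst, double src, double alpha)
    {
        return src * alpha + dst * (1.0 - alpha);
    }

    int main(void)
    {
        double background = 100.0, sprite = 200.0, alpha = 0.5;
        double once  = blend_over(background, sprite, alpha);
        double twice = blend_over(once, sprite, alpha);

        printf("once: %g  twice: %g\n", once, twice);  /* once: 150  twice: 175 */
        return 0;
    }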

As it happens, Pig uses a software back buffer to do its blends when it detects a hardware display surface. Since you need a hardware surface for double buffering (?), that implies that Pig is always using a software back buffer for blends whenever there’s double buffering.

Since the software back buffer is just copied straight to the hardware surface without blending, this technique does work even if the buffer swap is actually a copy.

But with OpenGL, you get “proper” blending and no need to use a backing surface (which you really have to go out of your way to get anyway), so you have to make sure you don’t blend an area twice: you do need to know the state of the back buffer.

I have seen a number of OpenGL apps take the following approach:

The user specifies on the command line “my back buffer is copied” or “my back buffer is swapped” or the app will assume it’s undefined and redraw each frame.

Terrible kludge…any idea how the OpenGL backend for SDL will handle this?

Scott

From: Bob Ippolito <bob at redivi.com>

Date: Wed, 22 Dec 2004 20:17:31 -0500

Scott Cooper wrote:

Well, for OpenGL, you’re probably doing a glClear on every frame that
you update anyway… so it’s not as big of a deal.

That’s true for 3D work, but not for a lot of 2D work and especially GUIs where much of the screen remains static between frames (so it seems sensible to avoid redraw).

The point I wanted to make is that you can’t just switch your 2D app to OpenGL and expect a vast speed increase; you might even get a major slowdown, simply because the back buffer is undefined after a swap.

There’s a balance to strike between that and the fact that flip-style
double buffering often gets you a speed increase over copy-style
double buffering.

I think that’s important with all this talk of an OpenGL backend for SDL…

But what Stephane says is a bit of a concern: I really had assumed that if screen->flags & SDL_DOUBLEBUF, then there were 2 buffers and they would be swapped, not copied, on SDL_Flip.

As it happens, it’s not the case on the most widespread platforms
(directx/x11).

David/Stephane: I think dirty rects won’t work unless you know for sure whether you’ve got a copy or a swap if you are alpha blending to that surface.

They will. Think about it a bit more (think larger rectangles that
cover the n-1 frame and the n-2 frame changes at the same time).
That’s been done for ages, on different architectures and in different
situations.

i.e. if your surface was really copied and so you redraw more than required, you might blend it twice and achieve a different result.

As it happens, Pig uses a software back buffer to do its blends when it detects a hardware display surface. Since you need a hardware surface for double buffering (?), that implies that Pig is always using a software back buffer for blends whenever there’s double buffering.

Since the software back buffer is just copied straight to the hardware surface without blending, this technique does work even if the buffer swap is actually a copy.

But with OpenGL, you get “proper” blending and no need to use a backing surface (which you really have to go out of your way to get anyway), so you have to make sure you don’t blend an area twice: you do need to know the state of the back buffer.

I have seen a number of OpenGL apps take the following approach:

The user specifies on the command line “my back buffer is copied” or “my back buffer is swapped” or the app will assume it’s undefined and redraw each frame.

Terrible kludge…any idea how the OpenGL backend for SDL will handle this?

Depending on the OpenGL implementation and environment, you get either a
copy-style double buffer or a flip-style double buffer (or, as David said,
an undefined situation, although I never saw that one happen).
glSDL will provide whatever is underneath, really.

From the SDL viewpoint, I’m not sure what to do. Flip-style double
buffering is actually faster than copy-style double buffering, and
copy-style double buffering is more forgiving of careless
applications.

Or maybe some thinking should happen before 1.3 (we could either have
the creation of a shadow surface forced by a flag to SDL_SetVideoMode,
or have a new SDL_DOUBLEBUF_FLIP flag…).
But if we assume the double buffering should take place as a copy, the
current situation doesn’t sound fine to me, because it would mean most
backends just provide dumb frame buffer access and little to no
acceleration.

Sam, do you have any comments? The current situation is fuzzy, but what
is the future?

Stephane

From: Bob Ippolito

Date: Wed, 22 Dec 2004 20:17:31 -0500

[…]

David/Stephane: I think dirty rects won’t work unless you know for
sure whether you’ve got a copy or a swap if you are alpha blending
to that surface.

i.e. if your surface was really copied and so you redraw more than
required, you might blend it twice and achieve a different result.

It does work with alpha blending or any other effects as well, as long
as you restore the background (remove old sprite image) before you
draw.

If you want incremental rendering effects without dirtyrects, you have
to render off-screen no matter what, whereas with dirty rects, you
could render directly to the screen, provided you knew whether the
backend is copying or flipping. If you don’t know, you just have to
do it the same way you would without dirty rects.

One could put it this way: Dirty rects is exactly like repainting the
whole screen, except you avoid repainting areas that are totally
unchanged. If you can’t tell whether copying or flipping is used,
dirty areas should be redrawn - not modified.

As it happens, Pig uses a software back buffer to do its blends
when it detects a hardware display surface.

Pig uses a software back buffer only to avoid doing software alpha
blending directly in VRAM. A sloppy performance hack, that is. One
should check for hardware accelerated alpha blending before deciding
to use an extra software buffer.

[…]

Since the software back buffer is just copied straight to the
hardware surface without blending, this technique does work even if
the buffer swap is actually a copy.

It would work anyway, because the net result of a complete update of a
rectangle is not a blending blit. There is always an opaque blit that
restores the background first thing.

Actually, “restore rects” or something might be a more appropriate
name, because that’s really what they’re about in Pig. They do double
as typical dirty rects as well, but only as a side effect of not
rendering directly into hardware surfaces. The very reason there are
two sets of them (one per page) is to restore areas correctly before
rendering sprites.

But with OpenGL, you get “proper” blending and no need to use a
backing surface (which you really have to go out of your way to get
anyway), so you have to make sure you don’t blend an area twice:
you do need to know the state of the back buffer.

You have the same problem with sprites without alpha, or even if you
just repaint full rectangles with opaque blits. The problem is not
the graphics in the areas you’re going to repaint (because you always
repaint them completely anyway), but figuring out exactly where those
areas are, or rather, how old your buffers are when you get them
back.

//David Olofson - Programmer, Composer, Open Source Advocate

.- Audiality -----------------------------------------------.
| Free/Open Source audio engine for games and multimedia. |
| MIDI, modular synthesis, real time effects, scripting,… |
`-----------------------------------> http://audiality.org -’
--- http://olofson.net --- http://www.reologica.se ---

On Thursday 23 December 2004 03.57, Scott Cooper wrote:

[…]

David/Stephane: I think dirty rects won’t work unless you know for
sure whether you’ve got a copy or a swap if you are alpha blending
to that surface.

i.e. if your surface was really copied and so you redraw more than
required, you might blend it twice and achieve a different result.

It does work with alpha blending or any other effects as well, as long
as you restore the background (remove old sprite image) before you
draw.

Yes, Stephane/David, you are right of course; I am tripping.

As long as you lay down something 100% opaque first then it doesn’t matter.

Now if nobody can substantiate my claim that an OpenGL back buffer can become garbage after a swap, then I’ll just crawl into a hole somewhere and rewrite my 2D stuff for OpenGL like I should…

Scott

From: David Olofson <david at olofson.net>

Date: Thu, 23 Dec 2004 13:00:51 +0100
On Thursday 23 December 2004 03.57, Scott Cooper wrote:

[…]

Now if nobody can substantiate my claim that an OpenGL back buffer
can become garbage after a swap, then I’ll just crawl into a hole
somewhere and rewrite my 2D stuff for OpenGL like I should…

Well, I’ve suggested reasons why some driver might do it (in-place
conversion), but I haven’t seen it so far…

//David Olofson - Programmer, Composer, Open Source Advocate

.- Audiality -----------------------------------------------.
| Free/Open Source audio engine for games and multimedia. |
| MIDI, modular synthesis, real time effects, scripting,… |
`-----------------------------------> http://audiality.org -’
--- http://olofson.net --- http://www.reologica.se ---

On Thursday 23 December 2004 14.00, Scott Cooper wrote:

Sam, do you have any comments? The current situation is fuzzy, but what
is the future?

I have always assumed that if you request and get SDL_DOUBLEBUF then you
have exactly two framebuffers and the flip will swap between them,
synchronizing with vertical retrace if possible.

Of course the driver is always free to ignore the flag and return a video
mode without double buffering, and the app must be able to handle it. :)

See ya,
-Sam Lantinga, Software Engineer, Blizzard Entertainment