glSDL on an existing game

Hi,

I am trying to put glSDL into my game. It seems to almost work. I use
SDL_GLSDL as my only flag when I want OpenGL on startup (a command line
argument activates this). I get a window that shows me what I want to
see with flicker. The frames per second is really bad too. I read
somewhere about not blitting directly to the screen but I am unsure of
how to do this exactly. Here is how I blit:

void DrawIMG(SDL_Surface *img, int x, int y)
{
    SDL_Surface *screen = SDL_GetVideoSurface();
    SDL_Rect dest;
    dest.x = x;
    dest.y = y;
    SDL_BlitSurface(img, NULL, screen, &dest);
}

SDL_Surface *bg = LoadIMG("gfx/startupbg.png");
int done = 0;
while (!done)
{
    SDL_FillRect(screen, 0, 0);
    DrawIMG(bg, 0, 0);
    SDL_Flip(screen);
}

This is a very oversimplified version of what I do, but it still gets me
flicker and a very slow framerate. I also draw lines, boxes, and
SDL_TTF fonts.

Thanks in advance,
TomT64

TomT64 wrote:

Hi,

I am trying to put glSDL into my game. It seems to almost work. I
use SDL_GLSDL as my only flag when I want OpenGL on startup (a command
line argument activates this). I get a window that shows me what I
want to see with flicker. The frames per second is really bad too. I
read somewhere about not blitting directly to the screen but I am
unsure of how to do this exactly. Here is how I blit:

void DrawIMG(SDL_Surface *img, int x, int y)
{
    SDL_Surface *screen = SDL_GetVideoSurface();
    SDL_Rect dest;
    dest.x = x;
    dest.y = y;
    SDL_BlitSurface(img, NULL, screen, &dest);
}

SDL_Surface *bg = LoadIMG("gfx/startupbg.png");
int done = 0;
while (!done)
{
    SDL_FillRect(screen, 0, 0);
    DrawIMG(bg, 0, 0);
    SDL_Flip(screen);
}

This is a very oversimplified version of what I do, but it still gets
me flicker and a very slow framerate. I also draw lines, boxes, and
SDL_TTF fonts.

Thanks in advance,
TomT64

Just a quick guess: does your background have the size of the screen? Then
you overwrite the whole screen with your DrawIMG call, so the FillRect
is useless. Remove that line; it will improve your frame rate.

Julien

Hi,

I am trying to put glSDL into my game. It seems to almost work. I
use SDL_GLSDL as my only flag when I want OpenGL on startup (a
command line argument activates this). I get a window that shows
me what I want to see with flicker.

You must use SDL_DOUBLEBUF to get double buffering - even on Linux,
in some cases. (Yes, some Linux OpenGL drivers really do complex
clipping and rendering directly into the window if you tell them to.
I think most drivers do this on other platforms.) You should use
double buffering anyway, since the “fake” double buffering you (may)
get otherwise may not provide retrace sync’ed flips.

So, with OpenGL or glSDL, without SDL_DOUBLEBUF, you’re likely to get
either tearing or horrible flickering. Use SDL_DOUBLEBUF at all
times, unless you really know what you’re doing.
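
For example, the mode setting call could look something like this (just a
sketch, not code from the game; the resolution and depth are placeholders,
and SDL_GLSDL comes from the glSDL wrapper's glSDL.h, not from stock SDL):

#include "glSDL.h"   /* glSDL wrapper; wraps SDL.h and defines SDL_GLSDL */

/* Sketch: request an OpenGL-accelerated, double buffered display. */
static SDL_Surface *open_display(int use_glsdl)
{
    Uint32 flags = SDL_DOUBLEBUF | SDL_HWSURFACE;
    if (use_glsdl)
        flags |= SDL_GLSDL;              /* route 2D calls through OpenGL */
    return SDL_SetVideoMode(640, 480, 0, flags);  /* 0 bpp = current depth */
}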

The frames per second is
really bad too.

Are you sure you get the right OpenGL lib? Do you have accelerated
OpenGL at all?

Or translated; Are SDL applications using OpenGL natively accelerated?
Do any OpenGL applications get acceleration?

I read somewhere about not blitting directly to
the screen but I am unsure of how to do this exactly.

That’s about doing s/w rendering into the screen. That’s not what
happens when you (gl)SDL_BlitSurface to the screen - which is, in
fact, the only way to have glSDL accelerate your blits at all. (Blits
between off-screen surfaces cannot be accelerated by OpenGL, at least
not without uncommon extensions.)

Here is how
I blit:

void DrawIMG(SDL_Surface *img, int x, int y)
{
    SDL_Surface *screen = SDL_GetVideoSurface();
    SDL_Rect dest;
    dest.x = x;
    dest.y = y;
    SDL_BlitSurface(img, NULL, screen, &dest);
}

SDL_Surface *bg = LoadIMG("gfx/startupbg.png");
int done = 0;
while (!done)
{
    SDL_FillRect(screen, 0, 0);
    DrawIMG(bg, 0, 0);
    SDL_Flip(screen);
}

That’s fine (with the glSDL wrapper, mostly because it’s not 100% SDL
2D API compliant), but you really should SDL_DisplayFormat() the
image after loading it.

The glSDL wrapper will cheat and get full acceleration either way
(but it’ll fail if you start modifying the image as a procedural
texture), but the backend version will have to convert and upload
the surface for every single blit ==> dog slow rendering, especially
if your OpenGL subsystem doesn’t use DMA for texture uploading.
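
In other words, convert once at load time, along these lines (a sketch;
I'm assuming your LoadIMG() wraps IMG_Load() from SDL_image):

#include "SDL_image.h"   /* IMG_Load() */

/* Sketch: load an image and convert it to the display format once,
   instead of forcing a conversion on every single blit. */
SDL_Surface *LoadIMG(const char *path)
{
    SDL_Surface *tmp = IMG_Load(path);
    SDL_Surface *img;
    if (!tmp)
        return NULL;
    img = SDL_DisplayFormat(tmp);   /* use SDL_DisplayFormatAlpha() for
                                       images with an alpha channel */
    SDL_FreeSurface(tmp);
    return img;
}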

I also draw lines,

How? (I’m afraid I have bad news…)

boxes,

I’d strongly recommend using SDL_FillRect() for filled boxes, vertical
and horizontal lines and hollow boxes… (It translates to rather
efficient OpenGL rendering operations, so short of avoiding some
function call overhead, you can’t do it much faster than that even
with native OpenGL code.)
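
For example, a hollow box is just four thin SDL_FillRect() calls (a sketch;
t is the border thickness in pixels):

/* Sketch: hollow box drawn as four filled rectangles of thickness t. */
void DrawHollowBox(SDL_Surface *dst, int x, int y, int w, int h,
                   int t, Uint32 color)
{
    SDL_Rect r;
    r.x = x;         r.y = y;         r.w = w; r.h = t;   /* top    */
    SDL_FillRect(dst, &r, color);
    r.x = x;         r.y = y + h - t; r.w = w; r.h = t;   /* bottom */
    SDL_FillRect(dst, &r, color);
    r.x = x;         r.y = y;         r.w = t; r.h = h;   /* left   */
    SDL_FillRect(dst, &r, color);
    r.x = x + w - t; r.y = y;         r.w = t; r.h = h;   /* right  */
    SDL_FillRect(dst, &r, color);
}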

and SDL_TTF fonts.

Problem: These will have to be handled as procedural surfaces. That
is, render the text into a surface, SDL_DisplayFormat() that, and
blit it to the screen. Don’t update the text surfaces more often than
you have to, because the uploading can be very expensive on some
systems, even if the rest of the OpenGL subsystem is very fast.

If you want scrolling text and the like, rather implement it like
emulated hardware scrolling; render text every now and then
(preferably into a bunch of small surfaces, to avoid occasional
stalls as a large texture is uploaded), and then scroll the resulting
surfaces around. You can use clipping to keep the text inside a box
and stuff like that.
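
Something along these lines, for example (a sketch only; font is an
already opened TTF_Font, and the clip rectangle is one of your own):

#include "SDL_ttf.h"   /* TTF_RenderText_Blended() */

/* Sketch: render and convert the text once, then reuse the cached surface. */
SDL_Surface *MakeTextSurface(TTF_Font *font, const char *msg, SDL_Color color)
{
    SDL_Surface *raw = TTF_RenderText_Blended(font, msg, color);
    SDL_Surface *txt;
    if (!raw)
        return NULL;
    txt = SDL_DisplayFormatAlpha(raw);   /* convert once, not every frame */
    SDL_FreeSurface(raw);
    return txt;
}

/* Sketch: blit the cached text each frame, clipped to a box for scrolling. */
void DrawCachedText(SDL_Surface *screen, SDL_Surface *text,
                    SDL_Rect *text_box, int x, int y)
{
    SDL_Rect dest;
    dest.x = x;
    dest.y = y;
    SDL_SetClipRect(screen, text_box);  /* keep scrolled text inside the box */
    SDL_BlitSurface(text, NULL, screen, &dest);
    SDL_SetClipRect(screen, NULL);      /* restore full screen clipping */
}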

//David Olofson - Programmer, Composer, Open Source Advocate

.- Audiality -----------------------------------------------.
| Free/Open Source audio engine for games and multimedia. |
| MIDI, modular synthesis, real time effects, scripting,… |
`-----------------------------------> http://audiality.org -’
http://olofson.net | http://www.reologica.se

On Wednesday 03 March 2004 10.16, TomT64 wrote:

So, with OpenGL or glSDL, without SDL_DOUBLEBUF, you’re likely to get
either tearing or horrible flickering. Use SDL_DOUBLEBUF at all
times, unless you really know what you’re doing.

Two points:

  1. SDL_DOUBLEBUF implicitly requests a hardware surface in 2D, and this
    isn’t always what you want.
  2. SDL_DOUBLEBUF has no effect in SDL OpenGL mode.

These points may not apply to glSDL, I’ll let David comment.

See ya,
-Sam Lantinga, Software Engineer, Blizzard Entertainment

Right; the above is about glSDL, which differs slightly from both SDL
2D and native OpenGL;

glSDL translates SDL_DOUBLEBUF into the corresponding GL attribute,
SDL_GL_DOUBLEBUFFER, which means that without SDL_DOUBLEBUF, you may
get a true single buffered display, where rendering is done polygon
by polygon directly into the window. With SDL_DOUBLEBUF, you (should
always) get some form of double buffered display - blitting or
flipping, but never rendering directly into the current display page.
What you actually get depends on the platform, the OpenGL driver, and
whether you’re running in windowed or fullscreen mode.
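
In rough pseudocode, the translation amounts to something like this (a
sketch of the idea, not the actual glSDL source):

#include "SDL.h"

/* Sketch: map the SDL 2D flag onto the corresponding GL attribute. */
static SDL_Surface *glsdl_set_mode(int w, int h, int bpp, Uint32 flags)
{
    SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER,
                        (flags & SDL_DOUBLEBUF) ? 1 : 0);
    /* From here on, "flips" become SDL_GL_SwapBuffers() calls. */
    return SDL_SetVideoMode(w, h, bpp, SDL_OPENGL);
}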

Normally, you should never ask for a single buffered OpenGL (or glSDL)
display, because you may well get it on some systems - and it’s
basically useless for normal animation. It’s roughly equivalent to
asking for a single buffered hardware surface for an SDL 2D display,
which is only useful if you do your own buffering, or render
completely without overdraw.

//David Olofson - Programmer, Composer, Open Source Advocate

.- Audiality -----------------------------------------------.
| Free/Open Source audio engine for games and multimedia. |
| MIDI, modular synthesis, real time effects, scripting,… |
`-----------------------------------> http://audiality.org -’
http://olofson.net | http://www.reologica.se

On Friday 05 March 2004 05.48, Sam Lantinga wrote:

So, with OpenGL or glSDL, without SDL_DOUBLEBUF, you’re likely to
get either tearing or horrible flickering. Use SDL_DOUBLEBUF at
all times, unless you really know what you’re doing.

Two points:

  1. SDL_DOUBLEBUF implicitly requests a hardware surface in 2D, and
    this isn’t always what you want.
  2. SDL_DOUBLEBUF has no effect in SDL OpenGL mode.

These points may not apply to glSDL, I’ll let David comment.

glSDL translates SDL_DOUBLEBUF into the corresponding GL attribute,

Since the SDL_DOUBLEBUF flag in glSDL has very different semantics from
the 2D SDL, and you’re trying to make glSDL a transparent layer, I recommend
that you use the SDL GL double buffered attribute instead, which defaults
to double buffering. The only time an app might want single buffering is
if they’re debugging OpenGL, which they’ll only do if they know it’s glSDL
behind the scenes. :)

Basically, glSDL should preserve the same semantics as the SDL 2D API
wherever possible.

See ya!
-Sam Lantinga, Software Engineer, Blizzard Entertainment

Yeah, that’s the idea. However, if double buffering is used by
default, there’s another problem: You may get a h/w page flipping
display without asking for it, and that may break applications that
expect a s/w shadow surface. (Partial updates break if you don’t know
how many buffers there are, for example.)
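
To make the failure mode concrete (a sketch; repaint is a hypothetical
callback that redraws one rectangle, nothing from SDL or glSDL):

#include "SDL.h"

/* Sketch: with a two-buffer flipping display, a rectangle dirtied this
   frame is still stale in the other buffer, so it has to be repainted
   (or its update repeated) for two consecutive frames. Code written for
   a single shadow surface only does it once, which is what breaks. */
void update_rect_twice(SDL_Surface *screen, SDL_Rect area,
                       void (*repaint)(SDL_Surface *, SDL_Rect))
{
    repaint(screen, area);   /* frame N:   fixes the buffer being drawn to */
    SDL_Flip(screen);
    repaint(screen, area);   /* frame N+1: fixes the buffer just shown     */
    SDL_Flip(screen);
}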

The easy cases, and how they should be handled:

SDL_DOUBLEBUF | SDL_HWSURFACE: (dbl buffered h/w surface)
	* Set up a double buffered OpenGL context
	* Blits to the screen are OpenGL accelerated

SDL_HWSURFACE: (single buffered h/w surface)
	* Set up a single buffered OpenGL context
	* Blits to the screen are OpenGL accelerated

Then there’s the s/w surface case, where, if I understand it
correctly, applications should be able to assume that what they see
as the display surface is a single shadow surface. glSDL can’t really
handle this properly without basically giving up all acceleration, or
(maybe) through some smart “render everything twice” logic behind the
scenes.

Well, there is an exception: Things work just fine if the OpenGL
driver implements double buffering by blitting from an off-screen
buffer. Unfortunately, there’s no way (AFAIK) to request or find out
when a driver does that, so we have to assume that there are always
two buffers and h/w page flipping… :-/

Now, drivers that support accelerated OpenGL rendering into textures
one way or another would be a different matter…

//David Olofson - Programmer, Composer, Open Source Advocate

.- Audiality -----------------------------------------------.
| Free/Open Source audio engine for games and multimedia. |
| MIDI, modular synthesis, real time effects, scripting,… |
`-----------------------------------> http://audiality.org -’
http://olofson.net | http://www.reologica.se

On Friday 05 March 2004 15.56, Sam Lantinga wrote:

glSDL translates SDL_DOUBLEBUF into the corresponding GL
attribute,

Since the SDL_DOUBLEBUF flag in glSDL has very different semantics
from the 2D SDL, and you’re trying to make glSDL a transparent
layer, I recommend that you use the SDL GL double buffered
attribute instead, which defaults to double buffering. The only
time an app might want single buffering is if they’re debugging
OpenGL, which they’ll only do if they know it’s glSDL behind the
scenes. :)

Basically, glSDL should preserve the same semantics as the SDL 2D
API wherever possible.

David Olofson wrote:

glSDL translates SDL_DOUBLEBUF into the corresponding GL
attribute,

Since the SDL_DOUBLEBUF flag in glSDL has very different semantics
from the 2D SDL, and you’re trying to make glSDL a transparent
layer, I recommend that you use the SDL GL double buffered
attribute instead, which defaults to double buffering.

That’s what we do already, at least in the version I’m working on. The
SDL_DOUBLEBUF flag is totally and silently ignored.
Also, we don’t touch the GL double buffered attribute at all and leave it
at its default value of 1.

The only
time an app might want single buffering is if they’re debugging
OpenGL, which they’ll only do if they know it’s glSDL behind the
scenes. :)

Basically, glSDL should preserve the same semantics as the SDL 2D
API wherever possible.

Yeah, that’s the idea. However, if double buffering is used by
default, there’s another problem: You may get a h/w page flipping
display without asking for it, and that may break applications that
expect a s/w shadow surface. (Partial updates break if you don’t know
how many buffers there are, for example.)

Yes, partial updates are probably broken, because they are implemented
using SDL_GL_SwapBuffers() (which means you’ll lose some things if you
call SDL_UpdateRects more than once per frame and have a real “pointer
exchange” double buffer, and not a “blit-copy” double buffer).
Obviously something smart needs to be done here ;)

The easy cases, and how they should be handled:

SDL_DOUBLEBUF | SDL_HWSURFACE: (dbl buffered h/w surface)
* Set up a double buffered OpenGL context
* Blits to the screen are OpenGL accelerated

SDL_HWSURFACE: (single buffered h/w surface)
* Set up a single buffered OpenGL context
* Blits to the screen are OpenGL accelerated

Then there’s the s/w surface case, where, if I understand it
correctly, applications should be able to assume that what they see
as the display surface is a single shadow surface. glSDL can’t really
handle this properly without basically giving up all acceleration, or
(maybe) through some smart “render everything twice” logic behind the
scenes.

Well, in our case, the video surface is created with all the required
attributes, whatever is asked for, then glSDL deals with it by doing
internal conversions. The only exception to this is that glSDL always
returns an SDL_HWSURFACE.
So if the app tries to do direct screen access, the video surface is
locked, the whole screen is glReadPixels()'d, and the app is presented
with that fake surface. That surface is uploaded back to the screen when
the unlock happens.
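
Roughly this, in other words (a sketch of the idea only, not the actual
backend code; it assumes a screen-sized RGBA software shadow surface and
glosses over the vertical flip between OpenGL and SDL coordinates):

#include <GL/gl.h>
#include "SDL.h"

/* Sketch: emulate locking the display surface by reading the GL frame
   buffer back into a software shadow surface... */
void emulated_lock(SDL_Surface *shadow, int w, int h)
{
    glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, shadow->pixels);
}

/* ...and emulate unlocking by pushing the (possibly modified) pixels back. */
void emulated_unlock(SDL_Surface *shadow, int w, int h)
{
    glDrawPixels(w, h, GL_RGBA, GL_UNSIGNED_BYTE, shadow->pixels);
}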
And yes, it seems to me we are allowed to return an SDL_HWSURFACE for the
screen even if we aren’t asked for it, because of this piece from the
SDL_SetVideoMode manpage :

       Note:
          Whatever flags SDL_SetVideoMode could satisfy are set in the
          flags member of the returned surface.

So that returning an SDL_HWSURFACE can be seen as a way of “satisfying”
an SDL_SWSURFACE :)
(I’ve never seen programs that weren’t friendly in this regard, i.e.
that do direct pixel access to the screen without checking if it needs locking.)
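
For reference, the friendly pattern meant here is the standard locking
idiom from the SDL documentation (a generic sketch, not code from any
particular program):

#include "SDL.h"

/* Sketch: only lock when the surface actually requires it. */
void poke_pixels(SDL_Surface *screen)
{
    if (SDL_MUSTLOCK(screen)) {
        if (SDL_LockSurface(screen) < 0)
            return;                      /* can't lock; skip direct access */
    }
    /* ... direct access to screen->pixels goes here ... */
    if (SDL_MUSTLOCK(screen))
        SDL_UnlockSurface(screen);
}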

Well, there is an exception: Things work just fine if the OpenGL
driver implements double buffering by blitting from an off-screen
buffer. Unfortunately, there’s no way (AFAIK) to request or find out
when a driver does that, so we have to assume that there are always
two buffers and h/w page flipping… :-/

Now, drivers that support accelerated OpenGL rendering into textures
one way or another would be a different matter…

Well, maybe one day this extension will come to Linux, but for now
render to texture is still Windows-only.

And now, I have a question : Sam, do you have something against adding a
SDL_GLSDL flag ? There are two purposes for this :

  • backwards compatibility with older glSDL programs (SDL_GLSDL is used
    at video init)
  • if we set this flag for the video surface, this would allow wrappers
    for SDL to recognize glSDL (here I have pygame in mind, because pygame
    whines about the fact that programs are trying to do 2D blitting to an
    SDL_OPENGL screen, so adding that flag could allow pygame to tell the
    difference between glSDL and other backends).

Stephane

On Friday 05 March 2004 15.56, Sam Lantinga wrote:

       Note:
          Whatever flags SDL_SetVideoMode could satisfy are set in the
          flags member of the returned surface.

So that returning an SDL_HWSURFACE can be seen as a way of “satisfying”
an SDL_SWSURFACE :)

No, SDL_HWSURFACE can fall back to SDL_SWSURFACE, but not the other way
around.

(I’ve never seen programs that weren’t friendly in this regard, i.e.
that do direct pixel access to the screen without checking if it needs locking.)

There are plenty. :)
I think this is still okay, since not doing this is a huge performance
penalty, but there are lots of applications which twiddle all the bits
on the screen every frame.

And now, I have a question : Sam, do you have something against adding a
SDL_GLSDL flag ? There are two purposes for this :

  • backwards compatibility with older glSDL programs (SDL_GLSDL is used
    at video init)
  • if we set this flag for the video surface, this would allow wrappers
    for SDL to recognize glSDL (here I have pygame in mind, because pygame
    whines about the fact that programs are trying to do 2D blitting to an
    SDL_OPENGL screen, so adding that flag could allow pygame to tell the
    difference between glSDL and other backends).

No, that’s fine if the glSDL backend is going to be merged into SDL 1.3.

See ya,
-Sam Lantinga, Software Engineer, Blizzard Entertainment

  • if we set this flag for the video surface, this would allow wrappers
    for SDL to recognize glSDL (here I have pygame in mind, because pygame
    whines about the fact that programs are trying to do 2D blitting to an
    SDL_OPENGL screen, so adding that flag could allow pygame to tell the
    difference between glSDL and other backends).

You probably shouldn’t expose the SDL_OPENGL flag if the application didn’t
ask for it…

See ya,
-Sam Lantinga, Software Engineer, Blizzard Entertainment

Sam Lantinga wrote:

So that returning an SDL_HWSURFACE can be seen as a way of “satisfying”
an SDL_SWSURFACE :)

No, SDL_HWSURFACE can fall back to SDL_SWSURFACE, but not the other way
around.

Ok. As this wasn’t documented, I wasn’t sure…
So, what’s your advice in this case ? Fail if the program doesn’t
request a hardware video surface ? That would really reduce the scope of
glSDL…

[…]

And now, I have a question : Sam, do you have something against adding a
SDL_GLSDL flag ? There are two purposes for this :

  • backwards compatibility with older glSDL programs (SDL_GLSDL is used
    at video init)
  • if we set this flag for the video surface, this would allow wrappers
    for SDL to recognize glSDL (here I have pygame in mind, because pygame
    whines about the fact that programs are trying to do 2D blitting to an
    SDL_OPENGL screen, so adding that flag could allow pygame to tell the
    difference between glSDL and other backends).

You probably shouldn’t expose the SDL_OPENGL flag if the application didn’t
ask for it…

There are two things :

  • glSDL doesn’t support OpenGL rendering (because it would mess with
    glSDL blitting), so if an application requests SDL_OPENGL,
    setvideomode fails.
  • glSDL uses other backends’ functions to act as a “portable” backend.
    That is, it uses X init/wm/shutdown code under X, and should (but it’s
    untested ATM) use corresponding Windows code under Windows, for example.
    So the SDL_OPENGL flag is used to fool other backends’ code. I’ll see if
    I can remove it, but it might be tricky.

Stephane

OTOH, when an application asks for glSDL using the SDL_GLSDL flag
(as opposed to the user setting the corresponding environment
variable), we can probably assume that the application is “glSDL
aware” - and then it should say SDL_GLSDL | SDL_HWSURFACE.

Or the SDL_HWSURFACE flag could be implied, since glSDL doesn’t really
do anything useful without it anyway… (All glSDL could do would be
to act as an emulator style “OpenGL framebuffer” with all rendering
done in s/w, and only shadow->display blits done by OpenGL. Is that
worth the extra typing and “Why doesn’t SDL_GLSDL provide full
acceleration?” FAQ?)

Either way, one could actually claim that glSDL always gives you a
s/w display surface. That’s because it cannot really provide one at
all, since there’s no portable way of accessing the OpenGL frame
buffer directly. When you lock the glSDL display surface, you get a
software surface. glSDL copies the data back and forth when
(un)locking. (Which is why it’s so expensive. glSDL can’t know which
areas to copy, or if copying in both directions is really needed.)

Basically, if you really care whether the display surface is a h/w or
s/w surface, glSDL is not the right tool for the job. Not much we can
do about that unfortunately. (Except maybe in a few special cases…)
It’s a limitation of the OpenGL API.

//David Olofson - Programmer, Composer, Open Source Advocate

.- Audiality -----------------------------------------------.
| Free/Open Source audio engine for games and multimedia. |
| MIDI, modular synthesis, real time effects, scripting,… |
`-----------------------------------> http://audiality.org -’
http://olofson.net | http://www.reologica.se

On Saturday 06 March 2004 13.19, Stephane Marchesin wrote:

Sam Lantinga wrote:

So that returning an SDL_HWSURFACE can be seen as a way of
“satisfying” an SDL_SWSURFACE :)

No, SDL_HWSURFACE can fall back to SDL_SWSURFACE, but not the
other way around.

Ok. As this wasn’t documented, I wasn’t sure…
So, what’s your advice in this case ? Fail if the program doesn’t
request a hardware video surface ? That would really reduce the
scope of glSDL…

David Olofson wrote:

Sam Lantinga wrote:

So that returning an SDL_HWSURFACE can be seen as a way of
“satisfying” an SDL_SWSURFACE :)

No, SDL_HWSURFACE can fall back to SDL_SWSURFACE, but not the
other way around.

Ok. As this wasn’t documented, I wasn’t sure…
So, what’s your advice in this case ? Fail if the program doesn’t
request a hardware video surface ? That would really reduce the
scope of glSDL…

OTOH, when an application asks for glSDL using the SDL_GLSDL flag
(as opposed to the user setting the corresponding environment
variable), we can probably assume that the application is “glSDL
aware” - and then it should say SDL_GLSDL | SDL_HWSURFACE.

Sure, the SDL_GLSDL flag is a way out. However, if applications have to
special-case every backend out there, it’s not easy for them.

Or the SDL_HWSURFACE flag could be implied, since glSDL doesn’t really
do anything useful without it anyway… (All glSDL could do would be
to act as an emulator style “OpenGL framebuffer” with all rendering
done in s/w, and only shadow->display blits done by OpenGL. Is that
worth the extra typing and “Why doesn’t SDL_GLSDL provide full
acceleration?” FAQ?)

Well, there is no shadow surface with glSDL, and I don’t know what would
happen if one was created.
We don’t want a shadow surface for numerous reasons anyway :

  • as you said, this prevents any hardware accelerated rendering, thus
    losing the point of glSDL
  • we have to do format conversion of surfaces most of the time anyway
    (because OpenGL textures are RGB8 or RGBA8, for OpenGL 1.1 compatibility
    reasons)

Either way, one could actually claim that glSDL always gives you a
s/w display surface. That’s because it cannot really provide one at
all, since there’s no portable way of accessing the OpenGL frame
buffer directly. When you lock the glSDL display surface, you get a
software surface.

But then, why would the application want to lock the glSDL display
surface if it thinks it’s a software surface ?
If the app is assured of getting a sw surface, it has no reason to do so.

glSDL copies the data back and forth when
(un)locking. (Which is why it’s so expensive. glSDL can’t know which
areas to copy, or if copying in both directions is really needed.)

Basically, if you really care whether the display surface is a h/w or
s/w surface, glSDL is not the right tool for the job. Not much we can
do about that unfortunately. (Except maybe in a few special cases…)
It’s a limitation of the OpenGL API.

So maybe we could add a flag like “SDL_DONTCAREWARESURFACE” which tells
the library that the application doesn’t care whether the video surface
is sw or hw ?
Because, for now, applications have no way to tell.

Or we expose the backend selection through an API and let applications
decide which backend they like best among the ones available. That might
be a good idea for 1.3.

Stephane

On Saturday 06 March 2004 13.19, Stephane Marchesin wrote:

David Olofson wrote:

Sam Lantinga wrote:

So that returning an SDL_HWSURFACE can be seen as a way of
“satisfying” an SDL_SWSURFACE :)

No, SDL_HWSURFACE can fall back to SDL_SWSURFACE, but not the
other way around.

Ok. As this wasn’t documented, I wasn’t sure…
So, what’s your advice in this case ? Fail if the program doesn’t
request a hardware video surface ? That would really reduce the
scope of glSDL…

OTOH, when an application asks for glSDL using the SDL_GLSDL
flag (as opposed to the user setting the corresponding
environment variable), we can probably assume that the
application is “glSDL aware” - and then it should say SDL_GLSDL |
SDL_HWSURFACE.

Sure, the SDL_GLSDL flag is a way out. However, if applications
have to special-case every backend out there, it’s not easy for
them.

Right. Speaking of which, the SDL_GLSDL flag shouldn’t really be a
flag, but rather use the environment variable based backend selection
API. Not sure if we really should do it that way, but it seems
logically correct to me…

Or the SDL_HWSURFACE flag could be implied, since glSDL doesn’t
really do anything useful without it anyway… (All glSDL could
do would be to act as an emulator style “OpenGL framebuffer” with
all rendering done in s/w, and only shadow->display blits done by
OpenGL. Is that worth the extra typing and “Why doesn’t SDL_GLSDL
provide full acceleration?” FAQ?)

Well, there is no shadow surface with glSDL, and I don’t know what
would happen if one was created.
We don’t want a shadow surface for numerous reasons anyway :

  • as you said, this prevents any hardware accelerated rendering,
    thus losing the point of glSDL
  • we have to do format conversion of surfaces most of the time
    anyway (because OpenGL textures are RGB8 or RGBA8, for OpenGL 1.1
    compatibility reasons)

Right, but that’s not the kind of shadow surface I mean. This shadow
surface is the “fake” surface that glSDL/wrapper uses when you lock
the screen. (Kobo Deluxe uses it for screenshots.) It’s only used
when the display surface is locked. Normal glSDL blits to the screen
bypass this surface. Older versions of glSDL/wrapper don’t even
have this feature, IIRC. (Or it was just broken. Same net result. ;)
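
For reference, that shadow path is what makes something like this work
(a sketch, not Kobo Deluxe's actual code):

#include "SDL.h"

/* Sketch: grab a screenshot through the display surface. Under glSDL, the
   internal surface lock while saving is where the expensive read-back from
   the OpenGL frame buffer happens; under plain SDL 2D it is cheap. */
void save_screenshot(const char *path)
{
    SDL_Surface *screen = SDL_GetVideoSurface();
    if (screen)
        SDL_SaveBMP(screen, path);
}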

Either way, one could actually claim that glSDL always gives you
a s/w display surface. That’s because it cannot really provide
one at all, since there’s no portable way of accessing the OpenGL
frame buffer directly. When you lock the glSDL display surface,
you get a software surface.

But then, why would the application want to lock the glSDL display
surface if it thinks it’s a software surface ?
If the app is assured of getting a sw surface, it has no reason to
do so.

Yeah, you’re right… And if it really is a s/w surface,
applications should use SDL_UpdateRect(s)(), so we should be ok
there; just implement that through temporary textures or direct pixel
copying.

glSDL copies the data back and forth when
(un)locking. (Which is why it’s so expensive. glSDL can’t know
which areas to copy, or if copying in both directions is really
needed.)

Basically, if you really care whether the display surface is a h/w
or s/w surface, glSDL is not the right tool for the job. Not much
we can do about that unfortunately. (Except maybe in a few
special cases…) It’s a limitation of the OpenGL API.

So maybe we could add a flag like “SDL_DONTCAREWARESURFACE” which
tells the library that the application doesn’t care whether the
video surface is sw or hw ?
Because, for now, applications have no way to tell.

Well, as long as we deal with applications that are aware of glSDL,
there’s no problem. Users will just see some “Use glSDL acceleration”
switch somewhere, and as the application is glSDL compatible, it
should Just Work™. (That is, the application won’t do lots of s/w
rendering directly into the display surface or other stuff that
causes major trouble.)

It’s more problematic when a user forces a “normal” SDL application to
use glSDL through the environment variable. We can’t do much more
than try to guess what will work best for the majority of
applications. We may as well ignore applications that do tons of
direct display surface access, because we can’t do much about that
anyway. The user will see that performance sucks and/or that
rendering is incorrect, and give up trying to make the unwilling
application use glSDL.

Or we expose the backend selection through an API and let
applications decide which backend they like best among the ones
available. That might be a good idea for 1.3.

Maybe… I’m not sure if that can be done without substantial changes
in the rendering API (semantics, mostly), but it should be doable one
way or another. It’s definitely possible to do all the typical s/w
rendering tricks and stuff with OpenGL, but you can’t do it the same
way as with 2D APIs, and the current API doesn’t give the backend
enough information to sort it out.

//David Olofson - Programmer, Composer, Open Source Advocate

.- Audiality -----------------------------------------------.
| Free/Open Source audio engine for games and multimedia. |
| MIDI, modular synthesis, real time effects, scripting,… |
`-----------------------------------> http://audiality.org -’
http://olofson.net | http://www.reologica.se

On Saturday 06 March 2004 15.24, Stephane Marchesin wrote:

On Saturday 06 March 2004 13.19, Stephane Marchesin wrote:

Hi,

I am trying to put glSDL into my game. It seems to almost work. I
use SDL_GLSDL as my only flag when I want OpenGL on startup (a
command line argument activates this). I get a window that shows
me what I want to see with flicker.
You must use SDL_DOUBLEBUF to get double buffering - even on Linux,
in some cases. (Yes, some Linux OpenGL drivers really do complex
clipping and rendering directly into the window if you tell them to.
I think most drivers do this on other platforms.) You should use
double buffering anyway, since the “fake” double buffering you (may)
get otherwise may not provide retrace sync’ed flips.

So, with OpenGL or glSDL, without SDL_DOUBLEBUF, you’re likely to get
either tearing or horrible flickering. Use SDL_DOUBLEBUF at all
times, unless you really know what you’re doing.

The problem is, when I pass SDL_DOUBLEBUF the display is scrambled and garbled. I can’t see anything the way it’s supposed to be unless SDL_GLSDL is the only flag.

The frames per second is
really bad too.
Are you sure you get the right OpenGL lib? Do you have accelerated
OpenGL at all?

Or translated; Are SDL applications using OpenGL natively accelerated?
Do any OpenGL applications get acceleration?

Yes, other OpenGL apps get acceleration. I should also mention I’ve tried this on two separate machines with different video cards, with the same or similar results.

Here is how
I blit:

void DrawIMG(SDL_Surface *img, int x, int y)
{
    // SDL_Surface *screen = SDL_GetVideoSurface();
    SDL_Rect dest;
    dest.x = x;
    dest.y = y;
    SDL_BlitSurface(img, NULL, screen, &dest);
}

SDL_Surface *bg = LoadIMG("gfx/startupbg.png");
int done = 0;
while (!done)
{
    SDL_FillRect(screen, 0, 0);
    DrawIMG(bg, 0, 0);
    SDL_Flip(screen);
}
That’s fine (with the glSDL wrapper, mostly because it’s not 100% SDL
2D API compliant), but you really should SDL_DisplayFormat() the
image after loading it.

The LoadIMG function does put things through SDL_DisplayFormatAlpha() (everything I load is a PNG)

The glSDL wrapper will cheat and get full acceleration either way
(but it’ll fail if you start modifying the image as a procedural
texture), but the backend version will have to convert and upload
the surface for every single blit ==> dog slow rendering, especially
if your OpenGL subsystem doesn’t use DMA for texture uploading.

I also draw lines,
How? (I’m afraid I have bad news…)

SDL_gfx lines. I could do them myself but I was already using their framerate limiter (not in the above example, so that’s not the fps problem)

boxes,
I’d strongly recommend using SDL_FillRect() for filled boxes, vertical
and horizontal lines and hollow boxes… (It translates to rather
efficient OpenGL rendering operations, so short of avoiding some
function call overhead, you can’t do it much faster than that even
with native OpenGL code.)

I’ll look into that.

and SDL_TTF fonts.
Problem: These will have to be handled as procedural surfaces. That
is, render the text into a surface, SDL_DisplayFormat() that, and
blit it to the screen. Don’t update the text surfaces more often than
you have to, because the uploading can be very expensive on some
systems, even if the rest of the OpenGL subsystem is very fast.

I’ll test it without any text to see if that fixes anything

My main problem is that I get what I’m supposed to see, then I get whatever is behind the window (the desktop), then the next frame, and so on. With SDL_DOUBLEBUF, I get garbage, but the window doesn’t show what’s behind it. So I need to know what flags to pass and how exactly to make it work. My OpenGL libs are VC6 libs, and I use VC6 to compile. The source to said game can be found via CVS here: http://fftrader.sf.net/

Boxes and lines I am sure can be drawn like you said, and I will definitely do that if I have to. Please inform me of how to use glSDL properly with this game! :)

-TomT64

On Wednesday 03 March 2004 10.16, TomT64 wrote:

[ yes, this old thread is coming back :) ]

Sam Lantinga wrote:

      Note:
         Whatever flags SDL_SetVideoMode could satisfy are set in the
         flags member of the returned surface.

So that returning an SDL_HWSURFACE can be seen as a way of “satisfying”
an SDL_SWSURFACE :)

No, SDL_HWSURFACE can fall back to SDL_SWSURFACE, but not the other way
around.

How should we handle SDL_SWSURFACE ? Are we allowed to fail setvideomode
? Should we give dog slow rendering (that’s all glSDL can do with a
SDL_SWSURFACE video surface, as we have to upload the whole surface to
video every frame) ?

Stephane

We could just emulate an OpenGL framebuffer, right…? (Can actually
be faster than the alternatives for pure s/w rendering, though the
real win is when you also use OpenGL for scaling, as many emulators
do.)
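
In outline, such an emulated framebuffer is just "upload the s/w screen as
a texture and draw one quad per frame" (a sketch under a few assumptions:
an orthographic projection mapping (0,0)-(w,h) to the window, an RGBA
shadow surface, and an already created power-of-two texture tex of size
tex_w x tex_h):

#include <GL/gl.h>
#include "SDL.h"

/* Sketch: present a software shadow screen through OpenGL. */
void present_shadow(SDL_Surface *shadow, GLuint tex,
                    int w, int h, int tex_w, int tex_h)
{
    float u = (float)w / tex_w;
    float v = (float)h / tex_h;

    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                    GL_RGBA, GL_UNSIGNED_BYTE, shadow->pixels);

    glEnable(GL_TEXTURE_2D);
    glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); glVertex2i(0, 0);
    glTexCoord2f(u,    0.0f); glVertex2i(w, 0);
    glTexCoord2f(u,    v);    glVertex2i(w, h);
    glTexCoord2f(0.0f, v);    glVertex2i(0, h);
    glEnd();

    SDL_GL_SwapBuffers();
}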

//David Olofson - Programmer, Composer, Open Source Advocate

.- Audiality -----------------------------------------------.
| Free/Open Source audio engine for games and multimedia. |
| MIDI, modular synthesis, real time effects, scripting,… |
`-----------------------------------> http://audiality.org -’
http://olofson.net | http://www.reologica.se

On Sunday 11 April 2004 13.14, Stephane Marchesin wrote:

[ yes, this old thread is coming back :) ]

Sam Lantinga wrote:

      Note:
         Whatever flags SDL_SetVideoMode could satisfy are set in the
         flags member of the returned surface.

So that returning an SDL_HWSURFACE can be seen as a way of
“satisfying” an SDL_SWSURFACE :)

No, SDL_HWSURFACE can fall back to SDL_SWSURFACE, but not the
other way around.

How should we handle SDL_SWSURFACE ? Are we allowed to fail
setvideomode ? Should we give dog slow rendering (that’s all glSDL
can do with a SDL_SWSURFACE video surface, as we have to upload the
whole surface to video every frame) ?

Hi!

I saw that SDL has been ported to Symbian too; the bad news is that it can
only be compiled as a static library. Does that mean it cannot be used in
commercial applications?

Thanks in advance,
Szasz Pal

David Olofson wrote:

How should we handle SDL_SWSURFACE ? Are we allowed to fail
setvideomode ? Should we give dog slow rendering (that’s all glSDL
can do with a SDL_SWSURFACE video surface, as we have to upload the
whole surface to video every frame) ?

We could just emulate an OpenGL framebuffer, right…?

Of course, that’s what I’m thinking about. But just like every
programmer, I’m lazy, so I prefer asking before doing some work :)
What I had in mind is that maybe there is some trick to fall back to
another driver, or something like this…

(Can actually
be faster than the alternatives for pure s/w rendering, though the
real win is when you also use OpenGL for scaling, as many emulators
do.)

After writing the glscale backend, I did some benchmarks about that, and
on a 433 MHz Celeron with a TNT2/AGP, the maximum screen size at which
the game stays playable is 640x480. Anything above that is too slow (in
my view, too slow is under 15 fps). Of course, on more recent machines
that’s less of a problem…

However, I’m not sure how uploading the whole OpenGL frame can be faster
than pure s/w rendering.

Stephane

On Sunday 11 April 2004 13.14, Stephane Marchesin wrote:

David Olofson wrote:

How should we handle SDL_SWSURFACE ? Are we allowed to fail
setvideomode ? Should we give dog slow rendering (that’s all
glSDL can do with a SDL_SWSURFACE video surface, as we have to
upload the whole surface to video every frame) ?

We could just emulate an OpenGL framebuffer, right…?

Of course, that’s what I’m thinking about. But just like every
programmer, I’m lazy, so I prefer asking before doing some work :)

Nice habit! hehe

What I had in mind is that, maybe there is some trick to fallback
to another driver, or something like this…

That would be your glscale backend, I guess.

(Can actually
be faster than the alternatives for pure s/w rendering, though the
real win is when you also use OpenGL for scaling, as many
emulators do.)

After writing the glscale backend, I did some benchmarks about
that, and on a 433 MHz Celeron with a TNT2/AGP, the maximum screen
size at which the game stays playable is 640x480. Anything above
that is too slow (in my view, too slow is under 15 fps). Of course,
on more recent machines that’s less of a problem…

However, I’m not sure how uploading the whole OpenGL frame can be
faster than pure s/w rendering.

I can think of two cases:

1. An SMP machine and an OpenGL driver that can do
   async. texture uploading in a separate thread.

2. OpenGL drivers that implement DMA transfers, so
   that texture uploading is asynchronous and/or
   faster than CPU driven uploading.

Both cases imply that direct s/w rendering would still update most or
all of the screen every frame, as is the case in many scrolling games
- and probably quite a few games that really should use some form of
“smart updating”. (Study my Fixed Rate Pig example and you’ll realize
why people don’t do it unless they really have to…)

Unfortunately, I’m afraid both cases are rather unusual on the
platforms that would need them the most, such as Linux. :-/

//David Olofson - Programmer, Composer, Open Source Advocate

.- Audiality -----------------------------------------------.
| Free/Open Source audio engine for games and multimedia. |
| MIDI, modular synthesis, real time effects, scripting,… |
`-----------------------------------> http://audiality.org -’
http://olofson.net | http://www.reologica.se

On Tuesday 13 April 2004 02.45, Stephane Marchesin wrote:

On Sunday 11 April 2004 13.14, Stephane Marchesin wrote: