glSDL backend

No problem with testgl.
I am testing glSDL, but it seems that I’m not helping much. What am I
supposed to do?

Stephane Marchesin wrote:

Lucas Clemente Vella wrote:

Stephane Marchesin wrote:

Well, considering that glSDL doesn’t change the x11 backend, there is
probably another problem.
If you compile it but don’t set the environment variable, does it work ?

I give up on blobwars. Even with unpatched SDL, I need to restart my X11
once to get it working (I have no idea why), but after restarting X11,
it works with unpatched SDL and with glSDL (without sound in both; I
believe it is a bug in my version of blobwars with SDL-1.2.8).

Well, sometimes using glSDL makes bugs appear in other software.
Usually these are programs that assume specific pixel formats or such.
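
A minimal sketch of the portable idiom such programs should use instead, querying the surface format rather than assuming it (the fill_red helper name is made up for this example):

#include "SDL.h"

/* Ask the surface for its pixel format instead of assuming e.g.
 * 16 bpp RGB565 - glSDL may report a different format than the
 * plain x11 backend does. */
void fill_red(SDL_Surface *screen)
{
	Uint32 red = SDL_MapRGB(screen->format, 255, 0, 0);
	SDL_FillRect(screen, NULL, red);
	SDL_UpdateRect(screen, 0, 0, 0, 0);
}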

Now, nq-sdl (software renderer) from QuakeForge worked with no
problem without setting SDL_VIDEODRIVER, and with it set to glSDL, dga and
x11.
With nq-sgl (OpenGL), without setting SDL_VIDEODRIVER, I got almost
the same thing that I got with Neverwinter Nights:

NWN:
X Error of failed request: BadValue (integer parameter out of range
for operation)
Major opcode of failed request: 89 (X_StoreColors)
Value in failed request: 0xffffffff
Serial number of failed request: 28
Current serial number in output stream: 29

nq-sgl:
X Error of failed request: BadValue (integer parameter out of range
for operation)
Major opcode of failed request: 89 (X_StoreColors)
Value in failed request: 0xffffffff
Serial number of failed request: 29
Current serial number in output stream: 30

Please try with programs that are known to handle the failure
gracefully, for example testgl from the SDL source distribution. As
I said, not all programs do that :)

Here is what I get :
$ export SDL_VIDEODRIVER=glSDL
$ ./testgl
glSDL videoinit
Couldn’t set GL mode:

Stephane



Lucas Clemente Vella wrote:

No problem with testgl.
I am testing glSDL, but it seems that I’m not helping much. What am I
supposed to do?

Well, try some SDL software (preferably software that has source code
available) and find problems.
Then, track each problem down. This can be the tough part, since you can find
bugs in SDL, glSDL or the software (and that’s where having the source
is really useful).
If you don’t want to track the bugs down, you can just report software that
doesn’t work.

Stephane

Murlock wrote:

Stephane Marchesin wrote:

Video driver should be “glSDL”, not “windib”.

Stephane

Arff… ;)

It’s OK, slower than my other filters (eagle, sai2x, …) but my game
uses a lot of small SDL_UpdateRect calls, so I suppose the time to upload
those to a new OpenGL texture takes most of the time…

SDL_UpdateRect implies a glFinish() call when in single buffer mode, so
use as few calls as possible.
That said, your performance problem is probably something else. I think
most potential problems are described there :
http://icps.u-strasbg.fr/~marchesin/sdl/glsdl.html
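
A minimal sketch of the “as few calls as possible” advice, batching dirty rectangles with the stock SDL 1.2 API (the mark_dirty/flush_dirty helpers and the MAX_DIRTY limit are made up for this example):

#include "SDL.h"

#define MAX_DIRTY 256

static SDL_Rect dirty[MAX_DIRTY];
static int ndirty = 0;

/* Record a dirty area instead of calling SDL_UpdateRect right away. */
void mark_dirty(Sint16 x, Sint16 y, Uint16 w, Uint16 h)
{
	if (ndirty < MAX_DIRTY) {
		dirty[ndirty].x = x;
		dirty[ndirty].y = y;
		dirty[ndirty].w = w;
		dirty[ndirty].h = h;
		ndirty++;
	}
}

/* Push all dirty areas once per frame: one SDL_UpdateRects call
 * instead of many SDL_UpdateRect calls. */
void flush_dirty(SDL_Surface *screen)
{
	if (ndirty > 0) {
		SDL_UpdateRects(screen, ndirty, dirty);
		ndirty = 0;
	}
}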

Otherwise, is it possible to use OpenGL’s capabilities to change the
size of the viewport without changing the logical size the software uses ?

Yes, I’ve implemented that as another backend :
http://icps.u-strasbg.fr/~marchesin/sdl/sdl_glscale.patch
Don’t forget to set SDL_VIDEODRIVER=glscale ;)
and if you want to specify the size, set SDL_VIDEODRIVER_GLSCALE_X and
SDL_VIDEODRIVER_GLSCALE_Y to the size you want. It scales by a factor of
2 by default.
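
If you prefer selecting the backend from code rather than from the shell, a minimal sketch using POSIX setenv (the variable names come from the post above; the 1280x960 values are just an example):

#include <stdlib.h>
#include "SDL.h"

int main(int argc, char *argv[])
{
	/* Must be set before SDL_Init so the video driver sees them. */
	setenv("SDL_VIDEODRIVER", "glscale", 1);
	setenv("SDL_VIDEODRIVER_GLSCALE_X", "1280", 1);
	setenv("SDL_VIDEODRIVER_GLSCALE_Y", "960", 1);

	if (SDL_Init(SDL_INIT_VIDEO) < 0)
		return 1;
	/* ... set the video mode and run the game as usual ... */
	SDL_Quit();
	return 0;
}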

Stephane

Hello everybody !

I just tried the patch, and it works quite well when used correctly. I’ve
got only one question. I’m sure that I could answer it myself if I knew the
internals of OpenGL better. Maybe this question has already been asked and
answered; in that case I’m sorry and I’ll look for my glasses :)

Until now, I blit between a hardware surface and the “active frame buffer”
(set via SDL_SetVideoMode) very fast thanx to the glSDL backend, so I wonder
if it is possible to benefit from this acceleration with two hardware
surfaces created via SDL_CreateRGBSurface, blitting between them. I’ve made
some tests, and it doesn’t seem to work. Is this the normal behaviour ? Or
am I missing something ?

Thanx !

[…]

Until now, I blit between a hardware surface and the “active frame
buffer” (set via SDL_SetVideoMode) very fast thanx to the glSDL
backend, so I wonder if it is possible to benefit from this
acceleration with two hardware surfaces created via
SDL_CreateRGBSurface, blitting between them. I’ve made some tests,
and it doesn’t seem to work. Is this the normal behaviour ? Or am I
missing something ?

This is a limitation of OpenGL. You can’t perform accelerated
operations between textures - at least not without relying on
extensions that are available only on some platform/driver/hardware
combinations.

We’ll try to make the best use of whatever is available in future
versions, but we’ll have to make it work correctly and reliably
everywhere, before we move on to optimizations, and driver/hardware
specific optimizations in particular.

Blitting to the screen is what glSDL focuses on at this point, since
it’s the only part of real time rendering that you can’t avoid no
matter what. Off-screen rendering - if you have any at all - can
usually be done in advance, when installing the game, or when loading
levels.

Even games that use vector objects, pixel effects, complex compound
sprites and the like can sometimes make good use of glSDL, since
rendering into glSDL “h/w” surfaces (actually s/w shadow surfaces
since OpenGL cannot give you access to texture RAM) is very fast, and
uploading textures to the video card and then rendering with OpenGL
isn’t much slower than rendering directly into VRAM. Actually, if the
OpenGL driver uses DMA for uploading, it’s about the fastest path
there is from the CPU to VRAM - and either way, alpha blending with
glSDL comes at very little, if any, extra cost.
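
A minimal sketch of the “render in advance” idea with the stock SDL 1.2 API (the build_compound_sprite name and the rendering step are made up for this example):

#include "SDL.h"

/* Render an expensive compound sprite once, at load time. Under glSDL
 * the CPU-side rendering is cheap (it happens in a s/w shadow surface);
 * the texture upload happens on the first blit to the screen, and
 * subsequent blits are very fast. */
SDL_Surface *build_compound_sprite(int w, int h)
{
	SDL_Surface *tmp, *sprite;

	tmp = SDL_CreateRGBSurface(SDL_SWSURFACE, w, h, 32,
			0x00ff0000, 0x0000ff00, 0x000000ff, 0xff000000);
	if (!tmp)
		return NULL;

	/* ... CPU-side vector/pixel effects rendering into tmp ... */

	/* Convert to the display format so per-frame blits are fast. */
	sprite = SDL_DisplayFormatAlpha(tmp);
	SDL_FreeSurface(tmp);
	return sprite;
}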

//David Olofson - Programmer, Composer, Open Source Advocate

.- Audiality -----------------------------------------------.
| Free/Open Source audio engine for games and multimedia. |
| MIDI, modular synthesis, real time effects, scripting,… |
`-----------------------------------> http://audiality.org -’
http://olofson.net | http://www.reologica.se

On Friday 07 January 2005 08.48, Evadream wrote:

Wow.

Thanx for this really great answer !

glSDL is a wonderful project. Thank you very much for all the work you’ve
done with Stephane Marchesin.

Ok, I finally took the time to test this.

The patch applied cleanly and the two games I tried worked (but very
slowly at first – something like 3fps).

I guess this is normal (and due to my system) but every time I run a
program with the glSDL backend, I get the following message:
Xlib: extension “XFree86-DRI” missing on display “:0.0”.
(just below the “glSDL videoinit” message)

Concerning my own game, it worked slowly too until I replaced all the
SDL_SWSURFACE with SDL_HWSURFACE. After I had done that, it worked at
a pretty decent speed.

I found one problem though: It seems like, on surfaces with a
colorkey, calling SDL_SetAlpha without the SDL_SRCALPHA flag has no
effect (instead of setting the surface as opaque).

I use the following code:

/* Preserve the RLE acceleration hint, drop all other flags. */
int flags = Surface->flags & SDL_RLEACCEL;

if (alpha<255)			// (*) request blending only when translucent
	flags |= SDL_SRCALPHA;

/* SDL_SetAlpha returns 0 on success, -1 on error. */
if (SDL_SetAlpha(Surface, flags, alpha)) {
	fprintf(stderr, "SetAlpha failed: %s\n", SDL_GetError());
	exit(1);
}

When I use an alpha of 255 with the above code, the sprite isn’t
opaque as it should be.

If I comment the (*) line or if I don’t use colorkey with my sprites,
the problem disappears. As far as my tests go, the problem seems
unrelated to the RLE encoding (happens whether the surfaces are
encoded or not).

I hope this helps…

And now I’ve got some questions about hardware surfaces. I never bothered
to use them since I read somewhere that they can be slower than software
surfaces in some cases. Since I’d like to get the best of both worlds, is
it okay to just test whether we got a hardware surface for the screen and,
if so, use hardware surfaces everywhere, and otherwise use software
surfaces everywhere? Or are there other pitfalls with hardware
surfaces?

For info, most of my surfaces have “static” contents.

-Gaetan.

And now I’ve got some questions about hardware surfaces. I never bothered
to use them since I read somewhere that they can be slower than software
surfaces in some cases. Since I’d like to get the best of both worlds, is
it okay to just test whether we got a hardware surface for the screen and,
if so, use hardware surfaces everywhere, and otherwise use software
surfaces everywhere? Or are there other pitfalls with hardware
surfaces?

Using hardware surfaces will probably be much slower when you do a lot
of surface <-> surface blitting, which AFAIR in OpenGL (in our case:
glSDL) must* be implemented using glCopyPixels(). That is a really slow
operation when you’re blitting SWSURFACEs to a HWSURFACE, since the CPU
gets involved (?) in sending data from system memory to the video card,
AGP is not that fast, and so on. So in this case it’s much better to
blit SWSURFACE <-> SWSURFACE, and make all surfaces that won’t be
blitted onto anything other than the framebuffer HWSURFACEs, so that
they only need to be sent to the video card once. (Blitting HWSURFACE
<-> HWSURFACE would probably be much faster, because the CPU is not
involved in that task and no data is sent through AGP; only the video
card gets a workout - and in normal 2D games it’s not working very
hard :-))

Hmmm, I assumed that SWSURFACEs live in system memory, and hardware ones
in the video card’s. If that’s not true with glSDL (as I’m now
suspecting), then forget what I’ve written :-]

* but there are some extensions, pbuffers etc., which can help.

Koshmaar

[…]

Hmmm, I assumed that SWSURFACE’s exist in system memory, and
hardware ones, in video card’s. If it’s not true with glSDL (as I’m
thinking now) then forgot what I’ve written :-]

glSDL is a bit odd in that regard. All surfaces (except for the
display surface) are basically s/w surfaces. A so called h/w surface
in glSDL is effectively a s/w surface with one or more textures bound
to it. When you lock/modify/unlock such a surface, glSDL invalidates
the texture(s), so that they’re re-uploaded before they’re used
again. Blitting from one surface to another is just a s/w ==> s/w
blit, followed by an invalidation of the target surface’s textures.

That is (a short sketch follows this list):

  • surface ==> surface blits are fast (all in system RAM),
    however…

  • …surface ==> surface blits cannot be accelerated by
    glSDL, so blending and stuff still relies on the CPU.

  • surface ==> display blits are extremely fast, except
    possibly the first blit after a texture invalidation.

  • display ==> surface and display ==> display blits can
    be pretty slow, since they usually involve CPU driven
    transfers to/from VRAM. (Driver dependent.)

  • Modifying the display surface directly is extremely
    slow, since glSDL will have to read all pixels from
    VRAM when you lock, and write them all back when you
    unlock. It doesn’t help that there is no API for
    locking only part of a surface. (Sam threw in an
    extension for that in some version, but it was backed
    out again for several reasons. I believe one of them
    is that this feature is completely irrelevant to most
    backends, and is therefore hard to use correctly. Also,
    it’s safer, more accurate and more efficient to do it
    on the application side.)

  • Unlike the DDraw backend, glSDL does not lose surfaces.
    It may lose textures, but you’ll never know, as they’re
    re-uploaded as soon as you blit to the screen again.
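
Putting the points above together, a minimal sketch of the pattern that plays to glSDL’s strengths: prepare a “h/w” surface once, blit surface ==> display every frame, never lock the display surface. The helper names are made up for this example:

#include "SDL.h"

/* Prepare once: lock/modify/unlock invalidates the bound texture(s),
 * so do this at load time, not every frame. */
void render_into_sprite(SDL_Surface *sprite)
{
	if (SDL_MUSTLOCK(sprite))
		SDL_LockSurface(sprite);
	/* ... CPU rendering into sprite->pixels ... */
	if (SDL_MUSTLOCK(sprite))
		SDL_UnlockSurface(sprite);
	/* glSDL re-uploads the texture on the next blit to the screen. */
}

/* Per frame: surface ==> display is the fast path. */
void draw_sprite(SDL_Surface *screen, SDL_Surface *sprite, int x, int y)
{
	SDL_Rect dst;
	dst.x = (Sint16)x;
	dst.y = (Sint16)y;
	SDL_BlitSurface(sprite, NULL, screen, &dst);
}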

//David Olofson - Programmer, Composer, Open Source Advocate

.- Audiality -----------------------------------------------.
| Free/Open Source audio engine for games and multimedia. |
| MIDI, modular synthesis, real time effects, scripting,… |
`-----------------------------------> http://audiality.org -’
http://olofson.net | http://www.reologica.se

On Friday 07 January 2005 18.20, Koshmaar wrote:

Gaetan de Menten wrote:

Ok, I finally took the time to test this.

The patch applied cleanly and the two games I tried worked (but very
slowly at first – something like 3fps).

I guess this is normal (and due to my system) but every time I run a
program with the glSDL backend, I get the following message:
Xlib: extension “XFree86-DRI” missing on display “:0.0”.
(just below the “glSDL videoinit” message)

You don’t seem to have 3d acceleration correctly set up. You should load
the dri module in your X configuration file.

Concerning my own game, it worked slowly too until I replaced all the
SDL_SWSURFACE with SDL_HWSURFACE. After I had done that, it worked at
a pretty decent speed.

Well, if you configure accelerated OpenGL on your machine, you might get
a bigger speedup.
For now, it is probably falling back to software rendering.

I found one problem though: It seems like, on surfaces with a
colorkey, calling SDL_SetAlpha without the SDL_SRCALPHA flag has no
effect (instead of setting the surface as opaque).

I use the following code:

int flags = Surface->flags & SDL_RLEACCEL;

if (alpha<255)			// (*)
	flags |= SDL_SRCALPHA;

if (SDL_SetAlpha(Surface, flags, alpha)) {
	fprintf(stderr, "SetAlpha failed: %s\n", SDL_GetError());
	exit(1);
}

When I use an alpha of 255 with the above code, the sprite isn’t
opaque as it should be.

If I comment the (*) line or if I don’t use colorkey with my sprites,
the problem disappears. As far as my tests go, the problem seems
unrelated to the RLE encoding (happens whether the surfaces are
encoded or not).

The semantics when mixing colorkey and alpha are explained in detail
in the SDL_SetAlpha manpage.
It depends on the format of the source and destination surfaces, not
only on the flags that are set.
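
For reference, a minimal sketch of the combination under discussion, assuming sprite is an RGB surface without an alpha channel (the magenta colorkey is just an example value):

/* A surface with both a colorkey and per-surface alpha: keyed pixels
 * are skipped, the rest are blended with the per-surface alpha. */
SDL_SetColorKey(sprite, SDL_SRCCOLORKEY | SDL_RLEACCEL,
		SDL_MapRGB(sprite->format, 255, 0, 255));
SDL_SetAlpha(sprite, SDL_SRCALPHA | SDL_RLEACCEL, 128);

/* Clearing SDL_SRCALPHA with alpha 255 is expected to make the
 * non-keyed pixels fully opaque again - the behaviour reported
 * as broken under glSDL in this thread. */
SDL_SetAlpha(sprite, 0, 255);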

I hope this helps…

And now I got some questions about hardware surfaces. I never bothered
to use them since I read somewhere they can be slower than software
surfaces in some cases. Since I’d like get the best of both worlds, is
it okay to just test if we got an hardware surface for the screen, and
if it’s the case use hardware surfaces everywhere, and if it’s not use
software surface everywhere? Or are there other pitfalls with hardware
surfaces?

I think we have most of the pitfalls described there :
http://icps.u-strasbg.fr/~marchesin/sdl/glsdl.html
And yes, most of the advice there also applies to non-glSDL situations.

Stephane

Gaetan de Menten wrote:

I guess this is normal (and due to my system) but every time I run a
program with the glSDL backend, I get the following message:
Xlib: extension “XFree86-DRI” missing on display “:0.0”.
(just below the “glSDL videoinit” message)

You don’t seem to have 3d acceleration correctly set up. You should load
the dri module in your X configuration file.

I got the Nvidia closed source drivers and they explicitly say not to
load the dri module, but I guess this is pretty much off topic…

I found one problem though: It seems like, on surfaces with a
colorkey, calling SDL_SetAlpha without the SDL_SRCALPHA flag has no
effect (instead of setting the surface as opaque).

I use the following code:

  int flags = Surface->flags & SDL_RLEACCEL;

  if (alpha<255)                  // (*)
          flags |= SDL_SRCALPHA;

  if (SDL_SetAlpha(Surface, flags, alpha)) {
          fprintf(stderr, "SetAlpha failed: %s\n", SDL_GetError());
          exit(1);
  }

When I use an alpha of 255 with the above code, the sprite isn’t
opaque as it should be.

If I comment the (*) line or if I don’t use colorkey with my sprites,
the problem disappears. As far as my tests go, the problem seems
unrelated to the RLE encoding (happens whether the surfaces are
encoded or not).

The semantics when mixing colorkey and alpha are explained in detail
in the SDL_SetAlpha manpage.
It depends on the format of the source and destination surfaces, not
only on the flags that are set.

I think you misunderstood what I said… I was probably unclear, but I
really think there is a problem… The semantics are what I thought
they were, and even if I got the semantics wrong (I don’t think I did),
you’ll have to explain to me how a call to “SDL_SetAlpha(Surface, 0,
255)” can result in an alpha blended sprite…

Again, it looks like the call is totally ignored (the previous alpha
value is used) when the SDL_SRCALPHA flag is not present, instead of,
in this case (alpha = 255), setting the surface up as opaque.

Btw: this problem is, of course, only present with the glSDL backend
and not in any other backend I tried (otherwise I wouldn’t have
reported it as an answer to your mail).

[…]
Or are there other pitfalls with hardware surfaces?

I think we have most of the pitfalls described there :
http://icps.u-strasbg.fr/~marchesin/sdl/glsdl.html
And yes, most of the advice there also applies to non-glSDL situations.

I had already read that page since you mentioned it in your first
mail… (it’s a pretty nice page, btw). But I think some pitfalls of
hardware surfaces are not mentioned. Isn’t there a problem with lost
surfaces on the DirectDraw backend (or something like that)?

Regards,
-Gaetan.

On Sun, 09 Jan 2005 23:26:29 +0100, Stephane Marchesin <stephane.marchesin at wanadoo.fr> wrote:

Gaetan de Menten wrote:

You don’t seem to have 3d acceleration correctly set up. You should load
the dri module in your X configuration file.

I got the Nvidia closed source drivers and they explicitly say not to
load the dri module, but I guess this is pretty much off topic…

Hmm, I assumed you were using the DRI drivers. Anyway, as you said, it’s off topic.

The semantics when mixing colorkey and alpha is explained in large
detail in the SDL_SetAlpha manpage.
It depends on the format of the source and destination surfaces, not
only on the flags that are set.

I think you misunderstood what I said… I was probably unclear, but I
really think there is a problem… The semantics are what I thought
they were, and even if I got the semantics wrong (I don’t think I did),
you’ll have to explain to me how a call to “SDL_SetAlpha(Surface, 0,
255)” can result in an alpha blended sprite…

Again, it looks like the call is totally ignored (the previous alpha
value is used) when the SDL_SRCALPHA flag is not present, instead of,
in this case (alpha = 255), setting the surface up as opaque.

Btw: this problem is, of course, only present with the glSDL backend
and not in any other backend I tried (otherwise I wouldn’t have
reported it as an answer to your mail).

Ok, I’d like to have the full source code reproducing the problem
(including the surface creation code; that’s important).
It would be nice if it could compile out of the box, because I don’t
have too much free time currently :)

I think we have most of the pitfalls described there :
http://icps.u-strasbg.fr/~marchesin/sdl/glsdl.html
And yes, most of the advice there also applies to non-glSDL situations.

I had already read that page since you mentioned it in your first
mail… (it’s a pretty nice page, btw). But I think some pitfalls of
hardware surfaces are not mentioned. Isn’t there a problem with lost
surfaces on the DirectDraw backend (or something like that)?

Well, DirectDraw surely causes problems ;) but :

  • if the surface is lost, it can very well be recreated exactly as it
    was before (in particular, it can be recreated as a hardware surface) by
    reloading the artwork. This is clearly stated in the directdraw docs.
  • losing surfaces only happens during long program inactivity or when
    you alt-tab a fullscreen app and such…

So in short, that’s not a performance problem (we’re talking about
performance, aren’t we ?).
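
At the SDL 1.2 level, surface loss shows up as SDL_BlitSurface returning -2; the idiom documented in the SDL_BlitSurface man page for recovering is, in sketch form:

/* Keep retrying the blit, reloading the artwork whenever the
 * video memory backing the surface was lost (blit returns -2). */
while (SDL_BlitSurface(image, NULL, screen, &dstrect) == -2) {
	while (SDL_LockSurface(image) < 0)
		SDL_Delay(10);
	/* ... re-render or reload the artwork into image->pixels ... */
	SDL_UnlockSurface(image);
}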

Stephane

Ok, I’d like to have the full source code reproducing the problem
(including the surface creation code; that’s important).
It would be nice if it could compile out of the box, because I don’t
have too much free time currently :)

Attached is a modified (simplified) testalpha.c (from the test
directory in the SDL distribution) which exhibits the problem. While
doing this, I noticed two more things:

  1. the problem is only present if both surfaces are hardware surfaces.
  2. the call to SDL_SetAlpha(sprite, 0, 255) is not simply ignored, as
    I thought (and said previously). In the attached example, it seems
    like an alpha value of approximately 128 is used, instead of the 255
    I’m expecting.

Note that you’ll have to run the program with "./testalpha -hw",
otherwise the problem doesn’t show up (see remark 1 above).

Btw: Sorry for the slow answer, but I’ve been quite busy too these last few days.

I think we have most of the pitfalls described there :
http://icps.u-strasbg.fr/~marchesin/sdl/glsdl.html
And yes, most of the advice there also applies to non-glSDL situations.

I had already read that page since you mentioned it in your first
mail… (it’s a pretty nice page, btw). But I think some pitfalls of
hardware surfaces are not mentioned. Isn’t there a problem with lost
surfaces on the DirectDraw backend (or something like that)?

Well, DirectDraw surely causes problems ;) but :

  • if the surface is lost, it can very well be recreated exactly as it
    was before (in particular, it can be recreated as a hardware surface) by
    reloading the artwork. This is clearly stated in the directdraw docs.
  • losing surfaces only happens during long program inactivity or when
    you alt-tab a fullscreen app and such…

So in short, that’s not a performance problem (we’re talking about
performance, aren’t we ?).

Hmmm, no… :) My question was meant to be general. In short, I’d like
to know what changes I should make to my code for it to be
hardware-surface friendly.

-Gaetan.
-------------- next part --------------
A non-text attachment was scrubbed…
Name: testalpha.c
Type: text/x-csrc
Size: 4872 bytes
Desc: not available
URL: http://lists.libsdl.org/pipermail/sdl-libsdl.org/attachments/20050115/22000105/attachment.c

On Tue, 11 Jan 2005 19:11:18 +0100, Stephane Marchesin <stephane.marchesin at wanadoo.fr> wrote:

Awesome. Great article. Great work, too. This glSDL
backend will move SDL into the forefront of graphics
technology.

Paul Lowe

On Thursday 23 December 2004 04:09 pm, Stephane Marchesin wrote:

Hi,

We (David Olofson and me) finally got the glSDL backend
into a state that works and has received enough testing
for a first beta. The resulting patch is there (not
posted to the list because of the mailing list size
filter) :
http://icps.u-strasbg.fr/~marchesin/sdl/glsdl-final.patch

This patch also includes :

  • the hardware accelerated alpha blits fix discussed some
    months ago
  • an internal change to the semantics of the SDL_OPENGL
    flag : backends now use SDL_INTERNALOPENGL to
    tell the difference between an OpenGL mode and a normal
    mode. This incurs no change to the applications, which
    still use and query the SDL_OPENGL flag. The SDL_OPENGL
    flag previously meant two things : “the window is handled
    by OpenGL” and “the application uses an OpenGL window”,
    and for a glSDL backend we needed to split this. Now it
    simply means “the application uses an OpenGL window” and
    SDL_INTERNALOPENGL means “the window is handled by
    OpenGL” (see the sketch below).
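
A minimal sketch of what the split means from the application side; application code is unaffected and keeps using the public flag (SDL_INTERNALOPENGL is internal to the patched SDL, not public API):

#include "SDL.h"

/* The application still requests and queries SDL_OPENGL as before.
 * Only backend code inside SDL checks SDL_INTERNALOPENGL ("the
 * window is handled by OpenGL"). */
SDL_Surface *open_gl_window(void)
{
	SDL_Surface *screen = SDL_SetVideoMode(640, 480, 0, SDL_OPENGL);
	if (screen && (screen->flags & SDL_OPENGL)) {
		/* the application got an OpenGL window, as requested */
	}
	return screen;
}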

Testing is welcome of course !

When running stock applications, those might be veeeery
slow. Application tuning for performance in general and
for glSDL in particular is described there :
http://icps.u-strasbg.fr/~marchesin/sdl/glsdl.html

Stephane



Does the current glSDL allow for hardware-accelerated blitting between
surfaces in video memory ? À la pbuffers or some clever use of
glTexSubImage2D or similar OpenGL functionality?

On Thursday 23 December 2004 04:09 pm, Stephane Marchesin wrote:

[…]

Donny Viszneki wrote:

Does the current glSDL allow for hardware-accelerated blitting
between surfaces in video memory ? À la pbuffers or some clever use of
glTexSubImage2D or similar OpenGL functionality?

no.

clemens