glSDL and Accelerated Alpha Blending

Hello!

Does anyone here know if it is possible to do hardware accelerated image
blending using glSDL? In particular, I am trying to overlay a set of
images w/ alpha channels onto a transparent surface. The general idea
is that I could do some off-screen compositing, and then blit the result
to the screen as if each individual image were directly blitted to the
screen. I’ve been able to do this in software, however things are
starting to get slow and I was hoping to offload this work onto the
user’s gfx card.

It’s my understanding that SDL does not directly support alpha blending,
whereby the destination image’s alpha channel is modified. What I was
hoping is that there was a way, perhaps using glSDL, to hack this
feature in.

Take care!

--
David Ludwig
davidl at funkitron.com
http://www.funkitron.com


Hello!

Does anyone here know if it is possible to do hardware accelerated
image blending using glSDL?

Same deal as with native OpenGL without extensions; you can blend to
the screen, but you can’t render into textures. glSDL uses SDL’s s/w
blitters + texture (re)uploading to emulate that.

In particular, I am trying to overlay a set of
images w/ alpha channels onto a transparent surface. The general
idea is that I could do some off-screen compositing, and then blit
the result to the screen as if each individual image were directly
blitted to the screen. I’ve been able to do this in software,
however things are starting to get slow and I was hoping to offload
this work onto the user’s gfx card.

If you can’t do it in software, you’re going to need hardware
acceleration anyway - and if you go down that route, why not just
keep it simple and just blit everything directly to the screen?

It’s my understanding that SDL does not directly support alpha
blending, whereby the destination image’s alpha channel is modified.

That’s correct, AFAIK. Alpha can be applied, copied or ignored, but
not combined.
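(For reference, the “combined” behavior under discussion is the standard Porter-Duff “over” operator applied to a destination that itself has an alpha channel. A minimal sketch in plain C, with no SDL involved, assuming non-premultiplied color in the 0..1 range; Px and blend_over are illustrative names, not SDL API:)

```c
/* Porter-Duff "over", non-premultiplied, one color channel + alpha in 0..1:
     out_a = sa + da*(1 - sa)
     out_c = (sc*sa + dc*da*(1 - sa)) / out_a   (when out_a > 0) */
typedef struct { float c; float a; } Px;

static Px blend_over(Px src, Px dst)
{
    Px out;
    out.a = src.a + dst.a * (1.0f - src.a);  /* destination alpha is combined */
    if (out.a > 0.0f)
        out.c = (src.c * src.a + dst.c * dst.a * (1.0f - src.a)) / out.a;
    else
        out.c = 0.0f;  /* result fully transparent; color is arbitrary */
    return out;
}
```

Note how the first line is exactly what SDL’s blitters never compute: dst.a feeds into out.a, instead of being ignored or overwritten.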

What I was hoping is that there was a way, perhaps using glSDL, to
hack this feature in.

Well, glSDL is supposed to accelerate the SDL API; nothing more,
nothing less - so if you can’t do it with the SDL blitters, you can’t
do it with glSDL. Besides, as I said, glSDL can’t even accelerate all
operations; only blits to the screen.

So, do you need this specific feature - or do you just need to blend
lots of surfaces to the screen? ;-)

//David Olofson - Programmer, Composer, Open Source Advocate

.- Audiality -----------------------------------------------.
| Free/Open Source audio engine for games and multimedia. |
| MIDI, modular synthesis, real time effects, scripting,… |
`-----------------------------------> http://audiality.org -’
http://olofson.net
http://www.reologica.se

On Wednesday 29 June 2005 00.20, David Ludwig wrote:

David Olofson wrote:

Does anyone here know if it is possible to do hardware accelerated
image blending using glSDL?

Same deal as with native OpenGL without extensions; you can blend to
the screen, but you can’t render into textures. glSDL uses SDL’s s/w
blitters + texture (re)uploading to emulate that.

Are these OpenGL extensions fairly common? I have a gut feeling that
they’re only available on more recent cards, but am not sure of that (yet.)

In particular, I am trying to overlay a set of
images w/ alpha channels onto a transparent surface. The general
idea is that I could do some off-screen compositing, and then blit
the result to the screen as if each individual image were directly
blitted to the screen. I’ve been able to do this in software,
however things are starting to get slow and I was hoping to offload
this work onto the user’s gfx card.

If you can’t do it in software, you’re going to need hardware
acceleration anyway - and if you go down that route, why not just
keep it simple and just blit everything directly to the screen?

In some of the games I’m working on, a good percentage of the images are
composed at runtime. For a long while, it’s been safe to assume that
these compositions would only be done at certain points, such as when a
level loads, however more and more cases are coming up where on-the-fly
image compositing would be handy. (Not altogether necessary, but
definitely a big “want”.)

Well, glSDL is supposed to accelerate the SDL API; nothing more,
nothing less - so if you can’t do it with the SDL blitters, you can’t
do it with glSDL. Besides, as I said, glSDL can’t even accelerate all
operations; only blits to the screen.

I suppose what I’m hoping to find is a way to hook into glSDL. To start
out with, glSDL would handle all blits, and it’d be assumed that only
direct-to-screen blits could be accelerated. Image compositions would
continue to be handled in software, and only done sporadically at key
points (to keep the system-to-video memory copies low, among other
reasons.) What I’m hoping is possible is that a custom “blitter” could
be written (eventually) that would let the video card accelerate these
compositions (if the card and driver allowed it.) glSDL would still
perform all SDL-compliant blits, and the custom “blitter” would sit
alongside it.

--
David Ludwig
davidl at funkitron.com
http://www.funkitron.com
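(One way the “custom blitter alongside glSDL” idea could be sketched is a dispatch table of blit function pointers, where an accelerated compositing path is registered at init time only if the card and driver allow it. This is plain C with every name — Blitter, blit_dispatch, and so on — made up for illustration; none of it is SDL or glSDL API, and the integer return values merely stand in for “which path ran”:)

```c
#include <stddef.h>

/* Hypothetical blit dispatch: the default software path always works,
   while an accelerated "compositing blitter" may be registered beside
   it. All names are invented; nothing here is SDL or glSDL API. */
typedef int (*Blitter)(void *src, void *dst);

static int sw_blit(void *src, void *dst)
{
    (void)src; (void)dst;
    return 1;  /* software / SDL-compliant path ran */
}

static int hw_composite(void *src, void *dst)
{
    (void)src; (void)dst;
    return 2;  /* accelerated compositing path ran */
}

/* Stays NULL unless init-time probing finds hardware support. */
static Blitter composite_blitter = NULL;

static int blit_dispatch(void *src, void *dst, int want_composite)
{
    if (want_composite && composite_blitter)
        return composite_blitter(src, dst);  /* custom blitter, if registered */
    return sw_blit(src, dst);                /* everything else: software */
}
```

The point of the sketch is that ordinary blits never notice the extra path: only a blit explicitly requesting composition, on a system where the probe succeeded, takes the accelerated branch.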

David Ludwig wrote:

David Olofson wrote:

Does anyone here know if it is possible to do hardware accelerated
image blending using glSDL?

Same deal as with native OpenGL without extensions; you can blend to
the screen, but you can’t render into textures. glSDL uses SDL’s s/w
blitters + texture (re)uploading to emulate that.

Are these OpenGL extensions fairly common? I have a gut feeling that
they’re only available on more recent cards, but am not sure of that
(yet.)

Here is the OpenGL Extension registry. About 350 extensions in it and
they have been around since (if memory serves) around the time of the
TNT cards. Most if not all cards in production today have some
extensions in them.

http://oss.sgi.com/projects/ogl-sample/registry/

HTH
Richard

[…]

What I’m hoping is possible is that a custom “blitter” could
be written (eventually) that would let the video card accelerate
these compositions (if the card and driver allowed it.) glSDL would
still perform all SDL-compliant blits, and the custom "blitter"
would sit alongside it.

If you’re going to write OpenGL + extension specific blitters and
stuff, wouldn’t you be better off placing the “cut” a bit higher, and
implement two mid-level backends for your game instead? One for plain
SDL (and glSDL; extra bonus), and one for native OpenGL.

Though this particular idea seems doable, it’s just one of many things
that “would be nice to have.” Indeed, it would probably be handy if
glSDL could do everything and be everything, so you can have all the
advantages of OpenGL, without losing the ability to run with other
backends if need be - but as things aren’t hooked up as trivially as
one might think, complexity might explode very quickly.

Have a look at some (usually older) games that provide both
accelerated and s/w rasterizers… Plenty of code, and still usually
rather specialized and game specific. (Why optimize the 5132434657
situations that never occur in the game? :-) That’s the easy way;
basically what I’m suggesting above; do only what you need, and
optimize as needed, for your game. A generic solution that delivers
any kind of performance with s/w rendering, without crippling the
accelerated backend(s) seems, well, unrealistic…

//David Olofson - Programmer, Composer, Open Source Advocate

.- Audiality -----------------------------------------------.
| Free/Open Source audio engine for games and multimedia. |
| MIDI, modular synthesis, real time effects, scripting,… |
`-----------------------------------> http://audiality.org -’
http://olofson.net
http://www.reologica.se

On Thursday 30 June 2005 21.59, David Ludwig wrote:

Richard J Hancock wrote:

David Ludwig wrote:

David Olofson wrote:

Does anyone here know if it is possible to do hardware accelerated
image blending using glSDL?

Same deal as with native OpenGL without extensions; you can blend to
the screen, but you can’t render into textures. glSDL uses SDL’s s/w
blitters + texture (re)uploading to emulate that.

Are these OpenGL extensions fairly common? I have a gut feeling that
they’re only available on more recent cards, but am not sure of that
(yet.)

Here is the OpenGL Extension registry. About 350 extensions in it and
they have been around since (if memory serves) around the time of the
TNT cards. Most if not all cards in production today have some
extensions in them.
http://oss.sgi.com/projects/ogl-sample/registry/

Except some extensions that would be needed (namely render to texture)
don’t have glX counterparts right now.

Not to mention that we would need to write platform-dependent code to
support them when available.

Stephane

Stephane Marchesin wrote:

Richard J Hancock wrote:

Here is the OpenGL Extension registry. About 350 extensions in it
and they have been around since (if memory serves) around the time of
the TNT cards. Most if not all cards in production today have some
extensions in them.
http://oss.sgi.com/projects/ogl-sample/registry/

Except some extensions that would be needed (namely render to texture)
don’t have glX counterparts right now.

Not to mention that we would need to write platform-dependent code to
support them when available.

Stephane

This seems to be a case (doing alpha composition[1]) that SDL, by
design, doesn’t handle. What I’m particularly curious about is if SDL
ought to handle these cases. If so, how should they be handled? Should
it be done through SDL_BlitSurface, perhaps using a flag of some sort,
or should a separate function be made available, such as
SDL_CompositeSurface? Perhaps such functionality should be available in
an extension library of some sort, in which case image compositing could
be handled through something like SDLEXT_CompositeSurface.
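(Per pixel, a hypothetical SDLEXT_CompositeSurface would reduce to the integer “over” math below. It is sketched on raw RGBA8888 byte buffers rather than real SDL_Surfaces, since SDL itself provides no such function; the name and signature are made up for illustration:)

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical SDLEXT_CompositeSurface-style inner loop on raw RGBA8888
   buffers. Unlike SDL_BlitSurface, the destination alpha is combined
   with the source alpha (integer Porter-Duff "over", 0..255 range). */
static void composite_rgba(const uint8_t *src, uint8_t *dst, size_t npixels)
{
    for (size_t i = 0; i < npixels; i++) {
        const uint8_t *s = src + 4 * i;
        uint8_t *d = dst + 4 * i;
        unsigned sa = s[3], da = d[3];
        unsigned oa = sa + da * (255 - sa) / 255;  /* combined alpha */
        for (int c = 0; c < 3; c++) {
            unsigned sc = s[c] * sa;
            unsigned dc = d[c] * da * (255 - sa) / 255;
            d[c] = oa ? (uint8_t)((sc + dc) / oa) : 0;
        }
        d[3] = (uint8_t)oa;
    }
}
```

Whether this lives behind an SDL_BlitSurface flag or a separate entry point, the semantics are the same; the API question is only where the extra d[3] behavior is exposed.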

Anyhoo, the main reason I’ve been thinking about this is that I’m going
to have to write some hardware acceleration code for a 2d game engine.
It has to be able to handle a few cases that SDL by itself cannot
(rotation, scaling, and maybe composition). My current thought is to try
and make it as SDL friendly as possible, as the game engine’s code
already uses SDL extensively. Unfortunately, however, it’s just about a
given that it’ll have to go through the Direct3D API [2], which means
that I’ll likely be writing a new video backend for SDL (or possibly
modify the DirectDraw one.) In this case, I’d like to make it powerful
enough to handle special cases (rotation, scaling, composition, etc.),
but to try and leave the SDL API alone as much as possible. Any
additional functionality would be made available through a separate set
of interfaces. What I’m wondering now is:

  • How much Direct3D code can and/or should be placed into an SDL video
    backend? Should this portion even be written as an SDL video backend?
  • How should hardware-accelerated functionality not handleable by the
    current SDL API be presented to the user/programmer? Should
    SDL_BlitSurface be modified? Should there be functions like
    SDLEXT_CompositeImage() or SDLEXT_BlitRotatedImage()?

[1] By “alpha composition”, I’m referring to blending one image on top
of another whereby the destination image’s alpha channel becomes a
combination of its prior self and the source image’s.

[2] Under Windows, hardware-accelerated Direct3D drivers are more
readily available/pre-installed than OpenGL ones.

--
David Ludwig
davidl at funkitron.com