About hw alpha blending timetable

Hi!

Is there any timetable or plan for hardware alpha blending support in SDL?
Well, there is, but is there any knowledge about when it would be released
(i.e. in a snapshot)?

Thanks!

Hi!

Is there any timetable or plan for hardware alpha blending support in SDL?
Well, there is, but is there any knowledge about when it would be released
(i.e. in a snapshot)?

No timetable, just one of many ideas on the SDL 1.3 plate.
Keep in mind that even if added, the majority of targets won’t have it.

See ya,
-Sam Lantinga, Software Engineer, Blizzard Entertainment

Well, one could say that glSDL implements it (it accelerates all blits
to the screen “surface”), but that’s just one of many targets, most of
which don’t and probably never will support accelerated alpha blending.

So, if I can get around to finishing glSDL any year now, there will at least
be one target with full acceleration.
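
For context, this is the kind of blit we're talking about, in plain SDL 1.2
(a minimal sketch; the file name and mode are just placeholders). On most
current targets this goes through the software blitters; under glSDL the same
call would presumably end up as a blended, textured quad:

    #include <stdio.h>
    #include "SDL.h"

    int main(int argc, char *argv[])
    {
        SDL_Surface *screen, *sprite;

        SDL_Init(SDL_INIT_VIDEO);
        screen = SDL_SetVideoMode(640, 480, 32, SDL_SWSURFACE);
        sprite = SDL_LoadBMP("sprite.bmp");     /* placeholder image */
        if (!screen || !sprite) {
            fprintf(stderr, "%s\n", SDL_GetError());
            SDL_Quit();
            return 1;
        }

        /* Per-surface alpha: blend the whole sprite at 50% opacity. */
        SDL_SetAlpha(sprite, SDL_SRCALPHA, 128);
        SDL_BlitSurface(sprite, NULL, screen, NULL);
        SDL_Flip(screen);

        SDL_Delay(2000);
        SDL_Quit();
        return 0;
    }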

Maybe someone could port it to Direct3D/DirectGraphics eventually…?
(Seems like that would be easier than porting SDL’s DirectX code to
Direct3D/DirectGraphics, but I’m not sure. Maybe DirectGraphics is more
suitable for 2D than OpenGL, considering it’s actually replacing
DirectDraw in 8.x+.)

//David Olofson — Programmer, Reologica Instruments AB

On Wednesday 10 April 2002 22:30, Sam Lantinga wrote:

Hi!

Is there any timetable or plan for hardware alpha blending support in
SDL? Well, there is, but is there any knowledge about when it would be
released (i.e. in a snapshot)?

No timetable, just one of many ideas on the SDL 1.3 plate.
Keep in mind that even if added, the majority of targets won’t have it.


Newbie questions:

Which kind of targets will be able to do hardware alpha blends?

Is the limitation that only certain chipsets support hw alpha? Or is
the limitation in the driver software?

Thanks,

-Martin

(complete SDL newbie, but the need for faster alpha blending is
what’s got me looking around at things like SDL, which looks quite
nice…)

Is there any timetable or plan for hardware alpha blending support
in SDL?

Well, there is, but is there any knowledge about when it would be
released (i.e. in a snapshot)?

No timetable, just one of many ideas on the SDL 1.3 plate.
Keep in mind that even if added, the majority of targets won’t have
it.

Newbie questions:

Which kind of targets will be able to do hardware alpha blends?

AFAIK…
Direct3D
DirectGraphics in DX8+ (basically D3D; there’s no DDraw in DX8+)
OpenGL (this is where glSDL comes in)

Possibly also
DirectFB (acceleration drivers for fbdev)
Latest and/or future XFree86 versions.
The new Microsoft GDI

Is the limitation that only certain chipsets support hw alpha? Or is
the limitation in the driver software?

There are both cases, but these days, it’s nearly always a driver or API
limitation.

Thanks,

-Martin

(complete SDL newbie, but the need for faster alpha blending is
what’s got me looking around at things like SDL, which looks quite
nice…)

Well, if you really need massive blending, perhaps you should forget
about 2D APIs and focus on OpenGL? I’m not recommending that in general,
but if the result is great, that might be enough motivation to use
OpenGL.

However, the more interesting advantage of using OpenGL, IMHO, is that you
can achieve ultra-smooth scrolling at any speed by using sub-pixel-accurate
scrolling. Simple demo:

http://olofson.net/mixed.html	(look for 'smoothscroll')

(More advanced version that eliminates the “weird” requirements on the
tiles and map to come eventually.)
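
The basic trick, roughly sketched (this is not the demo code, just the idea;
the scroll position and tile coordinates are plain floats, and GL_LINEAR
texture filtering does the smoothing):

    #include <GL/gl.h>

    /* Draw one textured tile at (x, y) in a world scrolled by a fractional
     * (sub-pixel) offset.  Nothing is ever snapped to integer pixels. */
    static void draw_tile(float scroll_x, float scroll_y,
                          float x, float y, float w, float h)
    {
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        glTranslatef(-scroll_x, -scroll_y, 0.0f);   /* floats, not ints */

        glBegin(GL_QUADS);
        glTexCoord2f(0.0f, 0.0f); glVertex2f(x,     y);
        glTexCoord2f(1.0f, 0.0f); glVertex2f(x + w, y);
        glTexCoord2f(1.0f, 1.0f); glVertex2f(x + w, y + h);
        glTexCoord2f(0.0f, 1.0f); glVertex2f(x,     y + h);
        glEnd();
    }

(Assumes a texture is already bound and an orthographic pixel projection is
set up, of course.)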

//David Olofson — Programmer, Reologica Instruments AB

On Sunday 14 April 2002 20:36, Martin McClure wrote:

(complete SDL newbie, but the need for faster alpha blending is
what’s got me looking around at things like SDL, which looks quite
nice…)

As one newbie to another, I might have a suggestion… you can mix OpenGL
with SDL (that’s what I’m up to), and you can do 2D stuff with OpenGL, which
gives you hardware-accelerated blending, amongst other things.
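
For example, the usual first step for 2D in OpenGL is an orthographic
projection that maps GL units 1:1 onto screen pixels; a minimal sketch
(width/height are whatever you passed to SDL_SetVideoMode):

    #include <GL/gl.h>

    /* 2D setup: origin at the top left, one GL unit per pixel,
     * alpha blending and texturing enabled. */
    static void setup_2d_view(int width, int height)
    {
        glViewport(0, 0, width, height);

        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrtho(0.0, (double)width, (double)height, 0.0, -1.0, 1.0);

        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();

        glEnable(GL_TEXTURE_2D);
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    }

After that, each sprite is just a textured quad.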

http://www.gamedev.net/ has several articles about doing 2D with OpenGL, and
http://nehe.gamedev.net/ has a large number of general OpenGL tutorials, most
of which are ported to SDL, IIRC…

Here’s an article about doing 2D with OpenGL under SDL that was posted on
GameDev.net last week:

http://www.repository.ukgamers.net/ogl2d.html

--
Chris Herborth
Otaku no Sensei

In Linux/X11 with the nVidia drivers, if I request a 32 bit depth from
SDL_SetVideoMode with OPENGL as one of the flags, I would expect to get
a framebuffer with 8 bits each for red, green, blue, and alpha.
Unfortunately, I get a “Couldn’t find matching GLX visual” error.

If, on the other hand, I request a 24 bit depth and specify in advance
SDL_GL_SetAttribute( SDL_GL_ALPHA_SIZE, 8 ), I do get a framebuffer with 8
bits per channel, RGBA.

Is this the “proper” behavior? It doesn’t work the way I would expect,
but perhaps I’m not aware of some subtlety. (Or something not so subtle!)

Thanks for any help!
Andrew

In Linux/X11 with the nVidia drivers, if I request a 32 bit depth from
SDL_SetVideoMode with OPENGL as one of the flags, I would expect to get
a framebuffer with 8 bits each for red, green, blue, and alpha.
Unfortunately, I get a “Couldn’t find matching GLX visual” error.

If, on the other hand, I request a 24 bit depth and specify in advance
SDL_GL_SetAttribute( SDL_GL_ALPHA_SIZE, 8 ), I do get a framebuffer with 8
bits per channel, RGBA.

Is this the “proper” behavior? It doesn’t work the way I would expect,
but perhaps I’m not aware of some subtlety. (Or something not so subtle!)

You should probably use 0 for the bpp passed to SDL_SetVideoMode, and then
exclusively set your desired RGB components using SDL_GL_SetAttribute().

If you do this, do you get the desired result?
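
In other words, something like this minimal SDL 1.2 sketch (the 640x480
size and the error handling are just placeholders):

    #include <stdio.h>
    #include "SDL.h"

    int main(int argc, char *argv[])
    {
        if (SDL_Init(SDL_INIT_VIDEO) < 0)
            return 1;

        /* Ask for the channel sizes explicitly... */
        SDL_GL_SetAttribute(SDL_GL_RED_SIZE,     8);
        SDL_GL_SetAttribute(SDL_GL_GREEN_SIZE,   8);
        SDL_GL_SetAttribute(SDL_GL_BLUE_SIZE,    8);
        SDL_GL_SetAttribute(SDL_GL_ALPHA_SIZE,   8);
        SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);

        /* ...and pass 0 for the bpp, so SDL/GLX pick a matching visual. */
        if (!SDL_SetVideoMode(640, 480, 0, SDL_OPENGL)) {
            fprintf(stderr, "SDL_SetVideoMode: %s\n", SDL_GetError());
            SDL_Quit();
            return 1;
        }

        /* ... render ... */

        SDL_Quit();
        return 0;
    }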

See ya,
-Sam Lantinga, Software Engineer, Blizzard Entertainment

Sort of; Sam’s suggestion should help. The problem is that, as far as X is
concerned as of 4.x, what used to be 32 bit (RGB with padding) is now 24
bit. Remember that the final framebuffer usually does not contain an alpha
component anyway.

Joseph Carter Don’t feed the sigs

<doogie_> linux takes shit and turns it into something useful.
<doogie_> windows takes something useful and turns it into shit


My understanding from the responses I’ve received so far is: when someone
requests a 32 bit depth for an OpenGL mode, s/he is requesting only 24
bits of color with 8 bits padding and no alpha. This is not what I would
naively expect. Is my expectation different from most other people’s for
some reason? Why would someone care to request how much padding they
want? Because I’m asking specifically about OpenGL, I think that the
implementation’s internal framebuffer format is something you want to stay
away from, and therefore requesting padding or not seems pointless.

My view is that if I request a 32 bit depth for an OpenGL mode, I want a
framebuffer with 8 bits for each of R,G,B, and A.

Sam, yes, using SDL_GL_SetAttribute to request alpha works, but it seems
like a hack to me for the above reasons. Joseph, I do want alpha in the
framebuffer for multi-pass operations (copying the framebuffer into a
texture). Even without multi-pass operations, alpha in the framebuffer is
essential when using blending functions involving the destination alpha.
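
For illustration, the two uses in question look roughly like this with plain
GL 1.1 calls (the texture object and sizes are placeholders):

    #include <GL/gl.h>

    /* Copy the current framebuffer, destination alpha included (if the
     * visual actually has alpha bits), into an already allocated texture. */
    static void framebuffer_to_texture(GLuint tex, int w, int h)
    {
        glBindTexture(GL_TEXTURE_2D, tex);
        glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, w, h);
    }

    /* A blend mode that is meaningless without destination alpha:
     * incoming pixels are weighted by the alpha already in the buffer. */
    static void enable_dst_alpha_blend(void)
    {
        glEnable(GL_BLEND);
        glBlendFunc(GL_DST_ALPHA, GL_ONE_MINUS_DST_ALPHA);
    }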

This brings to mind another, only somewhat related question: does SDL
support creating an OpenGL mode with more than 8 bits per color channel, such
as is theoretically available from new SGIs and Suns? I say “theoretically”
because I have not had an opportunity to verify that they work the way I
think they do. I think OpenGL colors, which are specified as floats, would
be converted to 10 or 12 bit unsigned ints in the framebuffer rather than
the 8 bit unsigned ints I am used to.

I’m using SDL solely as a cross platform way to get an OpenGL
window/fullscreen mode without the tyranny of GLUT’s mainloop, and perhaps
I’d be better off with a different solution, especially considering I
would like my code to run on multiple screens and/or windows as well as
the above mentioned SGIs and Suns. Is there anything better for this
purpose?

Thanks!
Andrew

Why would someone care to request how
much padding they want?

If you’re low on video memory, you might in some cases prefer to use 24
bits (no padding), despite it usually being slower. (24 bit rendering may
well be faster than rendering from textures in system RAM.)
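
To put rough numbers on it (1024x768 is just an example resolution):
1024 x 768 x 4 bytes is 3 MB for a padded 32 bit buffer, vs. 1024 x 768 x 3
bytes = 2.25 MB packed. Saving 0.75 MB per buffer can matter on, say, an
8 MB card that also has to hold a back buffer and textures.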

Because I’m asking specifically about OpenGL,
I think that the implementation’s internal framebuffer format is
something you want to stay away from, and therefore requesting padding
or not seems pointless.

Right, but we’re still living in a real world, where people don’t have
unlimited VRAM, or VRAM-speed system memory without a bus bottleneck. :)

My view is that if I request a 32 bit depth for an OpenGL mode, I want
a framebuffer with 8 bits for each of R,G,B, and A.

Then you’d have to tell OpenGL you want alpha bits as well. Just as an
example, I would assume 32 bpp means RGB with more than 8 bits per
channel. (That definitely wouldn’t hurt when doing multiple rendering
passes, so it’s most probably going to show up, even if consumer RAMDACs
stay 24 bit.)

Sam, yes, using SDL_GL_SetAttribute to request alpha works, but it seems
like a hack to me for the above reasons.

To me, it’s just as obvious as using the same function to specify how
many bits you want for the other channels. ;)

Joseph, I do want
alpha in the framebuffer for multi-pass operations (copying the
framebuffer into a texture). Even without multi-pass operations, alpha
in the framebuffer is essential when using blending functions involving
the destination alpha.

Right. Would be kinda cool if one could send the alpha to a genlock as
well, instead of this dreadful “chroma key” crap. Then again, you can
just capture the video signal and stream it onto a procedural texture,
for the same effect. :)

This brings to mind another, only somewhat related question: does SDL
support creating an OpenGL mode with more than 8 bits per color channel,
such as is theoretically available from new SGIs and Suns? I say
“theoretically” because I have not had an opportunity to verify that
they work the way I think they do. I think OpenGL colors, which are
specified as floats, would be converted to 10 or 12 bit unsigned ints in
the framebuffer rather than the 8 bit unsigned ints I am used to.

That seems to be the idea, yes.

Anyway, I can’t see why SDL wouldn’t support it - but I can’t confirm
that it does. (However, I’m not sure SDL surfaces can have more than 32
bpp… You might be out of luck if you need to deal with >8 bits per
channel textures, that is. You may need external tools or custom code for
that.)

I’m using SDL solely as a cross platform way to get an OpenGL
window/fullscreen mode without the tyranny of GLUT’s mainloop, and
perhaps I’d be better off with a different solution, especially
considering I would like my code to run on multiple screens and/or
windows as well as the above mentioned SGIs and Suns. Is there anything
better for this purpose?

Using GLX directly, perhaps. Not exactly portable, though. (That’s one of
the major points with GLUT and SDL, obviously.)

//David Olofson — Programmer, Reologica Instruments AB

On Wednesday 24 April 2002 11:54, Andrew D Straw wrote:

My understanding from the responses I’ve received so far is: when someone
requests a 32 bit depth for an OpenGL mode, s/he is requesting only 24
bits of color with 8 bits padding and no alpha.

Setting the video depth with an OpenGL mode doesn’t do what you might expect.
The only way to specify the color depth is to use the GL attributes.

This brings to mind another, only somewhat related question: does SDL
support creating an OpenGL mode with more than 8 bits per color channel, such
as is theoretically available from new SGIs and Suns? I say “theoretically”
because I have not had an opportunity to verify that they work the way I
think they do. I think OpenGL colors, which are specified as floats, would
be converted to 10 or 12 bit unsigned ints in the framebuffer rather than
the 8 bit unsigned ints I am used to.

Theoretically it should work, if you specify them with the GL attributes,
but I don’t know of anybody who has actually tried it.
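
In case anyone wants to try it, the request would presumably look something
like this untested sketch (the 10 bit sizes are just example values; whether
any visual actually matches is entirely up to the GLX implementation):

    #include <stdio.h>
    #include "SDL.h"

    /* Try to get a deep-color (>8 bits per channel) GL visual. */
    static int open_deep_color_mode(void)
    {
        SDL_GL_SetAttribute(SDL_GL_RED_SIZE,   10);
        SDL_GL_SetAttribute(SDL_GL_GREEN_SIZE, 10);
        SDL_GL_SetAttribute(SDL_GL_BLUE_SIZE,  10);
        SDL_GL_SetAttribute(SDL_GL_ALPHA_SIZE,  2);   /* or 0 if unused */

        if (!SDL_SetVideoMode(640, 480, 0, SDL_OPENGL)) {
            fprintf(stderr, "No matching visual: %s\n", SDL_GetError());
            return -1;
        }
        return 0;
    }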

See ya,
-Sam Lantinga, Software Engineer, Blizzard Entertainment