Accelerated OpenGL in SDL

I’m using OpenGL for some 2D acceleration,
and want to be able to specify a requirement
for hardware acceleration - since software
rendering is not a good alternative, and I’d
rather do my own regular code in that case…
(using the regular, possibly colorkeyed, blits)

I noticed that a flag for this was missing from
SDL version 1.2.6 / CVS, as far as I could tell ?

I suggest adding a new SDL_GLattr option of:
SDL_GL_ACCELERATION

Or perhaps the longer: SDL_GL_ACCELERATION_REQUIRED ?

It would work similarly to SDL_GL_DOUBLEBUFFER,
and default to “0”. A value of “1” would mean
that hardware is required; otherwise a software
fallback renderer is acceptable too (= the old behaviour).
That is, I want it to fail to set up a video mode
if a suitable hardware accelerator cannot be found!
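
For illustration, usage of the proposed attribute might look roughly like this (a minimal sketch; SDL_GL_ACCELERATION is only the name suggested above and does not exist in stock SDL 1.2.6):

```c
#include "SDL.h"

int main(int argc, char *argv[])
{
    SDL_Surface *screen;

    SDL_Init(SDL_INIT_VIDEO);

    SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
    /* Hypothetical attribute from this proposal -- requires the patch. */
    SDL_GL_SetAttribute(SDL_GL_ACCELERATION, 1);

    screen = SDL_SetVideoMode(640, 480, 16, SDL_OPENGL);
    if (screen == NULL) {
        /* No accelerated GL visual found: fall back to plain 2D blits. */
        screen = SDL_SetVideoMode(640, 480, 16, SDL_SWSURFACE);
    }

    SDL_Quit();
    return 0;
}
```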

It’s trivial to add this for Macintosh and Windows,
using AGL_ACCELERATED/PFD_GENERIC_ACCELERATED flags.
(It probably has been discussed before, I’m new to SDL)
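
As a rough sketch of what the Windows side of such a check could look like (not the actual patch): the format that ChoosePixelFormat picked can be inspected with DescribePixelFormat, and a format with PFD_GENERIC_FORMAT set but PFD_GENERIC_ACCELERATED clear is the GDI software renderer:

```c
#include <windows.h>

/* Sketch: is the given pixel format hardware accelerated?
   Generic (GDI) formats without the accelerated flag are software. */
static int IsPixelFormatAccelerated(HDC hdc, int format)
{
    PIXELFORMATDESCRIPTOR pfd;

    DescribePixelFormat(hdc, format, sizeof(pfd), &pfd);

    if ((pfd.dwFlags & PFD_GENERIC_FORMAT) &&
        !(pfd.dwFlags & PFD_GENERIC_ACCELERATED))
        return 0;   /* generic software implementation */

    return 1;       /* ICD, or accelerated MCD */
}
```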

Best “solution” for Linux so far is checking
whether glGetString(GL_RENDERER) returns a
value equal to “Mesa GLX Indirect” (=software?)
Not sure if this is 100% correct, but seems to be?
Only pain being that this GL call needs an active
OpenGL context, in order to not just return NULL…
Anyone have any better ideas for X11? (or others)
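
For reference, the renderer-string heuristic looks roughly like this (the exact string is just what Mesa’s software/indirect path happens to report, so treat it as a best guess, and only call it with a current context):

```c
#include <string.h>
#include <GL/gl.h>

/* Heuristic sketch: requires a current OpenGL context,
   otherwise glGetString() simply returns NULL. */
static int LooksSoftwareRendered(void)
{
    const char *renderer = (const char *) glGetString(GL_RENDERER);

    if (renderer == NULL)
        return 1;   /* no context (or something went very wrong) */

    return strcmp(renderer, "Mesa GLX Indirect") == 0;
}
```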

–anders

PS. A full patch is available too, seems to be working…
http://www.algonet.se/~afb/SDL-1.2.6-glaccelerated.patch

Hi,

I’ve also suggested this before, and also submitted a patch a long time
ago. It’s easy to simply add the PFD_GENERIC_ACCELERATED flag or similar
to SDL, but that doesn’t fix anything on Windows, because the
ChoosePixelFormat function simply chooses the nearest match, so the
accelerated flag is not a requirement.

To fix that, I suggested using a custom ChoosePixelFormat function that
enumerates the existing pixel formats and chooses one that meets the
requirements, like in X11, or like the WGL_ARB_pixel_format OpenGL extension.
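
A minimal sketch of such an enumeration (not the submitted patch; the acceptance criteria below are only examples):

```c
#include <windows.h>

/* Sketch of a stricter ChoosePixelFormat replacement: walk every pixel
   format the driver exposes and keep only accelerated ones that meet
   the caller's minimum requirements.  Returns 0 if none is suitable. */
static int ChooseAcceleratedPixelFormat(HDC hdc, const PIXELFORMATDESCRIPTOR *want)
{
    PIXELFORMATDESCRIPTOR pfd;
    int i, count;

    count = DescribePixelFormat(hdc, 1, sizeof(pfd), &pfd);
    for (i = 1; i <= count; ++i) {
        DescribePixelFormat(hdc, i, sizeof(pfd), &pfd);

        if (!(pfd.dwFlags & PFD_SUPPORT_OPENGL) ||
            !(pfd.dwFlags & PFD_DRAW_TO_WINDOW))
            continue;
        /* Reject generic (software) formats outright. */
        if ((pfd.dwFlags & PFD_GENERIC_FORMAT) &&
            !(pfd.dwFlags & PFD_GENERIC_ACCELERATED))
            continue;
        if (pfd.cColorBits < want->cColorBits ||
            pfd.cDepthBits < want->cDepthBits)
            continue;

        return i;   /* a real scorer would compare all candidates */
    }
    return 0;
}
```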

However, that produces some artifacts. When the desired pixel format is
not found, the window is created and immediately destroyed, and if a
display change is involved, that’s really annoying. So, I suggested
showing the window only after the creation of the OpenGL context.

I would be happy to submit more patches if people think this is the way
to go.

–
Ignacio Castaño
@Ignacio_Castano


Best “solution” for Linux so far is checking
whether glGetString(GL_RENDERER) returns a
value equal to “Mesa GLX Indirect” (=software?)
Not sure if this is 100% correct, but seems to be?
Only pain being that this GL call needs an active
OpenGL context, in order to not just return NULL…
Anyone have any better ideas for X11? (or others)

There’s a function to do that directly.

glXIsDirect

–
Petri Latvala

Petri Latvala wrote:

Best “solution” for Linux so far is checking
whether glGetString(GL_RENDERER) returns a
value equal to “Mesa GLX Indirect” (=software?)
Not sure if this is 100% correct, but seems to be?
Only pain being that this GL call needs an active
OpenGL context, in order to not just return NULL…
Anyone have any better ideas for X11? (or others)

There’s a function to do that directly.

glXIsDirect

Thanks, that seems to be much cleaner/better…

It still requires the context to be created first,
but better than checking for magic version strings.
(I’m assuming here that direct means accelerated)
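
For completeness, the check itself is a one-liner once the context exists (a sketch; as pointed out further down the thread, “direct” is not strictly the same thing as “accelerated”):

```c
#include <GL/glx.h>

/* Sketch: ask GLX whether the given context renders directly to the
   hardware, bypassing the X server.  Needs an already-created context. */
static int ContextIsDirect(Display *dpy, GLXContext ctx)
{
    return glXIsDirect(dpy, ctx) == True;
}
```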

Wonder if this works out better than the old way I used ?

For some reason, when I first tried to create a GL context
and failed, and then created a regular (non-GL) surface instead,
my program would segfault somewhere in the X11 routines.

If it doesn’t, I guess it’s time to do some debugging… :)

–anders

PS. The flickering on Windows (the extra window) isn’t all
that bad? A little annoying perhaps, but very quick.
Of course, if it can be totally avoided, all the better.

Anders F Björklund wrote:

Thanks, that seems to be much cleaner/better…

It still requires the context to be created first,
but better than checking for magic version strings.
(I’m assuming here that direct means accelerated)

“direct” means “bypasses the X server to send OpenGL commands to the 3D
hardware”.

So one way to get accelerated indirect rendering is to do OpenGL in X
over a network connection. In this case, the X server might have OpenGL
acceleration and use it, but rendering will be indirect, as all OpenGL
commands are sent over the network in GLX form.

Also, some 3D cards (e.g. my ATI Rage Pro 3D under XFree86 3.3.6 with Utah
GLX) have an option to enable/disable DRI, and though using it provided
a good speedup, hardware acceleration was used in both cases.

I see I’m not very clear here ;), so here’s a picture for your viewing
pleasure:

Wonder if this works out better than the old way I used ?

Well, in light of what I just said… I don’t think so (although I don’t
have a better idea atm).

Stephane

I’m using OpenGL for some 2D acceleration,
and want to be able to specify a requirement
for hardware acceleration

Unfortunately there isn’t any good way of telling what OpenGL driver is in
use, and what features it accelerates in hardware. Long story short, the
best option is to provide some way of benchmarking the current configuration
with the features you use, and see if it’s fast enough. Make sure that you
provide a way for the user to re-configure once drivers have been updated.

See ya,
-Sam Lantinga, Software Engineer, Blizzard Entertainment
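
A minimal sketch of the kind of benchmark Sam describes, assuming an SDL_OPENGL video mode has already been set up (the frame count, the drawing calls and the acceptable frame time are all placeholders for whatever the application really uses):

```c
#include "SDL.h"
#include "SDL_opengl.h"

/* Sketch: render a fixed number of frames of the workload the program
   actually cares about and report the average time per frame, so the
   caller can decide whether this configuration is fast enough. */
static double AverageFrameTime(int frames)
{
    Uint32 start;
    int i;

    start = SDL_GetTicks();
    for (i = 0; i < frames; ++i) {
        glClear(GL_COLOR_BUFFER_BIT);
        /* ... draw the textured quads the real code would draw ... */
        SDL_GL_SwapBuffers();
    }
    glFinish();   /* make sure the GL pipeline has really finished */

    return (SDL_GetTicks() - start) / (double) frames;   /* ms per frame */
}
```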

Sam Lantinga wrote:

I’m using OpenGL for some 2D acceleration,
and want to be able to specify a requirement
for hardware acceleration

Unfortunately there isn’t any good way of telling what OpenGL driver is in
use, and what features it accelerates in hardware. Long story short, the
best option is to provide some way of benchmarking the current configuration
with the features you use, and see if it’s fast enough. Make sure that you
provide a way for the user to re-configure once drivers have been updated.

I just thought it could be added to the API, since it was rather simple
to do on the other SDL platforms? (I’m working on a Macintosh myself.)

If the card didn’t accelerate every single function, that would be OK;
I just didn’t want it to choose the default software renderer…
Because then there would be no point in using OpenGL for me, and
I could stick to the regular (2D) surfaces and blits instead.

But the biggest problem I had was that SDL 1.2.6 crashed on Linux/X11
when requesting a regular surface, after first requesting a GL one.

Haven’t debugged it fully yet, but it looked something like this:

Program received signal SIGSEGV, Segmentation fault.

(gdb) bt
#0  0x0fba03e4 in XShmPutImage () from /usr/X11R6/lib/libXext.so.6
#1  0x0fba0360 in XShmPutImage () from /usr/X11R6/lib/libXext.so.6
#2  0x0ff94cec in SDL_UpdateRects (screen=0x0, numrects=1, rects=0x7ffff518) at SDL_video.c:1039
#3  0x0ff94b7c in SDL_UpdateRect (screen=0x0, x=2139062143, y=1208108072, w=1296651309, h=4278124287) at SDL_video.c:980
#4  0x0ff94ee4 in SDL_Flip (screen=0x0) at SDL_video.c:1093
#5  0x0ff93d94 in SDL_ClearSurface (surface=0x10023050) at SDL_video.c:500
#6  0x0ff941e8 in SDL_SetVideoMode (width=640, height=480, bpp=16, flags=1073741825) at SDL_video.c:690

Unless something weird happened, I think the params above are bogus?
(Probably a side effect of running an optimized library version.)

–anders

PS. I have some patches for the Carbon version of the Mac driver,
since I’m not really that fond of Objective-C and Quartz…
Will send those later on, probably not until SDL 1.2.7 is out.

Anders F Björklund wrote:

PS. The flickering on Windows (the extra window) isn’t all
that bad? A little annoying perhaps, but very quick.
Of course, if it can be totally avoided, all the better.

It becomes a problem if you want to try different settings in order to
find one that works. For example, you may ask for an accelerated 32-bit
color, 24-bit depth, 8-bit stencil and 4x multisampling visual, and ask
for progressively lower settings if SetVideoMode fails.

–
Ignacio Castaño
@Ignacio_Castano
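
A sketch of the progressive fallback Ignacio describes (the attribute values and the two-step ladder are only examples):

```c
#include "SDL.h"

/* Sketch: try progressively less demanding GL settings until
   SDL_SetVideoMode() succeeds.  On Windows, each failed attempt may
   briefly create and destroy a window; this is the flicker discussed above. */
static SDL_Surface *SetBestGLMode(int w, int h)
{
    static const int depth[]   = { 24, 16 };
    static const int stencil[] = {  8,  0 };
    SDL_Surface *screen = NULL;
    int i;

    for (i = 0; i < 2 && screen == NULL; ++i) {
        SDL_GL_SetAttribute(SDL_GL_DEPTH_SIZE,   depth[i]);
        SDL_GL_SetAttribute(SDL_GL_STENCIL_SIZE, stencil[i]);
        screen = SDL_SetVideoMode(w, h, 32, SDL_OPENGL);
    }
    return screen;
}
```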