VC7, and 2D+3D, and blit performance

I’m replying to a digest, so please excuse any silliness in this mail…

The first thing I did was rebuild the libs/DLLs with VC7. Before I did a
clean, it said “SDL - up-to-date”, which means it doesn’t look like there is…

There could be a problem with structure alignment. Try to modify the SDL
and SDLmain project settings so that they use the 4-byte alignment
which SDL wants. I once had weird crashes when the byte alignment was not
set correctly in the project settings.

BTW, why does SDL want 4-byte alignment for structures? And why is this
not set in the project settings for SDL, even though that has been asked
several times? Another change to the VC project settings would be to rename
the debug DLL to SDLD.DLL or SDL_g.DLL.

Also try deleting all .ncb files (while not running VC).

Well I downloaded the Visual Studio .NET SDK (the free version of VC7 from
Microsoft) and here is what I came up with.

What things does the SDK contain and what limitations does it have?

First of all I used libs compiled by VC6, because I could not find a makefile
interpreter for VC7.

No support for project files in the SDK?

Is there a way to check whether the SDL surface actually uses hardware
acceleration or not?

Yes - this is in the documentation. Search in the ‘video’ section and
read the various function docs until you see what you need. I think it’s
to do with the pixel format of the screen surface, but you can easily
find this out.
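
For reference, a minimal sketch of that kind of check with the SDL 1.2 API
(assuming screen is the surface returned by SDL_SetVideoMode; illustrative
only):

#include <stdio.h>
#include "SDL.h"

/* Report whether the screen surface sits in video memory and whether the
 * driver claims hardware surfaces / accelerated blits at all. */
static void report_acceleration (SDL_Surface *screen)
{
    const SDL_VideoInfo *info = SDL_GetVideoInfo ();

    printf ("screen surface is in %s memory\n",
            (screen->flags & SDL_HWSURFACE) ? "video" : "system");
    printf ("hardware surfaces available: %s\n",
            info->hw_available ? "yes" : "no");
    printf ("hardware blits accelerated:  %s\n",
            info->blit_hw ? "yes" : "no");
}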

This might not work for OpenGL surfaces, I think?

…and 800x600 @ 32-bit is going to be slow on hardware accelerated
surfaces, too. If you need this, use OpenGL, but more likely, you don’t
need it.

This certainly makes me think that SDL should become more OpenGL-based in
the future. But 16-bit should do most of the time, though.

So perhaps gluLookAt may be a bit faster. I doubt it is easier for most
though. Also, you can increase the glOrtho performance if you build the
ortho view once and keep it stored in memory. For gluLookAt you’d have to
calculate it at least each time the camera changes position / angle.

FYI, both gluLookAt() and glOrtho() do some calculations and then set up a
view matrix. Both are quite short and need only a few dozen mathematical
operations. Neither of them are implemented in hardware AFAIK even on
hardware accelerated cards for PC. The maths are more complex for
gluLookAt(), but if you switch from 3D to 2D only once per frame, it
certainly makes no difference in fps at all.
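
To make that concrete, here is a rough sketch of the two kinds of matrix
setup being compared; both are just a handful of CPU-side operations, and
the values are only illustrative:

#include <GL/gl.h>
#include <GL/glu.h>

/* 3D view: gluLookAt rebuilds its matrix from eye/center/up vectors, so it
 * has to be redone whenever the camera moves. */
static void setup_3d_view (void)
{
    glMatrixMode (GL_MODELVIEW);
    glLoadIdentity ();
    gluLookAt (0.0, 0.0, 5.0,   /* eye */
               0.0, 0.0, 0.0,   /* center */
               0.0, 1.0, 0.0);  /* up */
}

/* 2D view: glOrtho depends only on the window size, so the same call (or a
 * stored matrix) can be reused every frame. */
static void setup_2d_view (int width, int height)
{
    glMatrixMode (GL_PROJECTION);
    glLoadIdentity ();
    glOrtho (0.0, width, height, 0.0, -1.0, 1.0);
    glMatrixMode (GL_MODELVIEW);
    glLoadIdentity ();
}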

So when an OpenGL display mode gets initialized, all SDL hardware surfaces
should be created as 2D textures and blits would be done with textured
quads. Software surfaces should stay in system memory and be copied into
textures when blitting.

That’s what I have in mind too. Making OpenGL a backend (or target) just
like X11, svgalib or fbcon.

Actually I think there should be some changes, maybe a whole different
library, for SDLonOpenGL. A lot of SDL is not needed when using SDL for
OpenGL only, and some other things are. IMHO the whole philosophy for
hardware accelerated 2D through OpenGL is quite different from current SDL
philosophy.

– Timo Suoranta – @Timo_K_Suoranta

So when an OpenGL display mode gets initialized, all SDL hardware
surfaces

should be created as 2D textures and blits would be done with textured
quads. Software surfaces should stay in system memory and be copied into
textures when blitting.

That’s what I have in mind too. Making OpenGL a backend (or target) just
like X11, svgalib or fbcon.

Actually I think there should be some changes, maybe a whole different
library, for SDLonOpenGL. A lot of SDL is not needed when using SDL for
OpenGL only, and some other things are. IMHO the whole philosophy for
hardware accelerated 2D through OpenGL is quite different from current SDL
philosophy.

Here are some ideas to implement SDL graphics techniques in OpenGL:

SDL_HWSURFACE: Pixels are stored as a 2D texture in video memory. The other
attributes remain in system memory.

Locking -> glGetTexImage(GL_TEXTURE_2D, …)
Unlocking -> glTex(Sub)Image2D(…)

SetColorKey -> Would require getting the RGB texture image, creating the
alpha components based on the colorkey and then storing the texture back in
OpenGL. Blitting with a color key is then the same as blitting with alpha.
This is probably one of the bigger ‘philosophy differences’.
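
A sketch of that colorkey-to-alpha step, assuming the surface has already
been converted to a known 32-bit format with an alpha mask (the function
name is made up for illustration):

#include "SDL.h"

/* Turn a colorkey into an alpha channel so colorkeyed blits can be drawn
 * as ordinary alpha-blended textured quads. */
static void colorkey_to_alpha (SDL_Surface *surf, Uint32 colorkey)
{
    Uint32 rgb_mask = ~surf->format->Amask;
    int x, y;

    SDL_LockSurface (surf);
    for (y = 0; y < surf->h; y++) {
        Uint32 *row = (Uint32 *)((Uint8 *)surf->pixels + y * surf->pitch);
        for (x = 0; x < surf->w; x++) {
            if ((row[x] & rgb_mask) == (colorkey & rgb_mask))
                row[x] &= rgb_mask;                /* transparent */
            else
                row[x] |= surf->format->Amask;     /* opaque */
        }
    }
    SDL_UnlockSurface (surf);
}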

Blitting -> Simply drawing a textured quad. If an alpha color / an alpha
channel / a colorkey is used, blending should be enabled. I’m not 100% sure
which blending equation would best match SDL’s though. I think it would be
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA). Depth testing should of
course be disabled and the view should be orthographic. It would benefit
performance if the blits were queued up.
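
Such a blit might look roughly like the sketch below; the texture ID and
destination rectangle are assumed to come from the surface bookkeeping
described above, with an orthographic view already set up:

#include <GL/gl.h>

/* Draw a texture as a screen-aligned quad at (x, y) with size (w, h). */
static void gl_blit (GLuint tex, float x, float y, float w, float h)
{
    glEnable (GL_TEXTURE_2D);
    glBindTexture (GL_TEXTURE_2D, tex);

    glEnable (GL_BLEND);
    glBlendFunc (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    glBegin (GL_QUADS);
    glTexCoord2f (0.0f, 0.0f); glVertex2f (x,     y);
    glTexCoord2f (1.0f, 0.0f); glVertex2f (x + w, y);
    glTexCoord2f (1.0f, 1.0f); glVertex2f (x + w, y + h);
    glTexCoord2f (0.0f, 1.0f); glVertex2f (x,     y + h);
    glEnd ();

    glDisable (GL_BLEND);
    glDisable (GL_TEXTURE_2D);
}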

SDL_SWSURFACE: Same as in normal SDL. However, when it is being blitted, it
is temporarily stored in texture memory. (This is most likely very very
slow.) An alternative is glDrawPixels for blitting it. (Also VERY slow.)
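
For reference, the glDrawPixels variant might look roughly like this,
assuming the software surface’s pixels are already tightly packed RGBA:

#include <GL/gl.h>
#include "SDL.h"

/* Blit a software surface with glDrawPixels instead of a texture. */
static void sw_blit (SDL_Surface *surf, int x, int y)
{
    glPixelStorei (GL_UNPACK_ALIGNMENT, 1);
    glRasterPos2i (x, y);
    glDrawPixels (surf->w, surf->h, GL_RGBA, GL_UNSIGNED_BYTE, surf->pixels);
}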

As can be seen, there are indeed some issues that would need resolving, but
I don’t think that the difference in philosophy is so big that it would
require a different library. I do believe, though, that SDL should not choose
OpenGL as a backend for itself, like it does with X11, DX, etc. A good
idea IMHO would be to have the programmer choose OpenGL 2D rendering,
perhaps with a new flag in the SDL_SetVideoMode function, like
SDL_OPENGLBLIT2 or something. Also, since SDL 1.3 will be a near-full
rewrite according to the FAQ, maybe its architecture will be changed to
accommodate OpenGL 2D rendering more easily?

Regards,

Dirk Gerrits

Well I downloaded the Visual Studio .NET SDK (the free version of VC7
from Microsoft) and here is what I came up with.

What things does the SDK contain and what limitations does it have?

The SDK contains the command line version of VC7; it has all the basic
headers and libraries along with cl.exe (the compiler) and link.exe (the
linker). The limitations are that you don’t get an IDE, you don’t get any
extra libs (just the standard C stuff), and the compiler they give out in
the SDK does not optimize.

First of all I used libs compiled by VC6, because I could not find a
makefile interpreter for VC7.

No support for project files in the SDK?

There is no IDE that comes with the SDK. Also, there doesn’t seem to be a
program like nmake.exe which allows you to compile from makefiles either. So
basically you are stuck compiling by hand.

–
Jordan Wilberding <@Jordan_Wilberding>
Diginux.net Sys Admin

<wod.sourceforge.net>
<aztec.sourceforge.net>

“Fight war, not wars,
destroy power, not people”
-Crass

You’re both absolutely right and completely wrong. SDL’s primary focus,
philosophically speaking, can probably be explained best by saying that it
strives to be a portable DirectX. That’s not quite the best analogy, but
it’s as close as you’re likely to get in a simple comparison.

If you’re doing just OpenGL stuff, SDL is possibly the wrong library
because SDL includes support for sound, virtual files, threads, and more.
You can compile SDL without most of that, but really you kinda want it to
support all of that if you plan to do something more complex. Even the 2D
graphics functions (which don’t take all that much space really) are at
times useful - though not often as useful as they could be.

The real problem with SDL and OpenGL is that OpenGL support was really an
afterthought. SDL_Surfaces are at best clumsy for OpenGL textures, and
the fake one you get back when you set up an OpenGL context is mostly just
confusing to a newbie, since a non-NULL surface doesn’t necessarily mean
that you actually have successfully created an OpenGL context. And then
there’s SDL_OPENGLBLIT, which is just a thing that should never have been.

Additionally, many of the SDL_things people have written work poorly or
not at all with OpenGL. All of the stuff in the SDL repository does AFAIK
work with OpenGL, though only after you jump through a few hoops and waste
CPU cycles converting every SDL_Surface you use to a format you know is
supported by OpenGL.

The stuff outside SDL’s CVS repository may or may not work with OpenGL, or
may just work wrong. SDL_console is a thing I’d love to recommend for the
newbies, but it “supports” OpenGL using SDL_OPENGLBLIT. That’s just the
first example that pops into my head, but there are a number of others.

There has been talk of API changes in SDL 1.3, but not much discussion of
specifics yet. If 1.3 kills OPENGLBLIT for good, that’ll be a major step
in the right direction IMO, but better OpenGL integration would certainly
help too. Right now it’s not as easy as IMO it should be for an OpenGL
coder to use SDL to its full potential.

On Thu, Feb 21, 2002 at 03:11:13PM +0200, Timo K Suoranta wrote:

That’s what I have in mind too. Making OpenGL a backend (or target) just
like X11, svgalib or fbcon.

Actually I think there should be some changes, maybe a whole different
library, for SDLonOpenGL. A lot of SDL is not needed when using SDL for
OpenGL only, and some other things are. IMHO the whole philosophy for
hardware accelerated 2D through OpenGL is quite different from current SDL
philosophy.


Joseph Carter You’re entitled to my opinion

Adamel, i think the code you fixed of mine didn’t work
i must not have commited the working code
raptor: like it’s the first time THAT has ever happened =p


There has been talk of API changes in SDL 1.3, but not much discussion of
specifics yet. If 1.3 kills OPENGLBLIT for good, that’ll be a major step
in the right direction IMO, but better OpenGL integration would certainly
help too. Right now it’s not as easy as IMO it should be for an OpenGL
coder to use SDL to its full potential.

Just FYI, SDL’s 3D support really is designed to just set up a window and
a context and let you go to town with OpenGL. If you’re doing that, you
probably shouldn’t be using much of SDL’s other video functionality at all, and
that’s by design.

The basic philosophy is: Set it up and get out of the way.

See ya,
-Sam Lantinga, Software Engineer, Blizzard Entertainment

Joseph Carter wrote:

The real problem with SDL and OpenGL is that OpenGL support was really an
afterthought. SDL_Surfaces are at best clumsy for OpenGL textures, and
the fake one you get back when you set up an OpenGL context is mostly just
confusing to a newbie since a non-NULL surface doesn’t mean necessarily
that you actually have successfully created an OpenGL context. And then
there’s SDL_OPENGLBLIT, which is just a thing that should never have been.

Yes, hopefully SDL 1.3 will have better OpenGL support.

Additionally, many of the SDL_things people have written work poorly or
not at all with OpenGL. All of the stuff in the SDL repository does AFAIK
work with OpenGL, if after you jump through a few hoops and waste CPU
cycles converting every SDL_Surface you use to a format you know is
supported by OpenGL.

Don’t really know about this, I use SDL primarily for setting up the OpenGL
context, input and audio. I don’t really use any SDL_Surface functions at
the moment because I use my own Win32 texture loaders. But I think I will
change to SDL_image soon and then this might pose a problem.

The stuff outside SDL’s CVS repository may or may not work with OpenGL, or
may just work wrong. SDL_console is a thing I’d love to recommend for the
newbies, but it “supports” OpenGL using SDL_OPENGLBLIT. That’s just the
first example that pops into my head, but there are a number of others.

There has been talk of API changes in SDL 1.3, but not much discussion of
specifics yet. If 1.3 kills OPENGLBLIT for good, that’ll be a major step
in the right direction IMO, but better OpenGL integration would certainly
help too. Right now it’s not as easy as IMO it should be for an OpenGL
coder to use SDL to its full potential.

Sam Lantinga wrote:

Just FYI, SDL’s 3D support really is designed to just set up a window and
a context and let you go to town with OpenGL. If you’re doing that, you
probably shouldn’t be using much of SDL’s other video functionality at all, and
that’s by design.

The basic philosophy is: Set it up and get out of the way.

Setting up a window and context is just about all the OpenGL stuff you want
SDL to do and it already does that. The only thing I can possibly think of
that I’d like to see in the next SDL’s OpenGL section, is an easier way to
load SDL_Surfaces into textures.

Also, I don’t have any problem with killing off the current OPENGLBLIT.
The docs don’t say “This option is kept for compatibility only, and is
not recommended for new code.” for nothing. ;) But I’d also be in favor
of a brand new OPENGLBLIT as a 2D renderer, as is being discussed in this
thread.

Dirk Gerrits

There is no IDE that comes with the SDK. Also, there doesn’t seem to be a
program like nmake.exe which allows you to compile from makefiles either. So
basically you are stuck compiling by hand.

That is because Microsoft expects you to install it over Visual Studio
6. If you have the “educational” edition of Visual C++ 6, that
essentially upgrades it, since the educational version doesn’t optimize
either.

Additionally, many of the SDL_things people have written work poorly or
not at all with OpenGL. All of the stuff in the SDL repository does AFAIK
work with OpenGL, if after you jump through a few hoops and waste CPU
cycles converting every SDL_Surface you use to a format you know is
supported by OpenGL.

Don’t really know about this, I use SDL primarily for setting up the OpenGL
context, input and audio. I don’t really use any SDL_Surface functions at
the moment because I use my own Win32 texture loaders. But I think I will
change to SDL_image soon and then this might pose a problem.

There is a function you can write to use with SDL_image and SDL_ttf. You
just translate the textures from the format these give you to a known
uploadable format like RGBA. (Actually, most OpenGL implementations
support RGB or BGR with alpha either before or after.) You can attempt to
figure out if an SDL_Surface needs no conversion and upload it wholesale,
but doing THAT is just a mess. I was going to code a function to do it
that actually compiled, but I don’t have all night, so you’ll have to
settle for the snippet version:

if (surf->format->palette)
{
    /*
     * If many of your graphics are paletted, you want to avoid
     * all of the other conditions for speed reasons.
     */
    GL_SurfaceToRGBA (surf);
    tex = GL_UploadSurface (surf, GL_RGBA);
}
else if (surf->format->BytesPerPixel == 4
	&& surf->format->BitsPerPixel == 32
	&& surf->format->Rmask == SDL_SwapBE32 (0x000000ff)
	&& surf->format->Gmask == SDL_SwapBE32 (0x0000ff00)
	&& surf->format->Bmask == SDL_SwapBE32 (0x00ff0000)
	&& surf->format->Amask == SDL_SwapBE32 (0xff000000))
    tex = GL_UploadSurface (surf, GL_ABGR_EXT);
else if (surf->format->BytesPerPixel == 4
	&& surf->format->BitsPerPixel == 32
	&& surf->format->Rmask == SDL_SwapBE32 (0xff000000)
	&& surf->format->Gmask == SDL_SwapBE32 (0x00ff0000)
	&& surf->format->Bmask == SDL_SwapBE32 (0x0000ff00)
	&& surf->format->Amask == SDL_SwapBE32 (0x000000ff))
    tex = GL_UploadSurface (surf, GL_RGBA);
else if (surf->format->BytesPerPixel == 3
	&& surf->format->BitsPerPixel == 24
	...)
    :
else
{
    /* Unknown format: fall back to converting to RGBA first. */
    GL_SurfaceToRGBA (surf);
    tex = GL_UploadSurface (surf, GL_RGBA);
}

Maybe I’m not even doing that right with the surf->format checks; I didn’t
look, and it’s not taken from working code. Finding out what the format of
a surface ACTUALLY IS takes almost as much real processing time as just
assuming the worst (that the surface must be converted to RGBA for maximum
compatibility with all OpenGL versions) and doing that. It’s certainly
less ugly that way.
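
For completeness, here is one way the GL_UploadSurface helper used in the
snippet might look; a sketch only, assuming the surface’s pixels are already
tightly packed in the external format passed in:

#include <GL/gl.h>
#include "SDL.h"

/* Upload a surface's pixels as a 2D texture; 'format' is the external
 * pixel format (GL_RGBA, GL_ABGR_EXT, ...). */
static GLuint GL_UploadSurface (SDL_Surface *surf, GLenum format)
{
    GLuint tex;

    glGenTextures (1, &tex);
    glBindTexture (GL_TEXTURE_2D, tex);
    glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    SDL_LockSurface (surf);
    glPixelStorei (GL_UNPACK_ALIGNMENT, 1);
    glTexImage2D (GL_TEXTURE_2D, 0, GL_RGBA, surf->w, surf->h, 0,
                  format, GL_UNSIGNED_BYTE, surf->pixels);
    SDL_UnlockSurface (surf);

    return tex;
}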

The simple truth is that SDL_Surface is an unreasonable texture format for
OpenGL applications. There are just too many damned variables to work
with unless you already know exactly what they are. And the only function
found in SDL or companions which lets you do this is SDL_ConvertSurface.
Another point: most 3D-accelerated cards for the Wintel platform store
their graphics in BGRA format natively, so you’re essentially wasting CPU
power converting them twice.

It gets better because targeting BGRA will cause double-conversion for
other people. You can either try to figure out what hardware you have and
use the correct conversion for the native hardware or you can just accept
that you can’t do anything about it.

Sam Lantinga wrote:

Just FYI, SDL’s 3D support really is designed to just set up a window and
a context and let you go to town with OpenGL. If you’re doing that, you
probably shouldn’t be using much of SDL’s other video functionality at all, and
that’s by design.

The basic philosophy is: Set it up and get out of the way.

Setting up a window and context is just about all the OpenGL stuff you want
SDL to do and it already does that. The only thing I can possibly think of
that I’d like to see in the next SDL’s OpenGL section, is an easier way to
load SDL_Surfaces into textures.

I think I’ve proven the point above about just how bad SDL_Surfaces really
are for OpenGL. I see two solutions to the problem:

  1. Add a Uint32 field for a (simple) format description. This should be
    set by SDL_CreateRGBSurface to 0 for unknown, unless SDL recognizes
    the arguments as being a common format. This won’t fix everything of
    course, but it will let an OpenGL coder know when he needs to fix them
    himself. The advantage is simplicity, though it would involve ABI
    incompatibility and a necessary soname bump and other unfortunate
    things. If ABI breaks, it’s probably better to fix the problem more
    completely.

  2. Add some method of telling SDL and companion libs what surface formats
    we can use. This is a better solution, but requires API and ABI
    changes, a cooperative effort among developers, and really feels like
    too big a problem for one person such as myself not actively involved
    in SDL’s development to try and solve.
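
Purely as an illustration of the first option, such a field might be nothing
more than a small tag plus a mapping to an OpenGL upload format; the names
below are invented, not a proposed SDL API:

#include <GL/gl.h>

#ifndef GL_BGRA_EXT
#define GL_BGRA_EXT 0x80E1
#endif

/* Hypothetical 'simple format' tags a surface could carry. */
enum simplefmt {
    SIMPLEFMT_UNKNOWN = 0,   /* SDL could not classify it; convert first */
    SIMPLEFMT_RGB24,
    SIMPLEFMT_RGBA32,
    SIMPLEFMT_BGRA32
};

/* Map a tag to the matching OpenGL external format, or 0 if unknown. */
static GLenum simplefmt_to_gl (enum simplefmt f)
{
    switch (f) {
    case SIMPLEFMT_RGB24:  return GL_RGB;
    case SIMPLEFMT_RGBA32: return GL_RGBA;
    case SIMPLEFMT_BGRA32: return GL_BGRA_EXT;
    default:               return 0;
    }
}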

Also I don’t have any problem with killing off the current OPENGLBLIT.
The docs don’t say “This option is kept for compatibility only, and is
not recommended for new code.” for nothing. ;) But I’d also be in favor
of a brand new OPENGLBLIT as a 2D renderer as is being discussed in this
thread.

I’m not convinced that’s a good idea yet. The software code in SDL is not
much bloat in OpenGL if you don’t use it, but this could prove to be a
different story altogether. In order for this to be effective, SDL will
have to incorporate an OpenGL texture manager and cope with hardware
limits in the OpenGL drivers and older hardware (some cards have a 256x256
max limit on textures, some have a limited number of texture slots, etc)
and it all sounds a bit high level for SDL. This would best be done by a
companion lib I think.

On Thu, Feb 21, 2002 at 06:09:21PM +0100, Dirk Gerrits wrote:


Joseph Carter You expected a coherent reply?

“As you journey through life take a minute every now and then to give a
thought for the other fellow. He could be plotting something.”
– Hagar the Horrible


Don’t really know about this, I use SDL primarily for setting up the OpenGL
context, input and audio. I don’t really use any SDL_Surface functions at
the moment because I use my own Win32 texture loaders. But I think I will
change to SDL_image soon and then this might pose a problem.

There is a function you can write to use with SDL_image and SDL_ttf. You
just translate the textures from the format these give you to a known
uploadable format like RGBA. (Actually, most OpenGL implementations
support RGB or BGR with alpha either before or after.) You can attempt to
figure out if an SDL_Surface needs no conversion and upload it wholesale,
but doing THAT is just a mess. I was going to code a function to do it
that actually compiled, but I don’t have all night, so you’ll have to
settle for the snippet version:

[cut]

Maybe I’m not even doing that right with the surf->format checks; I didn’t
look, and it’s not taken from working code. Finding out what the format of
a surface ACTUALLY IS takes almost as much real processing time as just
assuming the worst (that the surface must be converted to RGBA for maximum
compatibility with all OpenGL versions) and doing that. It’s certainly
less ugly that way.

It’s of course very nice that you’re sharing this with the rest of us but I
can write my own. You did give me some ideas though.

The simple truth is that SDL_Surface is an unreasonable texture format for
OpenGL applications. There are just too many damned variables to work
with unless you already know exactly what they are. And the only function
found in SDL or companions which lets you do this is SDL_ConvertSurface.
Another point: most 3D-accelerated cards for the Wintel platform store
their graphics in BGRA format natively, so you’re essentially wasting CPU
power converting them twice.

An SDL_Surface is simply very generic. (Actually it’s really the
SDL_PixelFormat that is very generic.) It can handle practically anything.
I think this is a very good thing, even though it complicates matters in
many cases.

It gets better because targeting BGRA will cause double-conversion for
other people. You can either try to figure out what hardware you have and
use the correct conversion for the native hardware or you can just accept
that you can’t do anything about it.

As for OpenGL textures, yes SDL_ConvertSurface is probably necessary in
80% of all cases, but it’s not necessary to convert BGRA to RGBA and back
again for OpenGL 1.2 and above. (Or OpenGL 1.1/1.0 with GL_EXT_bgra.)
OpenGL 1.2 also adds other pixelformats that can be loaded
(GL_EXT_packed_pixels). However, when those are not supported I guess you’ll
almost always need SDL_ConvertSurface.
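
For instance, with GL_EXT_bgra (or the GL_BGRA format promoted into OpenGL
1.2) a 32-bit BGRA surface could in principle be handed to the driver as-is;
a sketch, with the constant spelled out in case the headers predate 1.2:

#include <GL/gl.h>
#include "SDL.h"

#ifndef GL_BGRA_EXT
#define GL_BGRA_EXT 0x80E1
#endif

/* Upload a 32-bit BGRA surface without converting it on the CPU first. */
static void upload_bgra (SDL_Surface *surf)
{
    glPixelStorei (GL_UNPACK_ALIGNMENT, 1);
    glTexImage2D (GL_TEXTURE_2D, 0, GL_RGBA, surf->w, surf->h, 0,
                  GL_BGRA_EXT, GL_UNSIGNED_BYTE, surf->pixels);
}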

Setting up a window and context is just about all the OpenGL stuff you want
SDL to do and it already does that. The only thing I can possibly think of
that I’d like to see in the next SDL’s OpenGL section, is an easier way to
load SDL_Surfaces into textures.

I think I’ve proven the point above about just how bad SDL_Surfaces really
are for OpenGL. I see two solutions to the problem:

  1. Add a Uint32 field for a (simple) format description. This should be
    set by SDL_CreateRGBSurface to 0 for unknown, unless SDL recognizes
    the arguments as being a common format. This won’t fix everything of
    course, but it will let an OpenGL coder know when he needs to fix them
    himself. The advantage is simplicity, though it would involve ABI
    incompatibility and a necessary soname bump and other unfortunate
    things. If ABI breaks, it’s probably better to fix the problem more
    completely.

Seems like a good idea. Although it is relatively easy to write your own
function that encodes a pixel format in such a Uint32.

  2. Add some method of telling SDL and companion libs what surface formats
    we can use. This is a better solution, but requires API and ABI
    changes, a cooperative effort among developers, and really feels like
    too big a problem for one person such as myself not actively involved
    in SDL’s development to try and solve.

I’m not really sure this solution would be all that much better. But it
would improve texture loading performance.

Also I don’t have any problem with killing off the current OPENGLBLIT.
The docs don’t say “This option is kept for compatibility only, and is
not recommended for new code.” for nothing. ;) But I’d also be in favor
of a brand new OPENGLBLIT as a 2D renderer as is being discussed in this
thread.

I’m not convinced that’s a good idea yet. The software code in SDL is not
much bloat in OpenGL if you don’t use it, but this could prove to be a
different story altogether. In order for this to be effective, SDL will
have to incorporate an OpenGL texture manager and cope with hardware
limits in the OpenGL drivers and older hardware (some cards have 256x256
max limit on textures, some have a limited number of texture slots, etc)
and it all sounds a bit high level for SDL. This would best be done by a
companion lib I think.

I really don’t know what more texture management would need to be done,
other than storing a Uint32 OpenGL texture ID in SDL_Surfaces.

Supporting older cards and drivers could be a real problem though.

Dirk Gerrits

It’s of course very nice that you’re sharing this with the rest of us but I
can write my own. You did give me some ideas though.

It was necessary. =| Every time someone has posted something similar before,
someone would complain that it didn’t work right. Well, mine works right.
It’s a total bitch to use, but it DOES work.

The simple truth is that SDL_Surface is an unreasonable texture format for
OpenGL applications. There are just too many damned variables to work
with unless you already know exactly what they are. And the only function
found in SDL or companions which lets you do this is SDL_ConvertSurface.
Another point: most 3D-accelerated cards for the Wintel platform store
their graphics in BGRA format natively, so you’re essentially wasting CPU
power converting them twice.

An SDL_Surface is simply very generic. (Actually it’s really the
SDL_PixelFormat that is very generic.) It can handle practically anything.
I think this is a very good thing, even though it complicates matters in
many cases.

The pixel format definition is the whole of the problem. While you can
trivially generate that structure from an OpenGL format description, you
cannot go the other direction. Problem is that for an OpenGL coder, that
is exactly what you need to do!

It gets better because targeting BGRA will cause double-conversion for
other people. You can either try to figure out what hardware you have and
use the correct conversion for the native hardware or you can just accept
that you can’t do anything about it.

As for OpenGL textures, yes SDL_ConvertSurface is probably necessary in
80% of all cases, but it’s not necessary to convert BGRA to RGBA and back
again for OpenGL 1.2 and above. (Or OpenGL 1.1/1.0 with GL_EXT_bgra.)
OpenGL 1.2 also adds other pixelformats that can be loaded
(GL_EXT_packed_pixels). However, when those are not supported I guess you’ll
almost always need SDL_ConvertSurface.

Do you think your shiny NVIDIA card takes everything from 2 to 10 bits per
color channel in RGB, BGR, RGBA, ARGB, BGRA, and ABGR formats as well as
the various alpha, luminance, other formats supported? I don’t know
exactly which of those formats it does support, but I can guess. I can
guess that it takes only ABGR and I’d probably be right. The driver may
take other formats, but will have to convert them before it can upload.

You can’t really be expected to figure out the native format the hardware
uses. But by using SDL_ConvertSurface, you’re guaranteeing that all of
your textures will be converted at least once, quite probably twice. That
is simply a waste of CPU.

I think I’ve proven the point above about just how bad SDL_Surfaces really
are for OpenGL. I see two solutions to the problem:

  1. Add a Uint32 field for a (simple) format description. This should be
    set by SDL_CreateRGBSurface to 0 for unknown, unless SDL recognizes
    the arguments as being a common format. This won’t fix everything of
    course, but it will let an OpenGL coder know when he needs to fix them
    himself. The advantage is simplicity, though it would involve ABI
    incompatibility and a necessary soname bump and other unfortunate
    things. If ABI breaks, it’s probably better to fix the problem more
    completely.

Seems like a good idea. Although it is relatively easy to write your own
function that encodes a pixel format in such a Uint32.

It’s not really very easy or very efficient. The code is also pretty ugly
looking if it is at all complete.

  2. Add some method of telling SDL and companion libs what surface formats
    we can use. This is a better solution, but requires API and ABI
    changes, a cooperative effort among developers, and really feels like
    too big a problem for one person such as myself not actively involved
    in SDL’s development to try and solve.

I’m not really sure this solution would be all that much better. But it
would improve texture loading performance.

My theory is that it might work well when combined with something like the
first solution. A way to tell SDL that you need it to keep the surfaces
simple combined with an easier way of identifying the format of a simple
surface is exactly what an OpenGL coder needs. Maybe something similar to
the mechanism used now for audio contexts? You tell SDL what you need and
it tries to match it as best it can… Don’t know how you’d handle the
case where you NEED a particular format and can’t get it, though.
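
For comparison, the audio mechanism referred to is SDL_OpenAudio’s
desired/obtained pair, which works like this:

#include <stdio.h>
#include <string.h>
#include "SDL.h"

/* Placeholder callback: just write silence. */
static void fill_audio (void *userdata, Uint8 *stream, int len)
{
    memset (stream, 0, len);
}

static int open_audio (void)
{
    SDL_AudioSpec desired, obtained;

    desired.freq = 44100;          /* what you want... */
    desired.format = AUDIO_S16;
    desired.channels = 2;
    desired.samples = 1024;
    desired.callback = fill_audio;
    desired.userdata = NULL;

    if (SDL_OpenAudio (&desired, &obtained) < 0) {
        fprintf (stderr, "couldn't open audio: %s\n", SDL_GetError ());
        return -1;
    }
    /* ...and 'obtained' tells you what you actually got. */
    return 0;
}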

I’m not convinced that’s a good idea yet. The software code in SDL is not
much bloat in OpenGL if you don’t use it, but this could prove to be a
different story altogether. In order for this to be effective, SDL will
have to incorporate an OpenGL texture manager and cope with hardware
limits in the OpenGL drivers and older hardware (some cards have 256x256
max limit on textures, some have a limited number of texture slots, etc)
and it all sounds a bit high level for SDL. This would best be done by a
companion lib I think.

I really don’t know what more texture management would need to be done,
other than storing a Uint32 OpenGL texture ID in SDL_Surfaces.

I suppose you probably could get by with that for 2D stuff. It won’t work if
you need to mipmap the textures and be able to re-upload them, but you
won’t need to mipmap 2D stuff anyway.

Supporting older cards and drivers could be a real problem though.

It always is. =p

On Fri, Feb 22, 2002 at 02:14:14PM +0100, Dirk Gerrits wrote:


Joseph Carter Sanity is counterproductive

I would rather spend 10 hours reading someone else’s source code than 10
minutes listening to Musak waiting for technical support which isn’t.
– Dr. Greg Wettstein, Roger Maris Cancer Center


An SDL_Surface is simply very generic. (Actually it’s really the
SDL_PixelFormat that is very generic.) It can handle practically anything.
I think this is a very good thing, even though it complicates matters in
many cases.

The pixel format definition is the whole of the problem. While you can
trivially generate that structure from an OpenGL format description, you
cannot go the other direction. Problem is that for an OpenGL coder, that
is exactly what you need to do!

Good points.

It gets better because targeting BGRA will cause double-conversion for
other people. You can either try to figure out what hardware you have and
use the correct conversion for the native hardware or you can just accept
that you can’t do anything about it.

As for OpenGL textures, yes SDL_ConvertSurface is probably necessary in
80% of all cases, but it’s not necessary to convert BGRA to RGBA and back
again for OpenGL 1.2 and above. (Or OpenGL 1.1/1.0 with GL_EXT_bgra.)
OpenGL 1.2 also adds other pixelformats that can be loaded
(GL_EXT_packed_pixels). However, when those are not supported I guess you’ll
almost always need SDL_ConvertSurface.

Do you think your shiny NVIDIA card takes everything from 2 to 10 bits per
color channel in RGB, BGR, RGBA, ARGB, BGRA, and ABGR formats as well as
the various alpha, luminance, other formats supported? I don’t know
exactly which of those formats it does support, but I can guess. I can
guess that it takes only ABGR and I’d probably be right. The driver may
take other formats, but will have to convert them before it can upload.

First, I don’t have a shiny NVIDIA card; mine is an ATi Radeon 8500.
Second, no, it probably does not support loading of all pixel formats in
hardware. But the driver does, and the driver’s conversion routines are most
likely a lot faster than anything you’d write yourself. For example, my
driver is optimized for MMX, 3DNow! and WinXP because I’m running WinXP on
an Athlon. These could be used in the drivers to optimize the conversion to
the max.

You can’t really be expected to figure out the native format the hardware
uses. But by using SDL_ConvertSurface, you’re guaranteeing that all of
your textures will be converted at least once, quite probably twice. That
is simply a waste of CPU.

Quite true, sadly. But if the surface can be loaded into OpenGL without a
ConvertSurface, then only the driver will have to perform a convert. (Much
more efficient.)

I think I’ve proven the point above about just how bad SDL_Surfaces really
are for OpenGL. I see two solutions to the problem:

  1. Add a Uint32 field for a (simple) format description. This should be
    set by SDL_CreateRGBSurface to 0 for unknown, unless SDL recognizes
    the arguments as being a common format. This won’t fix everything of
    course, but it will let an OpenGL coder know when he needs to fix them
    himself. The advantage is simplicity, though it would involve ABI
    incompatibility and a necessary soname bump and other unfortunate
    things. If ABI breaks, it’s probably better to fix the problem more
    completely.

Seems like a good idea. Although it is relatively easy to write your own
function that encodes a pixel format in such a Uint32.

It’s not really very easy or very efficient. The code is also pretty ugly
looking if it is at all complete.

I didn’t have a compiler handy at the time I wrote that. I don’t know what I
was thinking. :) Indeed this is not very trivial at all. But I think it can
be done.

I was thinking about using a function like: Uint8 CountBits(Uint32 bitarray)
to count the bits in each of the individual masks. That would determine the
number of bits per color channel. If these do not conform to anything OpenGL
supports, we can set the field to 0 and stop. If they do conform, we still
have to check the shifts to check the alignment and order of the color
channels.
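
A sketch of what that check could look like; illustrative only, and the
little-endian assumption at the end is exactly the sort of detail that
makes this messy:

#include "SDL.h"

/* Count the set bits in a channel mask. */
static Uint8 CountBits (Uint32 mask)
{
    Uint8 n = 0;
    while (mask) {
        n += (Uint8)(mask & 1);
        mask >>= 1;
    }
    return n;
}

/* Very rough test for an 8-bits-per-channel RGBA-style surface. */
static int looks_like_rgba8888 (const SDL_PixelFormat *fmt)
{
    if (fmt->BytesPerPixel != 4 || fmt->BitsPerPixel != 32)
        return 0;
    if (CountBits (fmt->Rmask) != 8 || CountBits (fmt->Gmask) != 8 ||
        CountBits (fmt->Bmask) != 8 || CountBits (fmt->Amask) != 8)
        return 0;
    /* On a little-endian machine, R in the low bits means the bytes sit in
     * memory as R, G, B, A. */
    return fmt->Rshift < fmt->Bshift;
}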

BTW, do you happen to know what role {RGBA}loss would play in this?

  2. Add some method of telling SDL and companion libs what surface formats
    we can use. This is a better solution, but requires API and ABI
    changes, a cooperative effort among developers, and really feels like
    too big a problem for one person such as myself not actively involved
    in SDL’s development to try and solve.

I’m not really sure this solution would be all that much better. But it
would improve texture loading performance.

My theory is that it might work well when combined with something like the
first solution. A way to tell SDL that you need it to keep the surfaces
simple combined with an easier way of identifying the format of a simple
surface is exactly what an OpenGL coder needs. Maybe something similar to
the mechanism used now for audio contexts? You tell SDL what you need and
it tries to match it as best it can… Don’t know how you’d handle the
case where you NEED a particular format and can’t get it, though.

Ah, I think I misunderstood your initial explanation. Yes, this does sound
like a very good idea. But you do realize that that would not decrease the
number of converts, don’t you? It just simplifies the matter for the OpenGL
coder because now IMG_Load (for example) converts the loaded images for him.
But I guess the converts can’t be avoided, so having OpenGL do them would be
cool.

I’m not convinced that’s a good idea yet. The software code in SDL is not
much bloat in OpenGL if you don’t use it, but this could prove to be a
different story altogether. In order for this to be effective, SDL will
have to incorporate an OpenGL texture manager and cope with hardware
limits in the OpenGL drivers and older hardware (some cards have 256x256
max limit on textures, some have a limited number of texture slots, etc)
and it all sounds a bit high level for SDL. This would best be done by a
companion lib I think.

I really don’t know what more texture management would need to be done,
other than storing a Uint32 OpenGL texture ID in SDL_Surfaces.

I suppose you probably could get by with that for 2D stuff. It won’t work if
you need to mipmap the textures and be able to re-upload them, but you
won’t need to mipmap 2D stuff anyway.

Exactly.

Supporting older cards and drivers could be a real problem though.

It always is. =p

Very true. :)

BTW, I do think 2D OpenGL SDL rendering will be problematic when mixed with
OpenGL rendering by the programmer. (SDL-OpenGL states interfering with
programmer-OpenGL states.) So setting up an OpenGL context for your own use
and having SDL render using OpenGL would probably need to be two different
capabilities.

What do you think?

Dirk Gerrits

Ok, so now I’ve got the sdl_net example to compile. I
was wondering if any of you had any recommendations as
for what I should be reading before trying to make
a multiplayer game.

Thanks,
Herbert

=====
Game Programming Groups
VS Entertainment(Houston game dev group, looking for members)
IGP(Internet game programming group, looking for members also)
Ask me for details…



[… Even shiny NVIDIA cards don’t support all pixel formats …]

First, I don’t have a shiny NVIDIA card; mine is an ATi Radeon 8500.
Second, no, it probably does not support loading of all pixel formats in
hardware. But the driver does, and the driver’s conversion routines are most
likely a lot faster than anything you’d write yourself. For example, my
driver is optimized for MMX, 3DNow! and WinXP because I’m running WinXP on
an Athlon. These could be used in the drivers to optimize the conversion to
the max.

Well if my shiny NVIDIA card doesn’t support everything, I know for
certain that your shiny ATI doesn’t either. ;) GeForce two-thirds and
all of that. (Don’t ask, read the sig, my sig picker is an AI.) There is
a potential for driver optimization, though.

You can’t really be expected to figure out the native format the hardware
uses. But by using SDL_ConvertSurface, you’re guaranteeing that all of
your textures will be converted at least once, quite probably twice. That
is simply a waste of CPU.

Quite true, sadly. But if the surface can be loaded into OpenGL without a
ConvertSurface, then only the driver will have to perform a convert. (Much
more efficient.)

Indeed, which was my original point - that doing so is nontrivial to the
point that few who try manage to get it right. Loading textures into
OpenGL is a raindrop in the ocean of OpenGL. I can imagine most people
getting frustum calculation wrong, or some other complex thing with only a
few arbitrary guidelines to help you figure out what numbers to use. But
this isn’t one of those things.

Seems like a good idea. Although it is relatively easy to write your own
function that encodes a pixel format in such a Uint32.

It’s not really very easy or very efficient. The code is also pretty ugly
looking if it is at all complete.

I didn’t have a compiler handy at the time I wrote that. I don’t know what I
was thinking. :) Indeed this is not very trivial at all. But I think it can
be done.

Indeed it’s not.

I was thinking about using a function like: Uint8 CountBits(Uint32 bitarray)
to count the bits in each of the individual masks. That would determine the
number of bits per color channel. If these do not conform to anything OpenGL
supports, we can set the field to 0 and stop. If they do conform, we still
have to check the shifts to check the alignment and order of the color
channels.

BTW, do you happen to know what role {RGBA}loss would play in this?

In this? None at all. {RGBA}shift also has none. You only need the bits
and bytes per pixel and the masks. The masks contain loss and shift info
embedded.

My theory is that it might work well when combined with something like the
first solution. A way to tell SDL that you need it to keep the surfaces
simple combined with an easier way of identifying the format of a simple
surface is exactly what an OpenGL coder needs. Maybe something similar to
the mechanism used now for audio contexts? You tell SDL what you need and
it tries to match it as best it can… Don’t know how you’d handle the
case where you NEED a particular format and can’t get it, though.

Ah, I think I misunderstood your initial explanation. Yes, this does sound
like a very good idea. But you do realize that that would not decrease the
number of converts, don’t you? It just simplifies the matter for the OpenGL
coder because now IMG_Load (for example) converts the loaded images for him.
But I guess the converts can’t be avoided, so having OpenGL do them would be
cool.

Yes it would. I am assuming from the outset that SOME image processing is
necessary regardless of the format on disk. I did not count it as a third
conversion because that conversion is overshadowed by the disk I/O. The
time spent waiting for the disk to give you the file is greater than that
to convert the image, so it’s essentially inconsequential.

And the merit of having SDL_Surfaces work in OpenGL transparently cannot
easily be overlooked. I don’t see a reason to store the texid in the
structure myself except for your suggested 2D OpenGL interface layer as I
happen to need a bit more advanced texture system myself. I’m going to
drift a little off topic to explain what I mean, so the majority of the
list not interested in the mundanity of OpenGL texture management should
probably skip past this part… =)

The system I’m currently toying with is not a lightweight texture manager
to say the least. It supports caching, garbage collection, palette tricks
on textures which have them, and reuploading all textures after a context
switch. Not bad eh? Here’s the structure:

typedef struct texture_s
{
    char        *search;     /* relative name fed to the resource finder */
    char        *name;       /* filename the texture was loaded from */
    image_t     *img;        /* usually NULL; kept for palette tricks */
    GLuint      texid;
    GLuint      miplevels;
    int         seq;         /* sequence number, used as a refcount for GC */
    Uint32      flags;       /* rendering flags */
} texture_t;

Some of this is pretty obvious, but the stuff that’s not deserves a little
explanation. The name field holds the filename from which this texture
was loaded. The search field is what we actually feed to our resource
finder function. Basically, it is a relative filename, usually (currently
always) without extension. If we ever need to reload the file only if it
changed (ie, loading a mod or something) we’ll feed search to Image_Find
and compare it to name. No match, reload the texture.

The img field is usually NULL. This is how we do palette tricks quickly,
without fiddling with the disks. Any time an image_t is uploaded, it’s
either freed or its pointer is set in the texture_t for garbage collection
reasons. Flags currently holds information about rendering the textures,
but that can be expected to change.

Seq is a refcount, more garbage collection! =) When you begin loading a
map, the global sequence number is incremented as you’d expect. Each time
a texture is requested, you recheck it as above and update its sequence
number. Just before the game starts, you make a quick run through all of
your textures and clear out the unneeded ones.

Quake 2 uses pretty much the same mechanism. Quake never deletes OpenGL
textures and leaks like a sieve. Quake 3, IIRC, has a shader manager, not
a texture manager. I’d have to look to be sure, but I believe it throws
out all shaders not currently in use and the textures with them. This and
that Q3A shaders are actually compiled at load-time makes a partial
explanation for why it takes so long to load a level in that game.

BTW, I do think 2D OpenGL SDL rendering will be problematic when mixed with
OpenGL rendering by the programmer. (SDL-OpenGL states interfering with
programmer-OpenGL states.) So setting up an OpenGL context for your own use
and having SDL render using OpenGL would probably need to be two different
capabilities.

What do you think?

I think you can get away with that, actually. Most of the user’s states
can be saved. It will require one of three approaches though. One, the 2D
functions will push the identity matrix, save the states, do their
thing, and then restore everything. SLOW! Two, the code can be called
after a slightly more intelligent SDL_GL_Enter2DMode which will save all
of that stuff once only, and you must remember to call SDL_GL_Leave2DMode
when you’re done to set everything back to normal. Or three, you can make
SDL behave a bit Do-What-I-Mean-ish and queue the SDL commands until the
SDL function which does the glFinish for you is called, at which point all
of that happens for you.
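
A sketch of what the second option’s pair of calls might look like; the
names come from the discussion above and the details are illustrative only:

#include <GL/gl.h>

/* Save the relevant state once and set up an orthographic view; every 2D
 * call between Enter and Leave can then skip the save/restore. */
static void SDL_GL_Enter2DMode (int width, int height)
{
    glPushAttrib (GL_ENABLE_BIT);
    glDisable (GL_DEPTH_TEST);
    glDisable (GL_CULL_FACE);
    glEnable (GL_TEXTURE_2D);

    glMatrixMode (GL_PROJECTION);
    glPushMatrix ();
    glLoadIdentity ();
    glOrtho (0.0, width, height, 0.0, 0.0, 1.0);

    glMatrixMode (GL_MODELVIEW);
    glPushMatrix ();
    glLoadIdentity ();
}

/* Put everything back the way the caller had it. */
static void SDL_GL_Leave2DMode (void)
{
    glMatrixMode (GL_MODELVIEW);
    glPopMatrix ();
    glMatrixMode (GL_PROJECTION);
    glPopMatrix ();
    glPopAttrib ();
}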

Being one for moderation, the second option seems correct to me, though
the functions could recognize when they have been called outside of the 2D
mode in which case they’d act like they do in the first option, which
means they’ll work, but slowly. The DWIM approach requires the least
thought on the part of the coder who doesn’t want to learn how to do 2D in
OpenGL properly, but violates the principle of least surprise when you are
expecting a glFinish and glXSwapBuffers and you wind up with something
very different. ;)

On Fri, Feb 22, 2002 at 05:16:21PM +0100, Dirk Gerrits wrote:


Joseph Carter Don’t feed the sigs

add a GF2/3, a sizable hard drive, and a 15" flat panel and
you’ve got a pretty damned portable machine.
a GeForce Two-Thirds?
Coderjoe: yes, a GeForce two-thirds, ie, any card from ATI.


I was thinking about using a function like: Uint8 CountBits(Uint32 bitarray)
to count the bits in each of the individual masks. That would determine the
number of bits per color channel. If these do not conform to anything OpenGL
supports, we can set the field to 0 and stop. If they do conform, we still
have to check the shifts to check the alignment and order of the color
channels.

BTW, do you happen to know what role {RGBA}loss would play in this?

In this? None at all. {RGBA}shift also has none. You only need the bits
and bytes per pixel and the masks. The masks contain loss and shift info
embedded.

Actually, shift does play a role in figuring out the order of the color
channels. (RGBA, BGRA, etc.) But indeed, that could be done with the masks
as well.

And I just figured out the role of loss by reading through the SDL source
code. Instead of actually counting the bits in the mask, it’s much more
efficient and easy to do: numRedbits = 8 - Rloss; etc. Because that is all
that loss really is, the amount of bits that are lost compared to a full 8
bits. (I think the doc project doesn’t really make this clear.)
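
In code that boils down to something like this trivial sketch:

#include "SDL.h"

/* Bits per channel straight from the loss fields, as described above. */
static void channel_depths (const SDL_PixelFormat *fmt,
                            int *r, int *g, int *b, int *a)
{
    *r = 8 - fmt->Rloss;
    *g = 8 - fmt->Gloss;
    *b = 8 - fmt->Bloss;
    *a = 8 - fmt->Aloss;
}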

Ah, I think I misunderstood your initial explanation. Yes, this does sound
like a very good idea. But you do realize that that would not decrease the
number of converts, don’t you? It just simplifies the matter for the OpenGL
coder because now IMG_Load (for example) converts the loaded images for him.
But I guess the converts can’t be avoided, so having OpenGL do them would be
cool.

Yes it would. I am assuming from the outset that SOME image processing is
necessary regardless of the format on disk. I did not count it as a third
conversion because that conversion is overshadowed by the disk I/O. The
time spent waiting for the disk to give you the file is greater than that
to convert the image, so it’s essentially inconsequential.

That is probably true. But converts can still take dozens of seconds. For
lots of textures, this is not to be underestimated.

And the merit of having SDL_Surfaces work in OpenGL transparently cannot
easily be overlooked. I don’t see a reason to store the texid in the
structure myself except for your suggested 2D OpenGL interface layer as I
happen to need a bit more advanced texture system myself. I’m going to
drift a little off topic to explain what I mean, so the majority of the
list not interested in the mundanity of OpenGL texture management should
probably skip past this part… =)

Well for your own OpenGL stuff with texture management and all that you
might not even want to keep the SDL_Surface at all. :) But the texid storing
is something I only meant for the OpenGL-SDL rendering anyway. It should
not even be used by the programmer, only by SDL.

The system I’m currently toying with is not a lightweight texture manager
to say the least. It supports caching, garbage collection, palette tricks
on textures which have them, and reuploading all textures after a context
switch. Not bad eh? Here’s the structure:

typedef struct texture_s
{
    char        *search;
    char        *name;
    image_t     *img;
    GLuint      texid;
    GLuint      miplevels;
    int         seq;
    Uint32      flags;
} texture_t;

Some of this is pretty obvious, but the stuff that’s not deserves a little
explanation. The name field holds the filename from which this texture
was loaded. The search field is what we actually feed to our resource
finder function. Basically, it is a relative filename, usually (currently
always) without extension. If we ever need to reload the file only if it
changed (ie, loading a mod or something) we’ll feed search to Image_Find
and compare it to name. No match, reload the texture.

The img field is usually NULL. This is how we do palette tricks quickly,
without fiddling with the disks. Any time an image_t is uploaded, it’s
either freed or its pointer is set in the texture_t for garbage collection
reasons. Flags currently holds information about rendering the textures,
but that can be expected to change.

Seq is a refcount, more garbage collection! =) When you begin loading a
map, the global sequence number is incremented as you’d expect. Each time
a texture is requested, you recheck it as above and update its sequence
number. Just before the game starts, you make a quick run through all of
your textures and clear out the unneeded ones.

Quake 2 uses pretty much the same mechanism. Quake never deletes OpenGL
textures and leaks like a sieve. Quake 3, IIRC, has a shader manager, not
a texture manager. I’d have to look to be sure, but I believe it throws
out all shaders not currently in use and the textures with them. This and
that Q3A shaders are actually compiled at load-time makes a partial
explanation for why it takes so long to load a level in that game.

Pretty interesting system you have there. Mine is currently far less complex
but I plan to create a new system for my next project. It’s going to be very
object-oriented. For example, I already have classes like this:

(arrows mean inheritance, classname in a classname means containment)
(I hope you’re viewing this with a fixed pitch font ;)

[The ASCII class diagram did not survive the archive. Roughly: CTexture
holds a CInternalTexture* (which carries the GLuint texID, the filename and
a Uint32 refcount); CTexture2D and CTextureCubemap derive from CTexture;
CTexture2D_Renderable derives from CTexture2D and holds a
CTexture2D_Renderable_Impl, with CTexture2D_Renderable_WGL_pbuffer and
CTexture2D_Renderable_generic as the concrete implementations.]

etc.

The CInternalTextures are going to be managed by a texture manager not
unlike yours. Meshes will probably either contain a shader class, or a
texture class. If I choose a shader class, then the shader class will
contain one or more texture classes. The texture classes all have loading,
creation, deletion, etc. functions. These are to be called by the texture
manager only, I guess. Anyway, the system’s design still needs work, but
to me it looks very promising. :) (And the CTexture2D class is already
working great.)

BTW, I do think 2D OpenGL SDL rendering will be problematic when mixed with
OpenGL rendering by the programmer. (SDL-OpenGL states interfering with
programmer-OpenGL states.) So setting up an OpenGL context for your own use
and having SDL render using OpenGL would probably need to be two different
capabilities.

What do you think?

I think you can get away with that actually. Most of the user’s states
can be saved. It will require one of three states though. One, the 2D
functions will push the identity matrix, and save the states, do their
thing, and then restore everything. SLOW! Two, the code can be called
after a slightly more intelligent SDL_GL_Enter2DMode which will save all
of that stuff once only, and you must remember to SDL_GL_Leave2DMode when
you’re done to set everything back to normal. Or three, you can make SDL
perform a bit Do What I Mean-ish and queue the SDL commands until the SDL
function which does the glFinish for you is called, at which point all of
that happens for you.

Being one for moderation, the second option seems correct to me, though
the functions could recognize when they have been called outside of the 2D
mode in which case they’d act like they do in the first option, which
means they’ll work, but slowly. The DWIM approach requires the least
thought on the part of the coder who doesn’t want to learn how to do 2D in
OpenGL properly, but violates the principle of least surprise when you are
expecting a glFinish and glXSwapBuffers and you wind up with something
very different. ;)

That first option is what I meant by problematic. :)

The second seems very nice indeed.

The third would require far too many changes to SDL’s internals I’d think.

add a GF2/3, a sizable hard drive, and a 15" flat panel and
you’ve got a pretty damned portable machine.
a GeForce Two-Thirds?
Coderjoe: yes, a GeForce two-thirds, ie, any card from ATI.

Haha, cool quote. BTW, a standard Radeon (not the 8500) is more like a
GeForce 2 1/2 since it has 3 texture units. ;P

Dirk Gerrits

PS I think we’re going a bit off-topic here. Maybe you would like to continue
our discussion through normal email instead of the mailing-list?
My email is @Dirk_Gerrits if you’re interested.

That is probably true. But converts can still take dozens of seconds. For
lots of textures, this is not to be underestimated.

But no longer than the actual disk loads. If the converts will take
several seconds, then the disk loads also took several seconds, and if
they were happening at the same time, the net time spent converting that
was not also taken up by loading is virtually none. The time to load
cannot be avoided, and must be taken as a given or ignored outright,
depending on your design criteria and needs.

And the merit of having SDL_Surfaces work in OpenGL transparently cannot
easily be overlooked. I don’t see a reason to store the texid in the
structure myself except for your suggested 2D OpenGL interface layer as I
happen to need a bit more advanced texture system myself. I’m going to
drift a little off topic to explain what I mean, so the majority of the
list not interested in the mundanity of OpenGL texture management should
probably skip past this part… =)

Well for your own OpenGL stuff with texture management and all that you
might not even want to keep the SDL_Surface at all. :) But the texid storing
is something I only meant for the OpenGL-SDL rendering anyway. It should
not even be used by the programmer, only by SDL.

Indeed, for the sake of prudence in memory management, RGBA SDL_Surfaces
should almost universally be freed upon upload.

Pretty interesting system you have there. Mine is currently far less complex
but I plan to create a new system for my next project. It’s going to be very
object-oriented. For example, I already have classes like this:

[…]

The CInternalTextures are going to be managed by a texture manager not
unlike yours. Meshes will probably either contain a shader class, or a
texture class. If I choose a shader class, then the shader class will
contain one or more texture classes. The texture classes all have loading,
creation, deletion, etc. functions. These are to be called by the texture
manager only, I guess. Anyway, the system’s design still needs work, but
to me it looks very promising. :) (And the CTexture2D class is already
working great.)

Be wary of overhead in such a system. OpenGL works as well as it does
partially because it keeps things simple. It’s a state machine, as is my
texture system. The merits of one design over another are beyond the
scope of this list certainly, but if you’re careful about how you do the
implementation it could work out fine for most purposes. The exceptions
are a far more interesting topic than we can get away with before we’re
both summarily clubbed senseless for lack of topicality. And besides,
it’s mostly academic anyway.

I think you can get away with that actually. Most of the user’s states
can be saved. It will require one of three approaches though. One, the 2D
functions will push the identity matrix, save the states, do their
thing, and then restore everything. SLOW! Two, the code can be called
after a slightly more intelligent SDL_GL_Enter2DMode which will save all
of that stuff once only, and you must remember to SDL_GL_Leave2DMode when
you’re done to set everything back to normal. Or three, you can make SDL
perform a bit Do What I Mean-ish and queue the SDL commands until the SDL
function which does the glFinish for you is called, at which point all of
that happens for you.

Being one for moderation, the second option seems correct to me, though
the functions could recognize when they have been called outside of the 2D
mode in which case they’d act like they do in the first option, which
means they’ll work, but slowly. The DWIM approach requires the least
thought on the part of the coder who doesn’t want to learn how to do 2D in
OpenGL properly, but violates the principle of least surprise when you are
expecting a glFinish and glXSwapBuffers and you wind up with something
very different. :wink:

That first option is what I meant with problematic. :slight_smile:

The second seems very nice indeed.

The third would require far too many changes to SDL’s internals I’d think.

add a GF2/3, a sizable hard drive, and a 15" flat panel and
you’ve got a pretty damned portable machine.
a GeForce Two-Thirds?
Coderjoe: yes, a GeForce two-thirds, ie, any card from ATI.

Haha cool quote. BTW, a standard Radeon (not the 8500) is more like a
GeForce 2 1/2 since it has 3 texture units. :stuck_out_tongue_winking_eye:

I bought a Radeon 64 VIVO before the new ones were available. The GF3 was
still a thing only the celebrities like John Carmack had the luxury of
playing with. I was led to believe the Radeon X11 drivers were stable and
that the card matched the GF2 performance nicely. It cost more, but the
extra cost was worth it to support free software over NVIDIA’s refusal to
even budge on the open drivers issue.

The performance of the Radeon 64 stank in every test I gave it. My old
Voodoo3 had been faster(!) The drivers were buggy as hell in win32 and it
turns out that the card’s performance sucked even more in Linux without
T&L support. Additionally, the card refused to work on my AMD system,
though it ran on both Intel machines fine. Perhaps the AMD thing was the
recently publicized cache coherency issue, but according to my cpuinfo,
that’s not the case. The DRI team wasn’t very responsive, and I quickly
got frustrated with not getting so much as an acknowledgement that there
was a problem. When an acknowledgement finally came, several weeks later,
it came from outside the DRI project.

Needless to say, ATI’s hardware is unimpressive. Their win32 drivers are
even less impressive, and the business practices embodied in those
drivers less impressive still. And I haven’t got an ounce of faith left
in DRI, given that the exact same people are still in control, and I
still receive requests for help getting things set up and identifying
problems since no real help can be had from the DRI team directly.

And now I’m going even further off topic. =p I feel that as a software
developer, it is my responsibility to handle bug reports in that software
in some manner. If the problem is known, at least I should reply saying
so, explain why if possible, and let the submitter know what to look for
in CVS commits as far as a fix goes. If not, it’s my job to help track
down the problem. If I can’t do that, I need to do whatever I can to help
the user do it. And as project leader, it’s also my responsibility to
make sure that these issues don’t slip through the cracks in case nobody
in the development team was able to give the issue the attention it
deserves.

Now granted, when dealing with volunteers, support mechanisms often take
a back seat to the code itself. But DRI isn’t something a bunch of geeks
just hacked together in late night coding sessions when they should have
been sleeping so they could work on their real jobs. Most of the key
players are employed to work on it. And when they lost that employment,
they went out and started their own company to keep getting paid to work
on it. Maybe this indicates they need to hire someone to help with the
support issues?

Ah well, enough rant for one email. Here’s hoping for positive changes in
the future of Linux graphics development, be it through miraculous NVIDIA
execs suddenly getting thwacked with clue sticks or DRI suddenly becoming
stable, fast, and well-supported.

PS I think we’re going a bit off-topic here. Maybe you would like to
continue our discussion through normal email instead of the mailing-list?

Yeah, we’re headed that way pretty fast. This should probably be the last
list posting; I think I left anything resembling topicality behind up
there somewhere. =p

On Sat, Feb 23, 2002 at 10:55:00AM +0100, Dirk Gerrits wrote:


Joseph Carter Caffiene is a good thing

americans are wierd…
californians even weirder
xtifr has a point …


[…]

So when an OpenGL display mode gets initialized, all SDL hardware
surfaces should be created as 2D textures and blits would be done
with textured quads. Software surfaces should stay in system memory
and be copied into textures when blitting.

That’s what I have in mind too. Making OpenGL a backend (or target)
just like X11, svgalib or fbcon.

Actually I think there should be some changes, maybe a whole different
library, for SDLonOpenGL. A lot of SDL is not needed when using SDL for
OpenGL only, and some other things are. IMHO the whole philosophy for
hardware accelerated 2D through OpenGL is quite different from current
SDL philosophy.

Maybe I’m missing something, but I haven’t noticed any major logical
differences between using, say, h/w accelerated DirectDraw and OpenGL
when hacking glSDL. Besides, the software blitting code and all the other
stuff is still needed, so I don’t really see much point at all in forking
SDL.

…unless we’re talking about an “SDL” version with major API changes. In
that case, I think you’re missing the whole point. SDL-on-OpenGL is all
about boosting the performance of applications using the SDL API
(preferably without even recompiling them), and not at all about creating
a cumbersome and relatively inefficient OpenGL wrapper for 2D rendering.

If I’m going to support anything other than an OpenGL backend for SDL, it
would be a much higher level graphics engine that supports OpenGL as a
native rendering target, with a major part of the added blending and
transformation possibilities exposed - but that’s a completely different
thing. (I’m basically talking about a future version of the graphics
engine I use in Kobo Deluxe - and that’s a design where you throw
graphics and coordinates in, and get smooth, frame rate interpolated
animation out. It’s an engine rather than an imperative rendering API -
more restrictive from the application POV, but much more flexible WRT
internal implementation.)

//David Olofson — Programmer, Reologica Instruments AB

.- M A I A -------------------------------------------------.
| Multimedia Application Integration Architecture |
| A Free/Open Source Plugin API for Professional Multimedia |
`----------------------------> http://www.linuxdj.com/maia -'
.- David Olofson -------------------------------------------.
| Audio Hacker - Open Source Advocate - Singer - Songwriter |
`-------------------------------------> http://olofson.net -'

On Thursday 21 February 2002 14:11, Timo K Suoranta wrote:

So when an OpenGL display mode gets initialized, all SDL hardware

surfaces

should be created as 2D textures and blits would be done with
textured quads. Software surfaces should stay in system memory and
be copied into textures when blitting.

That’s what I have in mind too. Making OpenGL a backend (or target)
just like X11, svgalib or fbcon.

Actually I think there should be some changes, maybe a whole
different library, for SDLonOpenGL. A lot of SDL is not needed when
using SDL for OpenGL only, and some other things are. IMHO the whole
philosophy for hardware accelerated 2D through OpenGL is quite
different from current SDL philosophy.

Here are some ideas to implement SDL graphics techniques in OpenGL:

SDL_HWSURFACE: Pixels are stored as a 2D texture in video memory. The
other attributes remain in system memory.

Locking -> glGetTexImage(GL_TEXTURE_2D, …)
Unlocking -> glTex(Sub)Image2D(…)

I simply keep the original software surfaces as well, to avoid the
dreadful overhead of those “glGet*” calls… Locking is basically a NOP,
whereas unlocking sends the (modified) surface off to the video card.
Simple, and every bit as fast as the procedural textures used in 3D games.
(Well, at least as long as you update the whole texture every time… SDL
API limitation - but glSDL actually has an extension to deal with that.)
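
A rough sketch of that lock/unlock scheme, assuming the backend keeps a
shadow surface plus a texture id per hardware surface. The struct and
names are hypothetical, not glSDL’s actual internals, and the shadow
surface is assumed to be 32-bit RGBA:

    #include "SDL.h"
    #include <GL/gl.h>

    /* Hypothetical per-surface bookkeeping for an OpenGL backend. */
    typedef struct {
        SDL_Surface *shadow;   /* system-memory copy, always kept around */
        GLuint       texid;    /* the matching GL texture                */
    } GLBackedSurface;

    static void gl_lock(GLBackedSurface *s)
    {
        /* Nothing to do: the application writes to s->shadow->pixels,
         * so the glGetTexImage round trip is avoided entirely. */
        (void)s;
    }

    static void gl_unlock(GLBackedSurface *s)
    {
        /* Push the (possibly modified) pixels back to the card. */
        glBindTexture(GL_TEXTURE_2D, s->texid);
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0,
                        s->shadow->w, s->shadow->h,
                        GL_RGBA, GL_UNSIGNED_BYTE, s->shadow->pixels);
    }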

SetColorKey -> Would require getting the RGB texture image, creating
the alpha components based on the colorkey and then storing the texture
back in OpenGL. Blitting with a color key is then the same as blitting
with alpha. This is probably one of the bigger ‘philosophy
differences’.

Yep. glSDL does it that way, and I haven’t seen any problems with it -
although the current implementation might confuse some applications, as
SDL_SetColorKey() actually returns with the surface converted into
RGBA… heh
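
For reference, the colorkey-to-alpha step can be as simple as the loop
below, applied to the in-memory surface before it is uploaded. It assumes
the surface has already been converted to a 32-bit format with an alpha
mask and that pitch == w * 4; a real implementation would handle
arbitrary formats, pitches and locking:

    #include "SDL.h"

    /* Give keyed pixels alpha 0 and everything else alpha 255. */
    static void colorkey_to_alpha(SDL_Surface *rgba, Uint32 key)
    {
        Uint32 *p     = (Uint32 *)rgba->pixels;
        Uint32  amask = rgba->format->Amask;
        int     n     = rgba->w * rgba->h;
        int     i;

        for (i = 0; i < n; ++i) {
            if ((p[i] & ~amask) == (key & ~amask))
                p[i] &= ~amask;   /* keyed pixel  -> fully transparent */
            else
                p[i] |= amask;    /* other pixels -> fully opaque      */
        }
    }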

Blitting -> Simply drawing a textured quad. If an alpha color / an
alpha channel / a colorkey is used, blending should be enabled. I’m not
100% sure which blending equation would best match SDL’s though. I
think it would be glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)

Yep.

Depth testing should of course be disabled and the view should be
orthographic.

Yep.
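
Put together, a blit then reduces to something like this (immediate mode
for clarity; it assumes the blend function, texturing and the y-down
orthographic projection discussed above are already set up, and gl_blit
is just an illustrative name):

    #include <GL/gl.h>

    /* Draw texture 'texid' as a screen-aligned quad at (x, y), size
     * w x h, in destination pixel coordinates. */
    static void gl_blit(GLuint texid, int x, int y, int w, int h)
    {
        glBindTexture(GL_TEXTURE_2D, texid);
        glBegin(GL_QUADS);
            glTexCoord2f(0.0f, 0.0f); glVertex2i(x,     y);
            glTexCoord2f(1.0f, 0.0f); glVertex2i(x + w, y);
            glTexCoord2f(1.0f, 1.0f); glVertex2i(x + w, y + h);
            glTexCoord2f(0.0f, 1.0f); glVertex2i(x,     y + h);
        glEnd();
    }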

It would benefit performance if the blits were queued up.

Yeah. I’ve considered that for glSDL (there’s a comment on it somewhere
in the code), but I have yet to implement it. I was going to do it in
SDL_FlipSurface(), but maybe there are better ways… How about also
doing it when the “blit queue” is about to overflow, instead of
dynamically enlarging the queue? Dunno…

However, do note that this would bring in serious sync issues! Surface
locking must be carefully integrated with this subsystem, or procedural
surfaces/textures will render incorrectly under some circumstances.
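
For what it’s worth, the fixed-size-queue idea might look roughly like
this. Everything here (names, the queue size, flushing on overflow) is
hypothetical, it reuses gl_blit() from the previous sketch, and it
deliberately ignores the locking/sync issues just mentioned:

    #include <GL/gl.h>

    #define BLITQ_SIZE 256            /* arbitrary */

    typedef struct {
        GLuint texid;
        int x, y, w, h;
    } QueuedBlit;

    static QueuedBlit blitq[BLITQ_SIZE];
    static int        blitq_len = 0;

    static void gl_blit(GLuint texid, int x, int y, int w, int h);
                                      /* as in the previous sketch */

    static void blitq_flush(void)
    {
        int i;
        for (i = 0; i < blitq_len; ++i)
            gl_blit(blitq[i].texid, blitq[i].x, blitq[i].y,
                    blitq[i].w, blitq[i].h);
        blitq_len = 0;
    }

    static void blitq_push(GLuint texid, int x, int y, int w, int h)
    {
        if (blitq_len == BLITQ_SIZE)
            blitq_flush();            /* flush on overflow, don't grow */
        blitq[blitq_len].texid = texid;
        blitq[blitq_len].x = x;  blitq[blitq_len].y = y;
        blitq[blitq_len].w = w;  blitq[blitq_len].h = h;
        blitq_len++;
    }
    /* SDL_Flip() and surface locking would also have to call
     * blitq_flush(). */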

SDL_SWSURFACE: Same as in normal SDL. However, when it is being
blitted, it is temporarily stored in texture memory. (This is most
likely very very slow.)

That’s what glSDL does, and it doesn’t seem to be that slow - although
admittedly, I haven’t really tested running broken applications that
never use SDL_DisplayFormat()… It might be worse than I think. :slight_smile:

An alternative is glDrawPixels for blitting it.
(Also VERY slow.)

Yeah, that’s probably slower.

As can be seen, there are indeed some issues that would need resolving,
but I don’t think that the difference in philosophy is so big that it
would require a different library.

If it was, there would be no glSDL - and there would be no reason to
write any SDL-on-OpenGL library at all, IMHO.

I do believe, though, that SDL should not choose OpenGL as a backend by
itself, like it does with X11, DX, etc. A good idea IMHO would be to have
the programmer explicitly choose OpenGL 2D rendering, perhaps with a new
flag in the SDL_SetVideoMode function,

Yes, it probably has to be done that way - but at the same time, it’s in
conflict with the basic motivation for this project: speeding up SDL 2D
applications. There must be a way to run any SDL application on the
OpenGL target. Or rather, there should be a way to disable the OpenGL
target for applications that don’t run well with it.

like SDL_OPENGLBLIT2 or something.

SDL_GLSDL? :wink:
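
From the application’s side, opting in would stay a one-liner. SDL_GLSDL
does not exist in SDL; both the name and the flag value below are made up
purely for illustration:

    #include "SDL.h"

    /* Made-up flag; the value just avoids those SDL 1.2 already uses. */
    #ifndef SDL_GLSDL
    #define SDL_GLSDL 0x00800000
    #endif

    int main(int argc, char *argv[])
    {
        SDL_Surface *screen;

        if (SDL_Init(SDL_INIT_VIDEO) < 0)
            return 1;

        /* Explicitly request the (hypothetical) OpenGL 2D backend. */
        screen = SDL_SetVideoMode(640, 480, 16, SDL_GLSDL | SDL_DOUBLEBUF);
        if (screen == NULL)
            return 1;

        /* ...the rest of the 2D code would be unchanged... */
        SDL_Quit();
        return 0;
    }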

Also, since SDL 1.3 will be a near-full rewrite according to the FAQ,
maybe its architecture will be changed to accommodate OpenGL 2D rendering
more easily?

The problem is that the few parts that differ in important ways do so in
ways that make it very hard to use the same API without ending up with
massive overhead for one of the targets. There are simple hacks like
specifying a rect or list of rects when locking a surface, but that only
covers some of the software rendering situations.

//David Olofson — Programmer, Reologica Instruments AB

.- M A I A -------------------------------------------------.
| Multimedia Application Integration Architecture |
| A Free/Open Source Plugin API for Professional Multimedia |
`----------------------------> http://www.linuxdj.com/maia -'
.- David Olofson -------------------------------------------.
| Audio Hacker - Open Source Advocate - Singer - Songwriter |
`-------------------------------------> http://olofson.net -'

On Thursday 21 February 2002 14:56, Dirk Gerrits wrote:

[…]

Also I don’t have any problem with killing off the current OPENGLBLIT.
The docs don’t say “This option is kept for compatibility only, and is
not recommended for new code.” for nothing. :wink: But I’d also be in
favor of a brand new OPENGLBLIT as a 2D renderer, as is being discussed
in this thread.

I’m not convinced that’s a good idea yet. The software code in SDL is
not much bloat in OpenGL if you don’t use it, but this could prove to
be a different story altogether. In order for this to be effective,
SDL will have to incorporate an OpenGL texture manager and cope with
hardware limits in the OpenGL drivers and older hardware (some cards
have a 256x256 maximum texture size, some have a limited number of
texture slots, etc.), and it all sounds a bit high level for SDL.

Either way, glSDL already does most of this.
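
As a small example of the kind of driver limit such a texture manager has
to respect, the size check is just a glGetIntegerv query; the tiling
itself (which glSDL does handle) is more involved than shown here:

    #include <GL/gl.h>

    /* How many tiles does a w x h surface need under the driver's
     * texture size limit? Real code would also round tile sizes up to
     * powers of two for older hardware. */
    static void tile_counts(int w, int h, int *tiles_x, int *tiles_y)
    {
        GLint max_size = 256;                      /* pessimistic default */

        glGetIntegerv(GL_MAX_TEXTURE_SIZE, &max_size);
        *tiles_x = (w + max_size - 1) / max_size;  /* ceiling division */
        *tiles_y = (h + max_size - 1) / max_size;
    }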

This would best be done by a companion lib I think.

Two problems:
* Doing it that way would result in a messy API that
  effectively duplicates most of the SDL calls.

* It would be quite pointless, as you'd still have to
  *port* SDL applications to use it. Why not just do
  it right, and port them to native OpenGL?

//David Olofson — Programmer, Reologica Instruments AB

.- M A I A -------------------------------------------------.
| Multimedia Application Integration Architecture |
| A Free/Open Source Plugin API for Professional Multimedia |
`----------------------------> http://www.linuxdj.com/maia -'
.- David Olofson -------------------------------------------.
| Audio Hacker - Open Source Advocate - Singer - Songwriter |
`-------------------------------------> http://olofson.net -'

On Friday 22 February 2002 06:52, Joseph Carter wrote:

[…]

Supporting older cards and drivers could be a real problem though.

It always is. =p

Very true. :slight_smile:

BTW, I do think 2D OpenGL SDL rendering will be problematic when mixed
with OpenGL rendering by the programmer. (SDL-OpenGL states interfering
with programmer-OpenGL states.) So setting up an OpenGL context for
your own use and having SDL render using OpenGL would probably need to
be two different capabilities.

What do you think?

Well, mixed 2D and 3D would at least require that both SDL and the
application are aware that the other is using OpenGL as well - I don’t
think it can be done transparently without some overhead. Don’t know how
much, but we’re basically talking about the entire OpenGL state.

//David Olofson — Programmer, Reologica Instruments AB

.- M A I A -------------------------------------------------.
| Multimedia Application Integration Architecture |
| A Free/Open Source Plugin API for Professional Multimedia |
`----------------------------> http://www.linuxdj.com/maia -'
.- David Olofson -------------------------------------------.
| Audio Hacker - Open Source Advocate - Singer - Songwriter |
`-------------------------------------> http://olofson.net -'

On Friday 22 February 2002 17:16, Dirk Gerrits wrote: