[… Even shiny NVIDIA cards don’t support all pixel formats …]
First, I don’t have a shiny NVIDIA card, mine is an ATi Radeon 8500.
Second, no, it probably does not support loading all pixel formats in
hardware. But the driver does, and the driver's conversion routines are
most likely a lot faster than anything you'd write yourself. For example,
my driver is optimized for MMX, 3DNow! and WinXP because I'm running WinXP
on an Athlon. These could be used in the driver to optimize the conversion
to the max.
Well if my shiny NVIDIA card doesn’t support everything, I know for
certain that your shiny ATI doesn’t either. GeForce two-thirds and
all of that. (Don’t ask, read the sig, my sig picker is an AI.) There is
a potential for driver optimization, though.
You can’t really be expected to figure out the native format the hardware
uses. But by using SDL_ConvertSurface, you’re guaranteeing that all of
your textures will be converted at least once, quite probably twice. That
is simply a waste of CPU.
Quite true, sadly. But if the surface can be loaded into OpenGL without a
ConvertSurface, then only the driver has to perform a conversion. (Much
more efficient.)
Indeed, which was my original point - that doing so is nontrivial to the
point that few who try manage to get it right. Loading textures into
OpenGL is a raindrop in the ocean of OpenGL. I can imagine most people
struggling with frustum calculation or some other complex task where only
a few arbitrary guidelines help you figure out what numbers to use. But
this isn't one of those things.
Seems like a good idea, although it is relatively easy to write your own
function that converts a pixel format into such a Uint32.
It's not really very easy or very efficient, and the code is pretty
ugly-looking if it is at all complete.
I didn't have a compiler handy at the time I wrote that; I don't know what
I was thinking. Indeed, this is not trivial at all. But I think it can be
done.
Indeed it’s not.
I was thinking about using a function like: Uint8 CountBits(Uint32 bitarray)
to count the bits in each of the individual masks. That would determine the
number of bits per color channel. If these do not conform to anything OpenGL
supports, we can set the field to 0 and stop. If they do conform, we still
have to check the shifts to verify the alignment and order of the color
channels.
BTW, do you happen to know what role {RGBA}loss would play in this?
In this? None at all. {RGBA}shift also has none. You only need the bits
and bytes per pixel and the masks. The masks contain loss and shift info
embedded.
My theory is that it might work well when combined with something like the
first solution. A way to tell SDL that you need it to keep the surfaces
simple combined with an easier way of identifying the format of a simple
surface is exactly what an OpenGL coder needs. Maybe something similar to
the mechanism used now for audio contexts? You tell SDL what you need and
it tries to match it as best it can… Don’t know how you’d handle the
case where you NEED a particular format and can’t get it, though.
Ah, I think I misunderstood your initial explanation. Yes, this does sound
like a very good idea. But you do realize that it would not decrease the
number of conversions? It just simplifies the matter for the OpenGL coder,
because now IMG_Load (for example) converts the loaded images for him. But
I guess the conversions can't be avoided, so having OpenGL do them would
be cool.
Yes it would. I am assuming from the outset that SOME image processing is
necessary regardless of the format on disk. I did not count it as a third
conversion because that conversion is overshadowed by the disk I/O. The
time spent waiting for the disk to give you the file is greater than the
time to convert the image, so it's essentially inconsequential.
And the merit of having SDL_Surfaces work in OpenGL transparently cannot
easily be overlooked. I don’t see a reason to store the texid in the
structure myself except for your suggested 2D OpenGL interface layer as I
happen to need a bit more advanced texture system myself. I’m going to
drift a little off topic to explain what I mean, so the majority of the
list not interested in the mundanity of OpenGL texture management should
probably skip past this part… =)
The system I’m currently toying with is not a lightweight texture manager
to say the least. It supports caching, garbage collection, palette tricks
on textures which have them, and reuploading all textures after a context
switch. Not bad eh? Here’s the structure:
typedef struct texture_s
{
	char    *search;
	char    *name;
	image_t *img;
	GLuint   texid;
	GLuint   miplevels;
	int      seq;
	Uint32   flags;
} texture_t;
Some of this is pretty obvious, but the stuff that’s not deserves a little
explanation. The name field holds the filename from which this texture
was loaded. The search field is what we actually feed to our resource
finder function. Basically, it is a relative filename, usually (currently
always) without extension. If we ever need to reload the file only if it
changed (ie, loading a mod or something) we’ll feed search to Image_Find
and compare it to name. No match, reload the texture.
The img field is usually NULL. This is how we do palette tricks quickly,
without fiddling with the disks. Any time an image_t is uploaded, it's
either freed or its pointer is set in the texture_t for garbage-collection
reasons. Flags currently holds information about rendering the textures,
but that can be expected to change.
Seq is a refcount, more garbage collection! =) When you begin loading a
map, the global sequence number is incremented as you'd expect. Each time
a texture is requested, you recheck it as above and update its sequence
number. Just before the game starts, you make a quick run through all of
your textures and clear out the unneeded ones.
Quake 2 uses pretty much the same mechanism. Quake never deletes OpenGL
textures and leaks like a sieve. Quake 3, IIRC, has a shader manager, not
a texture manager. I'd have to look to be sure, but I believe it throws
out all shaders not currently in use, and the textures with them. This,
and the fact that Q3A shaders are actually compiled at load time,
partially explains why it takes so long to load a level in that game.
BTW, I do think 2D OpenGL SDL rendering will be problematic when mixed with
OpenGL rendering by the programmer. (SDL-OpenGL states interfering with
programmer-OpenGL states.) So setting up an OpenGL context for your own use
and having SDL render using OpenGL would probably need to be two different
capabilities.
What do you think?
I think you can get away with that, actually. Most of the user's states
can be saved. It will require one of three approaches, though. One, the 2D
functions push the identity matrix, save the states, do their thing, and
then restore everything. SLOW! Two, the code can be called after a
slightly more intelligent SDL_GL_Enter2DMode which saves all of that stuff
once only, and you must remember to SDL_GL_Leave2DMode when you're done to
set everything back to normal. Or three, you can make SDL behave a bit
Do-What-I-Mean-ish and queue the SDL commands until the SDL function which
does the glFinish for you is called, at which point all of that happens
for you.
Being one for moderation, the second option seems correct to me, though
the functions could recognize when they have been called outside of the 2D
mode in which case they’d act like they do in the first option, which
means they’ll work, but slowly. The DWIM approach requires the least
thought on the part of the coder who doesn’t want to learn how to do 2D in
OpenGL properly, but violates the principle of least surprise when you are
expecting a glFinish and glXSwapBuffers and you wind up with something
very different. ;)

On Fri, Feb 22, 2002 at 05:16:21PM +0100, Dirk Gerrits wrote:
-- 
Joseph Carter Don’t feed the sigs
add a GF2/3, a sizable hard drive, and a 15" flat panel and
you’ve got a pretty damned portable machine.
a GeForce Two-Thirds?
Coderjoe: yes, a GeForce two-thirds, ie, any card from ATI.