Of course, relying on some texture size is something you need to do, even
if it’s only 32x32 pixels. My question was which texture size to depend
on, then, since the dependency cannot be avoided. Strange, by the way,
that the G400 has no trouble with this, because my GeForce 2 card
certainly does slow down if I keep switching textures for every blit. Not
that this is something I’d normally do, but it proves that it does
cause a slowdown…
David Olofson wrote:

On Thu, 04/07/2002 19:45:42, Martijn Melenhorst (Prive) wrote:
Hi people,
I am writing this 2D-platform game using OpenGL and stuff, and I want to
store all ‘tiles’ to be ‘blitted’ onto the screen into one big texture.
This way I can use only one texture while drawing the game’s background
which should be really fast. So much for the theory. Now, the question:
I’m not sure if it actually matters that much.
I did some tests with glSDL (which does no texture binding
optimizations at all), and only ended up with slower rendering
on the G400. Texture switches seem to cost virtually nothing on
that card.
That said, it might be that texture binding actually has a
significant cost on older cards - on which you can’t avoid it,
of course. heh
I am thinking about using something like 2048x64 for this texture, and I
It might be a better idea to stick with square textures, especially
since you seem to worry about older and/or lower end cards…
know that later graphics cards (like the TNT2, GeForce, etc.) will
support this texture size, but the Voodoo 1, 2 and 3 will not. So I would
render those cards useless. Does someone have a thought regarding a good
average texture size to use to keep maximum compatibility? (256x256,
which is the maximum on the Voodoos as far as I know, will not work for
me; I do need a bigger texture for this.)
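For reference, the texcoord arithmetic such an atlas needs is simple. A minimal sketch, assuming 32x32 tiles packed left to right in the 2048x64 texture proposed above (the tile size and packing order are assumptions, not from the original post):

```c
/* Assumed atlas layout: 32x32 tiles, packed row-major in 2048x64. */
#define ATLAS_W   2048
#define ATLAS_H   64
#define TILE_SIZE 32

/* Compute the normalized texture rectangle of tile 'index':
 * (u0,v0) is the top-left corner, (u1,v1) the bottom-right. */
static void tile_uv(int index, float *u0, float *v0, float *u1, float *v1)
{
    int cols = ATLAS_W / TILE_SIZE;       /* tiles per atlas row (64) */
    int tx = (index % cols) * TILE_SIZE;  /* pixel offset into the atlas */
    int ty = (index / cols) * TILE_SIZE;
    *u0 = (float)tx / ATLAS_W;
    *v0 = (float)ty / ATLAS_H;
    *u1 = (float)(tx + TILE_SIZE) / ATLAS_W;
    *v1 = (float)(ty + TILE_SIZE) / ATLAS_H;
}
```

With power-of-two sizes these divisions are exact, so adjacent tiles share edges without seams at 1:1 scale.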
2048x2048 is the new “standard”, but I’ve seen a few drivers that
are restricted to 1024x1024 for some reason, despite the hardware
supporting 2048x2048.
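One way to cope with this at runtime is to query the driver’s limit with glGetIntegerv(GL_MAX_TEXTURE_SIZE, &max) and then pick the smallest power-of-two atlas, preferring square-ish shapes as suggested above, that holds all the tiles. A hedged sketch; pick_atlas_size and its packing policy are invented for illustration:

```c
/* Pick power-of-two atlas dimensions that fit 'count' tiles of
 * tile_w x tile_h pixels (tile sizes assumed to be powers of two),
 * without exceeding max_size (as reported by GL_MAX_TEXTURE_SIZE).
 * Returns 0 on success, -1 if the tiles don't fit at all. */
static int pick_atlas_size(int max_size, int tile_w, int tile_h,
                           int count, int *out_w, int *out_h)
{
    int w, h;
    for (w = tile_w; w <= max_size; w *= 2)
        for (h = tile_h; h <= w; h *= 2)   /* h <= w keeps it square-ish */
            if ((w / tile_w) * (h / tile_h) >= count) {
                *out_w = w;
                *out_h = h;
                return 0;
            }
    return -1;
}
```

For example, 128 tiles of 32x32 under a 2048 limit come out as a 512x256 atlas rather than a long 2048x64 strip.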
Anyway, I think relying on any specific texture size being supported
is a bad idea, and I can’t see a real motivation for it -
especially not when rendering tiled backgrounds. It’s quite trivial
to reduce texture switches to MIN(tiles_on_screen, tile_textures).
The easiest way I can think of would be something like:
for(t = 0; t < num_tile_textures; ++t)
{
	int bound = 0;
	for(y = 0; y < screen_rows; ++y)
		for(x = 0; x < screen_columns; ++x)
		{
			if(tiles[map(x, y)].texture != tile_textures[t])
				continue;
			if(!bound)
			{
				/* Bind each texture at most once per frame */
				glBindTexture(GL_TEXTURE_2D, tile_textures[t]);
				bound = 1;
			}
			render_tile(tiles[map(x, y)], x, y);
		}
}
Obviously, there are more efficient (and more complicated…)
ways of doing it, but I doubt the overhead is significant unless
you have extremely small tiles.
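One of those more efficient variants would be to collect the on-screen tile instances once per frame, sort them by texture, and render the sorted list with a single pass - instead of scanning the whole map once per texture. A rough sketch of the sorting step only; the tile_ref layout here is invented for illustration:

```c
#include <stdlib.h>

/* One on-screen tile instance and the texture it wants. */
typedef struct { int texture; int x, y; } tile_ref;

/* qsort() comparison: group tile instances by texture id, so the
 * renderer can walk the sorted array and bind each texture once. */
static int by_texture(const void *a, const void *b)
{
    const tile_ref *ta = a, *tb = b;
    return ta->texture - tb->texture;
}

static void sort_by_texture(tile_ref *tiles, size_t count)
{
    qsort(tiles, count, sizeof *tiles, by_texture);
}
```

After sorting, the render loop only calls glBindTexture when tiles[i].texture differs from tiles[i - 1].texture, which gives the same bind count in a single pass over the visible tiles.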
//David
.---------------------------------------
| David Olofson
| Programmer
`-----> We Make Rheology Real
SDL mailing list
SDL@libsdl.org
http://www.libsdl.org/mailman/listinfo/sdl