David Olofson wrote:
Anyway, a 2048x2048 RGBA8 texture is 16 MB, which is rather big to
use as allocation granularity for many of the cards that support
2048x2048 textures. (Even some 16 MB cards support textures that
large, though it’s obviously not physically possible to keep
one in VRAM together with the frame buffer. It’s just supported
because textures can have less than 32 bpp, and/or because there
are versions of the card with more VRAM.)
Maybe it would make sense to have some kind of internal limit
here, maybe related to the display resolution or something…
Tiles larger than the screen don’t make much sense, even for huge
surfaces.
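Something like this, perhaps (just a sketch; pick_tile_size() is a
made-up name, and the heuristic is only one possibility):

    #include <GL/gl.h>

    /* Pick an internal tile size: the smallest power of two that
     * covers the larger screen dimension, clamped to what the GL
     * implementation supports. (Made-up name and heuristic, for
     * illustration only.)
     */
    static int pick_tile_size(int screen_w, int screen_h)
    {
        GLint gl_max;
        int limit = (screen_w > screen_h) ? screen_w : screen_h;
        int size = 1;
        glGetIntegerv(GL_MAX_TEXTURE_SIZE, &gl_max);
        while(size < limit && (size << 1) <= gl_max)
            size <<= 1;
        return size;
    }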
Sure. Hey, some backends don’t even support surfaces larger than
the screen.
Ouch! You do mean hardware surfaces, right?
Anyway, this internal limit (or the max texture size) does not limit
the size of glSDL “hardware” surfaces, so it’s not quite the same
thing. (Tiling is applied on top of this, to make large surfaces out
of parts of textures.)
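The idea is roughly this (a sketch of the tiling scheme, not the
actual glSDL code; types and names are made up):

    #include <GL/gl.h>

    /* A "hardware" surface bigger than one texture is stored as a
     * grid of tiles; a blit only touches the tiles that intersect
     * the source rectangle.
     */
    typedef struct
    {
        int w, h;          /* surface size in pixels */
        int tile_size;     /* tile side; a power of two */
        int tiles_x;       /* tiles per row */
        GLuint *textures;  /* one GL texture name per tile */
    } TiledSurface;

    static void blit_tiled(TiledSurface *s,
            int sx, int sy, int sw, int sh)
    {
        int tx, ty;
        int tx0 = sx / s->tile_size;
        int ty0 = sy / s->tile_size;
        int tx1 = (sx + sw - 1) / s->tile_size;
        int ty1 = (sy + sh - 1) / s->tile_size;
        for(ty = ty0; ty <= ty1; ++ty)
            for(tx = tx0; tx <= tx1; ++tx)
            {
                glBindTexture(GL_TEXTURE_2D,
                        s->textures[ty * s->tiles_x + tx]);
                /* ...clip the source rect to this tile and
                 * render a textured quad at the right spot...
                 */
            }
    }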
If anything, huge tiles would prevent OpenGL from swapping
parts of a huge surface (of which only a part is used at a time)
out of VRAM, to leave room for other data that actually is used
every frame.
Swapping textures from/to video memory has a higher cost than
binding new textures.
Absolutely - but if you’re out of VRAM, you’re at least better off
just swapping every now and then as you scroll across that huge
surface, than swapping all textures every frame, just because the
huge one has to fit.
I remember when one of my programs ran out of
video memory, and the 3D performance really suffered. Increasing
the granularity only makes this problem worse (to the point that
the video ram might get re-filled many times per frame).
Strange. One would think that the textures that have been unused the
longest are kicked first when others need to be swapped in. That
would handle scrolling over gigantic tiled surfaces, as well as
moving around in a 3D world with tons of textures just fine.
Then again, texture binding has a significant cost on some cards,
That would have to be benchmarked
Right.
I never found texture binding
cost to be that high,
Ditto. It’s been insignificant on the cards I’ve messed with so far.
especially if you compare it to the cost of
swapping textures from video memory. Sure, the texture binding time
is driver-dependent, but uploading a texture to video ram will
always kill your performance.
Yeah.
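FWIW, a crude benchmark for this could look something like the
sketch below (assumes a GL context is already up; the numbers would
only be indicative):

    #include <stdio.h>
    #include <GL/gl.h>
    #include "SDL.h"

    #define N 1000

    static GLuint tex[2];
    static unsigned char pixels[256 * 256 * 4];

    static void benchmark(void)
    {
        int i;
        Uint32 t0, t1, t2;
        glGenTextures(2, tex);
        for(i = 0; i < 2; ++i)
        {
            glBindTexture(GL_TEXTURE_2D, tex[i]);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 256, 256, 0,
                    GL_RGBA, GL_UNSIGNED_BYTE, pixels);
        }
        t0 = SDL_GetTicks();
        for(i = 0; i < N; ++i)             /* rebind only */
            glBindTexture(GL_TEXTURE_2D, tex[i & 1]);
        glFinish();
        t1 = SDL_GetTicks();
        for(i = 0; i < N; ++i)             /* full upload */
            glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 256, 256,
                    GL_RGBA, GL_UNSIGNED_BYTE, pixels);
        glFinish();
        t2 = SDL_GetTicks();
        printf("bind: %u ms, upload: %u ms\n",
                (unsigned)(t1 - t0), (unsigned)(t2 - t1));
    }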
Just to make things clear; what I’m saying is that allowing large
surfaces to make use of the max texture size means you risk forcing
OpenGL to somehow have the whole surface available, even if only a
fraction of it is visible in each frame. (I doubt your average video
card does partial caching of textures.)
Restricting the max texture size indeed means you may get some more
binding “overhead”, but it allows OpenGL to drop the invisible parts
of huge surfaces from VRAM until they become visible.
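For example, a 4096x1024 RGBA8 surface is 16 MB, but split into
256x256 tiles (256 kB each), an 800x600 window can only ever
intersect some 5x4 = 20 tiles, so about 5 MB has to be resident in
any one frame instead of the whole 16 MB.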
Or maybe this just isn’t implemented in your average OpenGL driver? In
that case, I guess we have to implement it in glSDL to make low
memory 3D cards usable with apps that use lots of surfaces, but don’t
blit all of them in every frame.
which makes this a balancing act. Limit max texture size to twice
the size of the screen? Limit it so one texture uses less than
30% of the available VRAM? Other ideas?
You could use VRAM size but… there is no portable way that I know
of to find the video ram size in OpenGL :-/
Exactly…
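The closest thing I can think of is probing with
glAreTexturesResident(), though drivers are free to be vague about
residency, and textures may have to actually be used for rendering
before they count. Something like this sketch, maybe:

    #include <GL/gl.h>

    /* Very rough estimate of usable texture memory: create 1 MB
     * dummy textures until they no longer all fit. Unreliable;
     * many drivers won't make a texture resident until it has
     * actually been used for rendering.
     */
    static int estimate_texture_mem_mb(void)
    {
        enum { MAX_MB = 256 };
        GLuint tex[MAX_MB];
        GLboolean res[MAX_MB];
        static unsigned char dummy[512 * 512 * 4];  /* 1 MB */
        int n;
        glGenTextures(MAX_MB, tex);
        for(n = 0; n < MAX_MB; ++n)
        {
            glBindTexture(GL_TEXTURE_2D, tex[n]);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 512, 512, 0,
                    GL_RGBA, GL_UNSIGNED_BYTE, dummy);
            if(!glAreTexturesResident(n + 1, tex, res))
                break;
        }
        glDeleteTextures(MAX_MB, tex);
        return n;
    }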
Anyway, before starting to use heuristics, you need some real-world
measures, like the statistical distribution of surface sizes and
such. I once tried to find an “average” surface size by running
different programs and printing statistics, just to find that each
program is really different: for example, some allocate only small
surfaces, others allocate random sizes, others keep a copy of the
background… (For the record, the largest surfaces I could find
were the size of the screen, and the average surface dimension (x
or y) was around 100.)
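(Nothing fancy is needed to gather that kind of numbers; a wrapper
like this sketch, with a made-up name, is enough:)

    #include <stdio.h>
    #include "SDL.h"

    /* Log every surface allocation; my_CreateRGBSurface() is a
     * made-up wrapper an app (or a patched SDL) would call in
     * place of SDL_CreateRGBSurface().
     */
    static SDL_Surface *my_CreateRGBSurface(Uint32 flags,
            int w, int h, int depth, Uint32 Rmask, Uint32 Gmask,
            Uint32 Bmask, Uint32 Amask)
    {
        fprintf(stderr, "surface: %dx%d, %d bpp\n", w, h, depth);
        return SDL_CreateRGBSurface(flags, w, h, depth,
                Rmask, Gmask, Bmask, Amask);
    }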
I would guess that covers most apps, but we do have to consider SFont
and the like, which are used rather frequently, and tend to
generate insanely wide surfaces.
That said, even the current glSDL implementation is virtually
unlimited in that regard, as long as font height <= max texture size.
So, maybe it’s not worth it to consider the few applications that use
huge surfaces. They will run just fine on reasonably modern video
cards, and they should even run ok on anything that can DMA textures
from system ram. (That is, any AGP card and most modern PCI cards,
AFAIK.) Let’s optimize that case if it actually turns out to be a
problem.
Anyway, I’m not sure this is a big deal, as there are OpenGL
extensions called “NV_texture_rectangle” and
“EXT_texture_rectangle” that do what their names say, i.e. prevent
applications from wasting memory on non-2^n textures. So you could
just wait for it to become part of the standard if you don’t want
to solve an NP-complete problem. (I for one don’t.)
Right, but I suspect both SDL and glSDL might be obsolete before every
card in use supports those extensions. Is it in OpenGL 1.4? Not
good enough, as lots of cards don’t have, and probably never will
have, 1.4 drivers. Let’s not even think about 2.0…
Seriously, it would be nice to make use of that feature where it’s
available, but as it is, I think it’s just a cool performance hack
that may work for some of the potential glSDL users. It should be
simple, though, so we might as well throw it in, once we have the
required, portable, minimal system requirement stuff working.
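The check itself is trivial; something like this sketch (a robust
version would tokenize the extension string rather than use
strstr()):

    #include <string.h>
    #include <GL/gl.h>

    /* See if either rectangle texture extension is present. */
    static int have_rect_textures(void)
    {
        const char *ext = (const char *)glGetString(GL_EXTENSIONS);
        if(!ext)
            return 0;
        return strstr(ext, "GL_NV_texture_rectangle") != NULL ||
                strstr(ext, "GL_EXT_texture_rectangle") != NULL;
    }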
//David Olofson - Programmer, Composer, Open Source Advocate
.- Audiality -----------------------------------------------.
| Free/Open Source audio engine for games and multimedia. |
| MIDI, modular synthesis, real time effects, scripting,… |
`-----------------------------------> http://audiality.org -’
— http://olofson.net — http://www.reologica.se —