OpenGL textures, where do they go?

Hi list,

I’m using SDL with OpenGL in a game of mine because OpenGL gives great
HW-accelerated performance. Now, in order to reuse my old-style blitting
methods, I wrote a wrapper which converts an SDL_Surface* to a GL texture
and then blits it.

After the conversion of an SDL_Surface* to a GLuint texture handle, the
original surface can be discarded.
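
For illustration, the upload step of such a wrapper might look roughly
like this sketch, assuming the surface has already been converted to
32-bit RGBA with power-of-two dimensions (surface_to_texture is a
made-up name):

GLuint surface_to_texture(SDL_Surface *s)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    /* GL copies the pixel data here, so the surface may be freed after */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, s->w, s->h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, s->pixels);
    return tex;
}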

This I fear very much, because it means the texture needs to be stored
somewhere. I guess in the graphics card’s memory, right? Now what if it
fills up? Will the application crash? Will I not be able to create any
more textures? Or is it so “smart” that it can temporarily swap out
textures and swap them back in on demand? What about thrashing?

SDL_Surfaces are just sooooo much easier ;)

Greetings,
Johannes

The problems you describe do exist. There is a way to see whether a
texture is “resident” (stored in gfx mem) in OpenGL:

GLboolean glAreTexturesResident(GLsizei n, GLuint *textures,
                                GLboolean *residences);

… which returns GL_TRUE if all the textures you ask about (via
n/textures) are stored in gfx mem.

Note that it only checks whether they are stored in gfx mem right now;
it does not ask whether they could be stored in gfx mem. In order to
increase the chance that a texture will be stored in gfx mem from the
beginning, set its priority to 1:

GLvoid glPrioritizeTextures(GLsizei n, GLuint *textures,
                            GLclampf *priorities);
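
For example, a hypothetical fragment that requests residency for n
texture names held in textures[] and then checks the result
(MAX_TEXTURES is a made-up bound):

GLclampf priorities[MAX_TEXTURES];
GLboolean resident[MAX_TEXTURES];
GLsizei i;

for (i = 0; i < n; ++i)
    priorities[i] = 1.0f;            /* highest residency priority */
glPrioritizeTextures(n, textures, priorities);

if (glAreTexturesResident(n, textures, resident))
    printf("all %d textures are resident\n", (int)n);
else
    for (i = 0; i < n; ++i)          /* resident[] is filled on failure */
        if (!resident[i])
            printf("texture %u is not resident\n", textures[i]);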

As you noted, it may become a bit more complex than SDL_Surface handling :)
What you would do in your application is:

  1. Create and set parameters for all GL texture objects
  2. Set all priorities to 1
  3. Upload all SDL_Surfaces to them
  4. Check whether all texture objects are resident. If not, either
    decrease their quality (bits per texel, remove the alpha channel,
    lower the resolution …) or their number. Note that you have to
    change the target format of the texture object, not the
    SDL_Surface’s quality. Then redo steps 3 and 4 (a sketch of this
    retry loop follows below).
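
Concretely, the retry in steps 3–4 might look something like this
sketch (upload_all_surfaces is a hypothetical helper that redoes step 3
with the given internal format; GL_RGBA8, GL_RGBA4 and GL_RGB5_A1 are
standard GL 1.1 internal formats of decreasing size; n, textures and
resident are as in the fragment above):

GLint internal = GL_RGBA8;             /* start at full quality */
for (;;) {
    upload_all_surfaces(internal);     /* step 3 (hypothetical helper) */
    if (glAreTexturesResident(n, textures, resident))
        break;                         /* step 4: everything fits */
    if (internal == GL_RGBA8)
        internal = GL_RGBA4;           /* fewer bits per texel */
    else if (internal == GL_RGBA4)
        internal = GL_RGB5_A1;         /* 1-bit alpha */
    else
        break;                         /* give up: cut resolution or count */
}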

Here’s a reference page about texture objects:

http://www.opengl.org/documentation/specs/version1.1/glspec1.1/node87.html

… and here’s more detail on step 4, changing the “internal format” of
a texture:

http://www.berkelium.com/OpenGL/GDC99/internalformat.html

Good luck!

/Olof


As I understand it, when you bind a texture it stays bound until you
delete it or your program destroys the OpenGL context (i.e. quits).
The texture data may live in video memory or it may live in system
memory or it may swap between the two, depending on the OpenGL
implementation and the system you’re running it on. The
implementation/driver should handle all that. In the unlikely event
that you get performance problems, you might have to fiddle with the
texture priorities like Olof suggests to try and get the textures to
stick in video RAM (on those systems that store textures in video RAM
;) ).

Cheers,

James

Yes, it all depends on how much graphics he intends to use for, say, one
level of his game. Of course, if it is a platform game with tons of
animations or non-tiled background graphics etc., he might well get into
trouble… What I would do is decide once and for all how much gfx mem you
are “aiming at” using for one level, like a minimum requirement (8 MB,
32 MB or whatever), and do a little math to check how much gfx you can
allow your artists to draw for one level. Keep in mind that the video
buffer(s) take up quite a lot of the gfx mem space to begin with (e.g.
1024 × 768 × 4 bytes × 2 buffers = 6291456 bytes, 6 MB!).
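
As a back-of-the-envelope check (illustrative numbers only):

unsigned long framebuffer = 1024UL * 768 * 4 * 2;   /* two 32-bit buffers: ~6 MB */
unsigned long gfx_mem     = 8UL * 1024 * 1024;      /* 8 MB minimum target */
unsigned long tex_budget  = gfx_mem - framebuffer;  /* ~2 MB left for textures */
/* one 256x256 RGBA texture costs 262144 bytes, so roughly 8 of them fit */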

cya,

/Olof


On Thursday 09 December 2004 02:16, Johannes Bauer wrote:

> I’m using SDL with OpenGL in a game of mine because OpenGL gives great
> HW-accelerated performance. Now, in order to reuse my old-style blitting
> methods, I wrote a wrapper which converts an SDL_Surface* to a GL texture
> and then blits it.
>
> After the conversion of an SDL_Surface* to a GLuint texture handle, the
> original surface can be discarded.
>
> This I fear very much, because it means the texture needs to be stored
> somewhere.

The texture needs to be stored by OpenGL somewhere whether you delete the
SDL_Surface or not. So if you’re worried about memory usage, there is no
choice but to free the surface.
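
In code, the pattern is simply (reusing the hypothetical
surface_to_texture wrapper sketched earlier in the thread):

GLuint tex = surface_to_texture(surf);  /* GL now holds its own copy */
SDL_FreeSurface(surf);                  /* so the surface can go */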

> I guess in the graphics card’s memory, right? Now what if it fills up?
> Will the application crash? Will I not be able to create any more
> textures? Or is it so “smart” that it can temporarily swap out textures
> and swap them back in on demand? What about thrashing?

OpenGL will automatically swap textures. Obviously, that will come with a
performance hit. But you should be aware that all the same problems exist
with hardware-accelerated blitting from SDL_Surfaces.

There really isn’t much of a difference here.

cu,
Nicolai

Isn’t there something you could do, like have your artists (or someone)
place markers indicating that you’re no longer going to see texture ‘x’
after this point, and do some dynamic re-prioritizing as well? So, if my
level USES 300 textures or so, but I’m only going to SEE about 100 or
fewer at any given time, you could do that, yes?

Also, would it be worth the extra processing time to do this, do you
think? Or are most OpenGL implementations smart enough to store in video
memory the textures that benefit most (at the time) from being there?
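
If it does turn out to matter, the marker idea might boil down to
something as simple as this hypothetical fragment:

GLclampf low = 0.0f, high = 1.0f;

glPrioritizeTextures(1, &tex, &low);   /* 'tex' won't be seen past this marker */
/* ... later, just before the texture comes back into view ... */
glPrioritizeTextures(1, &tex, &high);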

–Scott


I presume they’re swapped out on the same principle as system memory:
if something new needs to be displayed, and there isn’t enough room for
it, get rid of one of the oldest textures that hasn’t been used, or
approximate this anyway (round robin with a “dirty/used” flag).
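
A sketch of that “round robin with a used flag” idea (the classic clock
algorithm; nslots, used[] and the hand are purely illustrative):

/* pick a texture slot to evict: sweep past recently-used slots,
   clearing their flag, until an untouched one is found */
static int pick_victim(int nslots, unsigned char used[])
{
    static int hand = 0;
    for (;;) {
        if (!used[hand]) {
            int victim = hand;
            hand = (hand + 1) % nslots;
            return victim;
        }
        used[hand] = 0;              /* second chance */
        hand = (hand + 1) % nslots;
    }
}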

As such, I’d only worry about this if you’ve already identified a card
that has major issues with a prototype/model of the problem (since I’m
unaware of any graphics-card profilers).
