Video memory

I am using SDL and OpenGL and I want to know: if my 3D card has 32 MB of
video RAM, does that mean I can’t load more than 32 MB of textures?
Thanks

No.
Your video RAM usage will depend on the selected resolution.

On most cards the video RAM is shared by the frame buffer, the Z-buffer,
and texture memory, so it will depend on your current settings.

Paulo Pinto

-----Original Message-----
From: sdl-admin at libsdl.org On Behalf Of IronRaph at aol.com
Sent: Tuesday, 23 April 2002 10:53
To: sdl at libsdl.org
Subject: [SDL] video memory


And can I load, for example, 64 MB of textures?
I don’t use all 64 MB of textures in the same frame :wink:
Thanks

You can’t put 10 pounds of potatoes in a 5 pound sack.

Let’s say you have 32 MB of video RAM. A 32-bit-per-pixel display format
at 1024x768 uses 4 bytes per pixel, so it takes up 4x1024x768 = 3
megabytes of RAM. That cuts your available video RAM from 32 down to 29
megabytes. If the display is double buffered, you are down to 26
megabytes of free space. If you use a 24-bit z-buffer, you lose
another 2.25 megabytes of video RAM. So now you have at most 23.75
megabytes left. I said AT MOST because there is some overhead in
keeping track of things, and when you start loading textures you can’t be
sure that the texture management software will do a perfect job of
packing them. And you can’t be sure exactly how they are stored
(at least not without reading the source for the driver). So, depending
on your video mode and driver, you might be able to store 23.75 megabytes
of textures in your video RAM, or you might not. It is also possible that
you actually have TWO z-buffers (unlikely, but possible), in which case
you only have 21.5 MB of video memory left.
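Bob’s arithmetic above is easy to check in a few lines of C. This is just a sketch of his example (32 MB card, 1024x768, 32 bpp, double buffered, 24-bit z-buffer); the function names are made up for illustration:

```c
#include <assert.h>

/* Bytes used by one buffer (color or depth) at the given mode. */
long buffer_bytes(long w, long h, long bytes_per_pixel)
{
    return w * h * bytes_per_pixel;
}

/* Video RAM left for textures after the display buffers are carved out.
 * Mirrors the example in the text: front + back color buffers at 4
 * bytes per pixel, plus a packed 24-bit z-buffer at 3 bytes per pixel. */
long texture_budget_bytes(long total_vram, long w, long h)
{
    long color = 2 * buffer_bytes(w, h, 4); /* front + back buffer */
    long depth = buffer_bytes(w, h, 3);     /* 24-bit z-buffer */
    return total_vram - color - depth;
}

/* texture_budget_bytes(32L * 1024 * 1024, 1024, 768) gives 24903680
 * bytes, i.e. 23.75 MB; three quarters of that is roughly 18 MB,
 * matching Bob's rule of thumb further down. */
```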

There is also a chance that the OS or driver has stored things like fonts
in video memory and you don’t get to use that memory. So there might be
video memory that is already in use that you don’t know about.

It gets more confusing than that. If you try to store more textures than
your video card can hold, then the driver may decide to store some of the
textures in main memory and move them to the video card when they are
needed. (BTW, that’s one of the main reasons for the AGP bus.) If the
driver does that, then your texture memory includes the empty space on
your card plus the amount of AGP memory that the driver can use. Storing
textures in AGP memory may cause a sharp reduction in performance, since
the driver has to load each texture into video memory to render with it.
I say "may" because if your usage pattern fits the swapping algorithm
used by the driver, then you might not see a performance hit. Sometimes
you get lucky.

Personally, I’d figure you can safely count on using about three
quarters of the memory left after taking out the size of the video
buffers you are using. In this example, where we have 23.75 MB left, I
would figure I could safely use 18 to 20 MB of video memory for textures
before I started to see performance problems. YMMV. All of this depends
on how your video card, driver, and OS handle allocation of video
buffers and texture memory. The only way to be sure is to test.

BTW, if you are building an application just for you then test and tune
for just your machine. If you are planning to let other people use it
then leave in some performance margins so that it is likely to work on a
lot of different machines.

	Hope That Helps,

		Bob Pendleton



+------------------------------------------+
+ Bob Pendleton, an experienced C/C++/Java +
+ UNIX/Linux programmer, researcher, and   +
+ system architect, is seeking full time,  +
+ consulting, or contract employment.      +
+ Resume: http://www.jump.net/~bobp        +
+ Email: @Bob_Pendleton                    +
+------------------------------------------+

thanks!!!
it is for www.mountain-tycoon.fr.st :wink:


So do you have any control over what goes into video memory and what
doesn’t? Does it swap textures in and out of memory on its own? Can you
lock in specific textures so it won’t swap them out? I’ve always been
curious how I would know when I run out of video memory: whether the
upload would fail, or whether it would swap stuff around, etc.

-Jason


It depends entirely on the driver. Having written drivers, I can tell you
that the driver will try to do the best job that it can and still run
the program. Usually it will keep everything in video memory if it can
and then start using external memory when it has to. Usually performance
is tested using a set of common, popular programs, and the driver is tuned
to work best with programs whose usage patterns are similar to those.

In my opinion you should budget the memory and just stick with your
budget. I find that I can get code to work on a range of machines by
doing my testing on a machine at the bottom end of that range. So, if
your target is machines with 500 MHz processors and fairly crappy 16 MB
video cards, then that is what you should test on, not that 2 GHz machine
with the 64M Titanium GeForce4 that you just bought for playing games and
impressing folks at LAN parties :slight_smile: Always design for a target
machine.

	YMMV
			Bob P.





Hmm, do they do things like tracking how often textures are used and
biasing against swapping those out, and stuff like that? Or is it not
quite that advanced? Mostly curious, really. In the end, I think it’s
always wisest to handle as much as you can yourself, because you know
your own needs best, for one thing, and because of the old saying “if you
want something done right, do it yourself”. Video drivers are probably
quite good, but just from experience, I try to limit my dependencies as
much as I can because I’ve been burned too many times already.
:slight_smile: Better safe than sorry.

[…] not that 2 GHz machine with the 64M Titanium GeForce4 that you just
bought for playing games and impressing folks at LAN parties
:slight_smile:

Haven’t got it yet, but soon, heh. I agree with you. I really don’t see
myself ever exceeding the limits, but you just never know what might happen
down the road.

-Jason


If you’re working within OpenGL, I recommend you look at:

glAreTexturesResident() - may or may not be useful; there are conditions
on when it provides useful info
glPrioritizeTextures() - change priorities for a series of unbound
texture objects
glTexParameterf() - change the priority of a bound texture object
glTexSubImage*() - reuse a texture that you know is resident by manually
loading over it

You can also choose not to use MIP maps, or do your own application-side
MIP mapping, to reduce video RAM usage.

There is also a proxy texture facility to test whether there is storage
in video RAM for a texture you are about to upload, but I’ve not used
that…

-Blake
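A minimal sketch of how the calls Blake lists fit together. It assumes a current OpenGL context, a caller-supplied array of valid texture names (`texids`, a hypothetical name, with `n` at most 64 here), and GL 1.1 or later; it is illustrative, not a drop-in routine:

```c
#include <GL/gl.h>

/* Sketch only: assumes a current OpenGL context and that texids holds
 * n (<= 64) texture names created with glGenTextures(). */
void texture_memory_hints(GLsizei n, const GLuint *texids)
{
    /* Ask the driver which of these textures currently live in video RAM. */
    GLboolean residences[64];
    if (glAreTexturesResident(n, texids, residences) == GL_FALSE) {
        /* At least one texture is not resident; inspect residences[]. */
    }

    /* Hint that the first texture matters most (1.0 = highest priority). */
    GLclampf priorities[1] = { 1.0f };
    glPrioritizeTextures(1, texids, priorities);

    /* The same hint for whatever texture is currently bound. */
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_PRIORITY, 1.0f);

    /* Proxy texture: ask whether a 1024x1024 RGBA8 texture would fit,
     * without actually allocating it. */
    GLint width = 0;
    glTexImage2D(GL_PROXY_TEXTURE_2D, 0, GL_RGBA8, 1024, 1024, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glGetTexLevelParameteriv(GL_PROXY_TEXTURE_2D, 0,
                             GL_TEXTURE_WIDTH, &width);
    if (width == 0) {
        /* The driver reports the texture would not fit. */
    }
}
```

Note that these are all hints and queries; as discussed above, the driver still makes the final placement decisions.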

thx

OK, and do you think that games use more than the available video memory?

:slight_smile: ok

Jason Hoffoss wrote:

Hmm, do they do things like tracking how often textures are used and
biasing against swapping those out, and stuff like that? Or is it not
quite that advanced?

The ones I worked on used an LRU (Least Recently Used) algorithm. We
tried many others, but they all failed on some test case. Generally the
goal is to degrade performance silently rather than fail. You always have
to consider what happens when someone starts four copies of a program,
each running in a different window, each of which thinks it has all the
memory in the world… Which is better: to have one run and the rest fail
to start, or to have all of them run at 1/4 speed? Marketing will tell
you that it is best for all to run at 1/4 speed. At least, that’s what
they always told me. :slight_smile:

		Bob P.

P.S.

It’s been a long time since I did that kind of work. I sure miss it.
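For the curious, the LRU policy Bob describes can be sketched in a few lines of C. This is a toy model (a fixed number of resident slots, positive integer texture ids), not how any real driver is written:

```c
#include <assert.h>

#define SLOTS 4  /* pretend video memory holds four textures */

/* One entry per texture resident in our toy "video memory". */
static int  tex_id[SLOTS];    /* 0 means the slot is empty */
static long last_used[SLOTS];
static long now;              /* logical clock, bumped on every use */

/* Mark texture `id` (> 0) as used. If it was not resident, evict the
 * least recently used texture to make room. Returns the evicted id,
 * or 0 if nothing was evicted. */
int touch_texture(int id)
{
    int i, victim = 0, evicted;

    /* Already resident: just refresh its timestamp. */
    for (i = 0; i < SLOTS; i++) {
        if (tex_id[i] == id) {
            last_used[i] = ++now;
            return 0;
        }
    }
    /* Not resident: pick the stalest slot (empty slots have stamp 0). */
    for (i = 1; i < SLOTS; i++)
        if (last_used[i] < last_used[victim])
            victim = i;
    evicted = tex_id[victim];
    tex_id[victim] = id;
    last_used[victim] = ++now;
    return evicted;
}
```

After touching textures 1, 2, 3, 4 and then 1 again, touching a fifth texture evicts texture 2, the least recently used.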

On Tuesday 23 April 2002 20:43, Bob Pendleton wrote:

[…] You always have to consider what happens when someone starts four
copies of a program, each running in a different window,

Try that on a Matrox G400, any OS, any driver version, and see just how
slow full software emulation can be… heh (Seems to be a h/w issue
with multiple contexts - they’ve known about it for 2 years or so, but
still haven’t fixed it.)

Seriously, does anyone know of any way to get multiple contexts to work
on a G400, on Win32 or Linux? Is it fixed on the G450 and newer cards?

//David Olofson — Programmer, Reologica Instruments AB

You have some control, but it’s generally automagic and best not messed
with. If you glBindTexture a texture, it should become resident. There is
also glAreTexturesResident if you need to know:

Prototype:
GLboolean glAreTexturesResident (GLsizei n, const GLuint *textures,
GLboolean *residences)

Parameters:
n Number of elements in textures and residences
textures array of texids
residences (modified) array indicating residency status by texid

Returns:
GL_TRUE All texids queried were resident
GL_FALSE One or more texids were not resident, dig through
residences to find out which

Potential errors:
GL_INVALID_VALUE n was negative or an element in textures was 0
or not a valid texid
GL_INVALID_OPERATION called between glBegin/glEnd - don’t do that!

Hmm, I just checked and the manpage takes almost three times as much text
to say … less than the above. =) It also mentions glGetTexParameter
with a parameter name of GL_TEXTURE_RESIDENT which is able to tell you if
the texture which is currently bound is resident or not.

Because of latencies in uploading textures to the card and other
variables, there are two assumptions OpenGL programmers tend to make:

  1. You can’t assume the texture is resident, and you don’t care much
    either way. For games at least, people with lesser hardware will
    benefit more from a lower max texture size or mipmapping.

    In the latter case, you can cut memory usage by 75% immediately by
    mipmapping the texture once before you upload it, and it really boosts
    performance with most hardware. (Realize that a Quake 1 map can use
    upwards of 125 megs of textures after mipmapping! In fact, this is
    the crux of the ATI driver controversy: they did this to boost Quake3
    benchmarks without telling anyone they were doing it or letting them
    turn it off. Bad mojo.)

  2. If you need to do a little CPU math, bind an (already known) texid,
    and draw stuff with it, you should do the bind first. In most
    implementations I have looked at, glBindTexture will make the texture
    resident because of the latency involved in doing so. While the card
    is being fed the texture/mipmaps/etc, do your CPU work, then spit out
    the geometry as fast as you can. (Vertex arrays are your friend!)
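To put numbers on the mipmapping point in item 1: a full mip chain costs about a third more than the base level alone, and uploading from one downsample step in (dropping the base level) cuts the total by roughly 75%. A quick check, assuming power-of-two dimensions (the function name is made up for illustration):

```c
#include <assert.h>

/* Total texels in a full mip chain whose base level is w x h,
 * halving each dimension per level down to 1x1. */
long mip_chain_texels(long w, long h)
{
    long total = 0;
    for (;;) {
        total += w * h;
        if (w == 1 && h == 1)
            break;
        if (w > 1) w /= 2;
        if (h > 1) h /= 2;
    }
    return total;
}

/* For a 256x256 base: the chain holds 87381 texels versus 65536 for the
 * base level alone (about 4/3 as much). Starting the chain one level
 * down, at 128x128, needs only 21845 texels: roughly a 75% cut. */
```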

A bit off topic, but cramming large batches of geometry through vertex
arrays like that is an optimization case for modern hardware, so it will
be fast on GeForce cards. Oh, and by chance it’s good for Radeons too.
(Now who here didn’t see that line coming?) Older cards such as those
made by 3dfx generally prefer a bit less geometry, in strips. They’ve
also got dismal fillrate by comparison. You can really only optimize for
one. This is why modern games are beginning to write separate backends
for different hardware. It’s cleaner code that way and lets them optimize
a bit differently on different hardware. The downsides are vast and
varied, of course.

Here’s waiting for OpenGL 2.0 - that alone will simplify this mess by
several orders of magnitude. =)

On Tue, Apr 23, 2002 at 11:39:11AM -0400, Jason Hoffoss wrote:

So do you have any control over what goes into video memory and what
doesn’t? Does it swap textures in and out of memory on its own? Can you
lock in specific textures so it won’t swap them out? I’ve always been
curious how I would know when I run out of video memory: whether the
upload would fail, or whether it would swap stuff around, etc.


Joseph Carter Sanity is counterproductive

knghtbrd: and the meek shall inherit k-mart


The GF4 Titanium has 128M, BTW. :wink:

I’d just like to reiterate what Bob is saying here. It’s great to do all
of your optimizations for that high-end hardware to give early adopters
the experience they’ve paid for, but it’s absolutely essential that you
pick something a bit more sane as a target to get acceptable performance
on and not rest until you get that. You might have to do some tweaking of
settings or even more significant code, but the number of people with one
of those 2GHz machines with GF4s and scads of RAM is certainly a limited
market.

There are games out there which are really designed for 600MHz machines
with a GeForce2 GTS with moderate resolution and quality settings. I can
cite one immediately: RealMyst. It’s not very popular because frankly it
runs like crap even on a Thunderbird 900 if you turn on any driver frills
at all. This tends to annoy people, a lot.

I think this thread is getting off topic rapidly and will now bow out
before Sam goes postal. =)

On Tue, Apr 23, 2002 at 11:10:58AM -0500, Bob Pendleton wrote:

In my opinion you should budget the memory and just stick with your
budget. I find that I can get code to work on a range of machines by
doing my testing on a machine at the bottom end of that range. So, if
your target is machines with 500 MHz processors and fairly crappy 16 MB
video cards, then that is what you should test on, not that 2 GHz machine
with the 64M Titanium GeForce4 that you just bought for playing games and
impressing folks at LAN parties :slight_smile: Always design for a target
machine.


Joseph Carter Now I’ll take over the world

Thunder-: when you get { MessagesLikeThisFromYourHardDrive }
Thunder-: it either means { TheDriverIsScrewy }
or
{ YourDriveIsFlakingOut BackUpYourDataBeforeIt’sTooLate
PrayToGod }
