Proposal: SDL_GL_VIDEO_RAM

A game I’m working on wants to know how much video RAM is available to
the GL, so I wanted to hide the details inside SDL instead of linking
against Xlib directly.

Attached is a patch to implement this as a query to
SDL_GL_GetAttribute(). Currently this is only implemented on X11, and
only in cases where we have a means to determine it (that’s with
Nvidia’s closed source drivers, or DRI on Linux, at the moment).

Mac OS X and Windows could implement this, too, in their own ways.

This isn’t a complete patch, just enough to give you an idea of what I’m
shooting for here.
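
From the app’s point of view, the idea is that this is just another
attribute query. A rough sketch of the intended usage (the attribute
name here follows the subject line; the units are whatever the final
patch settles on):

    #include "SDL.h"
    #include <stdio.h>

    /* Hypothetical usage of the proposed attribute; assumes a GL video
       mode has already been set with SDL_SetVideoMode(..., SDL_OPENGL). */
    static void report_video_ram(void)
    {
        int vram = 0;
        if (SDL_GL_GetAttribute(SDL_GL_VIDEO_RAM, &vram) == 0) {
            printf("Driver-reported video RAM: %d\n", vram);
        } else {
            printf("Couldn't determine video RAM: %s\n", SDL_GetError());
        }
    }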

--ryan.

Name: SDL-glvideoram-RYAN1.diff
URL: http://lists.libsdl.org/pipermail/sdl-libsdl.org/attachments/20061113/1a346d3e/attachment.txt

Hello Ryan,

Monday, November 13, 2006, 7:37:26 PM, you wrote:

A game I’m working on wants to know how much video RAM is available to
the GL [...] Mac OS X and Windows could implement this, too, in their
own ways.

Sounds like a great idea.

Sadly, I’ve found it’s pretty much impossible to do on Windows other
than by asking DirectX (which reports it in its Caps structures). This
would mean that you couldn’t implement this function on Windows unless
SDL was compiled with the DirectX backend, which is a bit dumb as GL
uses GDI stuff on Windows.
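
For reference, one way to ask DirectDraw is IDirectDraw7::GetAvailableVidMem
(the DDCAPS structure from GetCaps() carries similar dwVidMemTotal /
dwVidMemFree fields). A sketch only, untested here; it assumes the primary
display device and linking against ddraw.lib and dxguid.lib:

    #include <windows.h>
    #include <ddraw.h>

    /* Sketch: ask DirectDraw for the total and free local video memory. */
    static int DX_GetVideoRAM(DWORD *total, DWORD *free_mem)
    {
        LPDIRECTDRAW7 dd = NULL;
        DDSCAPS2 caps;
        HRESULT hr;

        hr = DirectDrawCreateEx(NULL, (void **) &dd, &IID_IDirectDraw7, NULL);
        if (FAILED(hr)) {
            return 0;
        }

        ZeroMemory(&caps, sizeof (caps));
        caps.dwCaps = DDSCAPS_VIDEOMEMORY | DDSCAPS_LOCALVIDMEM;
        hr = IDirectDraw7_GetAvailableVidMem(dd, &caps, total, free_mem);
        IDirectDraw7_Release(dd);

        return SUCCEEDED(hr) ? 1 : 0;
    }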

For OS X it’s very easy - simply call CGLDescribeRenderer()
with the kCGLRPTextureMemory property. Or if you want to use AGL, then
use aglDescribeRenderer() with the AGL_TEXTURE_MEMORY property :)
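
A minimal sketch of the CGL route (it assumes the main display and just
takes the first renderer; a real implementation would want to match the
renderer actually backing the current context):

    #include <OpenGL/OpenGL.h>
    #include <ApplicationServices/ApplicationServices.h>

    /* Sketch: ask CGL how much texture memory a renderer has. */
    static long OSX_GetTextureMemory(void)
    {
        CGLRendererInfoObj info;
        GLint nrend = 0;
        GLint mem = 0;
        CGOpenGLDisplayMask mask =
            CGDisplayIDToOpenGLDisplayMask(CGMainDisplayID());

        if (CGLQueryRendererInfo(mask, &info, &nrend) != kCGLNoError) {
            return -1;
        }
        if (nrend > 0) {
            /* first renderer only; see the caveat above */
            CGLDescribeRenderer(info, 0, kCGLRPTextureMemory, &mem);
        }
        CGLDestroyRendererInfo(info);
        return (long) mem;
    }
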
--
Best regards,
Peter

A game I’m working on wants to know how much video RAM is available to
the GL [...]

Isn’t it generally difficult to get an accurate measure of how much
video memory is available? Especially with all those shared-memory
video adapters out there; some even convert more system memory to
video memory on the fly as it becomes needed. That is, you might see
16 MB of video memory when the game starts, but later, when a lot of
textures get loaded, it will increase to, say, 128 MB.

I guess you want to use the measure to select a suitable default
configuration for the game? (i.e. low-resolution textures when there’s
not much memory around, high-resolution textures if there is)
How will you then handle a situation where the user has allocated 256
MB of memory to his crappy and extremely slow on-board video adapter,
which would probably be slow even with low-resolution textures, and
obviously slow-to-the-extreme with the high-resolution ones?

As far as I know, the “how much video memory do I have?” feature has
been intentionally left out of Windows, just to prevent developers from
making false assumptions about the hardware :) I don’t know if it’s
true, though.

Another thing which comes to mind… it’s apparently getting popular to
use 3D hardware for OS window managers, and in the future there’ll
probably be a lot more concurrent applications hogging the video
memory. I think it’s best to let the OS worry about managing the video
memory, as it will probably have the best view of things.

Just my 0.02 euros.

--
Rasmus Neckelmann

Not to mention that some (or most?) modern video cards use image compression (RLE, I think, although maybe there are other types too?) on the images stored in video memory, so a 50k image may shrink down to 20k when it’s actually in RAM, which makes knowing how much VRAM there is a little more useless :P


Hello atrix2,

Tuesday, November 14, 2006, 5:10:09 PM, you wrote:

not to mention that some (or most?) modern video cards use image
compression on the images stored in video memory [...]

That typically only happens if you request it, though. Auto mipmap
generation is more likely to cause trouble.
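
For completeness: in GL, texture compression is something the app has
to ask for, e.g. by picking a compressed internal format from
ARB_texture_compression when uploading. A minimal sketch:

    #include "SDL_opengl.h"

    #ifndef GL_COMPRESSED_RGBA_ARB
    #define GL_COMPRESSED_RGBA_ARB 0x84EE
    #endif

    /* Sketch: request a compressed internal format; the driver may then
       keep the texture compressed in video memory. Assumes a current GL
       context, a bound 2D texture, and driver support for the extension. */
    static void UploadCompressedRGBA(int w, int h, const void *rgba_pixels)
    {
        glTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGBA_ARB,
                     w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, rgba_pixels);
    }
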
--
Best regards,
Peter

Hi Ryan,

I think there is a little mistake in the patch:

+static inline int X11_GL_DetermineVideoRAM_ProcDRI(_THIS, int* value)
+{
+    /* this is a DRI thing, so Linux and maybe FreeBSD can use it... */
+    const char *maxframebuffer_str = "max LFB = ";
+    const char *maxinvisible_str = "max Inv = ";
+    FILE *io = fopen("/proc/dri/0/umm", "r");
+    *value = 0;
+    if (io == NULL) {

it should be

+    if (io != NULL) {
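
With that fix, the rest of the routine presumably just scans the file
for those two prefixes and adds up the numbers. A purely hypothetical
sketch of that idea (not the actual patch; the units of the LFB/Inv
figures aren’t documented in this thread):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical: read /proc/dri/0/umm and sum the "max LFB = " and
       "max Inv = " figures. Returns 1 if something was found, 0 if not. */
    static int DRI_ReadVideoRAM(long *value)
    {
        const char *maxframebuffer_str = "max LFB = ";
        const char *maxinvisible_str = "max Inv = ";
        char line[256];
        FILE *io = fopen("/proc/dri/0/umm", "r");

        *value = 0;
        if (io == NULL) {
            return 0;   /* no DRI memory info; fall back to something else */
        }

        while (fgets(line, sizeof (line), io) != NULL) {
            if (strncmp(line, maxframebuffer_str,
                        strlen(maxframebuffer_str)) == 0) {
                *value += strtol(line + strlen(maxframebuffer_str), NULL, 10);
            } else if (strncmp(line, maxinvisible_str,
                               strlen(maxinvisible_str)) == 0) {
                *value += strtol(line + strlen(maxinvisible_str), NULL, 10);
            }
        }
        fclose(io);
        return (*value > 0);
    }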

Jorge

Ryan C. Gordon wrote:

A game I’m working on wants to know how much video RAM is available to
the GL [...] Currently this is only implemented on X11, and only in
cases where we have a means to determine it (that’s with Nvidia’s
closed source drivers, or DRI on Linux, at the moment).

It’s probably not a good idea to use nvctrl. It will lie to you on
TurboCache chips, and report used system memory as if it were video
memory. This will actually make the problem worse when apps start
expecting more than what’s really available.
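
For reference, the NV-CONTROL query in question looks roughly like this
(a sketch using the NVCtrl headers that ship with nvidia-settings; if I
remember right the value comes back in kilobytes):

    #include <X11/Xlib.h>
    #include "NVCtrl.h"
    #include "NVCtrlLib.h"

    /* Sketch: ask the NVIDIA binary driver for its video RAM figure.
       As noted above, on TurboCache parts this can include borrowed
       system memory, not just dedicated video RAM. */
    static int NV_GetVideoRAM(Display *dpy, int screen, int *value)
    {
        if (!XNVCTRLIsNvScreen(dpy, screen)) {
            return 0;   /* screen isn't driven by the NVIDIA driver */
        }
        return XNVCTRLQueryAttribute(dpy, screen, 0 /* display mask */,
                                     NV_CTRL_VIDEO_RAM, value);
    }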

Stephane