What I’d really like to see is a graphics backend that didn’t delete
all your textures in the first place just because you switched to
fullscreen mode or minimized the window or something.
Well, yeah, that is what I would like too. I’m not likely to ever see it, though.
never seen the point of trashing that much memory, and then
making the program waste time recreating an exact copy of
it. (It’s not like you’re going to use different textures for
fullscreen mode, after all!)
It is a resource allocation problem. You have to deal with video memory
fragmentation and you have to allocate resources to the application
that can make the best use of them.
Think back to the bad old days when a couple of megabytes was a huge
amount of video memory. Now think back to when 65 KILObytes was a huge
amount of memory and you are back to the time when GL was being
invented. OpenGL came along later but it had all the baggage left over
from the way back bad old days. (OpenGL was created from GL mostly to
keep the world from moving to PEX because PEX was open and GL was not.
Ever hear of PEX? Hey, it worked!)
Ok, so back when memory was small and expensive you had to make the
most of what you had. Not to mention that back in those days a
microprocessor that could do a million instructions per second was
still science fiction. Memory was expensive and cycles were expensive.
When you set the video mode, some amount of video memory is used up by
the display buffers. If you are allocating memory you have to have a
strategy for doing it. The easiest way to allocate it is to start at
address 0 and work your way through memory. It was never that easy,
because graphic buffers are rectangles, not just long strings of
bytes, but that was the idea. (On some systems you had to do
two-dimensional allocation because the video hardware had a fixed stride.)
Your display buffer (or buffers) usually took up most of the memory.
When you added textures, extra buffers, and display lists, you just
filled them into the memory following the video buffers. You just
fill memory starting at the bottom and work toward the top. Now,
you change the video mode so that you need bigger display buffers.
Where do you find the memory for them?
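To make that concrete, here is a rough sketch in C of the kind of bump allocator I am talking about. None of this is real driver code; VRAM_SIZE, vram_set_mode, and vram_alloc are made-up names, and real allocators dealt in rectangles with a fixed stride rather than a flat range of bytes, but the idea is the same.

#include <stddef.h>

#define VRAM_SIZE (2 * 1024 * 1024)  /* "a couple of megabytes" of video memory */

/* Next free byte; the display buffers own everything below it. */
static size_t alloc_top;

/* Setting the video mode claims the bottom of memory for the display
 * buffers; everything else gets packed in right after them. */
void vram_set_mode(size_t display_bytes)
{
    alloc_top = display_bytes;
}

/* Textures, extra buffers, display lists, ... are handed out bottom-up. */
long vram_alloc(size_t bytes)
{
    if (alloc_top + bytes > VRAM_SIZE)
        return -1;                   /* out of video memory */
    long offset = (long)alloc_top;
    alloc_top += bytes;
    return offset;
}

With memory packed in like that, bigger display buffers have nowhere to go without moving or dropping everything sitting right above the old ones.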
You can start by compacting memory and then dropping items one at a
time, all the while sending a stream of information about what has been
dropped back to the programmer, until you eventually get enough free
space for the new buffers. Or, you can just dump everything out of memory
and start over. That second approach doesn’t need to give any feedback
to the programmer because everything always gets trashed after certain
function calls. Code space and cycles were also very expensive so a
solution that doesn’t require any memory or code is a real winner. Not
to mention that if one application goes full screen, why shouldn’t it
get all the video memory too? No other applications can use it because
they are not visible.
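Sticking with the same made-up sketch from above, that second strategy is about as small as code gets, which is a big part of why it won:

/* The "dump everything and start over" strategy: just reset the
 * allocator past the new display buffers. Every offset handed out
 * before this call is now garbage, which is why the API simply tells
 * you that everything is lost after certain calls. */
void vram_mode_change(size_t new_display_bytes)
{
    alloc_top = new_display_bytes;   /* everything above is trashed */
}

No per-object bookkeeping, no compaction pass, and no feedback path back to the programmer.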
That is the simple version of the problem; it was actually a lot more
complex than that.
Nowadays it is not as much of a problem because the size of graphics
memory has grown and the way it is accessed by both rendering and
display hardware has changed. But, even now, the graphics memory can
wind up fragmented to the point where there are no free blocks large
enough to hold the new display buffers. And, it is still the case that
if another application goes full screen, there is no good reason not to
give all the video memory to the only application that can be seen.
Does anyone know why it does it that way? Because I’ve
never understood it.
Hope I’ve helped. I actually worked on memory allocators for graphics
hardware back in the bad old days.
Bob Pendleton

On Mon, Nov 23, 2009 at 3:44 PM, Mason Wheeler wrote:
From: Bob Pendleton <@Bob_Pendleton>
Subject: Re: [SDL] New SDL 1.3 event type proposal for discussion
What I’d really like to see is a graphics API that does lazy resource
loading and uses this event to mark all resources as unloaded.
Something like that would help avoid the problem of suddenly having
the application freeze while it madly uploads a few hundred megabytes
of textures to the video card. If we had that then we wouldn’t
need to expose the event to programmers.
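For what it is worth, a minimal sketch of that idea in C might look like the following; make_gpu_texture and the RESOURCES_LOST event are hypothetical stand-ins for illustration, not real SDL calls.

/* Keep the pixels in system memory and treat the GPU copy as a cache
 * that can be thrown away at any time. */
typedef struct {
    void     *pixels;      /* system-memory copy, always valid */
    int       w, h;
    unsigned  gpu_handle;  /* 0 means "not currently on the card" */
} LazyTexture;

unsigned make_gpu_texture(const void *pixels, int w, int h);  /* hypothetical upload call */

/* Called for every LazyTexture when the hypothetical RESOURCES_LOST
 * event arrives: just forget the GPU copy, do not re-upload anything yet. */
void lazy_invalidate(LazyTexture *t)
{
    t->gpu_handle = 0;
}

/* Called right before a texture is actually drawn; it re-uploads on
 * demand, so the cost is spread over the frames that use each texture
 * instead of hitting all at once right after the event. */
unsigned lazy_bind(LazyTexture *t)
{
    if (t->gpu_handle == 0)
        t->gpu_handle = make_gpu_texture(t->pixels, t->w, t->h);
    return t->gpu_handle;
}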
SDL mailing list
SDL at lists.libsdl.org