SDL_Texture, main-memory usage?

I’ve just been building some apps in SDL2 and am surprised to find that even when hardware acceleration is available, creating a texture still uses main memory as well, as reported by Windows Task Manager.
For example, this:
SDL_Texture *texturer = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_ARGB8888, SDL_TEXTUREACCESS_STATIC, 8192, 8192);

makes the app use ~226,000 KB of memory, while this:
SDL_Texture *texturer = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_ARGB8888, SDL_TEXTUREACCESS_STATIC, 4096, 4096);

makes the app use about 44,000 KB.

Is this the same for everyone, and if so, why?
I’m using an HD5800 1GB on x86 Windows XP (4GB RAM, 3.6GB addressable), so I wondered whether the graphics driver is sharing memory address space with the OS. Otherwise I can’t account for this.

mattbentley wrote:

[snip]

P.S. SDL Render reports using direct3d; max texture width/height = 8192/8192.

As far as I know this is very much to be expected.
Direct3D will keep textures resident in host memory so that it can swap them to and from the graphics card as needed. It’s a process similar to how virtual memory works on the PC, which writes less-used regions of memory out to the hard disk to make space for new memory requests. This frees the programmer to use more resources than the graphics card may have available, and offloads the memory management issues to the graphics card drivers.
I’m not an expert on these things, but I believe this is the reason.

On 23/11/2014 08:46, mattbentley wrote:

[snip]


SDL mailing list
SDL at lists.libsdl.org
http://lists.libsdl.org/listinfo.cgi/sdl-libsdl.org

Aidan Dodds wrote:

As far as I know this is very much to be expected. [snip]

That seems counterintuitive and probably wrong, since it would double the amount of memory address space used by a graphics-intensive application such as a game: a game using 750MB of video memory would also use 750MB of main memory, leaving only 2.5GB of address space available to a 32-bit app.
Does anyone else have any input?

2014-11-27 10:47 GMT+01:00 mattbentley:

[snip]

http://msdn.microsoft.com/en-us/library/windows/desktop/bb147168(v=vs.85).aspx

Memory usage depends on what pool is chosen for the resource.

Jonas Kulla wrote:

http://msdn.microsoft.com/en-us/library/windows/desktop/bb147168(v=vs.85).aspx

Memory usage depends on what pool is chosen for the resource.

Meaning that all default (managed) textures get allocated in both video memory AND system memory. The apparent reason is that if the application is switched away from (this is what is meant by the ‘lost state’ in the above Microsoft article, as described here: http://msdn.microsoft.com/en-us/library/windows/desktop/bb174714(v=vs.85).aspx), the pixel content of a non-managed texture could be lost.

However, the downside is main memory usage, which I guess the bigger players in the games world get around by reloading textures after the application regains focus?
Anyway, not a problem unless you’ve got large texture maps and high memory usage in other areas.
So you were right, Aidan.
Thanks-
Matt

From my reading of this, and of the SDL code in SDL_Render_D3D.c, it looks like SDL allocates textures to the managed pool (D3DPOOL_MANAGED) unless the texture is a streaming texture or a render target.