Hello!
@ Sam, Ryan:
I would do it this way; if there are
any technical reasons against it, I would
like to hear about them:
SDL_Surface
will be basically what it was in 1.2,
with only a few additions: a pointer to an SDL_Texture,
or an array of SDL_Textures (maybe for mipmapping?),
and a flag saying whether an associated texture (pack) should be
used when blitting, or whether the SDL_Surface pixels should be used.
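To make that concrete, here is a minimal sketch of the struct I have
in mind. All the new field names (textures, num_textures, use_3d) are
just placeholders I made up for this mail, not a finished API:

/* Sketch only: a 1.2-style SDL_Surface plus the proposed additions. */
typedef struct SDL_Texture SDL_Texture;  /* opaque, lives on the GFX card */

typedef struct SDL_Surface {
    /* ... the usual 1.2 fields: flags, format, w, h, pitch, pixels ... */

    /* proposed additions: */
    SDL_Texture **textures;  /* one texture, or an array (mipmap levels?) */
    int num_textures;
    int use_3d;              /* true: blit from the texture(s),
                                false: blit from the pixel data */
} SDL_Surface;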
SDL_Texture
A real texture on the GFX card.
With 2D drivers like X11, GDI … the additional things, like
using the texture for blitting and the SDL_CreateTexture,
SDL_RemoveTexture and SDL_UpdateTexture functions, are dummies;
only with OpenGL and Direct3D do they have real implementations.
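By "dummies" I mean something like this (a sketch only; the real
driver interface in 1.3 may look completely different):

/* Sketch: per-driver function table.  A 2D driver (X11, GDI, ...)
   fills the texture hooks with no-op stubs; the OpenGL and Direct3D
   drivers provide real upload/release code. */
typedef struct SDL_Surface SDL_Surface;

typedef struct RenderDriver {
    int  (*create_texture)(SDL_Surface *s);
    int  (*update_texture)(SDL_Surface *s);
    void (*remove_texture)(SDL_Surface *s);
} RenderDriver;

/* stubs for the 2D backends: */
static int  Dummy_CreateTexture(SDL_Surface *s) { (void)s; return 0; }
static int  Dummy_UpdateTexture(SDL_Surface *s) { (void)s; return 0; }
static void Dummy_RemoveTexture(SDL_Surface *s) { (void)s; }

static const RenderDriver x11_driver = {
    Dummy_CreateTexture, Dummy_UpdateTexture, Dummy_RemoveTexture
};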
An SW surface is what it was in 1.2:
a chunk of main memory.
An HW surface is an SDL_Surface with an associated SDL_Texture
and with the flag set to use the texture when blitting.
When you call SDL_LockSurface and then SDL_UnlockSurface,
the SDL_Surface is converted to an SDL_Texture, or, if there
already was one that the SDL_Surface's texture pointer points to,
that SDL_Texture is updated.
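In pseudo-C the unlock path could then look like this (again a
sketch, using the made-up fields from above and guessing that
SDL_CreateTexture / SDL_UpdateTexture take the surface):

/* Sketch of the unlock path. */
void SDL_UnlockSurface(SDL_Surface *s)
{
    if (s->use_3d) {
        if (s->textures == NULL) {
            SDL_CreateTexture(s);   /* first unlock: upload the pixels */
        } else {
            SDL_UpdateTexture(s);   /* texture exists: refresh it from
                                       the changed pixel data */
        }
    }
}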
That way old apps using SDL_HWSURFACE automatically get the speed
they always wanted. I mean, you only used SDL_HWSURFACE when you
wanted maximum blitting speed.
In the 2D case you can do all of this, too, as the functions like
SDL_CreateTexture, SDL_RemoveTexture … are dummies; only when using
OpenGL or D3D do they have real implementations.
So when using HW surfaces you automatically get the best
speed that is possible with your driver. When using SW surfaces
you have to set the use_3d flag to true and manually create, update
and remove the SDL_Texture.
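For example (a sketch with my guessed signatures; only
SDL_CreateRGBSurface and SDL_FreeSurface are real 1.2 calls):

/* Sketch: software surface with manually managed texture. */
SDL_Surface *s = SDL_CreateRGBSurface(SDL_SWSURFACE, 256, 256, 32,
                                      0, 0, 0, 0);
s->use_3d = 1;            /* placeholder flag from the sketch above */
SDL_CreateTexture(s);     /* upload the pixels once */

/* ... modify s->pixels ... */
SDL_UpdateTexture(s);     /* push the changes to the GFX card */

/* ... blits now come from the texture ... */
SDL_RemoveTexture(s);     /* done with it */
SDL_FreeSurface(s);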
When doing a blit, in the 2D case SDL simply blits the SDL_Surface
like in 1.2; in the 3D case it can check: if the flag is true,
use the texture for blitting, otherwise use the SDL_Surface
for blitting.
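In code that decision could look like this (sketch; BlitFromTexture,
SoftBlit and is_3d_backend are made-up names, only the
SDL_BlitSurface signature is the real 1.2 one):

/* Sketch of the blit decision. */
int SDL_BlitSurface(SDL_Surface *src, SDL_Rect *srcrect,
                    SDL_Surface *dst, SDL_Rect *dstrect)
{
    if (is_3d_backend() && src->use_3d && src->textures != NULL) {
        /* OpenGL / Direct3D path: draw the texture */
        return BlitFromTexture(src, srcrect, dst, dstrect);
    }
    /* classic 1.2 path: software blit of the pixel data */
    return SoftBlit(src, srcrect, dst, dstrect);
}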
That gives the user the choice.
CU