As you know, QueryTexture returns the pixel format that a surface being copied to the texture should be in.
Unfortunately it doesn't give a clue as to what the pixel format of the texture itself is.
Up until now I've been creating an RGBA surface (with the masks accounting for little/big-endian), then using CreateTextureFromSurface to create my textures. That works, but it's very slow for large, empty textures. It's currently the only way I know of to get the renderer to use its most-preferred RGBA format for the texture; see the sketch below.
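For context, the workaround looks roughly like this (a sketch; the helper name and dimensions are just mine):

```c
#include <SDL.h>

/* Build an endian-aware RGBA surface, then let the renderer convert it.
   The conversion inside CreateTextureFromSurface targets the renderer's
   preferred format -- which is exactly what I'd like to query directly. */
SDL_Texture *create_rgba_texture(SDL_Renderer *renderer, int w, int h)
{
    Uint32 rmask, gmask, bmask, amask;
#if SDL_BYTEORDER == SDL_BIG_ENDIAN
    rmask = 0xff000000; gmask = 0x00ff0000; bmask = 0x0000ff00; amask = 0x000000ff;
#else
    rmask = 0x000000ff; gmask = 0x0000ff00; bmask = 0x00ff0000; amask = 0xff000000;
#endif
    SDL_Surface *surface = SDL_CreateRGBSurface(0, w, h, 32,
                                                rmask, gmask, bmask, amask);
    if (!surface)
        return NULL;

    SDL_Texture *texture = SDL_CreateTextureFromSurface(renderer, surface);
    SDL_FreeSurface(surface);
    return texture;
}
```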
If QueryTexture returned the texture's own pixel format as well as the desired transfer-surface format, I could grab it from one texture and reuse it for all subsequent ones.
Does anybody know of a method to determine the renderer's default preferred 32-bit pixel format?
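The closest thing I've come across is SDL_GetRendererInfo, whose texture_formats array lists the formats the driver supports. I'm assuming (but haven't confirmed) that the array is ordered most-preferred first; the helper name below is mine, and this is an untested sketch:

```c
#include <SDL.h>

/* Probe the renderer for the first 32-bit, non-FOURCC format it reports.
   Assumes texture_formats[] is ordered most-preferred first, which I
   haven't seen guaranteed anywhere. Returns SDL_PIXELFORMAT_UNKNOWN
   if nothing matches. */
Uint32 guess_preferred_format(SDL_Renderer *renderer)
{
    SDL_RendererInfo info;
    if (SDL_GetRendererInfo(renderer, &info) != 0)
        return SDL_PIXELFORMAT_UNKNOWN;

    for (Uint32 i = 0; i < info.num_texture_formats; i++) {
        Uint32 fmt = info.texture_formats[i];
        /* Skip YUV/FOURCC formats; take the first 32-bit RGBA-style one. */
        if (!SDL_ISPIXELFORMAT_FOURCC(fmt) && SDL_BITSPERPIXEL(fmt) == 32)
            return fmt;
    }
    return SDL_PIXELFORMAT_UNKNOWN;
}
```

If that assumption holds, SDL_CreateTexture(renderer, guess_preferred_format(renderer), SDL_TEXTUREACCESS_STATIC, w, h) would skip the surface round-trip entirely. Can anyone confirm whether the ordering is reliable?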