Managing byte-swapped surfaces?

“Patrice Mandin” wrote:

I did not find any information in SDL_PixelFormat or SDL_Surface about
the byte order of the surfaces. On Atari, which has an m68k CPU
(big-endian), custom video cards are seen as little-endian. So if a
program wants to write a 16/24/32-bit pixel, it does not know whether it
must byte-swap it or not (the same goes for reading).

SDL always keeps pixel values in the native byte order in surfaces. Adding
a byte order field to the format would complicate the code for rare gain.
For 8, 24 and 32 bpp this is not a problem (it’s only a matter of adjusting
the RGBA masks), but for 16 bpp you’re screwed.

I’ve been semi-worried about this problem for more modern hardware,
mainly Mac video boards hanging on a little-endian PCI bus, but Mac people
tell me there is some hardware magic being done so that the user sees
a native-endian framebuffer (I’d like to have this confirmed).

Anyway, I believe byte-swapped 16 bpp framebuffers should be considered an
“odd and unsupported pixel format” – sorry. I might consider adding
an API for testing whether a surface is byte-swapped, but I’m not fond
of adding an extra optional byteswap to each of the blitter functions.

I still have an Atari ST stowed away somewhere, and while it’s got its
charm, TOS is one of the absolutely worst excuses for an operating
system I’ve ever encountered, so you have my sympathies.