Let’s assume I have an array of bytes with pixel data, like “unsigned
char* data;”, e.g. from stb_image’s stbi_load().
The first byte is for red, the second for green, the third for blue, the
fourth for alpha.
I think it’s sane to call this RGBA, right?
So I wanna create an SDL_Surface* with this, e.g. to set the window icon.
Uint32 rmask, gmask, bmask, amask;
int bpp;
SDL_PixelFormatEnumToMasks(SDL_PIXELFORMAT_RGBA8888,
                           &bpp, &rmask, &gmask, &bmask, &amask);
SDL_Surface* surf = SDL_CreateRGBSurfaceFrom((void*)data, w, h, bpp,
                                             4*w, rmask, gmask, bmask, amask);
(Then I display that surface).
On a little endian machine, this looks wrong, because the rmask is
0xff000000, which masks the last (fourth) byte instead of the first one
(and similarly for the other masks).
Using SDL_PIXELFORMAT_ABGR8888 looks correct, but will (most probably)
look wrong on big endian machines…
Furthermore, using it seems just wrong when I really have RGBA data.
Anyway, I find it kinda surprising at first that the data passed to
SDL_CreateRGBSurfaceFrom() seems to be interpreted as 32bit ints (and
with 16bit or 24bit color depth this is even stranger), especially as
it’s a void* pointer and not a Uint32* pointer. (Only at first, because
something like this must be done internally, otherwise the masks
wouldn’t make sense.)
And then it’s even more surprising that the masks generated by
SDL_PixelFormatEnumToMasks() with SDL_PIXELFORMAT_RGB888 and
SDL_PIXELFORMAT_RGBA8888 don’t seem to work correctly with bytestreams -
the very name “RGBA8888” (in contrast to the nonexistent “RGBA32”)
sounds like “there’s 8bits/1byte of red, then 1byte of green etc.”, not
“we really assume a 32bit integer value”.
So is there a portable way to set the masks in the (platform-specific)
correct way for bytestreams?
I’d even assume that this is a more common use case than transforming an
array of 32bit ints with red in the least significant byte (instead of