SDL_PIXELFORMAT_RGBA32 is an alias for SDL_PIXELFORMAT_RGBA8888 on big endian machines and for SDL_PIXELFORMAT_ABGR8888 on little endian machines.
Ok.
But it worked counter-intuitively for me, and I’m kind of confused.
I made an array of 32-bit pixel values packed as RGBA, written as hex literals like 0x01020304 (R = 0x01, G = 0x02, B = 0x03, A = 0x04),
and put them in a texture created with the SDL_PIXELFORMAT_RGBA32 flag.
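Roughly what my code looks like, as a simplified sketch (the renderer setup, texture size, and colors here are just placeholders):

```c
#include <SDL.h>

#define TEX_W 2
#define TEX_H 2

/* 32-bit pixels written by hand as packed RGBA hex (R meant to be the high byte). */
static Uint32 pixels[TEX_W * TEX_H] = {
    0xFF0000FF, 0x00FF00FF,   /* intended: red, green  */
    0x0000FFFF, 0xFFFFFFFF,   /* intended: blue, white */
};

static SDL_Texture *make_texture(SDL_Renderer *renderer)
{
    SDL_Texture *tex = SDL_CreateTexture(renderer,
                                         SDL_PIXELFORMAT_RGBA32,   /* <-- the flag in question */
                                         SDL_TEXTUREACCESS_STATIC,
                                         TEX_W, TEX_H);
    /* pitch = bytes per row */
    SDL_UpdateTexture(tex, NULL, pixels, TEX_W * (int)sizeof(Uint32));
    return tex;
}
```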
The result was that the texture was displayed as ABGR (because my computer is little endian, like everyone else’s).
The solution was to just change the pixel format to SDL_PIXELFORMAT_RGBA8888.
But I’m still confused about why it didn’t work in my scenario, and I’m curious in what scenario RGBA32 does work as intended. When I write a hex number by hand, it is natural for the computer to store it in memory with the bytes reversed (little endian). So apparently the *8888 formats don’t take the value byte by byte from memory, but as a whole 32-bit chunk, interpreted as a normal (big-endian style, most-significant-byte-first) value.
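To make sure I’m describing it right, this is the memory layout I mean (plain C, nothing SDL-specific):

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t pixel = 0x01020304;           /* meant as R=01, G=02, B=03, A=04 */
    uint8_t *bytes = (uint8_t *)&pixel;

    /* On my little-endian machine this prints: 04 03 02 01
     * SDL_PIXELFORMAT_RGBA8888 looks at the 32-bit value (R = high byte), so it sees R = 01.
     * SDL_PIXELFORMAT_RGBA32 looks at the bytes in memory (first byte = R), so it sees R = 04. */
    printf("%02x %02x %02x %02x\n", bytes[0], bytes[1], bytes[2], bytes[3]);
    return 0;
}
```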
So in what scenario does the pixel data arrive one byte at a time, in a memory order that would need this reversal, making SDL_PIXELFORMAT_RGBA32 the right choice?
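My guess is that it’s something like loading pixels from an image decoder that hands back a plain byte array; the sketch below assumes stb_image purely as an example of such a decoder:

```c
#include <SDL.h>
#include "stb_image.h"   /* STB_IMAGE_IMPLEMENTATION defined in one .c file elsewhere */

static SDL_Texture *load_rgba_texture(SDL_Renderer *renderer, const char *path)
{
    int w, h, n;
    /* The decoder returns bytes in memory order: R, G, B, A, R, G, B, A, ...
     * No 32-bit packing is involved, so endianness never comes into it. */
    unsigned char *data = stbi_load(path, &w, &h, &n, 4);
    if (!data)
        return NULL;

    SDL_Texture *tex = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_RGBA32,
                                         SDL_TEXTUREACCESS_STATIC, w, h);
    SDL_UpdateTexture(tex, NULL, data, w * 4);   /* pitch: 4 bytes per pixel */
    stbi_image_free(data);
    return tex;
}
```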
In short, am I ok with my current setting of SDL_PIXELFORMAT_RGBA8888 and RGBA-ordered hardcoded 32-bit hex values like 0x01020304? Will this work properly on big endian as well?