I apologise in advance for the off-topic post, but I’m hoping that I’ll be
forgiven since it arises from my work with the SDL. If someone could point
me at a more appropriate forum, I’ll gladly relocate the question…
I understand the byte-ordering difference that defines the Little- and
Big-Endian schemes. From what I've seen in the SDL documentation, I've
assumed that portable code must cater for both types of platform.
Recent discussions with people in the office suggest that this is not the
case, and that C as a language standardises on a Big-Endian
representation, i.e., that to get the least significant byte of a 32-bit
int, the mask 0x000000ff can be used REGARDLESS of whether the underlying
hardware is Little- or Big-Endian. The assumption is that the compiler
expects code to be written this way and performs whatever byte-swapping
the platform requires.
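To make concrete what I think my colleagues are claiming, here is a tiny
test I put together myself (my own sketch, not anything from the SDL, so
please correct me if it misrepresents the issue). My understanding is that
the masking line behaves the same everywhere because bitwise operators work
on the value of the int, while the pointer line exposes the actual byte
layout in memory:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint32_t x = 0x11223344;

        /* Bitwise ops act on the value, so this prints 0x44 on any
           platform, regardless of endianness. */
        printf("x & 0xff             = 0x%02x\n", (unsigned)(x & 0xff));

        /* Byte order only shows up when the raw bytes in memory are
           inspected: 0x44 on little-endian, 0x11 on big-endian. */
        unsigned char *p = (unsigned char *)&x;
        printf("first byte in memory = 0x%02x\n", (unsigned)p[0]);

        return 0;
    }

If that reading is right, then masking alone never needed byte-swapping in
the first place, which only deepens my confusion about the SDL masks below.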
If this is correct, then why does the SDL need different RGBA masks for
BIG- and LIL-endian platforms? If anyone could point me at a reference, or
is willing to explain to me how this really works, I would greatly
appreciate the enlightenment. This is quite a big hole in my education
that I'd like to fill.
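For reference, the sort of thing I have in mind is the pattern from the
SDL_CreateRGBSurface documentation; I'm quoting it from memory, so treat
the details as approximate:

    #include "SDL.h"

    /* Masks chosen per byte order, as in the SDL_CreateRGBSurface docs
       (paraphrased from memory): */
    #if SDL_BYTEORDER == SDL_BIG_ENDIAN
        Uint32 rmask = 0xff000000, gmask = 0x00ff0000,
               bmask = 0x0000ff00, amask = 0x000000ff;
    #else
        Uint32 rmask = 0x000000ff, gmask = 0x0000ff00,
               bmask = 0x00ff0000, amask = 0xff000000;
    #endif

It's this #if on SDL_BYTEORDER that I can't reconcile with the claim that
the compiler hides endianness from me.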
With thanks
.marc