Understanding the pixel formats in SDL (ABGR or RGBA)

I tried to create a simple test texture with the format RGBA8888:

Code:

SDL_Texture *tex;
tex = SDL_CreateTexture(rContext, SDL_PIXELFORMAT_ARGB8888, SDL_TEXTUREACCESS_STATIC, 2, 2);

And then, I tried to update it with the following pixel data:

Code:

/* Intended as RGBA bytes: one red pixel, then three blue. */
unsigned char px[] =
{
    255, 0, 0, 255,
    0, 0, 255, 255,
    0, 0, 255, 255,
    0, 0, 255, 255
};

SDL_UpdateTexture(tex, NULL, px, 2 * sizeof(ILuint));

Problem is, SDL is reading the values as AGBR and not RGBA like I told it to.

What’s the problem here? Is it an endianness issue? And if so, why don’t I have this problem with libraries like DevIL, for instance?

Thank you for all the help provided.

Code:

SDL_Texture *tex;
tex = SDL_CreateTexture(rContext, SDL_PIXELFORMAT_ARGB8888, SDL_TEXTUREACCESS_STATIC, 2, 2);

I’m sorry, I meant:

Code:

SDL_Texture *tex;
tex = SDL_CreateTexture(rContext, SDL_PIXELFORMAT_RGBA8888, SDL_TEXTUREACCESS_STATIC, 2, 2);

SDL_PIXELFORMAT_RGBA8888 takes the current endianness, if I recall correctly (so indeed it’ll be ABGR… but AGBR, huh?).

Also, err, are you sure this is correct?

2 * sizeof(ILuint)

(Note: I didn’t check the function documentation and I don’t remember exactly what it wants; I have to do other things right now. That looks suspicious, though.)

Sik wrote:

(so indeed it’ll be ABGR… but AGBR, huh?).

I meant ABGR, yes; sorry, I’m tired.

Sik wrote:

Also err, are you sure this is correct?
2 * sizeof(ILuint)

Yes, it is the size in bytes of the first row of pixels in the image; it’s hardcoded like that just for testing purposes. I honestly find it kind of a strange argument, though; I’ve never had to pass anything like this before.

Sik wrote:

SDL_PIXELFORMAT_RGBA8888 takes the current endianness

So… wait… if that is the case, does it only respect little endian byte-wise and not bit-wise? If so, why would it do that? I don’t get it.

Once again, that isn’t very common, is it? I’ve never had to take such a thing into account before.

2013/9/16, ShiroAisu:

So… wait… if that is the case, does it only respect little endian byte-wise and not bit-wise? If so, why would it do that? I don’t get it.

When you use that format you’re supposed to use 32-bit integers, not bytes, so it takes whatever endianness the current CPU has.

Note that there are variants of that format that have a specific endianness, so if you want to pass bytes, consider using those instead.
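As a minimal sketch of the 32-bit approach (using the same 2x2 texture from above; px32 is a hypothetical name, and the shifts follow RGBA8888’s packed layout, with red in the most significant byte):

Code:

/* Pack each pixel as a single Uint32; CPU endianness then stops
   mattering, because SDL_PIXELFORMAT_RGBA8888 is defined on 32-bit
   values: red in bits 31-24, green in 23-16, blue in 15-8, alpha in 7-0. */
Uint32 px32[4];
px32[0] = (255u << 24) | (0u << 16) | (0u << 8) | 255u; /* red, opaque  */
px32[1] = (0u << 24) | (0u << 16) | (255u << 8) | 255u; /* blue, opaque */
px32[2] = px32[1];
px32[3] = px32[1];

SDL_UpdateTexture(tex, NULL, px32, 2 * sizeof(Uint32)); /* pitch: one row in bytes */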

Why does SDL take such a trivial approach to this? Endianness should have nothing to do with this, really. If I tell it I have a pixel format of ARGB8888, it’s dead obvious where the bits are regardless of what CPU I’m using. It’s explicit. Why make it more complicated than it needs to be? This is somewhat of a rant, and I wouldn’t mind hearing an answer on this. OpenGL did not do this, and I do not recall Direct3D doing this either.


ARGB8888 means that the color is encoded as 0xAARRGGBB or 0bAAAAAAAARRRRRRRRGGGGGGGGBBBBBBBB, i.e. alpha in the most significant 8 bits, red in the next most significant 8 bits, and so on. To see why this is the only correct interpretation, consider any pixel format that doesn’t allocate exactly 8 bits per channel - for example, RGB565 is correctly stored as 0bRRRRRGGGGGGBBBBB, not as the byte-swapped 0bGGGBBBBBRRRRRGGG.

Rainer Deyke (rainerd at eldwood.com)
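As a small sketch of that packing (r, g, b, and a are hypothetical 8-bit channel values):

Code:

/* ARGB8888: alpha in the most significant byte, then red, green, blue. */
Uint32 argb = ((Uint32)a << 24) | ((Uint32)r << 16) | ((Uint32)g << 8) | (Uint32)b;

/* RGB565: 5 bits of red, 6 of green, 5 of blue in one 16-bit word.
   Channels are truncated into place; nothing is byte-swapped. */
Uint16 rgb565 = (Uint16)(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));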

No, it isn’t obvious. Hardware sees each pixel as a word (in this case 32-bit words), so it has endianness issues, and software tends to follow along, especially for performance reasons.

SDL lets you take either approach. You can treat pixels as words and take the platform-dependent type (which changes endianness depending on the system), or you can treat them as bytes and take the platform-independent types. Since the only thing you really need to do is choose the appropriate format, this really isn’t as complex as it sounds (moreover, the platform-dependent types are just aliases to the platform-independent ones).
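A sketch of the byte approach, assuming the SDL_BYTEORDER macro from SDL_endian.h (BYTEWISE_RGBA is a made-up name): pick the packed format whose in-memory byte order is R, G, B, A on the current machine.

Code:

#include "SDL_endian.h"

/* On big-endian machines RGBA8888 already stores bytes as R, G, B, A;
   on little-endian machines ABGR8888 does. */
#if SDL_BYTEORDER == SDL_BIG_ENDIAN
#define BYTEWISE_RGBA SDL_PIXELFORMAT_RGBA8888
#else
#define BYTEWISE_RGBA SDL_PIXELFORMAT_ABGR8888
#endif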


Sik wrote:

Hardware sees each pixel as a word (in this case 32-bit words), so it has endianness issues, and software tends to follow along, especially for performance reasons.

I still have one question, though: if it is supposed to read a full 32-bit string, then why does it only switch the order of the individual components and not of the entire string?

For instance, if I have two components, R and G, why does

01000000 10000000

become

10000000 01000000

and not

00000001 00000010?

Sik wrote:

SDL lets you take either approach. You can treat pixels as words and take the platform-dependent type (which changes endianness depending on the system), or you can treat them as bytes and take the platform-independent types.

How would I treat them as bytes then? What format would I use for that?

The documentation is really lacking IMO.

ShiroAisu wrote:

why does it only switch the order of the individual components and not of the entire string?

Because there would be no point; it would completely change the numerical value.

Little endianness is not just some “hardware quirk”; it is designed for specific purposes, such as being able to address values < 256 stored in a 32-bit int through an 8-bit char pointer.

The endianness on the bit level is completely transparent to the user, as the octet is the smallest addressable data unit on current CPUs.

Jonas
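A tiny sketch of that addressing property (v and p are hypothetical names; the comments assume a little-endian CPU):

Code:

Uint32 v = 200;          /* a value that fits in a single byte */
Uint8 *p = (Uint8 *)&v;

/* On a little-endian CPU, p[0] == 200: the lowest byte sits at the
   lowest address, so the same pointer yields the same small value
   whether read as 8, 16, or 32 bits. On a big-endian CPU, p[0] == 0
   and the 200 would be at p[3]. */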

ShiroAisu wrote:

why does it only switch the order of the individual components and not of the entire string?

Because endianness works at the byte level, not at the bit level.

This is a legacy we inherited from old 8-bit processors, which were all little endian (it was easier to implement), and the most successful processor series started as one (the 8088 had an 8-bit bus) ._.'

Personally I’d prefer if we had stuck to big endian, but it’s too late to change that.

ShiroAisu wrote:

How would I treat them as bytes then? What format would I use for that?

The documentation is really lacking IMO.

*looks at wiki* WTF, the processor-independent types used to be specified there, I swear, what happened to them? o_o *looks in the headers* Ugh, looks like they got rid of that -_-' Guess it has been quite a while since I looked at this (like, when I had started working on my game).

It seems everything is little endian now (for whatever reason).

Sik wrote:

Ugh, looks like they got rid of that

How would I work around this then?

I have a series of bytes representing each pixel component; do I have to do everything differently for each system?

No, it seems SDL now takes little endian always (i.e. lowest byte first).

If somebody spots an error, tell me; I’m going by what I’ve seen in the headers.


Sik wrote:

No, it seems SDL now takes little endian always

Correct me if I’m wrong, but does that mean that I don’t need to work around it? Will it be little endian on every system?

Yeah, that seems to be the case, if I understood the headers correctly. Just so you know: this means the bytes are in reverse order (i.e. RGBA has alpha first and red last), so you’ll probably still need to deal with that.
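A sketch of what “alpha first, red last” looks like in memory (hypothetical pixel value; little-endian CPU assumed):

Code:

Uint32 rgba = 0xFF000080; /* one RGBA8888 pixel: R=0xFF, G=0, B=0, A=0x80 */
Uint8 *bytes = (Uint8 *)&rgba;

/* On a little-endian machine the bytes come out reversed:
   bytes[0] == 0x80 (A), bytes[1] == 0x00 (B),
   bytes[2] == 0x00 (G), bytes[3] == 0xFF (R). */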


Sik wrote:

Yeah, that seems to be the case

Alright, I’ll just use ABGR when I want RGBA then. If anyone has any observations they would be appreciated, but for now, thank you very much for your help.
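For reference, a minimal sketch of that workaround applied to the original snippet (assuming a little-endian machine, where ABGR8888 stores bytes as R, G, B, A):

Code:

SDL_Texture *tex;
tex = SDL_CreateTexture(rContext, SDL_PIXELFORMAT_ABGR8888, SDL_TEXTUREACCESS_STATIC, 2, 2);

/* RGBA byte order: one red pixel, then three blue. */
unsigned char px[] =
{
    255, 0, 0, 255,
    0, 0, 255, 255,
    0, 0, 255, 255,
    0, 0, 255, 255
};

SDL_UpdateTexture(tex, NULL, px, 2 * 4); /* pitch: 2 pixels * 4 bytes */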