Endian problem under OS X?

Hi, I’m porting some games to Mac OS X, and now I think I’ve found a
little bug in SDL for OS X.
I have this code to select whether the masks should be big-endian or
little-endian:

if (SDL_BYTEORDER == SDL_BIG_ENDIAN) {

} else {

}

It turns out that on Mac OS X, SDL_BYTEORDER == SDL_BIG_ENDIAN, and it
shouldn’t be!!
To make my program work properly, I have to force execution into the "else"
branch, like this:

// if (SDL_BYTEORDER == SDL_BIG_ENDIAN) {
// …
// } else {

// }

Has anyone noticed the same problem?

Santi Ontañón wrote:

It happens that in Mac OS X, SDL_BYTEORDER = SDL_BIG_ENDIAN, and
shouldn’t!!

[snip]


SDL mailing list
SDL at libsdl.org
http://www.libsdl.org/mailman/listinfo/sdl

OS X runs on the PowerPC, which is big-endian, and thus SDL_BYTEORDER
should be SDL_BIG_ENDIAN. Am I missing something?

/Magnus

It happens that in Mac OS X, SDL_BYTEORDER = SDL_BIG_ENDIAN, and
shouldn’t!!

I don’t know Mac OS X, but I know that x86 is LITTLE endian and just about
anything else (PPC, SPARC, 680x0, MIPS…) is BIG endian.

IMHO this kind of definition can easily be misunderstood anyway; I think
that naming BIG ENDIAN something like “natural byte order” and LITTLE
ENDIAN something like “inverted byte order” would be less confusing :-)

Bye,
Gabry

I’m not sure whether the G4 is big-endian or little-endian. Let me explain
my problem better:

I have this function (it multiplies the RED, GREEN and BLUE channels of
a surface by a constant), which I use to fade in and out, and to increase
or decrease the RED, GREEN or BLUE level of an image:

void surface_fader(SDL_Surface *surface, float r_factor, float g_factor, float b_factor)
{
    int i, x, y, offs;
    Uint8 rtable[256], gtable[256], btable[256];
    Uint8 *pixels = (Uint8 *)(surface->pixels);

    if (surface->format->BytesPerPixel != 4) return;

    for (i = 0; i < 256; i++) {
        rtable[i] = (Uint8)(i * r_factor);
        gtable[i] = (Uint8)(i * g_factor);
        btable[i] = (Uint8)(i * b_factor);
    } /* for */

    for (y = 0; y < surface->h; y++) {
        for (x = 0, offs = y * surface->pitch; x < surface->w; x++, offs += 4) {
            if (SDL_BYTEORDER == SDL_BIG_ENDIAN) {
                pixels[offs]     = rtable[pixels[offs]];
                pixels[offs + 1] = gtable[pixels[offs + 1]];
                pixels[offs + 2] = btable[pixels[offs + 2]];
            } else {
                pixels[offs + 3] = rtable[pixels[offs + 3]];
                pixels[offs + 2] = gtable[pixels[offs + 2]];
                pixels[offs + 1] = btable[pixels[offs + 1]];
            } /* if */
        } /* for */
    } /* for */
} /* surface_fader */

And I had to replace it with:

void surface_fader(SDL_Surface *surface, float r_factor, float g_factor, float b_factor)
{
    int i, x, y, offs;
    Uint8 rtable[256], gtable[256], btable[256];
    Uint8 *pixels = (Uint8 *)(surface->pixels);

    if (surface->format->BytesPerPixel != 4) return;

    for (i = 0; i < 256; i++) {
        rtable[i] = (Uint8)(i * r_factor);
        gtable[i] = (Uint8)(i * g_factor);
        btable[i] = (Uint8)(i * b_factor);
    } /* for */

    for (y = 0; y < surface->h; y++) {
        for (x = 0, offs = y * surface->pitch; x < surface->w; x++, offs += 4) {
            pixels[offs + 3] = rtable[pixels[offs + 3]];
            pixels[offs + 2] = gtable[pixels[offs + 2]];
            pixels[offs + 1] = btable[pixels[offs + 1]];
        } /* for */
    } /* for */
} /* surface_fader */

The first version works fine on Windows, and I don’t know why it doesn’t
work on the Mac. Am I missing something?

This isn’t an endian issue, but rather a pixel format issue. The screen
format on OS X for a 32-bit screen is ARGB, not RGBA as it might be on
Windows. To write code that works no matter what the pixel format is,
you’ll have to use shifts and masks (store 0xFF in the alpha channel,
or just leave it alone). To get the mask and shift values, use the data
from the surface’s pixel format. Process one 32-bit pixel at a time by
applying the mask and shift, multiplying, then reapplying the mask and
shift (in reverse). For example, to change the red component:

Uint32 *pixels = (Uint32 *)surface->pixels;   /* note: 32-bit pixels here */
Uint32 pixel = pixels[i];
Uint8 red = (Uint8)((pixel & rmask) >> rshift);
pixels[i] = (pixel & ~rmask) | (((Uint32)(red * factor) << rshift) & rmask);

You hit the spot!

Thanks. That was the problem.

Hi,

It happens that in Mac OS X, SDL_BYTEORDER = SDL_BIG_ENDIAN, and
shouldn’t!!

Unless you’re amazingly not running under a PowerPC processor, yes it should.

Neil


Neil,

As you probably know, the PowerPC can run both little-endian and
big-endian - so the answer should perhaps have been “what hardware
platform?”, “what processor?” (Maybe he has one of the mythical Mac
OS X Athlon boxes?) :-)

That said, I agree: since Mac OS X normally runs on PowerPC Macs,
which normally are big-endian (except when running VPC), then it
should.

On Wed, 18 Dec 2002, Gabriele Greco wrote:

IMHO this kind of definition can easily be misunderstood anyway; I think
that naming BIG ENDIAN something like “natural byte order” and LITTLE
ENDIAN something like “inverted byte order” would be less confusing :-)

Gabriele,

You have my vote.

However, I’m not sure that 95% of other computer users would agree. :-)

Seasons Greetings,
Rob Probin
Lightsoft
http://www.lightsoft.co.uk