Color key broken in glSDL?

I just spent absolutely ages trying to work out why my color key was
not working in glSDL, and eventually discovered that when creating an
SDL color with SDL_MapRGB the alpha component of the color is set to
0, whilst when glSDL is checking which pixels match the color key it
is checking pixels which can have an alpha component of 255.

My own little bit of code to make it work is below, though I don’t
normally faff with low level bit shifting so I won’t be surprised if
there is a mistake:

// Compare a pixel's RGB bytes against the colorkey, ignoring alpha.
// (As noted above, the byte-order handling here depends on the
// surface's actual pixel format.)
static bool rgb_match(Uint8* src, Uint32 color) {
	Uint8 r, g, b;
	if (SDL_BYTEORDER == SDL_BIG_ENDIAN) {
		b = (color >> 16) & 0xff;
		g = (color >> 8) & 0xff;
		r = color & 0xff;
	} else {
		b = color & 0xff;
		g = (color >> 8) & 0xff;
		r = (color >> 16) & 0xff;
	}

	return src[0] == b
	    && src[1] == g
	    && src[2] == r;
}

// Force every pixel that matches the surface's colorkey to fully
// transparent black, so the key takes effect as real alpha.
static void key2alpha(SDL_Surface *surface) {
	if (SDL_LockSurface(surface) < 0)
		return;

	Uint32 ckey = surface->format->colorkey;
	int bytes_per_pixel = surface->format->BytesPerPixel;
	Uint8* casted_buffer = reinterpret_cast<Uint8*>(surface->pixels);

	// Walks the whole pitch * h byte buffer; any row padding is
	// touched too, which is harmless here.
	for (int i = 0; i != surface->pitch * surface->h / bytes_per_pixel; ++i) {
		if (rgb_match(casted_buffer, ckey)) {
			casted_buffer[0] = 0;
			casted_buffer[1] = 0;
			casted_buffer[2] = 0;
			casted_buffer[3] = 0;  // assumes 4 bytes per pixel (RGBA)
		}
		casted_buffer += bytes_per_pixel;
	}

	SDL_UnlockSurface(surface);
}
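
(A minimal usage sketch, not from the original mail; "sprite" is assumed
to be a 32-bit RGBA surface, and the magenta key is only an example:)

SDL_SetColorKey(sprite, SDL_SRCCOLORKEY,
                SDL_MapRGB(sprite->format, 255, 0, 255));
key2alpha(sprite);   // keyed pixels become fully transparent black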

James

Why are you using colorkey on RGBA surfaces in the first place?
AFAICT, you can’t run into this problem without doing something
backwards. When you’re using RGB color values with RGBA pixels,
you’re basically comparing apples to pears.

glSDL will generate RGBA surfaces when you (gl)SDL_DisplayFormat()
colorkeyed surfaces, because that’s how it implements colorkeying
over OpenGL. (This is indeed a side effect that other backends don’t
have, AFAIK. They’re almost the other way around; they accelerate
colorkeying, but not alpha.)

If you mess with the colorkey settings of display format converted
surfaces, the display formatting is essentially undone, and glSDL
will have to do on-the-fly conversion for every blit afterwards. When
doing so, it’ll implement the new colorkey settings over the
previously RGBA-converted surface by attempting to key it into a new
RGBA surface.
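
(For reference, a sketch of the intended flow; the file name is made up
and the key colour is just an example:)

SDL_Surface *img = SDL_LoadBMP("sprite.bmp");           // plain RGB, no alpha
SDL_SetColorKey(img, SDL_SRCCOLORKEY,
                SDL_MapRGB(img->format, 255, 0, 255));  // key it first
SDL_Surface *accel = SDL_DisplayFormat(img);            // glSDL keys into RGBA here
SDL_FreeSurface(img);
// ...and leave accel's colorkey alone afterwards, or the conversion
// is effectively undone and glSDL re-keys on every blit.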

//David Olofson - Programmer, Composer, Open Source Advocate

.------- http://olofson.net - Games, SDL examples -------.
| http://zeespace.net - 2.5D rendering engine |
| http://audiality.org - Music/audio engine |
| http://eel.olofson.net - Real time scripting |
'-- http://www.reologica.se - Rheology instrumentation --'

On Friday 13 October 2006 12:27, James Gregory wrote:

I just spent absolutely ages trying to work out why my color key was
not working in glSDL, and eventually discovered that when creating
an SDL color with SDL_MapRGB the alpha component of the color is set
to 0, whilst when glSDL is checking which pixels match the color key
it is checking pixels which can have an alpha component of 255.

Why are you using colorkey on RGBA surfaces in the first place?
AFAICT, you can’t run into this problem without doing something
backwards. When you’re using RGB color values with RGBA pixels,
you’re basically comparing apples to pears.

glSDL will generate RGBA surfaces when you (gl)SDL_DisplayFormat()
colorkeyed surfaces, because that’s how it implements colorkeying
over OpenGL. (This is indeed a side effect that other backends don’t
have, AFAIK. They’re almost the other way around; they accelerate
colorkeying, but not alpha.)

If you mess with the colorkey settings of display format converted
surfaces, the display formatting is essentially undone, and glSDL
will have to do on-the-fly conversion for every blit afterwards. When
doing so, it’ll implement the new colorkey settings over the
previously RGBA-converted surface by attempting to key it into a new
RGBA surface.

I ran into this problem doing this (pseudo code):

surface1 = load_image();
image_processing(surface1);
surface2 = create_rgba_surface();
blit(from surface1, to surface2);
set_color_key(surface2);
display_format(surface2);

I could remove the blitting across step, but then I would need to
reload the image each time I want to do some processing and create a
surface from it.

James

On 10/13/06, David Olofson wrote:

[…]

I ran into this problem doing this (pseudo code):

surface1 = load_image();
image_processing(surface1);
surface2 = create_rgba_surface();
blit(from surface1, to surface2);
set_color_key(surface2);
display_format(surface2);

I could remove the blitting across step, but then I would need to
reload the image each time I want to do some processing and create a
surface from it.

I see. But, why are you creating surface2 as an RGBA surface?

The correct way of doing it would be to create the surface2 using the
pixelformat struct from surface1. Unfortunately, that doesn’t quite
cover it as a general solution, as blitting from one surface to
another doesn’t create a copy of the original, except in a few
special cases. You need to fiddle with the colorkey and alpha
settings of the source surface to make sure the blitter actually just
copies pixels, without keying or blending.
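
(Roughly something like this, as a sketch only; the variable names
follow the pseudo code above:)

// Create surface2 with surface1's exact pixel format...
SDL_Surface *surface2 = SDL_CreateRGBSurface(SDL_SWSURFACE,
        surface1->w, surface1->h,
        surface1->format->BitsPerPixel,
        surface1->format->Rmask, surface1->format->Gmask,
        surface1->format->Bmask, surface1->format->Amask);
// ...and make sure the blit is a plain copy: no keying, no blending.
SDL_SetColorKey(surface1, 0, 0);
SDL_SetAlpha(surface1, 0, 255);
SDL_BlitSurface(surface1, NULL, surface2, NULL);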

//David Olofson - Programmer, Composer, Open Source Advocate

.------- http://olofson.net - Games, SDL examples -------.
| http://zeespace.net - 2.5D rendering engine |
| http://audiality.org - Music/audio engine |
| http://eel.olofson.net - Real time scripting |
'-- http://www.reologica.se - Rheology instrumentation --'

On Friday 13 October 2006 13:32, James Gregory wrote:

The game runs in RGBA, the image I was loading just happened to not
have an alpha channel.

James

On 10/13/06, David Olofson wrote:

On Friday 13 October 2006 13:32, James Gregory wrote:
[…]

I ran into this problem doing this (pseudo code):

surface1 = load_image();
image_processing(surface1);
surface2 = create_rgba_surface();
blit(from surface1, to surface2);
set_color_key(surface2);
display_format(surface2);

I could remove the blitting across step, but then I would need to
reload the image each time I want to do some processing and create a
surface from it.

I see. But, why are you creating surface2 as an RGBA surface?
The correct way of doing it would be to create the surface2 using the
pixelformat struct from surface1. Unfortunately, that doesn’t quite
cover it as a general solution, as blitting from one surface to
another doesn’t create a copy of the original, except in a few
special cases. You need to fiddle with the colorkey and alpha
settings of the source surface to make sure the blitter actually just
copies pixels, without keying or blending.

[…]

The game runs in RGBA, the image I was loading just happened to not
have an alpha channel.

That is, you’re using alpha blending with RGBA normally, but you want
colorkeying for this particular surface, because it doesn’t have an
alpha channel?

Proper solution:
Don’t use RGBA surfaces when dealing with colorkeying.

Easy way out:
Fire up GIMP, PhotoShop or whatever and convert the
offending image(s) to RGBA format.

//David Olofson - Programmer, Composer, Open Source Advocate

.------- http://olofson.net - Games, SDL examples -------.
| http://zeespace.net - 2.5D rendering engine |
| http://audiality.org - Music/audio engine |
| http://eel.olofson.net - Real time scripting |
'-- http://www.reologica.se - Rheology instrumentation --'

On Friday 13 October 2006 14:49, James Gregory wrote:

Yes, that is indeed my problem. I can see why it makes sense not to
support color keying for RGBA surfaces, though it is worth noting that
it works with normal SDL but not with glSDL.

On an entirely separate point, glSDL’s SetVideoMode leaks memory if
you call it more than once whilst an application is running. Most of
this memory leak is because calling SetVideoMode a second time
allocates fake_screen without freeing the old fake_screen first,
though there seems to be another leak of about 50kb that I can’t work
out.

James

On 10/13/06, David Olofson wrote:

On Friday 13 October 2006 14:49, James Gregory wrote:
[…]

The game runs in RGBA, the image I was loading just happened to not
have an alpha channel.

That is, you’re using alpha blending with RGBA normally, but you want
colorkeying for this particular surface, because it doesn’t have an
alpha channel?

Proper solution:
Don’t use RGBA surfaces when dealing with colorkeying.

Easy way out:
Fire up GIMP, PhotoShop or whatever and convert the
offending image(s) to RGBA format.

[…]

Yes, that is indeed my problem. I can see why it makes sense not to
support color keying for RGBA surfaces,

Actually, they’re two different features, and they are not inherently
mutually exclusive. However, it’s not obvious whether colorkeying
should consider the alpha channel or not.

though it is worth noting
that it works with normal SDL but not with glSDL.

Well, it appears that SDL ignores the alpha channel then. If that’s
correct, then glSDL is clearly doing the wrong thing here.

On an entirely separate point, glSDL’s SetVideoMode leaks memory if
you call it more than once whilst an application is running. Most of
this memory leak is because calling SetVideoMode a second time
allocates fake_screen without freeing the old fake_screen first,

Oops… Well, that should be a trivial fix. (Still not fixed in my
version, AFAICT.) Thanks for reporting it.
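
(Presumably something along these lines in glSDL_SetVideoMode(); just a
sketch, not the actual glSDL code, assuming fake_screen is an
SDL_Surface* owned by the wrapper:)

// Before (re)creating the shadow surface:
if (fake_screen)
{
	SDL_FreeSurface(fake_screen);   // free the old one first
	fake_screen = NULL;
}
// ...then allocate the new fake_screen as before.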

though there seems to be another leak of about 50kb that I can’t
work out.

Other surfaces or something?

There have been issues with glSDL_TexInfo structs and whatnot before,
and I’m not sure I’ve found all of them. Horrible mess trying to
manage these things in parallel with surfaces without any real
hooks, but that’s what you get with a thin wrapper like this.

Time for Valgrind, I think…

//David Olofson - Programmer, Composer, Open Source Advocate

.------- http://olofson.net - Games, SDL examples -------.
| http://zeespace.net - 2.5D rendering engine |
| http://audiality.org - Music/audio engine |
| http://eel.olofson.net - Real time scripting |
'-- http://www.reologica.se - Rheology instrumentation --'

On Saturday 14 October 2006 22:21, James Gregory wrote:

[…]

though there seems to be another leak of about 50kb that I can’t
work out.

Here, I’m getting leaks of 38515 bytes total, with or without glSDL
compiled in. The leak remains even with just SDL_Init() and
SDL_Quit() (don’t even have to open a window), and goes away if I
remove those too, leaving a practically empty main() function.

When using glSDL with OpenGL rendering, the leaks are up to 76124
bytes total.

Most or all leaks seem to be directly related to X in one way or
another.

OS:	Linux 2.6.17, native 64 bit AMD64
X11:	Xorg 7.1 (70 101 000)
OpenGL:	2.0.2 (nVidia driver 1.0.8774)
SDL:	1.2.11
glSDL:	0.8 (work in progress)
gcc:	4.1.1

What’s going on here? Is it just Valgrind not getting along with X11,
or is this an actual problem with SDL?

//David Olofson - Programmer, Composer, Open Source Advocate

.------- http://olofson.net - Games, SDL examples -------.
| http://zeespace.net - 2.5D rendering engine |
| http://audiality.org - Music/audio engine |
| http://eel.olofson.net - Real time scripting |
'-- http://www.reologica.se - Rheology instrumentation --'

On Saturday 14 October 2006 23:20, David Olofson wrote:

On an entirely separate point, glSDL’s SetVideoMode leaks memory if
you call it more than once whilst an application is running. Most of
this memory leak is because calling SetVideoMode a second time
allocates fake_screen without freeing the old fake_screen first,
though there seems to be another leak of about 50kb that I can’t work
out.

Also, in glSDL_BlitFromGL, if dst_rect is 0, dst_rect doesn’t get
given the width and height of the source rect automatically.

James

On 10/14/06, James Gregory wrote:

[…]

Also, in glSDL_BlitFromGL, if dst_rect is 0, dst_rect doesn’t get
given the width and height of the source rect automatically.

I remember fixing something like this, but I’m not sure… This causes
incorrect rendering when leaving out the dst_rect, or worse?

Anyway, I’ll look into it.

//David Olofson - Programmer, Composer, Open Source Advocate

.------- http://olofson.net - Games, SDL examples -------.
| http://zeespace.net - 2.5D rendering engine |
| http://audiality.org - Music/audio engine |
| http://eel.olofson.net - Real time scripting |
'-- http://www.reologica.se - Rheology instrumentation --'

On Monday 16 October 2006 11:30, James Gregory wrote:

[…later…]

I’m probably too tired for this now, but AFAICT, the destination
rectangle w and h fields are never used, as they’re passed on to an
SDL_BlitSurface() call, which just ignores them and then overwrites
them with the clipped results.

However, glSDL_BlitFromGL() doesn’t do any clipping of the source rect
to the screen, so Bad Things™ may well happen if you try to blit
from a completely or partially off-screen area. Oh, and it doesn’t
support hardware scaled displays either.
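
(For reference, clamping a source rect against the screen would look
something like this; an illustrative helper, not actual glSDL code:)

// Clip r against the screen; returns 0 if nothing is left.
static int clip_rect_to_screen(SDL_Rect *r, SDL_Surface *screen)
{
	int x2 = r->x + r->w;
	int y2 = r->y + r->h;
	if (r->x < 0)
		r->x = 0;
	if (r->y < 0)
		r->y = 0;
	if (x2 > screen->w)
		x2 = screen->w;
	if (y2 > screen->h)
		y2 = screen->h;
	if (x2 <= r->x || y2 <= r->y)
		return 0;	// completely off-screen
	r->w = x2 - r->x;
	r->h = y2 - r->y;
	return 1;
}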

Basically, you’re not really supposed to blit from the screen when
using OpenGL. :-)

Even so, “bad” coordinates shouldn’t cause crashes, and it would be
pretty nice if you could make screenshots of hardware scaled
displays.

//David Olofson - Programmer, Composer, Open Source Advocate

.------- http://olofson.net - Games, SDL examples -------.
| http://zeespace.net - 2.5D rendering engine |
| http://audiality.org - Music/audio engine |
| http://eel.olofson.net - Real time scripting |
'-- http://www.reologica.se - Rheology instrumentation --'

On Monday 16 October 2006 15:45, David Olofson wrote:

On Monday 16 October 2006 11:30, James Gregory wrote:
[…]

Also, in glSDL_BlitFromGL, if dst_rect is 0, dst_rect doesn’t get
given the width and height of the source rect automatically.

I remember fixing something like this, but I’m not sure… This
causes incorrect rendering when leaving out the dst_rect, or worse?

Anyway, I’ll look into it.

Most or all leaks seem to be directly related to X in one way or
another.

One fix:
http://bugzilla.libsdl.org/show_bug.cgi?id=345

Most of the rest is from XOpenIM(), which looks to be an x.org/XFree86
bug, or something under dlopen(), which is a glibc thing.

--ryan.