[ Lots of rambling here, sorry. I wrote and debugged at the same time. ]
[ Skip to the end for a possible bug description. ]
In loki.open-source.sdl, you wrote:
> > Okay, turns out that wasn’t the problem in the end; the problem was
> > that the destination surface now has SDL_SRCALPHA set where
> > previously it didn’t. I’m not sure I understand the reasoning, but
> > hey, problem solved.
> oh no, don’t think you’ll get away that easily
> SDL_SRCALPHA should have no bearing on the result when set on the
> destination surface, or there is a bug in SDL. Did you try my patch,
> or using DisplayFormatAlpha?
Actually, no, because I cheated to start with and didn’t tell the whole story.
The texture in question is a cockpit for a flight sim game I’m working
on (yeah, I know, I know, everyone has a flight sim now, but it’s fun
and at least I’m not trying to hype it as something usable).
The textures (for backwards compatibility with an old flight sim engine)
are palettized PCX files, where color index 0 is the color key (i.e.,
transparent).
The cockpit init loads the PCX files as I specified in my original
code, using the SDL_BlitSurface alpha-fill/copy hack to set color 0
pixels to alpha=0. After this it is split into sub-images with
power-of-two sizes for loading into OpenGL textures. It was at this
stage that the problem occurred: the original load ended up with images
with SDL_SRCALPHA set, so the blits to make the sub-tiles somehow
weren’t copying anything.
A code fragment that would show the problem is (with suitable gdb dumps
of surface formats and comments):
#if SDL_BYTEORDER == SDL_LIL_ENDIAN
static const Uint32 rmask = 0x000000FF;
static const Uint32 bmask = 0x0000FF00;
static const Uint32 gmask = 0x00FF0000;
static const Uint32 amask = 0xFF000000;
#else
static const Uint32 rmask = 0xFF000000;
static const Uint32 bmask = 0x00FF0000;
static const Uint32 gmask = 0x0000FF00;
static const Uint32 amask = 0x000000FF;
#endif
SDL_Surface *orig, *rgba, *tile;
SDL_Rect src;
orig = IMG_Load("FILENAME.PCX");
/*****************************************
(gdb) print *orig
$3 = {flags = 0x0, format = 0x9a678d8, w = 640, h = 480, pitch = 640,
pixels = 0x49c17008, offset = 0, hwdata = 0x0,
clip_rect = {x = 0, y = 0, w = 640, h = 480}, unused1 = 0xffffffff,
locked = 0, map = 0x8ed99b8, format_version = 0, refcount = 1}
(gdb) print *orig->format
$4 = {palette = 0x8ecd878, BitsPerPixel = 8 '\b',
BytesPerPixel = 1 '\001', Rloss = 8 '\b', Gloss = 8 '\b',
Bloss = 8 '\b', Aloss = 8 '\b', Rshift = 0 '\000', Gshift = 0 '\000',
Bshift = 0 '\000', Ashift = 0 '\000', Rmask = 0, Gmask = 0, Bmask = 0,
Amask = 0, colorkey = 0, alpha = 255 '\377'}
******************************************/
rgba = SDL_CreateRGBSurface(SDL_SWSURFACE, orig->w, orig->h, 32,
rmask, bmask, gmask, amask);
/* Palettized images have color index 0 transparent. We set the
* color key (transparent color) here, and fill the alpha channel
* in the dest with 0. When we do the blit the color key pixels
* aren't copied and retain the 0 alpha value. */
SDL_SetColorKey(orig, SDL_SRCCOLORKEY, 0);
SDL_FillRect(rgba, NULL, SDL_MapRGBA(rgba->format, 0, 0, 0, 0));
SDL_BlitSurface(orig, NULL, rgba, NULL);
/*****************************************
(gdb) print *rgba
$5 = {flags = 0x10000, format = 0x9a67d50, w = 640, h = 480,
pitch = 2560, pixels = 0x4a613008, offset = 0, hwdata = 0x0,
clip_rect = {x = 0, y = 0, w = 640, h = 480}, unused1 = 0xffffffff,
locked = 0, map = 0x8ed99e0, format_version = 0, refcount = 1}
(gdb) print *rgba->format
$6 = {palette = 0x0, BitsPerPixel = 32 ' ', BytesPerPixel = 4 '\004',
Rloss = 0 '\000', Gloss = 0 '\000', Bloss = 0 '\000', Aloss = 0 '\000',
Rshift = 0 '\000', Gshift = 8 '\b', Bshift = 16 '\020',
Ashift = 24 '\030', Rmask = 0xff, Gmask = 0xff00, Bmask = 0xff0000,
Amask = 0xff000000, colorkey = 0, alpha = 255 '\377'}
******************************************/
tile = SDL_CreateRGBSurface(SDL_SWSURFACE, 256, 256,
32, rmask, bmask, gmask, amask);
src.x = 0; src.y = 0; src.w = 256; src.h = 256;
SDL_BlitSurface(rgba, &src, tile, NULL);
After this final blit the “tile” surface is either all black or all
alpha=0 (I’m not 100% sure which). The SDL_SetAlpha doc says that in
this case:
The source is alpha-blended with the destination using the source
alpha channel. The alpha channel in the destination surface is
left untouched.
So I assume the alpha=0 of every destination pixel remains after the
blit (I’m having trouble imagining a scenario where this would be
helpful, but I’m probably not thinking hard enough).
If I add:
rgba->flags &= ~SDL_SRCALPHA;
just before the final blit, all is fine.
Phew, that was a long message to satisfy curiosity. I guess the
difference I’m noticing is that SDL_SRCALPHA is ON by default now when
using SDL_CreateRGBSurface, whereas previously it was OFF unless I
specified it in the flags argument. Does that sound right?
These are the RPMs of SDL 1.2. I haven’t tried the CVS version recently.
For reference, here is a small test-case that illustrates the “problem”:
#include <stdio.h>
#include <SDL/SDL.h>

static const Uint32 rm = 0x000000FF; static const Uint32 gm = 0x0000FF00;
static const Uint32 bm = 0x00FF0000; static const Uint32 am = 0xFF000000;

int
main(int argc, const char *argv[])
{
    SDL_Surface *s;

    SDL_Init(SDL_INIT_VIDEO);
    s = SDL_CreateRGBSurface(SDL_SWSURFACE, 100, 100, 32, rm, gm, bm, am);
    fprintf(stdout, "Surface flags were: %x\n", s->flags);
    SDL_FreeSurface(s);
    SDL_Quit();
    return 0;
}
I would have expected this to show 0x0, but it shows 0x10000 (SDL_SRCALPHA).
- Mike