Hello,
I was wondering about the correct behavior of SDL_CreateRGBSurfaceFrom().
Here’s a code example:
Code:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "SDL.h"
#include "SDL_image.h"

SDL_Surface *dup(SDL_Surface *img)
{
    /* pitch is already the length of one row in bytes, so h * pitch
       covers the whole pixel buffer; no extra BytesPerPixel factor */
    size_t size = img->h * img->pitch;
    Uint8 *pixels;
    SDL_Surface *ret;

    if (SDL_MUSTLOCK(img))
        SDL_LockSurface(img);
    pixels = malloc(size);
    memcpy(pixels, img->pixels, size);
    if (SDL_MUSTLOCK(img))
        SDL_UnlockSurface(img);

    ret = SDL_CreateRGBSurfaceFrom(pixels, img->w, img->h,
                                   img->format->BytesPerPixel, img->pitch,
                                   img->format->Rmask, img->format->Gmask,
                                   img->format->Bmask, img->format->Amask);
    return ret;
}

int main(void)
{
    SDL_Surface *img1, *img2;

    SDL_Init(SDL_INIT_VIDEO);
    atexit(SDL_Quit);

    img1 = IMG_Load("test.png");
    img2 = dup(img1);

    printf("BytesPerPixel (img1): %d\n", img1->format->BytesPerPixel);
    printf("BytesPerPixel (img2): %d\n", img2->format->BytesPerPixel);
    return 0;
}
Running this program with a 24-bit test.png gives me:
BytesPerPixel (img1): 3
BytesPerPixel (img2): 1
So it seems I should copy the SDL_PixelFormat into the new surface to get it right, and also check for a palettized image, in which case the palette presumably needs to be copied as well. Is this right?
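For reference, here is a minimal sketch of the palette-copying step I have in mind (copy_palette is just a name I made up, and I am assuming SDL_SetPalette() is the right tool for this):
Code:
/* Hypothetical helper: copy the source palette onto the duplicate,
   if the source is palettized at all. */
static void copy_palette(SDL_Surface *dst, SDL_Surface *src)
{
    SDL_Palette *pal = src->format->palette;
    if (pal != NULL)
        SDL_SetPalette(dst, SDL_LOGPAL | SDL_PHYSPAL,
                       pal->colors, 0, pal->ncolors);
}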
And why doesn't this function create a full SDL_PixelFormat, while SDL_CreateRGBSurface does?
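In case it clarifies what I am after, the workaround I am considering is to duplicate the surface with SDL_ConvertSurface() instead, which seems to copy the full format, palette included, though I am not sure it is equivalent (dup_convert is just my own name for it):
Code:
/* Alternative duplication: converting a surface to its own format
   should yield a deep copy, format and palette and all (I think). */
SDL_Surface *dup_convert(SDL_Surface *img)
{
    return SDL_ConvertSurface(img, img->format, img->flags);
}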
I’m using SDL 1.2.13.
Thanks,
tano