SDL_DisplayFormatAlpha() and 16-bit surfaces

Hi everybody,
Working off some help I received on this list earlier, I’ve been trying
to read a PNG with an alpha channel, then read the pixels off that
surface and put them onto a 16-bit surface. The code below works
correctly with 32-bit surfaces (thanks to the help I received here
earlier!), but I’d like the “final” image, the surface pointer
“backbuffer”, to be a 16-bit image; unfortunately,
SDL_DisplayFormatAlpha() only produces a 32-bit surface.
So, if I change the SDL_CreateRGBSurface() call below from 32 bits per
pixel to 16, the backbuffer image (when blitted) simply doesn’t appear
at all. How can I get that ship (the sample.png image, posted at
chris.luethy.net/sample.png) copied onto a 16bpp surface instead of a
32bpp one?


#include <stdio.h>
#include <stdlib.h>

#include "SDL.h"
#include "SDL_image.h"

inline Uint32 raw_getpixel(SDL_Surface *surface, int x, int y) {
	return *(Uint32 *)(((Uint8 *)surface->pixels) + (y * surface->pitch) + (x * surface->format->BytesPerPixel));
}

inline void raw_putpixel(SDL_Surface *surface, int x, int y, Uint32 pixel) {
	*(Uint32 *)(((Uint8 *)surface->pixels) + (y * surface->pitch) + (x * surface->format->BytesPerPixel)) = pixel;
}

int main(int argc, char *argv[])
{
int i, j;
SDL_Surface *screen, *temp, *sample, *backbuffer;
SDL_Rect src, dest;

if (SDL_Init(SDL_INIT_VIDEO) != 0)
{
	fprintf(stderr, "Could not init SDL: %s\n", SDL_GetError());
	return (-1);
}

atexit(SDL_Quit);

screen = SDL_SetVideoMode(320, 200, 16, 0);
if (screen == NULL)
{
	fprintf(stderr, "Could not set video mode.\n");
	return (-1);
}

printf("[SCREEN    ] w=%d, h=%d, bpp=%d\n", screen->w, screen->h, screen->format->BitsPerPixel);

temp = IMG_Load("sample.png");
if (temp == NULL)
{
	fprintf(stderr, "Could not load sample.png\n");
	return (-1);
}

printf("[TEMP      ] w=%d, h=%d, bpp=%d\n", temp->w, temp->h, temp->format->BitsPerPixel);

sample = SDL_DisplayFormatAlpha(temp); 
SDL_FreeSurface(temp);

printf("[SAMPLE    ] w=%d, h=%d, bpp=%d\n", sample->w, sample->h, sample->format->BitsPerPixel);

backbuffer = SDL_CreateRGBSurface(SDL_SWSURFACE, sample->w, sample->h,
								  32,
								  sample->format->Rmask, sample->format->Gmask,
								  sample->format->Bmask, sample->format->Amask);

printf("[BACKBUFFER] w=%d, h=%d, bpp=%d\n", backbuffer->w, backbuffer->h, backbuffer->format->BitsPerPixel);

SDL_LockSurface(sample);
SDL_LockSurface(backbuffer);
for (j = 0; j < sample->h; j++)
{
	for (i = 0; i < sample->w; i++)
	{
		Uint32 color;
		color = raw_getpixel(sample, i, j);
		raw_putpixel(backbuffer, i, j, color);
	}
}
SDL_UnlockSurface(backbuffer);
SDL_UnlockSurface(sample);

dest.x = 0;
dest.y = 0;
dest.w = 320;
dest.h = 200;
SDL_FillRect(screen, &dest, SDL_MapRGB(screen->format, 0, 0, 255));

src.x = 0;
src.y = 0;
src.w = backbuffer->w;
src.h = backbuffer->h;
dest = src;
SDL_BlitSurface(backbuffer, &src, screen, &dest);

SDL_Flip(screen);

SDL_Delay(3000);

SDL_FreeSurface(sample);
SDL_FreeSurface(backbuffer);

return (0);

}
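One likely culprit is that raw_getpixel() and raw_putpixel() above always read and write a full Uint32, which is only right when BytesPerPixel is 4; on a 16bpp backbuffer each pixel is 2 bytes, so the fixed 4-byte access reads and stomps on the neighbouring pixel. A minimal sketch of a depth-aware read, in plain C with stand-in types so it can be tried without SDL (FakeSurface, getpixel_bpp_aware and demo_read are illustrative names, not SDL API):

```c
#include <stdint.h>

typedef uint8_t  Uint8;
typedef uint16_t Uint16;
typedef uint32_t Uint32;

/* Stand-in for the few SDL_Surface fields the accessor touches. */
struct FakeSurface {
    void *pixels;
    int   pitch;            /* bytes per row */
    int   bytes_per_pixel;  /* SDL: surface->format->BytesPerPixel */
};

/* Read one pixel, sized by the surface depth instead of a fixed Uint32. */
Uint32 getpixel_bpp_aware(struct FakeSurface *s, int x, int y)
{
    Uint8 *p = (Uint8 *)s->pixels + y * s->pitch + x * s->bytes_per_pixel;
    switch (s->bytes_per_pixel) {
    case 2:  return *(Uint16 *)p;  /* 16bpp: a pixel is 2 bytes */
    case 4:  return *(Uint32 *)p;  /* 32bpp: a pixel is 4 bytes */
    default: return 0;             /* other depths not handled in this sketch */
    }
}

/* Tiny self-check: a 2x2 16bpp buffer (pitch 4 bytes = 2 pixels per row). */
static Uint16 demo_buf[4] = { 0x1234, 0xABCD, 0x0F0F, 0xF0F0 };

Uint32 demo_read(int x, int y)
{
    struct FakeSurface s = { demo_buf, 4, 2 };
    return getpixel_bpp_aware(&s, x, y);
}
```

A putpixel would mirror this with a sized store. Note that copying raw values between surfaces of different formats still needs a channel conversion on top; this only makes the memory access the right width.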

– chris (@Christopher_Thielen)

I’d like the “final” image, the surface pointer “backbuffer” to be a
16-bit image, unfortunately, SDL_DisplayFormatAlpha() only makes a
32-bit surface …

It sounds like you want to just do a blit from the final image to the back
buffer. It should blend correctly with the 16 bit surface. Make sure you’re
using the latest code in CVS, since I fixed SDL_DisplayFormatAlpha() last week.
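For what it's worth, the arithmetic such a blended blit performs per pixel can be sketched in plain C (no SDL here; I'm assuming an ARGB8888-style source over an RGB565 destination, the usual 32-bit/16-bit pairing, and blend_over_565 is an illustrative helper, not an SDL function):

```c
#include <stdint.h>

/* Blend one 8-bit-per-channel source pixel (with alpha a) over an RGB565
 * destination pixel and return the new RGB565 value. */
uint16_t blend_over_565(uint8_t sr, uint8_t sg, uint8_t sb, uint8_t a,
                        uint16_t dst)
{
    /* Unpack the RGB565 destination back to 8-bit channels (top bits). */
    uint8_t dr = (uint8_t)((dst >> 11) << 3);
    uint8_t dg = (uint8_t)(((dst >> 5) & 0x3F) << 2);
    uint8_t db = (uint8_t)((dst & 0x1F) << 3);

    /* Standard "src over dst": out = (src*a + dst*(255-a)) / 255 */
    uint8_t r = (uint8_t)((sr * a + dr * (255 - a)) / 255);
    uint8_t g = (uint8_t)((sg * a + dg * (255 - a)) / 255);
    uint8_t b = (uint8_t)((sb * a + db * (255 - a)) / 255);

    /* Repack to RGB565, dropping the low bits. */
    return (uint16_t)(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
}
```

With a = 255 this reduces to a plain pack of the source colour, and with a = 0 the destination comes back untouched.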

See ya,
-Sam Lantinga, Software Engineer, Blizzard Entertainment

I grabbed CVS today (8-5-02) and checked out SDL_video.c, but line 920
is still:

format = SDL_AllocFormat(32, rmask, gmask, bmask, amask);

and this then becomes the format of the surface that is returned by
SDL_DisplayFormatAlpha(). Is there any way to get
SDL_DisplayFormatAlpha() to return a 16bpp image, or are all images with
alpha 32-bit images?

– chris (@Christopher_Thielen)

On Mon, 2002-08-05 at 14:59, Sam Lantinga wrote:

It sounds like you want to just do a blit from the final image to the back
buffer. It should blend correctly with the 16 bit surface. Make sure you’re
using the latest code in CVS, since I fixed SDL_DisplayFormatAlpha() last week.

Chris Thielen wrote:

and this then becomes the format of the surface that is returned by
SDL_DisplayFormatAlpha(). Is there any way to get
SDL_DisplayFormatAlpha() to return a 16bpp image, or are all images
with alpha 32-bit images?

I’ve never heard of any format with less than 32 bits having more than 1
bit for alpha, and even that 1 bit was an obscure thing on some 16-bit
modes. After all, if you want alpha as well as the colour, there has to
be somewhere to store it. So I expect the 32-bit restriction is the only
practical option if you want 8 bits of alpha and any chance of
acceleration on most targets.

–
Kylotan
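The bit budget Kylotan describes can be made concrete in a few lines of C (pack_argb1555 and alpha_bits_left are illustrative helpers, not SDL API): 16 bits split 5+6+5 leaves nothing for alpha, 1+5+5+5 leaves exactly the one on/off bit, and 8 bits of alpha next to 8+8+8 colour forces 32bpp.

```c
#include <stdint.h>

/* Pack a 1-bit alpha, 5-bit-per-channel colour into 16 bits (ARGB1555). */
uint16_t pack_argb1555(int a, int r, int g, int b)  /* a: 0/1; r,g,b: 0..31 */
{
    return (uint16_t)((a << 15) | (r << 10) | (g << 5) | b);
}

/* How many bits remain for alpha once the colour channels are paid for. */
int alpha_bits_left(int bits_total, int r_bits, int g_bits, int b_bits)
{
    return bits_total - (r_bits + g_bits + b_bits);
}

/* alpha_bits_left(16, 5, 6, 5) -> 0: RGB565 has no room for alpha.
 * alpha_bits_left(16, 5, 5, 5) -> 1: the obscure on/off bit (ARGB1555).
 * alpha_bits_left(32, 8, 8, 8) -> 8: why 8-bit alpha means 32bpp. */
```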

Chris Thielen wrote:

and this then becomes the format of the surface that is returned by
SDL_DisplayFormatAlpha(). Is there any way to get
SDL_DisplayFormatAlpha() to return a 16bpp image, or are all images
with alpha 32-bit images?

I’ve never heard of any format with less than 32 bits have more than 1
bit for alpha, and even that 1 bit was an obscure thing on some 16-bit
modes. After all, if you want alpha as well as the colour, there has to
be somewhere to store it. So I expect the 32 bit restriction is the only
practical option if you want 8 bits of alpha, and for there to be any
chance of acceleration on most targets.

It would be nice if there were some way of taking a surface with
colour-key alpha, and transforming it to a surface in the display format
with colour-key alpha, rather than a surface in the display format with
8-bits of alpha which are either 0 or 255 for any given pixel - I’m sure
colour-key alpha, especially with RLE encoding, would be faster than
doing proper alpha-blending. If you’re blitting to a 16-bit surface, I’m
sure doing the 32->16 bit conversion is wasteful, too.
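The collapse described here - 0-or-255 alpha down to a colour key - is a one-pass rewrite of the pixels. A plain-C sketch over a raw ARGB8888 buffer (alpha_to_colourkey is a hypothetical helper, not an SDL call; as far as I know SDL has no built-in converter for this direction):

```c
#include <stdint.h>
#include <stddef.h>

/* Collapse an all-or-nothing alpha channel into a colour key: pixels
 * below the threshold become the key colour, opaque pixels keep their
 * RGB and drop the alpha byte. */
void alpha_to_colourkey(uint32_t *argb, size_t n, uint32_t key_rgb)
{
    for (size_t i = 0; i < n; i++) {
        uint8_t a = (uint8_t)(argb[i] >> 24);
        if (a < 128)
            argb[i] = key_rgb;         /* transparent -> key colour */
        else
            argb[i] &= 0x00FFFFFF;     /* opaque -> plain RGB */
    }
}
```

After this pass the buffer is plain RGB and can be handed to an RLE-accelerated colour-key blit; the key colour must of course not occur among the opaque pixels.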

Screwtape.

On Tuesday, August 6, 2002, at 02:20 , Kylotan wrote:

Chris Thielen wrote:

and this then becomes the format of the surface that is returned by
SDL_DisplayFormatAlpha(). Is there any way to get
SDL_DisplayFormatAlpha() to return a 16bpp image, or are all images
with alpha 32-bit images?

I’ve never heard of any format with less than 32 bits have more than 1
bit for alpha, and even that 1 bit was an obscure thing on some 16-bit
modes. After all, if you want alpha as well as the colour, there has to
be somewhere to store it. So I expect the 32 bit restriction is the only
practical option if you want 8 bits of alpha, and for there to be any
chance of acceleration on most targets.

It would be nice if there were some way of taking a surface with
colour-key alpha, and transforming it to a surface in the display format
with colour-key alpha, rather than a surface in the display format with
8-bits of alpha which are either 0 or 255 for any given pixel - I’m sure
colour-key alpha, especially with RLE encoding, would be faster than
doing proper alpha-blending. If you’re blitting to a 16-bit surface, I’m
sure doing the 32->16 bit conversion is wasteful, too.

Yea, that’s why I’m wondering this, I’d like to avoid 32->16 bit
conversions I’m doing right now. It’s not too big a deal because the
majority of the screen isn’t updated every frame, and not all the
sprites need alpha, so not all are 32-bit, but it’s still a waste to
convert it from 32 to 16 on every blit, I agree.
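The usual way out of the per-blit cost is to pay for the conversion once at load time and cache the 16-bit copy. A plain-C sketch of the one-time RGB888 -> RGB565 pass (rgb888_to_565 and convert_once are illustrative names; in SDL 1.2 terms this is roughly the job SDL_DisplayFormat() does for you):

```c
#include <stdint.h>
#include <stddef.h>

/* Pack one 0x00RRGGBB pixel into RGB565, dropping the low bits. */
uint16_t rgb888_to_565(uint32_t p)
{
    uint8_t r = (uint8_t)(p >> 16), g = (uint8_t)(p >> 8), b = (uint8_t)p;
    return (uint16_t)(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
}

/* Convert a whole sprite buffer up front; the per-frame blit then just
 * copies 16-bit pixels with no format work at all. */
void convert_once(const uint32_t *src, uint16_t *dst, size_t n)
{
    for (size_t i = 0; i < n; i++)
        dst[i] = rgb888_to_565(src[i]);
}
```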

– chris (@Christopher_Thielen)

On Mon, 2002-08-05 at 22:49, Tim Allen wrote:

Chris Thielen wrote:

Yea, that’s why I’m wondering this, I’d like to avoid 32->16 bit
conversions I’m doing right now. It’s not too big a deal because the
majority of the screen isn’t updated every frame, and not all the
sprites need alpha, so not all are 32-bit, but it’s still a waste to
convert it from 32 to 16 on every blit, I agree.

– chris (chris at luethy.net)

Are you concerned about this because the screen surface is 16-bit?
But the screen surface doesn’t have alpha, does it?
If it doesn’t then 16-bit RGBA -> 16-bit RGB is just as costly in
conversion.


Are you concerned about this because the screen surface is 16-bit?
But the screen surface doesn’t have alpha, does it?
If it doesn’t then 16-bit RGBA -> 16-bit RGB is just as costly in
conversion.

But a surface with colour-key alpha is stored as RGB, not RGBA. So if
you take a surface with colour-key alpha and run it through
DisplayFormatAlpha, then you have the RGBA -> RGB conversion as well as
the 32-bit -> 16-bit conversion.

All I really want is 16-bit RGB -> 16-bit RGB (ignoring pixels of a
certain colour).

On Tuesday, August 6, 2002, at 05:21 , Adam Gates wrote:

Chris Thielen wrote:

Yea, that’s why I’m wondering this, I’d like to avoid 32->16 bit
conversions I’m doing right now. It’s not too big a deal because the
majority of the screen isn’t updated every frame, and not all the
sprites need alpha, so not all are 32-bit, but it’s still a waste to
convert it from 32 to 16 on every blit, I agree.

– chris (chris at luethy.net)

Even if there were a 16-bit format WITH alpha, it couldn’t BY DEFINITION
be the same format as any 15/16-bit display format… What you could do
instead is make a custom display format that is (let’s say) 21-bit… the
lower 16 bits would be in the native 16-bit format (so that you would
not need to do pixel/channel conversion), and the upper 5 bits would be
for alpha. The problem is, however, that since you ARE using alpha, you
need to mix each pixel with the one underneath it, so the conversion is
almost implicit…

ON THE OTHER HAND, I believe that SDL’s RLE-compressed 15/16-bit
surfaces DO support alpha (from what I saw looking at the code)… The
problem is that I’m not sure how to GET alpha data into the 16-bit
surface before RLEing it…

Anyone care to explain using alpha with 16-bit RLE surfaces in SDL?

Best wishes to all,

-Loren
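One reading of Loren's "21-bit" idea, as a hypothetical pixel layout (not an SDL format; pack_565a5, native_part and alpha_part are invented names): keep the display's native RGB565 value in the low 16 bits of a 32-bit word and stash a 5-bit alpha above it, so the colour half never needs channel conversion.

```c
#include <stdint.h>

/* Store an RGB565 pixel plus a 5-bit alpha in one 32-bit word. */
uint32_t pack_565a5(uint16_t rgb565, uint8_t a5)   /* a5 in 0..31 */
{
    return ((uint32_t)(a5 & 0x1F) << 16) | rgb565;
}

/* The low half is already in display format - no conversion needed. */
uint16_t native_part(uint32_t p) { return (uint16_t)p; }

/* The 5-bit alpha lives above the colour bits. */
uint8_t  alpha_part(uint32_t p)  { return (uint8_t)((p >> 16) & 0x1F); }
```

When alpha_part() is 31 a blitter could copy native_part() straight through; as Loren notes, any other alpha value still forces a blend with the pixel underneath, so the unpack/blend/repack work comes back anyway.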



Tim Allen wrote:

All I really want is 16-bit RGB -> 16-bit RGB (ignoring pixels of a
certain colour).

So what’s the problem? Just set the colourkey and leave
SDL_DisplayFormatAlpha alone - that’s what SDL_DisplayFormat is there
for.

–
Kylotan

Tim Allen wrote:

All I really want is 16-bit RGB -> 16-bit RGB (ignoring pixels of a
certain colour).

So what’s the problem? Just set the colourkey and leave
SDL_DisplayFormatAlpha alone - that’s what SDL_DisplayFormat is there
for.

Aha, but LoadBMP will not create a 16-bit surface, and once you convert
a surface to 16-bit, it’s not at all guaranteed that your colour-key
will still exist as it’s supposed to.

On Wednesday, August 7, 2002, at 04:04 , Kylotan wrote:

Tim Allen wrote:

Aha, but LoadBMP will not create a 16-bit surface, and once you
convert a surface to 16-bit, it’s not at all guaranteed that your
colour-key will still exist as it’s supposed to.

It’s never been a problem for me. This works fine in my experience:

graphic = SDL_LoadBMP("whatever.bmp");
Uint32 colourKey = SDL_MapRGB(graphic->format, 0, 255, 255);
SDL_SetColorKey(graphic, SDL_SRCCOLORKEY | SDL_RLEACCEL, colourKey);
SDL_Surface* oldGraphic = graphic;
graphic = SDL_DisplayFormat(graphic);
SDL_FreeSurface(oldGraphic);

That has never failed for me, and I always run in 16-bit mode.

–
Kylotan
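The quantization behind Tim's worry is easy to demonstrate: converting to RGB565 drops the low 3/2/3 bits of each channel, so visually distinct 24-bit colours can collide after conversion, and a pixel near the key colour could accidentally become transparent (to565 is an illustrative helper, not SDL API).

```c
#include <stdint.h>

/* The lossy part of a 24-bit -> 16-bit conversion: truncate each
 * channel to 5/6/5 bits and pack. */
uint16_t to565(uint8_t r, uint8_t g, uint8_t b)
{
    return (uint16_t)(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
}

/* The key (0, 255, 255) and the different colour (7, 252, 249) land on
 * the same 16-bit value, 0x07FF - after a naive conversion the latter
 * would match the key and vanish. */
```

The practical safeguard is exactly Kylotan's pattern above: set the key on the source surface before calling SDL_DisplayFormat(), so SDL converts the key along with the pixels and the two stay consistent, and pick a key colour that never appears in the artwork.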