New video backend & accelerated per pixel alpha problem

Hi,

Writing the glSDL video backend for SDL, we came across a problem. Since
Sam promised to help people in such cases, we are asking for help on
this list :slight_smile:

First, let’s explain the context a bit:
OpenGL can accelerate per-pixel alpha blits, so we are trying to use
this acceleration for SDL.
However, we found that surfaces with an alpha channel lose their alpha
channel during SDL_DisplayFormatAlpha.
We traced the problem and saw that SDL_DisplayFormatAlpha calls
SDL_ConvertSurface, which in turn calls SDL_CreateRGBSurface to create
the new surface. Inside SDL_CreateRGBSurface there is the following
code (annotated a bit):

        if ( screen && ((screen->flags&SDL_HWSURFACE) == SDL_HWSURFACE) ) {
                if ( (flags&(SDL_SRCCOLORKEY|SDL_SRCALPHA)) != 0 ) {
                        flags |= SDL_HWSURFACE;
                        /* [Note: at this point flags has SDL_HWSURFACE set,
                           because our screen is an SDL_HWSURFACE and our
                           surface uses alpha.] */
                }
                if ( (flags & SDL_SRCCOLORKEY) == SDL_SRCCOLORKEY ) {
                        if ( ! current_video->info.blit_hw_CC ) {
                                flags &= ~SDL_HWSURFACE;
                        }
                }
                if ( (flags & SDL_SRCALPHA) == SDL_SRCALPHA ) {
                        if ( ! current_video->info.blit_hw_A ) {
                                flags &= ~SDL_HWSURFACE;
                        }
                }
                /* [Note: here SDL_HWSURFACE is NOT removed, because the
                   glSDL backend supports hardware alpha blits,
                   i.e. blit_hw_A = 1.] */
        } else {
                flags &= ~SDL_HWSURFACE;
        }

        /* Allocate the surface */
        surface = (SDL_Surface *)malloc(sizeof(*surface));
        if ( surface == NULL ) {
                SDL_OutOfMemory();
                return(NULL);
        }
        surface->flags = SDL_SWSURFACE;
        if ( (flags & SDL_HWSURFACE) == SDL_HWSURFACE ) {
                /* [Note: this path is taken, since SDL_HWSURFACE is set
                   in flags.] */
                depth = screen->format->BitsPerPixel;
                Rmask = screen->format->Rmask;
                Gmask = screen->format->Gmask;
                Bmask = screen->format->Bmask;
                Amask = screen->format->Amask;
        }

Here is the interesting point: if the video backend can accelerate per-pixel
alpha surfaces, and the application calls SDL_DisplayFormatAlpha on such a
surface, the screen pixel format is used for the converted surface. If that
format doesn’t have an alpha channel (as with 15, 16 or 24 bpp modes), the
alpha channel disappears completely during SDL_DisplayFormatAlpha().
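
To make the symptom concrete, here is a minimal test sketch (error checking
omitted; the PNG path, the use of SDL_image’s IMG_Load and the 16 bpp mode are
only examples, any RGBA source surface on a non-alpha screen format shows the
same thing):

#include <stdio.h>
#include "SDL.h"
#include "SDL_image.h"   /* IMG_Load(), used here only as a convenient RGBA source */

int main(int argc, char *argv[])
{
        SDL_Surface *screen, *img, *converted;

        SDL_Init(SDL_INIT_VIDEO);
        /* 16 bpp video mode: the screen format has no alpha channel */
        screen = SDL_SetVideoMode(640, 480, 16, SDL_HWSURFACE);

        img = IMG_Load("sprite_with_alpha.png");   /* any surface with an alpha channel */
        converted = SDL_DisplayFormatAlpha(img);

        /* With a backend reporting blit_hw_A = 1, 'converted' ends up in the
           screen format, so Amask prints as 0 and the alpha channel is lost. */
        printf("Amask after conversion: 0x%08X\n",
               (unsigned int) converted->format->Amask);

        SDL_FreeSurface(converted);
        SDL_FreeSurface(img);
        SDL_Quit();
        return 0;
}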

So, the question: are we getting something wrong, or is a fix needed
somewhere?

In the latter case, we thought of two solutions:

  • video backends can optionally provide their own SDL_DisplayFormat
    functions, the same way they provide the other driver functions (a rough
    sketch follows this list)
  • video backends can optionally provide two different pixel formats to be
    used by SDL_DisplayFormat: one for surfaces with an alpha channel, and
    one for surfaces without an alpha channel
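
For the first option, here is a rough, self-contained sketch of what we have
in mind; the names below (BackendVtable, current_backend,
DisplayFormatAlpha_dispatch) are simplified stand-ins for illustration, not
the real SDL_VideoDevice declarations:

#include <stddef.h>
#include "SDL.h"

/* Simplified stand-in for the backend's function table; the real
   SDL_VideoDevice has many more members. */
typedef struct BackendVtable {
        /* Optional hook: NULL means "use the generic conversion path". */
        SDL_Surface *(*DisplayFormatAlpha)(SDL_Surface *surface);
} BackendVtable;

static BackendVtable *current_backend = NULL;

/* How SDL_DisplayFormatAlpha() could dispatch under this scheme. */
SDL_Surface *DisplayFormatAlpha_dispatch(SDL_Surface *surface)
{
        if ( current_backend && current_backend->DisplayFormatAlpha ) {
                /* The backend converts the surface itself and can keep
                   (and accelerate) the alpha channel. */
                return current_backend->DisplayFormatAlpha(surface);
        }
        /* No backend-specific conversion: fall back to the existing path. */
        return SDL_DisplayFormatAlpha(surface);
}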

Thanks in advance,

The not-so-secret backend team

Stephane Marchesin wrote:


Hi,

I have thought about this problem a little more, and I think the best
way to solve it would be to add two members to the SDL_VideoDevice
struct, like:

        SDL_PixelFormat *pixel_displayformat;
        SDL_PixelFormat *pixel_displayformatalpha;

Those two formats would describe the format that SDL_CreateRGBSurface
should use when an SDL_HWSURFACE is requested. Either one could be NULL,
in which case the video surface format would be used.
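
A minimal sketch of how the conversion path could consult those two members
follows; the VideoDeviceFormats stand-in struct, the helper name
GetDisplayFormat and the NULL fallback are just our assumption of how it
would behave:

#include <stddef.h>
#include "SDL.h"

/* Simplified stand-in for the proposed SDL_VideoDevice additions. */
typedef struct VideoDeviceFormats {
        SDL_PixelFormat *pixel_displayformat;       /* may be NULL */
        SDL_PixelFormat *pixel_displayformatalpha;  /* may be NULL */
} VideoDeviceFormats;

/* Hypothetical helper: pick the target format for SDL_DisplayFormat
   (want_alpha == 0) or SDL_DisplayFormatAlpha (want_alpha != 0). */
SDL_PixelFormat *GetDisplayFormat(const VideoDeviceFormats *dev,
                                  SDL_Surface *video_surface,
                                  int want_alpha)
{
        SDL_PixelFormat *format;

        format = want_alpha ? dev->pixel_displayformatalpha
                            : dev->pixel_displayformat;
        if ( format == NULL ) {
                /* Backend expressed no preference: keep today's behaviour
                   and use the video surface format. */
                format = video_surface->format;
        }
        return format;
}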

Any opinions/suggestions on this?

Stephane