Hi everyone,
I’m having problems deciphering the pixel format values that SDL_Image is
returning with the surfaces it creates. I’m trying to supply the pixel data
in these surfaces to OpenGL for texturing, and while I can make this work
without much of a problem, I seem to keep having to choose the OpenGL
texturing “format” and “type” parameters by hand, rather than relying on the
surface’s pixel format to work it out automatically.
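For reference, my upload path currently looks roughly like this (trimmed; error handling and the 24bpp GL_RGB case omitted):

#include <SDL.h>
#include <SDL_image.h>
#include <SDL_opengl.h>

/* Roughly my current upload path. The format/type pair is picked by
 * hand per image rather than derived from surface->format - that's
 * the bit I'd like to automate. */
static GLuint upload_texture(const char *path)
{
    SDL_Surface *surface = IMG_Load(path);
    GLuint tex;

    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,
                 surface->w, surface->h, 0,
                 GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, /* chosen by hand */
                 surface->pixels);

    SDL_FreeSurface(surface);
    return tex;
}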
Here’s the pixel format information returned for two slightly different
images, both PNGs. The second image is identical to the first apart from an
alpha channel added in the GIMP. Sorry for the info pulled in from the docs -
I just put it in my surface inspection routine to remind me what I’m looking
at.
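For completeness, the inspection routine that produced the dumps below is essentially this (trimmed - the doc quotes are just string literals I pasted in):

#include <stdio.h>
#include <SDL.h>

/* Dump the fields quoted below, straight out of surface->format. */
static void dump_surface(const SDL_Surface *s)
{
    const SDL_PixelFormat *f = s->format;

    printf("SDL debug: Surface Flags...\n");
    if (s->flags & SDL_SRCALPHA)
        printf("SDL_SRCALPHA - Surface blit uses alpha blending\n");

    printf("SDL debug: Surface Pixel Format...\n");
    printf("Surface dimensions: width=%d pixels, height=%d pixels\n",
           s->w, s->h);
    printf("%s\n", f->palette ? "Palette present."
                              : "No palette information.");
    printf("BitsPerPixel=%d, BytesPerPixel=%d\n",
           f->BitsPerPixel, f->BytesPerPixel);
    printf("Rmask=0x%x, Gmask=0x%x, Bmask=0x%x, Amask=0x%x\n",
           (unsigned)f->Rmask, (unsigned)f->Gmask,
           (unsigned)f->Bmask, (unsigned)f->Amask);
    printf("Rshift=0x%x, Gshift=0x%x, Bshift=0x%x, Ashift=0x%x\n",
           f->Rshift, f->Gshift, f->Bshift, f->Ashift);
    printf("Rloss=0x%x, Gloss=0x%x, Bloss=0x%x, Aloss=0x%x\n",
           f->Rloss, f->Gloss, f->Bloss, f->Aloss);
    printf("Colourkey=%u, Alpha=%u\n", (unsigned)f->colorkey, f->alpha);
}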
graded background.png
SDL debug: Surface Flags…
SDL debug: Surface Pixel Format…
Surface dimensions: width=1024 pixels, height=1024 pixels
No palette information.
BitsPerPixel=24
The number of bits used to represent each pixel in a surface. Usually 8, 16,
24 or 32.
BytesPerPixel=3
The number of bytes used to represent each pixel in a surface. Usually one to
four.
Rmask=0xff, Gmask=0xff00, Bmask=0xff0000, Amask=0xff0000
Binary mask used to retrieve individual color values.
Rshift=0x0, Gshift=0x8, Bshift=0x10, Ashift=0x10
Binary left shift of each color component in the pixel value.
Rloss=0x0, Gloss=0x0, Bloss=0x0, Aloss=0x0
Precision loss of each color component (2^[RGBA]loss).
Colourkey=0 - Pixel value of transparent pixels.
Alpha=255 - Overall surface alpha value.
graded background alpha.png
SDL debug: Surface Flags…
SDL_SRCALPHA - Surface blit uses alpha blending
SDL debug: Surface Pixel Format…
Surface dimensions: width=1024 pixels, height=1024 pixels
No palette information.
BitsPerPixel=32
The number of bits used to represent each pixel in a surface. Usually 8, 16,
24 or 32.
BytesPerPixel=4
The number of bytes used to represent each pixel in a surface. Usually one to
four.
Rmask=0xff, Gmask=0xff00, Bmask=0xff0000, Amask=0xff0000
Binary mask used to retrieve individual color values.
Rshift=0x0, Gshift=0x8, Bshift=0x10, Ashift=0x10
Binary left shift of each color component in the pixel value.
Rloss=0x0, Gloss=0x0, Bloss=0x0, Aloss=0x0
Precision loss of each color component (2^[RGBA]loss).
Colourkey=0 - Pixel value of transparent pixels.
Alpha=255 - Overall surface alpha value.
Questions:
- Why does graded background.png have no surface flags? Similarly, why isn’t
the alpha’d graphic bestowed with a software or hardware surface flag?
- Looking at the surface flags in the header file, why is SDL_SWSURFACE zero?
How can I bitwise-AND against it to see whether it is set for a surface? (My
attempt is sketched just below.)
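This fragment lives inside my dump routine; the first test behaves as I’d expect, but the second can obviously never fire:

/* This works for flags that are real bits: */
if (surface->flags & SDL_HWSURFACE)
    printf("SDL_HWSURFACE - Surface is stored in video memory\n");

/* ...but SDL_SWSURFACE is defined as 0x00000000 in SDL_video.h, so
 * this test is always false. Is "software" supposed to be inferred
 * from the absence of SDL_HWSURFACE? */
if (surface->flags & SDL_SWSURFACE)
    printf("SDL_SWSURFACE - Surface is stored in system memory\n");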
This program is running on an x86, so I realise little-endianness applies
here. But:
- The masks in graded background.png seem OK for R, G, and B, given the
number of bytes per pixel, but why is the alpha mask the same as the blue
mask, and the alpha shift the same as the blue shift? Shouldn’t the alpha
mask be zero if the surface has no alpha information?
- The masks in graded background alpha.png are the same as for the
non-alpha’d version! How is that possible? Shouldn’t Amask be different from
Bmask, and Ashift from Bshift? What’s more, to get this surface to texture
correctly I have to pass it to OpenGL with GL_BGRA and
GL_UNSIGNED_INT_8_8_8_8_REV (which taken together amounts to ARGB). That
really doesn’t match up with the mask values, does it? (See the sketch after
this list for the selection logic I expected to be able to write.)
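Here’s the kind of mask-driven selection I was hoping to write - a sketch only, since with the masks reported above the GL_BGRA case would never be chosen:

/* Sketch: derive format/type from the masks. On little-endian x86,
 * Rmask == 0xff means red occupies the lowest byte of the pixel
 * value, i.e. the first byte in memory. */
const SDL_PixelFormat *f = surface->format;
GLenum format = 0;
GLenum type = GL_UNSIGNED_BYTE;

if (f->BytesPerPixel == 3) {
    format = (f->Rmask == 0x0000ff) ? GL_RGB : GL_BGR;
} else if (f->BytesPerPixel == 4) {
    if (f->Rmask == 0x000000ff && f->Amask == 0xff000000) {
        format = GL_RGBA;  /* memory order R,G,B,A */
    } else if (f->Bmask == 0x000000ff && f->Amask == 0xff000000) {
        /* 32-bit words laid out as A,R,G,B - the combination my
         * images actually need, but one the reported masks (Amask
         * equal to Bmask!) never describe. */
        format = GL_BGRA;
        type = GL_UNSIGNED_INT_8_8_8_8_REV;
    }
}

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, surface->w, surface->h, 0,
             format, type, surface->pixels);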
I’m getting lost here, guys - am I missing something, or is SDL_Image messing
with my head? Please help!
Thanks,
Giles