There seem to be several annoying issues (possibly
design oversights?) when using SDL with OpenGL. I’ve
been working on a general SDL/OpenGL image preparer
library trying to support my own projects, and these
are some of the things that have bitten me.
First, the image coordinate systems are inverted
relative to each other. In OpenGL, if you imagine your
0,0 coordinate to be in the lower left corner of your
monitor, increasing the y value moves up the screen,
whereas SDL_Surfaces go the other direction: 0,0 is at
the top left, and y increases downward. You can either
flip your SDL_Surface (swapping the rows of the
surface around is the most direct way), or you can
adopt the upside-down convention in your OpenGL code
and change your texture coordinates so you don't have
to do anything to the image itself. This might annoy
other OpenGL programmers if they ever have to go
through your code, though. If you flip the surface,
watch out if you try rotating the surface 180 degrees:
the horizontal coordinates were already correct, so
after the rotation your x values will run in the wrong
direction.
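The row swap above can be sketched in plain C. This is a minimal, hypothetical helper (not an SDL function) that flips a tightly packed pixel buffer in place; for an actual SDL_Surface you would pass `surface->pixels`, `surface->h`, and `surface->pitch`, and lock the surface first if it requires locking.

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical helper: flip a pixel buffer vertically, in place.
 * pitch is the number of bytes per row (SDL surfaces may pad rows,
 * which is why pitch rather than width * bytes-per-pixel is used). */
void flip_vertical(unsigned char *pixels, int height, int pitch)
{
    unsigned char *row = malloc(pitch);
    if (!row)
        return;
    for (int y = 0; y < height / 2; y++) {
        unsigned char *top    = pixels + y * pitch;
        unsigned char *bottom = pixels + (height - 1 - y) * pitch;
        memcpy(row, top, pitch);    /* save the top row        */
        memcpy(top, bottom, pitch); /* move the bottom row up  */
        memcpy(bottom, row, pitch); /* move the saved row down */
    }
    free(row);
}
```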
Second, image formats are not necessarily OpenGL-ready.
The easiest thing to do (and the most common) is to use
RGB or RGBA formats. However, SDL surfaces are not
always in this format. Without the SDL_image library,
SDL gives you only BMP support. Looking at just BMP for
a moment: typically, BMPs are either 8-bit or 24-bit
formats (no 32-bit). The 24-bit format is the easiest
to work with because each pixel is three bytes, one
each for red, green, and blue. However, there is a
catch: SDL doesn't dictate what order the color
channels are in.
So there is a bug in your code:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, m_width, m_height,
             0, GL_RGB, GL_UNSIGNED_BYTE, pBitmap->pixels);
A 24-bit BMP stores its pixels in BGR order, not RGB.
(I think Microsoft did this to be annoying, but I can't
prove it.) This means that everything that should be
red looks blue, and vice versa. Most likely your
channels were swapped, not actually missing.
So one possible solution is to change the "format"
parameter to GL_BGR:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, m_width, m_height,
             0, GL_BGR, GL_UNSIGNED_BYTE, pBitmap->pixels);
However, the GL_BGR format parameter wasn't added until
OpenGL 1.2. And ironically, Microsoft stopped their
OpenGL support at 1.1, so you will be dependent on your
video card driver making up the difference.
The other option is to swap the bytes into RGB order
yourself (or create a new SDL_Surface that stores them
in RGB order).
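The byte swap is simple in plain C. This is a hypothetical helper (not part of SDL), assuming a tightly packed 24-bit buffer with no row padding; with padding you would process row by row using the surface's pitch.

```c
/* Hypothetical helper: swap the first and third channel of every
 * pixel in a tightly packed 24-bit buffer, converting BGR to RGB
 * (the same swap converts RGB back to BGR). */
void swap_red_blue_24(unsigned char *pixels, int num_pixels)
{
    for (int i = 0; i < num_pixels; i++) {
        unsigned char tmp = pixels[i * 3];       /* byte 0 (blue in BGR) */
        pixels[i * 3]     = pixels[i * 3 + 2];   /* move red to byte 0   */
        pixels[i * 3 + 2] = tmp;                 /* move blue to byte 2  */
    }
}
```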
For 8-bit images, your life gets harder because the
image uses a palette. I think you will have to manually
decode each pixel to get the RGB values, and then place
them in a new 24-bit (or 32-bit) surface which you will
pass to OpenGL.
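The decoding step can be sketched like this. This is a hypothetical helper, with the palette represented as a plain array of r,g,b triples indexed by the pixel value; with SDL 1.2 you would read the actual entries from `surface->format->palette->colors` instead.

```c
/* Hypothetical helper: expand an 8-bit paletted image into a
 * tightly packed 24-bit RGB buffer. src holds one palette index
 * per pixel; dst must hold num_pixels * 3 bytes. */
void depalettize(const unsigned char *src, int num_pixels,
                 const unsigned char palette[][3], unsigned char *dst)
{
    for (int i = 0; i < num_pixels; i++) {
        const unsigned char *color = palette[src[i]];
        dst[i * 3]     = color[0]; /* red   */
        dst[i * 3 + 1] = color[1]; /* green */
        dst[i * 3 + 2] = color[2]; /* blue  */
    }
}
```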
If you start using other image formats via SDL_image,
then you need to worry about all kinds of different
ways the images are stored. For example, the JPEG you
tried was probably stored in RGB format, which is why
it worked for you. PNG also seems to be RGB, but TGA is
BGR. So it's a gamble. You should probably write your
code to handle any case if you want to use the
SDL_image library. SDL provides color masks and
SDL_GetRGB(A) to help you, but it's still kind of
annoying to go through, and it would have been nice if
SDL had designed their surface format with OpenGL in
mind.
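To show what the masks buy you, here is a sketch of extracting one channel from a packed pixel, roughly what SDL_GetRGB does internally with the Rmask, Gmask, and Bmask fields of an SDL_PixelFormat. The helper name is hypothetical, and it assumes 8 bits per channel (SDL also tracks shift and loss fields for smaller channels).

```c
#include <stdint.h>

/* Hypothetical helper: extract one 8-bit channel from a packed
 * pixel, given that channel's mask (e.g. 0x00FF0000 for red in a
 * typical little-endian 32-bit RGBA layout). */
unsigned char channel_from_mask(uint32_t pixel, uint32_t mask)
{
    if (mask == 0)
        return 0;
    /* Count the mask's trailing zero bits to find where the
     * channel starts within the pixel. */
    int shift = 0;
    while (!(mask & 1)) {
        mask >>= 1;
        shift++;
    }
    return (unsigned char)((pixel >> shift) & mask);
}
```

Run this once per channel per pixel and you can convert any mask layout into the plain RGB byte order OpenGL expects.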
Finally, there is an endianness issue. When you create
a new surface (say, to copy into for the correct RGB
order and vertical direction), you need to specify the
correct masks. The masks depend on the endianness of
your system. If you're not careful, this could also
cause your reds and blues to get mixed up, and it will
affect portability if you ever move to a system with a
different endianness (e.g. PC to Mac). So do something
like this:
#include "SDL_byteorder.h"

#if SDL_BYTEORDER == SDL_BIG_ENDIAN
glimage = SDL_CreateRGBSurface(SDL_SWSURFACE,
    sdlimage->w, sdlimage->h, 32,
    0xFF000000, 0x00FF0000, 0x0000FF00, 0x000000FF);
#else
glimage = SDL_CreateRGBSurface(SDL_SWSURFACE,
    sdlimage->w, sdlimage->h, 32,
    0x000000FF, 0x0000FF00, 0x00FF0000, 0xFF000000);
#endif
One more note: SDL_Surfaces will not help you with
"colorkey" transparency (i.e. your image format
specifies that a certain color represents
transparency). If you want to use colorkey
transparency, you will have to write this yourself.
You will need to go through every pixel (assuming the
SDL colorkey flag has been set) and check whether its
color matches the colorkey; then you set the alpha
value to 0 yourself in the buffer you hand to OpenGL.
Remember that for transparency in OpenGL, you should
probably be using GL_RGBA (not GL_RGB) and make sure
you have a 32-bit surface (unless you know how to use
GL_ALPHA or some of the other special formats). If you
don't want to go to all this trouble, then just use
32-bit image formats with actual transparency values
already set.
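The colorkey pass can be sketched as a 24-bit-to-32-bit expansion. This is a hypothetical helper on raw buffers; with SDL 1.2 the key color would come from the surface's format when the SDL_SRCCOLORKEY flag is set.

```c
/* Hypothetical helper: expand a tightly packed 24-bit RGB buffer
 * into a 32-bit RGBA buffer, writing alpha 0 wherever a pixel
 * matches the color key and alpha 255 everywhere else. dst must
 * hold num_pixels * 4 bytes. */
void colorkey_to_alpha(const unsigned char *rgb, int num_pixels,
                       unsigned char key_r, unsigned char key_g,
                       unsigned char key_b, unsigned char *rgba)
{
    for (int i = 0; i < num_pixels; i++) {
        unsigned char r = rgb[i * 3];
        unsigned char g = rgb[i * 3 + 1];
        unsigned char b = rgb[i * 3 + 2];
        int transparent = (r == key_r && g == key_g && b == key_b);
        rgba[i * 4]     = r;
        rgba[i * 4 + 1] = g;
        rgba[i * 4 + 2] = b;
        rgba[i * 4 + 3] = transparent ? 0 : 255;
    }
}
```

The resulting buffer is what you would then upload with GL_RGBA in the glTexImage2D call.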
-Eric