Indexed image surface to OpenGL texture

How does one load a colour-indexed surface into an OpenGL texture?

I am running OpenGL in standard RGB mode but would like to be able
to load indexed images so that I can modulate the colour index and
produce different textures from the same image (think of the old
console games where they’d do the same to create various 'strengths'
of enemies).

I am using SDL_image and PNG files, for clarification.

thanks
a1

mal content wrote:

How does one load a colour-indexed surface into an OpenGL texture?

I am running OpenGL in standard RGB mode but would like to be able
to load indexed images so that I can modulate the colour index and
produce different textures from the same image (think of the old
console games where they’d do the same to create various 'strengths'
of enemies).

If you’re using GL_TEXTURE_2D, you can just feed the image to OpenGL and
it will automatically be converted to RGB[A]. Define the color table
using glPixelMapfv() and use GL_COLOR_INDEX as the format in glTexImage2D().
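For an 8-bit PNG loaded with SDL_image, the whole thing can look roughly like this (a sketch, not the code from the link; the function name and the alpha handling are made up):

    #include <SDL.h>
    #include <SDL_opengl.h>

    /* Upload an 8-bit indexed SDL_Surface as a GL_TEXTURE_2D, letting
     * OpenGL expand the indices through the GL_PIXEL_MAP_I_TO_* tables.
     * Assumes power-of-two dimensions. */
    static void uploadIndexedTexture(SDL_Surface *surf)
    {
        GLfloat r[256] = {0}, g[256] = {0}, b[256] = {0}, a[256] = {0};
        SDL_Palette *pal = surf->format->palette;
        int i;

        for (i = 0; i < pal->ncolors && i < 256; i++) {
            r[i] = pal->colors[i].r / 255.0f;
            g[i] = pal->colors[i].g / 255.0f;
            b[i] = pal->colors[i].b / 255.0f;
            a[i] = 1.0f;    /* opaque; derive from a colorkey if needed */
        }
        glPixelMapfv(GL_PIXEL_MAP_I_TO_R, 256, r);
        glPixelMapfv(GL_PIXEL_MAP_I_TO_G, 256, g);
        glPixelMapfv(GL_PIXEL_MAP_I_TO_B, 256, b);
        glPixelMapfv(GL_PIXEL_MAP_I_TO_A, 256, a);

        glPixelStorei(GL_UNPACK_ALIGNMENT, 1);             /* 1-byte pixels */
        glPixelStorei(GL_UNPACK_ROW_LENGTH, surf->pitch);  /* bytes == pixels for 8-bit */
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, surf->w, surf->h, 0,
                     GL_COLOR_INDEX, GL_UNSIGNED_BYTE, surf->pixels);
        glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);
    }

Re-uploading with a tweaked set of maps is then what gives you the differently coloured variants of the same image.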

If you’re using GL_TEXTURE_RECTANGLE_{NV|ARB|EXT}, that doesn’t work, so
you have to do the conversion from indexed to RGB[A] yourself.
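The do-it-yourself path would look something like this (again just a sketch with made-up names, "surf" assumed to be an 8-bit indexed SDL_Surface): expand each index through the palette on the CPU and upload the result as an ordinary RGBA texture.

    #include <stdlib.h>

    static void uploadIndexedAsRect(SDL_Surface *surf)
    {
        unsigned char *rgba = malloc(surf->w * surf->h * 4);
        Uint8 *src = (Uint8 *)surf->pixels;
        SDL_Color *colors = surf->format->palette->colors;
        int x, y;

        for (y = 0; y < surf->h; y++) {
            for (x = 0; x < surf->w; x++) {
                SDL_Color c = colors[src[y * surf->pitch + x]];  /* palette lookup */
                unsigned char *dst = rgba + (y * surf->w + x) * 4;
                dst[0] = c.r; dst[1] = c.g; dst[2] = c.b; dst[3] = 255;
            }
        }
        glTexImage2D(GL_TEXTURE_RECTANGLE_ARB, 0, GL_RGBA, surf->w, surf->h, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, rgba);
        free(rgba);
    }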

Have a look at
http://cvs.sourceforge.net/viewcvs.py/pipmak/pipmak/source/main.c?view=markup
for an example of how I do it (search for "GL_COLOR_INDEX" and for
"glPixelMapfv").

-Christian

Ah, I see. I’m not completely there but it’s starting to make sense.
I’m just having a bit of trouble getting a picture in my mind of the data
structures.

(I realise I’m veering off slightly into OpenGL specifics, so feel free to
stop reading at this point!).

My precise usage is like this: Each texture in my program is represented
by a structure that contains the filename, the width, height and OpenGL
texture id (this data is filled in upon loading a list of textures).
The structure
also contains a pointer to a colour map (which is non-NULL if the texture
was an indexed texture). The colour map is read upon texture loading
and OpenGL is instructed to use it. (No colour map modulation is done at any
point other than the initial texture loading.)
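Roughly, the structure looks like this (the names are just made up for illustration):

    typedef struct {
        char      *filename;
        int        w, h;
        GLuint     texId;      /* filled in when the texture is loaded     */
        SDL_Color *colourMap;  /* NULL for plain RGB[A] images, otherwise  */
                               /* the 256-entry palette read from the PNG  */
    } Texture;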

In your example, controlColorPalette is an array of colours. These are
somehow mapped to pixelMap[256]. This is the bit that I don’t really
understand. Could you elaborate a bit on how that works?

thanks for the info,
a1

mal content wrote:

In your example, controlColorPalette is an array of colours. These are
somehow mapped to pixelMap[256]. This is the bit that I don’t really
understand. Could you elaborate a bit on how that works?

The intermediate “pixelMap” array is only there to convert from the way
the colors are stored in controlColorPalette to the way glPixelMapfv()
expects. controlColorPalette is an array of 256 RGB triples (rgbrgbrgb),
while glPixelMapfv() wants one component at a time: an array of the 256 red
components in one call, an array of the 256 green components in the next, and
so on. I do it that way because controlColorPalette is also used in other
places. If glPixelMap() is the only place you use your color tables, just
store them in the format it expects from the beginning.
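In code, that shuffling amounts to something like this (a sketch, assuming controlColorPalette holds 256 packed 8-bit RGB triples):

    GLfloat pixelMap[256];
    int i;

    for (i = 0; i < 256; i++)            /* gather all 256 red components */
        pixelMap[i] = controlColorPalette[3*i + 0] / 255.0f;
    glPixelMapfv(GL_PIXEL_MAP_I_TO_R, 256, pixelMap);

    for (i = 0; i < 256; i++)            /* then all the green components */
        pixelMap[i] = controlColorPalette[3*i + 1] / 255.0f;
    glPixelMapfv(GL_PIXEL_MAP_I_TO_G, 256, pixelMap);

    for (i = 0; i < 256; i++)            /* then all the blue components  */
        pixelMap[i] = controlColorPalette[3*i + 2] / 255.0f;
    glPixelMapfv(GL_PIXEL_MAP_I_TO_B, 256, pixelMap);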

-Christian

Ah yes, it all makes sense now.

cheers!
a1