Mini code sample for SDL2 256-color palette

I am attempting to put together a minimal code sample that shows how to work with a palette. However, the window content remains black.

#include <SDL.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>

int main() {
    int w = 320;
    int h = 200;

    SDL_Init(SDL_INIT_VIDEO);
    SDL_Window *window = SDL_CreateWindow("Foo", SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, w, h, 0);
    SDL_Renderer *renderer = SDL_CreateRenderer(window, -1, 0);
    SDL_Surface *surface = SDL_CreateRGBSurface(SDL_SWSURFACE, w, h, 8, 0, 0, 0, 0);
    SDL_Texture *texture = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_INDEX8, SDL_TEXTUREACCESS_STREAMING, w, h);

    SDL_Color colors[2] = {{255, 0, 0, 255}, {0, 255, 0, 255}};
    SDL_SetPaletteColors(surface->format->palette, colors, 0, 2);
    while (true) {
        SDL_Event e;
        while (SDL_PollEvent(&e) > 0) {
            switch (e.type) {
            case SDL_QUIT:
                return EXIT_SUCCESS;
            }
        }
        // Fill the buffer with alternating palette indexes 0 and 1.
        uint8_t offscreen[w * h];
        for (uint8_t *p = offscreen; p != &offscreen[w * h]; p += 2) {
            p[0] = 0x00;
            p[1] = 0x01;
        }
        SDL_UpdateTexture(texture, NULL, offscreen, w);
        SDL_RenderCopy(renderer, texture, NULL, NULL);
        SDL_RenderPresent(renderer);
        SDL_Delay(100);
    }
}

Does the renderer support paletted textures?

Once you’ve created your renderer, do:

SDL_RendererInfo info = {0};
if (SDL_GetRendererInfo(renderer, &info) == 0) {
    bool supported = false;
    for (Uint32 i = 0; i < info.num_texture_formats; ++i) {
        if (info.texture_formats[i] == SDL_PIXELFORMAT_INDEX8) {
            supported = true;
            break;
        }
    }
    printf("Supported: %s\n", supported ? "YES" : "NO");
}

to find out.

Also, I don’t see where you’re setting this color palette in any way that affects the texture. You set it on a surface that has no connection to the texture, and you aren’t doing it until after the texture is created (so even if the underlying API and GPU support it, and even if the texture was created from the surface, it’s probably too late to change it).

metal -> Supported: NO
opengl -> Supported: NO
opengles2 -> Supported: NO
software -> Supported: NO

I must have misunderstood how SDL2 works. I assumed I would be able to create an array of bytes containing indexes, and SDL2 would take care of converting it to colors via a palette and then upload that to the GPU.

Can you confirm that I need to convert my indexed bitmap to a format the renderer supports, like RGBA, and then update the texture manually?

If you have an SDL_Surface that uses paletted color, SDL may be able to convert it to something the hardware supports when the texture is created (assuming you’ve already set the surface’s palette and you create the texture with SDL_CreateTextureFromSurface()).
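
For instance, a minimal sketch of that order of operations (set the palette first, then let SDL convert at texture-creation time; failure shows up as a NULL texture with details from SDL_GetError()):

SDL_SetPaletteColors(surface->format->palette, colors, 0, 2);
SDL_Texture *texture = SDL_CreateTextureFromSurface(renderer, surface);
if (texture == NULL) {
    printf("SDL_CreateTextureFromSurface failed: %s\n", SDL_GetError());
}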

If the texture creation itself is successful (check for this!), in your above code you can query the texture format you actually got with something like:

// after you create the texture
uint32_t format = SDL_PIXELFORMAT_UNKNOWN;
SDL_QueryTexture(texture, &format, NULL, NULL, NULL);
printf("Texture format: %s\n", SDL_GetPixelFormatName(format);

But basically paletted texture formats are a thing of the past as far as modern GPUs and APIs are concerned.

Also, on my machine (macOS 10.15), trying to create a streaming texture with pixel format SDL_PIXELFORMAT_INDEX8 fails.
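
A minimal sketch of how that failure shows up (SDL_CreateTexture returns NULL, and SDL_GetError() has the details):

SDL_Texture *t = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_INDEX8,
                                   SDL_TEXTUREACCESS_STREAMING, w, h);
if (t == NULL) {
    printf("SDL_CreateTexture failed: %s\n", SDL_GetError());
}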

You don’t have to use SDL_Render though.
If you do unaccelerated rendering, indexed surfaces are supported.
See https://wiki.libsdl.org/SDL_GetWindowSurface (it has a code example showing how to blit a surface into the window surface) and https://wiki.libsdl.org/SDL_Surface (which has a minimal example of updating a surface’s pixels).

Or you can still do the operations on a surface but create a new SDL_Texture from it each frame and render that (which would allow stretching the whole thing to a bigger window size, making the individual pixels more visible).

Awesome. Dropping the Renderer in favor of a Surface is working well. Here is the code sample for future reference:

#include <SDL.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>

int main() {
    int w = 320;
    int h = 200;

    SDL_Init(SDL_INIT_VIDEO);
    SDL_Window *window = SDL_CreateWindow("Foo", SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, w, h, 0);
    SDL_Surface *screen = SDL_GetWindowSurface(window);
    SDL_Surface *surface = SDL_CreateRGBSurface(SDL_SWSURFACE, w, h, 8, 0, 0, 0, 0);

    SDL_Color colors[2] = {{0, 0, 255, 255}, {255, 0, 0, 255}};
    SDL_SetPaletteColors(surface->format->palette, colors, 0, 2);
    while (true) {
        SDL_Event e;
        while (SDL_PollEvent(&e) > 0) {
            switch (e.type) {
            case SDL_QUIT:
                return EXIT_SUCCESS;
            }
        }
        // Write alternating palette indexes row by row, honoring the surface pitch.
        uint8_t *offscreen = (uint8_t *)surface->pixels;
        for (int i = 0; i < h; i++) {
            for (int j = 0; j < w / 2; j++) {
                offscreen[j * 2 + 0] = 0;
                offscreen[j * 2 + 1] = 1;
            }
            offscreen += surface->pitch;
        }
        SDL_BlitSurface(surface, NULL, screen, NULL);
        SDL_UpdateWindowSurface(window);
        SDL_Delay(100);
    }
}

Thank you so much for your help, guys! I am now going to improve the sample to use a Texture so it can be stretched.

Here is the code sample with a Texture. Notice that I only had to make the window twice as big and the renderer did everything automatically. This API is awesome :slight_smile:!

Code for future reference:

#include <SDL.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>

int main() {
    int w = 320;
    int h = 200;

    SDL_Init(SDL_INIT_VIDEO);
    SDL_Window *window = SDL_CreateWindow("Foo", SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, w * 2, h * 2, 0);
    SDL_Renderer *renderer = SDL_CreateRenderer(window, -1, 0);
    SDL_Surface *surface = SDL_CreateRGBSurface(SDL_SWSURFACE, w, h, 8, 0, 0, 0, 0);

    SDL_Color colors[2] = {{0, 0, 255, 255}, {255, 0, 0, 255}};
    SDL_SetPaletteColors(surface->format->palette, colors, 0, 2);
    while (true) {
        SDL_Event e;
        while (SDL_PollEvent(&e) > 0) {
            switch (e.type) {
            case SDL_QUIT:
                return EXIT_SUCCESS;
            }
        }
        uint8_t *offscreen = (uint8_t *)surface->pixels;
        for (int i = 0; i < h; i++) {
            for (int j = 0; j < w / 2; j++) {
                offscreen[j * 2 + 0] = 0;
                offscreen[j * 2 + 1] = 1;
            }
            offscreen += surface->pitch;
        }

        // The renderer converts the paletted surface and stretches it to the window.
        SDL_Texture *texture = SDL_CreateTextureFromSurface(renderer, surface);
        SDL_RenderCopy(renderer, texture, NULL, NULL);
        SDL_RenderPresent(renderer);
        SDL_Delay(100);
    }
}

Great that it works now! :slight_smile:

Note that in your second example you’re leaking textures - you’re calling SDL_CreateTextureFromSurface() each frame but never calling SDL_DestroyTexture().
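
A minimal way to plug the leak in that loop, assuming nothing else holds on to the texture, is to destroy it after presenting:

SDL_Texture *texture = SDL_CreateTextureFromSurface(renderer, surface);
SDL_RenderCopy(renderer, texture, NULL, NULL);
SDL_RenderPresent(renderer);
SDL_DestroyTexture(texture); // release the per-frame texture so it doesn't leak
SDL_Delay(100);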

I came back to write about SDL_DestroyTexture but you beat me to it!! Thanks again, Daniel.


Be aware that creating a new texture every frame is probably going to be really slow.

I don’t think it matters.
I think it’s just supposed to demonstrate how palettes work and doesn’t need super high performance?
Also, creating just one texture per frame shouldn’t be a problem anyway.
(For example, Quake2 already generated one OpenGL texture per frame when displaying videos, and that was back in 1997)

It is not made to display more than 640x480. Right now the 320x200 version runs fine.

How would you suggest improving performance?

PS: I don’t think Quake2 uploaded a full-screen OpenGL texture every frame (although it did update the lightmap data in the lightmap texture).

Q2 didn’t always do that, only when displaying videos (like the id logo at startup, the intro video when starting a new game, or the videos between missions); see https://github.com/id-Software/Quake-2/blob/master/ref_gl/gl_draw.c#L364

To improve performance (if it really turns out performance sucks), you’d have to convert the paletted image to RGBX (whatever format the SDL_Texture supports and uses) each frame and update the SDL_Texture with that, like you originally tried.
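
A minimal sketch of that conversion (the palette and indexes buffers are hypothetical names, and it assumes a streaming texture created with SDL_PIXELFORMAT_RGBA8888, where red occupies the most significant byte of each packed pixel):

// 'palette' (SDL_Color[256]) and 'indexes' (uint8_t[w * h]) are hypothetical buffers.
static uint32_t rgba[320 * 200]; // static to keep the frame buffer off the stack
for (int i = 0; i < w * h; i++) {
    SDL_Color c = palette[indexes[i]];
    rgba[i] = ((uint32_t)c.r << 24) | ((uint32_t)c.g << 16) | ((uint32_t)c.b << 8) | (uint32_t)c.a;
}
SDL_UpdateTexture(texture, NULL, rgba, w * (int)sizeof(uint32_t));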

That Q2 example is just uploading new texture data, not creating a new texture every frame. A small nitpick, I guess, but there’s often (usually?) an optimized path for replacing the contents of an existing texture, especially if that texture was created with a flag indicating that it’s going to be changed often.
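
In SDL2, that reuse pattern might look like this minimal sketch: create one SDL_TEXTUREACCESS_STREAMING texture up front, then only replace its pixels each frame:

// Created once, outside the main loop:
SDL_Texture *texture = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_RGBA8888,
                                         SDL_TEXTUREACCESS_STREAMING, w, h);

// Each frame, rewrite the pixels in place instead of creating a new texture:
void *pixels;
int pitch;
if (SDL_LockTexture(texture, NULL, &pixels, &pitch) == 0) {
    // ... write converted RGBA rows into 'pixels', honoring 'pitch' ...
    SDL_UnlockTexture(texture);
}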

edit: in any case, if it works and is fast enough then great!