How to properly use SDL_MapRGB and SDL_LockTexture?

I have the following code to modify the raw pixel data of a texture:

SDL_Texture *texture = /** Already created along with the renderer */;
Uint32 format = 0;
void *pixels = NULL;
int pitch = 0;
Uint32 color = 0;

SDL_QueryTexture(texture, &format, NULL, NULL, NULL);
SDL_PixelFormat *pf = SDL_AllocFormat(format);

color = SDL_MapRGB(pf, 63, 126, 233);

SDL_LockTexture(texture, NULL, &pixels, &pitch);

/** Use color to modify pixels? */

SDL_UnlockTexture(texture);

Now how am I supposed to modify the pixels properly? SDL gives me a void pointer; am I supposed to cast it to some other type? And how many pixels do I have? Is it width * height?

Yeah, cast it to your actual pixel type. The total size will be height * pitch bytes. Be aware that pitch is in bytes, not pixels, and might be padded out to accommodate GPU alignment requirements.

Also be aware that the memory SDL_LockTexture() gives you should be treated as write-only. It isn’t guaranteed to contain the previous contents of the texture, so you can’t just set a few pixels; you need to replace the entire texture region that you’re locking.
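For example, walking the locked memory row by row looks roughly like this (a rough sketch, assuming a 4-bytes-per-pixel format; texW and texH stand in for the texture's dimensions):

void *pixels = NULL;
int pitch = 0;
if (SDL_LockTexture(texture, NULL, &pixels, &pitch) == 0) {
    for (int y = 0; y < texH; y++) {
        // pitch is in bytes, so step through rows with a byte pointer
        Uint32 *row = (Uint32 *)((Uint8 *)pixels + y * pitch);
        for (int x = 0; x < texW; x++) {
            row[x] = color; // color from SDL_MapRGB() for the texture's format
        }
    }
    SDL_UnlockTexture(texture);
}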

Thanks for the reply. The problem is I don’t know the actual type of the pixels. I use SDL_RendererInfo to get the array of available pixel formats, and then use the first element of that array with SDL_CreateTexture to create the texture. Am I doing it the wrong way?

I have seen some code examples that explicitly choose a pixel format when creating the texture, like SDL_PIXELFORMAT_RGBA8888. Doesn’t that imply worse performance, since I am forcing a pixel format instead of using one provided by the renderer?

Small update: I tried forcing a random pixel format, SDL_PIXELFORMAT_INDEX8 for instance, and received the following error message when calling SDL_CreateTexture: “Palettized textures are not supported”. So I decided to dig into the SDL2 code and discovered a function called “IsSupportedFormat” in SDL_render.c. This function basically checks whether the renderer supports the given pixel format before creating the texture. So the proper way to create a texture is to first query the formats supported by the renderer and choose one of them?
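For reference, the querying code I described is roughly this (a sketch, error checking omitted; renderer, width and height already exist):

SDL_RendererInfo info;
SDL_GetRendererInfo(renderer, &info);

// info.texture_formats[] lists the pixel formats the renderer supports;
// I currently just take the first one.
Uint32 chosenFormat = info.texture_formats[0];
SDL_Texture *texture = SDL_CreateTexture(renderer, chosenFormat,
                                         SDL_TEXTUREACCESS_STREAMING,
                                         width, height);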

The renderer doesn’t have something like a “preferred” format. If you were using SDL_Surface it’d be a different story, and in that case you would want to use the same format as the window surface. But for SDL_Renderer there isn’t such a thing, generally speaking.

For drawing with the GPU, there can be performance considerations for various pixel formats when doing 3D rendering, but for a 2D game it isn’t going to matter a whole lot unless you get into stuff like DXTC/S3 textures (which SDL doesn’t support).

Since you’re essentially using the CPU to draw something and then using the GPU as a hardware blitter, choose a pixel format you can draw with the CPU quickly. RGBA32 is what I’d use, even if you don’t care about the alpha channel. Because you’ll know the format ahead of time, you can skip calling SDL_MapRGBA() (calling it for every pixel will be slow). Have whatever you’re getting the pixels/colors from also be RGBA32, and you can just blast 'em in there as uint32_ts and not have to mess with the individual color channels (unless you want to).
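Creating the texture for that would be something like this (a sketch; myRenderer and the dimensions are placeholders). Note it needs SDL_TEXTUREACCESS_STREAMING to be lockable:

SDL_Texture *myTexture = SDL_CreateTexture(myRenderer,
                                           SDL_PIXELFORMAT_RGBA32,
                                           SDL_TEXTUREACCESS_STREAMING,
                                           myTextureWidth, myTextureHeight);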

For instance (untested, but you get the idea):

SDL_PixelFormat *rgba32 = SDL_AllocFormat(SDL_PIXELFORMAT_RGBA32);
uint32_t black = SDL_MapRGBA(rgba32, 0, 0, 0, 255);
uint32_t white = SDL_MapRGBA(rgba32, 255, 255, 255, 255);
...
uint32_t *pixels = NULL;
int pitch;
if(SDL_LockTexture(myTexture, NULL, (void **)&pixels, &pitch) == 0) {
    int pixelsPerRow = pitch / sizeof(uint32_t);

    // Clear texture and draw a white pixel
    SDL_memset4(pixels, black, (myTextureHeight * pitch / sizeof(uint32_t)));
    int x = 100;
    int y = 75;
    pixels[(y * pixelsPerRow) + x] = white;

    SDL_UnlockTexture(myTexture);
}

edit: or, for something where you want an easier time manipulating the color channels:

typedef struct {
    uint8_t r, g, b, a;
} pixel_t;
...
// myTexture is RGBA32 pixel format
pixel_t clearColor = { 0, 127, 255, 255 };
pixel_t *pixels = NULL;
int pitch;
if(SDL_LockTexture(myTexture, NULL, (void **)&pixels, &pitch) == 0) {
    int pixelsPerRow = pitch / sizeof(pixel_t);

    // Clear texture (copy the struct's bytes into a uint32_t first;
    // a struct can't be cast directly to an integer)
    uint32_t clearValue;
    SDL_memcpy(&clearValue, &clearColor, sizeof(clearValue));
    SDL_memset4(pixels, clearValue, (myTextureHeight * pitch) / sizeof(pixel_t));

    // Wheee!
    for(int y = 0; y < 256; y++) {
        for(int x = 0; x < 256; x++) {
            pixels[(y * pixelsPerRow) + x] = (pixel_t){ (uint8_t)x, (uint8_t)y, 0, 255 };
        }
    }

    SDL_UnlockTexture(myTexture);
}
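Once the texture has been updated, the GPU does the blitting as usual each frame (sketch, assuming the usual render loop with the same myRenderer/myTexture placeholders):

SDL_RenderClear(myRenderer);
SDL_RenderCopy(myRenderer, myTexture, NULL, NULL);
SDL_RenderPresent(myRenderer);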

Nice, the idea then is to choose SDL_PIXELFORMAT_RGBA32 so that I can treat the pixels as a uint32_t pointer. Thanks for the help!