Copy texture to texture gives a gray alpha channel

Hi,

I'm trying to draw a texture onto another texture, and I get a gray color where the alpha should be 0.

I've tried setting blend modes all over the place, but it gets confusing.

thanks for your help

void DrawTextureToTexture(SDL_Renderer* renderer, SDL_Texture* srcTexture, SDL_Texture* dstTexture, int x, int y) {
    if (!renderer || !srcTexture || !dstTexture) {
        SDL_Log("Invalid arguments to DrawTextureToTexture");
        return;
    }

    // Get source texture size
    float srcW, srcH;
    if (!SDL_GetTextureSize(srcTexture, &srcW, &srcH)) {
        SDL_Log("Failed to get texture size: %s", SDL_GetError());
        return;
    }

    int w = static_cast<int>(srcW);
    int h = static_cast<int>(srcH);

    // Create temporary render target texture
    SDL_Texture* temp = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_RGBA8888, SDL_TEXTUREACCESS_TARGET, w, h);
    if (!temp) {
        SDL_Log("Failed to create temp texture: %s", SDL_GetError());
        return;
    }

    SDL_SetTextureBlendMode(srcTexture, SDL_BLENDMODE_BLEND);
    SDL_SetTextureBlendMode(temp, SDL_BLENDMODE_BLEND);
    SDL_SetRenderDrawBlendMode(renderer, SDL_BLENDMODE_NONE);
    
    // Render srcTexture into temp texture
    SDL_Texture* originalTarget = SDL_GetRenderTarget(renderer);
    SDL_SetRenderTarget(renderer, temp);
    SDL_SetRenderDrawColor(renderer, 0,0,0,0);
    SDL_RenderClear(renderer);  // Clear the current rendering target with the drawing color.
    SDL_RenderTexture(renderer, srcTexture, nullptr, nullptr); // Copy a portion of the texture to the current rendering target at subpixel precision
    SDL_SetRenderTarget(renderer, originalTarget);

    // Read pixels from temp texture into surface
    SDL_SetRenderTarget(renderer, temp);
    SDL_Surface* rawSurface = SDL_RenderReadPixels(renderer, nullptr); // Read pixels from temp
    SDL_SetRenderTarget(renderer, originalTarget);

    if (!rawSurface) {
        SDL_Log("Failed to read pixels: %s", SDL_GetError());
        SDL_DestroyTexture(temp);
        return;
    }

    // Convert the surface to a known format (RGBA8888 here)
    SDL_Surface* surface = SDL_ConvertSurface(rawSurface, SDL_PIXELFORMAT_RGBA8888);
    SDL_DestroySurface(rawSurface);
    if (!surface) {
        SDL_Log("Failed to convert surface to RGBA32: %s", SDL_GetError());
        SDL_DestroyTexture(temp);
        return;
    }

    // Pointer to the converted RGBA pixel data
    Uint8* srcPixels = static_cast<Uint8*>(surface->pixels);
   
    SDL_SetTextureBlendMode(dstTexture, SDL_BLENDMODE_BLEND);
    SDL_SetTextureAlphaMod(dstTexture, 255);
    // Lock destination texture
    void* dstPixels = nullptr;
    int dstPitch = 0;
    if (!SDL_LockTexture(dstTexture, nullptr, &dstPixels, &dstPitch)) {
        SDL_Log("Failed to lock dstTexture: %s", SDL_GetError());
        SDL_DestroySurface(surface);
        SDL_DestroyTexture(temp);
        return;
    }

    // Copy the converted pixels into the destination row by row (assumes a 4-bytes-per-pixel destination format and that the region fits)
    Uint8* dstBytes = static_cast<Uint8*>(dstPixels);
    for (int row = 0; row < h; ++row) {
        memcpy(
            dstBytes + (y + row) * dstPitch + x * 4,
            srcPixels + row * surface->pitch,
            w * 4
        );
    }

    SDL_UnlockTexture(dstTexture);
    SDL_DestroySurface(surface);
    SDL_DestroyTexture(temp);
}

I'm not sure what the problem is, but whenever I get these kinds of problems I create a new project and a new texture and check a simple implementation. My last two mistakes were forgetting to divide to get a float value into the 0 to 1 range, and accidentally leaving the wrong texture set when I wanted solid colors (I forgot to set the texture to null; the problem is usually the other way around).

FYI, there's a function to create a texture from a surface: SDL_CreateTextureFromSurface(SDL_Renderer *renderer, SDL_Surface *surface)
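For example, a minimal SDL3 sketch (the file name and the renderer variable are placeholders, and you can load the surface however you like):

SDL_Surface* surf = SDL_LoadBMP("image.bmp");        // "image.bmp" is a placeholder
SDL_Texture* tex = nullptr;
if (surf) {
    tex = SDL_CreateTextureFromSurface(renderer, surf);
    SDL_DestroySurface(surf);                        // the surface isn't needed once the texture exists
}
if (!tex) {
    SDL_Log("Loading failed: %s", SDL_GetError());
}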

it’s a new texture and a new project

and I don’t set any alpha value

What I really don't understand is why it's so hard for an SDL v3 API to do such a simple thing as blitting an image onto another.

It is simple

it’s a new texture and a new project

That’s a lot of code. Maybe that’s your problem. You either don’t understand it or the API

You can see how little is required in this thread: "idle not at 0%". And here's another one that uses SDL_CreateTextureFromSurface and SDL_RenderCopy to put the texture on screen, though that code is SDL2: "SDL_RenderGeometry slower than SDL_RenderCopy?"

I refer to this page so often that I bookmarked my offline version (the online site doesn't like you opening many pages at once): SDL3/APIByCategory - SDL Wiki

Indeed, that is a lot of code; Copilot/ChatGPT got me carried away, so…

I was working on SDL2 projects years ago, and I'm pretty sure I struggled with this in the past.

Once again, SDL fails to provide functions for the simplest things; there should be a dedicated function for this.

I mean, copying a texture to a texture, really, no dedicated function?

SDL_CreateTextureFromSurface will just do what I'm doing manually, but I'll give it a try anyway.

SDL2 and 3 are similar. You definitely don't need a lot of code for either. I'm a little surprised LLMs got carried away, only because there's SO MUCH SDL code out there. Don't use SDL_SetRenderTarget; I only ever use that when I'm taking a screenshot. That's for rendering a texture to another texture instead of the screen. Most of the code you pasted isn't necessary.

You mostly need an init, create window, renderer, get/poll events, and clear + rendercopy + present to draw to the screen. Maybe start with fillrect instead of rendercopy because you won't need a texture.
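Roughly this much covers it (an untested SDL3 sketch; the window title, sizes and colors are arbitrary):

#include <SDL3/SDL.h>

int main(int, char**) {
    if (!SDL_Init(SDL_INIT_VIDEO)) return 1;

    SDL_Window* window = nullptr;
    SDL_Renderer* renderer = nullptr;
    if (!SDL_CreateWindowAndRenderer("demo", 640, 480, 0, &window, &renderer)) return 1;

    bool running = true;
    while (running) {
        SDL_Event e;
        while (SDL_PollEvent(&e)) {
            if (e.type == SDL_EVENT_QUIT) running = false;
        }

        // clear the window
        SDL_SetRenderDrawColor(renderer, 32, 32, 32, 255);
        SDL_RenderClear(renderer);

        // draw a solid rectangle instead of a texture to start with
        SDL_FRect r = { 100.0f, 100.0f, 200.0f, 150.0f };
        SDL_SetRenderDrawColor(renderer, 200, 80, 80, 255);
        SDL_RenderFillRect(renderer, &r);

        SDL_RenderPresent(renderer);
    }

    SDL_DestroyRenderer(renderer);
    SDL_DestroyWindow(window);
    SDL_Quit();
    return 0;
}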

Strange advice. The use of the SDL_SetRenderTarget function is not a matter of taste, but of necessity. If rendering a game frame must be done in many intermediate steps, which cannot be done on a single target (e.g. swapchain texture), it is done on several auxiliary textures and then combined into a whole. And since we have several target textures, this function should be used to change the render target.


In my engine I use many auxiliary textures to render different elements of the window. The game frame is rendered on a small target texture (314×224 pixels) and its rendering requires several more auxiliary textures (e.g. the world is to be rendered recursively, using several dynamically created viewports).

The game frame prepared this way is then rendered on the entire surface of the window, with special effects, to create a background for the window title bar, the status bar, and the empty spaces of the window client that result from the difference between the proportions of the window and the frame. Next, the same game frame is rendered in the client area of the window, also with special effects, but this time filling only a client-area rectangle adjusted to the proportions of the game frame. Finally, window elements are rendered, such as the content of the title bar (icon, title, system buttons) and the content of the status bar (engine performance data such as framerate and frameload) — this time directly on the window texture.

If I were to avoid multiple targets and render everything directly to the window, I would have to render each game frame twice from scratch (once for the window background, once for the client itself), reducing rendering performance by a factor of two. And there are some things I wouldn’t be able to render at all (e.g. recursively rendering reflections from reflective surfaces).
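To make the pattern concrete, here is a generic sketch of the idea (not my engine's actual code; RenderGameFrame() and the client rectangle are made-up placeholders, and frameTex is a SDL_TEXTUREACCESS_TARGET texture created once, not every frame):

SDL_SetRenderTarget(renderer, frameTex);
SDL_SetRenderDrawColor(renderer, 0, 0, 0, 255);
SDL_RenderClear(renderer);
RenderGameFrame(renderer);                               // the expensive part, done only once

SDL_SetRenderTarget(renderer, nullptr);                  // back to the window
SDL_RenderTexture(renderer, frameTex, nullptr, nullptr); // pass 1: stretched as the window background
SDL_FRect client = { 0.0f, 32.0f, 942.0f, 672.0f };      // made-up rectangle
SDL_RenderTexture(renderer, frameTex, nullptr, &client); // pass 2: aspect-correct client area
SDL_RenderPresent(renderer);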


In summary, tools should be selected based on project requirements, not other people’s opinions.


Well, yes, but I mean that from the code he's asking about (rendering a texture on screen), it doesn't appear to be his goal or a use case that requires it.

I would have to render each game frame twice from scratch

I'm slightly curious: why not generate two sets of geometry? I haven't measured how long rendering to a texture takes. I imagine more geometry is less work than building a texture and drawing it? Is the texture generated every frame?

Since I'm bumping the thread anyway: if you're open to using SDL3, there are a few examples. This one uses a callback because some OSes prefer it: SDL/examples/demo/01-snake at release-3.2.x · libsdl-org/SDL · GitHub

This is why it’s a good idea to write and understand the code, especially when you’re learning, instead of asking ChatGPT to do it for you.

If you want to copy one texture onto another (as opposed to just replacing one region of a texture with another) then use render targets.

SDL_Renderer was originally just created as an “easy mode” wrapper around OpenGL 1.x for doing hardware-accelerated 2D sprites etc. Ancient OpenGL didn’t have a function to copy from one texture to another, and I don’t think later versions ever got it either.

Modern GPU APIs have blit commands that can copy from one texture to another, but AFAIK they just replace whatever was in the destination rectangle with the source rectangle (in other words, no alpha blending or any of that).

What geometry? I use the SDL 2D renderer; I don't use any geometry — only simple sprites.

Rendering is always done to a texture, because the window also has its own texture. When the render target is set to null, rendering goes to the window texture; when it's set to something else, it goes to an additional texture unrelated to the window.

Yes, it’s a game after all, so every frame is rendered from scratch, because each one has different content (even if slightly, still different).


Using SDL's 2D renderer, there is no problem at all painting one texture onto another, including the alpha channel. Just use SDL_BLENDMODE_BLEND or other modes, just not SDL_BLENDMODE_NONE.

What we see in the screenshot provided by the OP looks more like a problem with premultiplied alpha (or lack thereof) or a format conversion flaw.
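If it is premultiplied alpha, SDL3 has a blend mode for that case; this is just a hint and only applies if the source pixels really do have their RGB already multiplied by alpha:

SDL_SetTextureBlendMode(srcTexture, SDL_BLENDMODE_BLEND_PREMULTIPLIED);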

Yeah, I told the OP to use a render target if he wants to “copy” one texture onto another with alpha blending.

The OP was asking why SDL doesn’t have some sort of explicit SDL_RenderCopyTextureOntoTexture() function. I was explaining there’s no single function call to do that in SDL_Renderer because none of the underlying APIs have a single function call to do it either. You can use blit commands, but they don’t preserve the existing destination pixels.

Edit: I looked at the "OP's" code (aka ChatGPT's code), and you 100% do not need to be copying the texture to main memory, doing a pixel format conversion, making an extra copy, then uploading that copy back to the GPU.

If you want to copy a texture onto another:

  1. Create a texture with the render target flag, put your bottom image in there, blending enabled.
  2. Create a texture that will go on top, put your top image in it.
  3. Set the destination texture as the render target.
  4. Draw the top image on it (don’t call SDL_RenderClear()) with blending enabled.
  5. Set the render target back to whatever it was before.

The GPU will handle pixel format conversion for you.
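Here's a rough sketch of those steps; the function name and parameters are just for illustration:

void DrawTextureOntoTexture(SDL_Renderer* renderer, SDL_Texture* topTex, SDL_Texture* bottomTex, float x, float y)
{
    // bottomTex must have been created with SDL_TEXTUREACCESS_TARGET and already
    // contains the bottom image; topTex holds the image to draw on top of it.
    SDL_SetTextureBlendMode(bottomTex, SDL_BLENDMODE_BLEND);  // so it blends correctly when drawn later
    SDL_SetTextureBlendMode(topTex, SDL_BLENDMODE_BLEND);

    float w = 0.0f, h = 0.0f;
    SDL_GetTextureSize(topTex, &w, &h);

    SDL_Texture* previous = SDL_GetRenderTarget(renderer);
    SDL_SetRenderTarget(renderer, bottomTex);

    // no SDL_RenderClear() here -- the existing bottom pixels are kept and blended against
    SDL_FRect dst = { x, y, w, h };
    SDL_RenderTexture(renderer, topTex, nullptr, &dst);

    SDL_SetRenderTarget(renderer, previous);
}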

If you want to create a new texture that has one texture drawn on top of another:

  1. Create an empty texture with the render target flag and blending enabled.
  2. Set it as the render target.
  3. Call SDL_RenderClear() on it and then draw your two textures.
  4. Switch back to the original render target.
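And a sketch of the second variant (bottomTex, topTex, w and h are assumed to exist already; error checks omitted):

SDL_Texture* combined = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_RGBA32,
                                          SDL_TEXTUREACCESS_TARGET, w, h);
SDL_SetTextureBlendMode(combined, SDL_BLENDMODE_BLEND);
SDL_SetTextureBlendMode(topTex, SDL_BLENDMODE_BLEND);

SDL_Texture* previous = SDL_GetRenderTarget(renderer);
SDL_SetRenderTarget(renderer, combined);

SDL_SetRenderDrawColor(renderer, 0, 0, 0, 0);
SDL_RenderClear(renderer);                                  // start from a fully transparent texture
SDL_RenderTexture(renderer, bottomTex, nullptr, nullptr);   // bottom image first
SDL_RenderTexture(renderer, topTex, nullptr, nullptr);      // then the top image, blended over it

SDL_SetRenderTarget(renderer, previous);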

It's fun (for me): SDL3/SDL_RenderGeometry - SDL Wiki

When I'm rendering text and reach the end of the available width, I can rewind to the start of the word and change the vertices to start at a new x,y without looking up the letter information again (like where it is in the atlas and how wide the letter is).
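Roughly like this, as a simplified sketch of that rewind step (not my actual code; it assumes four SDL_Vertex entries per glyph):

#include <vector>
#include <SDL3/SDL.h>

// When the word being laid out crosses the maximum line width, shift the vertices that
// were already emitted for that word to the start of the next line, instead of
// re-measuring its glyphs from the atlas.
void WrapCurrentWord(std::vector<SDL_Vertex>& verts, size_t wordFirstVertex,
                     float wordStartX, float leftMargin, float lineHeight)
{
    const float dx = leftMargin - wordStartX;   // back to the left margin
    const float dy = lineHeight;                // down one line
    for (size_t i = wordFirstVertex; i < verts.size(); ++i) {
        verts[i].position.x += dx;
        verts[i].position.y += dy;
    }
}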

lol… that one I need to frame and hang on a wall

BTW, I know I got this working correctly with SDL2; having to redo everything again just makes me so depressed.

ImGui needs a texture, and I need to paste textures into these bloody textures. We shouldn't even be having this conversation; there should be a native way to do this and an example in the repository.

Sorry to go vinegar on this, but I am sick and tired of 30 years of swallowing technology that ends up bringing new issues instead of solving them.

I don’t understand your code at all

Why do you set the render target back to the original texture and then immediately set it back to the temp texture?

Between that, creating a temp texture, using a surface, and not knowing basic texture calls, I assumed he thought drawing to a texture is how you draw to the screen. The code and the question are a disaster.


What does ImGui have to do with this?

Maybe it would help if you told us what you were actually trying to do?

The OP used Copilot and ChatGPT to write it.

I’m sorry @phil1234567819841 but drawing one texture on top of another is beginner stuff. I gave you two ways to do it. Instead of turning to “AI” you should try to understand the concepts yourself. The basics of using SDL_Renderer have not changed much between SDL2 and SDL3.


Don't worry, I know this function and similar ones, but it still doesn't change the fact that rendering sprites doesn't require touching their geometry.

Also for text — I use my own string format (8-bit codepoints + 8-bit tags, 65k available characters) and my own font format, so rendering text is just painting glyphs from the atlas, each character with O(1) access, because each character in the string is an index into a lookup table with character data. An example of what the text renderer can do — one string rendered in a given area (screenshot and clunky GIF).
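The general shape of that kind of renderer is something like this (a simplified sketch, not my actual code; the names and fields are invented):

struct Glyph {
    SDL_FRect atlasRect;   // where the glyph sits in the atlas texture
    float     advance;     // how far to advance the pen after drawing it
};

Glyph glyphs[256];         // indexed directly by the 8-bit codepoint, so lookup is O(1)

void DrawString(SDL_Renderer* renderer, SDL_Texture* atlas,
                const Uint8* codepoints, size_t count, float x, float y)
{
    for (size_t i = 0; i < count; ++i) {
        const Glyph& g = glyphs[codepoints[i]];
        SDL_FRect dst = { x, y, g.atlasRect.w, g.atlasRect.h };
        SDL_RenderTexture(renderer, atlas, &g.atlasRect, &dst);
        x += g.advance;
    }
}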


Maybe instead of comparing dicks, let’s focus on helping the OP. :wink: