SDL2 - Renderer, SDL_BLENDMODE_MOD with alpha

Hello

Is there a way to make SDL_BLENDMODE_MOD and SDL_SetTextureColorMod ignore pixels in the source texture whose alpha is 0?

The problem is the following:

  1. Load any texture which has totally invisible pixels (alpha set to 0).
  2. Modulate that texture using SDL_SetTextureColorMod with a red color.
  3. Set blend mode of that texture to SDL_BLENDMODE_MOD.

The result I would expect is that only the visible pixels of the source texture (alpha > 0) are modulated and blended into the screen. Instead, the whole rectangle is modulated with red, including the fully transparent pixels, so the entire rectangle comes out with a reddish tint rather than just the visible pixels.

I can solve that blending issue by using a custom blending mode such as:

SDL_ComposeCustomBlendMode(SDL_BLENDFACTOR_DST_COLOR,
                           SDL_BLENDFACTOR_ONE_MINUS_SRC_ALPHA,
                           SDL_BLENDOPERATION_ADD,
                           SDL_BLENDFACTOR_DST_COLOR,
                           SDL_BLENDFACTOR_ONE_MINUS_SRC_ALPHA,
                           SDL_BLENDOPERATION_ADD);

However, that does not solve my problem: custom blend modes are only supported by hardware renderers, and I want this to work with the software renderer as well.

I also tried using a render target: I first draw a base texture into it, then a color-modulated mask texture. This works with a single mask, and I can then draw the render target to the screen with SDL_BLENDMODE_BLEND. However, as soon as I modulate a second mask into the render target, the base pixels and the previously modulated mask turn black on every renderer.

Simplifying what I want to achieve with OpenGL:

This is currently what SDL_BLENDMODE_MOD does:

glBlendFunc(GL_DST_COLOR, GL_ZERO);

What I want to achieve is:

glBlendFunc(GL_DST_COLOR, GL_ONE_MINUS_SRC_ALPHA);

So pretty much exactly the same, but taking alpha into account as well. It is a simple operation for the software renderer too, but I don't know whether it's achievable there. That being the case, adding one extra blend mode such as SDL_BLENDMODE_MUL (for multiply) wouldn't be a bad idea. This kind of blending is useful:

(Images omitted: background, blended texture, result for SDL_BLENDMODE_MOD, result for a possible SDL_BLENDMODE_MUL.)

In that case, wouldn't it be preferable, and more flexible, to solve the problem by adding support for SDL_ComposeCustomBlendMode() to the software renderer?

It would be; however, that is unlikely to happen, since implementing custom blend factors in the software renderer (which is supposed to be as fast as possible) could be really difficult. Also keep in mind that we have things like the PSP renderer as well, and those do not support fancy blend modes.

That being the case, I'm almost done implementing SDL_BLENDMODE_MUL as suggested in my previous post. It works on all renderers and was rather trivial to add (though time-consuming).

I ran into this problem too, and also used a copy of the texture as an intermediary target to draw the mod mask and other effects on. I thought this was by design, since the wiki says SDL_BLENDMODE_MOD doesn’t use srcA when it calculates dstRGB.

Anyway, the key to this workaround is to start every draw call by drawing the original image to the intermediary target without any blending. Otherwise the mod accumulates on the render target every time you draw, and the colors converge to 0.

/* Redirect rendering into the intermediary target
 * (TargetTexture must be created with SDL_TEXTUREACCESS_TARGET). */
SDL_SetRenderTarget(Renderer, TargetTexture);

    /* Overwrite the target with the original image, with blending
     * disabled, so previous frames' mods don't accumulate. */
    SDL_SetTextureBlendMode(OriginalTexture, SDL_BLENDMODE_NONE);
    SDL_RenderCopy(Renderer, OriginalTexture, NULL, NULL);

    /* Apply the first color-modulated mask. */
    SDL_SetTextureBlendMode(ModTexture, SDL_BLENDMODE_MOD);
    SDL_SetTextureColorMod(ModTexture, 255, 128, 64);
    SDL_RenderCopy(Renderer, ModTexture, NULL, NULL);

    /* Apply a second mask with a different modulation. */
    SDL_SetTextureColorMod(ModTexture, 128, 64, 32);
    SDL_RenderCopy(Renderer, ModTexture, NULL, NULL);

/* Restore the default render target and draw the composed result. */
SDL_SetRenderTarget(Renderer, NULL);

SDL_RenderCopy(Renderer, TargetTexture, NULL, NULL);